JSON snap not working as expected
The JSON Splitter in my pipeline was working correctly last week. The pipeline was not edited and the JSON format did not change, yet the pipeline is now failing with an error saying the JSON Splitter expects a list, not an object. Before the 17th of February it was working perfectly fine. Was something changed in the Snap architecture?

“Failure: Json Splitter expects a list, Reason: Found an object of type class java.util.LinkedHashMap, Resolution: The path $.data needs to refer to a list in the incoming document”
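A common cause of this error is an upstream source that serializes a single record as a JSON object but multiple records as an array, so the shape at $.data changes with the data rather than with the pipeline. One defensive fix is to normalize the value to a list before the splitter sees it. A minimal sketch of that normalization in Python (the field name data is taken from the error message; everything else is illustrative):

```python
def ensure_list(value):
    """Wrap a lone object in a list so a downstream splitter always sees a list."""
    return value if isinstance(value, list) else [value]

doc = {"data": {"id": 1}}          # a single record arrives as an object, not a list
doc["data"] = ensure_list(doc["data"])
assert doc["data"] == [{"id": 1}]  # $.data now refers to a list, as the splitter expects
```

In a pipeline the same check would typically live in a Mapper placed before the JSON Splitter, so that $.data always resolves to an array even for a single-record payload.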
JSON data size fix

My source data has more than 1M records in CSV format. All of those records had errors, so they were routed to the error view. All of those records now need to be logged to an S3 folder, and I also send an email to the team containing the file name and location. The data is currently loaded into S3 in JSON format, which is fine, but the JSON file takes a long time to open (which is obvious), and sometimes the log file does not load at all ☹️. Can we do this in a more efficient manner? The data must go to the S3 folder, but how we store it is open for discussion: the records could be written in CSV, TXT, or JSON format. I had the idea of splitting the records and saving them as two or three JSON files, but I am not sure that is even appealing. Any ideas?
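One option for cases like this is newline-delimited JSON (one record per line) written in fixed-size chunks, which keeps each S3 object small enough for ordinary viewers to open and lets any line be parsed without loading the whole file. A minimal sketch, assuming the error records are available as an iterable of dicts (the file-name prefix and chunk size are illustrative, not from the original post):

```python
import json
from itertools import islice

def write_ndjson_chunks(records, chunk_size=100_000, prefix="errors_part"):
    """Stream records into numbered newline-delimited JSON files of bounded size."""
    it = iter(records)
    part = 0
    while True:
        chunk = list(islice(it, chunk_size))   # take up to chunk_size records
        if not chunk:
            break
        with open(f"{prefix}_{part:04d}.ndjson", "w") as out:
            out.writelines(json.dumps(rec) + "\n" for rec in chunk)
        part += 1
    return part                                 # number of files produced

files_written = write_ndjson_chunks({"row": i} for i in range(250_000))
```

CSV would be even more compact for flat records; NDJSON has the advantage that each line is independently parseable, so a viewer never has to load the entire file to show the first records.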
Split hours into back dated days based on total hours

Hello All,

I’ve stumbled upon a scenario where I need to split a JSON object into n (backdated) objects whenever the total hours coming in from the source cross 24; it should keep splitting until the total hours have been exhausted. I have added an input JSON array to clarify my requirement:

```json
[
  { "res": "1001", "activity": "First", "hours": "20", "date": "31/01/2022" },
  { "res": "1001", "activity": "Second", "hours": "8", "date": "31/01/2022" }
]
```

For res 1001 the total hours gathered from both activities is 28, which crosses the 24-hour threshold for a day. What I would like to achieve is to split one of the objects such that the remaining hours (28 - 24 = 4) are spread across another object for the previous day. Here is the desired output:

```json
[
  { "res": "1001", "activity": "First", "hours": "20", "date": "31/01/2022" },
  { "res": "1001", "activity": "Second", "hours": "4", "date": "31/01/2022" },
  { "res": "1001", "activity": "Second", "hours": "4", "date": "30/01/2022" }
]
```

Had the total hours in the input been 60, the output would need to be spread across 3 days: 24 + 24 + 12. Say activity 1 has 24 hours for 31/01/2022, activity 2 has 24 hours for 31/01/2022, and activity 3 has 12 hours for 31/01/2022; then activity 2 would shift back to 30/01/2022 and activity 3 would shift back to 29/01/2022, since activity 2 has occupied all 24 hours of 30/01/2022. Hope this makes sense. Activities will always carry the current day’s date (whenever this calculation occurs). Any help on this would be greatly appreciated; thank you in advance.

Regards,
Tanmay
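The underlying algorithm is a greedy fill: walk the activities in order, pour hours into the current day until it holds 24, then open the previous day. A minimal sketch in Python; in SnapLogic this kind of logic would usually live in a Script snap, so treat it as an illustration of the calculation rather than the poster's pipeline:

```python
from datetime import datetime, timedelta

def split_hours(records, day_capacity=24):
    """Greedily spread activity hours backwards across days of fixed capacity."""
    out, used_today = [], 0
    day = datetime.strptime(records[0]["date"], "%d/%m/%Y")
    for rec in records:
        remaining = int(rec["hours"])
        while remaining > 0:
            free = day_capacity - used_today
            if free == 0:                     # current day is full: step one day back
                day -= timedelta(days=1)
                used_today, free = 0, day_capacity
            take = min(remaining, free)
            out.append({**rec, "hours": str(take), "date": day.strftime("%d/%m/%Y")})
            used_today += take
            remaining -= take
    return out

records = [
    {"res": "1001", "activity": "First",  "hours": "20", "date": "31/01/2022"},
    {"res": "1001", "activity": "Second", "hours": "8",  "date": "31/01/2022"},
]
print(split_hours(records))   # yields 20h on 31/01, then 4h on 31/01 and 4h on 30/01
```

Running this on the 60-hour variant produces exactly the 24 + 24 + 12 spread across 31/01, 30/01, and 29/01 described above.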
Random behavior from pipeline when trying to work/split on multi-level json data

Case/scenario and assets used:

- Sample input XML: multiplemstercpttoub454.txt (51.1 KB)
- Problematic pipeline: but_why.slp (67.8 KB) (you may want to remove the Splunk, producer, etc. Snaps if a connection is unavailable during investigation)

Requirement context around which the problematic random behavior was identified: in the input XML, the following two tags may repeat any number of times:

- RCPT_INVOICE_OUB_IFD
- RCPT_INVOICE_LINE_OUB_IFD

The need was to split this XML into individual XML documents containing one of each of the above tags, then transform those XMLs into plain text for output. The logic was to ensure both tags are treated as arrays (so that even when a repeating tag has just one value, we can still loop through it after converting it to an array; otherwise the same expression may not work for both arrays and maps) and to split them in order while keeping the parent values intact.

Problem: random behavior. Sometimes the pipeline gives the correct two outputs where expqty != rcvqty, but sometimes the same message is repeated twice!?

[Screenshots of the working and incorrect runs, with numbered areas of interest, are omitted here.] In the working run, there were two records in the input with expqty != rcvqty, and they were correctly split and processed. After re-validating a few times, the wrong behavior kicks in: the same two records are present in the input, but one of them gets repeated downstream, and the value of a tag appears to get swapped between documents. The behavior is correct up to the splitting point and goes wrong just after it, and it happens randomly.

Extra notes:
a. I tried different approaches at the splitting point, such as using a Mapper or Structure Snap, but the random behavior persisted.
b. As a last resort I handled the entire splitting operation within a script, so the output is now always correct, but I am still interested in the reason for this behavior and a better solution if there is one.
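Setting the pipeline-internal cause aside, the splitting logic itself can be expressed deterministically. A minimal sketch in Python, assuming the XML has already been parsed into a dict where the two repeating tags may arrive as either a single map or a list (the tag names follow the post; everything else is illustrative):

```python
def as_list(v):
    """A tag that repeats once parses as a map; normalize it to a one-element list."""
    return v if isinstance(v, list) else [v]

def split_receipt(doc):
    """Emit one document per (invoice, invoice line) pair, keeping parent fields."""
    parent = {k: v for k, v in doc.items() if k != "RCPT_INVOICE_OUB_IFD"}
    out = []
    for inv in as_list(doc["RCPT_INVOICE_OUB_IFD"]):
        header = {k: v for k, v in inv.items() if k != "RCPT_INVOICE_LINE_OUB_IFD"}
        for line in as_list(inv.get("RCPT_INVOICE_LINE_OUB_IFD", [])):
            out.append({**parent,
                        "RCPT_INVOICE_OUB_IFD": {**header,
                                                 "RCPT_INVOICE_LINE_OUB_IFD": line}})
    return out
```

Because each output document is built from fresh copies, no two outputs can alias the same underlying data, which is one common source of "same message twice" symptoms when shared structures are mutated in place downstream.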
JSON Splitter to REST Call capture all response bodies

I have a pipeline that takes an array of JSON as input. We feed it to a JSON Splitter, which then goes into a REST POST to an endpoint. I can see that all of the REST calls go through in the Ultra task, but only the response body from the first REST call is returned by the Snap. Does anyone know how to capture all the response bodies into an array, so that SnapLogic sends back its array of responses only after all the REST calls are processed? Or is this not possible?
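The general pattern here is fan-out, call, fan-in: split the array, make one request per element, then aggregate the per-request responses into a single document before the task replies. (In SnapLogic, aggregation Snaps such as Gate or Group By N are the usual candidates for the fan-in step, though whether they are permitted in an Ultra task is worth verifying.) A minimal sketch of the shape of the logic in Python, with the POST stubbed out since the real endpoint is not known:

```python
def post(record):
    """Stand-in for the REST POST step; returns a fake response body."""
    return {"status": "ok", "echo": record}

def fan_out_fan_in(payload):
    """Split the input array, call the endpoint per element, collect all bodies."""
    responses = [post(rec) for rec in payload]   # one call per split document
    return {"responses": responses}              # single aggregated reply

print(fan_out_fan_in([{"id": 1}, {"id": 2}, {"id": 3}]))
```

The key point is that the reply is produced only after the list comprehension (the fan-in) has consumed every response, mirroring the "only after all REST calls are processed" requirement.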
Alternate for JSON Splitter - Split single document into multiple rows

Hi,

I am currently using a JSON Splitter in a pipeline to split records and send them to an Oracle merge. As there is a huge number of records, the JSON Splitter's processing time is very high. I tried moving the JSON Splitter and Oracle Merge Snaps into a different pipeline and enabling reuse in Pipeline Execute, but there was still no improvement in performance. Kindly let me know if there is an alternative to the JSON Splitter.

I tried the approach below in a Mapper to split a single document (consisting of multiple objects) into multiple rows, but it returns an array of arrays.

Input file: SampleInput.txt (267 Bytes)

Mapper expression: {}.extend($response.map((elem, index) => elem)).values()
Target: $

Expected output: [screenshot omitted]

Best Regards,
Pooja
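Two things are worth separating here. First, a Mapper transforms one document into one document, so by itself it cannot multiply rows; some splitter-type step still has to do that. Second, the array-of-arrays symptom usually means a list got wrapped in another list during mapping, and a one-level flatten fixes it. A minimal sketch of that flatten in Python (the field name $response comes from the post; the data is illustrative):

```python
def flatten_once(rows):
    """Collapse one level of nesting: [[a, b], [c]] -> [a, b, c]."""
    return [item for group in rows for item in group]

nested = [[{"id": 1}, {"id": 2}], [{"id": 3}]]
print(flatten_once(nested))   # [{'id': 1}, {'id': 2}, {'id': 3}]
```

In the Mapper itself, mapping $response directly (rather than extending an empty object and taking .values(), which re-wraps the elements) may avoid the extra nesting, though the exact behavior of that expression is worth checking against the expression-language docs.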
After using a JSON splitter and a filter, how do you put the object/array back together?

Hi, we needed to filter on a JSON input. We managed to get that working by using a JSON Splitter on the element and then piping that into a Filter Snap. Our issue is that the input, which looked like this:

```json
"allLocations": {
  "location": [
    {
      "addressInternalid": 2631363,
      "isDelete": false,
      "internalSupplierid": 3423589,
      "acctGrpid": "RT",
      "address1": "5309 GREENWAY",
      "address2": "5301 REDWAY",
      "address3": "5504 BLUEWAY",
      "poBox": "0912KHJWD",
      "country": "USA",
      "state": "US-TX",
      "city": "FREE",
      "zip": "78211",
      "phone": "2229808888",
      "phoneExtn": "091",
      "fax": "747",
      "faxExtn": "737"
    },
    {
      "addressInternalid": 2631367,
      "isDelete": false,
      "internalSupplierid": 3423589,
      "acctGrpid": "RT",
      "address1": "11305 4 PTS DR",
      "address2": "BLDG 2,#100",
      "country": "USA",
      "state": "US-TX",
      "city": "AUSTIN",
      "zip": "78726",
      "phone": "5126648805",
      "phoneExtn": "123",
      "fax": "123",
      "faxExtn": "134"
    },
    {
      "addressInternalid": 2631368,
      "isDelete": false,
      "internalSupplierid": 3423589,
      "acctGrpid": "RT",
      "address1": "REMIT 11305 4 PTS DR",
      "address2": "BLDG 3",
      "country": "USA",
      "state": "US-TX",
      "city": "AUSTIN",
      "zip": "78725",
      "phone": "5126600000",
      "phoneExtn": "678",
      "fax": "678",
      "faxExtn": "678"
    }
  ]
},
```

now looks like this:

```json
[
  {
    "addressInternalid": 2631363,
    "isDelete": false,
    "internalSupplierid": 3423589,
    "acctGrpid": "RT",
    "address1": "5309 GREENWAY",
    "address2": "5301 REDWAY",
    "address3": "5504 BLUEWAY",
    "poBox": "0912KHJWD",
    "country": "USA",
    "state": "US-TX",
    "city": "FREE",
    "zip": "78211",
    "phone": "2229808888",
    "phoneExtn": "091",
    "fax": "747",
    "faxExtn": "737",
    "fullCompanyName": "SUPPLIER MARCH 3 dba TEXT",
    "requestId": 5272423,
    "id": "3423589",
    "facilityCode": "0001",
    "systemCode": "1",
    "supplierType": "Operational",
    "status": "ACTIVE"
  },
  {
    "addressInternalid": 2631367,
    "isDelete": false,
    "internalSupplierid": 3423589,
    "acctGrpid": "RT",
    "address1": "11305 4 PTS DR",
    "address2": "BLDG 2,#100",
    "country": "USA",
    "state": "US-TX",
    "city": "AUSTIN",
    "zip": "78726",
    "phone": "5126648805",
    "phoneExtn": "123",
    "fax": "123",
    "faxExtn": "134",
    "fullCompanyName": "SUPPLIER MARCH 3 dba TEXT",
    "requestId": 5272423,
    "id": "3423589",
    "facilityCode": "0001",
    "systemCode": "1",
    "supplierType": "Operational",
    "status": "ACTIVE"
  },
  {
    "addressInternalid": 2631368,
    "isDelete": false,
    "internalSupplierid": 3423589,
    "acctGrpid": "RT",
    "address1": "REMIT 11305 4 PTS DR",
    "address2": "BLDG 3",
    "country": "USA",
    "state": "US-TX",
    "city": "AUSTIN",
    "zip": "78725",
    "phone": "5126600000",
    "phoneExtn": "678",
    "fax": "678",
    "faxExtn": "678",
    "fullCompanyName": "SUPPLIER MARCH 3 dba TEXT",
    "requestId": 5272423,
    "id": "3423589",
    "facilityCode": "0001",
    "systemCode": "1",
    "supplierType": "Operational",
    "status": "ACTIVE"
  }
]
```

How do I get it back into an object so I can map it correctly? As you can see in the image below [screenshot omitted], all of the location fields are coming in as strings with no hierarchy, not the array they were originally. Any tips would be appreciated!
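Conceptually this is a group-by on the parent keys: after the filter, collect the surviving rows that share the same parent identifiers and rebuild the location array under them. (In SnapLogic the fan-in is typically done with a grouping Snap such as Group By Fields; the sketch below only shows the reshaping logic in Python, and the split between parent and location fields is inferred from the sample data, so treat it as an assumption.)

```python
from collections import defaultdict

# Fields that belong to the supplier, not to an individual location (inferred).
PARENT_FIELDS = {"fullCompanyName", "requestId", "id", "facilityCode",
                 "systemCode", "supplierType", "status"}

def regroup(rows):
    """Rebuild allLocations.location arrays from flattened, filtered rows."""
    groups = defaultdict(lambda: {"locations": []})
    for row in rows:
        parent = {k: v for k, v in row.items() if k in PARENT_FIELDS}
        child = {k: v for k, v in row.items() if k not in PARENT_FIELDS}
        key = row["id"]                        # parent identity from the sample data
        groups[key].setdefault("parent", parent)
        groups[key]["locations"].append(child)
    return [{**g["parent"], "allLocations": {"location": g["locations"]}}
            for g in groups.values()]
```

The output is one document per supplier with its surviving locations nested back into an array, ready to be mapped as a hierarchy rather than as flat strings.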
How to flatten hierarchical JSON structure?

Hi Forum,

I have a hierarchical JSON structure coming in as input and need to flatten (and transform) the document for output. Below is a simple example showing the input and output structures. I tried doing this with the Mapper, Structure, and JSON Splitter Snaps but could not achieve it. Does anyone know how to solve this?

Input document structure:

```json
[
  {
    "entity": {
      "results": [
        {
          "id": 453721,
          "custom_fields": [
            { "id": 11111, "value": "AAAA" },
            { "id": 22222, "value": "BBBB" },
            { "id": 33333, "value": "CCCC" }
          ]
        }
      ]
    }
  }
]
```

Desired output document structure:

```json
[
  {
    "entity": {
      "results": [
        {
          "id": 453721,
          "11111": "AAAA",
          "22222": "BBBB"
        }
      ]
    }
  }
]
```
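The transformation is a pivot: each {id, value} pair in custom_fields becomes a key/value entry on the enclosing result. A minimal sketch in Python (it keeps all custom fields; the desired output above shows only two of the three, presumably a truncation in the post):

```python
def pivot_custom_fields(doc):
    """Replace each result's custom_fields list with id -> value entries."""
    for result in doc["entity"]["results"]:
        for cf in result.pop("custom_fields", []):
            result[str(cf["id"])] = cf["value"]
    return doc

docs = [{"entity": {"results": [{"id": 453721, "custom_fields": [
    {"id": 11111, "value": "AAAA"}, {"id": 22222, "value": "BBBB"}]}]}}]
print([pivot_custom_fields(d) for d in docs])
```

In the SnapLogic expression language the same idea is usually expressed by reducing over the custom_fields array inside a Mapper (building up an object with extend()), which keeps the flattening in a single Snap instead of splitting and rejoining.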
Issues in reading Nested Json file

Hi,

I tried executing the example for the JSON Splitter provided on its documentation page, but the results I'm getting do not match what the page shows. Could someone please correct me if I am going wrong somewhere?

SamplePipeline.slp (6.6 KB)
ExpectedResult.xlsx (8.4 KB)

Thanks,
Muthu