HTTP Snap Query parameters issue
So, I have a strange issue with the HTTP snap that I'm having real trouble solving. When validating the pipeline I get an error saying "query param is not allowed" for every query parameter provided in the query parameter fields. But when I copy the request string from the HTTP snap's debug output and paste it into Postman, the request works as expected. I've tested it both using the query parameter fields and by manually mapping the URI with encoded values where appropriate.

The request string from debug looks like this (identifiers have been replaced with *****):

    /rest/organizationalEntityShareStatistics?q=organizationalEntity&organizationalEntity=urn%3Ali%3Aorganization%3A********&timeIntervals.timeGranularityType=DAY&timeIntervals.timeRange.start=1715844351774&timeIntervals.timeRange.end=1747380351774&ugcPosts[0]=urn%3Ali%3AugcPost%3A***************&ugcPosts[1]=urn%3Ali%3AugcPost%3A**************&ugcPosts[2]=urn%3Ali%3AugcPost%3A**************&ugcPosts[3]=urn%3Ali%3AugcPost%3A**************

Does anyone have a clue what the issue may be or what I should do to troubleshoot it further? Thanks!
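One thing worth checking: the square brackets in the ugcPosts[0] keys are reserved characters, and some HTTP clients reject them unencoded even though Postman (and many servers) accept them. A minimal sketch of pre-encoding the indexed keys in a Mapper expression, assuming the post URNs arrive as an input array field named $ugcPosts (the field name is illustrative):

    $ugcPosts
        .map((urn, idx) => encodeURIComponent("ugcPosts[" + idx + "]") + "=" + encodeURIComponent(urn))
        .join("&")

This produces ugcPosts%5B0%5D=... pairs that can be appended to a manually built URI instead of relying on the query parameter fields.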
How can I pop the last element from one array and append it to another?

Hello, I'm working with this JSON structure:

    [
      {
        "field_a": "string",
        "entries": [{}, {}, {}],
        "shared": []
      }
    ]

I need to remove only the last object from the entries array and append it to the shared array, leaving the other entries intact. What's the most straightforward way to accomplish this? Thank you for your help. Kind regards, Adam
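One possible approach with two rows in a Mapper, assuming the document shown above is the input (slice(0, -1) keeps everything except the last element; slice(-1) extracts the last element as a one-element array):

    Expression: $entries.slice(0, -1)                Target path: $entries
    Expression: $shared.concat($entries.slice(-1))   Target path: $shared

Because both expressions are evaluated against the original input document, the order of the rows does not matter.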
Performance Optimization: Use Static Lookups Instead of Joins

Hi! I wanted to share a powerful and lightweight approach I recently implemented for using static reference data in pipelines, without needing memory-intensive joins or separate file/database reads during runtime.

The Challenge

In typical scenarios, we handle static or reference data (like lookup tables or code descriptions) by:
- Reading it from a file or database
- Performing a Join snap to enrich the main data stream

While effective, joins:
- Can be memory-heavy, especially with large datasets
- Add complexity to your pipeline
- Require both sources to be aligned in structure and timing

The New Approach

Instead of performing a join, we can:
- Store static reference data as a JSON file in SnapLogic's SLDB
- Load this JSON file in an Expression Library
- Use filter/map functions in your pipeline expressions to fetch data from the JSON based on a key

No joins. No file readers. Just fast in-memory lookups!

Example

Sample JSON file (staticData.json):

    [
      { "code": "A1", "desc": "Alpha" },
      { "code": "B2", "desc": "Beta" },
      { "code": "C3", "desc": "Gamma" }
    ]

Define the file as an Expression Library in the pipeline properties, then use it in a pipeline expression:

    lib.static.filter(x => x.code == $code_from_source).length > 0
        ? lib.static.filter(x => x.code == $code_from_source)[0].desc
        : "Unknown"

This setup allows you to quickly enrich your data using a simple expression, and the same logic can be reused across multiple pipelines via the library.

Benefits
- Faster: No join processing overhead
- Simpler pipelines: Fewer snaps and data dependencies
- Reusable: One JSON file + one function = many pipelines
- Memory-efficient: Especially helpful when Snaplex memory is a constraint

Things to Consider
- SLDB file size limit: The JSON file stored in SLDB must be under 100MB (SnapLogic's file size limit for SLDB uploads).
- Data updates: If your reference data changes frequently (e.g., weekly/monthly), you'll need to build a separate job or pipeline to overwrite the SLDB file.
- Search performance: The filter() method checks each item one by one, which can be slow if your JSON has a lot of records. For faster lookups, consider converting the data into a key-value map, as sketched after this list.
- Governance: SLDB files have limited access control compared to databases. Ensure your team is aligned on ownership and update responsibility.
- Maintainability: JSON logic is hardcoded, so changes to structure or logic require modifying the expression library and possibly redeploying affected pipelines.

I've found this approach especially useful for small to medium-sized static datasets where performance, simplicity, and reusability are key. If you're looking to reduce joins and streamline your pipelines, I highly recommend giving this method a try. To make it easier, I've attached a sample pipeline, JSON lookup file, and input CSV so you can see the setup in action. Feel free to explore, adapt, and let me know how it works for you!
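A hedged sketch of the key-value conversion mentioned under Things to Consider, assuming the array is exposed as lib.static and that the object extend() method accepts an array of key/value pairs (that extend() behavior is an assumption worth verifying against your SnapLogic version):

    {}.extend(lib.static.map(x => [x.code, x.desc])).get($code_from_source, "Unknown")

Here get() with a default value replaces the filter().length guard. Building the map once in the expression library itself, rather than per document, would avoid repeated construction.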
Get MIN (oldest) or MAX (most recent) date from array of dates

Hi, I first do an SQL Server - Execute and get this weird {_snaptype_localdatetime: "..."} value for the CreationDate and LastUpdate fields. Then, using a Mapper snap, I do toString() to get the date fields in string form. Then, I do a Group By on some fields and add another Mapper snap where I try to get the MAX and MIN dates from the resulting array (the expression and full pipeline were shown as screenshots, omitted here). How can I get the oldest and the most recent dates from the array? Thank you very much in advance! Kind regards, Adam
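One possible approach, assuming the Group By emits the grouped documents under $group and the dates are ISO-formatted strings, so that lexicographic order matches chronological order (the field names below are taken from the post):

    $group.map(x => x.CreationDate).sort()[0]
    $group.map(x => x.CreationDate).sort()[$group.length - 1]

The first expression gives the oldest date (MIN), the second the most recent (MAX). If the strings are not in a sortable format, Date.parse(x.CreationDate) can be compared in a sort comparator instead.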
IF Statement in Expressions Library file

Hi, I have an expressions library file which creates a JSON object like this:

    ...
    .map((elem, idx, arr) =>
      {
        "fieldA": elem.fieldA,
        "fieldB": [
          { "fieldC1": elem.fieldC1, "fieldC2": elem.fieldC2 },
          { "fieldD1": elem.fieldD1, "fieldD2": elem.fieldD2 },
          { "fieldE1": elem.fieldE1, "fieldE2": elem.fieldE2 }
        ]
      }
    )
    ...

"fieldB" is an array of objects. The first object ("fieldC") will always be present, but for the second ("fieldD") and third ("fieldE") objects that is not always the case. How can I create a condition that will remove the whole {...} based on whether elem.fieldD1 or elem.fieldE1 exist? I think there is only the ternary operator (a ? b : c), but I am not sure how to insert it here. Is something like this correct:

    ...
    { "fieldC1": elem.fieldC1, "fieldC2": elem.fieldC2 },
    /* what about this "," (comma)? If D1 does not exist, then the comma should not be there. */
    (elem.fieldD1) ? "{ "fieldD1": elem.fieldD1, "fieldD2": elem.fieldD2 }," : ""
    (elem.fieldE1) ? "{ "fieldE1": elem.fieldE1, "fieldE2": elem.fieldE2 }" : ""
    ...

Lastly, the array only ever contains these three objects and nothing more, and the order is always kept. If fieldD does not exist, fieldE doesn't either. Does someone know how I can achieve this result? Best regards, Adam
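A ternary can't emit "nothing" as an array element, but it can emit an empty array whose elements are spliced in via concat(), which also sidesteps the comma problem. A sketch, assuming fieldD1/fieldE1 are simply absent (null) on elements that don't have them:

    "fieldB": [ { "fieldC1": elem.fieldC1, "fieldC2": elem.fieldC2 } ]
        .concat(elem.fieldD1 != null ? [ { "fieldD1": elem.fieldD1, "fieldD2": elem.fieldD2 } ] : [])
        .concat(elem.fieldE1 != null ? [ { "fieldE1": elem.fieldE1, "fieldE2": elem.fieldE2 } ] : [])

Each conditional contributes either a one-element array or an empty one, so only the objects that actually exist end up in fieldB.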
Expression to extract values in nested JSON objects

Hello, I'm looking for help with an expression to extract values in nested JSON objects. I've figured out how to extract the elements in the "field" array where "lastModified" is after "lastExtractionDate". The problem is that I also need to extract the values in elements that have a "plannedChange" object. See the provided examples below for clarification.

This is how the input documents look:

    {
      "guid": "bdb6cdfc-8008-4a1d-95df-8e789732ad32",
      "name": "Eric Employee",
      "createdOn": "2016-04-14T11:50:28.590+02:00",
      "lastModified": "2024-06-10T00:05:00.700+02:00",
      "lastExtractionDate": "2024-05-10T14:11:00+02:00",
      "hasPlannedChange": "1",
      "firstName": "Eric",
      "lastName": "Employee",
      "legalEntity": "Company Inc",
      "field": [
        {
          "name": "Country",
          "lastModified": "2024-05-17T12:46:31.460+02:00",
          "dataValidFrom": "2024-05-17",
          "type": "SCALE",
          "typeId": "8",
          "data": {
            "guid": "630dd5a0-aa2a-4ab7-9977-c1e9d48cb379",
            "value": "Denmark",
            "alternativeExportValue": "DK"
          },
          "visible": "1"
        },
        {
          "name": "E-mail",
          "lastModified": "2024-06-10T00:05:00.700+02:00",
          "dataValidFrom": "2024-06-10",
          "type": "TEXT",
          "typeId": "1",
          "data": {
            "value": "eric_employee_2@company.com"
          },
          "visible": "1",
          "plannedChange": {
            "lastModified": "2024-06-10T13:39:30.923+02:00",
            "dataValidFrom": "2024-06-11",
            "dataValidTo": "",
            "status": "1",
            "data": {
              "value": "eric.employee@company.com"
            }
          }
        },
        {
          "name": "Manager",
          "lastModified": "2024-06-06T10:38:42.333+02:00",
          "dataValidFrom": "2024-06-06",
          "type": "PERSON",
          "typeId": "10",
          "data": {
            "guid": "a4666edc-d1ab-456a-ae06-8625eb933c06",
            "value": "Manager Boss",
            "username": "man.boss@company.com",
            "employeeId": "200384"
          },
          "visible": "1"
        }
      ]
    }

The output should look something like this:

    {
      "guid": "bdb6cdfc-8008-4a1d-95df-8e789732ad32",
      "name": "Eric Employee",
      "lastModified": "2024-06-10T00:05:00.700+02:00",
      "lastExtractionDate": "2024-05-10T14:11:00+02:00",
      "hasPlannedChange": "1",
      "firstName": "Eric",
      "lastName": "Employee",
      "legalEntity": "Company Inc",
      "changedFields": [
        {
          "name": "Country",
          "value": "Denmark",
          "dataValidFrom": "2024-05-17"
        },
        {
          "name": "E-mail",
          "value": "eric.employee@company.com",
          "isPlannedChange": "1",
          "dataValidFrom": "2024-06-11",
          "dataValidTo": "2025-06-11"   /* Exclude if value is an empty string. */
        },
        {
          "name": "Manager",
          "value": "Manager Boss",
          "dataValidFrom": "2024-06-06"
        }
      ]
    }

Maybe a bit of a messy explanation, but I think you get the gist of it. Very grateful for any help solving this. Best regards, Teddie
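A sketch of one way to build $changedFields in a Mapper, assuming a field with a plannedChange object should be included regardless of its own lastModified, that the planned change's data.value and validity dates override the field's own, and that Object.get() returns null when the key is absent:

    $field
      .filter(f => Date.parse(f.lastModified) > Date.parse($lastExtractionDate)
                   || f.get('plannedChange') != null)
      .map(f => f.get('plannedChange') != null
        ? { "name": f.name,
            "value": f.plannedChange.data.value,
            "isPlannedChange": "1",
            "dataValidFrom": f.plannedChange.dataValidFrom
          }.extend(f.plannedChange.dataValidTo != "" ? { "dataValidTo": f.plannedChange.dataValidTo } : {})
        : { "name": f.name, "value": f.data.value, "dataValidFrom": f.dataValidFrom })

The extend() call appends dataValidTo only when it is non-empty, matching the exclusion note in the desired output.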
Slicing Data from JSON

Hello SnapLogic Community, I'm a Salesforce administrator working with SnapLogic to process data from an Oracle server. My pipeline calls a function that returns a multi-level JSON file. I need to extract data from this JSON and insert it into various Salesforce objects (a master record and related child records).

Current Approach & Challenges: My current solution involves copying the JSON into multiple pipeline branches. Within each branch, I use JSON Splitter snaps to break the data apart based on each available field; then I use Join snaps to merge the data. Here's why this isn't ideal:
- Scalability: This approach becomes unwieldy with a large number of fields, requiring excessive branches.
- Error potential: If fields lack values, the Join snap may misalign data during reassembly.

Example input JSON:

    {
      "Contacts": [
        {
          "First Name": "John",
          "Last Name": "Smith",
          "Email": "john.smith@abc.com",
          "Phone": "1234567890"
        },
        {
          "First Name": "Jane",
          "Last Name": "Taylor",
          "Email": "jane.taylor@cba.com",
          "Phone": "0987654321"
        }
      ]
    }

Desired output (CSV/table format):

    First Name,Last Name,Email,Phone
    John,Smith,john.smith@abc.com,1234567890
    Jane,Taylor,jane.taylor@cba.com,0987654321

Goal: I'm looking for a more robust and scalable way to handle this JSON parsing and Salesforce insertion. My main aim is to keep the data associated with each record intact, without the complex splitting and merging that introduces error risks.

Request for Help: Can anyone please suggest alternative solutions or workarounds to achieve my goal? I'd greatly appreciate any ideas that don't require advanced coding skills. I have been trying to use expressions in a Mapper snap, with no success, and have experimented with different snaps, but I haven't been able to achieve the expected results (only the splitting/joining worked, but it's not ideal, and I believe there must be another way, something I haven't tried yet). Thank you in advance for your help and insights!
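For the flat example above, a single JSON Splitter pointed at the parent array should emit one complete document per contact, with all of that contact's fields kept together, so no joins are needed. A sketch of the configuration, assuming the structure shown in the post:

    JSON Splitter   Json Path: $Contacts[*]
    Mapper          $['First Name']  ->  $FirstName   (rename fields as needed)
    CSV Formatter   (or a Salesforce Create snap for the child records)

Splitting on the parent array, rather than on each individual field, is what keeps each record's values aligned.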
HTTP Client snap issue encoding diacritical marks/accents

Hello all! Hoping for some helpful suggestions or insight regarding an issue we're facing with the new HTTP Client snap. We're sending a POST request with a JSON body. Within the body are street addresses, and some addresses contain diacritical marks/accents; an example would be "Houssières". We're supplying the body ($body) from an input Mapper into the snap as a raw entity, with the Content-Type header set to application/json. I can't supply the full input body as it contains PII, unfortunately, but the street address with the accent mark in question appears inside the body as shown in the screenshots (omitted here). On debug, the snap parses the input body into a string labeled "requestString" within the response body, and within that string the accented character is shown as an unrecognized character. The encoding issue is causing the endpoint we're POSTing to to respond with a 400.

So, is there some way I can combat this/force the snap to correctly encode similar strings within the request body? Maybe via JS within the raw entity in the snap, somehow? Anything I'm overlooking here that someone else has caught? For context, this isn't an issue we've faced with the older REST POST snap, but we like the extended functionality of the HTTP Client snap and would prefer to use it (if feasible). Thank you!
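One thing that may be worth trying (an assumption, not a confirmed fix): declare the character set explicitly in the header, so the snap and the endpoint agree on UTF-8 rather than falling back to a platform default such as ISO-8859-1:

    Content-Type: application/json; charset=UTF-8

If the snap still mangles the string, comparing the raw bytes of the "è" in the debug output (UTF-8 encodes it as 0xC3 0xA8) against what the endpoint receives can narrow down where the re-encoding happens.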
jsonpath in expression library

I would like to use a jsonpath in an expression library, but I am unsure of the syntax with parameters. custfield is a JSON array of custom field data, and fieldname is a string. This is what I attempted:

    {
      customField: (custfield, fieldname) => jsonPath($, "custfield[*]").find(f => f.FieldName == fieldname)
    }

What should the syntax be?
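In that attempt, jsonPath($, "custfield[*]") evaluates the path against the current document ($), not against the custfield parameter; a parameter name inside the path string is just a literal. A sketch of a fix, assuming custfield is passed in as the array itself (jsonPath's first argument sets the scope):

    { customField: (custfield, fieldname) => jsonPath(custfield, "$[*]").find(f => f.FieldName == fieldname) }

Or, since custfield is already an array, jsonPath can be skipped entirely:

    { customField: (custfield, fieldname) => custfield.find(f => f.FieldName == fieldname) }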
Filtering Objects

Hello Experts, I have the object input below, containing string values. Can someone please provide an expression to get rid of the duplicate values?

    [
      {
        "ack": {
          "QueueURL": "https://www.google.com",
          "ReceiptHandle": "12345678",
          "content": "ABCDEFG"
        }
      },
      {
        "ack": {
          "QueueURL": "https://www.google.com",
          "ReceiptHandle": "12345678",
          "content": "ABCDEFG"
        }
      },
      {
        "ack": {
          "QueueURL": "https://www.google.com",
          "ReceiptHandle": "12345678",
          "content": "ABCDEFG"
        }
      }
    ]

Thanks in advance.
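A sketch of one way to deduplicate, assuming the array is available in a field referenced here as $items (the name is illustrative) and that key order is consistent across objects so JSON.stringify gives a stable comparison:

    $items.filter((x, i, arr) => arr.findIndex(y => JSON.stringify(y) == JSON.stringify(x)) == i)

This keeps only the first occurrence of each distinct object; for the input above it would return a single "ack" entry.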