Recent Content
File Extraction Patterns
Hi All,

What I'm looking for with this post is some input on simple file extraction patterns (just A to B, without transformations). Below I detail a couple of patterns that we use, with pros and cons, and I'd like to know what others are using and what the benefit is of one method over another. A caveat on my SnapLogic experience: I've been using it for just over 4 years, and everything I know about it is figured out or self-taught from the community and documentation, so in the approaches below there may be some knowledge gaps that could be filled in by more experienced users.

In my org, we use either a config-driven generic pattern split across multiple parent/child pipelines, or a more reductive approach with a single pipeline where the process can be reduced to "browse, then read, then write". If there is enough interest in this topic, I can expand it with some diagrams and documentation.

Config-driven approach
- A config file is created with details of source, target, file filter, account etc. (a sample row is sketched below)
- A parent pipeline reads the config and, for each row, passes the detail to a child pipeline as parameters
- The child pipeline uses the parameters to check whether the file exists: if not, the process ends and a notification is raised; if it does, another child is called to perform the source-to-target read/write

Pros:
- High observability in the dashboard through use of execution labels in Pipeline Execute snaps
- Child pipelines can be reused for other processes
- Child pipelines can be executed in isolation with the correct parameters
- Easier to version and to update separate components of the process
- Some auditing available via parameter visibility in the dashboard
- Can run parallel extractions
- Child failures are isolated
- Pipelines follow the principle of single responsibility
- Easy to test each component

Cons:
- More complex to set up
- Increased demand on the Snaplex
- Might be more difficult to move through development stages
- Requires some documentation for new users
- Concurrency could cause issues with throttling/bottlenecks at the source/target

Single-pipeline approach
- Can run from a config file or have directories hard-coded
- In one pipeline, a Directory Browser checks for files, then a reader and a writer perform the transfer
- Depending on requirements, some filtering or routing can be done to different targets

Pros:
- Fewer assets to manage
- Faster throughput for small files
- Less demand on the Snaplex
- Easier to promote through development stages

Cons:
- No visibility of which files were picked up without custom logging (just a count in the reader/writer in the dashboard)
- More difficult to rerun or extract specific files
- Not as reusable as a multi-pipeline approach, less modular
- No concurrency, so the process could be long, depending on file volume/size
- More difficult to test each component

I think both approaches are valid; the choice to me seems to be a trade-off between operability (observability, isolation, re-runs) and simplicity/throughput. I'm interested to hear insights, patterns, and examples from the community to help refine or reinforce what we're doing.
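For illustration, here is a minimal sketch of what one row of the config file mentioned above could look like. Every field name here is hypothetical rather than any SnapLogic schema; you would shape it to whatever your parent pipeline passes to the child as parameters:

  [
    {
      "source_account": "sftp_vendor_a",
      "source_path": "/outbound/invoices/",
      "file_filter": "*.csv",
      "target_account": "s3_datalake",
      "target_path": "s3://datalake/raw/invoices/",
      "notify_on_missing": true
    }
  ]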
Some other questions which I think would be useful to get input on:
- Which pattern do you default to for file extractions, and why?
- If you use a single pipeline, how do you implement file-level observability (e.g., DB tables, S3 JSON logs, Groundplex logs)?
- How do you handle retries and idempotency in practice (temporary folders, checksums, last-load tracking)?
- What limits or best practices have you found for concurrency control when fanning out with Pipeline Execute?
- Have you adopted a hybrid approach that balances clarity with operational efficiency, and how do you avoid asset sprawl?

Cheers
Lee
Pagination and nextCursor in header
Hello all, I'm using an HTTP Client snap to retrieve a few thousand records, and I need to use pagination. The system I'm calling uses cursor-based pagination: if the number of elements returned exceeds the defined limit, the response header contains a "nextCursor" value that I need to pass as the "cursor" parameter of the next call, and so on until there is no more "nextCursor". This should be straightforward; however, I can't seem to get the content of the response header for my next call. When I use Postman, I can see that a header is returned, and the value I need is stored under the key "X-Pagination-Next-Cursor", not "nextCursor" as I expected. How can I access the values of the header? In the Snap itself, in the Pagination section, there is an "Override headers" part that I tried to configure by mapping the "cursor" key to $nextCursor, $headers.nextCursor, or $headers.X-Pagination-Next-Cursor, but nothing works: I only get the records from the first page, with no failure and no pagination. Thanks in advance for any help!
JF
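A note for anyone reading along: header keys are typically lowercased by the time they reach the expression context, and a hyphenated key cannot be read with dot notation ($headers.X-Pagination-Next-Cursor parses as subtraction), so bracket notation against the lowercased key is usually the first thing to try. A sketch of the Pagination settings, assuming the Snap exposes response headers under $headers (field names may differ slightly by Snap version; verify the exact structure by inspecting one page of actual output):

  Has next : $headers.get('x-pagination-next-cursor') != null
  Override headers:
    cursor : $headers['x-pagination-next-cursor']

If the cursor actually travels as a query parameter rather than a request header, map it under the query-parameter override instead.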
How to get filename from file reader
I need to get the name of the file read by the File Reader snap and use it as part of the data downstream. Really, the goal is to save the file name as part of the data pulled from a file. Screen snippet attached here. I have spent some time looking into this, but there is no method obvious to me. I will appreciate any input and recommendations. Thanks.
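One commonly suggested direction, sketched here under the assumption that the File Reader's binary output header carries the file path in its content-location attribute (worth confirming by inspecting the binary header in your own pipeline): follow the File Reader with a Binary to Document snap so the header fields become document fields, then pull the name off the path in a Mapper, e.g.:

  $['content-location'].substring($['content-location'].lastIndexOf('/') + 1)

From there the value can be joined or merged back onto the parsed records downstream.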
Generate expression file from database query
For some data transformations I would like to use an expression file that is generated each night, instead of querying a SQL database every time the pipeline starts. I already have the data available in the database, and now I need to transform it into the expression file's JSON format, but I am stuck on getting the right output. Coming from an XML-oriented environment (with extensive knowledge of XSL but not so much JSON), I have quite some issues with switching to snaps and JSON...

Data sample (JSON) from the database:

  [
    { "code": "ARTICLEGROUP", "source": "JLG", "target": "10" },
    { "code": "COMMODITYCODE", "source": "31251501", "target": "0" },
    { "code": "COUNTRYCODE", "source": "AF", "target": "AF" },
    { "code": "COUNTRYCODE", "source": "AL", "target": "AL" },
    { "code": "COUNTRYCODE", "source": "DZ", "target": "DZ" },
    { "code": "COUNTRYCODE", "source": "AS", "target": "AS" },
    { "code": "COUNTRYCODE", "source": "AD", "target": "AD" },
    { "code": "COUNTRY_ISOCODE", "source": "ARE", "target": "AE" },
    { "code": "COUNTRY_ISOCODE", "source": "AFG", "target": "AF" },
    { "code": "COUNTRY_ISOCODE", "source": "ALA", "target": "AX" },
    { "code": "COUNTRY_ISOCODE", "source": "ALB", "target": "AL" },
    { "code": "UOM", "source": "EA", "target": "pi" },
    { "code": "UOM", "source": "M", "target": "me" },
    { "code": "UOM", "source": "BG", "target": "za" }
  ]

Desired output:

  {
    "ARTICLEGROUP": { "JLG": "10" },
    "COMMODITYCODE": { "31251501": "0" },
    "COUNTRYCODE": { "AF": "AF", "AL": "AL", "DZ": "DZ", "AS": "AS", "AD": "AD" },
    "COUNTRY_ISOCODE": { "ARE": "AE", "AFG": "AF", "ALA": "AX", "ALB": "AL" },
    "UOM": { "EA": "pi", "M": "me", "BG": "za" },
    getValue: (type, source) => this[type][source]
  }

Can anyone point me in the right direction? I have tried multiple things already, but I can't get the "arrays" right for some reason.
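If the whole result set can be gathered into a single document (for example with a Group By N snap and group size 0, which collects all input documents into one array), a reduce in a Mapper can fold the rows into the nested lookup shape. A sketch, assuming the array lands under $group (the Group By N default target field, as I recall) and that your environment accepts computed keys in object literals; if it does not, the same fold can be done in a Script snap:

  $group.reduce(
    (acc, row) => acc.extend({ [row.code]: (acc.get(row.code) || {}).extend({ [row.source]: row.target }) }),
    {}
  )

One caveat on the output: the getValue arrow function cannot pass through a JSON Formatter, since JSON has no function type. An expression file is expression-language syntax rather than strict JSON, so you would likely write the data portion as JSON and splice the getValue line in as text (e.g., via Document to Binary and a File Writer).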
Can we generate XML file in pretty print format using native SnapLogic snaps?
Hi Team, I was curious to know whether anybody has worked on a use case where they are generating an XML file in pretty-print format. We do have a "Pretty-print" option in the JSON Formatter; however, the same is not available in the XML Formatter snap. Any suggestions? Thanking in advance.
Best Regards,
Darsh
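The XML Formatter itself doesn't expose an indent option as far as I know, but one workaround that stays within native snaps is to run the formatted XML through the XSLT snap with an identity stylesheet that switches indentation on. A sketch using a standard identity transform (assuming the XSLT snap is available in your environment):

  <?xml version="1.0"?>
  <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="xml" indent="yes"/>
    <!-- Identity template: copy every node and attribute through unchanged -->
    <xsl:template match="@*|node()">
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </xsl:template>
  </xsl:stylesheet>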
401 error with HTTP Client and NTLM
Hello, I'm trying to connect to an API with NTLM authentication using the HTTP Client snap. Problem: I'm getting a 401 - Unauthorized response from the endpoint. The same request succeeds in Postman. I suspect the problem comes from the Linux Groundplex. Has anyone had the same issue? How did you solve it? Thank you.
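Before digging into the Groundplex itself, it can help to rule the host in or out: if the same request succeeds from the Groundplex machine using curl's built-in NTLM support, the network path and credentials are fine and the issue is more likely in the Snap or account configuration. The URL and credentials below are placeholders, and this assumes curl is installed on the Groundplex host:

  curl --ntlm -u 'DOMAIN\username:password' -v https://api.example.com/endpoint

It is also worth checking that the account's username includes the domain in the DOMAIN\username form, since NTLM generally requires it.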
Ingesting Data into Veeva Vault CRM via SnapLogic – Alternatives to SFDC Snaps
We are currently in the process of migrating from our existing Veeva CRM (Salesforce-based) platform to Veeva Vault CRM. In our current integration landscape, we use SnapLogic to ingest data from our Specialty Pharma SFTP source into Veeva CRM, leveraging the Salesforce (SFDC) snaps for data ingestion and transformation. However, as we transition to Vault CRM, we've identified a gap: SnapLogic does not currently provide a native Snap Pack for Veeva Vault CRM. We understand that support for Vault CRM is on SnapLogic's product roadmap, but it is not expected in the immediate future. As part of our integration planning, we are reaching out to the SnapLogic community and experts to explore the following:
- Are there any existing Snap Packs (e.g., REST, HTTP Client, SOAP, or JDBC snaps) that can be configured to support integration with Vault CRM?
- Has anyone implemented custom pipelines or reusable components for Vault CRM ingestion using generic SnapLogic snaps?
- Any known limitations, authentication considerations, or Vault-specific constraints we should be aware of when building these integrations?
We greatly appreciate any insights, lessons learned, or recommendations from those who have explored similar integration use cases. Thank you in advance for your time and input.
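In the absence of a native Snap Pack, Vault's REST API can generally be driven with the HTTP Client snap. A sketch of the session-based authentication flow as described in Vault's public REST API documentation (the version segment is illustrative; check the API version your Vault instance supports):

  1) POST https://{vaultDNS}/api/{version}/auth
     form body: username={user}&password={pass}
     → the response contains a sessionId
  2) Subsequent calls pass the session in a header:
     Authorization: {sessionId}
     e.g. GET https://{vaultDNS}/api/{version}/vobjects/{object_name}

For bulk ingestion, Vault also offers loader-style endpoints that accept CSV payloads, which may map more naturally onto an SFTP-to-Vault flow than record-at-a-time calls.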
Javascript to promote top level lists
I just cannot seem to get this expression to work. Its purpose is to scan the top-level fields of an object and replace any single-element array value (list) with just that one value. I do not want it to recurse. Here are some examples of what I am looking for:

Input:

  {
    "name": ["Alice"],
    "roles": ["admin", "editor"],
    "active": [true],
    "profile": { "city": ["Springfield"] }
  }

Output:

  {
    "name": "Alice",                          // promoted
    "roles": ["admin", "editor"],             // unchanged
    "active": true,                           // promoted
    "profile": { "city": ["Springfield"] }    // untouched (no recursion)
  }

I keep getting: Failure: The output document is a primitive value: null, Reason: The output document must be an array or object, Resolution: Check for target paths that write to the root. What am I missing?

  {
    // Promote top-level single-item arrays of primitive types only
    promoteSingleArrays: (obj) =>
      obj.mapValues((val, key) =>
        (Array.isArray(val) && val.length == 1 &&
          (typeof val[0] == 'string' || typeof val[0] == 'number' || typeof val[0] == 'boolean'))
          ? val[0]
          : val
      )
  }
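For context on the error itself: "The output document is a primitive value: null" is what a Mapper raises when the expression feeding the root target path $ evaluates to null, so the first thing to confirm is what the function is actually being handed. A minimal usage sketch, assuming the snippet above is saved as an expression library and attached under the pipeline's Expression Libraries property with the hypothetical name promote:

  Mapper settings:
    Expression : lib.promote.promoteSingleArrays($)
    Target path: $

If the call instead passes a sub-path that can be null on some documents (for example lib.promote.promoteSingleArrays($.get('payload'))), any document missing that field would reproduce exactly this failure.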
SnapLogic Product Update Snippet Videos - March 2025 Release
Enhance your SaaS and database account governance with this valuable update to the SnapLogic Asset Catalog. Administrators can now monitor and manage not only Tasks (APIs) and Pipelines, but also all associated accounts, within a single, unified interface.

Custom Privacy Notice: In today's data-driven world, ensuring compliance with data privacy regulations like GDPR and CCPA is of paramount importance for enterprises. SnapLogic's latest feature aims to simplify this process by providing users with a clear and accessible notice about data privacy considerations.

New Oracle HCM (Human Capital Management) Snap Pack: With this Snap Pack, businesses can:
- Retrieve employee data for reporting, compliance, and workforce planning.
- Automate employee record management, ensuring real-time updates to HR systems.
- Sync HR data with payroll, finance, and IT systems for seamless cross-department operations.
- Maintain accurate employee records by enabling bulk data updates and efficient record-keeping.