Recent Content
SnapLogic Product Release - October 2025
This week we released the SnapLogic October 2025 Release. This update brings key enhancements across AI, automation, and observability, plus an important change to how you monitor your pipelines.

Dashboard Retirement & New Monitor Training
As of this release, the legacy Dashboard has been officially retired. All execution, health, and observability functions are now available in Monitor, which is your primary and default app going forward. To help people get started, a new on-demand training video walks through the Monitor layout, key features, and customization options. Just follow the link here to watch: Monitor Overview & Training Video. You can also read more about SnapLogic Monitor by checking out the Monitor community post.

October 2025 Release Highlights

AgentCreator
- Introduced LLM-agnostic Function Generator Snaps for building reusable agent functions across OpenAI, Azure OpenAI, Google GenAI, and Amazon Bedrock
- Added GPT-5 and Claude 4 model support
- Prompt Composer now features adjustable panels for a more flexible workspace

AutoSync
- Added Google Service Account JSON authentication for BigQuery endpoints
- Enhanced error visibility and reliability for integrations that previously stalled in a "running" state

Snaps
- PostgreSQL Multi Execute Snap for multiple write operations in one transaction
- In-memory OAuth2 Accounts improve HTTP Client Snap performance
- AWS Signature V4 and Redshift Snaps enhanced for IAM and cross-account access

Monitor
- The new destination for monitoring and metrics
- New usability improvements: search within filters, scrollable execution tables, and status icons that now include descriptive text for clarity

Platform and Snaplex Update
We recommend upgrading to Snaplex version main-36396 - 4.42.2.0 to benefit from performance fixes and enhanced reliability in Triggered Tasks and Snaplex node logging.

For full release details, visit the October 2025 Release Notes.

Pagination Logic Fails After Migrating from REST GET to HTTP Client Snap
Hello everyone,

Three years ago, I developed a pipeline to extract data from ServiceNow and load it into Snowflake. As part of this, I implemented pagination logic to handle multi-page responses by checking for the presence of a "next" page and looping until all data was retrieved. This job has been running successfully in production without any issues.

Recently, we were advised by the Infrastructure team to replace the REST GET Snap with the HTTP Client Snap, as the former is being deprecated and is no longer recommended. I updated the pipeline accordingly, but the pagination logic that worked with REST GET is not functioning as expected with the HTTP Client Snap. The logic I used is as follows:

Pagination → Has Next:
isNaN($headers['link'].match(/",<([^;"]*)>;rel="next",/))

Override URI → Next URL:
$headers['link'].match(/\",<([^;\"]*)>;rel=\"next\",/) ? $headers['link'].match(/\",<([^;\"]*)>;rel=\"next\",/)[1].replace(_servicenow_cloud_base_url, _servicenow_b2b_base_url) : null

However, with the HTTP Client Snap, I'm encountering the following error:

Error Message: Check the spelling of the property or, if the property is optional, use the get() method (e.g., $headers.get('link'))
Reason: 'link' was not found while evaluating the sub-expression '$headers['link']'

This exact logic works perfectly in the existing job using REST GET, with no changes to the properties. It seems the HTTP Client Snap is not recognizing or parsing the link header in the same way.
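One direction worth checking, based purely on the error message quoted above: the bracket accessor $headers['link'] fails hard whenever the key is absent, while the get() method the error itself suggests returns null instead. A minimal, null-safe sketch of the two expressions (unverified, reusing the post's regex without the escaped quotes, and note that it replaces the original isNaN() check with an explicit null test) might look like this:

Has Next:
```
$headers.get('link') != null && $headers.get('link').match(/",<([^;"]*)>;rel="next",/) != null
```

Override URI → Next URL:
```
$headers.get('link') != null && $headers.get('link').match(/",<([^;"]*)>;rel="next",/) != null ? $headers.get('link').match(/",<([^;"]*)>;rel="next",/)[1].replace(_servicenow_cloud_base_url, _servicenow_b2b_base_url) : null
```

It is also worth validating the pipeline and inspecting the HTTP Client Snap's output preview to confirm the exact header key casing (e.g., 'link' vs. 'Link'), since the two Snaps may not normalize response header names the same way.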
SnapLogic Monitor: Official Training Module
As previously announced, we will be officially sunsetting Classic Dashboard and transitioning to Monitor as the exclusive monitoring experience with the SnapLogic October 8, 2025, release. To help with this transition, we're excited to launch a detailed, in-depth training course focused exclusively on Monitor. This comprehensive, self-paced course is designed to give your team the expertise to master Monitor at your own pace. The curriculum delivers in-depth coverage of all aspects of Monitor, including monitoring and troubleshooting pipeline executions, observing node and Snaplex infrastructure health metrics, activity logging, the asset catalog, and insights. Integrated knowledge checks reinforce key concepts, ensuring you can confidently leverage the full power of the new experience.

We strongly encourage you and your teams to take advantage of this new training. SnapLogic is here to accelerate your Monitor journey, and this course will be free of charge for the next 6 months. You can access this training by clicking the link below or by copying and pasting it into your browser: https://learn.snaplogic.com/snaplogic-monitor

In addition to the above, we also have the following resources to help with this transition:
- Monitor Tutorial YouTube Videos
- Monitor Migration Guide
- Monitor FAQ

If you have any technical challenges or questions about the course, please contact your customer success manager or SnapLogic customer support at support@snaplogic.com.

Filter in map after aggregate & group by
Hi, I am using an Aggregate step with a Group By, and I am trying to get a value based on another field's value. E.g. in the example below, I'd like to return the last_updated value where status = "Complete", i.e. "2025-01-01". I tried this, but it just returns true/false:

$status == "Complete" ? $last_updated : null

```
[
  {
    "status": "Complete",
    "last_updated": "2025-01-01"
  },
  {
    "status": "Pending",
    "last_updated": "2025-05-01"
  }
]
```

Any help would be much appreciated! Thanks
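A minimal sketch of one common way to handle this, assuming the Group By collects the grouped records into an array field (called group here purely for illustration; the actual name depends on the Snap's Target field setting) that a downstream Mapper can then filter:

```
$group.filter(x => x.status == "Complete").length > 0
  ? $group.filter(x => x.status == "Complete")[0].last_updated
  : null
```

The ternary in the post evaluates each record on its own, so it cannot reach across rows to pick up another record's last_updated; filtering the grouped array is what makes the cross-record lookup possible.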
SnapLogic Product Update - August & September 2025 Release
Keeping up with every new feature drop can be tough, so we wanted to give you a quick tour of the highlights from our August and September releases. The two releases include enhancements to the platform, AgentCreator, APIM, Snap Packs, and more.

The August release includes AI enhancements that speed the creation of AI agents and the use of AI capabilities in pipelines. This includes the ability to perform tool calling across multiple LLM offerings of your choice. With RAG being a top use case for customers, we've simplified how RAG can be used. For those who want to leverage managed RAG instead of doing it all themselves, our Google Vertex AI Snap Pack rounds out the capabilities that already exist for OpenAI and Azure OpenAI. Additionally, support for APIM 3.0 capabilities, Monitor, AutoSync, and Snap enhancements are all included.

Generative AI & Agent Enhancements
- Google Vertex AI Snap Pack: New Snaps for Embedder, Gemini Generate, and RAG to simplify Retrieval-Augmented Generation (RAG) use cases
- Tool Calling Support: Now available across Amazon Bedrock, Azure OpenAI, Google Gemini, and OpenAI APIs
- Agent Visualizer: A dual-view interface combining diagrams and logs for debugging and visibility
- AgentCreator Snaps: Universal Function Generator and Function Result Generator Snaps added to the LLM Utilities Snap Pack
- Prompt Composer: UI enhancements for a cleaner and more flexible workspace, including panel settings and saving of custom layouts

APIM 3.0
- Swagger 2.0 Import & OAS 3.x Export: Seamlessly publish APIs and view specs in DeveloperHub
- Improved Service Navigation: Tag-based grouping and duplication for easier API version management
- Lifecycle Governance: CI/CD with GitHub
- Security: JWT bearer token enhancements

Monitor
- New Notification Center: Centralized, real-time visibility into alerts, activities, and custom notices
- Pipeline Control: Added controls to the pipeline execution table

AutoSync
- Schema flexibility: Supports NOT NULL to NULLABLE changes in target schemas, minimizing manual intervention

New Snaps & Enhancements
- Syndigo (New Snaps): Syndigo PIM/MDM integration with read, create, update, delete, and execute support
- Expanded Snap Capabilities: Enhancements to Kafka, MongoDB, MySQL, Redshift, Salesforce, Snowflake, SQL Server, and more
- SnapGPT in Snap Settings: Access the AI assistant directly in Snap configuration dialogs

For more, see the August Release Webinar, August Release Notes, and September Release Notes. Keep an eye out for new features and enhancements with each of our releases coming this year.

Hi, Is there a way where we can add delay/wait of 3~5 seconds before every post call?
Hi, I have a requirement where I need to post data (an HTTP Client POST call) by splitting it into multiple batches, like 100 records per batch. So, is there a way to add a delay/wait of 3-5 seconds before every POST call?

Automating Untracked assets to GIT
Hi, I am trying to understand if there's a way to automate committing untracked assets to Git. Specifically, I'd like to know:
- Is there any public API that allows adding untracked files and committing them?
- Are there other recommended ways to automate Git commits in a SnapLogic pipeline or related automation setup?

Any guidance, examples, or best practices would be greatly appreciated. Thanks, Sneha

File Extraction Patterns
Hi All,

What I'm looking for with this post is some input around simple file extraction patterns (just A to B, without transformations). Below I detail a couple of patterns that we use, with pros and cons, and I'd like to know what others are using and what the benefits are over another method. A caveat on my SnapLogic experience: I've been using it for just over 4 years, and everything I know about it is figured out or self-taught from community and documentation, so in the approaches below there may be some knowledge gaps that could be filled in by more experienced users.

In my org, we use either a config-driven generic pattern split across multiple parent/child pipelines, or a more reductive approach with a single pipeline where the process could be reduced to "browse, then read, then write". If there is enough interest in this topic, I can expand it with some diagrams and documentation.

Config-driven approach
- A file is created with details of source, target, file filter, account, etc. (a sketch of what such a config row might look like appears at the end of this post)
- A parent pipeline reads the config, and for each row the detail is passed to a child as parameters
- The child pipeline uses the parameters to check if the file exists: if no, the process ends and a notification is raised; if yes, another child is called to perform the source-to-target read/write

Pros:
- High observability in the dashboard through the use of execution labels in Pipeline Execute Snaps
- Child pipelines can be reused for other processes
- Child pipelines can be executed in isolation with the correct parameters
- Easier to do versioning and update separate components of the process
- Some auditing available via parameter visibility in the dashboard
- Can run parallel extractions
- Child failures are isolated
- Pipelines follow the principle of single responsibility
- Easy to test each component

Cons:
- More complex to set up
- Increased demand on the Snaplex
- Might be more difficult to move through development stages
- Requires some documentation for new users
- Concurrency could cause issues with throttling/bottlenecks at source/target

Single pipeline approach
- Can run from a config file or have directories hard-coded
- In one pipeline, a browser checks for files, then a reader and writer perform the transfer
- Depending on requirements, some filtering or routing can be done to different targets

Pros:
- Fewer assets to manage
- Faster throughput for small files
- Less demand on the Snaplex
- Easier to promote through development stages

Cons:
- No visibility of which files were picked up without custom logging (just a count in the reader/writer in the dashboard)
- More difficult to rerun or extract specific files
- Not as reusable as a multi-pipeline approach; less modular
- No concurrency, so the process could be long, depending on file volume/size
- More difficult to test each component

I think both approaches are valid; the choice to me seems to be a trade-off between operability (observability, isolation, re-runs) and simplicity/throughput. I'm interested to hear insights, patterns, and examples from the community to help refine or reinforce what we're doing. Some other questions which I think would be useful to get input on:

1. Which pattern do you default to for file extractions, and why?
2. If you use a single pipeline, how do you implement file-level observability (e.g., DB tables, S3 JSON logs, Groundplex logs)?
3. How do you handle retries and idempotency in practice (temporary folders, checksums, last-load tracking)?
4. What limits or best practices have you found for concurrency control when fanning out with Pipeline Execute?
5. Have you adopted a hybrid approach that balances clarity with operational efficiency, and how do you avoid asset sprawl?
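For illustration, here is a minimal sketch of the kind of config row described in the config-driven approach above. Every field name is hypothetical; the actual schema depends entirely on your own conventions and on what the child pipelines expect as parameters:

```
[
  {
    "source_account": "shared/sftp_vendor_account",
    "source_path": "/outbound/invoices/",
    "file_filter": "*.csv",
    "target_account": "shared/s3_landing_account",
    "target_path": "s3:///landing-bucket/invoices/",
    "notify_on_missing": "ops-team@example.com",
    "enabled": true
  }
]
```

One row per extraction keeps each child pipeline execution self-describing, which is also what makes the parameter-based auditing mentioned in the pros list possible.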
Cheers,
Lee