Recent Content
Automating Untracked Assets to Git
Hi, I am trying to understand if there's a way to automate committing untracked assets to Git. Specifically, I'd like to know: Is there any public API that allows adding untracked files and committing them? Are there other recommended ways to automate Git commits in a SnapLogic pipeline or related automation setup? Any guidance, examples, or best practices would be greatly appreciated. Thanks, Sneha

Is there a way where we can add delay/wait of 3~5 seconds before every post call?
Hi, I have a requirement where I need to post data (HTTP Client POST call) by splitting it into multiple batches, around 100 records per batch. So, is there a way to add a delay/wait of 3~5 seconds before every POST call?
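One way to picture the requirement outside of any specific snap is a loop that chunks the records and sleeps between calls. A minimal sketch in Python, assuming a hypothetical endpoint URL and payload shape:

import time
import requests

API_URL = "https://api.example.com/records"  # hypothetical target endpoint
BATCH_SIZE = 100
DELAY_SECONDS = 4  # anywhere in the requested 3-5 second range

def post_in_batches(records):
    """POST records in batches of BATCH_SIZE, pausing between calls."""
    for start in range(0, len(records), BATCH_SIZE):
        batch = records[start:start + BATCH_SIZE]
        response = requests.post(API_URL, json=batch, timeout=30)
        response.raise_for_status()
        # Pause before the next batch so the target system is not overwhelmed
        if start + BATCH_SIZE < len(records):
            time.sleep(DELAY_SECONDS)

Inside a SnapLogic pipeline, the usual equivalent is to group the records into batches first (for example with a Group By N snap) and introduce the pause in a Script snap or similar step; the exact snap choice depends on your environment.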
AWS SageMaker Model Integration
I am using SnapLogic to create a dataset that I then write as a CSV to S3. My next step is to make a call to the SageMaker model that reads the data and writes an output file to S3. I am currently not able to execute the SageMaker model. I am attempting to use the HTTP Client snap with an AWS Signature V4 account. Is there anything special that you did to the user account or SageMaker? Here is a screenshot of the Snap.
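For reference, an HTTP Client snap with an AWS Signature V4 account is essentially signing the same kind of request that the SageMaker runtime API expects. A minimal sketch of the equivalent call in Python with boto3, assuming the model is deployed behind a real-time endpoint (the endpoint name and CSV payload are illustrative):

import boto3

# boto3 signs the InvokeEndpoint request the same way a SigV4-signed HTTP call would
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

response = runtime.invoke_endpoint(
    EndpointName="my-model-endpoint",  # illustrative endpoint name
    ContentType="text/csv",
    Body=b"col1,col2\n1.0,2.0\n",      # illustrative CSV payload
)
print(response["Body"].read().decode("utf-8"))

If the model instead runs as a batch transform or asynchronous job that reads the CSV from S3 and writes its output back to S3, the call goes to a different SageMaker API than the runtime endpoint, and the IAM user behind the Signature V4 account needs permission for whichever API is actually being invoked.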
File Extraction Patterns
Hi All, what I'm looking for with this post is some input around simple file extraction patterns (just A to B, without transformations). Below I detail a couple of patterns that we use, with pros and cons, and I'd like to know what others are using and what the benefit is over another method. Caveat on my SnapLogic experience: I've been using it for just over 4 years, and everything I know about it is figured out or self-taught from community and documentation, so in the approaches below there may be some knowledge gaps that could be filled in by more experienced users.

In my org, we use either a config-driven generic pattern split across multiple parent/child pipelines, or a more reductive approach with a single pipeline where the process could be reduced to "browse then read then write". If there is enough interest in this topic, I can expand it with some diagrams and documentation.

Config-driven approach (a rough illustrative sketch appears at the end of this post)
- A file is created with details of source, target, file filter, account etc.
- A parent pipeline reads the config and, per each row, detail is passed to a child as parameters.
- The child pipeline uses the parameters to check if the file exists; if no, the process ends and a notification is raised; if yes, another child is called to perform the source-to-target read/write.

Pros:
- High observability in the dashboard through use of execution labels in Pipeline Execute snaps
- Child pipelines can be reused for other processes
- Child pipelines can be executed in isolation with correct parameters
- Easier to do versioning and update separate components of the process
- Some auditing available via parameter visibility in the dashboard
- Can run parallel extractions
- Child failures are isolated
- Pipelines follow the principle of single responsibility
- Easy to test each component

Cons:
- More complex to set up
- Increased demand on the Snaplex
- Might be more difficult to move through development stages
- Requires some documentation for new users
- Concurrency could cause issues with throttling/bottlenecks at source/target

Single pipeline approach
- Can run from a config file or have directories hard coded.
- In one pipeline, a browser checks for files, then a reader and writer perform the transfer.
- Depending on the requirement, some filtering or routing can be done to different targets.

Pros:
- Fewer assets to manage
- Faster throughput for small files
- Less demand on the Snaplex
- Easier to promote through development stages

Cons:
- No visibility of what files were picked up without any custom logging (just a number in the reader/writer in the dashboard)
- More difficult to rerun or extract specific files
- Not as reusable as a multi-pipeline approach, less modular
- No concurrency, so the process could be long, depending on file volume/size
- More difficult to test each component

I think both approaches are valid; the choice to me seems to be a trade-off between operability (observability, isolation, re-runs) and simplicity/throughput. I'm interested to hear insights, patterns, and examples from the community to help refine or reinforce what we're doing. Some other questions which I think would be useful to get input on:
- Which pattern do you default to for file extractions, and why?
- If you use a single pipeline, how do you implement file-level observability (e.g., DB tables, S3 JSON logs, Groundplex logs)?
- How do you handle retries and idempotency in practice (temporary folders, checksums, last-load tracking)?
- What limits or best practices have you found for concurrency control when fanning out with Pipeline Execute?
- Have you adopted a hybrid approach that balances clarity with operational efficiency, and how do you avoid asset sprawl?
Cheers, Lee
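As referenced above, here is a rough sketch of the config-driven parent/child idea outside SnapLogic, just to make the pattern concrete. The field names, paths, and helper logic are illustrative, not the actual pipeline parameters:

import glob
import os
import shutil

# Illustrative config rows; in the real pattern this is a file the parent
# pipeline reads, with each row passed to a child pipeline as parameters.
config_rows = [
    {"source": "/data/outbound/finance", "target": "/landing/finance", "file_filter": "*.csv"},
    {"source": "/data/outbound/hr", "target": "/landing/hr", "file_filter": "report_*.xml"},
]

def child_check_and_transfer(row):
    """Stand-in for the child pipelines: check for files, then read/write A to B."""
    files = glob.glob(os.path.join(row["source"], row["file_filter"]))
    if not files:
        print(f"No files found for {row['source']} - raise a notification here")
        return
    for path in files:
        shutil.copy(path, row["target"])  # the simple A-to-B read/write step

# The parent pipeline's role: iterate the config and dispatch each row.
for row in config_rows:
    child_check_and_transfer(row)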
SnapLogic MCP Support

Introduction
Since the inception of the Model Context Protocol (MCP), we've been envisioning and designing how it can be integrated into the SnapLogic platform. We've recently received a significant number of inquiries about MCP, and we're excited to share our progress, the features we'll be supporting, our release timeline, and how you can get started creating MCP servers and clients within SnapLogic. If you're interested, we encourage you to reach out!

Understanding the MCP Protocol
The MCP protocol allows tools, data resources, and prompts to be published by an MCP server in a way that Large Language Models (LLMs) can understand. This empowers LLMs to autonomously interact with these resources via an MCP client, expanding their capabilities to perform actions, retrieve information, and execute complex workflows. The MCP protocol primarily supports:
- Tools: Functions an LLM can invoke (e.g., data lookups, operational tasks).
- Resources: File-like data an LLM can read (e.g., API responses, file contents).
- Prompts: Pre-written templates to guide LLM interaction with the server.
- Sampling (not widely used): Allows client-hosted LLMs to be used by remote MCP servers.
An MCP client can, therefore, request to list available tools, call specific tools, list resources, or read resource content from a server.

Transport and Authentication
The MCP protocol offers flexible transport options, including STDIO or HTTP (SSE or Streamable-HTTP) for local deployments, and HTTP (SSE or Streamable-HTTP) for remote deployments. While the protocol proposes OAuth 2.1 for authentication, an MCP server can also use custom headers for security.

Release Timeline
We're excited to bring MCP support to SnapLogic with two key releases:

August Release: MCP Client Support
We'll be releasing two new Snaps: the MCP Function Generator Snap and the MCP Invoke Snap. These will be available in the AgentCreator Experimental (Beta) Snap Pack. With these Snaps, your SnapLogic agent can access the services and resources available on public MCP servers.

Late Q3 Release: MCP Server Support
Our initial MCP server support will focus on tool operations, including the ability to list tools and call tools. For authentication, it will support custom header-based authentication. Users will be able to leverage the MCP Server functionality by subscribing to this feature.

If you're eager to be among the first to test these new capabilities and provide feedback, please reach out to the Project Manager Team at pm-team@snaplogic.com. We're looking forward to seeing what you build with SnapLogic MCP.

SnapLogic MCP Client
MCP Clients in SnapLogic enable users to connect to MCP servers as part of their Agent. An example is connecting to the Firecrawl MCP server for a data-scraping Agent, or other use cases that can leverage the created MCP servers. The MCP Client support in SnapLogic consists of two Snaps: the MCP Function Generator Snap and the MCP Invoke Snap. From a high-level perspective, the MCP Function Generator Snap allows users to list available tools from an MCP server, and the MCP Invoke Snap allows users to perform operations such as calling tools, listing resources, and reading resources from an MCP server. Let's dive into the individual pieces.

MCP SSE Account
To connect to an MCP server, we need an account that specifies the URI of the server to connect to.
Properties
- URI: The URI of the server to connect to.
  There is no need to include the /sse path.
- Additional headers: Additional HTTP headers to be sent to the server.
- Timeout: The timeout value in seconds. If the result is not returned within the timeout, the Snap returns an error.

MCP Function Generator Snap
The MCP Function Generator Snap enables users to retrieve the list of tools as SnapLogic function definitions to be used in a Tool Calling Snap.
Properties
- Account: An MCP SSE account is required to connect to an MCP server.
- Expose Tools: List all available tools from an MCP server as SnapLogic function definitions.
- Expose Resources: Add list_resources and read_resource as SnapLogic function definitions to allow LLMs to use resources/read and resources/list (MCP Resources).

Definitions for list_resources and read_resource:

[
  {
    "sl_type": "function",
    "name": "list_resources",
    "description": "This function lists all available resources on the MCP server. Return a list of resources with their URIs.",
    "strict": false,
    "sl_tool_metadata": {
      "operation": "resources/list"
    }
  },
  {
    "sl_type": "function",
    "name": "read_resource",
    "description": "This function returns the content of the resource from the MCP server given the URI of the resource.",
    "strict": false,
    "sl_tool_metadata": {
      "operation": "resources/read"
    },
    "parameters": [
      {
        "name": "uri",
        "type": "STRING",
        "description": "Unique identifier for the resource",
        "required": true
      }
    ]
  }
]

MCP Invoke Snap
The MCP Invoke Snap enables users to perform operations such as tools/call, resources/list, and resources/read against an MCP server.
Properties
- Account: An account is required to use the MCP Invoke Snap.
- Operation: The operation to perform on the MCP server. The operation must be one of tools/call, resources/list, or resources/read.
- Tool Name: The name of the tool to call. Only enabled and required when the operation is tools/call.
- Parameters: The parameters to be added to the operation. Only enabled for resources/read and tools/call. Required for resources/read, and optional for tools/call, depending on the tool.

MCP Agents in pipeline action

MCP Agent Driver pipeline
An MCP Agent Driver pipeline is like any other Agent Driver pipeline; we'll need to provide the system prompt and user prompt, and run it with the PipeLoop Snap.

MCP Agent Worker pipeline
Here's an example of an MCP Agent with a single MCP server connection. The MCP Agent Worker is connected to one MCP server. MCP Client Snaps can be used together with AgentCreator Snaps, such as the Multi-Pipeline Function Generator and Pipeline Execute Snaps, as SnapLogic functions (tools). This allows users to use tools provided by MCP servers alongside internal tools, without sacrificing safety and freedom when building an Agent.

Agent Worker with MCP Client Snaps

SnapLogic MCP Server
In SnapLogic, an MCP Server allows you to expose SnapLogic pipelines as dynamic tools that can be discovered and invoked by language models or external systems. By registering an MCP Server, you effectively provide an API that language models and other clients can use to perform operations such as data retrieval, transformation, enrichment, or automation, all orchestrated through SnapLogic pipelines. For the initial phase, we'll support connections to the server via HTTP + SSE.

Core Capabilities
The MCP Server provides two core capabilities. The first is listing tools, which returns structured metadata that describes the available pipelines. This metadata includes the tool name, a description, the input schema in JSON Schema format, and any additional relevant information.
This allows clients to dynamically discover which operations are available for invocation. The second capability is calling tools, where a specific pipeline is executed as a tool using structured input parameters, and the output is returned. Both of these operations—tool listing and tool calling—are exposed through standard JSON-RPC methods, specifically tools/list and tools/call, accessible over HTTP.

Prerequisite
You'll need to prepare your tool pipelines in advance. During the server creation process, these can be added and exposed as tools for external LLMs to use.

MCP Server Pipeline Components
A typical MCP server pipeline consists of four Snaps, each with a dedicated role:

1. Router
What it does: Routes incoming JSON requests—which differ from direct JSON-RPC requests sent by an MCP client—to either the list-tools branch or the call-tool branch.
How: Examines the request payload (typically the method field) to determine which action to perform.

2. Multi-Pipeline Function Generator (Listing Tools)
What it does: Converts a list of pipeline references into tool metadata. This is where you define the pipelines you want the server to expose as tools.
Output: For each pipeline, generates the tool name, description, parameters (as JSON Schema), and other metadata.
Purpose: Allows clients (e.g., an LLM) to query what tools are available without prior knowledge.

3. Pipeline Execute (Calling Tools)
What it does: Dynamically invokes the selected SnapLogic pipeline and returns structured outputs.
How: Accepts parameters encoded in the request body, maps them to the pipeline's expected inputs, and executes the pipeline.
Purpose: Provides flexible runtime execution of tools based on user or model requests.

4. Union
What it does: Merges the result streams from both branches (list and call) into a single output stream for consistent response formatting.

Request Flows
Below are example flows showing how requests are processed:

🟢 tools/list
The client sends a JSON-RPC request with method = "tools/list". The Router directs the request to the Multi-Pipeline Function Generator. Tool metadata is generated and returned in the response. The Union Snap merges and outputs the content.
✅ Result: The client receives a JSON list describing all available tools.

🟢 tools/call
The client sends a JSON-RPC request with method = "tools/call" and the tool name + parameters. The Router sends this to the Pipeline Execute Snap. The selected pipeline is invoked with the given parameters. Output is collected and merged via Union.
✅ Result: The client gets the execution result of the selected tool.

Registering an MCP Server
Once your MCP server pipeline is created:

Create a Trigger Task and register it as an MCP server
1. Navigate to the Designer > Create Trigger Task.
2. Choose a Groundplex. (Note: This capability currently requires a Groundplex, not a Cloudplex.)
3. Select your MCP pipeline.
4. Click Register as MCP server.
5. Configure node and authentication.

Find your MCP Server URL
Navigate to Manager > Tasks. The Task Details page exposes a unique HTTP endpoint. This endpoint is treated as your MCP Server URL.

After registration, clients such as AI models or orchestration engines can interact with the MCP Server by calling /tools/list to discover the available tools, and /tools/call to invoke a specific tool using a structured JSON payload.

Connect to a SnapLogic MCP Server from a Client
After the MCP server is successfully published, using the SnapLogic MCP server is no different from using other MCP servers running in SSE mode.
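At the protocol level, the interaction with the registered endpoint is plain JSON-RPC. A minimal sketch of the two request shapes a client ends up sending, whether that client is Claude Desktop, another orchestration engine, or the SnapLogic MCP Client Snaps (the payload shapes follow the MCP specification; the tool name and arguments are illustrative):

import json

# JSON-RPC request an MCP client sends to discover the available tools
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# JSON-RPC request to invoke one of the discovered tools
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_order_status",       # illustrative tool name
        "arguments": {"order_id": "12345"},  # illustrative arguments
    },
}

print(json.dumps(call_tool_request, indent=2))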
Any MCP client that supports SSE mode can connect; all you need is the MCP Server URL (and the Bearer Token if authentication is enabled during server registration).

Configuration
First, you need to add your MCP server in the settings of the MCP client. Taking Claude Desktop as an example, you'll need to modify your Claude Desktop configuration file. The configuration file is typically located at:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json

Add your remote MCP server configuration to the mcpServers section:

{
  "mcpServers": {
    "SL_MCP_server": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://devhost9000.example.com:9000/mcp/6873ff343a91cab6b00014a5/sse",
        "--header",
        "Authorization: Bearer your_token_here"
      ]
    }
  }
}

Key Components
- Server Name: SL_MCP_server - a unique identifier for your MCP server.
- Command: npx - uses the Node.js package runner to execute the mcp-remote package.
- URL: The SSE endpoint URL of your remote MCP server (note the /sse suffix).
- Authentication: Use the --header flag to include authorization tokens if the server has authentication enabled.

Requirements
Ensure you have Node.js installed on your system, as the configuration uses npx to run the mcp-remote package. Replace the example URL and authorization token with your actual server details before saving the configuration. After updating the configuration file, restart Claude Desktop for the changes to take effect.

To conclude, the MCP Server in SnapLogic is a framework that allows you to expose pipelines as dynamic tools accessible through a single HTTP endpoint. This capability is designed for integration with language models and external systems that need to discover and invoke SnapLogic workflows at runtime. MCP Servers make it possible to build flexible, composable APIs that return structured results, supporting use cases such as conversational AI, automated data orchestration, and intelligent application workflows.

Conclusion
SnapLogic's integration of the MCP protocol marks a significant leap forward in empowering LLMs to dynamically discover and invoke SnapLogic pipelines as sophisticated tools, transforming how you build conversational AI, automate complex data orchestrations, and create truly intelligent applications. We're excited to see the innovative solutions you'll develop with these powerful new capabilities.
Pagination and nextCursor in header
Hello all, I'm using an HTTP Client snap to retrieve a few thousand records, and I need to use pagination. The system that I'm calling uses cursor-based pagination. If the number of elements returned is higher than the defined limit, the response header will contain a "nextCursor" value that I need to pass as the "cursor" parameter for the next call, and so on until there is no more "nextCursor". This should work fine; however, I can't seem to get the content of the response header for my next call. When I use Postman I can see that there is a header returned, and the value that I need is stored under the key "X-Pagination-Next-Cursor" rather than "nextCursor" as I expected. How can I access the values of the header? In the Snap itself, in the Pagination section, there is an "Override headers" part that I tried to configure by mapping the "cursor" key with either $nextCursor, $headers.nextCursor, or $headers.X-Pagination-Next-Cursor, but nothing works; I'm only getting the records from the first page, with no failure and no pagination. Thanks in advance for any help! JF
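For reference, the loop that the pagination settings need to reproduce looks roughly like this in Python, assuming a hypothetical endpoint and using the header key observed in Postman:

import requests

BASE_URL = "https://api.example.com/records"  # hypothetical endpoint
cursor = None
records = []

while True:
    params = {"limit": 500}
    if cursor:
        params["cursor"] = cursor
    response = requests.get(BASE_URL, params=params, timeout=30)
    response.raise_for_status()
    records.extend(response.json())

    # The next cursor comes back in a response header, not in the body
    cursor = response.headers.get("X-Pagination-Next-Cursor")
    if not cursor:
        break  # no more pages

print(f"Fetched {len(records)} records")

One thing worth checking in the snap configuration: a header name containing dashes cannot be referenced with plain dot notation in the expression language, so bracket syntax along the lines of $headers['X-Pagination-Next-Cursor'] may be needed, depending on how the snap exposes response headers to the pagination expressions.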
OpenAI Responses API

Introduction
OpenAI announced the Responses API, their most advanced and versatile interface for building intelligent AI applications. Supporting both text and image inputs with rich text outputs, this API enables dynamic, stateful conversations that remember and build on previous interactions, making AI experiences more natural and context-aware. It also unlocks powerful capabilities through built-in tools such as web search, file search, code interpreter, and more, while enabling seamless integration with external systems via function calling. Its event-driven design delivers clear, structured updates at every step, making it easier than ever to create sophisticated, multi-step AI workflows.

Key features include:
- Stateful conversations via the previous response ID
- Built-in tools like web search, file search, code interpreter, MCP, and others
- Access to advanced models available exclusively through this API, such as o1-pro
- Enhanced support for reasoning models, with reasoning summaries and efficient context management through the previous response ID or encrypted reasoning items
- Clear, event-based outputs that simplify integration and control

While the Chat Completions API remains fully supported and widely used, OpenAI plans to retire the Assistants API in the first half of 2026. To support the adoption of the Responses API, two new Snaps have been introduced:
- OpenAI Chat Completions ⇒ OpenAI Responses API Generation
- OpenAI Tool Calling ⇒ OpenAI Responses API Tool Calling

Both Snaps are fully compatible with existing upstream and downstream utility Snaps, including the OpenAI Prompt Generator, OpenAI Multimodal Content Generator, all Function Generators (Multi-Pipeline, OpenAPI, and APIM), the Function Result Generator, and the Message Appender. This allows existing pipelines and familiar development patterns to be reused while gaining access to the advanced features of the Responses API.

OpenAI Responses API Generation
The OpenAI Responses API Generation Snap is designed to support OpenAI's newest Responses API, enabling more structured, stateful, and tool-augmented interactions. While it builds upon the familiar interface of the Chat Completions Snap, several new properties and behavioral updates have been introduced to align with the Responses API's capabilities.

New properties
- Message: The input sent to the LLM. This field replaces the previous Use message payload, Message payload, and Prompt properties in the OpenAI Chat Completions Snap, consolidating them into a single input. It removes ambiguity between "prompt" as raw text and as a template, and supports both string and list formats.
- Previous response ID: The unique ID of the previous response to the model. Use this to create multi-turn conversations.

Model parameters
- Reasoning summary: For reasoning models, provides a summary of the model's reasoning process, which aids in debugging and in understanding how the model reached its answer. The property can be none, auto, or detailed.

Advanced prompt configurations
- Instructions: Applied only to the current response, making them useful for dynamically swapping instructions between turns. To persist instructions across turns when using previous_response_id, use the developer message in the OpenAI Prompt Generator Snap.

Advanced response configurations
- Truncation: Defines how to handle input that exceeds the model's context window.
  auto allows the model to truncate the middle of the conversation to fit, while disabled (default) causes the request to fail with a 400 error if the context limit is exceeded.
- Include reasoning encrypted content: Includes an encrypted version of reasoning tokens in the output, allowing reasoning items to persist when the store is disabled.

Built-in tools
- Web search: Enables the model to access up-to-date information from the internet to answer queries beyond its training data. Options include Web search type, Search context size, and User location (an approximate user location, including city, region, country, and timezone, to deliver more relevant search results).
- File search: Allows the model to retrieve information from documents or files. Options include Vector store IDs, Maximum number of results, Include search results (determines whether raw search results are included in the response for transparency or debugging), Ranker, Score threshold, and Filters (additional metadata-based filters to refine search results; for more details on using filters, see Metadata Filtering).

Advanced tool configuration
- Tool choice: A new option, SPECIFY A BUILT-IN TOOL, allows specifying that the model should use a built-in tool to generate a response.

Note that the OpenAI Responses API Generation Snap does not support the response count or stop sequences properties, as these are not available in the Responses API. Additionally, the message user name, which may be specified in the Prompt Generator Snap, is not supported and will be ignored if included.

Model response of Chat Completions vs Responses API
The Responses API introduces an event-driven output structure that significantly enhances how developers build and manage AI-powered applications compared to the traditional Chat Completions API. While the Chat Completions API returns a single, plain-text response within the choices array, the Responses API provides an output array containing a sequence of semantic event items—such as reasoning, message, function_call, web_search_call, and more—that clearly delineate each step in the model's reasoning and actions. This structured approach allows developers to easily track and interpret the model's behavior, facilitating more robust error handling and smoother integration with external tools. Moreover, the response from the Responses API includes the model parameter settings, providing additional context for developers.

Pipeline examples

Built-in tool: web search
This example demonstrates how to use the built-in web search tool. In this pipeline, the user's location is specified to ensure the web search targets relevant geographic results.
System prompt: You are a friendly and helpful assistant. Please use your judge to decide whether to use the appropriate tools or not to answer questions from the user.
Prompt: Can you recommend 2 good sushi restaurants near me?
Output: The output contains both a web search call and a message. The model uses the web search to find and provide recommendations based on current data, tailored to the specified location.

Built-in tool: File search
This example demonstrates how the built-in file search tool enables the model to retrieve information from documents stored in a vector store during response generation. In this case, the file wildfire_stats.pdf has been uploaded. You can create and manage vector stores through the Vector Store management page.
Prompt: What is the number of Federal wildfires in 2018?
Output: The output array contains a file_search_call event, which includes search results in its results field. These results provide matched text, metadata, and relevance scores from the vector store. This is followed by a message event, where the model uses the retrieved information to generate a grounded response. The presence of detailed results in the file_search_call is enabled by selecting the Include file search results option.

OpenAI Responses API Tool Calling
The OpenAI Responses API Tool Calling Snap is designed to support function calling using OpenAI's Responses API. It works similarly to the OpenAI Tool Calling Snap (which uses the Chat Completions API), but is adapted to the event-driven response structure of the Responses API and supports stateful interactions via the previous response ID. While it shares much of its configuration with the Responses API Generation Snap, it is purpose-built for workflows involving function calls. Existing LLM agent pipeline patterns and utility Snaps—such as the Function Generator and Function Result Generator—can continue to be used with this Snap, just as with the original OpenAI Tool Calling Snap. The primary difference lies in adapting the Snap configuration to accommodate the Responses API's event-driven output, particularly the structured function_call event item in the output array.

The Responses API Tool Calling Snap provides two output views, similar to the OpenAI Tool Calling Snap, with enhancements to simplify building agent pipelines and support stateful interactions using the previous response ID:
- Model response view: The complete API response, including extra fields:
  - messages: an empty list if store is enabled, or the full message history—including the messages payload and model response—if it is disabled (similar to the OpenAI Tool Calling Snap). When using stateful workflows, message history isn't needed because the previous response ID is used to maintain context.
  - has_tool_call: a boolean indicating whether the response includes a tool call. Since the Responses API no longer includes the finish_reason: "tool_calls" field, this new field makes it easier to create stop conditions in the PipeLoop Snap within the agent driver pipeline.
- Tool call view: Displays the list of function calls made by the model during the interaction.

Tool Call View of Chat Completions vs Responses API
- Chat Completions API: Uses id as the function call identifier when sending back the function result. Tool call properties (name, arguments) are nested inside the function field.
- Responses API: Each tool call includes id (the unique event ID) and call_id (used to reference the function call when returning the result). The tool call structure is flat — name and arguments are top-level fields.

Building LLM Agent Pipelines
To build LLM agent pipelines with the OpenAI Responses API Tool Calling Snap, you can reuse the same agent pipeline pattern described in Introducing Tool Calling Snaps and LLM Agent Pipelines. Only minor configuration changes are needed to support the Responses API.

Agent Driver Pipeline
The primary change is in the PipeLoop Snap configuration, where the stop condition should now check the has_tool_call field, since the Responses API no longer includes finish_reason: "tool_calls".

Agent Worker Pipeline

Fields mapping
A Mapper Snap is used to prepare the related fields for the OpenAI Responses API Tool Calling Snap.
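For context, here is a rough sketch of the stateful tool-calling round trip that the configuration below builds on, expressed directly against the Responses API with the openai Python SDK (the tool definition, model name, and result value are illustrative, not the Snap's internals):

from openai import OpenAI

client = OpenAI()

# Illustrative function tool definition (the Responses API uses a flat tool schema)
tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

# First turn: the model decides to call the tool
first = client.responses.create(
    model="gpt-4.1",
    input="What's the weather in Paris?",
    tools=tools,
)
call = next(item for item in first.output if item.type == "function_call")

# Second turn: send back only the function result, referencing call_id,
# and use previous_response_id instead of resending the message history.
second = client.responses.create(
    model="gpt-4.1",
    previous_response_id=first.id,
    input=[{
        "type": "function_call_output",
        "call_id": call.call_id,
        "output": '{"temperature_c": 21, "conditions": "sunny"}',  # illustrative result
    }],
    tools=tools,
)
print(second.output_text)

This mirrors Option 1 (Use Store) described next: only the function call result is sent on the following turn, and the previous response ID carries the conversation state.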
OpenAI Responses API Tool Calling
The key changes are in this Snap's configuration to support the Responses API's stateful interactions. There are two supported approaches:

Option 1: Use Store (Recommended)
Leverages the built-in state management of the Responses API.
- Enable Store.
- Use Previous Response ID.
- Send only the function call results as the input messages for the next round. (The messages field in the Snap's output will be an empty array, so you can still use it in the Message Appender Snap to collect tool results.)

Option 2: Maintain Conversation History in Pipeline
Similar to the approach used with the Chat Completions API.
- Disable Store.
- Include the full message history in the input (the messages field in the Snap's output contains the message history).
- (Optional) Enable Include Reasoning Encrypted Content (for reasoning models) to preserve reasoning context efficiently.

OpenAI Function Result Generator
As explained in the Tool Call View of Chat Completions vs Responses API section, the Responses API includes both an id and a call_id. You must use the call_id to construct the function call result when sending it back to the model.

Conclusion
The OpenAI Responses API makes AI workflows smarter and more adaptable, with stateful interactions and built-in tools. SnapLogic's OpenAI Responses API Generation and Tool Calling Snaps bring these capabilities directly into your pipelines, letting you take advantage of advanced features like built-in tools and event-based outputs with only minimal adjustments. By integrating these Snaps, you can seamlessly enhance your workflows and fully unlock the potential of the Responses API.
REST Get Pagination in various scenarios
Hi all, there are various challenges when using REST GET pagination. In this article, we discuss these challenges and how to overcome them with the help of some built-in expressions in SnapLogic. Let's look at the various scenarios and their solutions.

Scenario 1: API URL response has no total-records indicator, but works with limit and offset
In this case, as the API does not provide the total record count in advance, the only way is to navigate each page until the last page of the API response. The last page of the API is the page where there are no records in the response output.

Explanation of how it works and sample data:

has_next condition: $entity.length > 0
has_next explanation: If the URL has n documents and it is not certain whether the next page iteration is valid, $entity.length checks the response array length from the URL output and proceeds with the next page iteration only when $entity.length is greater than zero. If the response array length is equal to zero, there are no more records to be fetched, so the has_next condition "$entity.length > 0" fails and the iteration loop stops.

next_url condition: $original.URL + "?limit=" + $original.limit + "&offset=" + (parseInt($original.limit) * snap.out.totalCount)
next_url explanation: The limit parameter and API URL values are static, but the offset value needs to change for each iteration. Hence the approach is to multiply the default limit parameter with snap.out.totalCount to shift the offset per API page iteration. snap.out.totalCount is the snap system variable that holds the total number of documents that have passed through the output views of the snap. In this REST Get, each API page iteration's response output is one JSON array, so snap.out.totalCount will equal the number of API page iterations completed.

Sample response for the first API call:
{ "statusLine": { "protoVersion": "HTTP/1.1", "statusCode": 200, "reasonPhrase": "OK" }, "entity": [ { "year": "2022", "month": "08", "Name": "Mark" }, { "year": "2022", "month": "08", "Name": "John" }, ... 1000 records in this array ], "original": { "effective_date": "2023-08-31", "limit": "1000", "offset": "0", "URL": "https://Url.XYZ.com" } }

Sample response for the second API call:
{ "statusLine": { "protoVersion": "HTTP/1.1", "statusCode": 200, "reasonPhrase": "OK" }, "entity": [ { "year": "2024", "month": "08", "Name": "Ram" }, { "year": "2021", "month": "03", "Name": "Joe" }, ... 1000 records in this array ], "original": { "effective_date": "2023-08-31", "limit": "1000", "offset": "1000", "URL": "https://Url.XYZ.com" } }

Scenario 2: API URL response has total records in the response header, and pagination uses limit & offset
As the total record count is available, the total-records value in the API response header can be used to traverse the API response pages.

Explanation of how it works and sample data:

has_next condition: parseInt($original.limit) * snap.out.totalCount < $headers['total-records']
has_next explanation: If the URL response reports n total records, the check is whether the rows fetched so far are fewer than the total records. For example, with 120 total records and a limit of 100, it loops through only 2 times.
It loops through as below:
- limit = 100, snap.out.totalCount = 0: has_next evaluates 0 < 120, so the next page is fetched
- limit = 100, snap.out.totalCount = 1: has_next evaluates 100 < 120, so the next page is fetched
- limit = 100, snap.out.totalCount = 2: has_next evaluates 200 < 120, so pagination breaks and the next page is not processed

next_url condition: $original.URL + "?limit=" + $original.limit + "&offset=" + (parseInt($original.limit) * snap.out.totalCount)
next_url explanation: The limit and URL values are static, but the offset value is derived as the limit multiplied by snap.out.totalCount. snap.out.totalCount indicates the total number of documents that have passed through all of the Snap's output views, so the next API page is traversed for as long as the has_next condition is satisfied.

Sample response for the first API call:
{ "statusLine": { "protoVersion": "HTTP/1.1", "statusCode": 200, "reasonPhrase": "OK" }, "entity": [ { "year": "2022", "month": "08", "Name": "Mark" }, { "year": "2022", "month": "08", "Name": "John" }, ... 100 records ], "original": { "effective_date": "2023-08-31", "limit": "100", "offset": "0", "URL": "https://Url.XYZ.com" }, "headers": { "total-records": [ "120" ] } }

Sample response for the second API call:
{ "statusLine": { "protoVersion": "HTTP/1.1", "statusCode": 200, "reasonPhrase": "OK" }, "entity": [ { "year": "2022", "month": "08", "Name": "Ram" }, { "year": "2022", "month": "08", "Name": "Raj" }, ... 20 records ], "original": { "effective_date": "2023-08-31", "limit": "100", "offset": "100", "URL": "https://Url.XYZ.com" }, "headers": { "total-records": [ "120" ] } }

Scenario 3: API has no total-records indicator and pagination uses page_no
Here there is no total-records indication in the API output, but the API takes a page number as a parameter. So pagination is possible by incrementing the page number parameter by 1 for as long as the length of the API output array is greater than 0; otherwise the pagination loop needs to break.

Explanation of how it works and sample data:

has_next condition: $entity.length > 0
has_next explanation: As no total record count is known from the API output, the next page of the API needs to be fetched if the current page has any elements in the output array.

next_url condition: $original.URL + "&page_no=" + ($headers.page_no + 1)
next_url explanation: As every response carries the page number of the call that produced it, the same value can be used to increment the page number for the next call.

Sample response for the first API call:
{ "statusLine": { "protoVersion": "HTTP/1.1", "statusCode": 200, "reasonPhrase": "OK" }, "entity": [ { "year": "2022", "month": "08", "Name": "Mark" }, { "year": "2022", "month": "08", "Name": "John" }, ... 1000 records in this array ], "original": { "effective_date": "2023-08-31", "URL": "https://Url.XYZ.com" }, "headers": { "page_no": 1 } }

Sample response for the second API call:
{ "statusLine": { "protoVersion": "HTTP/1.1", "statusCode": 200, "reasonPhrase": "OK" }, "entity": [ { "year": "2022", "month": "08", "Name": "Ram" }, { "year": "2022", "month": "08", "Name": "Raj" }, ... 1000 records in this array ], "original": { "effective_date": "2023-08-31", "URL": "https://Url.XYZ.com" }, "headers": { "page_no": 2 } }

Scenario 4: API has total records in the response header and pagination uses page_no
Here there is both a total record count indicator and a page number in the API URL response.
The next API page can be reached by incrementing the page number by 1 and validating that the total rows fetched so far (snap.out.totalCount multiplied by the page limit) is still less than the total record count.

Explanation of how it works and sample data:

has_next condition: parseInt($original.limit) * snap.out.totalCount < $headers['total-records']
has_next explanation: If the URL response reports n total records, the has_next condition checks whether the rows fetched so far are fewer than the total records. For example, with 120 total records and a limit of 100 for the API (predefined as part of the design/implementation), it loops through exactly 2 times (first and second page only). It loops through as below:
- limit = 100, snap.out.totalCount = 0: has_next evaluates 0 < 120, so the next page is fetched
- limit = 100, snap.out.totalCount = 1: has_next evaluates 100 < 120, so the next page is fetched
- limit = 100, snap.out.totalCount = 2: has_next evaluates 200 < 120, so pagination breaks and the next page is not processed

next_url condition: $original.URL + "&page_no=" + ($headers.page_no + 1)
next_url explanation: As every API response carries its page number, the same value can be used to increment the page number and fetch the next page.

Sample response for the first API call:
{ "statusLine": { "protoVersion": "HTTP/1.1", "statusCode": 200, "reasonPhrase": "OK" }, "entity": [ { "year": "2022", "month": "08", "Name": "Mark" }, { "year": "2022", "month": "08", "Name": "John" }, ... 100 records in this array ], "original": { "effective_date": "2023-08-31", "in_limit": "100", "URL": "https://Url.XYZ.com" }, "headers": { "page_no": 1 } }

Sample response for the second API call:
{ "statusLine": { "protoVersion": "HTTP/1.1", "statusCode": 200, "reasonPhrase": "OK" }, "entity": [ { "year": "2022", "month": "08", "Name": "Ram" }, { "year": "2022", "month": "08", "Name": "Raja" }, ... 20 records in this array ], "original": { "effective_date": "2023-08-31", "limit": "100", "URL": "https://Url.XYZ.com" }, "headers": { "page_no": 2 } }

Please give us Kudos if the article helps you 😍
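As a footnote, here is a rough Python equivalent of the offset/limit looping logic from Scenarios 1 and 2, which can help when reasoning about the has_next and next_url expressions (the endpoint is illustrative, and the records array is assumed to be returned directly in the body):

import requests

BASE_URL = "https://Url.XYZ.com"  # illustrative endpoint from the samples above
LIMIT = 100

def fetch_all():
    """Offset/limit pagination: stop on an empty page (Scenario 1) or once the
    row count reaches the total-records header value (Scenario 2)."""
    all_records = []
    page_count = 0  # plays the role of snap.out.totalCount
    while True:
        offset = LIMIT * page_count
        response = requests.get(BASE_URL, params={"limit": LIMIT, "offset": offset}, timeout=30)
        response.raise_for_status()
        entity = response.json()  # assumes the response body is the records array
        all_records.extend(entity)
        page_count += 1

        total_records = response.headers.get("total-records")
        if total_records is not None:
            # Scenario 2: has_next is "limit * totalCount < total-records"
            if LIMIT * page_count >= int(total_records):
                break
        elif len(entity) == 0:
            # Scenario 1: has_next is "$entity.length > 0"
            break
    return all_records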