Community Update
To prevent further spamming incidents, we have changed our permissions for the members of this community. While anyone can visit our community, only verified customers, partners, and employees will have posting access. Email community@snaplogic.com to verify your account.
Community Activity
What is the Integration Nation
Community is more than just an online platform. It is the coming together of people with a shared interest to discuss common use cases, find solutions, and grow their expertise. At the same time, recognition of customers and individuals who go above and beyond helps develop leaders in the industry. This article describes the Integration Nation, SnapLogic's new community program.
Expression Library Starter Pack

Download link: Shared Libraries.zip (4.8 KB)

Description: These expression files have been created as examples for integration professionals and as convenience functions for citizen developers. The expression files can be included in the organization's root-level shared folder or a project-space shared folder for ease of use. The included pipeline provides simple examples for many of the functions contained in these libraries.

Samples:
lib.string.lpad('123', 5, '0') yields "00123"
lib.date.fromEpoch(1234567890) yields "2009-02-13T23:31:30.000 UTC"
lib.date.trunc(Date.now(), 'HH') yields the current date/time truncated to the hour

Release used: master-7317 - 4.21 GA

date.expr
Many new customers struggle with date manipulation in JavaScript. This expression library was created to provide some of the functionality they may see as "missing" in JavaScript.

Static Variables
Two static variables are defined:
timezone - the timezone to be used by the functions in this expression library if a timezone is not provided. This should be set using the appropriate timezone values for the Date.toLocaleDateTimeString function.
defaultFormat - the format string to be used by the functions in this expression library if a format string is not provided.

Functions
format (Date[, FormatString[, TimeZone]]) - format a Date object, returned as a String. If FormatString is not specified, the "defaultFormat" static variable is used. If TimeZone is not specified, the "timezone" static variable is used.
firstDayOfMonth (Date) - return a Date object for the first day of the month of the specified Date.
lastDayOfMonth (Date) - return a Date object for the last day of the month of the specified Date.
toEpoch (Date) - return a BigInt value representing the epoch time (number of seconds since 1-Jan-1970) of the specified Date.
fromEpoch (seconds) - return a Date object calculated from the specified epoch time.
getAge (Date[, AsOfDate]) - return an integer value representing the difference in years between AsOfDate and the specified Date. If AsOfDate is not specified, the current date/time is used.
segmentHour (Date, numSegments) - return a Date object where the minutes value has been set according to the number of segments specified. For example, if numSegments is 4, the minutes returned will be 00, 15, 30, or 45; if numSegments is 6, the minutes returned will be 00, 10, 20, 30, 40, or 50. Note that the minutes value is truncated (not rounded) to the segment, so with numSegments of 4:
Minutes 0-14 → 00
Minutes 15-29 → 15
Minutes 30-44 → 30
Minutes 45-59 → 45
trunc (Date[, FormatString]) - return a Date object that has been truncated based on the desired format. Valid FormatString values:
yyyy - truncate to year
MM - truncate to month
dd - truncate to day
HH - truncate to hour
mm - truncate to minute
ss - truncate to second
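A couple of additional usage sketches, based on the signatures above (illustrative input dates; exact output formatting may vary):
lib.date.firstDayOfMonth(Date.parse('2019-04-15')) yields a Date for 2019-04-01
lib.date.segmentHour(Date.parse('2019-04-15T10:38:00'), 4) yields a Date with the minutes truncated to 30
lib.date.getAge(Date.parse('2000-06-01')) yields the age in years as of the current date/time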
string.expr
Many new customers struggle with formatting strings for specific purposes. This expression library was created to provide some of the functionality they may see as "missing" in JavaScript.

Functions
lpad (String, size[, fill-character]) - return a String value that has been left-padded to the specified size, using the fill-character specified. The default fill character is a space. The String will be truncated to "size" if the original string is longer.
rpad (String, size[, fill-character]) - return a String value that has been right-padded to the specified size, using the fill-character specified. The default fill character is a space. The String will be truncated to "size" if the original string is longer.
objectToKeyValuePairs (Object[, key/value separator[, data separator[, encode]]]) - return a String that contains key/value pairs joined from all fields in an object. If "key/value separator" is not specified, the default colon ":" is used. If "data separator" is not specified, the default comma "," is used. If "encode" is set to true, each data value is passed through encodeURIComponent.
keyValuePairsToObject (String[, key/value separator[, data separator[, decode]]]) - return an Object parsed from the key/value pairs String. If "key/value separator" is not specified, the default colon ":" is used. If "data separator" is not specified, the default comma "," is used. If "decode" is set to true, each data value is passed through decodeURIComponent.
flattenObject (Object, SubObjectName) - return a new object with the members of SubObjectName merged into Object; useful for flattening data after a Lookup or other snaps that move data into "original".
numberWithCommas (Number[, Precision]) - return a String formatted from the specified Number, placing a comma "," at every 3rd position. Decimal values are not formatted but will be rounded to the requested precision.
bytesToFileSize (Bytes[, Precision]) - convert a number that represents a file size in bytes to a human-readable (concise) format, up to yottabyte notation.

xml.expr
The XML Generator does not handle incoming data that is not XML-safe. This expression library provides simple functions to encode and decode String values.

Functions
encode (String) - return a String in which all occurrences of XML special characters have been replaced with the appropriate escape-sequence strings.
decode (String) - return a String in which all occurrences of XML escape sequences have been replaced with the equivalent symbols.
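To illustrate, based on the descriptions above (the lib.xml namespacing is assumed to match the lib.string and lib.date samples earlier):
lib.string.rpad('123', 5, '0') yields "12300"
lib.string.objectToKeyValuePairs({ "a": 1, "b": 2 }) yields "a:1,b:2"
lib.xml.encode('a < b & c') yields a String with < and & replaced by their XML escape sequences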
Advent of Code via SnapLogic IIP

I've been a pretty big fan of Advent of Code since I found out about it in 2019, and when I started with SnapLogic in 2021, I figured it could be pretty cool to try to use the SnapLogic IIP to solve all (well, not all, but at least some) of the Advent of Code daily puzzles - mostly to learn better how some of the snaps work, but also to get more experience designing pipelines, since I typically work more in individual snap development. This year, I figured I'd post about it here in the community and see if others have an interest in attempting to solve the daily puzzles on SnapLogic IIP. I think a good number of these problems ARE solvable via the IIP, though some aren't. My ground rules for considering a day solved are:

Get the input into a pipeline in whatever way possible, either via file download and read, or via a Constant snap (my posted examples will use the sample input with a Constant snap, but my final solutions typically use a file reader)
No use of the Script snap (if it can't be solved without a Script snap, it's considered unsolvable - but you'd be surprised what you can do with our snaps without one)
No use of external services (databases, REST endpoints, etc.), as those are likely to involve some level of "cheating" similar to a Script snap
Basically, use only the transform, flow, and file reader/writer snaps (to read input files; create, delete, read, and write temporary files; and write final output files)
Pipe Execs are allowed

I figure this might be something that other members of the community are interested in doing. If you want to participate, feel free to join in on the conversation; I figure we can keep discussion to a single thread and do replies per day? Not sure how many might be interested in this, though.

What is Advent of Code? From the website: Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as interview prep, company training, university coursework, practice problems, a speed contest, or to challenge each other. You don't need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.

If you want to join in, go to https://adventofcode.com and connect one of the authentication mechanisms. (I use GitHub, but you can auth with Google, Twitter, or Reddit as well.) Logging in for an account is required so that you can receive input specific to you. If you plan to join and want to join a leaderboard for this project, feel free to join my private leaderboard with the code 1645534-1249c834.
Tour the Integration Nation Community Platform

Welcome to the Integration Nation! This tutorial will help you get started and provides an overview of the community platform's features, areas, and offerings. Explore the various features of the Integration Nation community platform, get to know new content areas, and discover how to get the most out of your membership.

You will learn:
how to update your profile
where to go for different information

Prerequisite
Ensure that you have a community account and are logged in before continuing. We recommend using your business account so that you can be granted access to customer or partner areas if applicable.

Your User Profile and Settings
Once you are logged in, click on the avatar icon in the upper right and select My profile to access your user profile. On your profile page, you can:
Give yourself a shiny new avatar by clicking on the image in the hero art and either choosing one from the Community set we've created for you or uploading your own.
See your community stats and badges earned.
Manage your uploaded images.

Check out what else is on your profile, then head to My settings, also in the menu. On your first day, you want to focus on:
Preferences > Preference Options for your timezone and whether or not to automatically subscribe to posts you participate in.
Preferences > Privacy for setting who can see your private information and email.
Subscriptions & Notifications > Notification Settings to manage in-community notifications and email notifications (there can be a lot).

FAQ
Visit the FAQ at the top of the page for information on using this community platform. If you need more information or can't find what you are looking for, post a question in the About this Community category.

Menu Bar and Categories
The menu bar across the top provides access to all the categories of our community.

Discussions
As indicated by the title, this section is primarily where discussions are held:
Product Category Discussions categories are where we talk about integration and automation from a big-picture level.
Getting the Most Out of the SnapLogic Platform categories are where we discuss features and functionality of SnapLogic products.
SnapLogic Technical Blog for how-tos, best practices, and other technical information from SnapLogic's subject matter experts.
Enhancement Requests (customers/partners only) focuses on discussing new functionality.
User Experience Program gives you the chance to participate in user research studies to improve our products.
Solutions for Your Business is the place to talk end-to-end solutions for your industry or your department.
SnapLabs is our brand-new innovation hub, designed to offer you a unique, early-access experience of our most innovative features and capabilities.

Learning
This section contains:
sample exercises to get you started
links to our training options
access to our Architecture Center

Events
See what events SnapLogic is either hosting or attending, and start a conversation about what you learn. Note that the "Will you be attending?" area does not register you for an event. It is there as a reminder of the event. You must complete the registration within the message of the event.

User Groups
Looking to find other SnapLogic customers in your industry or region? User Groups will be the place to be. We are excited to build out this section. The creation and maintenance of new user groups will depend on interest. Have a suggestion for a new user group? Message the Community Manager, dmiller.
Nation Hub
This is where we support all the programs of the Integration Nation, from this community platform to our MVP Program and our customer/partner awards.

Support
Accessible only to customers and partners, this area provides information on submitting support tickets and will be the home of our knowledge base, built primarily on questions resolved here in the community.

Last updated: Oct 23, 2023
Advanced Prompt Engineering

This guide will cover some advanced prompt engineering techniques and how to apply them in SnapLogic GenAI App Builder to help you tackle more complex tasks and enhance overall performance. You will learn how to use system prompts, structure responses in JSON, create complex prompts, manage tokens, and consider prompt and context size. First, let's level set on what exactly prompt engineering is and why it's important.

What is Prompt Engineering?
At its core, prompt engineering is about designing the input (the "prompt") that you give to an AI model. The way you phrase your prompt can significantly impact the quality and relevance of the model's output. It's not just about what you ask the AI to do, but how you ask it.

Why is Prompt Engineering Important?
Even the most advanced AI models rely heavily on the prompts they receive. A well-crafted prompt can lead to insightful, accurate, and highly relevant responses, while a poorly structured prompt can result in vague, inaccurate, or irrelevant answers. Understanding the nuances of prompt engineering can help you maximize the effectiveness of your AI applications.

Prerequisites
Basics of SnapLogic
An OpenAI, Azure OpenAI, Amazon Bedrock Anthropic Claude, or Google Gemini account

System prompt
The system prompt is a special input that defines the LLM's behavior, tone, and boundaries before it engages with users. It establishes the context and sets the rules for interactions, ensuring that the assistant's responses align with the desired persona and goals.

Imagine you're a travel agent assistant. Your job is to provide clients with tailored and precise travel recommendations. To do this effectively, it's essential to establish the LLM's behavior through the system prompt: defining the assistant's role, setting the appropriate tone and style, and including important instructions.

Drag the "OpenAI Chat Completion", "Azure OpenAI Chat Completion", "Anthropic Claude on AWS Messages", or "Google Gemini Generate" snap onto the canvas.
Select the "Account" tab and select your configured account.
Select the "Settings" tab to configure these fields.
Select a model: click the chat bubble icon to view the list of available models and select your preferred one.
Set "Prompt" to "I'm organizing a 3-day honeymoon in Japan. We're interested in cultural experiences, and nature."
Set "System Prompt": expand the "Advanced prompt Configuration" section and enter the system prompt "You are a travel agent assistant. Provide personalized travel recommendations based on user preferences. Maintain a friendly and conversational tone. If asked about a travel plan, include accommodations and restaurants."
Save and close the snap settings panel.
Validate the pipeline and view the result by clicking the preview data output.

To summarize the above example:
System prompt: You are a travel agent assistant. Provide personalized travel recommendations based on user preferences. Maintain a friendly and conversational tone. If asked about a travel plan, include accommodations and restaurants.
Question: I'm organizing a 3-day honeymoon in Japan. We're interested in cultural experiences, and nature.
Answer: That's so exciting! Japan is the perfect destination for a honeymoon filled with cultural experiences and beautiful scenery. For your 3-day trip, I recommend starting in Kyoto, known for its traditional temples and gardens. You can visit iconic sites like Fushimi Inari Shrine, Arashiyama Bamboo Grove, and Kinkaku-ji Temple.
Be sure to catch a traditional tea ceremony or try on a kimono for a truly immersive experience. Next, head to Hakone for a taste of Japan's stunning natural beauty. You can explore the Hakone Open-Air Museum, take a relaxing dip in an onsen (hot spring), and enjoy a scenic cruise on Lake Ashi with views of Mt. Fuji. For accommodations, I suggest staying in a traditional ryokan for an authentic Japanese experience. Ryokans offer tatami-matted rooms, futon beds, and kaiseki dinners featuring seasonal dishes. As for dining, make sure to try local specialties like sushi, ramen, and tempura. And don't miss out on trying sake, Japan's famous rice wine! Let me know if you'd like more details or assistance with booking accommodations and restaurants. Have a wonderful honeymoon in Japan! 🎎🌸🗾

The response adheres to the system prompt by providing personalized recommendations in a friendly tone, including details on accommodations and dining options.

JSON response
Structuring responses in JSON format is a crucial technique for enhancing clarity, maintaining consistency, and ensuring seamless integration with various systems, including front-end applications, APIs, and databases. For example, if you need to present information from the travel agent assistant example above in a predefined format on a website, generate a PDF, or connect with other services that facilitate booking hotels and activities, it is essential to format the prompt response as JSON. This approach ensures compatibility and smooth interaction across different platforms and services.

Let's try modifying the system prompt from the previous example to produce output in a specific JSON format.

Click the Chat Completion snap to open its settings.
Update the system prompt to instruct the LLM to produce a JSON response: "You are a travel agent assistant. Provide a JSON response that includes destination, trip_duration, a list of activities, a list of hotels (with fields for name and description), and a list of restaurants (with fields for name, location, and description)."
Check the "JSON mode" checkbox. The snap will output a field named json_output that contains the parsed JSON object of the response.
Save and close the snap settings panel.
Validate the pipeline and view the result.

The prompt answer is the JSON string, and the parsed JSON object can be found in the "json_output" field since JSON mode is enabled. The JSON response complies with the structure specified in the system prompt, ensuring that all necessary fields are included. The structured format supports seamless integration with downstream applications. For a travel agency, this capability allows for the efficient generation of personalized itineraries, which can be used to populate web pages, generate PDFs or Excel documents, send emails, or directly update travel booking systems, including querying flight availability and checking hotel options.
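For illustration, a response shaped by that system prompt might look like this (values abbreviated and invented, not actual snap output):

{
  "destination": "Japan",
  "trip_duration": "3 days",
  "activities": ["Tea ceremony in Kyoto", "Scenic cruise on Lake Ashi"],
  "hotels": [{ "name": "Sample Ryokan", "description": "Traditional inn with tatami rooms" }],
  "restaurants": [{ "name": "Sample Sushi Bar", "location": "Kyoto", "description": "Seasonal omakase menu" }]
}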
Complex prompt
Using a list of messages to incorporate conversation history helps maintain context in ongoing dialogues. This approach ensures responses are relevant and coherent, improving the overall flow of the conversation. Additionally, these messages can be provided as examples of desired responses to guide the model in interacting effectively. By including previous interactions, it enhances continuity and user engagement, facilitating the model's ability to handle complex, multi-turn exchanges. This technique allows the model to generate more natural and accurate responses, especially when building on earlier details, resulting in a more seamless and intuitive conversation.

Each message contains a role and content. The common roles are:
System: Provides the initial context, setting the tone and behavior for the LLM.
User: Represents the user's input, guiding the conversation based on their queries or commands.
Assistant/Model: Contains previous responses from the LLM or examples of desired behavior.

This section will guide you through the process of constructing a message list and using it as input for the LLM. We'll create a pipeline that makes a travel agent assistant able to answer questions by leveraging the context from previous conversations. In this example, the user asks about Japan's attractions in April and later inquires about the weather without specifying a location or time. Let's create the pipeline and see how it works.

Drag the "JSON Generator" snap onto the canvas.
Click on the "JSON Generator" to open it, then click on the "Edit JSON" button in the main Settings tab.
Highlight all the text from the template, delete it, and paste in this text, which will be used as the user question:
{ "prompt": "Can you tell me what the weather's going to be like?" }
Click "OK" in the lower-right corner to save the prompt.
Save the settings and close the snap.
Drag the "OpenAI Prompt Generator" or "Azure OpenAI Prompt Generator" onto the canvas and connect it to the "JSON Generator".
Click on the "Prompt Generator" to open its settings and change the label to "System Prompt".
Click on "Edit prompt" to open the prompt editor.
Highlight all the text from the template, delete it, and paste in this text, which we will use as the system prompt:
You are a travel agent assistant. Provide personalized travel recommendations based on user preferences.
Click "OK" in the lower-right corner to save the prompt.
Select the "Advanced prompt output" checkbox. The "User role" field will be populated.
Set the "User role" field to "SYSTEM".
Save the settings and close the snap.
Drag a second "Prompt Generator" onto the canvas and connect it to the prior snap. This snap will hold the user's previous question. Using the steps above as a guide, configure the following fields:
Label: User Message 1
Prompt editor: I am planning a trip to Japan in April. Can you help me find some tourist attractions?
User role: USER
Drag a third "Prompt Generator" onto the canvas and connect it to the prior snap. This snap will hold the LLM's previous answer. Configure the following fields:
Label: Assistant Message
Prompt editor: Sure! Some tourist attractions in Japan during your trip in April are: 1. Cherry Blossom Viewing 2. Fushimi Inari Shrine 3. Hiroshima Peace Memorial Park 4. Mount Fuji 5. Gion District. Let me know if you need more information or assistance with planning your trip!
User role: ASSISTANT
Drag a fourth "Prompt Generator" onto the canvas and connect it to the prior snap. This snap will handle the current user question.
Configure the following fields:
Label: User Message 2
Prompt editor: {{prompt}}
User role: USER
Drag the "Chat Completion" snap onto the canvas and connect it to "User Message 2".
Click on the "Chat Completion" snap to open its settings and select the account in the Account tab.
Select the Settings tab and select the model name.
Check the "Use message payload" checkbox. The prompt generators build a list of messages in the "messages" field, and enabling "Use message payload" is necessary to use this list as input. The "Message payload" field appears; set its value to $messages.
Save and close the settings panel.
Validate the pipeline and view the result. Click on the output view of "User Message 2" to see the message payload, which we have constructed using the advanced mode of the prompt generator snaps.
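The payload will look roughly like this (content abbreviated; the exact field layout may differ by snap version):

[
  { "role": "SYSTEM", "content": "You are a travel agent assistant. Provide personalized travel recommendations based on user preferences." },
  { "role": "USER", "content": "I am planning a trip to Japan in April. Can you help me find some tourist attractions?" },
  { "role": "ASSISTANT", "content": "Sure! Some tourist attractions in Japan during your trip in April are: ..." },
  { "role": "USER", "content": "Can you tell me what the weather's going to be like?" }
]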
Click on the output view of the "Chat Completion" snap to see the LLM response. The result is:

In April, the weather in Japan is generally mild and pleasant with cherry blossoms in full bloom. The temperatures are typically around 10-20°C (50-68°F) and there may be occasional rain showers. It's a great time to explore outdoor attractions and enjoy the beautiful spring scenery. Make sure to pack layers and an umbrella just in case!

The model effectively delivered weather information for Japan in April, even though the last user query did not specify a location or time. This is possible because the model uses the entire conversation history to understand the context and flow of the dialogue. Furthermore, the model echoed the user's question before responding, maintaining a consistent conversational style. To achieve the best results, make sure your message list is complete and well-organized, as this will help the LLM generate more relevant and coherent responses, enhancing the quality of the interaction.

Tokens
Tokens are units of text, including words, character sets, or combinations of words and punctuation, that language models use to process and generate language. They can range from single characters or punctuation marks to entire words or parts of words, depending on the model. For instance, the word "artificial" might be split into tokens like "art", "ifi", and "cial". The total number of tokens in a prompt affects the model's response capability. Each model has a maximum token limit, which includes both the input and output. For instance, GPT-3.5-Turbo has a limit of 4,096 tokens, while GPT-4 has limits of 8,192 tokens and 32,768 tokens for the 32k context version. Effective token management ensures responses remain within these limits, improving efficiency, reducing costs, and enhancing accuracy.

To manage token usage effectively, the maximum tokens parameter is essential. It sets a limit on the number of tokens the model can generate, ensuring the combined total of input and output stays within the model's capacity. Setting a maximum tokens parameter has several benefits: it prevents responses from becoming excessively long, reduces response times by generating more concise outputs, optimizes performance, and minimizes costs by controlling token usage. Additionally, it enhances user experience by providing clear, focused, and quicker responses.

Use case examples:
Customer support chatbots: Setting maximum tokens ensures that the chatbot's responses are brief and focused, providing quick, relevant answers to user inquiries without overwhelming them with excessive detail. This enhances user experience and keeps interactions efficient.
Content summarization: Helps generate concise summaries of long texts, suitable for applications with space constraints, such as mobile apps or notifications.
Interactive storytelling: Controls the length of narrative segments or dialogue options, maintaining engaging and well-paced storytelling.
Product descriptions: Generates brief and effective product descriptions for e-commerce platforms, maintaining relevance and fitting within space constraints.

Let's walk through how to configure maximum tokens in the SnapLogic Chat Completion snap using the prompt "Describe the Photosynthesis in simple terms." We'll see how the LLM behaves with and without the maximum token setting.

Drag the "OpenAI Chat Completion", "Azure OpenAI Chat Completion", or "Google Gemini Generate" snap onto the canvas.
Select the "Account" tab and select your configured account.
Select the "Settings" tab.
Select your preferred model.
Set the prompt to "Describe the Photosynthesis in simple terms."
Save the snap settings and validate the pipeline to see the result.

In the result, the "usage" field provides the token consumption detail:
prompt_tokens: tokens used by the input
completion_tokens: tokens used for generating the response
total_tokens: the combined number of tokens used for both the input prompt and the generated response

We can see that the response is quite long and the number of tokens used for the response (completion_tokens) is 241. Let's set the maximum tokens and check the result again:

Expand the "Model parameters" section.
Set "Maximum tokens" to 100.
Save the snap settings and validate the pipeline to see the result.

The result is more concise compared to the output when maximum tokens is not set. In this case, the number of completion_tokens used is only 84, indicating a shorter and more focused response.

Using maximum tokens effectively ensures that responses are concise and relevant, optimizing both performance and cost-efficiency. By setting this limit, you can prevent excessively long outputs, reduce response times, and maintain clarity in the generated content. To achieve optimal results, align the maximum tokens setting with your specific needs, such as the desired response length and application requirements. Regularly review and adjust this parameter to balance brevity with completeness, ensuring that the outputs remain useful and within operational constraints.
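For reference, the "usage" field from the capped run above would look something like this (prompt_tokens is illustrative; completion_tokens matches the run described):

{
  "prompt_tokens": 13,
  "completion_tokens": 84,
  "total_tokens": 97
}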
Prompt size considerations
In the previous section, we covered techniques for managing response size to stay within token limits. Now we turn our focus to prompt size and context considerations. By ensuring that both prompts and context are appropriately sized, you can improve the accuracy and relevance of model responses while staying within token limits. Here are some techniques for managing prompt and context size:

Keep prompts clear and concise
By making prompts clear and direct, you reduce token usage, which helps keep the prompt within the model's limits. Focusing on essential information and removing unnecessary words enhances the accuracy and relevance of the model's responses. Additionally, specifying the desired output length further optimizes the interaction, preventing excessively long responses and improving overall efficiency.
Example prompt: "Could you please provide a detailed explanation of how the process of photosynthesis works in plants, including the roles of chlorophyll, sunlight, and water?"
Better prompt: "Explain the process of photosynthesis in plants, including the roles of chlorophyll, sunlight, and water, in about 50 words."

Split complex tasks into simpler prompts
Breaking down complex tasks into smaller, more manageable subtasks not only reduces the size of each individual prompt but also enables the model to process each part more efficiently. This approach ensures that each prompt stays within token limits, resulting in clearer and more accurate responses.
Example complex task: "Write a detailed report on the economic impact of climate change in developing countries, including statistical analysis, case studies, and policy recommendations."
Simplified prompts:
"Summarize the economic impact of climate change in developing countries."
"Provide a statistical analysis of how climate change affects agriculture in developing countries."
"List case studies that demonstrate the economic consequences of climate change in developing countries."
"Suggest policy recommendations for mitigating the economic impact of climate change in developing countries."

Use a sliding window for chat history
From the complex prompt section, we know that including the entire chat history helps maintain context, but it can also quickly use up available tokens. To optimize prompt size, employ a sliding window approach. This technique involves including only a portion of the chat history, focusing on recent and relevant exchanges, to keep the prompt within token limits (see the sketch at the end of this section).

Summarize contexts
Use a summarization technique to condense context into a brief summary. Instead of including extensive conversation history, create a concise summary that captures the essential information. This approach reduces token usage while retaining key details for generating accurate responses.

By applying these techniques, you can effectively manage prompt and context size, ensuring that interactions remain efficient and relevant while optimizing token usage.
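As a minimal sketch of the sliding-window idea in Python (the window size and the message shape are assumptions, not SnapLogic specifics):

# Keep the system message plus only the most recent exchanges.
def sliding_window(messages, max_recent=6):
    system = [m for m in messages if m["role"] == "SYSTEM"]
    recent = [m for m in messages if m["role"] != "SYSTEM"][-max_recent:]
    return system + recent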
Performing an Action when there is no data

A common integration pattern is to do something when no data is received. For example, we might read a file, parse it, and find that no records meet some filter criteria. As a result, we might send an email or insert a ticket into a ticket management system like ServiceNow. However, in SnapLogic this can be more difficult than it first seems because of the streaming architecture. In fact, many snaps will not execute without input documents - rather hard to accomplish when there is no document to flow through.

So, how can we take an action, even though there's no document to run the snap? The trick is to make a document and force it into the stream with a Join snap (set to Merge). Note that even though nothing flows into the top view of the Join snap, it still produces an output. This enables us to use the Router snap to check whether a field that we know will exist in the source does in fact exist. If it does, we know that data has passed through the Filter snap and been merged with the Sequence data. If it does not, we know that only the Sequence data has passed through, and therefore nothing made it through the Filter. Only one of these views will have anything pass through it.

The magic here is in the Merge option of the Join snap. As long as it is receiving data on a view, it will output - even if it has nothing to 'join' to. Meanwhile, the Sequence snap will always output the number of documents that you set it to; in this case, only one is required. (The pipeline is attached here: Do something after no doc filter_2017_04_26.slp (11.1 KB).)
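To make the Router check concrete: if documents from the source are known to always contain an id field (the field name here is illustrative), the two Router conditions could be written as:

$id != null    (data made it through the Filter - take the normal path)
$id == null    (only the Sequence document arrived - take the no-data action)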
Box's SnapLogic Knowledge Base

Here is a Box folder containing utilities and endpoint-specific pipelines.
SnapGPT Beginner's Guide

What is SnapGPT?
SnapGPT is a generative AI solution in early release, currently available only to users who have been invited to SnapLabs. Built right into the SnapLogic web interface, SnapGPT can be prompted for a wide variety of help: creating Pipelines, configuring Snaps, suggesting which Snap to use, and much more.

How can I get started with SnapGPT?
In this section we cover a few examples that should be repeatable as a way to send your first few prompts to SnapGPT and observe the outcome. After that you can explore our SnapGPT Prompt Catalog, which contains even more prompts to copy/paste into SnapGPT as you explore. One caveat: as a generative AI solution that is always learning, it is possible that outcomes will change over time. When SnapGPT creates a Pipeline for you, it will be a bit like importing a Pipeline, in the sense that a wizard will help you select accounts and finalize the Pipeline.

Log in at https://snapgpt.labs.snaplogic.com
If SnapGPT is not shown by default, press the SnapGPT button in the upper-right corner of the SnapLogic web interface to make it visible. To make it always visible, click your name in the upper-right corner > User Settings > Opt-in Features > check the box for "Open SnapGPT by Default".
A new box will appear on the right-hand side of the SnapLogic web interface for you to start typing to SnapGPT.

Examples: See SnapGPT in Action
Now let's talk about getting your feet wet, hands dirty, or whatever saying floats your boat. Below are several examples you can use to start exploring SnapGPT; they should be precise enough to yield consistent results.

Example 1: Create a pipeline that pulls Salesforce Opportunities
Our first example will generate a short but complete Pipeline for us. With any generative AI, SnapGPT included, it is important to remember that the more specific you are with the prompt, the more accurate a response you will receive - or in this example, the more accurate a Pipeline.
Prompt: "Create a Pipeline using Salesforce Read to fetch my Opportunities, Filter out any opportunities outside of the last fiscal quarter, then write them to Snowflake."
The short Pipeline created by SnapGPT closely resembles the prompt we provided, and inside the Filter Snap we can see that SnapGPT created an expression for us to filter on the $CloseDate field.

Example 2: Ask for help identifying which Snap to use
At some point we were all new to using SnapLogic, and we learned it from CSM-led training, trial-and-error, reviewing existing pipelines, etc. What we did not have was an always-on AI assistant ready to answer our questions (we still love you Iris and wouldn't be here without you!). This example shows how SnapGPT can be prompted with natural language to tell us exactly which Snap we need.
Prompts:
"What snap can I use to remove records from my pipeline based on a given condition?"
"Which snap acts like a case statement or switch to allow me to move records down different pathways based on a condition?"

Example 3: Ask for help learning when to use one Snap over another
Another example of using SnapGPT for educational purposes or documentation skimming is to ask it when you might want to use one Snap instead of another.
Prompt: "When would I need to use the Salesforce SOQL snap instead of the Salesforce Read snap?"

Example 4: Generate sample data
We can also use SnapGPT to generate sample data, for those times when we need to get started on a business process and show some results but don't yet have access to the source system.
Prompt: "Create a single-snap pipeline with a JSON Generator that has 10 example Salesforce Lead records"

Example 5: Fetch exchange data from a third-party API
It is also possible to use SnapGPT to pull data from a third-party site, such as exchange data.
Prompt: "Fetch exchange rate data from the European Central Bank and save it to a JSON file"

What should I be aware of when using SnapGPT?
As with any early access release of software, especially generative AI that is always learning, there are some key points to keep in mind as you explore SnapGPT and share feedback with the SnapLogic team (including any previously mentioned and/or typical disclaimers about using ChatGPT or SnapGPT):
SnapGPT may generate Pipelines with unnecessary Snaps (like kids overpacking to visit grandma's house!)
SnapGPT depends on ChatGPT availability, so there are times when you might see an error response instead of a result.

What if I have questions?
Our goal is to provide several ways to interact with our team:
Community: Use the SnapLogic Community's locked SnapLabs category, which is the same category you should be reading this content from (please do not post on the public forums yet, since this is a limited release at this time).
Office hours: Roger Sramkoski, one of our Sr. Technical Marketing Managers, will be setting up office hours once or twice a week. These will be purely optional, with minimal agendas, so we can focus on open conversations.
Email: You can also contact Roger Sramkoski directly at rsramkoski@snaplogic.com
Error Handling in SnapLogic - InterWorks

Hello! We have published a blog post on error handling and best practices on our website; the URL is below. It covers most of what you need to know about error handling and the best practices to implement in your pipelines. Feel free to leave feedback in the comments section, and if you have any further questions you can contact me.

https://interworks.com.mk/error-handling-in-snaplogic/
Infosys: Modernising the Client's Landscape Containing 500+ Legacy Interfaces

2022 Partner Innovation Award Nomination

What were the underlying reasons or business implications for the need to innovate and transform the business?
The client organisation was carved out of a large conglomerate and inherited a large set of legacy applications built on disparate legacy technologies. The client wanted to modernise the landscape and take a platform-led approach in a new cloud-based setup. The key challenges were:
A heterogeneous integration landscape with 500+ interfaces spread across various legacy technologies like SAP PI, webMethods, and Cast Iron, with end-of-life approaching fast
Minimal documentation of the existing landscape
High maintenance cost of several operative environments with respect to platform and application support
Challenges with skill availability on legacy platforms, leading to high cost and business continuity risk
High license and infrastructure cost
Selecting the right platform to meet future needs
Migrating 500+ interfaces involving multiple internal stakeholders, external vendors, and third parties on an aggressive timeline

Which customer did you implement this on?
Siemens Energy Global GmbH

Describe your strategy and execution to solve the business challenge.
Considering the business challenges and criticality as well as the technical complexity, Infosys and Siemens Energy worked in close collaboration to devise a robust strategy and execution plan.

Strategy
1. After evaluating various platforms, SnapLogic was shortlisted as the platform of choice for its robust capabilities
2. The timeline of the project was aligned to the license expiry date of the legacy integration platforms
3. A wave-wise approach was finalised based on business criticality, application availability, and technology grouping
4. Ensure zero business impact, with on-par performance and no end-system impact
5. Feasibility checks to confirm connectivity with various applications
6. Focus on maximizing the use of standard features and reducing customization for better maintainability
7. Rationalisation of the number of interfaces for better performance and cost reduction

Execution
Cloud and on-premise application integrations were performed utilising the various Snaps available
Built-in Snaps were used to connect to applications like Workday and SAP HANA
REST/SOAP connectors were used for the integration of non-standard applications to ensure security
Security aspects of internal and third-party apps/vendors were handled using various authentication mechanisms
Real-time vs. batch processing requirements were implemented using the various connectors and supporting parameters provided by SnapLogic
The SnapLogic Dashboard was used extensively for optimized monitoring of pipeline executions
Parallel processing of threads was used for improved performance
SnapLogic architects and SMEs were consulted on architecture, design, and performance

Who was and how were they involved in building out the solution?
To migrate 500+ legacy interfaces, a peak team of 25 people from Infosys was involved, with support from Siemens Energy integration experts and application teams. When we encountered issues that required support from the SnapLogic product team, regular interactions were organised to find suitable solutions.

What were the business results after executing the strategy?
Siemens Energy achieved the following business results with the successful execution of the migration program:
Number of interfaces reduced by approximately one-third in the SnapLogic landscape
Reduction in technical complexity, going from 4 platforms to a single platform and leading to significant cost reduction for platform/application support and skills
A modern cloud-based setup with assured future scalability
Introduction of APIs
Improved predictability of business continuity

What was the ROI the customer gained from executing the strategy?
Below is the overall ROI the customer gained from the success of the migration program:
~50% reduction in license and infrastructure cost
~30% projected cost reduction for platform and application support

Anything else you would like to add?
The following were the key highlights of the migration program:
There was a focus on maximizing the use of standard features and reducing customization for better maintainability
Extremely complex requirements were implemented with standard SnapLogic features to eliminate the existing custom code
A component tracker was introduced - an innovative way to maintain and access information about interfaces in a single place
Recent Blogs
Flashback: What’s an Agent?
“Agents are autonomous LLM-based processes that can interact with external systems to carry out a high-level goal.”
Agents are LLM-based systems that can perform actions based on the user’s request and the scenario, determined by the LLM of the Agent system. A minimal agent consists of 1. an LLM component, and 2. tools that the Agent can use. Think of the Agent as a Robot with a brain (LLM) + robotic arms (Tools). Based on the request, the brain can “decide” to do something, and then the arm will carry out the action decided by the brain. Then, depending on the scenario, the brain can determine if more action is needed, or end if the request is complete.
The process of an agent
We previously introduced the "Agent Driver and Agent Worker" pipeline pattern, which clearly defines every operation that occurs in an Agent process. The process of the pattern can be described as follows:
Agent Driver
1. Define the instructions of the Agent (the system prompt).
2. Format the user's request into a conversation (the messages array).
3. Define the tools to make available to the Agent.
4. Send all of the information above into a "loop": run the Agent Worker until the process is complete.
Agent Worker
1. Call the LLM with the instructions, conversation, and tool definitions.
2. The LLM decides:
   - If it is able to complete the request, end the conversation and go to step 7.
   - If tool calls are required, go to step 3.
3. Call the tools.
4. Format the tool results.
5. Add the tool results to the conversation.
6. Go back to step 1.
7. The request is complete; the agent responds.
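As a rough sketch, the worker loop can be pictured like this in Python-style pseudocode (call_llm and call_tool are hypothetical stand-ins for the LLM call and the tool pipeline execution, not SnapLogic APIs):

# Illustrative only: the Agent Worker loop in miniature.
def run_agent(instructions, messages, tools, iteration_limit=10):
    for _ in range(iteration_limit):
        response = call_llm(instructions, messages, tools)  # step 1
        if not response.tool_calls:                         # step 2: done
            return response.text                            # step 7
        for call in response.tool_calls:                    # step 3
            result = call_tool(call.name, call.parameters)
            # steps 4-5: format the result and add it to the conversation
            messages.append({"role": "tool", "name": call.name, "content": result})
        # step 6: loop back to step 1
    raise RuntimeError("Iteration limit reached before the request completed")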
The rationale
From the Agent Driver and the Agent Worker pipeline, here’s an observation:
The driver pipeline handles all of the “configuration“ of the Agent.
The worker pipeline handles the “operation“ of the Agent.
Now, imagine this:
What if we could package the "Agent operation" into a single module, so that we could create Agents just by providing instructions and tools? Wouldn't that be great?
This is exactly what Agent Snap does. The Agent Snap combines the PipeLoop Snap and the Agent Worker pipeline, so all of the agent operations happen in a single Snap.
Information and prerequisites
Building agents is now remarkably simple, but before you start dreaming about having your own company of agents, there is some information to know and some conditions to meet.
1. Agent Snaps are model-specific
The Agent Snap is a combination of the “loop” and the Agent Worker, therefore, the LLM provider to be used for an Agent Snap is also fixed. This design allows users to stick to their favorite combination of customized model parameters.
2. Function(Tool) definitions must be linked to a pipeline to carry out the execution
Previously, in an Agent Worker pipeline, the Tool Calling Snap was connected to Pipeline Execute Snaps to carry out tool calls, but this is no longer the case with the Agent Snap. Instead, a function definition should include the path of the pipeline that carries out the execution if the tool is called. This way, we can ensure every tool call can be performed successfully. If the user does not provide a tool pipeline with the function definition, the Agent Snap will not proceed.
3. Expected Input and Output of a tool pipeline
When a tool call is requested by an LLM, the LLM will provide the name of the tool to call and the corresponding parameters to call. The Agent Snap will unwrap the parameters and send them directly to the tool pipeline.
Here’s an example: I have a tool get_weather, which takes city: string as a parameter. The LLM decides to call the tool get_weather with the following payload:
{ "name": "get_weather", "parameters": { "city": "New York City" }, "sl_tool_metadata": { ... } }
For this to work, my tool pipeline must be able to accept the input document:
{"city": "New York City"}
On a side note, the sl_tool_metadata object will also be available to the tool pipeline as the input for APIM and OpenAPI tools.
Now, assume my tool pipeline has successfully retrieved the weather of New York City; it's time for the Agent Snap to collect the result of this tool call. The Agent Snap will collect everything from the output document of the tool pipeline as the tool call result*, so that the LLM can determine the next steps properly.
*Note: there is one exception - if the output of a tool pipeline contains the field "messages" or "contents", it will be treated as the conversational history of a "child agent" and will be filtered out of the tool call result.
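For instance, a get_weather tool pipeline might emit an output document like the following (field names illustrative); the entire document becomes the tool call result:

{ "city": "New York City", "temperature_c": 21, "conditions": "Partly cloudy" }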
Build an Agent with Agent Snap
We’ve understood the idea, we’ve gone through the prerequisites, and it’s time to build an Agent.
In this example, we have an Agent with 2 tools: a weather tool and a calendar tool. We first start with a prompt generator to format the user input. Then define the tools the Agent can access.
Let’s look into one of the tool definitions.
In this example tool, we can see the name of the tool, the description of the tool, the parameters, and the path of the tool pipeline to carry out this task. This satisfies the requirement of a tool to be used by an Agent Snap.
After we have the tools set, let’s look at the Agent Snap, using the Amazon Bedrock Converse API Agent Snap as an example.
The configuration of an Agent Snap is similar to that of its corresponding Tool Calling Snap, except for some extra fields: a button to visualize the agent flow, and a section to configure the operation of the Agent, such as the iteration limit and the number of threads for tool pipeline executions.
The Agent Snap handles the whole execution process and terminates when (1) the request is complete (no more tool calls are required) or (2) an error occurs.
Voila! You have created an agent.
After the Agent pipeline completes a round of execution, the user can use the “Visualize Agent Flow“ button in the Agent Snap to see the tools that are called by the LLM.
Tips and Tricks for the Agent Snap
Let’s take a look at the features built into the Agent Snap.
Reuse pipelines
Most agentic tool calls are processes that can be reused. To minimize execution load, we can use the “Reuse tool pipeline“ feature. This feature allows tool pipeline instances to be reused, so that the Agent will not need to spawn a pipeline every time a tool is called.
To use this feature, the tool pipeline to be reused must be "Ultra compatible"; otherwise, the pipeline execution would hang and the Agent Snap would eventually time out.
Tool call monitoring
Agents can be long-running; it's not rare for an Agent to run multiple iterations. To see what's happening in the process, the Agent Snap has built-in monitoring during validation. In the pipeline statistics status bar, the user can see the iteration index, the tool currently being called, and the parameters used for the tool call.
Selecting the "Monitor tool call" option includes the parameters in the status update; this is an opt-in feature. If you do not wish to expose this information to SnapLogic, leave it disabled.
Warnings
Agent configuration is a delicate process; a mistake can potentially lead to errors. The Agent Snap has a number of built-in warnings, so the user can be better aware of what could go wrong.
1. Agent process completed before all tool calls completed
In the Agent Snap, there is an Iteration limit setting, which limits the number of iterations the Agent can run. If the user provides a limit that causes the Agent to stop while the LLM is still awaiting tool calls, this warning pops up to signal that the execution is incomplete.
2. Tool pipeline path is not defined
A function (tool) definition to be used by the Agent Snap should include a tool pipeline path, so the Agent Snap can link to the actual pipeline that carries out the execution. If the pipeline path is not included in the function definition, this warning will pop up to signal the user that the Agent will not proceed.
3. Duplicate tool naming
As we add more and more tools to the Agent Snap, two tools may end up sharing the same name. The Agent Snap has the ability to rename the tools being sent to the LLM while still linking each to the correct pipeline. A warning will also appear in the pipeline statistics to alert the user about this change in behavior.
Release Timeframes
The Agent Snap is the foundation of the next-generation SnapLogic Agent. We will be releasing 4 Agent Snaps in the November release:
Amazon Bedrock Converse API Agent
OpenAI Chat Completions Agent
Azure OpenAI Chat Completions Agent
Google Gemini API Agent
To better use the Agent Snaps, we will be introducing new capabilities to some of our Function Generators as well. Here is the list of Function Generator Snaps that will be modified soon:
APIM Function Generator Snap
OpenAPI Function Generator Snap
MCP Function Generator Snap
We hope you are as excited as we are about this one.
Introduction
In today's fast-paced integration landscape, ensuring the reliability and performance of your SnapLogic pipelines is crucial. We're excited to introduce a comprehensive test automation framework that combines the power of Robot Framework with SnapLogic's APIs to deliver a robust, scalable, and easy-to-use testing solution.
This approach leverages snaplogic-common-robot, a PyPI-published library that provides prebuilt Robot Framework keywords for interacting with SnapLogic Public APIs, integrated within a Docker-based environment.
This lets teams spin up full SnapLogic environments on demand (including Groundplex, databases, and messaging services) so tests run the same way everywhere.
This blog post explores two key components of our testing ecosystem:
snaplogic-common-robot: A PyPI-published library https://pypi.org/project/snaplogic-common-robot/ providing reusable Robot Framework keywords for SnapLogic automation
snaplogic-robotframework-examples: A public repository providing a complete testing framework with baseline test suites and Docker-based infrastructure for comprehensive end-to-end pipeline validation
Key Features and Benefits
1. Template-Based Testing
The framework supports template-driven test cases, allowing you to:
Define reusable test patterns
Parameterize test execution
Maintain consistency across similar test scenarios
2. Intelligent Environment Management
The framework automatically:
Loads environment variables from multiple .env files
Auto-detects JSON values and converts them to appropriate Robot Framework variables
Validates required environment variables before test execution
Why Robot Framework for SnapLogic Testing?
Robot Framework offers several advantages for SnapLogic test automation:
Human-readable syntax: Tests are written in plain English, making them accessible to both technical and non-technical team members
Keyword-driven approach: Promotes reusability and reduces code duplication
Extensive ecosystem: Integrates seamlessly with databases, APIs, and various testing tools
Comprehensive reporting: Built-in HTML reports with detailed execution logs
CI/CD friendly: Easy integration with Jenkins, GitLab CI, and other automation platforms
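To give a flavor of this keyword-driven style, here is a minimal, purely illustrative Robot Framework test; the library and keyword names are hypothetical placeholders, not the actual keywords shipped in snaplogic-common-robot:

*** Settings ***
Library    SnapLogicKeywords    # hypothetical library name

*** Test Cases ***
Pipeline Completes Successfully
    [Documentation]    Illustrative only: import a pipeline, run it, check the outcome.
    Import Pipeline    src/pipelines/example.slp    # hypothetical keyword
    Run Pipeline    example    # hypothetical keyword
    Pipeline Status Should Be    Completed    # hypothetical keyword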
The Power of Docker-Based Testing Infrastructure
One of the most powerful features of our framework is its Docker-based architecture.
Isolated Test Environments: Each test run operates in its own containerized environment
Groundplex Control: Automatically spin up and tear down Groundplex instances for testing
Database Services: Pre-configured containers for Oracle, PostgreSQL, MySQL, SQL Server, DB2, and more
Message Queue Systems: Integrated support for Kafka, ActiveMQ, and other messaging platforms
Storage Services: MinIO for S3-compatible object storage testing
This architecture enables teams to:
Test in production-like environments without affecting actual production systems
Quickly provision and tear down complete testing stacks
Run parallel tests with isolated resources
Ensure consistency across different testing environments
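The framework drives all of this through docker-compose, but as a rough sketch of the underlying idea (the image, port, and credentials here are illustrative), provisioning an isolated database for a single test run can be as simple as:

import docker

client = docker.from_env()

# Start a throwaway Postgres container on a non-default host port.
pg = client.containers.run(
    "postgres:16",
    detach=True,
    environment={"POSTGRES_PASSWORD": "test"},
    ports={"5432/tcp": 55432},
    name="it-test-postgres",
)

# ... run the test suite against localhost:55432 ...

# Tear everything down so the next run starts clean.
pg.stop()
pg.remove()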
snaplogic-common-robot Library
Installation
The snaplogic-common-robot library is published on PyPI (https://pypi.org/project/snaplogic-common-robot/), making installation straightforward:
pip install snaplogic-common-robot
Core Components
The library provides the following components:
SnapLogic APIs: Low-level keywords for direct API interactions
SnapLogic Keywords: High-level business-oriented keywords for common operations
Common Utilities: Database connections, file operations, and utility functions.
Dependency Libraries: Installs all necessary dependency libraries to run Robot Framework tests for SnapLogic, supporting API testing, database operations, Docker container testing, JMS messaging, and AWS integration tools.
The following libraries are automatically installed as dependencies when you install snaplogic-common-robot, providing comprehensive API support. This library ecosystem continues to expand as we add support for additional features and capabilities.
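Because Robot Framework itself is pulled in as a dependency, suites can be launched from the command line or programmatically. A minimal sketch (the suite path, tag, and variable are illustrative, not part of the library's API):

from robot import run

# Equivalent to: robot --outputdir results --include oracle --variable ENV:dev test/suite
rc = run(
    "test/suite",          # directory containing .robot suites
    outputdir="results",   # where report.html / log.html / output.xml land
    include=["oracle"],    # run only tests tagged 'oracle'
    variable=["ENV:dev"],  # pass a variable, like --variable on the CLI
)
print("robot exit code:", rc)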
Snaplogic RobotFramework-examples Repository
The snaplogic-robotframework-examples repository (https://github.com/SnapLogic/snaplogic-robotframework-examples) demonstrates how to build a complete testing framework using the snaplogic-common-robot library.
Framework Overview
Note: This project structure is continuously evolving! We're actively working to make the framework easier and more efficient to use, so the structure is subject to change as we iterate on improvements to enhance developer experience and framework efficiency.
The framework follows a modular architecture with clear separation of concerns:
Configuration Layer
.env and .env.example manage environment variables for sensitive credentials and URLs
The env_files/ folder has all details required for creating accounts
Makefile provides a central command interface for all build and test operations
docker-compose.yml orchestrates the entire multi-container environment with a single command
Build Automation
makefiles/ directory contains modular scripts organized by service type (databases, messaging, mocks)
Each service category has dedicated makefiles for independent lifecycle management
Infrastructure
docker/ holds Docker configurations for all services (Groundplex, Oracle, PostgreSQL, Kafka)
env_files/ stores service-specific environment variables to isolate configuration
Containerized approach ensures reproducible test environments across all systems
Test Organization
test/suite/ contains Robot Framework test suites organized by pipeline functionality
test/test_data/ provides input files and expected outputs for validation
Tests are grouped logically (Oracle, PostgreSQL+S3, Kafka) for easy maintenance
Pipeline Assets
src/pipelines/ stores the actual SnapLogic SLP files being tested
src/tools/ includes helper utilities and requirements.txt with Python dependencies
The snaplogic-common-robot library is installed via requirements.txt, providing reusable keywords
Test Reporting
Robot Framework automatically generates comprehensive test reports after each execution
report.html provides a high-level summary with pass/fail statistics and execution timeline
log.html offers detailed step-by-step execution logs with keyword-level information
output.xml contains structured test results in XML format for CI/CD integration
Reports include screenshots, error messages, and detailed traceability for debugging
All reports are timestamped and can be archived for historical analysis
Supporting Components
travis_scripts/ enables CI/CD automation for continuous testing
README/ holds project documentation and setup guides
Key Architecture Benefits
Modular design allows independent service management
Docker isolation ensures consistent test environments
Makefile automation simplifies complex operations
Clear directory structure improves maintainability
CI/CD integration enables automated testing workflows
Integration with CI/CD Pipelines
One of the most powerful aspects of our Robot Framework testing solution is its seamless integration with CI/CD pipelines. This enables continuous testing, ensuring that every code change is automatically validated against your SnapLogic integrations.
Why CI/CD Integration Matters
In modern DevOps practices, manual testing becomes a bottleneck. By integrating our Robot Framework tests into your CI/CD pipeline, you achieve:
Automatic Test Execution: Tests run automatically on every commit, pull request, or scheduled interval
Early Bug Detection: Issues are caught immediately, not days or weeks later in production
Consistent Testing: The same tests run every time, eliminating human error and variability
Fast Feedback Loop: Developers know within minutes if their changes broke existing integrations
Quality Gates: Prevent deployments if tests fail, ensuring only validated code reaches production
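As a minimal sketch of such a quality gate (assuming Robot Framework 4 or later), a CI step can parse output.xml with Robot's result API and fail the build when any test failed:

from robot.api import ExecutionResult

# output.xml is produced automatically by every Robot Framework run.
result = ExecutionResult("results/output.xml")
stats = result.statistics.total
print(f"passed={stats.passed} failed={stats.failed} skipped={stats.skipped}")

if stats.failed:
    raise SystemExit(1)  # a non-zero exit code fails the CI stage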
Jenkins is one of the most popular CI/CD tools, and integrating our Robot Framework tests is straightforward.
How It Works
Stage 1: Prepare Environment. Installs the snaplogic-common-robot library and required dependencies.
Stage 2: Start Docker Services. Launches Groundplex, Oracle DB, Kafka, and MinIO containers.
Stage 3: Run Robot Framework Tests. Executes test suites in parallel across 4 threads using pabot.
Stage 4: Publish Test Results. Generates HTML reports, XML results, and test artifacts, which can be uploaded to S3.
Stage 5: Send Notifications. Distributes test results via Slack.
Post: Cleanup. Tears down containers, removes temp files, and archives logs.
Slack Notifications
Our CI/CD pipeline automatically sends detailed test execution reports to Slack, providing immediate visibility into test results for the entire team.
HTML Reports
Robot Framework automatically generates comprehensive HTML reports after each test execution, providing detailed insights into test results, performance, and execution patterns.
Real-World Benefits
Here's what this means for your team:
For Developers
Push code with confidence - Tests run automatically
Get feedback in minutes - No waiting for QA cycles
Fix issues immediately - While the code is still fresh in your mind
For QA Teams
Focus on exploratory testing - Let automation handle regression
Better test coverage - Tests run on every single change
Clear reports - See exactly what's tested and what's not
Future Enhancements
We're continuously improving the framework. Planned features include:
Enhanced support for more endpoints
Integration with cloud storage services
Advanced performance testing capabilities
Enhanced security testing features
Conclusion
The combination of snaplogic-common-robot library and snaplogic-robotframework-examples framework provides a powerful, scalable solution for SnapLogic test automation. By leveraging Docker's containerization capabilities, Robot Framework's simplicity, and SnapLogic's robust APIs, teams can:
Reduce testing time from hours to minutes
Increase test coverage with automated end-to-end scenarios
Improve reliability through consistent, repeatable tests
Enable continuous testing in CI/CD pipelines
Whether you're testing simple pipeline transformations or complex multi-system integrations, this framework provides the tools and patterns needed for comprehensive SnapLogic testing.
Getting Involved
We welcome contributions from the SnapLogic community! Here's how you can get involved:
Try the Framework: Install snaplogic-common-robot and run the example tests
Report Issues: Help us improve by reporting bugs or suggesting enhancements
Contribute Code: Submit pull requests with new keywords or test patterns
Share Your Experience: Let us know how you're using the framework in your organization
Resources
snaplogic-common-robot on PyPI: https://pypi.org/project/snaplogic-common-robot/ (install with pip install snaplogic-common-robot)
snaplogic-robotframework-examples repo: https://github.com/SnapLogic/snaplogic-robotframework-examples
Documentation: Comprehensive HTML documentation is available in the README folder after installation
Community Support: Join the discussion in SnapLogic Community forums
Start automating your SnapLogic tests today and experience the power of comprehensive, containerized test automation!
Questions? We're Here to Help!
We hope this comprehensive guide helps you get started with automated testing for your SnapLogic integrations. The combination of snaplogic-common-robot and Docker-based infrastructure provides a powerful foundation for building reliable, scalable test automation.
Have questions or need assistance implementing this framework?
The SLIM (SnapLogic Intelligent Modernization) team is here to support you! We'd love to hear about your use cases, help you overcome any challenges, or discuss how this framework can be customized for your specific needs.
Contact the SLIM Team:
Reach out to us directly through the SnapLogic Community forums
Tag us in your questions with @slim-team
Email us at: slim-team@snaplogic.com
We're committed to helping you achieve testing excellence and look forward to seeing how you leverage this framework to enhance your SnapLogic automation journey!
Happy Testing! The SLIM Team
15 days ago2likes
8 MIN READ
Introduction
Since the inception of the Model Context Protocol (MCP), we've been envisioning and designing how it can be integrated into the SnapLogic platform. We've recently received a significant number of inquiries about MCP, and we're excited to share our progress, the features we'll be supporting, our release timeline, and how you can get started creating MCP servers and clients within SnapLogic. If you're interested, we encourage you to reach out!
Understanding the MCP Protocol
The MCP protocol allows tools, data resources, and prompts to be published by an MCP server in a way that Large Language Models (LLMs) can understand. This empowers LLMs to autonomously interact with these resources via an MCP client, expanding their capabilities to perform actions, retrieve information, and execute complex workflows.
MCP Protocol primarily supports:
Tools: Functions an LLM can invoke (e.g., data lookups, operational tasks).
Resources: File-like data an LLM can read (e.g., API responses, file contents).
Prompts: Pre-written templates to guide LLM interaction with the server.
Sampling (not widely used): Allows client-hosted LLMs to be used by remote MCP servers.
An MCP client can, therefore, request to list available tools, call specific tools, list resources, or read resource content from a server.
Transport and Authentication
MCP protocol offers flexible transport options, including STDIO or HTTP (SSE or Streamable-HTTP) for local deployments, and HTTP (SSE or Streamable-HTTP) for remote deployments.
While the protocol proposes OAuth 2.1 for authentication, an MCP server can also use custom headers for security.
Release Timeline
We're excited to bring MCP support to SnapLogic with two key releases:
August Release: MCP Client Support
We'll be releasing two new Snaps: the MCP Function Generator Snap and the MCP Invoke Snap. These will be available in the AgentCreator Experimental (Beta) Snap Pack. With these Snaps, your SnapLogic agent can access the services and resources available on public MCP servers.
Late Q3 Release: MCP Server Support
Our initial MCP server support will focus on tool operations, including the ability to list tools and call tools. For authentication, it will support custom header-based authentication. Users will be able to leverage the MCP Server functionality by subscribing to this feature.
If you're eager to be among the first to test these new capabilities and provide feedback, please reach out to the Project Manager Team, at pm-team@snaplogic.com. We're looking forward to seeing what you build with SnapLogic MCP.
SnapLogic MCP Client
MCP Clients in SnapLogic enable users to connect to MCP servers as part of their Agent. Examples include connecting to the Firecrawl MCP server for a data-scraping Agent, or any other use case that can leverage existing MCP servers.
The MCP Client support in SnapLogic consists of two Snaps, the MCP Function Generator Snap and the MCP Invoke Snap. From a high-level perspective, the MCP Function Generator Snap allows users to list available tools from an MCP server, and the MCP Invoke Snap allows users to perform operations such as call tools, list resources, and read resources from an MCP server.
Let’s dive into the individual pieces.
MCP SSE Account
To connect to an MCP Server, we will need an account to specify the URI of the server to connect to.
Properties
URI
The URI of the server to connect to. You don't need to include the /sse path.
Additional headers
Additional HTTP headers to be sent to the server
Timeout
The timeout value in seconds. If the result is not returned within the timeout, the Snap will return an error.
MCP Function Generator Snap
The MCP Function Generator Snap enables users to retrieve the list of tools as SnapLogic function definitions to be used in a Tool Calling Snap.
Properties
Account
An MCP SSE account is required to connect to an MCP Server.
Expose Tools
List all available tools from an MCP server as SnapLogic function definitions
Expose Resources
Add list_resources, read_resource as SnapLogic function definitions to allow LLMs to use resources/read and resources/list (MCP Resources).
Definitions for list resource and read resource
[
  {
    "sl_type": "function",
    "name": "list_resources",
    "description": "This function lists all available resources on the MCP server. Return a list of resources with their URIs.",
    "strict": false,
    "sl_tool_metadata": {
      "operation": "resources/list"
    }
  },
  {
    "sl_type": "function",
    "name": "read_resource",
    "description": "This function returns the content of the resource from the MCP server given the URI of the resource.",
    "strict": false,
    "sl_tool_metadata": {
      "operation": "resources/read"
    },
    "parameters": [
      {
        "name": "uri",
        "type": "STRING",
        "description": "Unique identifier for the resource",
        "required": true
      }
    ]
  }
]
MCP Invoke Snap
The MCP Invoke Snap enables users to perform operations such as tools/call, resources/list, and resources/read to an MCP server.
Properties
Account
An account is required to use the MCP Invoke Snap
Operation
The operation to perform on the MCP server. The operation must be one of tools/call, resources/list, or resources/read
Tool Name
The name of the tool to call. Only enabled and required when the operation is tools/call
Parameters
The parameters to be added to the operation. Only enabled for resources/read and tools/call. Required for resources/read, and optional for tools/call, based on the tool.
MCP Agents in pipeline action
MCP Agent Driver pipeline
An MCP Agent Driver pipeline is like any other Agent Driver pipeline; we need to provide the system prompt and user prompt, and run it with the PipeLoop Snap.
MCP Agent Worker pipeline
Here’s an example of an MCP Agent with a single MCP Server connection. The MCP Agent Worker is connected to one MCP Server.
MCP Client Snaps can be used as SnapLogic function tools together with AgentCreator Snaps, such as the Multi-Pipeline Function Generator and Pipeline Execute Snap. This allows users to combine tools provided by MCP servers with internal tools, without sacrificing safety or flexibility when building an Agent.
Agent Worker with MCP Client Snaps
SnapLogic MCP Server
In SnapLogic, an MCP Server allows you to expose SnapLogic pipelines as dynamic tools that can be discovered and invoked by language models or external systems.
By registering an MCP Server, you effectively provide an API that language models and other clients can use to perform operations such as data retrieval, transformation, enrichment, or automation, all orchestrated through SnapLogic pipelines.
For the initial phase, we'll support connections to the server via HTTP + SSE.
Core Capabilities
The MCP Server provides two core capabilities.
The first is listing tools, which returns structured metadata that describes the available pipelines. This metadata includes the tool name, a description, the input schema in JSON Schema format, and any additional relevant information. This allows clients to dynamically discover which operations are available for invocation.
The second capability is calling tools, where a specific pipeline is executed as a tool using structured input parameters, and the output is returned.
Both of these operations—tool listing and tool calling—are exposed through standard JSON-RPC methods, specifically tools/list and tools/call, accessible over HTTP.
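To make the wire format concrete, here is a rough sketch of both calls as plain HTTP POSTs (the endpoint URL and tool name are placeholders, and the SSE handshake that a real MCP client performs is omitted for brevity):

import requests

MCP_URL = "https://example.com/mcp/<task-id>"  # placeholder MCP Server URL
HEADERS = {"Authorization": "Bearer your_token_here"}

# Discover the available tools.
list_req = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
tools = requests.post(MCP_URL, json=list_req, headers=HEADERS).json()

# Invoke one tool with structured arguments (tool name is hypothetical).
call_req = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "lookup_customer", "arguments": {"id": 42}},
}
result = requests.post(MCP_URL, json=call_req, headers=HEADERS).json()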
Prerequisite
You'll need to prepare your tool pipelines in advance. During the server creation process, these can be added and exposed as tools for external LLMs to use.
MCP Server Pipeline Components
A typical MCP server pipeline consists of four Snaps, each with a dedicated role:
1. Router
What it does: Routes incoming JSON requests—which differ from direct JSON-RPC requests sent by an MCP client—to either the list tools branch or the call tool branch.
How: Examines the request payload (typically the method field) to determine which action to perform.
2. Multi-Pipeline Function Generator (Listing Tools)
What it does: Converts a list of pipeline references into tool metadata. This is where you define the pipelines you want the server to expose as tools.
Output: For each pipeline, generates:
Tool name
Description
Parameters (as JSON Schema)
Other metadata
Purpose: Allows clients (e.g., an LLM) to query what tools are available without prior knowledge.
3. Pipeline Execute (Calling Tools)
What it does: Dynamically invokes the selected SnapLogic pipeline and returns structured outputs.
How: Accepts parameters encoded in the request body, maps them to the pipeline’s expected inputs, and executes the pipeline.
Purpose: Provides flexible runtime execution of tools based on user or model requests.
4. Union
What it does: Merges the result streams from both branches (list and call) into a single output stream for consistent response formatting.
Request Flows
Below are example flows showing how requests are processed:
🟢 tools/list
Client sends a JSON-RPC request with method = "tools/list".
Router directs the request to the Multi-Pipeline Function Generator.
Tool metadata is generated and returned in the response.
Union Snap merges and outputs the content.
✅ Result: The client receives a JSON list describing all available tools.
🔵 tools/call
Client sends a JSON-RPC request with method = "tools/call" and the tool name + parameters.
Router sends this to the Pipeline Execute Snap.
The selected pipeline is invoked with the given parameters.
Output is collected and merged via Union.
✅ Result: The client gets the execution result of the selected tool.
Registering an MCP Server
Once your MCP server pipeline is created:
Create a Trigger Task and Register as an MCP Server
Navigate to the Designer > Create Trigger Task
Choose a Groundplex. (Note: This capability currently requires a Groundplex, not a Cloudplex.)
Select your MCP pipeline.
Click Register as MCP server
Configure node and authentication.
Find your MCP Server URL
Navigate to the Manager > Tasks
The Task Details page exposes a unique HTTP endpoint.
This endpoint is treated as your MCP Server URL.
After registration, clients such as AI models or orchestration engines can interact with the MCP Server by calling the /tools/list endpoint to discover the available tools, and the /tools/call endpoint to invoke a specific tool using a structured JSON payload.
Connect to a SnapLogic MCP Server from a Client
After the MCP server is successfully published, using the SnapLogic MCP server is no different from using other MCP servers running in SSE mode. It can be connected to by any MCP client that supports SSE mode; all you need is the MCP Server URL (and the Bearer Token if authentication is enabled during server registration).
Configuration
First, you need to add your MCP server in the settings of the MCP client. Taking Claude Desktop as an example, you'll need to modify your Claude Desktop configuration file. The configuration file is typically located at:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Add your remote MCP server configuration to the mcpServers section:
{
  "mcpServers": {
    "SL_MCP_server": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://devhost9000.example.com:9000/mcp/6873ff343a91cab6b00014a5/sse",
        "--header",
        "Authorization: Bearer your_token_here"
      ]
    }
  }
}
Key Components
Server Name: SL_MCP_server - A unique identifier for your MCP server
Command: npx - Uses the Node.js package runner to execute the mcp-remote package
URL: The SSE endpoint URL of your remote MCP server (note the /sse suffix)
Authentication: Use the --header flag to include authorization tokens if the server enabled authentication
Requirements
Ensure you have Node.js installed on your system, as the configuration uses npx to run the mcp-remote package. Replace the example URL and authorization token with your actual server details before saving the configuration.
After updating the configuration file, restart Claude Desktop for the changes to take effect.
To conclude, the MCP Server in SnapLogic is a framework that allows you to expose pipelines as dynamic tools accessible through a single HTTP endpoint. This capability is designed for integration with language models and external systems that need to discover and invoke SnapLogic workflows at runtime. MCP Servers make it possible to build flexible, composable APIs that return structured results, supporting use cases such as conversational AI, automated data orchestration, and intelligent application workflows.
Conclusion
SnapLogic's integration of the MCP protocol marks a significant leap forward in empowering LLMs to dynamically discover and invoke SnapLogic pipelines as sophisticated tools, transforming how you build conversational AI, automate complex data orchestrations, and create truly intelligent applications. We're excited to see the innovative solutions you'll develop with these powerful new capabilities.
2 months ago0likes
7 MIN READ
Introduction
OpenAI announced the Responses API, their most advanced and versatile interface for building intelligent AI applications. Supporting both text and image inputs with rich text outputs, this API enables dynamic, stateful conversations that remember and build on previous interactions, making AI experiences more natural and context-aware. It also unlocks powerful capabilities through built-in tools such as web search, file search, code interpreter, and more, while enabling seamless integration with external systems via function calling. Its event-driven design delivers clear, structured updates at every step, making it easier than ever to create sophisticated, multi-step AI workflows.
Key features include:
Stateful conversations via the previous response ID
Built-in tools like web search, file search, code interpreter, MCP, and others
Access to advanced models available exclusively through this API, such as o1-pro
Enhanced support for reasoning models with reasoning summaries and efficient context management through previous response ID or encrypted reasoning items
Clear, event-based outputs that simplify integration and control
While the Chat Completions API remains fully supported and widely used, OpenAI plans to retire the Assistants API in the first half of 2026.
To support the adoption of the Responses API, two new Snaps have been introduced:
OpenAI Chat Completions ⇒ OpenAI Responses API Generation
OpenAI Tool Calling ⇒ OpenAI Responses API Tool Calling
Both Snaps are fully compatible with existing upstream and downstream utility Snaps, including the OpenAI Prompt Generator, OpenAI Multimodal Content Generator, all Function Generators (Multi-Pipeline, OpenAPI, and APIM), the Function Result Generator, and the Message Appender. This allows existing pipelines and familiar development patterns to be reused while gaining access to the advanced features of the Responses API.
OpenAI Responses API Generation
The OpenAI Responses API Generation Snap is designed to support OpenAI’s newest Responses API, enabling more structured, stateful, and tool-augmented interactions. While it builds upon the familiar interface of the Chat Completions Snap, several new properties and behavioral updates have been introduced to align with the Responses API’s capabilities.
New properties
Message: The input sent to the LLM. This field replaces the previous Use message payload, Message payload, and Prompt properties in the OpenAI Chat Completions Snap, consolidating them into a single input. It removes ambiguity between "prompt" as raw text and as a template, and supports both string and list formats.
Previous response ID: The unique ID of the previous response to the model. Use this to create multi-turn conversations.
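For reference, the underlying API behavior that the Previous response ID property maps onto looks roughly like this sketch using the OpenAI Python SDK (the model name is illustrative):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

first = client.responses.create(
    model="gpt-4.1",
    input="My name is Ada. Please remember it.",
)
second = client.responses.create(
    model="gpt-4.1",
    previous_response_id=first.id,  # links the turns without resending history
    input="What is my name?",
)
print(second.output_text)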
Model parameters
Reasoning summary: For reasoning models, provides a summary of the model's reasoning process, which aids debugging and understanding. The property can be none, auto, or detailed.
Advanced prompt configurations
Instructions: Applied only to the current response, making them useful for dynamically swapping instructions between turns. To persist instructions across turns when using previous_response_id, the developer message in the OpenAI Prompt Generator Snap should be used.
Advanced response configurations
Truncation: Defines how to handle input that exceeds the model’s context window. auto allows the model to truncate the middle of the conversation to fit, while disabled (default) causes the request to fail with a 400 error if the context limit is exceeded.
Include reasoning encrypted content: Includes an encrypted version of reasoning tokens in the output, allowing reasoning items to persist when the store is disabled.
Built-in tools
Web search: Enables the model to access up-to-date information from the internet to answer queries beyond its training data.
Web search type
Search context size
User location: an approximate user location including city, region, country, and timezone to deliver more relevant search results.
File search: Allows the model to retrieve information from documents or files.
Vector store IDs
Maximum number of results
Include search results: Determines whether raw search results are included in the response for transparency or debugging.
Ranker
Score threshold
Filters: Additional metadata-based filters to refine search results. For more details on using filters, see Metadata Filtering.
Advanced tool configuration
Tool choice: A new option, SPECIFY A BUILT-IN TOOL, allows specifying that the model should use a built-in tool to generate a response.
Note that the OpenAI Responses API Generation Snap does not support the response count or stop sequences properties, as these are not available in the Responses API. Additionally, the message user name, which may be specified in the Prompt Generator Snap, is not supported and will be ignored if included.
Model response of Chat Completions vs Responses API
Chat Completions API
Responses API
The Responses API introduces an event-driven output structure that significantly enhances how developers build and manage AI-powered applications compared to the traditional Chat Completions API. While the Chat Completions API returns a single, plain-text response within the choices array, the Responses API provides an output array containing a sequence of semantic event items—such as reasoning, message, function_call, web_search_call, and more—that clearly delineate each step in the model's reasoning and actions. This structured approach allows developers to easily track and interpret the model's behavior, facilitating more robust error handling and smoother integration with external tools. Moreover, the response from the Responses API includes the model parameters settings, providing additional context for developers.
Pipeline examples
Built-in tool: web search
This example demonstrates how to use the built-in web search tool. In this pipeline, the user’s location is specified to ensure the web search targets relevant geographic results.
System prompt: You are a friendly and helpful assistant. Please use your judgment to decide whether or not to use the appropriate tools to answer questions from the user.
Prompt: Can you recommend 2 good sushi restaurants near me?
Output:
As a result, the output contains both a web search call and a message. The model uses the web search to find and provide recommendations based on current data, tailored to the specified location.
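Outside SnapLogic, the equivalent raw Responses API request looks roughly like this (model, location, and context size are illustrative):

from openai import OpenAI

client = OpenAI()
resp = client.responses.create(
    model="gpt-4.1",
    input="Can you recommend 2 good sushi restaurants near me?",
    tools=[{
        "type": "web_search_preview",
        "user_location": {
            "type": "approximate",
            "city": "San Francisco",
            "country": "US",
        },
        "search_context_size": "medium",
    }],
)
for item in resp.output:
    print(item.type)  # e.g. web_search_call, then message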
Built-in tool: File search
This example demonstrates how the built-in file search tool enables the model to retrieve information from documents stored in a vector store during response generation. In this case, the file wildfire_stats.pdf has been uploaded. You can create and manage vector stores through the Vector Store management page.
Prompt: What is the number of Federal wildfires in 2018?
Output:
The output array contains a file_search_call event, which includes search results in its results field. These results provide matched text, metadata, and relevance scores from the vector store. This is followed by a message event, where the model uses the retrieved information to generate a grounded response. The presence of detailed results in the file_search_call is enabled by selecting the Include file search results option.
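A matching raw API sketch for this file search example (the vector store ID is a placeholder):

from openai import OpenAI

client = OpenAI()
resp = client.responses.create(
    model="gpt-4.1",
    input="What is the number of Federal wildfires in 2018?",
    tools=[{
        "type": "file_search",
        "vector_store_ids": ["vs_123"],  # placeholder vector store ID
        "max_num_results": 5,
    }],
    # Surfaces the raw matches inside the file_search_call event,
    # mirroring the Include file search results option.
    include=["file_search_call.results"],
)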
OpenAI Responses API Tool Calling
The OpenAI Responses API Tool Calling Snap is designed to support function calling using OpenAI’s Responses API. It works similarly to the OpenAI Tool Calling Snap (which uses the Chat Completions API), but is adapted to the event-driven response structure of the Responses API and supports stateful interactions via the previous response ID. While it shares much of its configuration with the Responses API Generation Snap, it is purpose-built for workflows involving function calls.
Existing LLM agent pipeline patterns and utility Snaps—such as the Function Generator and Function Result Generator—can continue to be used with this Snap, just as with the original OpenAI Tool Calling Snap. The primary difference lies in adapting the Snap configuration to accommodate the Responses API’s event-driven output, particularly the structured function_call event item in the output array.
The Responses API Tool Calling Snap provides two output views, similar to the OpenAI Tool Calling Snap, with enhancements to simplify building agent pipelines and support stateful interactions using the previous response ID:
Model response view: The complete API response, including extra fields:
messages: an empty list if store is enabled, or the full message history—including messages payload and model response—if disabled (similar to the OpenAI Tool Calling Snap). When using stateful workflows, message history isn’t needed because the previous response ID is used to maintain context.
has_tool_call: a boolean indicating whether the response includes a tool call. Since the Responses API no longer includes the finish_reason: "tool_calls" field, this new field makes it easier to create stop conditions in the PipeLoop Snap within the agent driver pipeline.
Tool call view: Displays the list of function calls made by the model during the interaction.
Tool Call View of Chat Completions vs Responses API
Chat Completions API:
• Uses id as the function call identifier when sending back the function result.
• Tool call properties (name, arguments) are nested inside the function field.
Responses API:
• Each tool call includes two identifiers: id (the unique event ID) and call_id (used to reference the function call when returning the result).
• The tool call structure is flat: name and arguments are top-level fields.
Building LLM Agent Pipelines
To build LLM agent pipelines with the OpenAI Responses API Tool Calling Snap, you can reuse the same agent pipeline pattern described in Introducing Tool Calling Snaps and LLM Agent Pipelines. Only minor configuration changes are needed to support the Responses API.
Agent Driver Pipeline
The primary change is in the PipeLoop Snap configuration, where the stop condition should now check the has_tool_call field, since the Responses API no longer includes the finish_reason: "tool_calls" field.
Agent Worker Pipeline
Fields mapping
A Mapper Snap is used to prepare the related fields for the OpenAI Responses API Tool Calling Snap.
OpenAI Responses API Tool Calling
The key changes are in this Snap’s configuration to support the Responses API’s stateful interactions. There are two supported approaches:
Option 1: Use Store (Recommended) Leverages the built-in state management of the Responses API.
Enable Store
Use Previous Response ID
Send only the function call results as the input messages for the next round. (messages field in the Snap’s output will be an empty array, so you can still use it in the Message Appender Snap to collect tool results.)
Option 2: Maintain Conversation History in Pipeline Similar to the approach used in the Chat Completions API.
Disable Store
Include the full message history in the input (messages field in the Snap’s output contains message history)
(Optional) Enable Include Reasoning Encrypted Content (for reasoning models) to preserve reasoning context efficiently
OpenAI Function Result Generator
As explained in the Tool Call View of Chat Completions vs Responses API section, the Responses API includes both an id and a call_id. You must use the call_id to construct the function call result when sending it back to the model.
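A minimal sketch of that round trip against the raw API (the get_weather tool is hypothetical):

from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="gpt-4.1",
    input="What's the weather in Paris?",
    tools=[{
        "type": "function",
        "name": "get_weather",  # hypothetical tool
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
)

# Pick the function_call event out of the output array.
call = next(item for item in resp.output if item.type == "function_call")

# Return the tool result using call_id (not id).
followup = client.responses.create(
    model="gpt-4.1",
    previous_response_id=resp.id,
    input=[{
        "type": "function_call_output",
        "call_id": call.call_id,
        "output": '{"temp_c": 21}',  # our tool's result, as a string
    }],
)
print(followup.output_text)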
Conclusion
The OpenAI Responses API makes AI workflows smarter and more adaptable, with stateful interactions and built-in tools. SnapLogic’s OpenAI Responses API Generation and Tool Calling Snaps bring these capabilities directly into your pipelines, letting you take advantage of advanced features like built-in tools and event-based outputs with only minimal adjustments. By integrating these Snaps, you can seamlessly enhance your workflows and fully unlock the potential of the Responses API.
2 months ago0likes