Introducing the Agent Snap
Flashback: What's an Agent?

"Agents are autonomous LLM-based processes that can interact with external systems to carry out a high-level goal."

Agents are LLM-based systems that can perform actions based on the user's request and the scenario, as determined by the LLM of the Agent system. A minimal agent consists of:
1. an LLM component, and
2. tools that the Agent can use.

Think of the Agent as a robot with a brain (LLM) plus robotic arms (tools). Based on the request, the brain can "decide" to do something, and the arms carry out the action the brain decided on. Then, depending on the scenario, the brain determines whether more action is needed, or ends the run if the request is complete.

The process of an agent

We previously introduced the "Agent Driver and Agent Worker" pipeline pattern, which clearly defines every operation that occurs in an Agent process. The pattern can be described as follows.

Agent Driver
1. Define the instructions of the Agent. (System prompt)
2. Format the user's request into a conversation. (Messages array)
3. Define the tools to make available to the Agent.
4. Send all of the information above into a "loop" and run the Agent Worker until the process is complete.

Agent Worker
1. Call the LLM with the instructions, conversation, and tool definitions.
2. The LLM decides:
   - If it can complete the request, end the conversation and go to step 7.
   - If tool calls are required, go to step 3.
3. Call the tools.
4. Format the tool results.
5. Add the tool results to the conversation.
6. Go back to step 1.
7. The request is complete; the agent responds.

The rationale

From the Agent Driver and Agent Worker pipelines, here's an observation: the driver pipeline handles all of the "configuration" of the Agent, while the worker pipeline handles the "operation" of the Agent.

Now, imagine this: what if we could package the "Agent operation" into a single module, so that we can create Agents just by providing instructions and tools? Wouldn't this be great? This is exactly what the Agent Snap does. The Agent Snap combines the PipeLoop Snap and the Agent Worker pipeline, so all of the agent operations happen in a single Snap.
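To make the packaged loop concrete, here is a minimal pseudocode sketch (in Python) of the iteration the Agent Snap runs internally. This illustrates the pattern described above, not SnapLogic's actual implementation; the function names call_llm and run_tool_pipeline are hypothetical stand-ins.

def run_agent(system_prompt, user_message, tools, iteration_limit=10):
    # The conversation starts with the instructions and the user's request.
    messages = [{"role": "system", "content": system_prompt},
                {"role": "user", "content": user_message}]

    for _ in range(iteration_limit):
        # One "Agent Worker" iteration: ask the LLM what to do next.
        response = call_llm(messages, tools)          # hypothetical LLM call

        if not response.tool_calls:
            # No tool calls requested: the request is complete.
            return response.content

        # Otherwise, run each requested tool pipeline and append the
        # results to the conversation for the next iteration.
        for tool_call in response.tool_calls:
            result = run_tool_pipeline(tool_call.name, tool_call.parameters)
            messages.append({"role": "tool", "name": tool_call.name,
                             "content": result})

    # Iteration limit reached before the LLM finished (see the warning on
    # iteration limits later in this post).
    return "Agent stopped: iteration limit reached"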
Information and prerequisites

Now, before dreaming about running your own company of agents just because building agents is now so simple, there is some information to know and some conditions to be met.

1. Agent Snaps are model-specific

The Agent Snap is a combination of the "loop" and the Agent Worker; therefore, the LLM provider used by an Agent Snap is also fixed. This design allows users to stick to their favorite combination of customized model parameters.

2. Function (tool) definitions must be linked to a pipeline that carries out the execution

Previously, in an Agent Worker pipeline, the Tool Calling Snap was connected to Pipeline Execute Snaps to carry out tool calls. This is no longer the case with the Agent Snap. Instead, a function definition should include the path of the pipeline that will carry out the execution if the tool is called. This way, we can ensure every tool call can be performed successfully. If the user does not provide a tool pipeline with the function definition, the Agent Snap will not proceed.

3. Expected Input and Output of a tool pipeline

When a tool call is requested by an LLM, the LLM provides the name of the tool to call and the corresponding parameters. The Agent Snap will unwrap the parameters and send them directly to the tool pipeline. Here's an example: I have a tool get_weather, which takes city: string as a parameter. The LLM decides to call the tool get_weather with the following payload:

{
  "name": "get_weather",
  "parameters": {
    "city": "New York City"
  },
  "sl_tool_metadata": { ... }
}

For this to work, my tool pipeline must be able to accept the input document: {"city": "New York City"}

On a side note, the sl_tool_metadata object will also be available to the tool pipeline as input for APIM and OpenAPI tools.

Now, assume my tool pipeline has successfully retrieved the weather for New York City. It's time for the Agent Snap to collect the result of this tool call. The Agent Snap will collect everything from the output document of the tool pipeline as the tool call result*, so that the LLM can determine the next steps properly.

*Note: there is one exception. If the output of a tool pipeline contains the field "messages" or "contents", it will be treated as the conversational history of a "child agent", which will be filtered out and not included.
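Putting that contract together, a get_weather tool pipeline behaves roughly like the following sketch. Only the input shape ({"city": ...}) and the filtering of "messages"/"contents" come from the description above; the output field names are invented for illustration.

def get_weather_tool_pipeline(input_document):
    # The Agent Snap unwraps the LLM's parameters, so the pipeline
    # receives them directly as the input document.
    city = input_document["city"]            # e.g. "New York City"

    # ... call a weather service here ...

    # Everything in the output document becomes the tool call result,
    # except the reserved keys "messages" and "contents", which are
    # treated as child-agent history and filtered out.
    return {
        "city": city,
        "temperature_c": 21,                 # illustrative values
        "conditions": "Partly cloudy",
    }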
Build an Agent with Agent Snap

We've understood the idea and gone through the prerequisites; it's time to build an Agent. In this example, we have an Agent with two tools: a weather tool and a calendar tool.

We first start with a prompt generator to format the user input, then define the tools the Agent can access. Let's look into one of the tool definitions. In this example tool, we can see the name of the tool, the description of the tool, the parameters, and the path of the tool pipeline that carries out this task. This satisfies the requirements for a tool to be used by an Agent Snap.

After we have the tools set up, let's look at the Agent Snap, using the Amazon Bedrock Converse API Agent Snap as an example. The configuration of an Agent Snap is similar to its corresponding Tool Calling Snap, except for some extra fields, such as a button to visualize the agent flow and a section to configure the operation of the Agent, including the iteration limit and the number of threads for tool pipeline executions.

The Agent Snap handles the whole execution process and terminates when:
1. the request is complete (no more tool calls are required), or
2. an error occurs.

Voila! You have created an agent. After the Agent pipeline completes a round of execution, the user can use the "Visualize Agent Flow" button in the Agent Snap to see the tools that were called by the LLM.

Tips and Tricks for the Agent Snap

Let's take a look at the features built into the Agent Snap.

Reuse pipelines

Most agentic tool calls are processes that can be reused. To minimize execution load, we can use the "Reuse tool pipeline" feature. This feature allows tool pipeline instances to be reused, so the Agent does not need to spawn a new pipeline every time a tool is called. To use this feature, the tool pipeline to be reused must be Ultra compatible; otherwise, the pipeline execution will hang and the Agent Snap will eventually time out.

Tool call monitoring

Agents can be long-running; it's not rare for an Agent to run multiple iterations. To see what's happening during the process, the Agent Snap has built-in monitoring during validation. The user can see the iteration index, the tool that is currently being called, and the parameters used for the tool call in the pipeline statistics status bar. Selecting the "Monitor tool call" option includes the parameters in the status update; this is an opt-in feature. If the user does not wish to expose this information to SnapLogic, they should leave it disabled.

Warnings

Agent configuration is a delicate process; a mistake can potentially lead to errors. The Agent Snap has several built-in warnings, so the user can be better aware of what could go wrong.

1. Agent process completed before all tool calls completed

The Agent Snap has an Iteration limit setting, which limits the number of iterations the Agent can run. If the user provides a limit that is too small, causing the Agent to stop while the LLM is still awaiting tool calls, this warning pops up to signal that the execution is incomplete.

2. Tool pipeline path is not defined

A function (tool) definition used by the Agent Snap should include a tool pipeline path, so the Agent Snap can link to the actual pipeline that carries out the execution. If the pipeline path is not included in the function definition, this warning pops up to signal that the Agent will not proceed.

3. Duplicate tool naming

As more and more tools are added to the Agent Snap, it becomes likely that two tools share the same name. The Agent Snap can rename the tools being sent to the LLM while still linking to the correct pipeline. A warning is also available in the pipeline statistics to alert the user about the change in behavior.

Release Timeframes

The Agent Snap is the foundation of the next-generation SnapLogic Agent. We will be releasing four Agent Snaps in the November release:
- Amazon Bedrock Converse API Agent
- OpenAI Chat Completions Agent
- Azure OpenAI Chat Completions Agent
- Google Gemini API Agent

To make better use of the Agent Snaps, we will also be introducing new capabilities to some of our Function Generators. Here is the list of Function Generator Snaps that will be modified soon:
- APIM Function Generator Snap
- OpenAPI Function Generator Snap
- MCP Function Generator Snap

We hope you are as excited as we are about this one.

SnapLogic MCP Support
Introduction

Since the inception of the Model Context Protocol (MCP), we've been envisioning and designing how it can be integrated into the SnapLogic platform. We've recently received a significant number of inquiries about MCP, and we're excited to share our progress, the features we'll be supporting, our release timeline, and how you can get started creating MCP servers and clients within SnapLogic. If you're interested, we encourage you to reach out!

Understanding the MCP Protocol

The MCP protocol allows tools, data resources, and prompts to be published by an MCP server in a way that Large Language Models (LLMs) can understand. This empowers LLMs to autonomously interact with these resources via an MCP client, expanding their capabilities to perform actions, retrieve information, and execute complex workflows.

The MCP protocol primarily supports:
- Tools: Functions an LLM can invoke (e.g., data lookups, operational tasks).
- Resources: File-like data an LLM can read (e.g., API responses, file contents).
- Prompts: Pre-written templates to guide LLM interaction with the server.
- Sampling (not widely used): Allows client-hosted LLMs to be used by remote MCP servers.

An MCP client can, therefore, request to list available tools, call specific tools, list resources, or read resource content from a server.

Transport and Authentication

The MCP protocol offers flexible transport options, including STDIO or HTTP (SSE or Streamable-HTTP) for local deployments, and HTTP (SSE or Streamable-HTTP) for remote deployments. While the protocol proposes OAuth 2.1 for authentication, an MCP server can also use custom headers for security.

Release Timeline

We're excited to bring MCP support to SnapLogic with two key releases:

August Release: MCP Client Support
We'll be releasing two new Snaps: the MCP Function Generator Snap and the MCP Invoke Snap. These will be available in the AgentCreator Experimental (Beta) Snap Pack. With these Snaps, your SnapLogic agent can access the services and resources available on public MCP servers.

Late Q3 Release: MCP Server Support
Our initial MCP server support will focus on tool operations, including the ability to list tools and call tools. For authentication, it will support custom header-based authentication. Users will be able to leverage the MCP Server functionality by subscribing to this feature.

If you're eager to be among the first to test these new capabilities and provide feedback, please reach out to the Project Manager Team at pm-team@snaplogic.com. We're looking forward to seeing what you build with SnapLogic MCP.

SnapLogic MCP Client

MCP clients in SnapLogic enable users to connect to MCP servers as part of their Agent. An example is connecting to the Firecrawl MCP server for a data scraping Agent, or any other use case that can leverage existing MCP servers. The MCP Client support in SnapLogic consists of two Snaps: the MCP Function Generator Snap and the MCP Invoke Snap. From a high-level perspective, the MCP Function Generator Snap allows users to list available tools from an MCP server, and the MCP Invoke Snap allows users to perform operations such as calling tools, listing resources, and reading resources from an MCP server. Let's dive into the individual pieces.

MCP SSE Account

To connect to an MCP server, we need an account that specifies the URI of the server to connect to.

Properties
- URI: The URI of the server to connect to.
  You don't need to include the /sse path.
- Additional headers: Additional HTTP headers to be sent to the server.
- Timeout: The timeout value in seconds; if the result is not returned within the timeout, the Snap returns an error.

MCP Function Generator Snap

The MCP Function Generator Snap enables users to retrieve the list of tools as SnapLogic function definitions to be used in a Tool Calling Snap.

Properties
- Account: An MCP SSE account is required to connect to an MCP server.
- Expose Tools: List all available tools from an MCP server as SnapLogic function definitions.
- Expose Resources: Add list_resources and read_resource as SnapLogic function definitions to allow LLMs to use resources/read and resources/list (MCP Resources).

Definitions for list resource and read resource:

[
  {
    "sl_type": "function",
    "name": "list_resources",
    "description": "This function lists all available resources on the MCP server. Return a list of resources with their URIs.",
    "strict": false,
    "sl_tool_metadata": {
      "operation": "resources/list"
    }
  },
  {
    "sl_type": "function",
    "name": "read_resource",
    "description": "This function returns the content of the resource from the MCP server given the URI of the resource.",
    "strict": false,
    "sl_tool_metadata": {
      "operation": "resources/read"
    },
    "parameters": [
      {
        "name": "uri",
        "type": "STRING",
        "description": "Unique identifier for the resource",
        "required": true
      }
    ]
  }
]

MCP Invoke Snap

The MCP Invoke Snap enables users to perform operations such as tools/call, resources/list, and resources/read against an MCP server.

Properties
- Account: An account is required to use the MCP Invoke Snap.
- Operation: The operation to perform on the MCP server. The operation must be one of tools/call, resources/list, or resources/read.
- Tool Name: The name of the tool to call. Only enabled and required when the operation is tools/call.
- Parameters: The parameters to be added to the operation. Only enabled for resources/read and tools/call. Required for resources/read, and optional for tools/call, depending on the tool.

MCP Agents in pipeline action

MCP Agent Driver pipeline
An MCP Agent Driver pipeline is like any other Agent Driver pipeline; we'll need to provide the system prompt and user prompt, and run it with the PipeLoop Snap.

MCP Agent Worker pipeline
Here's an example of an MCP Agent with a single MCP server connection: the MCP Agent Worker is connected to one MCP server. MCP Client Snaps can be used together with AgentCreator Snaps, such as the Multi-Pipeline Function Generator and Pipeline Execute Snap, as SnapLogic functions (tools). This allows users to use tools provided by MCP servers alongside internal tools, without sacrificing safety and freedom when building an Agent.

Agent Worker with MCP Client Snaps

SnapLogic MCP Server

In SnapLogic, an MCP Server allows you to expose SnapLogic pipelines as dynamic tools that can be discovered and invoked by language models or external systems. By registering an MCP Server, you effectively provide an API that language models and other clients can use to perform operations such as data retrieval, transformation, enrichment, or automation, all orchestrated through SnapLogic pipelines. For the initial phase, we'll support connections to the server via HTTP + SSE.

Core Capabilities

The MCP Server provides two core capabilities. The first is listing tools, which returns structured metadata that describes the available pipelines. This metadata includes the tool name, a description, the input schema in JSON Schema format, and any additional relevant information.
This allows clients to dynamically discover which operations are available for invocation. The second capability is calling tools, where a specific pipeline is executed as a tool using structured input parameters, and the output is returned. Both of these operations—tool listing and tool calling—are exposed through standard JSON-RPC methods, specifically tools/list and tools/call, accessible over HTTP.

Prerequisite

You'll need to prepare your tool pipelines in advance. During the server creation process, these can be added and exposed as tools for external LLMs to use.

MCP Server Pipeline Components

A typical MCP server pipeline consists of four Snaps, each with a dedicated role:

1. Router
What it does: Routes incoming JSON requests—which differ from direct JSON-RPC requests sent by an MCP client—to either the list tools branch or the call tool branch.
How: Examines the request payload (typically the method field) to determine which action to perform.

2. Multi-Pipeline Function Generator (Listing Tools)
What it does: Converts a list of pipeline references into tool metadata. This is where you define the pipelines you want the server to expose as tools.
Output: For each pipeline, generates the tool name, description, parameters (as JSON Schema), and other metadata.
Purpose: Allows clients (e.g., an LLM) to query what tools are available without prior knowledge.

3. Pipeline Execute (Calling Tools)
What it does: Dynamically invokes the selected SnapLogic pipeline and returns structured outputs.
How: Accepts parameters encoded in the request body, maps them to the pipeline's expected inputs, and executes the pipeline.
Purpose: Provides flexible runtime execution of tools based on user or model requests.

4. Union
What it does: Merges the result streams from both branches (list and call) into a single output stream for consistent response formatting.

Request Flows

Below are example flows showing how requests are processed.

tools/list
1. Client sends a JSON-RPC request with method = "tools/list".
2. Router directs the request to the Multi-Pipeline Function Generator.
3. Tool metadata is generated and returned in the response.
4. Union Snap merges and outputs the content.
✅ Result: The client receives a JSON list describing all available tools.

tools/call
1. Client sends a JSON-RPC request with method = "tools/call" and the tool name + parameters.
2. Router sends this to the Pipeline Execute Snap.
3. The selected pipeline is invoked with the given parameters.
4. Output is collected and merged via Union.
✅ Result: The client gets the execution result of the selected tool.

Registering an MCP Server

Once your MCP server pipeline is created:

Create a Trigger Task and register it as an MCP server
1. Navigate to the Designer > Create Trigger Task.
2. Choose a Groundplex. (Note: This capability currently requires a Groundplex, not a Cloudplex.)
3. Select your MCP pipeline.
4. Click Register as MCP server.
5. Configure the node and authentication.

Find your MCP Server URL
Navigate to the Manager > Tasks. The Task Details page exposes a unique HTTP endpoint. This endpoint is treated as your MCP Server URL.

After registration, clients such as AI models or orchestration engines can interact with the MCP Server by calling the /tools/list endpoint to discover the available tools, and the /tools/call endpoint to invoke a specific tool using a structured JSON payload.
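For reference, here is roughly what those two JSON-RPC payloads look like, shown as Python dict literals for brevity. The shapes follow the standard MCP JSON-RPC conventions; the tool name and its arguments are illustrative, and the exact parameters accepted by a given server depend on the pipelines you expose.

# JSON-RPC payloads an MCP client sends, independent of transport (SSE or Streamable-HTTP).
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_weather",                   # a tool exposed by the server (example)
        "arguments": {"city": "New York City"},  # parameters defined by its JSON Schema
    },
}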
Connect to a SnapLogic MCP Server from a Client

After the MCP server is successfully published, using the SnapLogic MCP server is no different from using other MCP servers running in SSE mode. It can be connected to by any MCP client that supports SSE mode; all you need is the MCP Server URL (and the Bearer Token, if authentication is enabled during server registration).

Configuration

First, you need to add your MCP server in the settings of the MCP client. Taking Claude Desktop as an example, you'll need to modify your Claude Desktop configuration file. The configuration file is typically located at:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json

Add your remote MCP server configuration to the mcpServers section:

{
  "mcpServers": {
    "SL_MCP_server": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://devhost9000.example.com:9000/mcp/6873ff343a91cab6b00014a5/sse",
        "--header",
        "Authorization: Bearer your_token_here"
      ]
    }
  }
}

Key Components
- Server Name: SL_MCP_server - a unique identifier for your MCP server.
- Command: npx - uses the Node.js package runner to execute the mcp-remote package.
- URL: The SSE endpoint URL of your remote MCP server (note the /sse suffix).
- Authentication: Use the --header flag to include authorization tokens if the server has authentication enabled.

Requirements

Ensure you have Node.js installed on your system, as the configuration uses npx to run the mcp-remote package. Replace the example URL and authorization token with your actual server details before saving the configuration. After updating the configuration file, restart Claude Desktop for the changes to take effect.

To conclude, the MCP Server in SnapLogic is a framework that allows you to expose pipelines as dynamic tools accessible through a single HTTP endpoint. This capability is designed for integration with language models and external systems that need to discover and invoke SnapLogic workflows at runtime. MCP Servers make it possible to build flexible, composable APIs that return structured results, supporting use cases such as conversational AI, automated data orchestration, and intelligent application workflows.

Conclusion

SnapLogic's integration of the MCP protocol marks a significant leap forward in empowering LLMs to dynamically discover and invoke SnapLogic pipelines as sophisticated tools, transforming how you build conversational AI, automate complex data orchestrations, and create truly intelligent applications. We're excited to see the innovative solutions you'll develop with these powerful new capabilities.

OpenAI Responses API
Introduction

OpenAI announced the Responses API, their most advanced and versatile interface for building intelligent AI applications. Supporting both text and image inputs with rich text outputs, this API enables dynamic, stateful conversations that remember and build on previous interactions, making AI experiences more natural and context-aware. It also unlocks powerful capabilities through built-in tools such as web search, file search, code interpreter, and more, while enabling seamless integration with external systems via function calling. Its event-driven design delivers clear, structured updates at every step, making it easier than ever to create sophisticated, multi-step AI workflows.

Key features include:
- Stateful conversations via the previous response ID
- Built-in tools like web search, file search, code interpreter, MCP, and others
- Access to advanced models available exclusively through this API, such as o1-pro
- Enhanced support for reasoning models, with reasoning summaries and efficient context management through the previous response ID or encrypted reasoning items
- Clear, event-based outputs that simplify integration and control

While the Chat Completions API remains fully supported and widely used, OpenAI plans to retire the Assistants API in the first half of 2026. To support the adoption of the Responses API, two new Snaps have been introduced:
- OpenAI Chat Completions ⇒ OpenAI Responses API Generation
- OpenAI Tool Calling ⇒ OpenAI Responses API Tool Calling

Both Snaps are fully compatible with existing upstream and downstream utility Snaps, including the OpenAI Prompt Generator, OpenAI Multimodal Content Generator, all Function Generators (Multi-Pipeline, OpenAPI, and APIM), the Function Result Generator, and the Message Appender. This allows existing pipelines and familiar development patterns to be reused while gaining access to the advanced features of the Responses API.

OpenAI Responses API Generation

The OpenAI Responses API Generation Snap is designed to support OpenAI's newest Responses API, enabling more structured, stateful, and tool-augmented interactions. While it builds upon the familiar interface of the Chat Completions Snap, several new properties and behavioral updates have been introduced to align with the Responses API's capabilities.

New properties
- Message: The input sent to the LLM. This field replaces the previous Use message payload, Message payload, and Prompt properties in the OpenAI Chat Completions Snap, consolidating them into a single input. It removes ambiguity between "prompt" as raw text and as a template, and supports both string and list formats.
- Previous response ID: The unique ID of the previous response to the model. Use this to create multi-turn conversations.

Model parameters
- Reasoning summary: For reasoning models, provides a summary of the model's reasoning, which aids in debugging and in understanding how the model arrived at its answer. The property can be none, auto, or detailed.

Advanced prompt configurations
- Instructions: Applied only to the current response, making them useful for dynamically swapping instructions between turns. To persist instructions across turns when using previous_response_id, the developer message in the OpenAI Prompt Generator Snap should be used.

Advanced response configurations
- Truncation: Defines how to handle input that exceeds the model's context window. auto allows the model to truncate the middle of the conversation to fit, while disabled (default) causes the request to fail with a 400 error if the context limit is exceeded.
- Include reasoning encrypted content: Includes an encrypted version of reasoning tokens in the output, allowing reasoning items to persist when the store is disabled.
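To relate these Snap properties to the underlying REST interface, here is a rough sketch of a stateful, two-turn exchange using OpenAI's Python SDK. Parameter names follow OpenAI's public Responses API documentation as of this writing; the model choice and prompt text are only examples, and the Snap configures these options for you, so this is for orientation rather than something you need to write.

from openai import OpenAI

client = OpenAI()

# First turn: instructions apply to this response only.
first = client.responses.create(
    model="gpt-4.1-mini",
    instructions="You are a friendly and helpful assistant.",
    input="Can you recommend 2 good sushi restaurants near me?",
    truncation="auto",                     # drop middle context instead of failing with a 400
    # reasoning={"summary": "auto"},       # reasoning models (o-series) only
)

# Second turn: pass the previous response ID so the API carries the context.
follow_up = client.responses.create(
    model="gpt-4.1-mini",
    previous_response_id=first.id,
    input="Which of those two is cheaper?",
)

print(follow_up.output_text)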
Built-in tools
- Web search: Enables the model to access up-to-date information from the internet to answer queries beyond its training data.
  - Web search type
  - Search context size
  - User location: an approximate user location, including city, region, country, and timezone, to deliver more relevant search results.
- File search: Allows the model to retrieve information from documents or files.
  - Vector store IDs
  - Maximum number of results
  - Include search results: Determines whether raw search results are included in the response for transparency or debugging.
  - Ranker
  - Score threshold
  - Filters: Additional metadata-based filters to refine search results. For more details on using filters, see Metadata Filtering.

Advanced tool configuration
- Tool choice: A new option, SPECIFY A BUILT-IN TOOL, allows specifying that the model should use a built-in tool to generate a response.

Note that the OpenAI Responses API Generation Snap does not support the response count or stop sequences properties, as these are not available in the Responses API. Additionally, the message user name, which may be specified in the Prompt Generator Snap, is not supported and will be ignored if included.

Model response of Chat Completions vs Responses API

The Responses API introduces an event-driven output structure that significantly enhances how developers build and manage AI-powered applications compared to the traditional Chat Completions API. While the Chat Completions API returns a single, plain-text response within the choices array, the Responses API provides an output array containing a sequence of semantic event items—such as reasoning, message, function_call, web_search_call, and more—that clearly delineate each step in the model's reasoning and actions. This structured approach allows developers to easily track and interpret the model's behavior, facilitating more robust error handling and smoother integration with external tools. Moreover, the response from the Responses API includes the model parameter settings, providing additional context for developers.

Pipeline examples

Built-in tool: web search

This example demonstrates how to use the built-in web search tool. In this pipeline, the user's location is specified to ensure the web search targets relevant geographic results.

System prompt: You are a friendly and helpful assistant. Please use your judge to decide whether to use the appropriate tools or not to answer questions from the user.
Prompt: Can you recommend 2 good sushi restaurants near me?
Output: The output contains both a web search call and a message. The model uses the web search to find and provide recommendations based on current data, tailored to the specified location.

Built-in tool: file search

This example demonstrates how the built-in file search tool enables the model to retrieve information from documents stored in a vector store during response generation. In this case, the file wildfire_stats.pdf has been uploaded. You can create and manage vector stores through the Vector Store management page.

Prompt: What is the number of Federal wildfires in 2018?
Output: The output array contains a file_search_call event, which includes search results in its results field. These results provide matched text, metadata, and relevance scores from the vector store.
This is followed by a message event, where the model uses the retrieved information to generate a grounded response. The presence of detailed results in the file_search_call is enabled by selecting the Include file search results option.

OpenAI Responses API Tool Calling

The OpenAI Responses API Tool Calling Snap is designed to support function calling using OpenAI's Responses API. It works similarly to the OpenAI Tool Calling Snap (which uses the Chat Completions API), but is adapted to the event-driven response structure of the Responses API and supports stateful interactions via the previous response ID. While it shares much of its configuration with the Responses API Generation Snap, it is purpose-built for workflows involving function calls.

Existing LLM agent pipeline patterns and utility Snaps—such as the Function Generator and Function Result Generator—can continue to be used with this Snap, just as with the original OpenAI Tool Calling Snap. The primary difference lies in adapting the Snap configuration to accommodate the Responses API's event-driven output, particularly the structured function_call event item in the output array.

The Responses API Tool Calling Snap provides two output views, similar to the OpenAI Tool Calling Snap, with enhancements to simplify building agent pipelines and support stateful interactions using the previous response ID:

Model response view: The complete API response, including extra fields:
- messages: an empty list if store is enabled, or the full message history—including the messages payload and model response—if disabled (similar to the OpenAI Tool Calling Snap). When using stateful workflows, message history isn't needed because the previous response ID is used to maintain context.
- has_tool_call: a boolean indicating whether the response includes a tool call. Since the Responses API no longer includes the finish_reason: "tool_calls" field, this new field makes it easier to create stop conditions in the PipeLoop Snap within the agent driver pipeline.

Tool call view: Displays the list of function calls made by the model during the interaction.

Tool Call View of Chat Completions vs Responses API

Chat Completions API:
- Uses id as the function call identifier when sending back the function result.
- Tool call properties (name, arguments) are nested inside the function field.

Responses API:
- Each tool call includes id (the unique event ID) and call_id (used to reference the function call when returning the result).
- The tool call structure is flat: name and arguments are top-level fields.
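As a concrete illustration of that difference, here is roughly what a single tool call item and its returned result look like in each API, shown as Python dicts. The shapes follow OpenAI's public documentation; the weather tool and its arguments are just examples.

# Chat Completions API: the call is nested under "function",
# and the result is sent back referencing the call's "id".
chat_completions_tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"city": "New York City"}'},
}
chat_completions_result = {
    "role": "tool",
    "tool_call_id": "call_abc123",
    "content": '{"temperature_c": 21}',
}

# Responses API: the item is flat and carries both an event "id" and a "call_id";
# the result must reference "call_id", not "id".
responses_tool_call = {
    "type": "function_call",
    "id": "fc_xyz789",
    "call_id": "call_abc123",
    "name": "get_weather",
    "arguments": '{"city": "New York City"}',
}
responses_result = {
    "type": "function_call_output",
    "call_id": "call_abc123",
    "output": '{"temperature_c": 21}',
}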
Building LLM Agent Pipelines

To build LLM agent pipelines with the OpenAI Responses API Tool Calling Snap, you can reuse the same agent pipeline pattern described in Introducing Tool Calling Snaps and LLM Agent Pipelines. Only minor configuration changes are needed to support the Responses API.

Agent Driver Pipeline

The primary change is in the PipeLoop Snap configuration, where the stop condition should now check the has_tool_call field, since the Responses API no longer includes finish_reason: "tool_calls".

Agent Worker Pipeline

Fields mapping
A Mapper Snap is used to prepare the related fields for the OpenAI Responses API Tool Calling Snap.

OpenAI Responses API Tool Calling
The key changes are in this Snap's configuration to support the Responses API's stateful interactions. There are two supported approaches:

Option 1: Use Store (Recommended)
Leverages the built-in state management of the Responses API.
- Enable Store.
- Use Previous Response ID.
- Send only the function call results as the input messages for the next round. (The messages field in the Snap's output will be an empty array, so you can still use it in the Message Appender Snap to collect tool results.)

Option 2: Maintain Conversation History in Pipeline
Similar to the approach used with the Chat Completions API.
- Disable Store.
- Include the full message history in the input (the messages field in the Snap's output contains the message history).
- (Optional) Enable Include Reasoning Encrypted Content (for reasoning models) to preserve reasoning context efficiently.

OpenAI Function Result Generator

As explained in the Tool Call View of Chat Completions vs Responses API section, the Responses API includes both an id and a call_id. You must use the call_id to construct the function call result when sending it back to the model.

Conclusion

The OpenAI Responses API makes AI workflows smarter and more adaptable, with stateful interactions and built-in tools. SnapLogic's OpenAI Responses API Generation and Tool Calling Snaps bring these capabilities directly into your pipelines, letting you take advantage of advanced features like built-in tools and event-based outputs with only minimal adjustments. By integrating these Snaps, you can seamlessly enhance your workflows and fully unlock the potential of the Responses API.
Agentic Builders Webinar Series - Integrated agentic workflows, built live, every week
Register Here>>

The Agentic Builders webinar series is your step-by-step guide to designing powerful, AI-powered workflows that transform how work gets done. Across five live sessions, SnapLogic experts will show you how to connect your data, automate complex tasks, and empower teams to put AI to work across departments including: sales, finance, customer success, learning services, and revenue operations.

What you'll take away:
- See agentic workflows built live, integrating data sources and tools you already use.
- Learn how to automate high-value, high-effort tasks across your organization.
- Discover best practices for connecting CRM, support, LMS, and financial systems.
- Walk away with actionable steps to design your first (or next) agentic workflow.

Starts August 28th and runs through September 25th. Explore the series!

More Than Just Fast: A Holistic Guide to High-Performance AI Agents
At SnapLogic, while building and refining an AI Agent for a large customer in the healthcare industry, we embarked on a journey of holistic performance optimization. We didn't just want to make it faster. We tried to make it better across the board. This journey taught us that significant gains are found by looking at the entire system, from the back-end data sources to the pixels on the user's screen. Here's our playbook for building a truly high-performing AI agent, backed by real-world metrics.

The Foundation: Data and Architecture

Before you can tune an engine, you have to build it on a solid chassis. For an AI Agent, that chassis is its core architecture and its relationship with data.

Choose the Right Brain for the Job: Not all LLMs are created equal. The "best" model depends entirely on the nature of the tasks your agent needs to perform. A simple agent with one or two tools has very different requirements from a complex agent that needs to reason, plan, and execute dynamic operations. Matching the model to the task complexity is key to balancing cost, speed, and capability.

Task complexity and the model type to match:
- Simple, Single-Tool Tasks (model type: Fast & Cost-Effective). Goal: Executing a well-defined task with a limited toolset (e.g., simple data lookups, classification). These models are fast and cheap, perfect for high-volume, low-complexity actions.
- Multi-Tool Orchestration (model type: Balanced). Goal: Reliably choosing the correct tool from several options and handling moderately complex user requests. These models offer a great blend of speed, cost, and improved instruction-following for a good user experience.
- Complex Reasoning & Dynamic Tasks (model type: High-Performance / Sophisticated). Goal: Handling ambiguous requests that require multi-step reasoning, planning, and advanced tool use like dynamic SQL query generation. These are the most powerful (and expensive) models, essential for tasks where deep understanding and accuracy are critical.

Deconstruct Complexity with a Multi-Agent Approach: A single, monolithic agent designed to do everything can become slow and unwieldy. A more advanced approach is to break down a highly complex agent into a team of smaller, specialized agents. This strategy offers two powerful benefits:
- It enables the use of faster, cheaper models. Each specialized agent has a narrower, more defined task, which often means you can use a less powerful (and faster) LLM for that specific job, reserving your most sophisticated model for the "manager" agent that orchestrates the others.
- It dramatically increases reusability. These smaller, function-specific agents and their underlying tools are modular. They can be easily repurposed and reused in the next AI Agent you build, accelerating future development cycles.

Set the Stage for Success with Data: An AI Agent is only as good as the data it can access. We learned that optimizing data access is a critical first step. This involved:
- Implementing Dynamic Text-to-SQL: Instead of relying on rigid, pre-defined queries, we empowered the agent to build its own SQL queries dynamically from natural language. This flexibility required a deep initial investment in analyzing and understanding the critical columns and data formats our agent would need from sources like Snowflake.
- Generating Dedicated Database Views: To support the agent, we generated dedicated views on top of our source tables. This strategy serves two key purposes: it dramatically reduces query times by pre-joining and simplifying complex data, and it allows us to remove sensitive or unnecessary data from the source, ensuring the agent only has access to what it needs.
- Pre-loading the Schema for Agility: Making the database schema available to the agent is critical for accurate dynamic SQL generation. To optimize this, we pre-load the relevant schemas at startup. This simple step saves precious time on every single query the agent generates, contributing significantly to the overall responsiveness.

The Engine: Tuning the Agent's Logic and Retrieval

Our Diagnostic Toolkit: Using AI to Analyze AI

Before we could optimize the engine, we needed to know exactly where the friction was. Our diagnostic process followed a two-step approach:

1. High-Level Analysis: We started in the SnapLogic Monitor, which provides a high-level, tabular view of all pipeline executions. This dashboard is the starting point for any performance investigation. As you can see below, it gives a list of all runs, their status, and their total duration. By clicking the Download table button, you can export this summary data as a CSV. This allows for a quick, high-level analysis to spot outliers and trends without immediately diving into verbose log files.

2. AI-Powered Deep Dive: Once we identified a bottleneck from the dashboard—a pipeline that was taking longer than expected—we downloaded the detailed, verbose log files for those specific pipeline runs. We then fed these complex logs into an AI tool of our choice. This "AI analyzing AI" approach helped us instantly pinpoint key issues that would have taken hours to find manually. For example, this process uncovered an unnecessary error loop caused by duplicate JDBC driver versions, which significantly extended the execution time of our Snowflake Snaps. Fixing this single issue was a key factor in the 68% performance improvement we saw when querying our technical knowledge base.

With a precise diagnosis in hand, we turned our attention to the agent's "thinking" process. This is where we saw some of our most dramatic performance gains.

How We Achieved This:

Crafting the Perfect Instructions (System Prompts): We transitioned from generic prompts to highly customized system prompts, optimized for both the specific task and the chosen LLM. A simpler model gets a simpler, more direct prompt, while a sophisticated model can be instructed to "think step-by-step" to improve its reasoning.

A Simple Switch for Production Speed: One of the most impactful, low-effort optimizations came from how we use a key development tool: the Record Replay Snap. During the creation and testing of our agent's pipelines, this Snap is invaluable for capturing and replaying data, but it adds about 2.5 seconds of overhead to each execution. For a simple agent run involving a driver, a worker, and one tool, this adds up to 7.5 seconds of unnecessary latency in a production environment. Once our pipelines were successfully tested, we switched these Snaps to "Replay Only" mode. This simple change instantly removed the recording overhead, providing a significant speed boost across all agent interactions.

Smarter, Faster Data Retrieval (RAG Optimization): For our Retrieval-Augmented Generation (RAG) tools, we focused on two key levers (a simplified sketch of the retrieval pattern follows this list):
- Finding the Sweet Spot (k value): We tuned the k value—the number of documents retrieved for context. For our product information retrieval use case, adjusting this value was the key to our 63% speed improvement. It's the art of getting just enough context for an accurate answer without creating unnecessary work for the LLM.
- Surgical Precision with Metadata: Instead of always performing a broad vector search, we enabled the agent to use metadata. If it knows a document's unique_ID, it can fetch that exact document. This is the difference between browsing a library and using a call number. It's swift and precise.
- Ensuring Consistency: We set the temperature to a low value during the data extraction and indexing process. This ensures that the data chunks are created consistently, leading to more reliable and repeatable search results.
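To illustrate the two retrieval levers, here is a minimal, hypothetical sketch. The vector_store interface and field names are invented for illustration; the post does not describe SnapLogic's retrieval Snaps at this level of detail.

def retrieve_context(query, doc_id=None, k=5):
    # Lever 2: if the agent already knows the document's unique_ID,
    # fetch that exact document instead of doing a broad vector search.
    if doc_id is not None:
        return vector_store.get(filter={"unique_ID": doc_id})   # hypothetical API

    # Lever 1: otherwise run a semantic search, where k controls how many
    # chunks come back - just enough context without overloading the LLM.
    return vector_store.search(query=query, top_k=k)             # hypothetical API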
The Results: A Data-Driven Transformation

Our optimization efforts led to significant, measurable improvements across several key use cases for the AI Agent.

- Querying Technical Knowledge Base: 92 seconds before optimization, 29 seconds after (~68% faster)
- Processing Sales Order Data: 32 seconds before, 10.7 seconds after (~66% faster)
- RAG Retrieval: 5.8 seconds before, 2.1 seconds after (~63% faster)
- Production Optimization (Replay Only): 20 seconds before, 17.5 seconds after (~12% faster*)

(*This improvement came from switching development Snaps to a production-ready "Replay Only" mode, removing the latency inherent to the testing phase.)

The Experience: Focusing on the User

Ultimately, all the back-end optimization in the world is irrelevant if the user experience is poor. The final layer of our strategy was to focus on the front-end application.

- Engage, Don't Just Wait: A simple "running…" message can cause user anxiety and make any wait feel longer. Our next iteration will provide a real-time status of the agent's thinking process (e.g., "Querying product database…", "Synthesizing answer…"). This transparency keeps the user engaged and builds trust.
- Guide the User to Success: We learned that a blank text box can be intimidating. By providing predefined example prompts and clearly explaining the agent's capabilities, we guide the user toward successful interactions.
- Deliver a Clear Result: The final output must be easy to consume. We format our results cleanly, using tables, lists, and clear language to ensure the user can understand and act on the information instantly.

By taking this holistic approach, we optimized the foundation, the engine, and the user experience to build an AI Agent that doesn't just feel fast. It feels intelligent, reliable, and genuinely helpful.

Building an AI Agent with SnapLogic AgentCreator using OpenAI and Microsoft Teams - Part 1
In this two-part blog series, I will cover how to create an AI Agent using SnapLogic's AgentCreator and integrate it with Microsoft Teams via Azure Bot Service. The solution combines SnapLogic's AgentCreator, OpenAI (gpt-4.1-mini), and Microsoft Teams (to provide a familiar chat interface). In the first part, we will cover building the agent with SnapLogic pipelines and the AgentCreator pattern. In the second part, I will explain the Azure setup and Teams integration, and highlight the business benefits of conversational automation.

Designing the SnapLogic-Powered AI Agent

The example I've decided to go with is a simple Weather Agent that provides a conversational interface for weather queries, accessible directly within Teams. This improves the user experience by integrating information into the tools people already use, and it showcases how SnapLogic's AgentCreator can automate tasks through natural language.

How does SnapLogic help?

SnapLogic's new AgentCreator framework allows us to build an AI-driven agent that uses LLM intelligence combined with SnapLogic pipelines to fetch real data. The Weather Agent understands a user's question, decides if it needs to call a function (tool), performs that action via a SnapLogic pipeline, and then responds conversationally with the result. SnapLogic AgentCreator is purpose-built for such scenarios, enabling enterprises to create AI agents that can call pipelines and APIs autonomously. In our case, the agent will use a weather API through SnapLogic to get live data, meaning the agent's answers are not just based on static knowledge, but on real-time API calls.

SnapLogic AgentCreator Architecture Overview

We will focus on the AgentCreator pattern – a design that splits the agent's logic into two cooperative pipelines: an Agent Driver and an Agent Worker. This pattern is orchestrated by SnapLogic's Pipeline loop (PipeLoop) Snap, which allows iterative calls to a pipeline until a certain condition is met; in our case, until the conversation turn is complete or the iteration limit is reached. Here's how it works:

- Agent Driver pipeline: This orchestrator pipeline receives the incoming chat message and manages the overall conversation loop. It sends the user's query (plus any chat history messages available) and the system prompt to the Agent Worker pipeline using the PipeLoop Snap, and keeps iterating until the LLM signals that it's done responding or the iteration limit is reached.
- Agent Worker pipeline: This pipeline handles one iteration of LLM interaction. It presents the LLM with the conversation context and available tools, gets the LLM's response (which could be an answer or a function call request), executes any required tool, and returns the result back to the Driver. The Worker is essentially where the "brain" of the agent lives – it decides if a tool call is needed and formats the answer.

This architecture allows the agent to perform multi-turn reasoning. For example, if the user asks for weather, the LLM might first respond with a function call to get data, the Worker executes that call, and then the LLM produces a final answer in a second iteration. The PipeLoop Snap in the Driver pipeline detects whether another iteration is needed (if the last LLM output was a partial result or tool request) and loops again, or stops if the answer is complete.
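Conceptually, the Driver/Worker split maps to a loop like the following Python pseudocode. This is only a sketch of the control flow; the actual work happens inside SnapLogic pipelines, and call_agent_worker, the field names, and the stop condition are hypothetical stand-ins.

def agent_driver(user_message, system_prompt, history, max_iterations=5):
    messages = ([{"role": "system", "content": system_prompt}]
                + history
                + [{"role": "user", "content": user_message}])

    # PipeLoop semantics: invoke the Agent Worker pipeline repeatedly
    # until the stop condition is met or the iteration limit is reached.
    for _ in range(max_iterations):
        result = call_agent_worker(messages)     # one LLM turn plus any tool calls
        messages = result["messages"]            # the conversation grows each round

        if not result["has_tool_call"]:          # stop condition: nothing left to do
            return result["answer"]

    return "Sorry, I couldn't complete that request within the iteration limit."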
Key components of the Weather Agent architecture:

SnapLogic AgentCreator: The toolkit that makes this AI agent possible. It provides specialized Snaps for prompt handling, LLM integration ( OpenAI, Azure OpenAI, Amazon Bedrock, Google Gemini, etc. ), and function-calling logic. SnapLogic AgentCreator enables designing AI agents with dynamic iteration and tool usage built in.

LLM ( generative AI model ): The LLM powering the agent's understanding and response generation. In our implementation, an LLM ( such as OpenAI GPT ) interprets the user's request and decides when to call the available tools. SnapLogic's Tool Calling Snap interfaces with the LLM's API to get these decisions.

Weather API: The external data source for live weather information. The agent uses a real API ( https://open-meteo.com/ ) to fetch current weather details for the requested location.

Microsoft Teams & Azure Bot: The front-end interface where the user interacts with the bot, and the connector that sends messages between Teams and our SnapLogic pipelines.

Setting up an OpenAI API account

Because we are working with the GPT-4.1 mini API, we will need to configure an OpenAI account. This assumes you have already created an API key in your OpenAI dashboard.

Navigate to the Manager tab under your project folder location and click on the "+" button to create a new Account.
Navigate to OpenAI LLM -> OpenAI API Key Account.
Name the account based on your needs or naming convention.
Copy and paste your API key from the OpenAI dashboard.
On the Agent Worker pipeline, open the "OpenAI Tool Calling" Snap and apply the newly created account.
Save the pipeline. You have now successfully integrated the OpenAI API.

Weather Agent pipelines in SnapLogic

I've built a set of SnapLogic pipelines to implement the Weather Agent logic using AgentCreator. Each pipeline has a specific role in the overall chatbot workflow:

WeatherAgent_AgentDriver: The orchestrator for the agent. It is triggered by incoming HTTP requests from the Azure Bot Service ( when a user sends a Teams message ). The AgentDriver parses the incoming message, sends a quick "typing…" indicator to the user ( to simulate the bot typing ), and then uses a PipeLoop Snap to invoke the AgentWorker pipeline. It supplies the user's question, the system prompt and any prior context, and keeps iterating until the bot's answer is complete. It also handles clearing the chat history if the user sends a specific message such as "CLEAR_CHAT" in the Teams conversation, to refresh the conversation.

WeatherAgent_AgentWorker: The tool-calling pipeline ( Agent Worker ) that interacts with the LLM. On each iteration, it takes the conversation messages ( system prompt, user query, and any accumulated dialogue history ) from the Driver. The flow of the Agent Worker for a Weather Agent:
- defines what tools ( functions ) the LLM is allowed to call – in this case, the location and weather lookup tools
- invokes the LLM via a Tool Calling Snap, passing it the current conversation and available function definitions
- processes the LLM's response – if the LLM requests a function call ( "get weather for London" ), the pipeline routes that request to the appropriate tool pipeline
- once the tool returns data, the Worker formats the result using a Function Result Generator Snap and appends it to the conversation via a Message Appender Snap
- returns the updated conversation with any LLM answer or tool results back to the Driver.
The AgentWorker essentially handles one round of "LLM thinking".

WeatherAgent_GetLocation: A tool that the agent can use to convert a user's input location ( city name, etc. ) into a standardized form or coordinates ( latitude and longitude ). It queries an open-meteo API to retrieve latitude and longitude data for the given location. The system prompt instructs the agent that if the tool returns more than one match, it should ask the user which location they meant – keeping a human in the loop for such scenarios. For example, if the user requests weather for "Springfield", the agent first calls the GetLocation tool, and if the tool responds with multiple locations, the agent will list them ( for example, Springfield, MA; Springfield, IL; Springfield, MO ) and ask the user to specify which one they meant before proceeding. Once the location is confirmed, the agent passes the coordinates to the GetWeather tool.

WeatherAgent_GetWeather: The tool pipeline that actually fetches current weather data from an external API. This pipeline is invoked when the LLM decides it needs the weather info. It takes an input latitude and longitude and calls a weather API. In our case, I've used the open-meteo service, which returns a JSON containing weather details for a given location. The pipeline consists of an HTTP Client Snap ( configured to call the weather API endpoint with the location ) and a Mapper Snap to shape the API's JSON response into the format expected by the Agent Worker pipeline. Once the data is retrieved ( temperature, conditions, etc. ), this pipeline's output is fed back into the Agent Worker ( via the Function Result Generator ) so the LLM can use it to compose a user-friendly answer.

MessageEndpoint_ChatHistory: This pipeline handles conversation history ( simple memory ) for each user or conversation. Because our agent may be used by multiple users ( and we want each user's chat to be independent ), we maintain a user-specific chat history. In this example the pipeline uses SLDB storage to store the file, but in a production environment the ChatHistory pipeline could use a database Snap to store chat history, keyed by user or conversation ID. Each time a new message comes in, the AgentDriver calls this pipeline to fetch recent context ( so the bot can "remember" what was said before ). This ensures continuity in the conversation – for example, if the user follows up with "What about tomorrow?", the bot can refer to the previous question's context stored in history. For simplicity, one could also maintain context in-memory during a single conversation session, but persisting it via this pipeline allows context across multiple sessions or a longer pause.

SnapLogic introduced specialized Snaps for LLM function calling to coordinate this process. The Function Generator Snap defines the available tools that the LLM agent can use ( a hedged sketch of what such definitions can look like is shown below ). The Tool Calling Snap sends the user's query and function definitions to the LLM model and gets back either an answer or a function call request ( and any intermediate messages ). If a function call is requested, SnapLogic uses a Pipeline Execute or similar mechanism to run the corresponding pipeline. The Function Result Generator then formats the pipeline's output into a form the LLM can understand. At the end, the Message Appender Snap adds the function result into the conversation history, so the LLM can take that into account in the next response.
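For illustration, tool definitions follow the standard function-calling schema that LLM providers such as OpenAI expect. The tool names and parameters below are assumptions chosen to mirror the two tool pipelines described above, not the exact values configured in the Function Generator Snaps:

```json
[
  {
    "type": "function",
    "function": {
      "name": "get_location",
      "description": "Look up latitude/longitude candidates for a user-provided place name. May return multiple matches.",
      "parameters": {
        "type": "object",
        "properties": {
          "city": { "type": "string", "description": "City or place name, e.g. 'Springfield'" }
        },
        "required": [ "city" ]
      }
    }
  },
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Fetch the current weather for a set of coordinates.",
      "parameters": {
        "type": "object",
        "properties": {
          "latitude": { "type": "number" },
          "longitude": { "type": "number" }
        },
        "required": [ "latitude", "longitude" ]
      }
    }
  }
]
```

The Function Generator Snap builds an equivalent structure from its settings, so in practice you configure these fields in the Snap rather than writing the JSON by hand.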
This chain of Snaps allows the agent to decide between answering directly or using a tool, all within a no-code pipeline.

Sample interaction from user prompt to answer

To make the above more concrete, let's walk through the flow of a sample interaction step by step:

User asks a question in Teams: "What's the weather in San Francisco right now?" This message is sent from Teams to the Azure Bot Service, which relays it as an HTTP POST to our SnapLogic AgentDriver pipeline's endpoint ( the messaging endpoint URL we will configure in the second part ).

AgentDriver pipeline receives the message: The WeatherAgent_AgentDriver captures the incoming JSON payload from Teams. This payload contains the user's message text and metadata ( like user ID, conversation ID, etc. ). The pipeline first responds immediately with a typing indicator to Teams. We configured a small branch in the pipeline to output a "typing" activity message back to the Bot Service, so that Teams shows the bot is typing – implemented mainly to enhance the UX while the user waits for an answer.

Preparing the prompt and context: The AgentDriver then prepares the initial prompt for the LLM. Typically, we include a system prompt ( defining the bot's role/behavior ) and the user prompt. If we have prior conversation history ( from MessageEndpoint_ChatHistory for this user ), we also include recent messages to give context. All this is packaged into a messages array that will be sent to the LLM.

AgentDriver invokes AgentWorker via PipeLoop: The Driver uses a PipeLoop Snap configured to call the WeatherAgent_AgentWorker pipeline. It passes the prepared message payload as input. The PipeLoop is set with a stop condition based on the LLM's response status – it will loop until the LLM indicates the conversation turn is complete or the iteration limit has been reached ( for example, OpenAI returns a finish_reason of "stop" when it has a final answer, or "function_call" when it wants to call a function ).

AgentWorker ( 1st iteration - tool decision ): In this first iteration, the Worker pipeline receives the messages ( system + user ). Inside the Worker:
- A Function Generator Snap provides the definitions of the GetLocation and GetWeather tools, including their name, description, and parameters. This tells the LLM what each tool does and how to call it.
- The Tool Calling Snap then sends the conversation ( so far just the user question and system role ) along with the available tool definitions to the LLM.
- The LLM evaluates the user's request in the context of being a weather assistant. In our scenario, we expect it will decide it needs to use a tool to get the answer. Instead of replying with text, the LLM responds with a function call request. For example, the LLM might return a JSON payload like the hedged sketch shown right after these bullets.
- The Tool Calling Snap outputs this structured decision. ( Under the hood, the Snap outputs it on a Tool Calls view when a function call is requested. ) The pipeline splits into two parallel paths at this point:
- One path captures the LLM's partial response ( which indicates a tool is being called ) and routes it into a Message Appender. This ensures that the conversation history now includes an assistant turn that is essentially a tool call.
- The other path takes the function call details and invokes the corresponding tool. In SnapLogic, we use a Pipeline Execute Snap to call the WeatherAgent_GetWeather pipeline.
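What that function call request looks like depends on the provider and Snap version. As a rough sketch in OpenAI's tool-calling format – and assuming, for simplicity, that the weather tool is called with the city name directly rather than with coordinates from a prior GetLocation call – it could be something like:

```json
{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "get_weather",
        "arguments": "{ \"location\": \"San Francisco\" }"
      }
    }
  ]
}
```

The id and argument values here are invented for illustration; the important part is that the assistant message carries a structured function name and arguments instead of plain text.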
We pass the location ( "San Francisco" ) from the LLM's request into the WeatherAgent_GetWeather pipeline as an input value ( careful, it is not a pipeline parameter ).

WeatherAgent_GetWeather executes: This pipeline calls the external weather API with the given location. It gets back weather data ( say the API returns that it's 18°C and sunny ). The SnapLogic pipeline returns this data to the AgentWorker pipeline. On the next iteration, the messages array will include this tool result ( a hedged sketch of the full conversation at the end of the turn is shown after this walkthrough ).

AgentWorker ( function result return ): With the weather data now in hand, a Function Result Generator Snap in the Worker takes the result and packages it in the format the LLM expects for a function result. Essentially, it creates the content that will be injected into the conversation as the function's return value. The Message Appender Snap then adds this result to the conversation history array as a new assistant message ( but marked in a way that the LLM knows it's the function's output ). Now the Worker's first iteration ends, and it outputs the updated messages array ( which now contains: the user's question, the assistant's "thinking/confirmation" message, and the raw weather data from the tool ).

AgentDriver ( loop decision ): The Driver pipeline receives the output of the Worker's iteration. Since the last LLM action was a function call ( not a final answer ), the stop condition is not met. Thus, the PipeLoop triggers the next iteration, sending the updated conversation ( which now includes the weather info ) back into the AgentWorker for another round.

AgentWorker ( 2nd iteration - final answer ): In this iteration, the Worker pipeline again calls the Tool Calling Snap, but now the messages array includes the results of the weather function. The LLM gets to see the weather data that was fetched. Typically, the LLM will now complete the task by formulating a human-friendly answer. For example, it might respond: "It's currently 18°C and sunny in San Francisco." This time, the LLM's answer is a normal completion with no further function calls needed. The Tool Calling Snap outputs the assistant's answer text and a finish_reason indicating completion ( like "stop" ). The Worker appends this answer to the message history and outputs the final messages payload.

AgentDriver ( completion ): The Driver receives the final output from the Worker's second iteration. The PipeLoop Snap sees that the LLM signaled no more steps ( finish condition met ), so it stops looping. Now the AgentDriver takes the final assistant message ( the weather answer ) and sends it as the bot's response back to Teams via the HTTP response. The pipeline extracts just the answer text to return to the user.

User sees the answer in Teams: The user's Teams chat now displays the Weather Agent's reply, for example: "It's currently 18°C and sunny in San Francisco." The conversation context ( question and answer ) can be stored via the ChatHistory pipeline for future reference. From the user's perspective, they asked a question and the bot answered naturally, with only a brief delay during which they saw the bot "typing" indicator. Throughout this interaction, the typing indicator helps reassure the user that the agent is working on the request.
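Putting the whole turn together, the conversation that the Driver and Worker pass back and forth would end up looking roughly like the following. This is a sketch in the OpenAI chat-messages format; the system prompt text and weather values are placeholders, and the exact field names ( and how the Message Appender Snap marks tool results ) may differ in the actual pipelines:

```json
[
  { "role": "system", "content": "You are a helpful weather assistant. Use the provided tools to answer weather questions." },
  { "role": "user", "content": "What's the weather in San Francisco right now?" },
  {
    "role": "assistant",
    "content": null,
    "tool_calls": [
      {
        "id": "call_abc123",
        "type": "function",
        "function": { "name": "get_weather", "arguments": "{ \"location\": \"San Francisco\" }" }
      }
    ]
  },
  { "role": "tool", "tool_call_id": "call_abc123", "content": "{ \"temperature_c\": 18, \"conditions\": \"sunny\" }" },
  { "role": "assistant", "content": "It's currently 18°C and sunny in San Francisco." }
]
```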
The user-specific chat history ensures that if the user asks a follow-up like "How about tomorrow?", the agent could understand that "tomorrow" refers to the weather in San Francisco, continuing the context ( this would involve the LLM and pipelines using the stored history to know the city from the prior turn ).

This completes the first part, which covered how the SnapLogic pipelines and the AgentCreator framework enable an AI-powered chatbot to use tools and deliver real-time info. We saw how the Agent Driver + Worker architecture ( with iterative PipeLoop execution ) allows interactions where the LLM can call SnapLogic pipelines as functions. The no-code SnapLogic approach made it possible to integrate an LLM without writing custom code – we simply configured Snaps and pipelines. We now have a working AI Agent that we can use in SnapLogic; however, we are still missing the chatbot experience. In the second part, we'll shift to the integration with Microsoft Teams and Azure, to see how this pipeline is exposed as a bot endpoint and what steps are needed to deploy it in a real chat environment.

Building an AI Agent with SnapLogic AgentCreator using OpenAI and Microsoft Teams - Part 2
Integrating the AI Agent with Microsoft Teams via Azure Bot Service

The first part covered the creation of our agent's architecture using SnapLogic pipelines and AgentCreator. Now, we focus on connecting that pipeline to Microsoft Teams so end users can chat with it. This involves creating and configuring the Azure Bot Service as a bridge between Teams and our SnapLogic pipelines. We will walk through the prerequisites and setup.

Prerequisites for the Azure Bot Integration

To integrate the SnapLogic agent with Teams, ensure you have the following prerequisites in place:

SnapLogic AgentCreator and pipelines: A SnapLogic environment where AgentCreator is enabled. The Weather Agent pipelines can be used as a working example. You'll also need to create the AgentDriver pipeline as a Triggered Task ( to obtain an endpoint URL accessible by the bot ).

SnapLogic OAuth2 Account: An OAuth2 account which will be used in an HTTP Client Snap to send the assistant's response back to the user. It is also used to send the "typing" indicator in the Teams chat between tool calls.

Microsoft 365 Tenant with Teams: Access to a Microsoft tenant where you have permission to register applications and upload custom Teams apps. You'll need a Teams environment to test the bot ( this could be a corporate tenant or a developer tenant ).

Azure Subscription: An Azure account with an active subscription to create resources ( specifically, an Azure Bot Service ). Also, ensure you have rights to create an Azure Bot Channels Registration or Azure Bot resource.

Azure AD App Registration: Credentials for the bot. We will register an application in Azure Active Directory to represent our bot ( this provides a Client ID and Client Secret that will be used by the Bot Service to authenticate ).

Azure Bot Service resource: We will create an Azure Bot which will tie together the app registration and our messaging endpoint, and allow adding Teams as a channel.

Register an App in Azure AD for the Bot

The first step is to register an Azure AD application that will identify our bot and provide authentication to the Azure Bot Service and Teams.

Create App registration: In the Azure Portal, navigate to Azure Active Directory > App Registrations and click "New registration". Give the app a name. For supported account types, you can choose "Accounts in this organizational directory only" ( Single tenant ) for simplicity, since this bot is intended for your organization's Teams. You do not need to specify a Redirect URI for this scenario.

Finalize registration: Click Register to create the app. Once created, you'll see the Application ( Client ) ID – copy this ID, as we'll need it later as the Bot ID and in the OAuth2 account.

Create a client secret: In your new app's overview, go to Certificates & secrets. Click "New client secret" to generate a secret key. Give it a description and a suitable expiration period. After saving, copy the Value of the client secret ( it will be a long string ). Save this secret somewhere secure now – you won't be able to retrieve it again after you leave the page. We'll provide this secret to the Bot Service so it can authenticate as this app, and we will also use it in the OAuth2 account in SnapLogic.

Gather Tenant ID: Since we chose a single-tenant app, we'll also need the Azure AD tenant ID. You can find this in the app's Overview. Copy the tenant ID as well for later use.
At this point, you should have:

Client ID ( application ID ) for the bot and the OAuth2 account
Client secret for the bot ( stored securely ) and the OAuth2 account
Tenant ID of your Azure AD

These will be used when setting up the Azure Bot Service so that it knows about this app registration.

Create an OAuth2 account

Now that we have the client ID, client secret and tenant ID gathered from the app registration, we can create the OAuth2 account which will be used in an HTTP Client Snap to send the "typing" indicator as well as the response from the agent.

Navigate to the "Manager" tab and locate your project folder where the agent pipelines are stored.
On the right side, click on the "+" icon to create a new account.
Choose "API Suite > OAuth2 Account".
Populate the client ID and client secret values from your app registration process.
Check the 'Send client data as Basic Auth header' and 'Header authenticated' settings.
Populate the authorization and token endpoints:
OAuth2 authorization endpoint: https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/authorize
OAuth2 token endpoint: https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/token
Change the "Grant type" to "client_credentials".
Add the scope to both "Token endpoint config" and "Authorization endpoint config"; the scope in our case is the following: https://api.botframework.com/.default
Check "Auto-refresh token".
Click "Authorize". If everything was set correctly in the previous steps, you should get redirected back to SnapLogic with a valid access token.

( Screenshot: example of an already configured OAuth2 account. )

Create the Azure Bot Service and Connect to SnapLogic

With the Azure AD app ready, we can create the actual bot resource that will connect to Teams and our SnapLogic endpoint:

Add Azure Bot resource: In the Azure Portal, search for "Azure Bot" and select Azure Bot. Choose Create to make a new bot resource.

Configure Bot Settings: On the creation form, fill in:
Bot handle: A unique name for your bot.
Subscription and Resource Group: Select your Azure subscription and a resource group to contain the bot resource.
Location: Pick a region.
Pricing tier: Choose the Free tier ( F0 ) – it's more than sufficient for development and basic usage.
Microsoft App ID: Here, reuse the existing app registration we created. There should be an option to choose an existing app – provide the Client ID of the app registration. This links the bot resource to our AD app.
App type: Select Single Tenant since our app registration is single-tenant. You might also need to provide the App secret ( client secret ) for the bot here during creation.

Create the Bot: Click Review + create and then Create to provision the bot service. Azure will deploy the bot resource. Once completed, go to the resource's page.

Configure messaging endpoint: This is a crucial step – we must point the bot to our SnapLogic pipeline. In the Azure Bot resource settings, find the Settings menu and navigate to Configuration. Populate the field for Messaging endpoint ( the URL that the bot will call when a message is received ). Here, paste the Trigger URL of your WeatherAgent_AgentDriver pipeline. To get this URL: in SnapLogic, you would have already created the AgentDriver pipeline as a Triggered Task.
That generates an endpoint URL: https://elastic.snaplogic.com/api/1/rest/slschedule/<org>/<proj>/WeatherAgent_AgentDriver

Example endpoint with an appended authorization query param as bearer_token: https://elastic.snaplogic.com/api/1/rest/slschedule/myOrg/WeatherProject/WeatherAgent_AgentDriver?bearer_token=<bearer token>

Enter the URL exactly as given by SnapLogic, with the bearer token value appended, and save the configuration. Now, when a Teams user messages the bot, Azure will send an HTTPS POST to this SnapLogic URL.

Add Microsoft Teams channel: Still in the Azure Bot resource, go to Channels. Add a new channel and select Microsoft Teams. This step registers the bot with Teams so that Teams clients can use it.

Now our bot service is set up with the SnapLogic pipeline as its backend. The AgentDriver pipeline is effectively the bot's webhook. The Azure Bot resource handles authentication with Teams and will forward user messages to SnapLogic and relay SnapLogic's responses back to Teams.

Packaging the bot for Teams ( App manifest )

At this stage, the bot exists in Azure, but to use it in Teams we need to package it as a Teams app, especially if we want to share it within the organization. This involves creating a Teams app manifest and icons, then uploading it to Teams.

Prepare the Teams app manifest: The manifest is a JSON file describing your Teams app ( the bot ). Microsoft provides a schema for this, but you can download the manifest file from this example; make sure you replace the <APP ID> placeholders within it. The manifest file consists of:
App ID: Use the Bot's App ID ( Client ID of the registered app ).
App name, description: The name of the Teams app, for example "SnapLogic Agent".
Icons: Prepare two icon images for the bot – typically a color icon ( 192x192 PNG ) and an outline icon ( 32x32 PNG ). These will be used as the agent's avatar and in the Teams app catalog.
The manifest may also include information like developer info, version number, etc. If using the Teams Developer Portal, it can guide you through filling these fields and will handle the JSON for you. Just ensure the Bot ID and scopes are correctly set. ( A minimal, hedged sketch of such a manifest is shown at the end of this section. )

Combine manifest and icons: Once your manifest file and icons are ready, put all three into a .zip file. For example, a zip containing:
manifest.json
icon-color.png ( 192x192 )
icon-outline.png ( 32x32 )
Make sure the JSON inside the zip references the icon file names exactly as they are.

Upload the app to Teams: In Microsoft Teams, go to Apps > Manage your apps > Upload a custom app. Upload the zip file. Teams should recognize it as a new app. When added, it essentially registers the bot ID with the Teams client.

Test in Teams: Open a chat with your Weather Agent in Teams ( it should appear with the name and icon you provided ). Type a message, like "Hi" or a weather question: "What's the weather in New York?" The message will go out to Azure, which will call SnapLogic's endpoint. The SnapLogic pipelines will run through the logic ( as described in the first part ) and Azure will return the bot's reply to Teams. You should see the bot's answer appear in the chat. If you see the bot typing indicator first and then the answer, everything is working as expected!

( Screenshots: initial message and a response from the agent; typing indicator shown during agent execution; agent response after using the available tools. )

Now the Weather Agent is fully functional within Teams. It's essentially an AI-powered chat interface to a live weather API, all orchestrated by SnapLogic in the background.
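Coming back to the packaging step, a minimal manifest for a bot-only app looks roughly like the sketch below. Treat it as a hedged starting point rather than a complete file: the schema/manifest version, developer details, scopes, and description text are placeholder assumptions, and the Teams Developer Portal may add or require further fields for your tenant.

```json
{
  "$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.16/MicrosoftTeams.schema.json",
  "manifestVersion": "1.16",
  "version": "1.0.0",
  "id": "<APP ID>",
  "developer": {
    "name": "Your Company",
    "websiteUrl": "https://example.com",
    "privacyUrl": "https://example.com/privacy",
    "termsOfUseUrl": "https://example.com/terms"
  },
  "name": { "short": "SnapLogic Agent" },
  "description": {
    "short": "Weather queries answered by a SnapLogic-built AI agent",
    "full": "Conversational Weather Agent built with SnapLogic AgentCreator and exposed through Azure Bot Service."
  },
  "icons": { "color": "icon-color.png", "outline": "icon-outline.png" },
  "accentColor": "#FFFFFF",
  "bots": [
    {
      "botId": "<APP ID>",
      "scopes": [ "personal" ],
      "isNotificationOnly": false,
      "supportsFiles": false
    }
  ],
  "permissions": [ "identity" ],
  "validDomains": []
}
```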
Benefits of SnapLogic and Teams for Conversational Agentic Interfaces

Integrating SnapLogic AgentCreator with Microsoft Teams via Azure Bot Service has several benefits:

Fast prototyping: You can go from idea to a working bot in a very short time. There's no need to write custom bot code or host a web service – SnapLogic pipelines become your bot logic. In our example, building a weather query bot is as simple as wiring up a few Snaps and APIs. This accelerates development and allows quick iteration. Business users or integration developers can prototype new AI agents rapidly, responding to evolving needs without a heavy software development cycle.

No-code integration and simplicity: SnapLogic provides out-of-the-box connectors to hundreds of systems and services. By using SnapLogic as the engine, your bot can tap into any of these with minimal effort. Want a bot that not only gives weather but also looks up flight data or CRM info? It's just another pipeline. The AgentCreator framework handles the AI part, while the SnapLogic platform handles the integration part ( connecting to external APIs and data sources ). This synergy makes it simple to create powerful bots that perform real actions – far beyond what an LLM alone could do. And it's all done with low/no-code configuration.

Enhanced user experience: Delivering automation through a conversational interface in Teams meets users where they already collaborate. There's no new app to learn – users simply chat with a bot as if they're chatting with a colleague.

Reusability: The modular design of the Weather Agent pipelines can serve as a template for other agents by swapping out the tools and prompts. The integration pattern remains the same. This showcases the reusability of the AgentCreator approach across various use cases.

Conclusion

By combining SnapLogic's generative AI integration capabilities with Microsoft's bot framework and Teams, we created a powerful AI Agent without writing any code at all. We used SnapLogic AgentCreator Snaps to handle the AI reasoning and tool calling, and used Azure Bot Service to connect that logic to Microsoft Teams. The real win is how quickly and easily this was achieved. In a matter of days or even hours, an enterprise can prototype a conversational AI agent that ties into live data and services. The speed of development, combined with secure integration into everyday platforms like Teams, delivers real business value. In summary, SnapLogic and Teams enable a new class of enterprise applications: ones that talk to you, using AI to bridge human requests to automated actions. The Weather Agent is a simple example, but it highlights how fast prototyping, integration simplicity, and enhanced user experience come together. I encourage you to try building your own SnapLogic Agent – whether it's for weather, workflows, or anything else – and unleash the power of conversational AI in your organization. Happy integrating, and don't forget your umbrella if the Weather Agent says rain is on the way!

Recipes for Success with SnapLogic's GenAI App Builder: From Integration to Automation
For this episode of the Enterprise Alchemists podcast, Guy and Dominic invited Aaron Kesler and Roger Sramkoski to join them to discuss why SnapLogic's GenAI App Builder is the key to success with AI projects. Aaron is the Senior Product Manager for all things AI at SnapLogic, and Roger is a Senior Technical Product Marketing Manager focused on AI. We kept things concrete, discussing real-world results that early adopters have already been able to deliver by using SnapLogic's integration capabilities to power their new AI-driven experiences.

Multi Pipeline Function Generator - Simplifies Agent Worker Pipeline
This article introduces a new Snap called the "Multi Pipeline Function Generator". The Multi Pipeline Function Generator is designed to take existing pipelines in your SnapLogic project and turn their configurations into function definitions for LLM-based tool calling. It achieves the following:

It replaces the existing chain of function generators, thereby reducing the length of the worker pipeline.
Combined with our updates to the tool calling Snaps, this Snap allows multiple tool calling branches to be merged into a single branch, simplifying the pipeline structure.
With it, users can directly select the desired pipeline to be used as a tool from a dropdown menu. The Snap will automatically retrieve the tool name, purpose, and parameters from the pipeline properties to generate a function definition in the required format.

Problem Statement

Currently, the complexity of the agent worker pipeline increases linearly with the number of tools it has. The image below shows a worker pipeline with three tools. It requires three function generators and has three tool calling branches to execute different tools. This becomes problematic when the number of tools is large, as the pipeline becomes very long both horizontally and vertically.

(Screenshot: current Agent Worker pipeline with three tools.)

Solution Overview

One Multi Pipeline Function Generator Snap can replace multiple function generators (as long as the tool is a pipeline; it's not applicable if the tool is of another type, such as an OpenAPI or APIM service).

(Screenshot: new Agent Worker pipeline using the "Multi Pipeline Function Generator".)

Additionally, each outputted tool definition includes the corresponding pipeline's path. This allows downstream components (the Pipeline Execute Snap) to directly call the respective tool pipeline with the path, as shown below. The Multi Pipeline Function Generator Snap allows users to select multiple tool pipelines at once through dropdown menus. It reads the data necessary for generating the function definition from the pipeline properties. Of course, this requires that the data has been set up in the pipeline properties beforehand (this will be explained later). The image below shows the settings for this Snap.

(Screenshot: Snap settings.)

How to Use the Snap

To use this Snap, you need to:

Fill in the necessary information for generating the function definition in the properties of your tool pipeline:
- The pipeline's name will become the function name.
- The information under 'info -> purpose' will become the function description.
- Each key in your OpenAPI specification will be treated as a parameter, so you will ALSO need to add the expected input parameters to the list of pipeline parameters. Please note that in the current design, the pipeline parameters specified here are solely used for generating the function definition. When utilizing parameters within the pipeline, you do not need to retrieve their values using pipeline parameters. Instead, you can directly access the argument values from the input document, as determined by the model based on the function definition.

Then, you can select this pipeline as a tool from the dropdown menu in the Multi Pipeline Function Generator Snap.

In the second output of the tool calling Snap, we only need to keep one branch. In the Pipeline Execute Snap, we can directly use the expression $sl_tool_metadata.path to dynamically retrieve the path of the tool pipeline being called. See image below.

Below is an example of the pipeline properties for the tool 'CRM_insight' for your reference.
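As a rough illustration of how those pipeline properties end up being used, the generated tool definition for a pipeline like 'CRM_insight' could look something like the sketch below. The parameter name, description text, pipeline path, and the exact placement of the sl_tool_metadata field are assumptions for illustration, not the Snap's exact output format:

```json
{
  "type": "function",
  "function": {
    "name": "CRM_insight",
    "description": "Retrieve CRM insights for a given customer account.",
    "parameters": {
      "type": "object",
      "properties": {
        "customer_name": {
          "type": "string",
          "description": "Name of the customer account to look up"
        }
      },
      "required": [ "customer_name" ]
    }
  },
  "sl_tool_metadata": {
    "path": "/myOrg/MyProject/CRM_insight"
  }
}
```

Downstream, the Pipeline Execute Snap then resolves which pipeline to run dynamically via $sl_tool_metadata.path, which is what lets a single branch serve every tool.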
Below is the settings page of the original function generator Snap for comparison. As you can see, the information required is the same. The difference is that now we fill this information directly into the pipeline's properties.

(Screenshot: step 3 - reduce the number of branches.)

More Design Details

The tool calling Snap has also been updated to support $sl_tool_metadata.path, since the model's initial response doesn't include the pipeline path, which is needed downstream. After the tool calling Snap receives the tools the model needs to call, it adds the sl_tool_metadata containing the pipeline path to the model's response and outputs it to the Snap's second output view. This allows us to use it in the Pipeline Execute Snap later. This feature is supported for tool calling with the Amazon Bedrock, OpenAI, Azure OpenAI, and Google GenAI Snap packs.

The pipeline path can accept either a string or a list as input.

By turning on the 'Aggregate input' mode, multiple input documents can be combined into a single function definition document for output, similar to a Gate Snap. This can be useful in scenarios like the following: you use a SnapLogic List Snap to enumerate all pipelines within a project, then use a Filter Snap to select the desired tool pipelines, and finally use the Multi Pipeline Function Generator to convert this series of pipelines into function definitions.

Example Pipelines

Download here.

Conclusion

In summary, the Multi Pipeline Function Generator Snap streamlines the creation of function definitions for pipelines used as tools in agent worker pipelines. This significantly reduces pipeline length in scenarios with numerous tools, and by associating the tool information directly with the pipeline, it enhances overall manageability. Furthermore, its applicability extends across various providers.