Building an AI Agent with SnapLogic AgentCreator using OpenAI and Microsoft Teams - Part 1

j_angelevski

In this two-part blog series, I will cover how to create an AI Agent using SnapLogic's AgentCreator and integrate it with Microsoft Teams via the Azure Bot Service. The solution combines SnapLogic's AgentCreator, OpenAI ( GPT-4.1 mini ) and Microsoft Teams ( to provide a familiar chat interface ). In the first part, we will cover building the agent with SnapLogic pipelines and the AgentCreator pattern. In the second part, I will explain the Azure setup and Teams integration, and highlight the business benefits of conversational automation.

Designing the SnapLogic-Powered AI Agent

The example I've decided to go with is a simple Weather Agent that provides a conversational interface for weather queries, accessible directly within Teams. This improves the user experience by bringing information into a tool people already use, and it showcases how SnapLogic's AgentCreator can automate tasks through natural language.

How does SnapLogic help? SnapLogic’s new AgentCreator framework allows us to build an AI-driven agent that uses LLM intelligence combined with SnapLogic pipelines to fetch real data. The Weather Agent understands a user’s question, decides if it needs to call a function ( tool ), performs that action via a SnapLogic pipeline, and then responds conversationally with the result.

SnapLogic AgentCreator is purpose-built for such scenarios, enabling enterprises to create AI agents that can call pipelines and APIs autonomously. In our case, the agent will use a weather API through SnapLogic to get live data, meaning the agent's answers are not just based on static knowledge, but on real-time API calls.

SnapLogic AgentCreator Architecture Overview

We will focus on the AgentCreator pattern – a design that splits the agent's logic into two cooperative pipelines: an Agent Driver and an Agent Worker. This pattern is orchestrated by SnapLogic's Pipeline loop ( PipeLoop ) Snap, which allows iterative calls to a pipeline until a stop condition is met – in our case, until the conversation turn is complete or a maximum number of iterations has been reached. Here's how it works:

  • Agent Driver pipeline: This orchestrator pipeline receives the incoming chat message and manages the overall conversation loop. It sends the user's query ( plus any available chat history messages ) and the system prompt to the Agent Worker pipeline using the PipeLoop Snap, and keeps iterating until the LLM signals that it's done responding or the iteration limit is reached.

  • Agent Worker pipeline: This pipeline handles one iteration of LLM interaction. It presents the LLM with the conversation context and available tools, gets the LLM’s response ( which could be an answer or a function call request ), executes any required tool, and returns the result back to the Driver. The Worker is essentially where the "brain" of the agent lives – it decides if a tool call is needed and formats the answer.

This architecture allows the agent to have multi-turn reasoning. For example, if the user asks for weather, the LLM might first respond with a function call to get data, the Worker executes that call, and then the LLM produces a final answer in a second iteration. The PipeLoop Snap in the Driver pipeline detects whether another iteration is needed ( if the last LLM output was a partial result or tool request ) and loops again, or stops if the answer is complete.

j_angelevski_8-1748960709467.png

Key components of the Weather Agent architecture:

  • SnapLogic AgentCreator: The toolkit that makes this AI agent possible. It provides specialized Snaps for prompt handling, LLM integration ( OpenAI, Azure OpenAI, Amazon Bedrock, Google Gemini etc. ), and function-calling logic. SnapLogic AgentCreator enables designing AI agents with dynamic iteration and tool usage built in.

  • LLM ( Generative AI model ): The LLM powering the agent's understanding and response generation. In our implementation, an LLM ( such as OpenAI GPT ) interprets the user's request and decides when to call the available tools. SnapLogic's Tool Calling Snap interfaces with the LLM's API to get these decisions.

  • Weather API: The external data source for live weather information. The agent uses a real API ( https://open-meteo.com/ ) to fetch current weather details for the requested location.

  • Microsoft Teams & Azure Bot: This is the front-end interface where the user interacts with the bot, and the connector that sends messages between Teams and our SnapLogic pipelines.

Setting up an OpenAI API account

Because we are working with the GPT-4.1 mini API, we need to configure an OpenAI account in SnapLogic. The following steps assume you have already created an API key in your OpenAI dashboard.

  1. Navigate to the Manager tab under your project folder location and click the "+" button to create a new Account.
    j_angelevski_12-1748962245901.png
  2. Navigate to OpenAI LLM -> OpenAI API Key Account
    j_angelevski_13-1748962321332.png

     

  3. You can name it based on your needs or naming convention
  4. Copy and paste your API key from the OpenAI dashboard
    j_angelevski_17-1748962639161.png
  5. On the Agent Worker pipeline, open the "OpenAI Tool Calling" Snap and apply the newly created account
    j_angelevski_15-1748962528710.pngj_angelevski_16-1748962586737.png
  6. Save the pipeline. You have now successfully integrated the OpenAI API.
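Before using the account in the pipelines, you can optionally sanity-check the key outside SnapLogic with a minimal Chat Completions request. This is a sketch against OpenAI's public API ( replace the placeholder key with your own ):

POST https://api.openai.com/v1/chat/completions
Authorization: Bearer <YOUR_API_KEY>
Content-Type: application/json

{
  "model": "gpt-4.1-mini",
  "messages": [ { "role": "user", "content": "Say hello" } ]
}

A valid key returns a JSON document with a choices array containing the assistant's reply; a 401 error means the key was copied incorrectly.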

Weather Agent pipelines in SnapLogic

I've built a set of SnapLogic pipelines to implement the Weather Agent logic using AgentCreator. Each pipeline has a specific role in the overall chatbot workflow:

  • WeatherAgent_AgentDriver: The orchestrator for the agent. It is triggered by incoming HTTP requests from the Azure Bot Service ( when a user sends a Teams message ). The AgentDriver parses the incoming message, sends a quick "typing…" indicator to the user ( to simulate the bot typing ), and then uses a PipeLoop Snap to invoke the AgentWorker pipeline. It supplies the user's question, the system prompt and any prior context, and keeps iterating until the bot's answer is complete. It also handles clearing chat history: if the user sends a specific message such as "CLEAR_CHAT" in the Teams conversation, the stored history is reset and the conversation starts fresh.

    j_angelevski_11-1748961882607.png
  • WeatherAgent_AgentWorker: The tool-calling pipeline ( Agent Worker ) that interacts with the LLM. On each iteration, it takes the conversation messages ( system prompt, user query, and any accumulated dialogue history ) from the Driver. The flow of the Agent Worker for a Weather Agent:

    • defines what tools ( functions ) the LLM is allowed to call – in this case, the location and weather lookup tools

    • invokes the LLM via a Tool Calling Snap, passing it the current conversation and available function definitions

    • processes the LLM’s response – if the LLM requests a function call ( "get weather for London" ), the pipeline routes that request to the appropriate tool pipeline

    • once the tool returns data, the Worker formats the result using a Function Result Generator Snap and appends it to the conversation via a Message Appender Snap

    • returns the updated conversation with any LLM answer or tool results back to the Driver. The AgentWorker essentially handles one round of "LLM thinking"

       
      image-20250527232037179.png
  • WeatherAgent_GetLocation: A tool that the agent can use to convert a user's input location ( city name, etc. ) into a standardized form or coordinates ( latitude and longitude ). It queries an open-meteo API to retrieve latitude and longitude data based on the given location ( a sample geocoding request is sketched after this list ). The system prompt instructs the agent that if the tool returns more than one match, it should ask the user which location they meant - keeping a human-in-the-loop for such scenarios. For example, if the user requests weather for "Springfield", the agent first calls the GetLocation tool and, if the tool responds with multiple locations, the agent will list them ( for example, Springfield, MA; Springfield, IL; Springfield, MO ) and ask the user to specify which location they meant before proceeding. Once the location is confirmed, the agent passes the coordinates to the GetWeather tool.

    image-20250527232106249.png
  • WeatherAgent_GetWeather: The tool pipeline that actually fetches current weather data from an external API. This pipeline is invoked when the LLM decides it needs the weather info. It takes an input latitude and longitude and calls a weather API. In our case, I've used the open-meteo service, which returns a JSON containing weather details for a given location ( a sample request and response are sketched after this list ). The pipeline consists of an HTTP Client Snap ( configured to call the weather API endpoint with the location ) and a Mapper Snap to shape the API's JSON response into the format expected by the Agent Worker pipeline. Once the data is retrieved ( temperature, conditions, etc. ), this pipeline's output is fed back into the Agent Worker ( via the Function Result Generator ) so the LLM can use it to compose a user-friendly answer.

    image-20250527232124716.png
  • MessageEndpoint_ChatHistory: This pipeline handles conversation history ( simple memory ) for each user or conversation. Because our agent may be used by multiple users ( and we want each user's chat to be independent ), we maintain a user-specific chat history. In this example the pipeline uses SLDB storage to store the history file, but in a production environment the ChatHistory pipeline could use a database Snap to store chat history, keyed by user or conversation ID. Each time a new message comes in, the AgentDriver will call this pipeline to fetch recent context ( so the bot can "remember" what was said before ). This ensures continuity in the conversation – for example, if the user follows up with "What about tomorrow?", the bot can refer to the previous question's context stored in history. For simplicity, one could also maintain context in-memory during a single conversation session, but persisting it via this pipeline allows context across multiple sessions or a longer pause.

    image-20250527232157895.png
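For reference, here is roughly what the two open-meteo calls look like. These are abbreviated sketches based on open-meteo's public geocoding and forecast APIs; the field lists are trimmed, so consult the open-meteo documentation for the full schema.

// WeatherAgent_GetLocation – geocoding lookup ( HTTP Client Snap )
GET https://geocoding-api.open-meteo.com/v1/search?name=Springfield&count=5

{
  "results": [
    { "name": "Springfield", "latitude": 39.80172, "longitude": -89.64371, "admin1": "Illinois", "country_code": "US" },
    { "name": "Springfield", "latitude": 42.10148, "longitude": -72.58981, "admin1": "Massachusetts", "country_code": "US" }
  ]
}

// WeatherAgent_GetWeather – current conditions ( HTTP Client Snap )
GET https://api.open-meteo.com/v1/forecast?latitude=37.7749&longitude=-122.4194&current_weather=true

{
  "current_weather": { "temperature": 18.2, "windspeed": 11.4, "weathercode": 0 }
}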

SnapLogic introduced specialized Snaps for LLM function calling to coordinate this process. The Function Generator Snap defines the available tools that the LLM agent can use. The Tool Calling Snap sends the user’s query and function definitions to the LLM model and gets back either an answer or a function call request ( and any intermediate messages ). If a function call is requested, SnapLogic uses a Pipeline Execute or similar mechanism to run the corresponding pipeline. The Function Result Generator then formats the pipeline’s output into a form the LLM can understand. At the end, the Message Appender Snap adds the function result into the conversation history, so the LLM can take that into account in the next response. This chain of Snaps allows the agent to decide between answering directly or using a tool, all within a no-code pipeline.
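As an illustration, the tool definition that the Function Generator Snap supplies for GetWeather would look roughly like the standard OpenAI function-calling schema below ( the field names follow OpenAI's API; the exact output format of the Snap may differ ):

{
  "type": "function",
  "function": {
    "name": "WeatherAgent_GetWeather",
    "description": "Fetches the current weather for a location given its latitude and longitude.",
    "parameters": {
      "type": "object",
      "properties": {
        "latitude": { "type": "number", "description": "Latitude of the location" },
        "longitude": { "type": "number", "description": "Longitude of the location" }
      },
      "required": [ "latitude", "longitude" ]
    }
  }
}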

Sample interaction from user prompt to answer

To make the above more concrete, let’s walk through the flow of a sample interaction step by step:

  1. User asks a question in Teams: "What's the weather in San Francisco right now?" This message is sent from Teams to the Azure Bot Service, which relays it as an HTTP POST to our SnapLogic AgentDriver pipeline’s endpoint ( the messaging endpoint URL we will configure in the second part ).

  2. AgentDriver pipeline receives the message: The WeatherAgent_AgentDriver captures the incoming JSON payload from Teams. This payload contains the user's message text and metadata ( like user ID, conversation ID, etc. ). The pipeline first responds immediately with a typing indicator to Teams. We configured a small branch in the pipeline to output a "typing" activity message back to the Bot Service, so that Teams shows the bot is typing - implemented mainly to enhance UX while the user waits for an answer ( a sketch of the activity payload follows ).
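The "typing" indicator is a standard Bot Framework activity. A minimal sketch of the payload the branch posts back to the Bot Service ( using the serviceUrl and conversation ID from the incoming activity; the IDs below are placeholders ):

POST {serviceUrl}/v3/conversations/{conversationId}/activities

{
  "type": "typing",
  "from": { "id": "<bot-id>" },
  "recipient": { "id": "<user-id>" },
  "conversation": { "id": "<conversationId>" },
  "replyToId": "<incoming-activity-id>"
}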

  3. Preparing the prompt and context: The AgentDriver then prepares the initial prompt for the LLM. Typically, we include a system prompt ( defining the bot’s role/behavior ) and the user prompt. If we have prior conversation history ( from MessageEndpoint_ChatHistory for this user ), we would also include recent messages to give context. All this is packaged into a messages array that will be sent to the LLM.

  4. AgentDriver invokes AgentWorker via PipeLoop: The Driver uses a PipeLoop Snap configured to call the WeatherAgent_AgentWorker pipeline. It passes the prepared message payload as input. The PipeLoop is set with a stop condition based on the LLM's response status – it will loop until the LLM indicates the conversation turn is completed or the iteration limit has been reached ( for example, OpenAI returns a finish_reason of "stop" when it has a final answer, or "tool_calls" when it wants to call a function ).
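A stop condition for this setup could be expressed along these lines ( a hypothetical sketch in SnapLogic's expression language – the actual path to finish_reason depends on how the Worker shapes its output document ):

$choices[0].finish_reason == "stop"

combined with the PipeLoop Snap's iteration limit setting ( for example, a maximum of 5 iterations ) as a safety net against endless tool loops.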

  5. AgentWorker ( 1st iteration - tool decision ): In this first iteration, the Worker pipeline receives the messages ( system + user ). Inside the Worker:

    • A Function Generator Snap provides the definitions of the GetLocation and GetWeather tools, including their names, descriptions, and parameters. This tells the LLM what each tool does and how to call it.

    • The Tool Calling Snap now sends the conversation ( so far just the user question and system role ) along with the available tool definitions to the LLM.

    • The LLM evaluates the user’s request in the context of being a weather assistant. In our scenario, we expect it will decide it needs to use the tool to get the answer. Instead of replying with text, the LLM responds with a function call request. For example, the LLM might return a JSON like:

[
    {
        "messages":
        [
            {
                "content":"You are an AI Agent with the capability to retrieve weather data for a given location...",
                "sl_role":"SYSTEM"
            },
            {
                "content":"Hello!",
                "sl_role":"USER"
            },
            {
                "content":"Hello! How can I assist you today?",
                "role":"assistant"
            },
            {
                "content":"What's the weather in San Francisco?",
                "sl_role":"USER"
            }
        ]
    }
]

// OpenAI Tool Calling Snap response with finish_reason as "tool_calls"
...
{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_nW2dRHuCyNgi3f3oXCnrLBNw",
            "type": "function",
            "function": {
              "name": "WeatherAgent_GetLocation",
              "arguments": {
                "location": "San Francisco"
              }
            }
          }
        ],
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "tool_calls"
    }
  ]
}
...

The Tool Calling Snap outputs this structured decision. ( Under the hood, the Snap outputs it on a Tool Calls view when a function call is requested ).

  • The pipeline splits into two parallel paths at this point:

    • One path captures the LLM’s partial response ( which indicates a tool is being called ) and routes it into a Message Appender. This ensures that the conversation history now includes an assistant turn that is essentially a tool call.

    • The other path takes the function call details and invokes the corresponding tool. In SnapLogic, we use a Pipeline Execute Snap to call the requested tool pipeline – here first WeatherAgent_GetLocation to resolve "San Francisco" into coordinates, whose result is then used to call WeatherAgent_GetWeather. We pass the location ( "San Francisco" ) from the LLM's request into the tool pipeline as an input parameter ( careful, it is not a pipeline parameter ).

  • WeatherAgent_GetWeather executes: This pipeline calls the external weather API with the given coordinates. It gets back the weather data ( say the API returns that it's 18°C and sunny ). The SnapLogic pipeline returns this data to the AgentWorker pipeline. On the next iteration, the messages array would look something like below:

    image-20250528112404957.png
  • AgentWorker ( function result return ): With the weather data now in hand, a Function Result Generator Snap in the Worker takes the result and packages it in the format the LLM expects for a function result. Essentially, it creates the content that will be injected into the conversation as the function's return value ( a sketch of such a tool-result message follows this list ). The Message Appender Snap then adds this result to the conversation history array as a new message ( marked in a way that the LLM knows it's the function's output ). Now the Worker's first iteration ends, and it outputs the updated messages array ( which now contains: the user's question, the assistant's tool call request, and the raw weather data from the tool ).

  • AgentDriver ( loop decision ): The Driver pipeline receives the output of the Worker's iteration. Since the last LLM action was a function call ( not a final answer ), the stop condition is not met. Thus, the PipeLoop triggers the next iteration, sending the updated conversation ( which now includes the weather info ) back into the AgentWorker for another round.

  • AgentWorker ( 2nd iteration - final answer ): In this iteration, the Worker pipeline again calls the Tool Calling Snap, but now the messages array includes the results of the weather function. The LLM gets to see the weather data that was fetched. Typically, the LLM will now complete the task by formulating a human-friendly answer. For example, it might respond: "It's currently 18°C and sunny in San Francisco." This time, the LLM's answer is a normal completion with no further function calls needed. The Tool Calling Snap outputs the assistant's answer text and a finish_reason indicating completion ( like "stop" ). The Worker appends this answer to the message history and outputs the final messages payload.

  • AgentDriver ( completion ): The Driver receives the final output from the Worker's second iteration. The PipeLoop Snap sees that the LLM signaled no more steps ( finish condition met ), so it stops looping. Now the AgentDriver takes the final assistant message ( the weather answer ) and sends it as the bot's response back to Teams via the HTTP response. The pipeline extracts just the answer text to return to the user.

  • User sees the answer in Teams: The user’s Teams chat now displays the Weather Agent's reply, for example: "It’s currently 18°C and sunny in San Francisco." The conversation context ( question and answer ) can be stored via the ChatHistory pipeline for future reference. From the user’s perspective, they asked a question and the bot answered naturally, with only a brief delay during which they saw the bot "typing" indicator.
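For reference, in OpenAI's chat format the tool result that the Function Result Generator produces and the Message Appender adds to the conversation looks roughly like this ( this is OpenAI's wire format, reusing the tool call ID from the earlier response; SnapLogic's internal message representation may differ, and the content payload here is illustrative ):

{
  "role": "tool",
  "tool_call_id": "call_nW2dRHuCyNgi3f3oXCnrLBNw",
  "content": "{ \"temperature\": 18, \"conditions\": \"sunny\", \"unit\": \"celsius\" }"
}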

Throughout this interaction, the typing indicator helps reassure the user that the agent is working on the request. The user-specific chat history ensures that if the user asks a follow-up like "How about tomorrow?", the agent can understand that "tomorrow" refers to the weather in San Francisco, continuing the context ( this involves the LLM and pipelines using the stored history to recall the city from the prior turn ).

This completes the first part, which covered how the SnapLogic pipelines and the AgentCreator framework enable an AI-powered chatbot to use tools and deliver real-time info. We saw how the Agent Driver + Worker architecture ( with iterative PipeLoop execution ) allows interactions where the LLM can call SnapLogic pipelines as functions. The no-code SnapLogic approach made it possible to integrate an LLM without writing custom code – we simply configured Snaps and pipelines. We now have a working AI Agent in SnapLogic; however, we are still missing the chatbot experience. In the second part, we'll shift to the integration with Microsoft Teams and Azure, to see how this pipeline is exposed as a bot endpoint and what steps are needed to deploy it in a real chat environment.
