Building an AI Agent with SnapLogic AgentCreator using OpenAI and Microsoft Teams - Part 1

In this two-part blog series, I will cover how to create an AI Agent using SnapLogic's AgentCreator and integrate it with Microsoft Teams via Azure Bot Service. The solution combines SnapLogic's AgentCreator, OpenAI ( GPT-4.1 mini ) and Microsoft Teams ( to provide a familiar chat interface ). In the first part, we will cover building the agent with SnapLogic pipelines and the AgentCreator pattern. In the second part, I will explain the Azure setup and Teams integration, and highlight the business benefits of conversational automation.

Designing the SnapLogic-Powered AI Agent

The example I've decided to go with is a simple Weather Agent that provides a conversational interface for weather queries, accessible directly within Teams. This improves user experience by integrating information into the tools people already use and showcases how SnapLogic's AgentCreator can automate tasks through natural language.

How does SnapLogic help? SnapLogic's new AgentCreator framework allows us to build an AI-driven agent that combines LLM intelligence with SnapLogic pipelines to fetch real data. The Weather Agent understands a user's question, decides if it needs to call a function ( tool ), performs that action via a SnapLogic pipeline, and then responds conversationally with the result. SnapLogic AgentCreator is purpose-built for such scenarios, enabling enterprises to create AI agents that can call pipelines and APIs autonomously. In our case, the agent will use a weather API through SnapLogic to get live data, meaning the agent's answers are based not just on static knowledge but on real-time API calls.

SnapLogic AgentCreator Architecture Overview

We will focus on the AgentCreator pattern – a design that splits the agent's logic into two cooperative pipelines: an Agent Driver and an Agent Worker. This pattern is orchestrated by SnapLogic's Pipeline loop ( PipeLoop ) Snap, which allows iterative calls to a pipeline until a certain condition is met – in our case, until the conversation turn is complete or the iteration limit is reached. Here's how it works:

- Agent Driver pipeline: This orchestrator pipeline receives the incoming chat message and manages the overall conversation loop. It sends the user's query ( plus any available chat history ) and the system prompt to the Agent Worker pipeline using the PipeLoop Snap, and keeps iterating until the LLM signals that it's done responding or the iteration limit is reached.
- Agent Worker pipeline: This pipeline handles one iteration of LLM interaction. It presents the LLM with the conversation context and available tools, gets the LLM's response ( which could be an answer or a function call request ), executes any required tool, and returns the result back to the Driver. The Worker is essentially where the "brain" of the agent lives – it decides if a tool call is needed and formats the answer.

This architecture allows the agent to perform multi-turn reasoning. For example, if the user asks for weather, the LLM might first respond with a function call to get data, the Worker executes that call, and then the LLM produces a final answer in a second iteration.
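To make the Driver + Worker loop concrete, here is a minimal Python sketch of the control flow that the PipeLoop Snap provides. This is purely illustrative – in SnapLogic the loop is configured rather than coded – and the agent_worker stub, the iteration limit, and the finish_reason values are assumptions modeled on OpenAI-style APIs.

    MAX_ITERATIONS = 5  # assumed limit; configured on the PipeLoop Snap in practice

    def agent_worker(messages):
        # Stand-in for one run of the Agent Worker pipeline: it would call the LLM,
        # execute a requested tool, append the result, and report a finish reason.
        raise NotImplementedError("handled by the WeatherAgent_AgentWorker pipeline")

    def agent_driver(system_prompt, chat_history, user_message):
        messages = [{"role": "system", "content": system_prompt},
                    *chat_history,
                    {"role": "user", "content": user_message}]
        for _ in range(MAX_ITERATIONS):
            messages, finish_reason = agent_worker(messages)
            if finish_reason == "stop":   # final answer produced - stop looping
                break                     # "function_call" means loop again
        return messages[-1]["content"]    # the assistant's reply sent back to Teams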
The PipeLoop Snap in the Driver pipeline detects whether another iteration is needed ( if the last LLM output was a partial result or tool request ) and loops again, or stops if the answer is complete.

Key components of the Weather Agent architecture:

- SnapLogic AgentCreator: The toolkit that makes this AI agent possible. It provides specialized Snaps for prompt handling, LLM integration ( OpenAI, Azure OpenAI, Amazon Bedrock, Google Gemini, etc. ), and function-calling logic. SnapLogic AgentCreator enables designing AI agents with dynamic iteration and tool usage built in.
- LLM ( generative AI model ): The LLM powering the agent's understanding and response generation. In our implementation, an LLM ( such as OpenAI GPT ) interprets the user's request and decides when to call the available tools. SnapLogic's Tool Calling Snap interfaces with the LLM's API to get these decisions.
- Weather API: The external data source for live weather information. The agent uses a real API ( https://open-meteo.com/ ) to fetch current weather details for the requested location.
- Microsoft Teams & Azure Bot: The front-end interface where the user interacts with the bot, and the connector that relays messages between Teams and our SnapLogic pipelines.

Setting up an OpenAI API account

Because we are working with the GPT-4.1 mini API, we need to configure an OpenAI account in SnapLogic. This assumes you have already created an API key in your OpenAI dashboard.

1. Navigate to the Manager tab under your project folder location and click the "+" button to create a new Account.
2. Navigate to OpenAI LLM -> OpenAI API Key Account.
3. Name it based on your needs or naming convention.
4. Copy and paste your API key from the OpenAI dashboard.
5. On the Agent Worker pipeline, open the "OpenAI Tool Calling" Snap and apply the newly created account.
6. Save the pipeline. You have now successfully integrated the OpenAI API.

Weather Agent pipelines in SnapLogic

I've built a set of SnapLogic pipelines to implement the Weather Agent logic using AgentCreator. Each pipeline has a specific role in the overall chatbot workflow:

WeatherAgent_AgentDriver: The orchestrator for the agent. It is triggered by incoming HTTP requests from the Azure Bot Service ( when a user sends a Teams message ). The AgentDriver parses the incoming message, sends a quick "typing…" indicator to the user ( to simulate the bot typing ), and then uses a PipeLoop Snap to invoke the AgentWorker pipeline. It supplies the user's question, the system prompt and any prior context, and keeps iterating until the bot's answer is complete. It also clears the chat history if the user writes a specific message like "CLEAR_CHAT" in the Teams conversation, refreshing the conversation.

WeatherAgent_AgentWorker: The tool-calling pipeline ( Agent Worker ) that interacts with the LLM. On each iteration, it takes the conversation messages ( system prompt, user query, and any accumulated dialogue history ) from the Driver.
The flow of the Agent Worker for a Weather Agent:

- defines what tools ( functions ) the LLM is allowed to call – in this case, the location and weather lookup tools
- invokes the LLM via a Tool Calling Snap, passing it the current conversation and available function definitions
- processes the LLM's response – if the LLM requests a function call ( "get weather for London" ), the pipeline routes that request to the appropriate tool pipeline
- once the tool returns data, formats the result using a Function Result Generator Snap and appends it to the conversation via a Message Appender Snap
- returns the updated conversation, with any LLM answer or tool results, back to the Driver

The AgentWorker essentially handles one round of "LLM thinking".

WeatherAgent_GetLocation: A tool that the agent can use to convert a user's input location ( city name, etc. ) into a standardized form or coordinates ( latitude and longitude ). It queries an open-meteo API to retrieve latitude and longitude data for the given location. The system prompt instructs the agent that if the tool returns more than one match, it should ask the user which location they meant – keeping a human in the loop for such scenarios. For example, if the user requests weather for "Springfield", the agent first calls the GetLocation tool, and if the tool responds with multiple locations, the agent will list them ( for example, Springfield, MA; Springfield, IL; Springfield, MO ) and ask the user to specify which one they meant before proceeding. Once the location is confirmed, the agent passes the coordinates to the GetWeather tool.

WeatherAgent_GetWeather: The tool pipeline that actually fetches current weather data from an external API. This pipeline is invoked when the LLM decides it needs the weather info. It takes an input latitude and longitude and calls a weather API. In our case, I've used the open-meteo service, which returns a JSON containing weather details for a given location. The pipeline consists of an HTTP Client Snap ( configured to call the weather API endpoint with the location ) and a Mapper Snap to shape the API's JSON response into the format expected by the Agent Worker pipeline ( a Python sketch of both tool calls follows below ). Once the data is retrieved ( temperature, conditions, etc. ), this pipeline's output is fed back into the Agent Worker ( via the Function Result Generator ) so the LLM can use it to compose a user-friendly answer.

MessageEndpoint_ChatHistory: This pipeline handles conversation history ( simple memory ) for each user or conversation. Because our agent may be used by multiple users ( and we want each user's chat to be independent ), we maintain a user-specific chat history. In this example the pipeline uses SLDB storage to store the file, but in a production environment the ChatHistory pipeline could use a database Snap to store chat history, keyed by user or conversation ID. Each time a new message comes in, the AgentDriver calls this pipeline to fetch recent context ( so the bot can "remember" what was said before ). This ensures continuity in the conversation – for example, if the user follows up with "What about tomorrow?", the bot can refer to the previous question's context stored in history. For simplicity, one could also maintain context in-memory during a single conversation session, but persisting it via this pipeline allows context to span multiple sessions or a longer pause.
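For illustration, here is roughly what the two tool pipelines do, expressed as a Python sketch rather than Snaps. The open-meteo endpoints below are the public ones, but treat the exact parameter and field names as assumptions to verify against the API documentation.

    import requests

    def get_location(name):
        # Roughly WeatherAgent_GetLocation: free-text location -> candidate coordinates.
        resp = requests.get("https://geocoding-api.open-meteo.com/v1/search",
                            params={"name": name, "count": 5})
        resp.raise_for_status()
        return [{"name": m["name"], "region": m.get("admin1"),
                 "latitude": m["latitude"], "longitude": m["longitude"]}
                for m in resp.json().get("results", [])]

    def get_weather(latitude, longitude):
        # Roughly WeatherAgent_GetWeather: coordinates -> current weather conditions.
        resp = requests.get("https://api.open-meteo.com/v1/forecast",
                            params={"latitude": latitude, "longitude": longitude,
                                    "current_weather": "true"})
        resp.raise_for_status()
        return resp.json()["current_weather"]  # e.g. {"temperature": 18.0, ...}

If get_location returns more than one candidate, the system prompt tells the agent to ask the user to disambiguate before the coordinates are handed to get_weather.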
SnapLogic introduced specialized Snaps for LLM function calling to coordinate this process. The Function Generator Snap defines the available tools that the LLM agent can use. The Tool Calling Snap sends the user's query and function definitions to the LLM model and gets back either an answer or a function call request ( and any intermediate messages ). If a function call is requested, SnapLogic uses a Pipeline Execute or similar mechanism to run the corresponding pipeline. The Function Result Generator then formats the pipeline's output into a form the LLM can understand. At the end, the Message Appender Snap adds the function result to the conversation history, so the LLM can take it into account in the next response. This chain of Snaps allows the agent to decide between answering directly or using a tool, all within a no-code pipeline.

Sample interaction from user prompt to answer

To make the above more concrete, let's walk through the flow of a sample interaction step by step:

User asks a question in Teams: "What's the weather in San Francisco right now?" This message is sent from Teams to the Azure Bot Service, which relays it as an HTTP POST to our SnapLogic AgentDriver pipeline's endpoint ( the messaging endpoint URL we will configure in the second part ).

AgentDriver pipeline receives the message: The WeatherAgent_AgentDriver captures the incoming JSON payload from Teams. This payload contains the user's message text and metadata ( like user ID, conversation ID, etc. ). The pipeline first responds immediately with a typing indicator to Teams. We configured a small branch in the pipeline to output a "typing" activity message back to the Bot Service, so that Teams shows the bot is typing – implemented mainly to enhance UX while the user waits for an answer.

Preparing the prompt and context: The AgentDriver then prepares the initial prompt for the LLM. Typically, we include a system prompt ( defining the bot's role/behavior ) and the user prompt. If we have prior conversation history ( from MessageEndpoint_ChatHistory for this user ), we also include recent messages to give context. All this is packaged into a messages array that will be sent to the LLM.

AgentDriver invokes AgentWorker via PipeLoop: The Driver uses a PipeLoop Snap configured to call the WeatherAgent_AgentWorker pipeline, passing the prepared message payload as input. The PipeLoop is set with a stop condition based on the LLM's response status – it will loop until the LLM indicates the conversation turn is completed or the iteration limit has been reached ( for example, OpenAI returns a finish_reason of "stop" when it has a final answer, or "function_call" when it wants to call a function ).

AgentWorker ( 1st iteration - tool decision ): In this first iteration, the Worker pipeline receives the messages ( system + user ). Inside the Worker: A Function Generator Snap provides the definitions of the GetLocation and GetWeather tools, including their names, descriptions, and parameters. This tells the LLM what each tool does and how to call it. The Tool Calling Snap then sends the conversation ( so far just the user question and system role ) along with the available tool definitions to the LLM. The LLM evaluates the user's request in the context of being a weather assistant. In our scenario, we expect it will decide it needs to use a tool to get the answer. Instead of replying with text, the LLM responds with a function call request, along the lines of the sketch below. The Tool Calling Snap outputs this structured decision.
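As an illustration, the tool definition supplied by the Function Generator Snap and the decision returned through the Tool Calling Snap might look roughly like this ( hypothetical OpenAI-style shapes shown as Python dicts; the real tool names and fields depend on your configuration and model version ):

    # Hypothetical tool definition produced by the Function Generator Snap
    get_weather_tool = {
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }

    # Hypothetical decision returned through the Tool Calling Snap
    llm_decision = {
        "finish_reason": "function_call",
        "message": {
            "role": "assistant",
            "content": None,
            "function_call": {
                "name": "get_weather",
                "arguments": '{"location": "San Francisco"}',
            },
        },
    }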
( Under the hood, the Snap outputs it on a Tool Calls view when a function call is requested. ) The pipeline splits into two parallel paths at this point: One path captures the LLM's partial response ( which indicates a tool is being called ) and routes it into a Message Appender. This ensures that the conversation history now includes an assistant turn that is essentially a tool call. The other path takes the function call details and invokes the corresponding tool. In SnapLogic, we use a Pipeline Execute Snap to call the WeatherAgent_GetWeather pipeline. We pass the location ( "San Francisco" ) from the LLM's request into that pipeline as an input parameter ( careful, it is not a pipeline parameter ).

WeatherAgent_GetWeather executes: This pipeline calls the external weather API with the given location. It gets back weather data ( say the API returns that it's 18°C and sunny ). The SnapLogic pipeline returns this data to the AgentWorker pipeline. On the next iteration, the messages array will include this tool result alongside the original question ( see the sketch of the full message history after this walkthrough ).

AgentWorker ( function result return ): With the weather data now in hand, a Function Result Generator Snap in the Worker takes the result and packages it in the format the LLM expects for a function result. Essentially, it creates the content that will be injected into the conversation as the function's return value. The Message Appender Snap then adds this result to the conversation history array as a new assistant message ( but marked in a way that tells the LLM it is the function's output ). Now the Worker's first iteration ends, and it outputs the updated messages array ( which now contains: the user's question, the assistant's "thinking/confirmation" message, and the raw weather data from the tool ).

AgentDriver ( loop decision ): The Driver pipeline receives the output of the Worker's iteration. Since the last LLM action was a function call ( not a final answer ), the stop condition is not met. Thus, the PipeLoop triggers the next iteration, sending the updated conversation ( which now includes the weather info ) back into the AgentWorker for another round.

AgentWorker ( 2nd iteration - final answer ): In this iteration, the Worker pipeline again calls the Tool Calling Snap, but now the messages array includes the results of the weather function. The LLM gets to see the weather data that was fetched. Typically, the LLM will now complete the task by formulating a human-friendly answer. For example, it might respond: "It's currently 18°C and sunny in San Francisco." This time, the LLM's answer is a normal completion with no further function calls needed. The Tool Calling Snap outputs the assistant's answer text and a finish_reason indicating completion ( like "stop" ). The Worker appends this answer to the message history and outputs the final messages payload.

AgentDriver ( completion ): The Driver receives the final output from the Worker's second iteration. The PipeLoop Snap sees that the LLM signaled no more steps ( finish condition met ), so it stops looping. Now the AgentDriver takes the final assistant message ( the weather answer ) and sends it as the bot's response back to Teams via the HTTP response. The pipeline extracts just the answer text to return to the user.

User sees the answer in Teams: The user's Teams chat now displays the Weather Agent's reply, for example: "It's currently 18°C and sunny in San Francisco." The conversation context ( question and answer ) can be stored via the ChatHistory pipeline for future reference.
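Here is the promised sketch of how the messages array might look by the end of this interaction. The role and field conventions are OpenAI-style assumptions; other providers differ.

    messages = [
        {"role": "system", "content": "You are a helpful weather assistant..."},
        {"role": "user", "content": "What's the weather in San Francisco right now?"},
        # 1st iteration: the assistant's turn is a tool call, not text
        {"role": "assistant", "content": None,
         "function_call": {"name": "get_weather",
                           "arguments": '{"location": "San Francisco"}'}},
        # the tool's output, appended via Function Result Generator / Message Appender
        {"role": "function", "name": "get_weather",
         "content": '{"temperature": 18, "conditions": "sunny"}'},
        # 2nd iteration: the final, human-friendly answer
        {"role": "assistant",
         "content": "It's currently 18°C and sunny in San Francisco."},
    ]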
From the user's perspective, they asked a question and the bot answered naturally, with only a brief delay during which they saw the bot "typing" indicator. Throughout this interaction, the typing indicator helps reassure the user that the agent is working on the request. The user-specific chat history ensures that if the user asks a follow-up like "How about tomorrow?", the agent can understand that "tomorrow" refers to the weather in San Francisco, continuing the context ( this involves the LLM and pipelines using the stored history to know the city from the prior turn ).

This completes the first part, which covered how the SnapLogic pipelines and AgentCreator framework enable an AI-powered chatbot to use tools and deliver real-time info. We saw how the Agent Driver + Worker architecture ( with iterative PipeLoop execution ) allows interactions where the LLM can call SnapLogic pipelines as functions. The no-code SnapLogic approach made it possible to integrate an LLM without writing custom code – we simply configured Snaps and pipelines. We now have a working AI Agent that we can use in SnapLogic; however, we are still missing the chatbot experience. In the second part, we'll shift to the integration with Microsoft Teams and Azure, to see how this pipeline is exposed as a bot endpoint and what steps are needed to deploy it in a real chat environment.

Building an AI Agent with SnapLogic AgentCreator using OpenAI and Microsoft Teams - Part 2

Integrating the AI Agent with Microsoft Teams via Azure Bot Service

The first part covered the creation of our agent's architecture using SnapLogic pipelines and AgentCreator. Now, we focus on connecting that pipeline to Microsoft Teams so end users can chat with it. This involves creating and configuring the Azure Bot Service as a bridge between Teams and our SnapLogic pipelines. We will walk through the prerequisites and setup.

Prerequisites for the Azure Bot Integration

To integrate the SnapLogic agent with Teams, ensure you have the following prerequisites in place:

- SnapLogic AgentCreator and pipelines: A SnapLogic environment where AgentCreator is enabled. The Weather Agent pipelines can be used as a working example. You'll also need to expose the AgentDriver pipeline as a Triggered Task ( to obtain an endpoint URL accessible by the bot ).
- SnapLogic OAuth2 Account: An OAuth2 account which will be used in an HTTP Client Snap to send the assistant's response back to the user. It is also used for simulating the "typing" indicator in the Teams chat between tool usage.
- Microsoft 365 Tenant with Teams: Access to a Microsoft tenant where you have permission to register applications and upload custom Teams apps. You'll need a Teams environment to test the bot ( this could be a corporate tenant or a developer tenant ).
- Azure Subscription: An Azure account with an active subscription to create resources ( specifically, an Azure Bot Service ). Also, ensure you have rights to create an Azure Bot Channels Registration or Azure Bot resource.
- Azure AD App Registration: Credentials for the bot. We will register an application in Azure Active Directory to represent our bot ( this provides a Client ID and Client Secret that will be used by the Bot Service to authenticate ).
- Azure Bot Service resource: We will create an Azure Bot which will tie together the app registration and our messaging endpoint, and allow adding Teams as a channel.
Register an App in Azure AD for the Bot

The first step is to register an Azure AD application that will identify our bot and provide authentication to Azure Bot Service and Teams.

1. Create App registration: In the Azure Portal, navigate to Azure Active Directory > App Registrations and click "New registration". Give the app a name. For supported account types, you can choose "Accounts in this organizational directory only" ( Single tenant ) for simplicity, since this bot is intended for your organization's Teams. You do not need to specify a Redirect URI for this scenario.
2. Finalize registration: Click Register to create the app. Once created, you'll see the Application ( Client ) ID – copy this ID, as we'll need it later as the Bot ID and in the OAuth2 account.
3. Create a client secret: In your new app's overview, go to Certificates & secrets. Click "New client secret" to generate a secret key. Give it a description and a suitable expiration period. After saving, copy the Value of the client secret ( it will be a long string ). Save this secret somewhere secure now – you won't be able to retrieve it again after you leave the page. We'll provide this secret to the Bot Service so it can authenticate as this app, and we will also use it in the OAuth2 account in SnapLogic.
4. Gather Tenant ID: Since we chose a single-tenant app, we'll also need the Azure AD tenant ID. You can find this in the Overview of the app. Copy the tenant ID as well for later use.

At this point, you should have:

- Client ID ( application ID ) for the bot and the OAuth2 account
- Client secret for the bot ( stored securely ) and the OAuth2 account
- Tenant ID of our Azure AD

These will be used when setting up the Azure Bot Service so that it knows about this app registration.

Create an OAuth2 account

Now that we have the client ID, client secret and tenant ID gathered from the app registration, we can create the OAuth2 account which will be used in an HTTP Client Snap to send the "typing" indicator as well as the response from the agent.

1. Navigate to the "Manager" tab and locate the project folder where the agent pipelines are stored.
2. On the right side, click the "+" icon to create a new account.
3. Choose "API Suite > OAuth2 Account".
4. Populate the client ID and client secret values from your app registration process.
5. Check the 'Send client data as Basic Auth header' and 'Header authenticated' settings.
6. Populate the authorization and token endpoints:
   OAuth2 authorization endpoint: https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/authorize
   OAuth2 token endpoint: https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/token
7. Change the "Grant type" to "client_credentials".
8. Add the scope to both "Token endpoint config" and "Authorization endpoint config"; the scope in our case is: https://api.botframework.com/.default
9. Check "Auto-refresh token".
10. Click "Authorize". If everything was set correctly in the previous steps, you should get redirected back to SnapLogic with a valid access token.

Example of an already configured OAuth2 account
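To show what this account and the HTTP Client Snap do together, here is a hedged Python sketch: it acquires a token with the client_credentials grant and posts a "typing" activity followed by a reply to the Bot Framework. The login.microsoftonline.com and botframework.com endpoints are the documented ones, but the placeholder values, and the service_url and conversation_id ( which in practice come from the incoming Teams activity payload ), are assumptions for illustration.

    import requests

    TENANT_ID = "<TENANT_ID>"        # from the app registration Overview page
    CLIENT_ID = "<CLIENT_ID>"
    CLIENT_SECRET = "<CLIENT_SECRET>"

    # Step 1: client_credentials grant - what "Authorize" / auto-refresh does for us.
    token = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={"grant_type": "client_credentials",
              "client_id": CLIENT_ID,
              "client_secret": CLIENT_SECRET,
              "scope": "https://api.botframework.com/.default"},
    ).json()["access_token"]

    # Step 2: post activities back to the user; both values below are taken from
    # the activity payload that Azure sends to the AgentDriver pipeline.
    service_url = "<SERVICE_URL>"            # e.g. a smba.trafficmanager.net URL
    conversation_id = "<CONVERSATION_ID>"

    def send_activity(activity):
        return requests.post(
            f"{service_url}/v3/conversations/{conversation_id}/activities",
            headers={"Authorization": f"Bearer {token}"},
            json=activity,
        )

    send_activity({"type": "typing"})  # the "bot is typing..." indicator
    send_activity({"type": "message",
                   "text": "It's currently 18°C and sunny in San Francisco."})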
Create the Azure Bot Service and Connect to SnapLogic

With the Azure AD app ready, we can create the actual bot resource that will connect to Teams and our SnapLogic endpoint:

1. Add Azure Bot resource: In the Azure Portal, search for the "Azure Bot" service and select Azure Bot. Choose Create to make a new bot resource.
2. Configure bot settings: On the creation form, fill in:
   - Bot handle: A unique name for your bot.
   - Subscription and Resource Group: Select your Azure subscription and a resource group to contain the bot resource.
   - Location: Pick a region.
   - Pricing tier: Choose the Free tier ( F0 ) – it's more than sufficient for development and basic usage.
   - Microsoft App ID: Here, reuse the existing app registration we created. There should be an option to choose an existing app – provide the Client ID of the app registration. This links the bot resource to our AD app.
   - App type: Select Single Tenant since our app registration is single-tenant. You might also need to provide the App secret ( Client Secret ) for the bot here during creation.
3. Create the bot: Click Review + create and then Create to provision the bot service. Azure will deploy the bot resource. Once completed, go to the resource's page.
4. Configure the messaging endpoint: This is a crucial step – we must point the bot to our SnapLogic pipeline. In the Azure Bot resource settings, open the Settings menu and navigate to Configuration. Populate the Messaging endpoint field ( the URL that the bot will call when a message is received ). Here, paste the trigger URL of your WeatherAgent_AgentDriver pipeline. To get this URL: in SnapLogic, you would have already created the AgentDriver pipeline as a Triggered Task, which generates an endpoint URL: https://elastic.snaplogic.com/api/1/rest/slschedule/<org>/<proj>/WeatherAgent_AgentDriver
   Example endpoint with an appended authorization query param as bearer_token: https://elastic.snaplogic.com/api/1/rest/slschedule/myOrg/WeatherProject/WeatherAgent_AgentDriver?bearer_token=<bearer token>
   Enter the URL exactly as given by SnapLogic, plus the bearer token value, and save the configuration. Now, when a Teams user messages the bot, Azure will send an HTTPS POST to this SnapLogic URL.
5. Add the Microsoft Teams channel: Still in the Azure Bot resource, go to Channels. Add a new channel and select Microsoft Teams. This step registers the bot with Teams so that Teams clients can use it.

Now our bot service is set up with the SnapLogic pipeline as its backend. The AgentDriver pipeline is effectively the bot's webhook. The Azure Bot resource handles authentication with Teams and will forward user messages to SnapLogic and relay SnapLogic's responses back to Teams.

Packaging the bot for Teams ( App manifest )

At this stage, the bot exists in Azure, but to use it in Teams we need to package it as a Teams app, especially if we want to share it within the organization. This involves creating a Teams app manifest and icons, then uploading it to Teams.

Prepare the Teams app manifest: The manifest is a JSON file describing your Teams app ( the bot ). Microsoft provides a schema for this, but you can download the manifest file from this example; make sure you replace the <APP ID> placeholders within it. The manifest file consists of:

- App ID: Use the bot's App ID ( Client ID of the registered app ).
- App name, description: The name of the Teams app, for example "SnapLogic Agent".
- Icons: Prepare two icon images for the bot – typically a color icon ( 192x192 PNG ) and an outline icon ( 32x32 PNG ). These will be used as the agent's avatar and in the Teams app catalog.

The manifest may also include information like developer info, version number, etc. If you use the Teams Developer Portal, it can guide you through filling these fields and will handle the JSON for you. Just ensure the Bot ID and scopes are correctly set.

Combine manifest and icons: Once your manifest file and icons are ready, put all three into a .zip file.
For example, a zip containing:

- manifest.json
- icon-color.png ( 192x192 )
- icon-outline.png ( 32x32 )

Make sure the JSON inside the zip references the icon file names exactly as they are.

Upload the app to Teams: In Microsoft Teams, go to Apps > Manage your apps > Upload a custom app. Upload the zip file. Teams should recognize it as a new app. When added, it essentially registers the bot ID with the Teams client.

Test in Teams: Open a chat with your Weather Agent in Teams ( it should appear with the name and icon you provided ). Type a message, like "Hi" or a weather question: "What's the weather in New York?" The message will go out to Azure, which will call SnapLogic's endpoint. The SnapLogic pipelines will run through the logic ( as described in the first part ) and Azure will return the bot's reply to Teams. You should see the bot's answer appear in the chat. If you see the bot typing indicator first and then the answer, everything is working as expected!

Initial message and a response from the agent
Typing indicator as showcased during agent execution
Agent response after using the available tools

Now the Weather Agent is fully functional within Teams. It's essentially an AI-powered chat interface to a live weather API, all orchestrated by SnapLogic in the background.

Benefits of SnapLogic and Teams for Conversational Agentic Interfaces

Integrating SnapLogic AgentCreator with Microsoft Teams via Azure Bot Service has several benefits:

- Fast prototyping: You can go from idea to a working bot in a very short time. There's no need to write custom bot code or host a web service – SnapLogic pipelines become your bot logic. In our example, building a weather query bot is as simple as wiring up a few Snaps and APIs. This accelerates development and allows quick iteration. Business users or integration developers can prototype new AI agents rapidly, responding to evolving needs without a heavy software development cycle.
- No-code integration and simplicity: SnapLogic provides out-of-the-box connectors to hundreds of systems and services. By using SnapLogic as the engine, your bot can tap into any of these with minimal effort. Want a bot that not only gives weather but also looks up flight data or CRM info? It's just another pipeline. The AgentCreator framework handles the AI part, while the SnapLogic platform handles the integration part ( connecting to external APIs and data sources ). This synergy makes it simple to create powerful bots that perform real actions – far beyond what an LLM alone could do. And it's all done with low/no-code configuration.
- Enhanced user experience: Delivering automation through a conversational interface in Teams meets users where they already collaborate. There's no new app to learn – users simply chat with a bot as if they're chatting with a colleague.
- Reusability: The modular design of the weather agent's pipelines can serve as a template for other agents by swapping out the tools and prompts. The integration pattern remains the same. This showcases the reusability of the AgentCreator approach across various use cases.

Conclusion

By combining SnapLogic's generative AI integration capabilities with Microsoft's bot framework and Teams, we created a powerful AI Agent without writing any code at all. We used SnapLogic AgentCreator Snaps to handle the AI reasoning and tool calling, and used Azure Bot Service to connect that logic to Microsoft Teams. The real win is how quickly and easily this was achieved.
In a matter of days or even hours, an enterprise can prototype a conversational AI agent that ties into live data and services. The speed of development, combined with secure integration into everyday platforms like Teams, delivers real business value. In summary, SnapLogic and Teams enable a new class of enterprise applications: ones that talk to you, using AI to bridge human requests to automated actions. The Weather Agent is a simple example, but it highlights how fast prototyping, integration simplicity, and enhanced user experience come together. I encourage you to try building your own SnapLogic Agent – whether it's for weather, workflows, or anything else – and unleash the power of conversational AI in your organization. Happy integrating, and don't forget your umbrella if the Weather Agent says rain is on the way!

Re: Not able connect local postgresql database

Hi maheswara,

Could you please provide more details about the error? It's a bit vague and could refer to various issues. Would you mind sharing the specific error message for clarification?

Re: Not able connect local postgresql database

Hi maheswara,

The issue might be the JDBC driver. From the screenshot you shared, I am not sure if you have configured the JDBC JARs driver, but if you didn't, you need to download it, upload it in SnapLogic, and then under the JDBC JARs configuration, upload/select the .jar file of the driver. You can download the latest driver or any version you need from the official docs: https://jdbc.postgresql.org/download/

Re: Return true if values from my input 1 (list 1,list2,list3...so on) are available in input 2 list A

In this case you can amend the original expression.

$['Input list'].map(val => $['Original list'].indexOf(val)).filter(val => val == -1).length > 0 ? false : true

This will return true only if all values within the Input list exist in the Original list. You can then route the data based on the insert field.

Re: Delaying a process

To facilitate the enhancement of your proficiency with SnapLogic and the comprehensive utilization of its diverse features, several strategic approaches can be considered:

Firstly, an exploration of the official SnapLogic documentation is recommended. This resource provides extensive insights into each feature, as well as individual Snaps, accompanied by practical use cases. Specifically, for a more in-depth understanding of the expression language, which parallels JavaScript in syntax, additional information can be accessed via the following link: Expression Language Guide. Should you possess prior knowledge of JavaScript, the transition to the expression language should be relatively seamless, since the expression language has a similar syntax to that of JavaScript ( at least for the usage of functions and the representation of objects/arrays ). Furthermore, detailed documentation concerning the Script Snap is available here: Script Snap Guide.

Subsequently, active engagement through hands-on practice is highly recommended. Commencing with elementary projects and progressively advancing to more intricate integrations provides a foundational understanding of distinct Snaps and their respective functionalities. Moreover, the examination of pre-existing integrations and pipelines developed by others can be immensely instructive. A reverse-engineering approach illuminates the usage of diverse Snaps to accomplish specific tasks.
Last but certainly not least, engaging with the SnapLogic community can significantly enrich your learning journey. The collective wisdom of the community is a valuable resource that can provide you with practical tips, solutions to challenges, and alternative perspectives that enhance your command of SnapLogic. This collaborative engagement further complements your exploration of the platform's multifaceted features.

Re: Return true if values from my input 1 (list 1,list2,list3...so on) are available in input 2 list A

Hi userg,

Can you share what the input document looks like? I am not sure if both inputs are accessible within a single object. Assuming they are, you can use the following expression to compare the values in the arrays:

$Input2.map(val => {
  "values": val,
  "insert": val.map(v => $Input1.indexOf(v) != -1).filter(val => val == false).length == 0 ? true : false
})

Sample input data:

[
  {
    "Input1": ["a", "b", "c", "d", "e", { "test": 1 }],
    "Input2": [
      ["c", "a", "e"],
      ["a", "z"],
      ["a", "", "b"],
      ["a", { "test": 2 }]
    ]
  }
]

Output:

[
  {
    "Input1": [ "a", "b", "c", "d", "e", { "test": 1 } ],
    "Input2": [
      { "values": [ "c", "a", "e" ], "insert": true },
      { "values": [ "a", "z" ], "insert": false },
      { "values": [ "a", "", "b" ], "insert": false },
      { "values": [ "a", { "test": 2 } ], "insert": false }
    ]
  }
]

This expression adds an additional flag ( "insert" ) to Input2: if all the values in the array exist in Input1, then insert will be true, otherwise false. You can then split the Input2 array to get the objects as individual input documents and pass through only the values where the insert flag is equal to true. That can be done with a Filter Snap.

Re: Delaying a process

Hi Abhishek_117,

The code you shared does wait 15 seconds, but it waits on each input document. This means that if you have 10 input documents, you will have a wait time of 150 seconds. I am not sure where you set up your script in your pipeline, but you have to make sure you are either waiting 15 seconds before the input documents start processing, or waiting 15 seconds after all input documents have been processed. You can do this if you amend your code, specifically the time.sleep() call. If you want your script to wait for 15 seconds before it starts processing the input documents, you can use the following code in your execute() function:

def execute(self):
    self.log.info("Executing Transform script")
    time.sleep(15)
    while self.input.hasNext():
        try:
            # Read the next input document, store it in a new dictionary, and write this as an output document.
            inDoc = self.input.next()
            outDoc = { 'original': inDoc }
            self.output.write(inDoc, outDoc)
        except Exception as e:
            errDoc = { 'error': str(e) }
            self.log.error("Error in python script")
            self.error.write(errDoc)
    self.log.info("Script executed")

Or, if you want to wait for 15 seconds after all documents have been processed:

def execute(self):
    self.log.info("Executing Transform script")
    while self.input.hasNext():
        try:
            # Read the next input document, store it in a new dictionary, and write this as an output document.
            inDoc = self.input.next()
            outDoc = { 'original': inDoc }
            self.output.write(inDoc, outDoc)
        except Exception as e:
            errDoc = { 'error': str(e) }
            self.log.error("Error in python script")
            self.error.write(errDoc)
    time.sleep(15)
    self.log.info("Script executed")

The only difference is whether you put time.sleep(15) before the while loop or after the while loop; this of course depends on your case.
Re: How to count the columns in table

Hi Manzoor,

You can use the following expression to replace the NULL values with blanks in the entity object, assuming it is a flat object:

$entity.mapValues(val => val == null ? "" : val)

This will simply iterate over each value in the entity object and check if the value is null; if it is, it will return an empty string, otherwise it will return the original value.

Re: Convert array into different format.

Hi mohit3,

If the objects in the array, specifically the key names, are the same in each object, you can't flatten them into a single object, because they all have the same key name.

Scenario 1: If each object in the original array has different key names, then you can use the following expression:

sl.ensureArray({}.extend(...jsonPath($, "Customfield[*]")))

The expression above uses the spread operator to extract the objects in the array into a single object. We can split the expression into three parts.

Part 1: Spread operator

{ ...jsonPath($, "Customfield[*]") }

The spread operator is a versatile syntax that allows you to expand elements from one data structure ( like an array or an object ) into another. You can read more about it on the following link: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax

By taking your input and applying the spread operator in the expression, it will extract all objects from the array and make a single object. Here's how the output will look after the first step:

{
  0: { fieldname: "RECTY", fieldvalue: "S01" },
  1: { fieldname: "LFDNR", fieldvalue: "0000004" },
  2: { fieldname: "ARE4", fieldvalue: "467Q" }
}

Part 2: Extend object

{}.extend(...jsonPath($, "Customfield[*]"))

In the previous part, we extracted the objects from the initial array and created a single object that holds all objects from the original array. You need to use the extend keyword with the spread operator to make a flat object ( because without the extend keyword, your object will have the index as a key for each original object ). You can read more about extend on the following link: https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1439367/Object+Functions+and+Properties#ObjectFunctionsandProperties-extend

Because you have multiple objects in the original array with the same key names, the output after this step will look like:

{ fieldname: "ARE4", fieldvalue: "467Q" }

As you can see, we have a flat object now, but the extend keyword overrides the key-value pairs that share a name and writes only the last of the same-named objects to the output, hence why you now have only one of the three original objects.

Part 3: Ensure output is an array

I have used the sl.ensureArray() expression to turn the object back into an array:

sl.ensureArray(<object input>)

The final output will be:

[ { fieldname: "ARE4", fieldvalue: "467Q" } ]

As I mentioned above, this expression will work fine as long as the objects in the array have different key names.

Scenario 2: If each object in the original array has the same key names, then you need to add a unique identifier to each key ( this can be the index itself in the array ):

sl.ensureArray({}.extend(...jsonPath($, "Customfield[*]").map((val, index) => val.mapKeys((v, k) => k + "[" + index + "]"))))

This expression is similar to the first one, but in addition it adds a unique identifier to each of the key names in the original objects to make sure that all key names are unique.
This will give us the following output:

[
  {
    "fieldname[0]": "RECTY",
    "fieldvalue[0]": "S01",
    "fieldname[1]": "LFDNR",
    "fieldvalue[1]": "0000004",
    "fieldname[2]": "ARE4",
    "fieldvalue[2]": "467Q"
  }
]

As you can see, it has all of the original objects included in the output.