Recent Content
Need Guidance on Dynamic Excel File Generation and Email Integration
Hello Team, I am currently developing an integration where the data structure in the Mapper includes an array format like [{}, {}, ...]. One of the fields, Sales Employee, contains values such as null, Andrew Johnson, and Kaitlyn Bernd. My goal is to dynamically create separate Excel files for each unique value in the Sales Employee field (including null) with all the records, and then send all the generated files as attachments in a single email. Since the employee names may vary and increase in the future, the solution needs to handle dynamic grouping and file generation. I would appreciate any expert opinions or best practices on achieving this efficiently in SnapLogic. Thanks and Regards.
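To illustrate the dynamic grouping being asked about, here is a minimal Python sketch of the idea ( the field names, file naming, and the label for the null bucket are assumptions for illustration; a SnapLogic solution would express this with Snaps rather than code ):

```python
import pandas as pd  # pandas writes .xlsx files via openpyxl

# Hypothetical input: the array of records coming out of the Mapper
records = [
    {"Sales Employee": None, "Order": 1001, "Amount": 250.0},
    {"Sales Employee": "Andrew Johnson", "Order": 1002, "Amount": 310.5},
    {"Sales Employee": "Kaitlyn Bernd", "Order": 1003, "Amount": 99.9},
]

df = pd.DataFrame(records)
generated_files = []

# One file per unique Sales Employee value, including the null group
for employee, group in df.groupby("Sales Employee", dropna=False):
    label = employee if pd.notna(employee) else "Unassigned"
    file_name = f"sales_{label.replace(' ', '_')}.xlsx"
    group.to_excel(file_name, index=False)
    generated_files.append(file_name)

# generated_files would then be attached to a single outgoing email
print(generated_files)
```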
Platform Memory Alerts & Priority Notifications for Resource Failures
This is more about platform memory alerts. From my understanding, we have alert metrics in place that trigger an email if any of the nodes hit the specified threshold in the manager. However, I am looking at a specific use case. Consider an Ultra Pipeline that needs to invoke a child pipeline for transformation logic. This child pipeline is expected to run on the same node as the parent pipeline to reduce additional processing time, as it is exposed to the client side. Now, if the child pipeline fails to prepare due to insufficient resources on the node, no alert will be generated since the child pipeline did not return anything in the error view. Is there any feature or discussion underway to provide priority notifications to the organization admin for such failures? Task-level notifications won't be helpful as they rely on the configured error limits at the task level. While I used the Ultra Pipeline as an example, this scenario applies to scheduled and API-triggered pipelines as well. Your insights would be appreciated.
SnapLogic Execution Mode Confusion: LOCAL_SNAPLEX vs SNAPLEX_WITH_PATH with pipe.plexPath
I understand the basic difference between the two execution options for child pipelines:
- LOCAL_SNAPLEX: Executes the child pipeline on one of the available nodes within the same Snaplex as the parent pipeline.
- SNAPLEX_WITH_PATH: Allows specifying a Snaplex explicitly through the Snaplex Path field. This is generally used to run the child pipeline on a different Snaplex.
However, I noticed a practical overlap. Let's say I have a Snaplex named integration-test. If I choose LOCAL_SNAPLEX, the child pipeline runs on the same Snaplex (integration-test) as the parent. If I choose SNAPLEX_WITH_PATH and set the path as pipe.plexPath, it also resolves to the same Snaplex (integration-test) where the parent is running, so the execution again happens locally. I tested both options and found:
- The load was distributed similarly in both cases.
- Execution time was nearly identical.
So from a functional perspective, both seem to behave the same when the Snaplex path resolves to the same environment. My question is: What is the actual difference in behavior or purpose between these two options when pipe.plexPath resolves to the same Snaplex? Also, why is using SNAPLEX_WITH_PATH with pipe.plexPath flagged as critical in the pipeline quality check, even though the behavior appears equivalent to LOCAL_SNAPLEX? Curious if anyone has faced similar observations or can shed light on the underlying difference.
Inserting large data in ServiceNow
Hello Team, I am developing a pipeline in SnapLogic where 6,000,000 records are coming from Snowflake, and I have designed my pipeline like this:
Parent pipeline: Snowflake Execute -> Mapper (one-to-one field mapping) -> Group By N with a group size of 10,000 -> Pipeline Execute with a pool size of 5. In the child pipeline I have used a JSON Splitter and a ServiceNow Insert.
What can I do to optimize performance and make it execute faster in SnapLogic? Currently it takes a long time to execute. Can someone assist in this regard? Thanks in advance.
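As an aside on why grouping and the Pipeline Execute pool size matter here, the sketch below shows what per-record inserts against the ServiceNow Table API look like outside SnapLogic, with a thread pool playing the role of the pool size of 5 ( instance URL, table name, and credentials are placeholders, and this is an illustration rather than a recommended client ):

```python
import requests
from concurrent.futures import ThreadPoolExecutor

INSTANCE = "https://your-instance.service-now.com"       # placeholder instance
TABLE_URL = f"{INSTANCE}/api/now/table/u_sales_orders"   # hypothetical table
AUTH = ("integration.user", "password")                  # placeholder credentials

def insert_record(record: dict) -> int:
    # The Table API accepts one record per POST, which is why batching
    # and parallel child pipelines matter for large volumes.
    resp = requests.post(TABLE_URL, json=record, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.status_code

records = [{"u_order_id": i, "u_amount": 10.0 * i} for i in range(100)]

# Rough analogue of Pipeline Execute with a pool size of 5:
# five workers inserting concurrently instead of one at a time.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(insert_record, records))

print(f"Inserted {len(results)} records")
```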
Building an AI Agent with SnapLogic AgentCreator using OpenAI and Microsoft Teams - Part 1

In this two-part blog series, I will cover how to create an AI Agent using SnapLogic's AgentCreator and integrate it with Microsoft Teams via Azure Bot services. The solution combines SnapLogic's AgentCreator, OpenAI ( gpt 4.1 mini ) and Microsoft Teams ( to provide a familiar chat interface ). In the first part, we will cover building the agent with SnapLogic pipelines and the AgentCreator pattern. In the second part, I will explain the Azure setup and Teams integration, and highlight the business benefits of conversational automation.

Designing the SnapLogic-Powered AI Agent
The example I've decided to go with is a simple Weather Agent that provides a conversational interface for weather queries, accessible directly within Teams. This improves user experience by integrating information into the tools people already use and showcasing how SnapLogic's AgentCreator can automate tasks through natural language.

How does SnapLogic help?
SnapLogic's new AgentCreator framework allows us to build an AI-driven agent that uses LLM intelligence combined with SnapLogic pipelines to fetch real data. The Weather Agent understands a user's question, decides if it needs to call a function ( tool ), performs that action via a SnapLogic pipeline, and then responds conversationally with the result. SnapLogic AgentCreator is purpose-built for such scenarios, enabling enterprises to create AI agents that can call pipelines and APIs autonomously. In our case, the agent will use a weather API through SnapLogic to get live data, meaning the agent's answers are not just based on static knowledge, but on real-time API calls.

SnapLogic AgentCreator Architecture Overview
We will focus on the AgentCreator pattern – a design that splits the agent's logic into two cooperative pipelines: an Agent Driver and an Agent Worker. This pattern is orchestrated by SnapLogic's Pipeline loop ( PipeLoop ) Snap, which allows iterative calls to a pipeline until a certain condition is met, in our case, until the conversation turn is complete or a set number of iterations has been completed. Here's how it works:
- Agent Driver pipeline: This orchestrator pipeline receives the incoming chat message and manages the overall conversation loop. It sends the user's query ( plus any chat history messages available ) and the system prompt to the Agent Worker pipeline using the PipeLoop Snap, and keeps iterating until the LLM signals that it's done responding or the iteration limit is reached.
- Agent Worker pipeline: This pipeline handles one iteration of LLM interaction. It presents the LLM with the conversation context and available tools, gets the LLM's response ( which could be an answer or a function call request ), executes any required tool, and returns the result back to the Driver. The Worker is essentially where the "brain" of the agent lives – it decides if a tool call is needed and formats the answer.
This architecture allows the agent to have multi-turn reasoning. For example, if the user asks for weather, the LLM might first respond with a function call to get data, the Worker executes that call, and then the LLM produces a final answer in a second iteration. The PipeLoop Snap in the Driver pipeline detects whether another iteration is needed ( if the last LLM output was a partial result or tool request ) and loops again, or stops if the answer is complete.
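To make the Driver/Worker loop concrete, here is a minimal Python sketch of the same pattern written directly against the OpenAI chat completions API. It illustrates only the iteration logic ( the tool name and the hard-coded weather lookup are assumptions ), not how the SnapLogic pipelines themselves are implemented:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical tool definition, standing in for the Function Generator Snap
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a latitude/longitude pair",
        "parameters": {
            "type": "object",
            "properties": {
                "latitude": {"type": "number"},
                "longitude": {"type": "number"},
            },
            "required": ["latitude", "longitude"],
        },
    },
}]

def get_weather(latitude: float, longitude: float) -> dict:
    # Stand-in for the WeatherAgent_GetWeather pipeline
    return {"temperature_c": 18, "conditions": "sunny"}

messages = [
    {"role": "system", "content": "You are a helpful weather assistant."},
    {"role": "user", "content": "What's the weather in San Francisco right now?"},
]

# "Driver" loop: run one "Worker" turn at a time until the model stops
# requesting tools or the iteration limit is hit (PipeLoop's stop condition).
final_answer = None
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4.1-mini", messages=messages, tools=tools
    )
    msg = response.choices[0].message
    messages.append(msg)
    if not msg.tool_calls:          # final answer reached
        final_answer = msg.content
        break
    for call in msg.tool_calls:     # model asked for a tool: run it, append result
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(get_weather(**args)),
        })

print(final_answer)
```

In the SnapLogic version, the loop body corresponds to one AgentWorker execution and the surrounding for-loop corresponds to the PipeLoop Snap's stop condition and iteration limit.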
Key components of the Weather Agent architecture:
- SnapLogic AgentCreator: The toolkit that makes this AI agent possible. It provides specialized Snaps for prompt handling, LLM integration ( OpenAI, Azure OpenAI, Amazon Bedrock, Google Gemini, etc. ), and function-calling logic. SnapLogic AgentCreator enables designing AI agents with dynamic iteration and tool usage built in.
- LLM ( Generative AI model ): The LLM powering the agent's understanding and response generation. In our implementation, an LLM ( such as OpenAI GPT ) interprets the user's request and decides when to call the available tools. SnapLogic's Tool Calling Snap interfaces with the LLM's API to get these decisions.
- Weather API: The external data source for live weather information. The agent uses a real API ( https://open-meteo.com/ ) to fetch current weather details for the requested location.
- Microsoft Teams & Azure Bot: The front-end interface where the user interacts with the bot, and the connector that sends messages between Teams and our SnapLogic pipelines.

Setting up an OpenAI API account
Because we are working with the gpt 4.1 mini API, we will need to configure an OpenAI account. This assumes you have already created an API key in your OpenAI dashboard.
1. Navigate to the Manager tab under your project folder location and click on the "+" button to create a new Account.
2. Navigate to OpenAI LLM -> OpenAI API Key Account.
3. Name it based on your needs or naming convention.
4. Copy and paste your API key from the OpenAI dashboard.
5. On the Agent Worker pipeline, open the "OpenAI Tool Calling" Snap and apply the newly created account.
6. Save the pipeline. You have now successfully integrated the OpenAI API.

Weather Agent pipelines in SnapLogic
I've built a set of SnapLogic pipelines to implement the Weather Agent logic using AgentCreator. Each pipeline has a specific role in the overall chatbot workflow:
- WeatherAgent_AgentDriver: The orchestrator for the agent. It is triggered by incoming HTTP requests from the Azure Bot Service ( when a user sends a Teams message ). The AgentDriver parses the incoming message, sends a quick "typing…" indicator to the user ( to simulate the bot typing ), and then uses a PipeLoop Snap to invoke the AgentWorker pipeline. It supplies the user's question, the system prompt and any prior context, and keeps iterating until the bot's answer is complete. It also handles "deleting" chat history if the user writes a specific message like "CLEAR_CHAT" in the Teams agent conversation to refresh the conversation.
- WeatherAgent_AgentWorker: The tool-calling pipeline ( Agent Worker ) that interacts with the LLM. On each iteration, it takes the conversation messages ( system prompt, user query, and any accumulated dialogue history ) from the Driver. The flow of the Agent Worker for a Weather Agent:
  - defines what tools ( functions ) the LLM is allowed to call – in this case, the location and weather lookup tools
  - invokes the LLM via a Tool Calling Snap, passing it the current conversation and available function definitions
  - processes the LLM's response – if the LLM requests a function call ( "get weather for London" ), the pipeline routes that request to the appropriate tool pipeline
  - once the tool returns data, the Worker formats the result using a Function Result Generator Snap and appends it to the conversation via a Message Appender Snap
  - returns the updated conversation with any LLM answer or tool results back to the Driver.
The AgentWorker essentially handles one round of "LLM thinking".
- WeatherAgent_GetLocation: A tool that the agent can use to convert a user's input location ( city name, etc. ) into a standardized form or coordinates ( latitude and longitude ). It queries an open-meteo API to retrieve latitude and longitude data based on the given location. The system prompt instructs the agent that if the tool returns more than one match, it should ask the user which location they meant - keeping a human in the loop for such scenarios. For example, if the user requests weather for "Springfield", the agent first calls the GetLocation tool, and if the tool responds with multiple locations, the agent will list them ( for example, Springfield, MA; Springfield, IL; Springfield, MO ) and ask the user to specify which location they meant before proceeding. Once the location is confirmed, the agent passes the coordinates to the GetWeather tool.
- WeatherAgent_GetWeather: The tool pipeline that actually fetches current weather data from an external API. This pipeline is invoked when the LLM decides it needs the weather info. It takes an input latitude and longitude and calls a weather API. In our case, I've used the open-meteo service, which returns a JSON containing weather details for a given location. The pipeline consists of an HTTP Client Snap ( configured to call the weather API endpoint with the location ) and a Mapper Snap to shape the API's JSON response into the format expected by the Agent Worker pipeline. Once the data is retrieved ( temperature, conditions, etc. ), this pipeline's output is fed back into the Agent Worker ( via the Function Result Generator ) so the LLM can use it to compose a user-friendly answer.
- MessageEndpoint_ChatHistory: This pipeline handles conversation history ( simple memory ) for each user or conversation. Because our agent may be used by multiple users ( and we want each user's chat to be independent ), we maintain a user-specific chat history. In this example the pipeline uses SLDB storage to store the file, but in a production environment the ChatHistory pipeline could use a database Snap to store chat history, keyed by user or conversation ID. Each time a new message comes in, the AgentDriver will call this pipeline to fetch recent context ( so the bot can "remember" what was said before ). This ensures continuity in the conversation – for example, if the user follows up with "What about tomorrow?", the bot can refer to the previous question's context stored in history. For simplicity, one could also maintain context in-memory during a single conversation session, but persisting it via this pipeline allows context across multiple sessions or a longer pause.
SnapLogic introduced specialized Snaps for LLM function calling to coordinate this process. The Function Generator Snap defines the available tools that the LLM agent can use. The Tool Calling Snap sends the user's query and function definitions to the LLM model and gets back either an answer or a function call request ( and any intermediate messages ). If a function call is requested, SnapLogic uses a Pipeline Execute or similar mechanism to run the corresponding pipeline. The Function Result Generator then formats the pipeline's output into a form the LLM can understand. At the end, the Message Appender Snap adds the function result into the conversation history, so the LLM can take that into account in the next response. This chain of Snaps allows the agent to decide between answering directly or using a tool, all within a no-code pipeline.
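For reference, the external calls behind the GetLocation and GetWeather tools can be reproduced against open-meteo's public endpoints. A minimal sketch follows ( the parameter selection is an assumption, and the real pipelines shape the responses with a Mapper Snap rather than code ):

```python
import requests

def get_location(name: str) -> list[dict]:
    # open-meteo geocoding: resolve a city name to candidate coordinates
    resp = requests.get(
        "https://geocoding-api.open-meteo.com/v1/search",
        params={"name": name, "count": 5},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

def get_weather(latitude: float, longitude: float) -> dict:
    # open-meteo forecast: current conditions for the chosen coordinates
    resp = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": latitude, "longitude": longitude, "current_weather": True},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json().get("current_weather", {})

matches = get_location("San Francisco")
if len(matches) > 1:
    # Multiple matches: this is where the agent would ask the user to disambiguate
    print([f"{m['name']}, {m.get('admin1', '')}" for m in matches])
first = matches[0]
print(get_weather(first["latitude"], first["longitude"]))
```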
Sample interaction from user prompt to answer
To make the above more concrete, let's walk through the flow of a sample interaction step by step:
1. User asks a question in Teams: "What's the weather in San Francisco right now?" This message is sent from Teams to the Azure Bot Service, which relays it as an HTTP POST to our SnapLogic AgentDriver pipeline's endpoint ( the messaging endpoint URL we will configure in the second part ).
2. AgentDriver pipeline receives the message: The WeatherAgent_AgentDriver captures the incoming JSON payload from Teams. This payload contains the user's message text and metadata ( like user ID, conversation ID, etc. ). The pipeline will first respond immediately with a typing indicator to Teams. We configured a small branch in the pipeline to output a "typing" activity message back to the Bot service, so that Teams shows the bot is typing - implemented mainly to enhance UX while the user waits for an answer.
3. Preparing the prompt and context: The AgentDriver then prepares the initial prompt for the LLM. Typically, we include a system prompt ( defining the bot's role/behavior ) and the user prompt. If we have prior conversation history ( from MessageEndpoint_ChatHistory for this user ), we would also include recent messages to give context. All this is packaged into a messages array that will be sent to the LLM.
4. AgentDriver invokes AgentWorker via PipeLoop: The Driver uses a PipeLoop Snap configured to call the WeatherAgent_AgentWorker pipeline. It passes the prepared message payload as input. The PipeLoop is set with a stop condition based on the LLM's response status – it will loop until the LLM indicates the conversation turn is completed or the iteration limit has been reached ( for example, OpenAI returns a finish_reason of "stop" when it has a final answer, or "function_call" when it wants to call a function ).
5. AgentWorker ( 1st iteration - tool decision ): In this first iteration, the Worker pipeline receives the messages ( system + user ). Inside the Worker:
   - A Function Generator Snap provides the definition of the GetLocation and GetWeather tools, including their name, description, and parameters. This tells the LLM what the tool does and how to call it.
   - The Tool Calling Snap now sends the conversation ( so far just the user question and system role ) along with the available tool definitions to the LLM.
   - The LLM evaluates the user's request in the context of being a weather assistant. In our scenario, we expect it will decide it needs to use the tool to get the answer. Instead of replying with text, the LLM responds with a function call request - a JSON payload naming the weather tool and carrying the requested location as its argument. The Tool Calling Snap outputs this structured decision. ( Under the hood, the Snap outputs it on a Tool Calls view when a function call is requested. )
   - The pipeline splits into two parallel paths at this point: One path captures the LLM's partial response ( which indicates a tool is being called ) and routes it into a Message Appender. This ensures that the conversation history now includes an assistant turn that is essentially a tool call. The other path takes the function call details and invokes the corresponding tool. In SnapLogic, we use a Pipeline Execute Snap to call the WeatherAgent_GetWeather pipeline. We pass the location ( "San Francisco" ) from the LLM's request into that pipeline as an input to the child pipeline ( careful, it is not a pipeline parameter ).
6. WeatherAgent_GetWeather executes: This pipeline calls the external Weather API with the given location. It gets back weather data ( say the API returns that it's 18°C and sunny ). The SnapLogic pipeline returns this data to the AgentWorker pipeline, so on the next iteration the messages array will also carry the tool's output.
7. AgentWorker ( function result return ): With the weather data now in hand, a Function Result Generator Snap in the Worker takes the result and packages it in the format the LLM expects for a function result. Essentially, it creates the content that will be injected into the conversation as the function's return value. The Message Appender Snap then adds this result to the conversation history array as a new assistant message ( but marked in a way that the LLM knows it's the function's output ). Now the Worker's first iteration ends, and it outputs the updated messages array ( which now contains: the user's question, the assistant's "thinking/confirmation" message, and the raw weather data from the tool ).
8. AgentDriver ( loop decision ): The Driver pipeline receives the output of the Worker's iteration. Since the last LLM action was a function call ( not a final answer ), the stop condition is not met. Thus, the PipeLoop triggers the next iteration, sending the updated conversation ( which now includes the weather info ) back into the AgentWorker for another round.
9. AgentWorker ( 2nd iteration - final answer ): In this iteration, the Worker pipeline again calls the Tool Calling Snap, but now the messages array includes the results of the weather function. The LLM gets to see the weather data that was fetched. Typically, the LLM will now complete the task by formulating a human-friendly answer. For example, it might respond: "It's currently 18°C and sunny in San Francisco." This time, the LLM's answer is a normal completion with no further function calls needed. The Tool Calling Snap outputs the assistant's answer text and a finish_reason indicating completion ( like "stop" ). The Worker appends this answer to the message history and outputs the final messages payload.
10. AgentDriver ( completion ): The Driver receives the final output from the Worker's second iteration. The PipeLoop Snap sees that the LLM signaled no more steps ( finish condition met ), so it stops looping. Now the AgentDriver takes the final assistant message ( the weather answer ) and sends it as the bot's response back to Teams via the HTTP response. The pipeline will extract just the answer text to return to the user.
11. User sees the answer in Teams: The user's Teams chat now displays the Weather Agent's reply, for example: "It's currently 18°C and sunny in San Francisco." The conversation context ( question and answer ) can be stored via the ChatHistory pipeline for future reference. From the user's perspective, they asked a question and the bot answered naturally, with only a brief delay during which they saw the bot "typing" indicator.
Throughout this interaction, the typing indicator helps reassure the user that the agent is working on the request.
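The typing indicator mentioned above is simply an activity posted back to the Bot Framework connector. A minimal sketch of that call ( the service URL and conversation ID come from the incoming Teams payload, and the bearer token is the one obtained with the OAuth2 account described in the second part ):

```python
import requests

def send_typing_indicator(service_url: str, conversation_id: str, token: str) -> None:
    # Bot Framework connector: post a "typing" activity into the conversation
    url = f"{service_url.rstrip('/')}/v3/conversations/{conversation_id}/activities"
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {token}"},
        json={"type": "typing"},
        timeout=15,
    )
    resp.raise_for_status()

# Placeholder values; in the pipeline these are read from the incoming
# activity ( serviceUrl, conversation.id ) and from the OAuth2 account.
send_typing_indicator(
    "https://smba.trafficmanager.net/emea/", "CONVERSATION_ID", "ACCESS_TOKEN"
)
```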
The user-specific chat history ensures that if the user asks a follow-up like "How about tomorrow?", the agent can understand that "tomorrow" refers to the weather in San Francisco, continuing the context ( this would involve the LLM and pipelines using the stored history to know the city from the prior turn ). This completes the first part, which covered how the SnapLogic pipelines and AgentCreator framework enable an AI-powered chatbot to use tools and deliver real-time info. We saw how the Agent Driver + Worker architecture ( with iterative PipeLoop execution ) allows interactions where the LLM can call SnapLogic pipelines as functions. The no-code SnapLogic approach made it possible to integrate an LLM without writing custom code – we simply configured Snaps and pipelines. We now have a working AI Agent that we can use in SnapLogic; however, we are still missing the chatbot experience. In the second part, we'll shift to the integration with Microsoft Teams and Azure, to see how this pipeline is exposed as a bot endpoint and what steps are needed to deploy it in a real chat environment.
Building an AI Agent with SnapLogic AgentCreator using OpenAI and Microsoft Teams - Part 2

Integrating the AI Agent with Microsoft Teams via Azure Bot Service
The first part covered the creation of our agent's architecture using SnapLogic pipelines and AgentCreator. Now, we focus on connecting that pipeline to Microsoft Teams so end users can chat with it. This involves creating and configuring the Azure Bot Service as a bridge between Teams and our SnapLogic pipelines. We will walk through the prerequisites and setup.

Prerequisites for the Azure Bot Integration
To integrate the SnapLogic agent with Teams, ensure you have the following prerequisites in place:
- SnapLogic AgentCreator and pipelines: A SnapLogic environment where AgentCreator is enabled. The Weather Agent pipelines can be used as a working example. You'll also need to create the AgentDriver pipeline as a Triggered Task ( to obtain an endpoint URL accessible by the bot ).
- SnapLogic OAuth2 Account: An OAuth2 account which will be used in an HTTP Client to send the assistant's response back to the user. It is also used for simulating the "typing" indicator in the Teams chat between tool usage.
- Microsoft 365 Tenant with Teams: Access to a Microsoft tenant where you have permission to register applications and upload custom Teams apps. You'll need a Teams environment to test the bot ( this could be a corporate tenant or a developer tenant ).
- Azure Subscription: An Azure account with an active subscription to create resources ( specifically, an Azure Bot Service ). Also, ensure you have Azure Bot Channels Registration or Azure Bot resource creation rights.
- Azure AD App Registration: Credentials for the bot. We will register an application in Azure Active Directory to represent our bot ( this provides a Client ID and Client Secret that will be used by the Bot Service to authenticate ).
- Azure Bot Service resource: We will create an Azure Bot which will tie together the app registration and our messaging endpoint, and allow adding Teams as a channel.

Register an App in Azure AD for the Bot
The first step is to register an Azure AD application that will identify our bot and provide authentication to Azure Bot Service and Teams.
1. Create App registration: In the Azure Portal, navigate to Azure Active Directory > App Registrations and click "New registration". Give the app a name. For supported account types, you can choose "Accounts in this organizational directory only" ( Single tenant ) for simplicity, since this bot is intended for your organization's Teams. You do not need to specify a Redirect URI for this scenario.
2. Finalize registration: Click Register to create the app. Once created, you'll see the Application ( Client ) ID – copy this ID, as we'll need it later as the Bot ID and in the OAuth2 account.
3. Create a client secret: In your new app's overview, go to Certificates & secrets. Click "New client secret" to generate a secret key. Give it a description and a suitable expiration period. After saving, copy the Value of the client secret ( it will be a long string ). Save this secret somewhere secure now – you won't be able to retrieve it again after you leave the page. We'll provide this secret to the Bot Service so it can authenticate as this app, and we will also use it in the OAuth2 account in SnapLogic.
4. Gather Tenant ID: Since we chose a single-tenant app, we'll also need the Azure AD tenant ID. You can find this in the Overview of the app. Copy the tenant ID as well for later use.
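These three values are what the SnapLogic OAuth2 account will exchange for Bot Framework tokens. For reference, the exchange it performs is a standard client-credentials grant, sketched below ( tenant ID, client ID, and secret are placeholders; the scope shown is the Bot Framework scope configured in the next section ):

```python
import requests

TENANT_ID = "YOUR_TENANT_ID"          # from the app registration Overview page
CLIENT_ID = "YOUR_CLIENT_ID"          # Application ( client ) ID
CLIENT_SECRET = "YOUR_CLIENT_SECRET"  # the secret value saved earlier

token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

# Standard OAuth2 client-credentials grant against Azure AD
resp = requests.post(
    token_url,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://api.botframework.com/.default",
    },
    timeout=15,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
print(access_token[:20] + "...")  # used as the Bearer value when calling the connector
```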
At this point, you should have:
- Client ID ( application ID ) for the bot and the OAuth2 account
- Client secret for the bot ( stored securely ) and the OAuth2 account
- Tenant ID of our Azure AD
These will be used when setting up the Azure Bot Service so that it knows about this app registration.

Create an OAuth2 account
Now that we have the client ID, client secret and tenant ID all gathered from the app registration, we can create the OAuth2 account which will be used in an HTTP Client Snap that will send the "typing" indicator as well as the response from the agent.
1. Navigate to the "Manager" tab and locate the project folder where the agent pipelines are stored.
2. On the right side, click on the "+" icon to create a new account.
3. Choose "API Suite > OAuth2 Account".
4. Populate the client ID and client secret values from your app registration process.
5. Check the 'Send client data as Basic Auth header' and 'Header authenticated' settings.
6. Populate the authorization and token endpoints:
   - OAuth2 authorization endpoint: https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/authorize
   - OAuth2 token endpoint: https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/token
7. Change the "Grant type" to "client_credentials".
8. Add the scope to both "Token endpoint config" and "Authorization endpoint config"; the scope in our case is the following: https://api.botframework.com/.default
9. Check "Auto-refresh token".
10. Click "Authorize". If everything was set correctly in the previous steps, you should get redirected back to SnapLogic with a valid access token.
( Screenshot: example of an already configured OAuth2 account. )

Create the Azure Bot Service and Connect to SnapLogic
With the Azure AD app ready, we can create the actual bot resource that will connect to Teams and our SnapLogic endpoint:
1. Add Azure Bot resource: In the Azure Portal, search for the "Azure Bot" service and select Azure Bot. Choose Create to make a new Bot resource.
2. Configure Bot Settings: On the creation form, fill in:
   - Bot handle: A unique name for your bot.
   - Subscription and Resource Group: Select your Azure subscription and a resource group to contain the bot resource.
   - Location: Pick a region.
   - Pricing tier: Choose the Free tier ( F0 ) – it's more than sufficient for development and basic usage.
   - Microsoft App ID: Here, reuse the existing App Registration we created. There should be an option to choose an existing app – provide the Client ID of the app registration. This links the bot resource to our AD app.
   - App type: Select Single Tenant since our app registration is single-tenant. You might also need to provide the App secret ( Client Secret ) for the bot here during creation.
3. Create the Bot: Click Review + create and then Create to provision the bot service. Azure will deploy the bot resource. Once completed, go to the resource's page.
4. Configure messaging endpoint: This is a crucial step – we must point the bot to our SnapLogic pipeline. In the Azure Bot resource settings, find the Settings menu and navigate to Configuration. Populate the field for Messaging endpoint ( the URL that the bot will call when a message is received ). Here, paste the Trigger URL of your WeatherAgent_AgentDriver pipeline. To get this URL: in SnapLogic, you would have already created the AgentDriver pipeline as a Triggered Task.
That generates an endpoint URL of the form:
https://elastic.snaplogic.com/api/1/rest/slschedule/<org>/<proj>/WeatherAgent_AgentDriver
Example endpoint with an appended authorization query param as bearer_token:
https://elastic.snaplogic.com/api/1/rest/slschedule/myOrg/WeatherProject/WeatherAgent_AgentDriver?bearer_token=<bearer token>
Enter the URL exactly as given by SnapLogic, including the Bearer token value, and save the configuration. Now, when a Teams user messages the bot, Azure will send an HTTPS POST to this SnapLogic URL.
5. Add Microsoft Teams channel: Still in the Azure Bot resource, go to Channels. Add a new channel and select Microsoft Teams. This step registers the bot with Teams so that Teams clients can use it.
Now our bot service is set up with the SnapLogic pipeline as its backend. The AgentDriver pipeline is effectively the bot's webhook. The Azure Bot resource handles authentication with Teams and will forward user messages to SnapLogic and relay SnapLogic's responses back to Teams.

Packaging the bot for Teams ( App manifest )
At this stage, the bot exists in Azure, but to use it in Teams we need to package it as a Teams app, especially if we want to share it within the organization. This involves creating a Teams app manifest and icons, then uploading the package to Teams.
1. Prepare the Teams app manifest: The manifest is a JSON file describing your Teams app ( the bot ). Microsoft provides a schema for this, but you can download the manifest file from this example; make sure you replace the <APP ID> placeholders within it. The manifest consists of:
   - App ID: Use the bot's App ID ( Client ID of the registered app ).
   - App name, description: The name of the Teams app, for example "SnapLogic Agent".
   - Icons: Prepare two icon images for the bot – typically a color icon ( 192x192 PNG ) and an outline icon ( 32x32 PNG ). These will be used as the agent's avatar and in the Teams app catalog.
   The manifest may also include information like developer info, version number, etc. If using the Teams Developer Portal, it can guide you through filling these fields and will handle the JSON for you. Just ensure the Bot ID and scopes are correctly set.
2. Combine manifest and icons: Once your manifest file and icons are ready, put all three into a .zip file. For example, a zip containing: manifest.json, icon-color.png ( 192x192 ), icon-outline.png ( 32x32 ). Make sure the JSON inside the zip references the icon file names exactly as they are.
3. Upload the app to Teams: In Microsoft Teams, go to Apps > Manage your apps > Upload a custom app. Upload the zip file. Teams should recognize it as a new app. When added, it essentially registers the bot ID with the Teams client.
4. Test in Teams: Open a chat with your Weather Agent in Teams ( it should appear with the name and icon you provided ). Type a message, like "Hi" or a weather question: "What's the weather in New York?" The message will go out to Azure, which will call SnapLogic's endpoint. The SnapLogic pipelines will run through the logic ( as described in the first part ) and Azure will return the bot's reply to Teams. You should see the bot's answer appear in the chat. If you see the bot typing indicator first and then the answer, everything is working as expected!
( Screenshots: initial message and a response from the agent; typing indicator shown during agent execution; agent response after using the available tools. )
Now the Weather Agent is fully functional within Teams. It's essentially an AI-powered chat interface to a live weather API, all orchestrated by SnapLogic in the background.
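If you prefer to assemble the app package by hand rather than through the Teams Developer Portal, the sketch below shows the idea. The manifest fields are a trimmed illustration, not a complete or validated manifest against the Teams schema, and the icon files are assumed to exist locally:

```python
import json
import zipfile

APP_ID = "YOUR_BOT_APP_ID"  # the Azure AD application ( client ) ID

# Trimmed manifest; a real one must satisfy the full Teams app schema
manifest = {
    "manifestVersion": "1.16",
    "version": "1.0.0",
    "id": APP_ID,
    "name": {"short": "SnapLogic Agent", "full": "SnapLogic Weather Agent"},
    "description": {"short": "Weather queries via SnapLogic",
                    "full": "AI weather agent built with SnapLogic AgentCreator"},
    "developer": {"name": "Example Org", "websiteUrl": "https://example.com",
                  "privacyUrl": "https://example.com/privacy",
                  "termsOfUseUrl": "https://example.com/terms"},
    "icons": {"color": "icon-color.png", "outline": "icon-outline.png"},
    "accentColor": "#FFFFFF",
    "bots": [{"botId": APP_ID, "scopes": ["personal", "team"]}],
}

with zipfile.ZipFile("weather-agent-teams-app.zip", "w") as zf:
    zf.writestr("manifest.json", json.dumps(manifest, indent=2))
    zf.write("icon-color.png")    # 192x192 PNG, assumed present
    zf.write("icon-outline.png")  # 32x32 PNG, assumed present
```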
Benefits of SnapLogic and Teams for Conversational Agentic Interfaces
Integrating SnapLogic AgentCreator with Microsoft Teams via Azure Bot Service has several benefits:
- Fast prototyping: You can go from idea to a working bot in a very short time. There's no need to write custom bot code or host a web service – SnapLogic pipelines become your bot logic. In our example, building a weather query bot is as simple as wiring up a few Snaps and APIs. This accelerates development and allows quick iteration. Business users or integration developers can prototype new AI agents rapidly, responding to evolving needs without a heavy software development cycle.
- No-code integration and simplicity: SnapLogic provides out-of-the-box connectors to hundreds of systems and services. By using SnapLogic as the engine, your bot can tap into any of these with minimal effort. Want a bot that not only gives weather but also looks up flight data or CRM info? It's just another pipeline. The AgentCreator framework handles the AI part, while the SnapLogic platform handles the integration part ( connecting to external APIs and data sources ). This synergy makes it simple to create powerful bots that perform real actions – far beyond what an LLM alone could do. And it's all done with low/no-code configuration.
- Enhanced user experience: Delivering automation through a conversational interface in Teams meets users where they already collaborate. There's no new app to learn – users simply chat with a bot as if they're chatting with a colleague.
- Reusability: The modular design of the Weather Agent pipelines can serve as a template for other agents by swapping out the tools and prompts. The integration pattern remains the same. This showcases the reusability of the AgentCreator approach across various use cases.

Conclusion
By combining SnapLogic's generative AI integration capabilities with Microsoft's bot framework and Teams, we created a powerful AI Agent without writing any code at all. We used SnapLogic AgentCreator Snaps to handle the AI reasoning and tool calling, and used Azure Bot Service to connect that logic to Microsoft Teams. The real win is how quickly and easily this was achieved. In a matter of days or even hours, an enterprise can prototype a conversational AI agent that ties into live data and services. The speed of development, combined with secure integration into everyday platforms like Teams, delivers real business value. In summary, SnapLogic and Teams enable a new class of enterprise applications: ones that talk to you, using AI to bridge human requests to automated actions. The Weather Agent is a simple example, but it highlights how fast prototyping, integration simplicity, and enhanced user experience come together. I encourage you to try building your own SnapLogic Agent – whether it's for weather, workflows, or anything else – and unleash the power of conversational AI in your organization. Happy integrating, and don't forget your umbrella if the Weather Agent says rain is on the way!
Quick Vote for SnapLogic for the DBTA Readers' Choice Awards
Calling on our Integration Nation Community: this one's for you! We're in the running for Best Data Integration Solution at the DBTA Readers' Choice Awards - but we need your vote to win. ✅ It's quick. ✅ It's easy. ✅ It makes a difference. Vote now 👉 https://lnkd.in/e7hiSGr
Trying to connect to an external SFTP
I have generated an SSH key pair, shared the public key with the client, and set up a Binary SSH account in the Manager in order to connect to the client's SFTP. Additionally, I have had the Groundplex's external IPs whitelisted on the client side and on our side as well. After all this, I am getting the following error when I try to browse the path using the Directory Browser Snap:
error: Unable to create filesystem object for sftp://....
stacktrace: Caused by: com.jcraft.jsch.JSchException: Session.connect: java.net.SocketException: Connection reset Caused by: java.net.SocketException: Connection reset
reason: Failed to get SFTP session connected
resolution: Please check all properties and credentials
I am stuck completing the solution due to this error, so any help is very much appreciated. Thanks!
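One way to narrow this down is to test the same host and key from the Groundplex machine outside SnapLogic. A minimal sketch with paramiko ( hostname, port, username, and key path are placeholders ); if this also resets, the problem is likely network, firewall, or key acceptance on the server rather than the Binary SSH account configuration:

```python
import paramiko

HOST = "sftp.example.com"          # placeholder: client's SFTP host
PORT = 22
USERNAME = "integration_user"      # placeholder
KEY_PATH = "/path/to/private_key"  # private half of the generated key pair

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    # Attempt the same session connect that the Directory Browser performs
    client.connect(HOST, port=PORT, username=USERNAME,
                   key_filename=KEY_PATH, timeout=20)
    sftp = client.open_sftp()
    print(sftp.listdir("."))
    sftp.close()
finally:
    client.close()
```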
Data reconciliation solutions?
One of my company's use cases for SnapLogic today is replication of data from Salesforce into internal Kafka topics for use throughout the enterprise. There have been various instances of internal consumers of the Kafka data reporting missing records. Investigations have found multiple causes for these data drops. Some of the causes are related to behavior that Salesforce describes as "Working As Designed". Salesforce has recommended other replication architectures, but there are various concerns within my company about using them (license cost, platform load) ... and we might still end up with missing data. So, we're looking into data reconciliation / auditing solutions. Are there any recommendations on a tool that can:
* Identify record(s) where the record in Salesforce does not have a matching record (e.g. same timestamp) existing in Kafka
* Generate a message containing relevant metadata (e.g. record Id, Salesforce object, Kafka topic) to be sent to a REST endpoint / message queue for reprocessing
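As one illustration of the first requirement, a reconciliation pass can boil down to diffing record IDs between the two systems. A rough sketch using simple-salesforce and kafka-python ( the object, topic, field names, and time window are assumptions, and real volumes would call for incremental, keyed comparison rather than full scans ):

```python
import json
from kafka import KafkaConsumer
from simple_salesforce import Salesforce

# --- Collect record IDs currently in the Kafka topic (placeholder names) ---
consumer = KafkaConsumer(
    "salesforce.account",                      # hypothetical topic
    bootstrap_servers="kafka:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=10_000,                # stop iterating once caught up
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
kafka_ids = {msg.value["Id"] for msg in consumer}

# --- Collect record IDs from Salesforce for the same window ---
sf = Salesforce(username="user", password="pw", security_token="token")  # placeholders
result = sf.query_all("SELECT Id FROM Account WHERE SystemModstamp = LAST_N_DAYS:1")
sf_ids = {rec["Id"] for rec in result["records"]}

# --- Records present in Salesforce but missing from Kafka ---
missing = sf_ids - kafka_ids
for record_id in missing:
    # Metadata message that could go to a REST endpoint / queue for reprocessing
    print({"recordId": record_id, "object": "Account", "topic": "salesforce.account"})
```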