Community Update
To prevent further spamming incidents, we have changed our permissions for the members of this community. While anyone can visit our community, only verified customers, partners, and employees will have posting access. Email community@snaplogic.com to verify your account.
Community Activity
What is the Integration Nation
Community is more than just an online platform. It is the coming together of people with a shared interest to discuss common use cases, find solutions, and grow their expertise. At the same time, recognizing customers and individuals who go above and beyond helps develop leaders in the industry. This article describes the Integration Nation, SnapLogic's new community program.

dmiller, Former Employee

Expression Library Starter Pack
Download Link: Shared Libraries.zip (4.8 KB)

Description: These expression files were created as examples for integration professionals and as convenience functions for citizen developers. The expression files can be included in the organization's root-level shared folder or a project-space shared folder for ease of use. The included pipeline provides simple examples for many of the functions contained in these libraries.

Samples:
- lib.string.lpad('123', 5, '0') yields "00123"
- lib.date.fromEpoch(1234567890) yields "2009-02-13T23:31:30.000 UTC"
- lib.date.trunc(Date.now(), 'HH') yields the current date/time truncated to the hour

Release used: master-7317 - 4.21 GA

date.expr

Many new customers struggle with date manipulation in JavaScript. This expression library was created to provide some of the functionality they may see as "missing" in JavaScript.

Static Variables

Two static variables are defined:
- timezone - The timezone to be used by the functions in this expression library if a timezone is not provided.
  This should be set using the appropriate timezone values for the Date.toLocaleDateTimeString function.
- defaultFormat - The format string to be used by the functions in this expression library if a format string is not provided.

Functions
- format (Date[, FormatString[, TimeZone]]) - format a Date object, returned as a String. If FormatString is not specified, the "defaultFormat" static variable is used. If TimeZone is not specified, the "timezone" static variable is used.
- firstDayOfMonth (Date) - return a Date object for the first day of the month of the Date specified.
- lastDayOfMonth (Date) - return a Date object for the last day of the month of the Date specified.
- toEpoch (Date) - return a BigInt value representing the epoch time (number of seconds since 1-Jan-1970) of the Date specified.
- fromEpoch (seconds) - return a Date object calculated from the epoch time specified.
- getAge (Date[, AsOfDate]) - return an integer value representing the difference in years between AsOfDate and the Date specified. If AsOfDate is not specified, the current date/time is used.
- segmentHour (Date, numSegments) - return a Date object where the Minutes value has been set according to the number of segments specified. For example, if numSegments is 4, the minutes returned will be 00, 15, 30, or 45; if numSegments is 6, the minutes returned will be 00, 10, 20, 30, 40, or 50. Note that the minutes value is truncated (not rounded) to the segment, so with a numSegments of 4: minutes 0-14 → 00, 15-29 → 15, 30-44 → 30, 45-59 → 45.
- trunc (Date[, FormatString]) - return a Date object that has been truncated based on the desired format. Valid FormatString values: yyyy (truncate to year), MM (month), dd (day), HH (hour), mm (minute), ss (second).

string.expr

Many new customers struggle with formatting strings for specific purposes.
This expression library was created to provide some of the functionality they may see as "missing" in JavaScript.

Functions
- lpad (String, size[, fill-character]) - return a String value that has been left-padded to the specified size using the fill-character specified. The default fill character is a space. The String will be truncated to "size" if the original string is longer.
- rpad (String, size[, fill-character]) - return a String value that has been right-padded to the specified size using the fill-character specified. The default fill character is a space. The String will be truncated to "size" if the original string is longer.
- objectToKeyValuePairs (Object[, key/value separator[, data separator[, encode]]]) - return a String that contains key/value pairs joined from all fields in an object. If "key/value separator" is not specified, the default colon ":" is used. If "data separator" is not specified, the default comma "," is used. If "encode" is set to true, each data value is passed through encodeURIComponent.
- keyValuePairsToObject (String[, key/value separator[, data separator[, decode]]]) - return an Object parsed from the key/value pairs String. If "key/value separator" is not specified, the default colon ":" is used. If "data separator" is not specified, the default comma "," is used. If "decode" is set to true, each data value is passed through decodeURIComponent.
- flattenObject (Object, SubObjectName) - return a new object with the members of SubObjectName merged into Object; useful for flattening data after a Lookup or other snaps that move data into "original".
- numberWithCommas (Number[, Precision]) - return a String formatted from the specified Number, placing a comma "," at every third position. Decimal values are not formatted but will be rounded to the requested precision.
- bytesToFileSize (Bytes[, Precision]) - convert a number that represents a file size in bytes to a human-readable (concise) format, up to yottabyte notation.

xml.expr

The XML Generator does
not handle incoming data that is not XML-safe. This expression library provides simple functions to encode and decode String values.

Functions
- encode (String) - return a String in which all occurrences of XML special characters have been replaced with the appropriate escape sequences.
- decode (String) - return a String in which all occurrences of XML escape sequences have been replaced with the equivalent symbols.

koryknick, Employee

Advent of Code via SnapLogic IIP
I've been a pretty big fan of Advent of Code since I found out about it in 2019. When I started with SnapLogic in 2021, I figured it could be pretty cool to try to use the SnapLogic IIP to solve some (well, not all, but at least some) of the Advent of Code daily puzzles, mostly to learn how some of the snaps work, but also to get more experience designing pipelines, since I typically work more on individual snap development. This year, I figured I'd post about it here in the community and see if others have an interest in attempting to solve the daily puzzles on the SnapLogic IIP. I think a certain number of these problems ARE solvable via the IIP, and some aren't. My ground rules for considering a day solved are:

- Get the input into a pipeline in whatever way possible, either via file download and read, or via a Constant snap. (My posted examples will use the sample input with a Constant snap, but my final solutions typically use a file reader.)
- No use of the Script snap. If it can't be solved without a Script snap, it's considered unsolvable, but you'd be surprised what you can do with our snaps alone.
- No use of external services (databases, REST endpoints, etc.), as those are likely to involve some level of "cheating" similar to a Script snap.
- Basically, use only the transform, flow, and file reader/writer snaps (to read input files; create, delete, read, and write temporary files; and write final output files).
- Pipe Execs are allowed.

If you want to participate, feel free to join in on the conversation. I figure we can keep discussion to a single thread and do replies per day. Not sure how many might be interested in this, though.

What is Advent of Code?
From the website: Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as interview prep, company training, university coursework, practice problems, a speed contest, or to challenge each other. You don't need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.

If you want to join in, go to https://adventofcode.com and connect one of the authentication mechanisms. (I use GitHub, but you can auth with Google, Twitter, or Reddit as well.) Logging in to an account is required so that you can receive input specific to you. If you plan to join and want a leaderboard for this project, feel free to join my private leaderboard with the code 1645534-1249c834.

Tour the Integration Nation Community Platform
Welcome to the Integration Nation! This tutorial will help you get started and provides an overview of the community platform's features, areas, and offerings. Explore the various features of the Integration Nation community platform, get to know the new content areas, and discover how to get the most out of your membership.

You will learn:
- how to update your profile
- where to go for different information

Prerequisite

Ensure that you have a community account and are logged in before continuing. We recommend using your business account so that you can be granted access to customer or partner areas if applicable.

Your User Profile and Settings

Once you are logged in, click on the avatar icon in the upper right and select My profile to access your user profile. On your profile page, you can:
- Give yourself a shiny new avatar by clicking on the image in the hero art and either choosing one from the Community set we've created for you or uploading your own.
- See your community stats and badges earned.
- Manage your uploaded images.

Check out what else is on your profile, then head to **My settings**, also in the menu. On your first day, you want to focus on:
- Preferences > Preference Options for your timezone and whether or not to automatically subscribe to posts you participate in.
- Preferences > Privacy for setting who can see your private information and email.
- Subscriptions & Notifications > Notification Settings to manage in-community notifications and email notifications (there can be a lot).

FAQ

Visit the FAQ at the top of the page for information on using this community platform. If you need more information or can't find what you are looking for, post a question in the About this Community category.

Menu Bar and Categories

The menu bar across the top provides access to all the categories of our community.
Discussions

As the title indicates, this section is primarily where discussions are held:
- Product Category Discussions categories are where we talk about integration and automation at a big-picture level.
- Getting the Most Out of the SnapLogic Platform categories are where we discuss features and functionality of SnapLogic products.
- SnapLogic Technical Blog is for how-tos, best practices, and other technical information from SnapLogic's subject matter experts.
- Enhancement Requests (customers/partners only) focuses on discussing new functionality.
- User Experience Program gives you the chance to participate in user research studies to improve our products.
- Solutions for Your Business is the place to talk about end-to-end solutions for your industry or your department.
- SnapLabs is our brand-new innovation hub, designed to offer you a unique, early-access experience of our most innovative features and capabilities.

Learning

This section contains:
- sample exercises to get you started
- links to our training options
- access to our Architecture Center

Events

See what events SnapLogic is either hosting or attending, and start a conversation about what you learn. Note that the "Will you be attending?" area does not register you for an event; it is there as a reminder. You must complete the registration linked within the event's message.

User Groups

Looking to find other SnapLogic customers in your industry or region? User Groups will be the place to be. We are excited to build out this section; the creation and maintenance of new user groups will depend on interest. Have a suggestion for a new user group? Message the Community Manager, dmiller.

Nation Hub

This is where we support all the programs of the Integration Nation, from this community platform to our MVP Program and our customer/partner awards.
Support

Accessible only to customers and partners, this area provides information on submitting support tickets and will be the home of our knowledge base, based primarily on questions resolved here in the community.

Last updated: Oct 23, 2023 (dmiller, Former Employee)

Advanced Prompt Engineering
This guide covers some advanced prompt engineering techniques and how to apply them in SnapLogic GenAI App Builder to help you tackle more complex tasks and enhance overall performance. You will learn how to use system prompts, structure responses in JSON, create complex prompts, manage tokens, and consider prompt and context size. First, let's level set on what exactly prompt engineering is and why it's important.

What is Prompt Engineering?

At its core, prompt engineering is about designing the input (the "prompt") that you give to an AI model. The way you phrase your prompt can significantly impact the quality and relevance of the model's output. It's not just about what you ask the AI to do, but how you ask it.

Why is Prompt Engineering Important?

Even the most advanced AI models rely heavily on the prompts they receive. A well-crafted prompt can lead to insightful, accurate, and highly relevant responses, while a poorly structured prompt can result in vague, inaccurate, or irrelevant answers. Understanding the nuances of prompt engineering can help you maximize the effectiveness of your AI applications.

Prerequisites
- Basics of SnapLogic
- An OpenAI, Azure OpenAI, Amazon Bedrock Anthropic Claude, or Google Gemini account

System prompt

The system prompt is a special input that defines the LLM's behavior, tone, and boundaries before it engages with users. It establishes the context and sets the rules for interactions, ensuring that the assistant's responses align with the desired persona and goals. Imagine you're a travel agent assistant. Your job is to provide clients with tailored and precise travel recommendations. To do this effectively, it's essential to establish the LLM's behavior through the system prompt: defining the assistant's role, setting the appropriate tone and style, and including important instructions.
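Before wiring this up in a pipeline, it helps to see what the snap ultimately sends: the system prompt travels as the first message in the request payload, ahead of the user's question. A minimal sketch in Python (the helper function here is illustrative, not part of SnapLogic or any vendor API):

```python
def build_messages(system_prompt, user_prompt):
    """Combine a system prompt and a user prompt into the list-of-messages
    format used by most chat-completion APIs."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a travel agent assistant. Provide personalized travel "
    "recommendations based on user preferences. Maintain a friendly "
    "and conversational tone.",
    "I'm organizing a 3-day honeymoon in Japan. We're interested in "
    "cultural experiences, and nature.",
)
```

Everything the model knows about its persona comes from that first message; changing it changes the behavior of every subsequent reply.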
1. Drag the "OpenAI Chat Completion", "Azure OpenAI Chat Completion", "Anthropic Claude on AWS Messages", or "Google Gemini Generate" snap onto the canvas.
2. Select the "Account" tab and select your configured account.
3. Select the "Settings" tab to configure these fields.
4. Select a model: click the chat bubble icon to view the list of available models and select your preferred one.
5. Set "Prompt" to "I'm organizing a 3-day honeymoon in Japan. We're interested in cultural experiences, and nature."
6. Set "System Prompt": expand the "Advanced prompt Configuration" section and enter the system prompt "You are a travel agent assistant. Provide personalized travel recommendations based on user preferences. Maintain a friendly and conversational tone. If asked about a travel plan, include accommodations and restaurants."
7. Save and close the snap settings panel.
8. Validate the pipeline and see the result by clicking the preview data output.

To summarize the above example:

System prompt: You are a travel agent assistant. Provide personalized travel recommendations based on user preferences. Maintain a friendly and conversational tone. If asked about a travel plan, include accommodations and restaurants.

Question: I'm organizing a 3-day honeymoon in Japan. We're interested in cultural experiences, and nature.

Answer: That's so exciting! Japan is the perfect destination for a honeymoon filled with cultural experiences and beautiful scenery. For your 3-day trip, I recommend starting in Kyoto, known for its traditional temples and gardens. You can visit iconic sites like Fushimi Inari Shrine, Arashiyama Bamboo Grove, and Kinkaku-ji Temple. Be sure to catch a traditional tea ceremony or try on a kimono for a truly immersive experience. Next, head to Hakone for a taste of Japan's stunning natural beauty. You can explore the Hakone Open-Air Museum, take a relaxing dip in an onsen (hot spring), and enjoy a scenic cruise on Lake Ashi with views of Mt. Fuji.
For accommodations, I suggest staying in a traditional ryokan for an authentic Japanese experience. Ryokans offer tatami-matted rooms, futon beds, and kaiseki dinners featuring seasonal dishes. As for dining, make sure to try local specialties like sushi, ramen, and tempura. And don't miss out on sake, Japan's famous rice wine! Let me know if you'd like more details or assistance with booking accommodations and restaurants. Have a wonderful honeymoon in Japan! 🎎🌸🗾

The response adheres to the system prompt by providing personalized recommendations in a friendly tone, including details on accommodations and dining options.

JSON response

Structuring responses in JSON format is a crucial technique for enhancing clarity, maintaining consistency, and ensuring seamless integration with various systems, including front-end applications, APIs, and databases. For example, if you need to present information from the travel agent assistant example above in a predefined format on a website, generate a PDF, or connect with other services that facilitate booking hotels and activities, it is essential to format the prompt response as JSON. This approach ensures compatibility and smooth interaction across different platforms and services. Let's modify the system prompt from the previous example to produce output in a specific JSON format.

1. Click the Chat Completion snap to open its settings.
2. Update the system prompt to instruct the LLM to produce a JSON response: "You are a travel agent assistant. Provide a JSON response that includes destination, trip_duration, a list of activities, a list of hotels (with fields for name and description), and a list of restaurants (with fields for name, location, and description)."
3. Check the "JSON mode" checkbox. The snap will output a field named json_output that contains the parsed JSON object of the response.
4. Save and close the snap settings panel.
5. Validate the pipeline and let's see the result.
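Conceptually, JSON mode means the completion text is itself a JSON string, and the snap's json_output field is simply that string parsed into an object. The sketch below illustrates the idea with an abbreviated, made-up completion (the field names follow the system prompt above; the values are invented samples, not real snap output):

```python
import json

# An abbreviated completion string, shaped like the structure the
# system prompt requests (sample values are invented).
completion = """{
  "destination": "Japan",
  "trip_duration": "3 days",
  "activities": ["Cherry blossom viewing", "Tea ceremony"],
  "hotels": [{"name": "Sample Ryokan", "description": "Traditional inn"}],
  "restaurants": [{"name": "Sample Ramen", "location": "Kyoto",
                   "description": "Local ramen shop"}]
}"""

# What the snap exposes as json_output: the parsed object, ready for
# downstream use (mappers, PDF generation, booking systems, ...).
json_output = json.loads(completion)
```

Because json_output is a plain object rather than text, downstream snaps can address individual fields directly instead of re-parsing the model's reply.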
The prompt answer is a JSON string, and because JSON mode is enabled, the parsed JSON object can be found in the "json_output" field. The JSON response complies with the structure specified in the system prompt, ensuring that all necessary fields are included. The structured format supports seamless integration with downstream applications. For a travel agency, this capability allows the efficient generation of personalized itineraries, which can be used to populate web pages, generate PDF or Excel documents, send emails, or directly update travel booking systems, including querying flight availability and checking hotel options.

Complex prompt

Using a list of messages to incorporate conversation history helps maintain context in ongoing dialogues. This approach ensures responses are relevant and coherent, improving the overall flow of the conversation. By including previous interactions, it enhances continuity and user engagement, facilitating the model's ability to handle complex, multi-turn exchanges. This technique allows the model to generate more natural and accurate responses, especially when building on earlier details, resulting in a more seamless and intuitive conversation. Moreover, these messages can serve as examples of desired responses, showing the model how it should interact with the user. Each message contains a role and content. The common roles are:

- System: Provides the initial context, setting the tone and behavior for the LLM.
- User: Represents the user's input, guiding the conversation based on their queries or commands.
- Assistant/Model: Contains previous responses from the LLM or examples of desired behavior.

This section will guide you through the process of constructing a message list and using it as input for the LLM.
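Assembled by hand, the message payload this section builds looks roughly like the following (contents abbreviated; the list-of-role/content shape is the common chat-completion format that the advanced prompt output produces):

```python
messages = [
    {"role": "system",
     "content": "You are a travel agent assistant. Provide personalized "
                "travel recommendations based on user preferences."},
    {"role": "user",
     "content": "I am planning a trip to Japan in April. Can you help me "
                "find some tourist attractions?"},
    {"role": "assistant",
     "content": "Sure! Some tourist attractions in Japan during your trip "
                "in April are: Cherry Blossom Viewing, Fushimi Inari "
                "Shrine, Mount Fuji, ..."},
    # The new question omits location and time; the model recovers both
    # from the earlier turns above.
    {"role": "user",
     "content": "Can you tell me what the weather's going to be like?"},
]
```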
We'll create the following pipeline to make a travel agent assistant able to answer questions by leveraging the context from previous conversations. In this example, the user asks about Japan's attractions in April and later inquires about the weather without specifying a location or time. Let's create the pipeline and see how it works.

1. Drag the "JSON Generator" snap onto the canvas.
2. Click on the "JSON Generator" to open it, then click the "Edit JSON" button in the main Settings tab.
3. Highlight all the text from the template and delete it.
4. Paste in this text, which will be used as the user question:
   { "prompt": "Can you tell me what the weather's going to be like?" }
5. Click "OK" in the lower-right corner to save the prompt.
6. Save the settings and close the snap.
7. Drag the "OpenAI Prompt Generator" or "Azure OpenAI Prompt Generator" onto the canvas and connect it to the "JSON Generator".
8. Click on the "Prompt Generator" to open its settings and change the label to "System Prompt".
9. Click "Edit prompt" to open the prompt editor.
10. Highlight all the text from the template and delete it, then paste in this text, which we will use as the system prompt: "You are a travel agent assistant. Provide personalized travel recommendations based on user preferences."
11. Click "OK" in the lower-right corner to save the prompt.
12. Select the "Advanced prompt output" checkbox. The "User role" field will be populated.
13. Set the "User role" field to SYSTEM.
14. Save the settings and close the snap.
15. Drag a second "Prompt Generator" onto the canvas and connect it to the prior snap. This snap holds the previous user question. Using steps 8 to 14 as a guide, configure:
    - Label: User Message 1
    - Prompt editor: I am planning a trip to Japan in April. Can you help me find some tourist attractions?
    - User role: USER
16. Drag a third "Prompt Generator" onto the canvas and connect it to the prior snap. This snap holds the previous LLM answer. Using the same steps as a guide, configure:
    - Label: Assistant Message
    - Prompt editor: Sure! Some tourist attractions in Japan during your trip in April are: 1. Cherry Blossom Viewing 2. Fushimi Inari Shrine 3. Hiroshima Peace Memorial Park 4. Mount Fuji 5. Gion District. Let me know if you need more information or assistance with planning your trip!
    - User role: ASSISTANT
17. Drag a fourth "Prompt Generator" onto the canvas and connect it to the prior snap. This snap holds the current user question. Using the same steps as a guide, configure:
    - Label: User Message 2
    - Prompt editor: {{prompt}}
    - User role: USER
18. Drag the "Chat Completion" snap onto the canvas and connect it to "User Message 2".
19. Click on the "Chat Completion" to open its settings and select the account in the Account tab.
20. In the Settings tab, select the model name.
21. Check the "Use message payload" checkbox. The prompt generators build a list of messages in the "messages" field, and enabling "Use message payload" is necessary to use that list as input.
22. In the "Message payload" field that appears, set the value to $messages.
23. Save and close the settings panel.
24. Validate the pipeline and let's see the result. Click on the output view of "User Message 2" to see the message payload, which we constructed using the advanced mode of the Prompt Generator snap. Click on the output view of the "Chat Completion" snap to see the LLM response.

The result is: In April, the weather in Japan is generally mild and pleasant with cherry blossoms in full bloom.
The temperatures are typically around 10-20°C (50-68°F), and there may be occasional rain showers. It's a great time to explore outdoor attractions and enjoy the beautiful spring scenery. Make sure to pack layers and an umbrella just in case!

The model delivered weather information for Japan in April even though the last user query did not specify a location or time. This is possible because the model uses the entire conversation history to understand the context and flow of the dialogue. Furthermore, the model echoed the user's question before responding, maintaining a consistent conversational style. To achieve the best results, make sure your message list is complete and well-organized, as this will help the LLM generate more relevant and coherent responses, enhancing the quality of the interaction.

Tokens

Tokens are units of text, including words, character sets, or combinations of words and punctuation, that language models use to process and generate language. They can range from single characters or punctuation marks to entire words or parts of words, depending on the model. For instance, the word "artificial" might be split into tokens like "art", "ifi", and "cial". The total number of tokens in a prompt affects the model's response capability. Each model has a maximum token limit, which includes both the input and output. For instance, GPT-3.5-Turbo has a limit of 4,096 tokens, while GPT-4 has a limit of 8,192 tokens, or 32,768 tokens for the 32k context version. Effective token management ensures responses remain within these limits, improving efficiency, reducing costs, and enhancing accuracy. To manage token usage effectively, the maximum tokens parameter is essential. It sets a limit on the number of tokens the model can generate, ensuring the combined total of input and output stays within the model's capacity.
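Exact token counts come from model-specific tokenizers (libraries such as tiktoken expose the real ones), but a rough rule of thumb of about four characters per token is enough to sketch the budgeting arithmetic. The helpers below are an illustrative approximation, not a real tokenizer:

```python
def estimate_tokens(text, chars_per_token=4):
    """Rough token estimate using the ~4-characters-per-token heuristic."""
    return max(1, len(text) // chars_per_token)

def remaining_output_budget(prompt, model_limit):
    """Tokens left for the response once the (estimated) prompt is counted,
    since input and output share a single context window."""
    return model_limit - estimate_tokens(prompt)

prompt = "Describe photosynthesis in simple terms."
# e.g. a 4,096-token model such as GPT-3.5-Turbo
budget = remaining_output_budget(prompt, model_limit=4096)
```

Choosing a maximum tokens value at or below this budget guarantees the combined request stays inside the context window.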
Setting a maximum tokens parameter has several benefits: it prevents responses from becoming excessively long, reduces response times by generating more concise outputs, optimizes performance, and minimizes costs by controlling token usage. Additionally, it enhances the user experience by providing clear, focused, and quicker responses.

Use case examples:
- Customer support chatbots: By setting maximum tokens, you ensure that the chatbot's responses are brief and focused, providing quick, relevant answers to user inquiries without overwhelming them with excessive detail. This enhances user experience and keeps interactions efficient.
- Content summarization: Helps generate concise summaries of long texts, suitable for applications with space constraints, such as mobile apps or notifications.
- Interactive storytelling: Controls the length of narrative segments or dialogue options, maintaining engaging and well-paced storytelling.
- Product descriptions: Generates brief and effective product descriptions for e-commerce platforms, maintaining relevance and fitting within space constraints.

Let's walk through how to configure the maximum tokens in the SnapLogic Chat Completion snap using the prompt "Describe photosynthesis in simple terms." We'll see how the LLM behaves with and without the maximum tokens setting.

1. Drag the "OpenAI Chat Completion", "Azure OpenAI Chat Completion", or "Google Gemini Generate" snap onto the canvas.
2. Select the "Account" tab and select your configured account.
3. Select the "Settings" tab.
4. Select your preferred model.
5. Set the prompt to "Describe photosynthesis in simple terms."
6. Save the snap settings and validate the pipeline to see the result.

In the result, the "usage" field provides the token consumption details:
- prompt_tokens: tokens used by the input
- completion_tokens: tokens used to generate the response
- total_tokens: the combined number of tokens used for both the input prompt and the generated response

We can see that the response is quite long: the tokens used for the response (completion_tokens) number 241. Let's set the maximum tokens and check the result again.

1. Expand the "Model parameters" section.
2. Set "Maximum tokens" to 100.
3. Save the snap settings and validate the pipeline to see the result.

The result is more concise than the output without a maximum tokens setting. In this case, only 84 completion_tokens are used, indicating a shorter and more focused response. Using maximum tokens effectively ensures that responses are concise and relevant, optimizing both performance and cost-efficiency. By setting this limit, you can prevent excessively long outputs, reduce response times, and maintain clarity in the generated content. To achieve optimal results, align the maximum tokens setting with your specific needs, such as the desired response length and application requirements. Regularly review and adjust this parameter to balance brevity with completeness, ensuring that the outputs remain useful and within operational constraints.

Prompt size considerations

In the previous section, we covered techniques for managing response size to stay within token limits. Now we turn our focus to prompt size and context considerations. By ensuring that both prompts and context are appropriately sized, you can improve the accuracy and relevance of model responses while staying within token limits. Here are some techniques for managing prompt and context size:

Keep prompts clear and concise

By making prompts clear and direct, you reduce token usage, which helps keep the prompt within the model's limits. Focusing on essential information and removing unnecessary words enhances the accuracy and relevance of the model's responses.
Additionally, specifying the desired output length further optimizes the interaction, preventing excessively long responses and improving overall efficiency.

Example prompt: "Could you please provide a detailed explanation of how the process of photosynthesis works in plants, including the roles of chlorophyll, sunlight, and water?"

Better prompt: "Explain the process of photosynthesis in plants, including the roles of chlorophyll, sunlight, and water, in about 50 words."

Splitting complex tasks into simpler prompts

Breaking down complex tasks into smaller, more manageable subtasks not only reduces the size of each individual prompt but also enables the model to process each part more efficiently. This approach ensures that each prompt stays within token limits, resulting in clearer and more accurate responses.

Example complex task: "Write a detailed report on the economic impact of climate change in developing countries, including statistical analysis, case studies, and policy recommendations."

Simplified prompts:
- "Summarize the economic impact of climate change in developing countries."
- "Provide a statistical analysis of how climate change affects agriculture in developing countries."
- "List case studies that demonstrate the economic consequences of climate change in developing countries."
- "Suggest policy recommendations for mitigating the economic impact of climate change in developing countries."

Use a sliding window for chat history

From the complex prompt section, we know that including the entire chat history helps maintain context, but it can also quickly use up available tokens. To optimize prompt size, employ a sliding-window approach: include only a portion of the chat history, focusing on recent and relevant exchanges, to keep the prompt within token limits.

Summarize contexts

Use a summarization technique to condense context into a brief summary.
Instead of including extensive conversation history, create a concise summary that captures the essential information. This approach reduces token usage while retaining key details for generating accurate responses. By applying these techniques, you can effectively manage prompt and context size, ensuring that interactions remain efficient and relevant while optimizing token usage.

ChompooPanida · 10 months ago · Employee

Performing an Action when there is no data
A common integration pattern is to do something when no data is received. For example, we might read a file, parse it, and find that no records meet some filter criteria. As a result, we might send an email or insert a ticket into a ticket management system like ServiceNow. However, in SnapLogic this can be more difficult than it initially seems because of the streaming architecture. In fact, many Snaps will not execute without input documents, which is rather hard to accomplish when there is no document to flow through. So, how can we take an action even though there’s no document to run the Snap? The trick is to make a document and force it into the stream with a Join Snap (set to Merge). Note in this figure that even though nothing flows into the top view of the Join Snap, it still produces an output. This enables us to use the Router Snap to check whether a field that we know will exist in the source does in fact exist. If it does, we know that data has passed through the Filter Snap and then been merged with the Sequence data. If it does not, we know that only the Sequence data has passed through, and therefore nothing made it through the Filter. Only one of these views will have anything pass through it. The magic here is the Merge option in the Join Snap: as long as it is receiving data on a view, it will output, even if it has nothing to ‘join’ to. Meanwhile, the Sequence Snap will always output the number of documents that you set it to. In this case, only one is required. (The pipeline is attached here: Do something after no doc filter_2017_04_26.slp (11.1 KB))

ncrouch · 9 years ago · Former Employee

Box's Snaplogic Knowledge Base
Here is a Box folder containing utilities and endpoint-specific pipelines.

aleung · 9 years ago · Contributor III

SnapGPT Beginner's Guide
What is SnapGPT?

SnapGPT is a generative AI solution in early release, currently available only to users who have been invited to SnapLabs. It is built right into the SnapLogic web interface (screenshot below), so you can now prompt SnapGPT for a wide variety of help: creating Pipelines, configuring Snaps, suggestions about which Snap to use, and much more.

How can I get started with SnapGPT?

In this section we cover a few examples that should be repeatable as a way to send your first few prompts to SnapGPT and observe the outcome. After that you can explore our SnapGPT Prompt Catalog, which contains even more prompts to copy/paste into SnapGPT as you explore. One caveat: as a generative AI solution that is always learning, it is possible that outcomes will change over time. When SnapGPT creates a Pipeline for you, it will be a bit like importing a Pipeline, in the sense that a wizard will help you select accounts and finalize the Pipeline.

Log in at https://snapgpt.labs.snaplogic.com
If SnapGPT is not shown by default, press the SnapGPT button in the upper-right corner of the SnapLogic web interface to make it visible; to make it always visible, click your name in the upper-right corner > User Settings > Opt-in Features > check the box for “Open SnapGPT by Default”
A new box will appear on the right-hand side of the SnapLogic web interface for you to start typing to SnapGPT

Examples: See SnapGPT in Action

Now let’s talk about getting your feet wet, hands dirty, or whatever saying floats your boat. Below are several examples you can use to start exploring SnapGPT; they should be precise enough to yield consistent results.

Example 1: Create a pipeline that pulls Salesforce Opportunities

Our first example will generate a short but complete Pipeline for us.
With any generative AI, SnapGPT included, it is important to remember that the more specific you are with the prompt, the more accurate the response (or, in this example, the Pipeline) you will receive.

Prompt: “Create a Pipeline using Salesforce Read to fetch my Opportunities, Filter out any opportunities outside of the last fiscal quarter, then write them to Snowflake.”

Here is a screenshot of the short Pipeline created by SnapGPT that closely resembles the prompt we provided. Inside the Filter Snap we can see that SnapGPT created an expression to filter on the $CloseDate field for us:

Example 2: Ask for help identifying which Snap to use

At some point we were all new to using SnapLogic, and we learned it from CSM-led training, trial and error, reviewing existing pipelines, etc. What we did not have was an always-on AI assistant ready to answer our questions (we still love you, Iris, and wouldn’t be here without you!). This example shows how SnapGPT can be prompted with natural language to tell us exactly which Snap we need.

Prompts:
“What snap can I use to remove records from my pipeline based on a given condition?”
“Which snap acts like a case statement or switch to allow me to move records down different pathways based on a condition?”

Example 3: Ask for help learning when to use one Snap over another

Another example of using SnapGPT for educational purposes or documentation skimming is to ask it when you might want to use one Snap instead of another.

Prompt: “When would I need to use the Salesforce SOQL snap instead of the Salesforce Read snap?”

Example 4: Generate sample data

We can also use SnapGPT to generate sample data, for those times when we need to get started on a business process and show some results but don’t yet have access to the source system.
Prompt: “Create a single-snap pipeline with a JSON Generator that has 10 example Salesforce Lead records”

Example 5: Fetch exchange data from a third-party API

It is also possible to use SnapGPT to pull data from a third-party site, such as exchange rate data.

Prompt: “Fetch exchange rate data from the European Central Bank and save it to a JSON file”

What should I be aware of when using SnapGPT?

As with any early-access release of software, especially generative AI that is always learning, there are some key points to keep in mind as you explore SnapGPT and share feedback with the SnapLogic team (including any previously mentioned and/or typical disclaimers about using ChatGPT or SnapGPT):

SnapGPT may generate Pipelines with unnecessary Snaps (like kids overpacking to visit grandma’s house!)
SnapGPT depends on ChatGPT availability, so there are times when you might see a response like this:

What if I have questions?

Our goal is to provide several ways to interact with our team, which we’ve broken out below.

Community: Use the SnapLogic Community’s locked SnapLabs category, the same category you should be reading this content from (please do not post on the public forums yet, since this is a limited release at this time).
Office hours: Roger Sramkoski, one of our Sr. Technical Marketing Managers, will be setting up office hours once or twice a week. These will be purely optional, with minimal agendas, so we can focus on open conversations.
Email: You can also contact Roger Sramkoski directly at rsramkoski@snaplogic.com

RogerSramkoski · 3 years ago · Employee

Error Handling in SnapLogic - InterWorks
Hello! We have published a blog post on error handling and best practices on our website; the URL is below. It covers most of what you need to know about error handling and the best practices to implement in your pipelines. Please feel free to leave feedback in the comments section, and if you have any further questions you can contact me.

https://interworks.com.mk/error-handling-in-snaplogic/

Andrej_Bogdanov · 7 years ago · New Contributor III

Infosys: Modernising the Client's Landscape Containing 500+ Legacy Interfaces
2022 Partner Innovation Award Nomination

What were the underlying reasons or business implications for the need to innovate and transform the business?

The client organisation was carved out from a large conglomerate and inherited a large set of legacy applications built on disparate legacy technologies. The client wanted to modernise the landscape and utilise a platform-led approach in a new cloud-based setup.

The key challenges were:
Heterogeneous integration landscape with 500+ interfaces spread across various legacy technologies like SAP PI, webMethods, and Cast Iron, with end-of-life approaching fast
Minimal documentation of the existing landscape
High maintenance cost of several operative environments with respect to platform and application support
Challenges with skill availability on legacy platforms, leading to high cost and business continuity risk
High license and infrastructure cost
Selecting the right platform to meet future needs
Migrating 500+ interfaces involving multiple internal stakeholders, external vendors, and third parties on an aggressive timeline

Which customer did you implement this on?

Siemens Energy Global GmbH

Describe your strategy and execution to solve the business challenge.

Considering the business challenges and criticality, as well as the technical complexity, Infosys and Siemens Energy worked in close collaboration to devise a robust strategy and execution plan.
Strategy
After evaluating various platforms, SnapLogic was shortlisted as the platform of choice for its robust capabilities
The project timeline was aligned to the license expiry dates of the legacy integration platforms
A wave-wise approach was finalised based on business criticality, application availability, and technology grouping
Ensure zero business impact, with on-par performance and no end-system impact
Feasibility checks to confirm connectivity with various applications
Focus on maximising the use of standard features and reducing customisation for better maintainability
Rationalisation of the number of interfaces for better performance and cost reduction

Execution
Cloud and on-premise application integrations were performed using the various Snaps available
Built-in Snaps were used to connect to applications like Workday, SAP HANA, etc.
REST/SOAP connectors were used for the integration of non-standard applications to ensure security
Security aspects of internal and third-party apps/vendors were handled using various authentication mechanisms
Real-time vs. batch processing requirements were met using the various connectors and supporting parameters provided by SnapLogic
The SnapLogic Dashboard was used extensively for optimised monitoring of pipeline executions
Parallel processing of threads was used for improved performance
SnapLogic architects and SMEs were consulted on architecture, design, and performance-related aspects

Who was and how were they involved in building out the solution?

To migrate 500+ legacy interfaces, a peak team of 25 people from Infosys was involved, with support from Siemens Energy integration experts and application teams. When we encountered issues that required support from the SnapLogic product team, regular interactions were organised to find suitable solutions.

What were the business results after executing the strategy?
Siemens Energy achieved the following business results with the successful execution of the migration program:
Number of interfaces reduced by approximately one-third in the SnapLogic landscape
Technical complexity reduced from four platforms to a single platform, leading to significant cost reductions for platform/application support and skills
A modern cloud-based setup with assured future scalability
Introduction of APIs
Improved predictability of business continuity

What was the ROI the customer gained from executing the strategy?

Below is the overall ROI gained by the customer from the success of the migration program:
~50% reduction in license and infrastructure cost
~30% projected cost reduction for platform and application support

Anything else you would like to add?

The following were the key highlights of the migration program:
There was a focus on maximising the use of standard features and reducing customisation for better maintainability
Extremely complex requirements were implemented with standard SnapLogic features to eliminate the existing custom code
A component tracker was introduced as an innovative way to maintain and access information about interfaces in a single place

dmiller · 3 years ago · Former Employee
Getting Started
Here are some links to help you get quickly familiarized with the Integration Nation community platform.
Top Content
Recent Blogs
Why Security is Essential for Generative AI Applications
As generative AI applications transition from prototypes to enterprise-grade solutions, ensuring security becomes non-negotiable. These applications often interact with sensitive user data, internal databases, and decision-making logic that must be protected from unauthorized access. Streamlit, while great for quickly developing interactive AI interfaces, lacks built-in access control mechanisms. Therefore, integrating robust authentication and authorization workflows is critical to safeguarding both the user interface and backend APIs.
Overview of the AgentCreator + Streamlit Architecture
This guide focuses on securing a generative AI-powered Sales Agent application built with SnapLogic AgentCreator and deployed via Streamlit. The application integrates Salesforce OAuth 2.0 as an identity provider and secures its backend APIs using SnapLogic API Management. Through this setup, only authorized Salesforce users from a trusted domain can access the application, ensuring end-to-end security for both the frontend and backend.
Understanding the Application Stack
Role of SnapLogic's AgentCreator Toolkit
The SnapLogic AgentCreator Toolkit enables developers and sales engineers to build sophisticated AI-powered agents without having to manage complex infrastructure. These agents operate within SnapLogic pipelines, making it easy to embed business logic, API integrations, and data processing in a modular way. For example, a sales assistant built with AgentCreator and exposed as an API using Triggered Tasks can pull real-time CRM data, generate intelligent responses, and return them via a clean web interface.
Streamlit as User Interface
On the frontend, Streamlit is used to build a simple, interactive web interface for users to query the Sales Agent.
Importance of API Management in AI Workflows
Once these agents are exposed via HTTP APIs, managing who accesses them—and how—is crucial. That’s where SnapLogic API Management comes in. It provides enterprise-grade tools for API publishing, securing endpoints, enforcing role-based access controls, and monitoring traffic. These features ensure that only verified users and clients can interact with your APIs, reducing the risk of unauthorized data access or abuse.
However, the real challenge lies in securing both ends:
The Streamlit UI, which needs to restrict access to authorized users.
The SnapLogic APIs, exposing the AgentCreator Pipelines which must validate and authorize each incoming request.
OAuth 2.0 Authentication: Fundamentals and Benefits
What is OAuth 2.0?
OAuth 2.0 is an open standard protocol designed for secure delegated access. Instead of sharing credentials directly, users grant applications access to their resources using access tokens. This model is particularly valuable in enterprise environments, where central identity management is crucial. By using OAuth 2.0, applications can authenticate users through trusted Identity Providers (IDPs) while maintaining a separation of concerns between authentication, authorization, and application logic.
Why Use Salesforce as the Identity Provider (IDP)?
Salesforce is a robust identity provider that many organizations already rely on for CRM, user management, and security. Leveraging Salesforce for OAuth 2.0 authentication allows developers to tap into a pre-existing user base and organizational trust framework. In this tutorial, Salesforce is used to handle login and token issuance, ensuring that only authorized Salesforce users can access the Streamlit application. This integration also simplifies compliance with enterprise identity policies such as SSO, MFA, and domain-based restrictions.
To address the authentication challenge, we use the OAuth 2.0 Authorization Code Flow, with Salesforce acting as both the Identity and Token Provider.
Here is Salesforce’s official documentation on OAuth endpoints, which is helpful for configuring your connected app.
🔒 Note: While Salesforce is a logical choice for this example—since the Sales Agent interacts with Salesforce data—any OAuth2-compliant Identity Provider (IDP) such as Google, Okta, or Microsoft Entra ID (formerly Azure AD) can be used. The core authentication flow remains the same, with variations primarily in OAuth endpoints and app registration steps.
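To make the flow concrete, here is a minimal sketch of the code-for-token exchange that the OAuth client library performs behind the scenes. The helper name and placeholder values are ours, not part of any library API; the token endpoint shown in the comment is the standard Salesforce one from its OAuth documentation.

```python
def build_token_request(code: str, client_id: str, client_secret: str,
                        redirect_uri: str) -> dict:
    """Build the form body POSTed to the token endpoint to exchange an
    authorization code for tokens (OAuth 2.0 Authorization Code grant)."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }

# In a real flow, the library POSTs this for you, roughly:
# import requests
# resp = requests.post("https://login.salesforce.com/services/oauth2/token",
#                      data=build_token_request(code, cid, secret, uri))
# tokens = resp.json()  # contains access_token, id_token, ...
```

In this tutorial, the streamlit-oauth package handles this exchange for us; the sketch is only to show what the redirect-and-exchange dance amounts to.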
Architecture Overview and Security Objectives
Frontend (Streamlit) vs Backend (SnapLogic APIs)
The application architecture separates the frontend interface and backend logic. The frontend is built using Streamlit, which allows users to interact with a visually intuitive dashboard. It handles login, displays AI-generated responses, and captures user inputs. The backend, powered by SnapLogic's AgentCreator, hosts the core business logic within pipelines that are exposed as APIs. This separation ensures flexibility and modular development, but it also introduces the challenge of securing both components independently yet cohesively.
Threat Model and Security Goals
The primary security threats in such a system include unauthorized access to the UI, data leaks through unsecured APIs, and token misuse. To mitigate these risks, the following security objectives are established:
Authentication: Ensure only legitimate users from a trusted identity provider (Salesforce) can log in.
Authorization: Grant API access based on user roles and domains, verified via SnapLogic APIM policies.
Token Integrity: Validate and inspect access tokens before allowing backend communication, using SnapLogic APIM policies.
Secret Management: Store sensitive credentials (like Client ID and Secret) securely using Streamlit's secret management features.
This layered approach aligns with enterprise security standards and provides a scalable model for future generative AI applications.
Authentication & Authorization Flow
Here’s how we securely manage access:
1. Login via Salesforce:
Users are redirected to Salesforce’s login screen.
After successful login, Salesforce redirects back to the app with an access token.
The token and user identity info are stored in Streamlit’s session state.
2. Calling SnapLogic APIs:
The frontend sends requests to SnapLogic’s triggered task APIs, attaching the Salesforce access token in the Authorization HTTP Header.
3. Securing APIs via SnapLogic Policies:
Callout Authenticator Policy: Validates the token by sending it to Salesforce’s token validation endpoint, as Salesforce tokens are opaque and not self-contained like JWTs.
AuthorizeByRole Policy: After extracting the user’s email address, this policy checks if the domain (e.g., @snaplogic.com) is allowed. If so, access is granted.
Below you can find the complete OAuth 2.0 Authorization Code Flow, enhanced with the token introspection and authorization flow:
This setup ensures end-to-end security, combining OAuth-based authentication with SnapLogic’s enterprise-grade API Management capabilities. In the following sections, we’ll walk through how to implement each part—from setting up the Salesforce Connected App to configuring policies in SnapLogic—so you can replicate or adapt this pattern for your own generative AI applications.
Step 1: Set Up Salesforce Connected App
Navigate to Salesforce Developer Console
To initiate the OAuth 2.0 authentication flow, you’ll need to register your application as a Connected App in Salesforce. Begin by logging into your Salesforce Developer or Admin account. From the top-right gear icon, navigate to Setup → App Manager. Click on “New Connected App” to create a new OAuth-enabled application profile.
Define OAuth Callback URLs and Scopes
In the new Connected App form, set the following fields under the API (Enable OAuth Settings) section:
Callback URL: This should be the URL of your Streamlit application (e.g., https://snaplogic-genai-builder.streamlit.app/Sales_Agent).
Selected OAuth Scopes: Include at least openid, email, and profile. You may also include additional scopes depending on the level of access required.
Ensure that the “Enable OAuth Settings” box is checked to make this app OAuth-compliant.
Retrieve Client ID and Client Secret
After saving the app configuration, Salesforce will generate a Consumer Key (Client ID) and a Consumer Secret. These are crucial for the OAuth exchange and must be securely stored. You will use these values later when configuring the Streamlit OAuth integration and environmental settings. Do not expose these secrets in your codebase or version control.
📄 For details on Salesforce OAuth endpoints, see: 👉 Salesforce OAuth Endpoints Documentation
Step 2: Integrate OAuth with Streamlit Using streamlit-oauth
Install and Configure streamlit-oauth Package
To incorporate OAuth 2.0 authentication into your Streamlit application, you can use the third-party streamlit-oauth package. This package abstracts the OAuth flow and simplifies integration with popular identity providers like Salesforce. To install it, run the following command in your terminal:
pip install streamlit-oauth
After installation, you'll configure the OAuth2Component to initiate the login process and handle token reception once authentication is successful.
Handle Access and ID Tokens Securely
Once users log in through Salesforce, the app receives an Access Token and an ID token. These tokens should never be exposed in the UI or logged publicly. Instead, store them securely in st.session_state, Streamlit's native session management system. This ensures the tokens are tied to the user's session and can be accessed for API calls later in the flow.
Store Credentials via Streamlit Secrets Management
Storing secrets such as CLIENT_ID and CLIENT_SECRET directly in your source code is a security risk. Streamlit provides a built-in Secrets Management system that allows you to store sensitive information in a .streamlit/secrets.toml file, which should be excluded from version control.
Example:
# .streamlit/secrets.toml
SF_CLIENT_ID = "your_client_id"
SF_CLIENT_SECRET = "your_client_secret"
In your code, you can access these securely:
CLIENT_ID = st.secrets["SF_CLIENT_ID"]
CLIENT_SECRET = st.secrets["SF_CLIENT_SECRET"]
Step 3: Manage Environment Settings with python-dotenv
Why Environment Variables Matter
Managing environment-specific configuration is essential for maintaining secure and scalable applications. In addition to storing sensitive credentials using Streamlit’s secrets management, storing dynamic OAuth parameters such as URLs, scopes, and redirect URIs in an environment file (e.g., .env) allows you to keep code clean and configuration flexible. This is particularly useful if you plan to deploy across multiple environments (development, staging, production) with different settings.
Store OAuth Endpoints in .env Files
To manage environment settings, use the python-dotenv package, which loads environment variables from a .env file into your Python application. First, install the library:
pip install python-dotenv
Create a .env file in your project directory with the following format:
SF_AUTHORIZE_URL=https://login.salesforce.com/services/oauth2/authorize
SF_TOKEN_URL=https://login.salesforce.com/services/oauth2/token
SF_REVOKE_TOKEN_URL=https://login.salesforce.com/services/oauth2/revoke
SF_REDIRECT_URI=https://your-streamlit-app-url
SF_SCOPE=id openid email profile
Then, use the dotenv_values function to load the variables into your script:
from dotenv import dotenv_values
env = dotenv_values(".env")
AUTHORIZE_URL = env["SF_AUTHORIZE_URL"]
TOKEN_URL = env["SF_TOKEN_URL"]
REVOKE_TOKEN_URL = env["SF_REVOKE_TOKEN_URL"]
REDIRECT_URI = env["SF_REDIRECT_URI"]
SCOPE = env["SF_SCOPE"]
This approach ensures that your sensitive and environment-specific data is decoupled from the codebase, enhancing maintainability and security.
Step 4: Configure OAuth Flow in Streamlit
Define OAuth2 Component and Redirect Logic
With your environment variables and secrets in place, it’s time to configure the OAuth flow in Streamlit using the OAuth2Component from the streamlit-oauth package. This component handles user redirection to the Salesforce login page, token retrieval, and response parsing upon return to your app.
from streamlit_oauth import OAuth2Component

oauth2 = OAuth2Component(
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
    authorize_url=AUTHORIZE_URL,
    token_url=TOKEN_URL,
    redirect_uri=REDIRECT_URI
)

# create a button to start the OAuth2 flow
result = oauth2.authorize_button(
    name="Log in",
    icon="https://www.salesforce.com/etc/designs/sfdc-www/en_us/favicon.ico",
    redirect_uri=REDIRECT_URI,
    scope=SCOPE,
    use_container_width=False
)
This button initiates the OAuth2 flow and handles redirection transparently. Once the user logs in successfully, Salesforce redirects them back to the app with a valid token.
Handle Session State for Tokens and User Data
After authentication, the returned tokens are stored in st.session_state to maintain a secure, per-user context. Here’s how to decode the token and extract user identity details:
import base64
import json

if result:
    # decode the id_token and get the user's email address
    id_token = result["token"]["id_token"]
    access_token = result["token"]["access_token"]
    # verifying the signature is an optional extra security step
    payload = id_token.split(".")[1]
    # add padding to the payload if needed
    payload += "=" * (-len(payload) % 4)
    # JWT segments are base64url-encoded, so decode with urlsafe_b64decode
    payload = json.loads(base64.urlsafe_b64decode(payload))
    email = payload["email"]
    username = payload["name"]
    # store the token and its parts in session state
    st.session_state["SF_token"] = result["token"]
    st.session_state["SF_user"] = username
    st.session_state["SF_auth"] = email
    st.session_state["SF_access_token"] = access_token
    st.session_state["SF_id_token"] = id_token
    st.rerun()
else:
    st.write(f"Congrats **{st.session_state.SF_user}**, you are logged in now!")
    if st.button("Log out"):
        cleartoken()
        st.rerun()
This mechanism ensures that the authenticated user context is preserved across interactions, and sensitive tokens remain protected within the session.
The username displays in the UI after a successful login. 😀
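The cleartoken() helper called on logout is not shown in the snippet above. A minimal version (a sketch; the session-key names are taken from the snippet, and we pass the state mapping in explicitly so the helper stays testable) simply drops the Salesforce entries:

```python
# Session-state keys used by the login snippet above
SF_SESSION_KEYS = [
    "SF_token", "SF_user", "SF_auth", "SF_access_token", "SF_id_token",
]

def cleartoken(state) -> None:
    """Remove the Salesforce auth entries from a session-state mapping,
    leaving any unrelated keys untouched."""
    for key in SF_SESSION_KEYS:
        state.pop(key, None)
```

In the app you would call cleartoken(st.session_state); the subsequent st.rerun() then lands the user back on the login button.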
Step 5: Create and Expose SnapLogic Triggered Task
Build Backend Logic with AgentCreator Snaps
With user authentication handled on the frontend, the next step is to build the backend business logic using SnapLogic AgentCreator. This toolkit lets you design AI-powered pipelines that integrate with data sources, perform intelligent processing, and return contextual responses. You can use pre-built Snaps (SnapLogic connectors) for Salesforce, OpenAI, and other services to assemble your Sales Agent pipeline.
Generate the Trigger URL for API Access
Once your pipeline is tested and functional, expose it as an API using a Triggered Task:
In SnapLogic Designer, open your Sales Agent pipeline.
Click on “Create Task” and choose “Triggered Task”.
Provide a meaningful name and set runtime parameters if needed.
After saving, note the generated Trigger URL—this acts as your backend endpoint to which the Streamlit app will send requests.
This URL is the bridge between your authenticated frontend and the secure AI logic on SnapLogic’s platform. However, before connecting it to Streamlit, you'll need to protect it using SnapLogic API Management, which we'll cover in the next section.
Step 6: Secure API with SnapLogic API Manager
Introduction to API Policies: Authentication and Authorization
To prevent unauthorized access to your backend, you must secure the Triggered Task endpoint using SnapLogic API Management. SnapLogic enables policy-based security, allowing you to enforce authentication and authorization using Salesforce-issued tokens. Two primary policies will be applied: Callout Authenticator and Authorize By Role.
The new Policy Editor of SnapLogic APIM 3.0
Add Callout Authenticator Policy
This policy validates the access token received from Salesforce. Since Salesforce tokens are opaque (not self-contained like JWTs), the Callout Authenticator policy sends the token to Salesforce’s introspection endpoint for validation. If the token is active, Salesforce returns the user's metadata (email, scope, client ID, etc.).
Example of a valid token introspection response:
{
  "active": true,
  "scope": "id refresh_token openid",
  "client_id": "3MVG9C...",
  "username": "mpentzek@snaplogic.com",
  "sub": "https://login.salesforce.com/id/...",
  "token_type": "access_token",
  "exp": 1743708730,
  "iat": 1743701530,
  "nbf": 1743701530
}
If the token is invalid or expired, the response will simply show:
{
  "active": false
}
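Conceptually, the policy's check amounts to posting the token to an introspection endpoint and testing the "active" flag. The sketch below is ours, not the policy's actual configuration syntax; the endpoint path in the comment is an assumption based on Salesforce's OAuth documentation, and in practice the APIM policy performs this callout for you.

```python
def is_token_active(introspection_response: dict) -> bool:
    """Interpret an RFC 7662 token-introspection response: the token is
    usable only when the response explicitly reports active == true."""
    return introspection_response.get("active", False) is True

# What the callout itself might look like (assumed endpoint):
# import requests
# resp = requests.post(
#     "https://login.salesforce.com/services/oauth2/introspect",
#     data={"token": access_token, "token_type_hint": "access_token"},
#     auth=(CLIENT_ID, CLIENT_SECRET),
# )
# if is_token_active(resp.json()):
#     ...  # proceed to the authorization policy
```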
Below you can see the configuration of the Callout Authenticator Policy:
Extract the domain from the username (email) returned by the Introspection endpoint after successful token validation for use in the Authorize By Role Policy.
Add AuthorizeByRole Policy
Once the token is validated, the Authorize By Role policy inspects the username (email) returned by Salesforce. You can configure this policy to allow access only to users from a trusted domain (e.g., @snaplogic.com), ensuring that external users cannot exploit your API.
For example, you might configure the policy to check for the presence of “snaplogic” in the domain portion of the email. This adds a second layer of security after token verification and supports internal-only access models.
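The domain check the policy performs can be illustrated with a few lines of Python; the function name and allow-list are ours for illustration, not the policy's expression syntax.

```python
def is_domain_allowed(email: str, allowed_domains: set[str]) -> bool:
    """Return True when the email's domain is on the allow-list
    (case-insensitive, matching on the part after the last '@')."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in allowed_domains
```

With allowed_domains = {"snaplogic.com"}, a login as mpentzek@snaplogic.com passes while any external address is rejected, which is exactly the internal-only model described above.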
Step 7: Connect the Streamlit Frontend to the Secured API
Pass Access Tokens in HTTP Authorization Header
Once the user has successfully logged in and the access token is stored in st.session_state, you can use this token to securely communicate with your SnapLogic Triggered Task endpoint. The access token must be included in the HTTP request’s Authorization header using the Bearer token scheme.
headers = {
    'Authorization': f'Bearer {st.session_state["SF_access_token"]}'
}
This ensures that the SnapLogic API Manager can validate the request and apply both authentication and authorization policies before executing the backend logic.
Display API Responses in the Streamlit UI
To make the interaction seamless, you can capture the user’s input, send it to the secured API, and render the response directly in the Streamlit app. Here’s an example of how this interaction might look:
import requests
import streamlit as st

prompt = st.text_input("Ask the Sales Agent something:")

if st.button("Submit"):
    with st.spinner("Working..."):
        data = {"prompt": prompt}
        headers = {
            'Authorization': f'Bearer {st.session_state["SF_access_token"]}'
        }
        response = requests.post(
            url="https://your-trigger-url-from-snaplogic",
            data=data,
            headers=headers,
            timeout=10,
            verify=False  # Only disable in development
        )
        if response.status_code == 200:
            st.success("Response received:")
            st.write(response.text)
        else:
            st.error(f"Error: {response.status_code}")
This fully connects the frontend to the secured backend, enabling secure, real-time interactions with your generative AI agent.
Common Pitfalls and Troubleshooting
Handling Expired or Invalid Tokens
One of the most common issues in OAuth-secured applications is dealing with expired or invalid tokens. Since Salesforce access tokens have a limited lifespan, users who stay inactive for a period may find their sessions invalidated. To address this:
Always check the token's validity before making API calls.
Gracefully handle 401 Unauthorized responses by prompting the user to log in again.
Implement a token refresh mechanism if your application supports long-lived sessions (requires refresh token configuration in Salesforce).
By proactively managing token lifecycle, you prevent disruptions to user experience and secure API communications.
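The first point can be approximated client-side if you record when the token was issued alongside the token itself. A minimal sketch of such a check (the `issued_at`/`expires_in` bookkeeping is an assumption about what you store in `st.session_state`; if your token response does not include an expiry, fall back to handling 401 responses as described above):

```python
import time

def token_is_fresh(issued_at: float, expires_in: int, skew: int = 60) -> bool:
    """Return True while the access token is still valid, leaving a
    `skew`-second safety margin so a call never races the expiry."""
    return time.time() < issued_at + expires_in - skew

# A token issued just now with a 2-hour lifetime is still usable:
print(token_is_fresh(time.time(), 7200))           # True
# One issued three hours ago with the same lifetime is not:
print(token_is_fresh(time.time() - 10800, 7200))   # False
```

If the check fails, clear the stored token from `st.session_state` and send the user back through the login flow instead of making the doomed API call.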
Debugging OAuth Redirection Errors
OAuth redirection misconfigurations can block the authentication flow. Here are common issues and their solutions:
Incorrect Callback URL: Ensure that the SF_REDIRECT_URI in your .env file matches exactly what’s defined in the Salesforce Connected App settings.
Missing Scopes: If the token does not contain expected identity fields (like email), verify that all required scopes (openid, email, profile) are included in both the app config and OAuth request.
Domain Restrictions: If access is denied even after successful login, confirm that the user’s email domain matches the policy set in the SnapLogic API Manager.
Logging the returned error messages and using browser developer tools can help you pinpoint the issue during redirection and callback stages.
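Because providers compare the callback URL as an exact string match, even a trailing slash or a scheme difference breaks the flow. A quick local sanity check you can run against your own configuration (the helper name and example URLs are illustrative):

```python
from urllib.parse import urlsplit

def redirect_uris_match(configured: str, registered: str) -> bool:
    """Compare two redirect URIs roughly the way providers do: scheme and
    host case-insensitively, path and query byte-for-byte (trailing
    slashes matter)."""
    a, b = urlsplit(configured), urlsplit(registered)
    return (a.scheme.lower(), a.netloc.lower(), a.path, a.query) == \
           (b.scheme.lower(), b.netloc.lower(), b.path, b.query)

print(redirect_uris_match("https://myapp.example.com/callback",
                          "https://myapp.example.com/callback/"))  # False
```

Run this with the value of `SF_REDIRECT_URI` and the URL registered in the Connected App to catch the mismatch before the provider reports a vague redirect error.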
Best Practices for Secure AI Application Deployment
Rotate Secrets Regularly
To reduce the risk of secret leakage and potential exploitation, it's essential to rotate sensitive credentials—such as CLIENT_ID and CLIENT_SECRET—on a regular basis. Even though Streamlit’s Secrets Management securely stores these values, periodic rotation ensures resilience against accidental exposure, insider threats, or repository misconfigurations.
To streamline this, set calendar reminders or use automated DevSecOps pipelines that replace secrets and update environment files or secret stores accordingly.
Monitor API Logs and Auth Failures
Security doesn’t stop at implementation. Ongoing monitoring is critical for identifying potential misuse or intrusion attempts. SnapLogic’s API Management interface provides detailed metrics that can help you:
Track API usage per user or IP address.
Identify repeated authorization failures or token introspection errors.
Spot anomalous patterns such as unexpected call volumes or malformed requests.
Extending the Architecture
Supporting Other OAuth Providers (Google, Okta, Entra ID)
While this tutorial focuses on Salesforce as the OAuth 2.0 Identity Provider, the same security architecture can be extended to support other popular providers like Google, Okta, and Microsoft Entra ID (formerly Azure AD). These providers are fully OAuth-compliant and typically offer similar endpoints for authorization, token exchange, and user introspection.
To switch providers, update the following in your .env file:
SF_AUTHORIZE_URL
SF_TOKEN_URL
SF_SCOPE (as per provider documentation)
Also, make sure your app is registered in the respective provider’s developer portal and configured with the correct redirect URI and scopes.
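For instance, pointing the same variables at Google might look like the following (the `SF_`-prefixed variable names are kept from this tutorial; the endpoints are Google's published OAuth 2.0 endpoints, and the exact scope string should be confirmed against the provider's documentation):

```shell
SF_AUTHORIZE_URL=https://accounts.google.com/o/oauth2/v2/auth
SF_TOKEN_URL=https://oauth2.googleapis.com/token
SF_SCOPE="openid email profile"
```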
Adding Role-Based Access Controls
For larger deployments, simple domain-based filtering may not be sufficient. You can extend authorization logic by incorporating role-based access controls (RBAC). This can be achieved by:
Including custom roles in the OAuth token payload (e.g., via custom claims).
Parsing these roles in SnapLogic’s AuthorizeByRole policy.
Restricting access to specific APIs or features based on user roles (e.g., admin, analyst, viewer).
RBAC allows you to build multi-tiered applications with differentiated permissions while maintaining strong security governance.
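On the Streamlit side, a minimal sketch of reading such custom role claims from the token payload, for UI gating only (the `roles` claim name is an assumption that depends on how your provider is configured, and the fake token below is constructed purely for demonstration; actual enforcement stays in the SnapLogic policy):

```python
import base64
import json

def roles_from_jwt(token: str, claim: str = "roles") -> list:
    """Decode the JWT payload without verifying the signature (UI hints
    only -- enforcement belongs in the API policy) and return the role claim."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload.get(claim, [])

# Build a fake header.payload.signature token to demonstrate:
def _segment(obj: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

fake_token = _segment({"alg": "none"}) + "." + _segment({"roles": ["admin", "analyst"]}) + ".sig"
print(roles_from_jwt(fake_token))            # ['admin', 'analyst']
print(roles_from_jwt(fake_token, "groups"))  # []
```

You could then show or hide Streamlit widgets based on the returned roles, while the AuthorizeByRole policy remains the actual gatekeeper for the backend APIs.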
Conclusion
Final Thoughts on Secure AI App Deployment
Securing your generative AI applications is no longer optional—especially when they’re built for enterprise use cases involving sensitive data, customer interactions, and decision automation. This tutorial demonstrated a complete security pattern using SnapLogic AgentCreator and Streamlit, authenticated via Salesforce OAuth 2.0 and protected through SnapLogic API Management.
By following this step-by-step approach, you ensure that only verified users can access your app and that backend APIs are shielded by layered authentication and role-based authorization policies. The same architecture can easily be extended to other providers or scaled across multiple AI workflows within your organization.
Resources for Further Learning
SnapLogic Resources and Use Cases
Salesforce Developer Docs
Streamlit Documentation
OAuth 2.0 Official Specification
With a secure foundation in place, you’re now empowered to build and scale powerful, enterprise-grade AI applications confidently.
18 days ago · 0 likes
Despite significant advances in industrial automation, many critical devices still rely on legacy OPC Classic servers (DA, AE, HDA). Integrating these aging systems with modern platforms presents challenges such as protocol incompatibility and the absence of native OPC UA support. Meanwhile, modern integration and analytics platforms increasingly depend on OPC UA for secure, scalable connectivity. This post addresses these challenges by demonstrating how the OPC UA Wrapper can seamlessly bridge OPC Classic servers to SnapLogic. Through a practical use case—detecting missing reset anomalies in saw-toothed wave signals from an OPC Simulation DA Server—you’ll discover how to enable real-time monitoring and alerting without costly infrastructure upgrades.
24 days ago · 4 likes
Scalable Analytics Platform: A Data Engineering Journey - Explore SnapLogic's innovative Medallion Architecture approach for handling massive data, improving analytics with S3, Trino, and Amazon Neptune. Learn about cost reduction, scalability, data governance, and enhanced insights.
27 days ago · 2 likes
SnapLogic AutoSync: Your Agile Chopper for Data Integration
In the world of enterprise data, long-haul flights are essential—but sometimes you need to lift off quickly, land precisely, and get the job done without waiting for a runway.
Think of SnapLogic’s Intelligent Integration Platform (IIP) as your data jumbo jet: powerful, scalable, and built for complex, high-volume integrations across global systems. Now imagine you need something faster, more nimble—something that doesn’t require a flight crew to get airborne.
Enter SnapLogic AutoSync—the agile chopper in your integration fleet.
Whether you're syncing Salesforce data after an acquisition, uploading spreadsheets for instant analysis, or automating recurring flows between systems like Marketo and Redshift, AutoSync lifts your data with just a few clicks. It empowers business users to move quickly and experiment safely, without compromising on governance or control.
With AutoSync, you’re not just reducing engineering cycles—you’re accelerating the entire journey from raw data to actionable insight.
2 months ago · 6 likes
In the energy sector, turbine lubrication oil is mission-critical. A drop in oil level or pressure can silently escalate into major failures, unplanned shutdowns, and expensive maintenance windows.
In this blog, we showcase a real-world implementation using SnapLogic and OPC UA, designed to:
🔧 Continuously monitor turbine lubrication oil levels
📥 Ingest real-time sensor data from industrial systems
📊 Store telemetry in data lakes for analytics and compliance
📣 Send real-time Slack alerts to engineers — before failures strike
This IIoT-driven solution empowers energy providers to adopt predictive maintenance practices and reduce operational risk.
2 months ago · 2 likes