Community Update
To prevent further spamming incidents, we have changed our permissions for the members of this community. While anyone can visit our community, only verified customers, partners, and employees will have posting access. Email community@snaplogic.com to verify your account.
Community Activity
Receiving Kafka Acknowledgement Time out
Hi Community! Greetings! We are trying to read data from a Kafka Consumer Snap and are using a child pipeline (via Pipeline Execute) to write data into a zip folder. The pipeline is erroring with a "SnapDataException: Timed out waiting for acknowledgment of consumed messages" error. While investigating the pipeline statistics, we observed that the Pipeline Execute Snap sends one fewer output document than the number of input documents. Example: we have 4 documents flowing from the Kafka Consumer and a Pipeline Execute Snap right after reading data from Kafka. The statistics show 4 input documents but only 3 output documents. Contrary to this, we can see 4 successful executions in the dashboard, and the JCC log data confirms 4 triggers and 4 completions of the child pipeline. This incident has been occurring intermittently. Has anyone experienced this issue? If so, please suggest a solution. Thank you! — pradhyumna_r, 2 years ago
Scheduling pipeline to run every 20 mins between 05:00 AM and 3 PM
Is it possible to schedule a SnapLogic pipeline to run every 20 minutes within a 5 AM to 3 PM timeframe? I am able to schedule it every 20 minutes, but I don't know if there is a possibility to set a time frame. Thank you! — SL12345, 3 years ago (solved)
Integrating SAP using the SAP IDoc Snaps (Part 1/4)
I spent more than two decades working for SAP. When Matthew Bowen approached me about helping him understand our SAP Snaps better, we started digging into them immediately. After a while, we both felt it would be a great idea not only to share the knowledge inside SnapLogic but also to start a blog series on integrating with SAP, shedding some light not only on the Snaps themselves but also on the SAP side. This article kicks everything off by looking at our SAP IDoc Snaps. The series starts with a general overview of the Snaps and the required prerequisites. We then look at creating IDocs from a SnapLogic pipeline and take a peek at the processing inside SAP, before continuing to show you how to send a status change back to SAP for a given IDoc you received in a pipeline. Finally, we close the series by looking at securing the communication between the SnapLogic Groundplex and SAP with Secure Network Communication (SNC).
Using Workday RaaS to Extract Data
Workday has a Reporting as a Service (RaaS) interface which allows you to export most data from Workday by creating a custom report within Workday and then exposing it as a web service. The output can be in various formats such as RSS, XML, or JSON, and you can even modify the filters within the URI. RaaS calls are fast because the actual data gathering happens in the Workday cloud whenever the report runs, and the result is delivered in your preferred output format (XML or JSON). Reports are a live look into Workday and pull the data as of the current moment, just like the Workday Web Services (WWS) used by the Workday Snap. If you depend on live data, use the Workday Snap. Also remember that by using reports you are only shifting the load: it is still calling Workday, just at a time you may or may not be expecting instead of at the time of the call. Below are high-level instructions on how to retrieve data from Workday within SnapLogic using Workday's RaaS interface.
Step 1 - Create a Custom Report in Workday
Log in to a Workday instance, click the Reporting & Analytics button, then click Create Custom Report and provide a report name.
Report Type: Advanced
Data Source: Journal Lines
Data Source Filter (under Custom Report): Journal Lines for Financial Reporting and Reporting Time Period
Columns: Journal Number, Company, Accounting Date, Ledger/Budget Debit Amount, Ledger/Budget Credit Amount, Ledger Currency, Journal Source, Ledger Line Memo, Created By
Be sure to share the report with the account that you will query it with. Some report data sources enable filters so that the data retrieved is indexed, which offers some performance benefit; this report will prompt you to enable the data source filter. Now click on the Prompts tab to populate the default prompts. Once the checkbox is clicked, the prompt defaults are populated. Accept the defaults, or rename them to something that makes sense for your project, and click OK at the bottom to save.
Step 2 - Ask the customer what data they want
The default behavior is to pull the data as of the current moment. If the customer has a different requirement, follow the Workday Report Data Prompts documentation to understand how you can control what type of data is pulled from Workday reports. This behavior is only possible with Workday Advanced Reports.
Step 3 - Get the Web Service URL
Next to the name of your custom report you will see a "…" button; click it and, under Actions, select Web Services and then View URLs. Set the filters that are mandatory (marked with a red asterisk):
Period: 2013 - Jan
Ledger: Actuals
Amount Type: Activity
Time Period: Current Period
Company: Global Modern Services, Inc. (USA)
Click the OK button. Workday delivers the report in multiple formats (REST - Workday XML, WSDL, JSON, etc.). Choose the format you like and copy the URL; it is important to copy the URL for that specific format.
REST:
https://wd2-impl-services1.workday.com/ccx/service/customreport2/tenant_name/nganapathiraju/NG_Journal?Perform_Intercompany_Eliminations=0&Perform_Interworktag_Eliminations=0&Company!WID=cb550da820584750aae8f807882fa79a&Time_Period!WID=ac6e82a2e2d01000180ca7cad7770051&Calculate_Current_Year_Retained_Earnings=0&Calculate_Translation_Gain_or_Loss=0&Amount_Type!WID=dcfe0be6bdf044da8781b873631c71c4&Ledger!WID=93553555942b4b448defb264c084d0fa&Eliminations_Only=0&Period!WID=4facd2281c9a4b2794afc7559475359f
WSDL:
https://wd2-impl-services1.workday.com/ccx/service/customreport2/tenant_name/nganapathiraju/NG_Journal?wsdl
JSON:
https://wd2-impl-services1.workday.com/ccx/service/customreport2/tenant_name/nganapathiraju/NG_Journal?Perform_Intercompany_Eliminations=0&Perform_Interworktag_Eliminations=0&Company!WID=cb550da820584750aae8f807882fa79a&Time_Period!WID=ac6e82a2e2d01000180ca7cad7770051&Calculate_Current_Year_Retained_Earnings=0&Calculate_Translation_Gain_or_Loss=0&Amount_Type!WID=dcfe0be6bdf044da8781b873631c71c4&Ledger!WID=93553555942b4b448defb264c084d0fa&Eliminations_Only=0&Period!WID=4facd2281c9a4b2794afc7559475359f&format=json
Refreshing the URL downloads the report in JSON format after you enter your credentials.
Step 4 - Create a SnapLogic Pipeline to access the URL
Download the Get WD Report.slp pipeline included on this page and import it into SnapLogic. In the REST Get Snap, change the Service URL to the URL from Step 3 and create a new Basic Auth account with your Workday username and password. Validate the pipeline and you should see preview data; note that you might need to change the Mapper if you are using different fields. The sample pipeline with the basic structure: Get WD Report.slp (5.6 KB) — nganapathiraju, 9 years ago
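Outside of SnapLogic, it can be handy to sanity-check the report URL before wiring it into the pipeline. A minimal Python sketch, assuming the JSON-format URL from Step 3 and the credentials of the account the report is shared with (all three values below are placeholders):
import requests

REPORT_URL = "https://<workday-host>/ccx/service/customreport2/<tenant>/<user>/<report>?format=json"  # placeholder
USERNAME = "integration_user@tenant"  # placeholder
PASSWORD = "********"                 # placeholder

response = requests.get(REPORT_URL, auth=(USERNAME, PASSWORD), timeout=60)
response.raise_for_status()

report = response.json()
# Workday RaaS JSON output typically wraps the rows in a "Report_Entry" array
for row in report.get("Report_Entry", []):
    print(row)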
Preview not available for Flow snaps
Certain Flow Snaps like Filter, Head, and Tail, when used in a complex pipeline, don't generate a preview while validating the entire pipeline; however, they do generate a preview if the pipeline is simple. For instance, if I'm trying to read data from an Excel file with 100k+ records and use either Filter or Head/Tail, I can see the preview once the pipeline has been validated, but it doesn't generate one when the pipeline is huge. Any specific reason behind this? I would want to see the preview, as the pipeline I'm currently developing is a complex one (multiple Snaps prior to the Flow Snaps); it then needs a Filter plus Head/Tail (i.e. Flow Snaps), and then a bunch of other Snaps after those, making it complex again. Help on this matter would be highly appreciated. Regards, Darsh — darshthakkar, 4 years ago (solved)
REST Post Multipart Form-Data + File Upload Issue
I'm having trouble correctly formatting multipart form-data to be sent as the body of a REST Post request along with a single file upload. I've managed to get the Post request to send correctly in Postman with the required key/value pairs and a test file. However, when attempting to replicate this in SnapLogic I am unsuccessful. I've managed to correctly map the key/value pairs in a Mapper, and have created the corresponding JSON in a JSON Generator using my mapped values. I've used my "entity" object as the HTTP entity in my Post request, and pointed to a local test file for upload in my REST Post Snap. Is my approach to compiling the form-data correct, or not? — whaleyl, 7 years ago
Search in Array
hi there, I have data coming in these structures.
source1 -
{ "@type": "array", "order-line": [ { "line-num": "00010", "attachments": { "@type": "array", "attachment": { "type": "AttachmentText", "text": "Testing the Long Text Documents" } } }, { "line-num": "00020", "attachments": { "@type": "array", "attachment": [ { "type": "AttachmentText", "text": "This is the Third time being sent.\n\nIn our continuing efforts to reduce transportation costs and increaseefficiency Clearway Energy Inc. has contracted Malark Logistics andjointly developed the following routing instructions. Please complywith the following instructions. Failure to comply will result inescalating non-compliance fees beginning at $50.00 per shipment andreaching $250 plus freight costs for subsequent violations.\n" }, { "type": "AttachmentText", "text": "This is the second time being sent.\n\nIn our continuing efforts to reduce transportation costs and increaseefficiency Clearway Energy Inc. has contracted Malark Logistics andjointly developed the following routing instructions. Please complywith the following instructions. Failure to comply will result inescalating non-compliance fees beginning at $50.00 per shipment andreaching $250 plus freight costs for subsequent violations.\n" } ] } } ] }
and source2 -
{ "line-num": "00020", "attachments": [ { "type": "AttachmentText", "text": "This is the second time being sent.\n\nIn our continuing efforts to reduce transportation costs and increaseefficiency Clearway Energy Inc. has contracted Malark Logistics andjointly developed the following routing instructions. Please complywith the following instructions. Failure to comply will result inescalating non-compliance fees beginning at $50.00 per shipment andreaching $250 plus freight costs for subsequent violations.\n", "Id": null } ] }
How can I compare to see whether source2's attachment.text exists in the attachment.text of source1, where line-num is the same? Any help is greatly appreciated. Thanks, Manohar — manohar, 5 years ago (solved)
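For illustration only (inside SnapLogic this would typically be done with a Mapper or Script Snap expression), the comparison the question describes can be sketched in Python, taking care that source1's attachment element may arrive either as a single object or as an array:
def attachment_texts(order_line):
    """Return the attachment texts of one source1 order line as a list."""
    att = order_line.get("attachments", {}).get("attachment", [])
    if isinstance(att, dict):  # a single attachment is delivered as an object, not an array
        att = [att]
    return [a.get("text") for a in att]

def source2_texts_in_source1(source1, source2):
    """True if every source2 attachment text exists in source1 for the matching line-num."""
    texts_by_line = {ol["line-num"]: attachment_texts(ol) for ol in source1["order-line"]}
    source1_texts = texts_by_line.get(source2["line-num"], [])
    return all(a.get("text") in source1_texts for a in source2["attachments"])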
Converting Datetime with AM/PM to 24Hr date format
Hi, I have a requirement where, from the source file, I am getting the date as a string value '10/21/2019 6:16:54 PM', and I want it converted into the 24-hour yyyy-mm-dd hh:mm:ss format. In the example above, the output should be '2019-10-21 18:16:54'. For the input 8/3/2019 4:24:00 AM the output should be 2019-08-03 04:24:00, and for 7/3/2019 12:09:45 PM the output should be 2019-07-03 12:09:45. I was able to achieve this using a complex regex, but if anyone has a better way to do this, please reply to this thread. Regards, Anubhav — anubhav_nautiya, 6 years ago (solved)
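For reference, a conversion like this does not need a regex. A minimal Python sketch of the same mapping (shown outside SnapLogic purely to illustrate the format tokens involved):
from datetime import datetime

def to_24h(value: str) -> str:
    """Convert e.g. '10/21/2019 6:16:54 PM' to '2019-10-21 18:16:54'."""
    return datetime.strptime(value, "%m/%d/%Y %I:%M:%S %p").strftime("%Y-%m-%d %H:%M:%S")

print(to_24h("10/21/2019 6:16:54 PM"))  # 2019-10-21 18:16:54
print(to_24h("8/3/2019 4:24:00 AM"))    # 2019-08-03 04:24:00
print(to_24h("7/3/2019 12:09:45 PM"))   # 2019-07-03 12:09:45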
Using Github as a code repository for SnapLogic artifacts
Hello SL community! Coming from a typical SOA/ESB background and working on tools like TIBCO BusinessWorks, webMethods, etc., I was wondering if we have a way of using a code repository tool (GitHub, etc.) to store our projects and other artifacts. I understand we have intuitive import/export capabilities, but this is manual and an explicit activity to be performed by a developer. I was wondering if there is anything (or any possibility) to integrate SnapLogic and GitHub. This should ensure that a project and its assets get checked into the GitHub repo without us doing any manual import/export, and that this can further be used to move code from one environment to another (code migration). I am new to the SnapLogic world, but I'm already finding the community really helpful 🙂 All responses would be of help 🙂 Thank you! Sudhendu — sudhendu, 8 years ago
Recent Blogs
Why Security is Essential for Generative AI Applications
As generative AI applications transition from prototypes to enterprise-grade solutions, ensuring security becomes non-negotiable. These applications often interact with sensitive user data, internal databases, and decision-making logic that must be protected from unauthorized access. Streamlit, while great for quickly developing interactive AI interfaces, lacks built-in access control mechanisms. Therefore, integrating robust authentication and authorization workflows is critical to safeguarding both the user interface and backend APIs.
Overview of the AgentCreator + Streamlit Architecture
This guide focuses on securing a generative AI-powered Sales Agent application built with SnapLogic AgentCreator and deployed via Streamlit. The application integrates Salesforce OAuth 2.0 as an identity provider and secures its backend APIs using SnapLogic API Management. Through this setup, only authorized Salesforce users from a trusted domain can access the application, ensuring end-to-end security for both the frontend and backend.
Understanding the Application Stack
Role of SnapLogic's AgentCreator Toolkit
The SnapLogic AgentCreator Toolkit enables developers and sales engineers to build sophisticated AI-powered agents without having to manage complex infrastructure. These agents operate within SnapLogic pipelines, making it easy to embed business logic, API integrations, and data processing in a modular way. For example, a sales assistant built with AgentCreator and exposed as an API using Triggered Tasks can pull real-time CRM data, generate intelligent responses, and return them via a clean web interface.
Streamlit as User Interface
On the frontend, Streamlit is used to build a simple, interactive web interface for users to query the Sales Agent.
Importance of API Management in AI Workflows
Once these agents are exposed via HTTP APIs, managing who accesses them—and how—is crucial. That’s where SnapLogic API Management comes in. It provides enterprise-grade tools for API publishing, securing endpoints, enforcing role-based access controls, and monitoring traffic. These features ensure that only verified users and clients can interact with your APIs, reducing the risk of unauthorized data access or abuse.
However, the real challenge lies in securing both ends:
The Streamlit UI, which needs to restrict access to authorized users.
The SnapLogic APIs exposing the AgentCreator pipelines, which must validate and authorize each incoming request.
OAuth 2.0 Authentication: Fundamentals and Benefits
What is OAuth 2.0?
OAuth 2.0 is an open standard protocol designed for secure delegated access. Instead of sharing credentials directly, users grant applications access to their resources using access tokens. This model is particularly valuable in enterprise environments, where central identity management is crucial. By using OAuth 2.0, applications can authenticate users through trusted Identity Providers (IDPs) while maintaining a separation of concerns between authentication, authorization, and application logic.
Why Use Salesforce as the Identity Provider (IDP)?
Salesforce is a robust identity provider that many organizations already rely on for CRM, user management, and security. Leveraging Salesforce for OAuth 2.0 authentication allows developers to tap into a pre-existing user base and organizational trust framework. In this tutorial, Salesforce is used to handle login and token issuance, ensuring that only authorized Salesforce users can access the Streamlit application. This integration also simplifies compliance with enterprise identity policies such as SSO, MFA, and domain-based restrictions.
To address the authentication challenge, we use the OAuth 2.0 Authorization Code Flow, with Salesforce acting as both the Identity and Token Provider.
Here is Salesforce’s official documentation on OAuth endpoints, which is helpful for configuring your connected app.
🔒 Note: While Salesforce is a logical choice for this example—since the Sales Agent interacts with Salesforce data—any OAuth2-compliant Identity Provider (IDP) such as Google, Okta, or Microsoft Entra ID (formerly Azure AD) can be used. The core authentication flow remains the same, with variations primarily in OAuth endpoints and app registration steps.
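As a point of reference for the flow used throughout this guide, the Authorization Code Flow starts with a simple redirect to the IDP's authorize endpoint. A minimal sketch of how that URL is composed for Salesforce (the client ID and redirect URI below are placeholders):
from urllib.parse import urlencode

params = {
    "response_type": "code",                        # Authorization Code Flow
    "client_id": "<your_connected_app_client_id>",  # placeholder
    "redirect_uri": "https://your-streamlit-app-url",
    "scope": "openid email profile",
}
authorize_url = "https://login.salesforce.com/services/oauth2/authorize?" + urlencode(params)
print(authorize_url)  # the user's browser is sent here to log in and grant access
The streamlit-oauth package used later in this guide builds this URL (and performs the subsequent code-for-token exchange) for you, so you normally never assemble it by hand.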
Architecture Overview and Security Objectives
Frontend (Streamlit) vs Backend (SnapLogic APIs)
The application architecture separates the frontend interface and backend logic. The frontend is built using Streamlit, which allows users to interact with a visually intuitive dashboard. It handles login, displays AI-generated responses, and captures user inputs. The backend, powered by SnapLogic's AgentCreator, hosts the core business logic within pipelines that are exposed as APIs. This separation ensures flexibility and modular development, but it also introduces the challenge of securing both components independently yet cohesively.
Threat Model and Security Goals
The primary security threats in such a system include unauthorized access to the UI, data leaks through unsecured APIs, and token misuse. To mitigate these risks, the following security objectives are established:
Authentication: Ensure only legitimate users from a trusted identity provider (Salesforce) can log in.
Authorization: Grant API access based on user roles and domains, verified via SnapLogic APIM policies.
Token Integrity: Validate and inspect access tokens with SnapLogic APIM policies before allowing backend communication.
Secret Management: Store sensitive credentials (like Client ID and Secret) securely using Streamlit's secret management features.
This layered approach aligns with enterprise security standards and provides a scalable model for future generative AI applications.
Authentication & Authorization Flow
Here’s how we securely manage access:
1. Login via Salesforce:
Users are redirected to Salesforce’s login screen.
After successful login, Salesforce redirects back to the app with an access token.
The token and user identity info are stored in Streamlit’s session state.
2. Calling SnapLogic APIs:
The frontend sends requests to SnapLogic’s triggered task APIs, attaching the Salesforce access token in the Authorization HTTP Header.
3. Securing APIs via SnapLogic Policies:
Callout Authenticator Policy: Validates the token by sending it to Salesforce’s token validation endpoint, as Salesforce tokens are opaque and not self-contained like JWTs.
AuthorizeByRole Policy: After extracting the user’s email address, this policy checks if the domain (e.g., @snaplogic.com) is allowed. If so, access is granted.
Below you can find the complete OAuth 2.0 Authorization Code Flow, enhanced with token introspection and the authorization flow.
This setup ensures end-to-end security, combining OAuth-based authentication with SnapLogic’s enterprise-grade API Management capabilities. In the following sections, we’ll walk through how to implement each part—from setting up the Salesforce Connected App to configuring policies in SnapLogic—so you can replicate or adapt this pattern for your own generative AI applications.
Step 1: Set Up Salesforce Connected App
Navigate to Salesforce Developer Console
To initiate the OAuth 2.0 authentication flow, you’ll need to register your application as a Connected App in Salesforce. Begin by logging into your Salesforce Developer or Admin account. From the top-right gear icon, navigate to Setup → App Manager. Click on “New Connected App” to create a new OAuth-enabled application profile.
Define OAuth Callback URLs and Scopes
In the new Connected App form, set the following fields under the API (Enable OAuth Settings) section:
Callback URL: This should be the URL of your Streamlit application (e.g., https://snaplogic-genai-builder.streamlit.app/Sales_Agent).
Selected OAuth Scopes: Include at least openid, email, and profile. You may also include additional scopes depending on the level of access required.
Ensure that the “Enable OAuth Settings” box is checked to make this app OAuth-compliant.
Retrieve Client ID and Client Secret
After saving the app configuration, Salesforce will generate a Consumer Key (Client ID) and a Consumer Secret. These are crucial for the OAuth exchange and must be securely stored. You will use these values later when configuring the Streamlit OAuth integration and environmental settings. Do not expose these secrets in your codebase or version control.
📄 For details on Salesforce OAuth endpoints, see: 👉 Salesforce OAuth Endpoints Documentation
Step 2: Integrate OAuth with Streamlit Using streamlit-oauth
Install and Configure streamlit-oauth Package
To incorporate OAuth 2.0 authentication into your Streamlit application, you can use the third-party streamlit-oauth package. This package abstracts the OAuth flow and simplifies integration with popular identity providers like Salesforce. To install it, run the following command in your terminal:
pip install streamlit-oauth
After installation, you'll configure the OAuth2Component to initiate the login process and handle token reception once authentication is successful.
Handle Tokens Securely in Session State
Once users log in through Salesforce, the app receives an Access Token and an ID token. These tokens should never be exposed in the UI or logged publicly. Instead, store them securely in st.session_state, Streamlit's native session management system. This ensures the tokens are tied to the user's session and can be accessed for API calls later in the flow.
Store Credentials via Streamlit Secrets Management
Storing secrets such as CLIENT_ID and CLIENT_SECRET directly in your source code is a security risk. Streamlit provides a built-in Secrets Management system that allows you to store sensitive information in a .streamlit/secrets.toml file, which should be excluded from version control.
Example:
# .streamlit/secrets.toml
SF_CLIENT_ID = "your_client_id"
SF_CLIENT_SECRET = "your_client_secret"
In your code, you can access these securely:
CLIENT_ID = st.secrets["SF_CLIENT_ID"]
CLIENT_SECRET = st.secrets["SF_CLIENT_SECRET"]
Step 3: Manage Environment Settings with python-dotenv
Why Environment Variables Matter
Managing environment-specific configuration is essential for maintaining secure and scalable applications. In addition to storing sensitive credentials using Streamlit’s secrets management, storing dynamic OAuth parameters such as URLs, scopes, and redirect URIs in an environment file (e.g., .env) allows you to keep code clean and configuration flexible. This is particularly useful if you plan to deploy across multiple environments (development, staging, production) with different settings.
Store OAuth Endpoints in .env Files
To manage environment settings, use the python-dotenv package, which loads environment variables from a .env file into your Python application. First, install the library:
pip install python-dotenv
Create a .env file in your project directory with the following format:
SF_AUTHORIZE_URL=https://login.salesforce.com/services/oauth2/authorize
SF_TOKEN_URL=https://login.salesforce.com/services/oauth2/token
SF_REVOKE_TOKEN_URL=https://login.salesforce.com/services/oauth2/revoke
SF_REDIRECT_URI=https://your-streamlit-app-url
SF_SCOPE=id openid email profile
Then, use the dotenv_values function to load the variables into your script:
from dotenv import dotenv_values
env = dotenv_values(".env")
AUTHORIZE_URL = env["SF_AUTHORIZE_URL"]
TOKEN_URL = env["SF_TOKEN_URL"]
REVOKE_TOKEN_URL = env["SF_REVOKE_TOKEN_URL"]
REDIRECT_URI = env["SF_REDIRECT_URI"]
SCOPE = env["SF_SCOPE"]
This approach ensures that your sensitive and environment-specific data is decoupled from the codebase, enhancing maintainability and security.
Step 4: Configure OAuth Flow in Streamlit
Define OAuth2 Component and Redirect Logic
With your environment variables and secrets in place, it’s time to configure the OAuth flow in Streamlit using the OAuth2Component from the streamlit-oauth package. This component handles user redirection to the Salesforce login page, token retrieval, and response parsing upon return to your app.
from streamlit_oauth import OAuth2Component
oauth2 = OAuth2Component(
client_id=CLIENT_ID,
client_secret=CLIENT_SECRET,
authorize_url=AUTHORIZE_URL,
token_url=TOKEN_URL,
redirect_uri=REDIRECT_URI
)
# create a button to start the OAuth2 flow
result = oauth2.authorize_button(
name="Log in",
icon="https://www.salesforce.com/etc/designs/sfdc-www/en_us/favicon.ico",
redirect_uri=REDIRECT_URI,
scope=SCOPE,
use_container_width=False
)
This button initiates the OAuth2 flow and handles redirection transparently. Once the user logs in successfully, Salesforce redirects them back to the app with a valid token.
Handle Session State for Tokens and User Data
After authentication, the returned tokens are stored in st.session_state to maintain a secure, per-user context. Here’s how to decode the token and extract user identity details:
import base64
import json

if result:
    # decode the id_token and get the user's email address
    id_token = result["token"]["id_token"]
    access_token = result["token"]["access_token"]
    # verifying the signature is an optional extra security step
    payload = id_token.split(".")[1]
    # add padding to the payload if needed
    payload += "=" * (-len(payload) % 4)
    payload = json.loads(base64.b64decode(payload))
    email = payload["email"]
    username = payload["name"]
    # store the token and its parts in the session state
    st.session_state["SF_token"] = result["token"]
    st.session_state["SF_user"] = username
    st.session_state["SF_auth"] = email
    st.session_state["SF_access_token"] = access_token
    st.session_state["SF_id_token"] = id_token
    st.rerun()
else:
    # no fresh login result: assume a previous run already stored the user in the session
    st.write(f"Congrats **{st.session_state.SF_user}**, you are logged in now!")
    if st.button("Log out"):
        cleartoken()  # helper that clears the SF_* keys from the session state (a sketch follows below)
        st.rerun()
This mechanism ensures that the authenticated user context is preserved across interactions, and sensitive tokens remain protected within the session.
The username displays in the UI after a successful login. 😀
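The cleartoken() helper called in the logout branch is not shown above. A minimal sketch, assuming it only needs to drop the Salesforce-related keys from the session state, could look like this:
def cleartoken():
    """Remove the Salesforce token and user details from the Streamlit session."""
    for key in ("SF_token", "SF_user", "SF_auth", "SF_access_token", "SF_id_token"):
        st.session_state.pop(key, None)  # ignore keys that are already gone
    # optionally, also POST the access token to SF_REVOKE_TOKEN_URL to revoke it on the Salesforce side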
Step 5: Create and Expose SnapLogic Triggered Task
Build Backend Logic with AgentCreator Snaps
With user authentication handled on the frontend, the next step is to build the backend business logic using SnapLogic AgentCreator. This toolkit lets you design AI-powered pipelines that integrate with data sources, perform intelligent processing, and return contextual responses. You can use pre-built Snaps (SnapLogic connectors) for Salesforce, OpenAI, and other services to assemble your Sales Agent pipeline.
Generate the Trigger URL for API Access
Once your pipeline is tested and functional, expose it as an API using a Triggered Task:
In SnapLogic Designer, open your Sales Agent pipeline.
Click on “Create Task” and choose “Triggered Task”.
Provide a meaningful name and set runtime parameters if needed.
After saving, note the generated Trigger URL—this acts as your backend endpoint to which the Streamlit app will send requests.
This URL is the bridge between your authenticated frontend and the secure AI logic on SnapLogic’s platform. However, before connecting it to Streamlit, you'll need to protect it using SnapLogic API Management, which we'll cover in the next section.
Step 6: Secure API with SnapLogic API Manager
Introduction to API Policies: Authentication and Authorization
To prevent unauthorized access to your backend, you must secure the Triggered Task endpoint using SnapLogic API Management. SnapLogic enables policy-based security, allowing you to enforce authentication and authorization using Salesforce-issued tokens. Two primary policies will be applied: Callout Authenticator and Authorize By Role.
The new Policy Editor of SnapLogic APIM 3.0
Add Callout Authenticator Policy
This policy validates the access token received from Salesforce. Since Salesforce tokens are opaque (not self-contained like JWTs), the Callout Authenticator policy sends the token to Salesforce’s introspection endpoint for validation. If the token is active, Salesforce returns the user's metadata (email, scope, client ID, etc.).
Example of a valid token introspection response:
{
"active": true,
"scope": "id refresh_token openid",
"client_id": "3MVG9C...",
"username": "mpentzek@snaplogic.com",
"sub": "https://login.salesforce.com/id/...",
"token_type": "access_token",
"exp": 1743708730,
"iat": 1743701530,
"nbf": 1743701530
}
If the token is invalid or expired, the response will simply show:
{
"active": false
}
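For reference, you can reproduce what the Callout Authenticator does by calling the introspection endpoint yourself. A minimal sketch, assuming the standard Salesforce introspection endpoint and the connected app credentials from Step 1 (the connected app must be permitted to introspect tokens):
import requests

INTROSPECT_URL = "https://login.salesforce.com/services/oauth2/introspect"

response = requests.post(
    INTROSPECT_URL,
    data={
        "token": access_token,              # the opaque token received from the frontend
        "token_type_hint": "access_token",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    timeout=10,
)
print(response.json().get("active"))        # True for a valid token, False otherwise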
Below you can see the configuration of the Callout Authenticator Policy:
After successful token validation, the domain is extracted from the username (email) returned by the introspection endpoint, for use in the Authorize By Role policy.
Add AuthorizeByRole Policy
Once the token is validated, the Authorize By Role policy inspects the username (email) returned by Salesforce. You can configure this policy to allow access only to users from a trusted domain (e.g., @snaplogic.com), ensuring that external users cannot exploit your API.
For example, you might configure the policy to check for the presence of “snaplogic” in the domain portion of the email. This adds a second layer of security after token verification and supports internal-only access models.
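For illustration, the check this policy performs is roughly equivalent to the following Python logic (in SnapLogic it is configured as a policy expression, not as code):
ALLOWED_DOMAIN = "snaplogic.com"  # trusted domain; adjust for your organization

def is_authorized(username: str) -> bool:
    """Allow only users whose email domain matches the trusted domain."""
    domain = username.rsplit("@", 1)[-1].lower()
    return domain == ALLOWED_DOMAIN

print(is_authorized("mpentzek@snaplogic.com"))  # True
print(is_authorized("someone@example.com"))     # False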
Step 7: Connect the Streamlit Frontend to the Secured API
Pass Access Tokens in HTTP Authorization Header
Once the user has successfully logged in and the access token is stored in st.session_state, you can use this token to securely communicate with your SnapLogic Triggered Task endpoint. The access token must be included in the HTTP request’s Authorization header using the Bearer token scheme.
headers = {
    'Authorization': f'Bearer {st.session_state["SF_access_token"]}'
}
This ensures that the SnapLogic API Manager can validate the request and apply both authentication and authorization policies before executing the backend logic.
Display API Responses in the Streamlit UI
To make the interaction seamless, you can capture the user’s input, send it to the secured API, and render the response directly in the Streamlit app. Here’s an example of how this interaction might look:
import requests
import streamlit as st

prompt = st.text_input("Ask the Sales Agent something:")

if st.button("Submit"):
    with st.spinner("Working..."):
        data = {"prompt": prompt}
        headers = {
            'Authorization': f'Bearer {st.session_state["SF_access_token"]}'
        }
        response = requests.post(
            url="https://your-trigger-url-from-snaplogic",
            data=data,
            headers=headers,
            timeout=10,
            verify=False  # Only disable in development
        )
        if response.status_code == 200:
            st.success("Response received:")
            st.write(response.text)
        else:
            st.error(f"Error: {response.status_code}")
This fully connects the frontend to the secured backend, enabling secure, real-time interactions with your generative AI agent.
Common Pitfalls and Troubleshooting
Handling Expired or Invalid Tokens
One of the most common issues in OAuth-secured applications is dealing with expired or invalid tokens. Since Salesforce access tokens have a limited lifespan, users who stay inactive for a period may find their sessions invalidated. To address this:
Always check the token's validity before making API calls.
Gracefully handle 401 Unauthorized responses by prompting the user to log in again.
Implement a token refresh mechanism if your application supports long-lived sessions (requires refresh token configuration in Salesforce).
By proactively managing token lifecycle, you prevent disruptions to user experience and secure API communications.
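A minimal sketch of that pattern in the Streamlit frontend, reusing the cleartoken() helper sketched earlier, might look like this:
import requests
import streamlit as st

# trigger_url, data and headers are the same values used in Step 7
response = requests.post(trigger_url, data=data, headers=headers, timeout=10)

if response.status_code == 401:
    # token rejected by the API Manager: drop the stale session and ask the user to log in again
    cleartoken()
    st.warning("Your session has expired. Please log in again.")
    st.rerun()
elif response.ok:
    st.write(response.text)
else:
    st.error(f"Error: {response.status_code}")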
Debugging OAuth Redirection Errors
OAuth redirection misconfigurations can block the authentication flow. Here are common issues and their solutions:
Incorrect Callback URL: Ensure that the SF_REDIRECT_URI in your .env file matches exactly what’s defined in the Salesforce Connected App settings.
Missing Scopes: If the token does not contain expected identity fields (like email), verify that all required scopes (openid, email, profile) are included in both the app config and OAuth request.
Domain Restrictions: If access is denied even after successful login, confirm that the user’s email domain matches the policy set in the SnapLogic API Manager.
Logging the returned error messages and using browser developer tools can help you pinpoint the issue during redirection and callback stages.
Best Practices for Secure AI Application Deployment
Rotate Secrets Regularly
To reduce the risk of secret leakage and potential exploitation, it's essential to rotate sensitive credentials—such as CLIENT_ID and CLIENT_SECRET—on a regular basis. Even though Streamlit’s Secrets Management securely stores these values, periodic rotation ensures resilience against accidental exposure, insider threats, or repository misconfigurations.
To streamline this, set calendar reminders or use automated DevSecOps pipelines that replace secrets and update environment files or secret stores accordingly.
Monitor API Logs and Auth Failures
Security doesn’t stop at implementation. Ongoing monitoring is critical for identifying potential misuse or intrusion attempts. SnapLogic’s API Management interface provides detailed metrics that can help you:
Track API usage per user or IP address.
Identify repeated authorization failures or token inspection errors.
Spot anomalous patterns such as unexpected call volumes or malformed requests.
Extending the Architecture
Supporting Other OAuth Providers (Google, Okta, Entra ID)
While this tutorial focuses on Salesforce as the OAuth 2.0 Identity Provider, the same security architecture can be extended to support other popular providers like Google, Okta, and Microsoft Entra ID (formerly Azure AD). These providers are fully OAuth-compliant and typically offer similar endpoints for authorization, token exchange, and user introspection.
To switch providers, update the following in your .env file:
SF_AUTHORIZE_URL
SF_TOKEN_URL
SF_SCOPE (as per provider documentation)
Also, make sure your app is registered in the respective provider’s developer portal and configured with the correct redirect URI and scopes.
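For example, a .env file configured for Google as the IDP might look like the following (endpoint values as published in Google's OAuth 2.0 documentation; the variable names are left unchanged so the rest of the code does not need to be touched):
SF_AUTHORIZE_URL=https://accounts.google.com/o/oauth2/v2/auth
SF_TOKEN_URL=https://oauth2.googleapis.com/token
SF_REVOKE_TOKEN_URL=https://oauth2.googleapis.com/revoke
SF_REDIRECT_URI=https://your-streamlit-app-url
SF_SCOPE=openid email profile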
Adding Role-Based Access Controls
For larger deployments, simple domain-based filtering may not be sufficient. You can extend authorization logic by incorporating role-based access controls (RBAC). This can be achieved by:
Including custom roles in the OAuth token payload (e.g., via custom claims).
Parsing these roles in SnapLogic’s AuthorizeByRole policy.
Restricting access to specific APIs or features based on user roles (e.g., admin, analyst, viewer).
RBAC allows you to build multi-tiered applications with differentiated permissions while maintaining strong security governance.
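As an illustration of the last point, a role check on the frontend could look roughly like this, assuming the IDP has been configured to add a custom (hypothetical) roles claim to the ID token payload decoded earlier:
ROLE_PERMISSIONS = {
    "admin":   {"ask_agent", "view_reports", "manage_settings"},
    "analyst": {"ask_agent", "view_reports"},
    "viewer":  {"view_reports"},
}

def can(payload: dict, permission: str) -> bool:
    """Check whether any role in the token's (hypothetical) 'roles' claim grants the permission."""
    roles = payload.get("roles", [])
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

if not can(payload, "ask_agent"):
    st.error("You are not allowed to query the Sales Agent.")
    st.stop()
The same roles can also be evaluated server-side in the AuthorizeByRole policy, which is the more robust place to enforce them.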
Conclusion
Final Thoughts on Secure AI App Deployment
Securing your generative AI applications is no longer optional—especially when they’re built for enterprise use cases involving sensitive data, customer interactions, and decision automation. This tutorial demonstrated a complete security pattern using SnapLogic AgentCreator and Streamlit, authenticated via Salesforce OAuth 2.0 and protected through SnapLogic API Management.
By following this step-by-step approach, you ensure only verified users can access your app, and backend APIs are shielded by layered authentication and role-based authorization policies. The same architecture can easily be extended to other providers or scaled across multiple AI workflows within your organization.
Resources for Further Learning
SnapLogic Resources and Use Cases
Salesforce Developer Docs
Streamlit Documentation
OAuth 2.0 Official Specification
With a secure foundation in place, you’re now empowered to build and scale powerful, enterprise-grade AI applications confidently.
Despite significant advances in industrial automation, many critical devices still rely on legacy OPC Classic servers (DA, AE, HDA). Integrating these aging systems with modern platforms presents challenges such as protocol incompatibility and the absence of native OPC UA support. Meanwhile, modern integration and analytics platforms increasingly depend on OPC UA for secure, scalable connectivity. This post addresses these challenges by demonstrating how the OPC UA Wrapper can seamlessly bridge OPC Classic servers to SnapLogic. Through a practical use case—detecting missing reset anomalies in saw-toothed wave signals from an OPC Simulation DA Server—you’ll discover how to enable real-time monitoring and alerting without costly infrastructure upgrades.
Scalable Analytics Platform: A Data Engineering Journey - Explore SnapLogic's innovative Medallion Architecture approach for handling massive data, improving analytics with S3, Trino, and Amazon Neptune. Learn about cost reduction, scalability, data governance, and enhanced insights.
SnapLogic AutoSync: Your Agile Chopper for Data Integration
In the world of enterprise data, long-haul flights are essential—but sometimes you need to lift off quickly, land precisely, and get the job done without waiting for a runway.
Think of SnapLogic’s Intelligent Integration Platform (IIP) as your data jumbo jet: powerful, scalable, and built for complex, high-volume integrations across global systems. Now imagine you need something faster, more nimble—something that doesn’t require a flight crew to get airborne.
Enter SnapLogic AutoSync—the agile chopper in your integration fleet.
Whether you're syncing Salesforce data after an acquisition, uploading spreadsheets for instant analysis, or automating recurring flows between systems like Marketo and Redshift, AutoSync lifts your data with just a few clicks. It empowers business users to move quickly and experiment safely, without compromising on governance or control.
With AutoSync, you’re not just reducing engineering cycles—you’re accelerating the entire journey from raw data to actionable insight.
In the energy sector, turbine lubrication oil is mission-critical. A drop in oil level or pressure can silently escalate into major failures, unplanned shutdowns, and expensive maintenance windows.
In this blog, we showcase a real-world implementation using SnapLogic and OPC UA, designed to:
🔧 Continuously monitor turbine lubrication oil levels
📥 Ingest real-time sensor data from industrial systems
📊 Store telemetry in data lakes for analytics and compliance
📣 Send real-time Slack alerts to engineers before failures strike
This IIoT-driven solution empowers energy providers to adopt predictive maintenance practices and reduce operational risk.