Community Activity
Platform Administration Reference guide v3
Introduction

This document is a reference manual for common administrative and management tasks on the SnapLogic platform. It has been revised to include the new Admin Manager and Monitor functionality, which replace the Classic Manager and Dashboard interfaces respectively. This document is for SnapLogic Environment Administrators (Org Administrators) and users involved in supporting or managing the platform components.

Author: Ram Bysani, SnapLogic Enterprise Architecture team

Environment Administrator (known as Org Admin in the Classic Manager) permissions

There are two reserved groups in SnapLogic:
- admins: Users in this group have full access to all projects in the Org.
- members: Users in this group have access to projects that they create, or to which they are granted access. Users are automatically added to this group when you create them, and they must be a part of the members group to have any privileges within that Org.

There are two user roles:
- Environment admins: Org users who can manage the Org. Environment admins are part of the admins group, and this role is named "Org Admin" in the classic Manager.
- Basic user: All non-admin users. Within an Org, basic users can create projects and work with assets in the Project spaces to which they have been granted permission. To gain Org administrator privileges, a Basic user can be added to the admins group.

The below table lists the various tasks under the different categories that an Environment admin user can perform:

USER MANAGEMENT
- Create and delete users. Update user profiles.
- Create and delete groups. Add users to a group.
- Configure password expiration policies.
- Enable users' access to applications (AutoSync, IIP).
Comments: When a user is removed from an Org, the administrator that removes the user becomes the owner of that user's assets. Reference: User Management

MANAGER
- Create and manage Project Spaces.
- Update permissions (R, W, X) on an individual Project space and projects.
- Delete a Project space.
- Restore Project spaces, projects, and assets from the Recycle bin.
- Permanently delete Project spaces, projects, and assets from the Recycle bin.
- Configure Git integration and integration with tools such as Azure Repos, GitLab, and GHES.
- View Account Statistics, and generate reports for accounts, projects, and pipelines within the project that use an account.
- Upgrade/downgrade Snap Pack versions.

ALERTS and NOTIFICATIONS
- Set up alerts and notifications.
- Set up Slack channels and recipients for notifications.
Comments: Reference: Alerts

SNAPLEX and ORG
- Create Groundplexes.
- Manage Snaplex versions. Update Snaplex settings.
- Update or revert a Snaplex version.

APIM
- Publish, unpublish, and deprecate APIs on the Developer portal.
- Configure the Developer portal.
- Approve API subscriptions and manage/approve user accounts.
Comments: Reference: API Management

AutoSync
- Configure AutoSync user permissions.
- Configure connections for data pipeline endpoints.
- Create user groups to share connection configuration.
- View information on all data pipelines in the Org.
Comments: Reference: AutoSync Administration

Table 1.0 Org Admin Tasks

SnapLogic Monitoring Dashboards

The enhanced Monitor interface can be launched from the Apps (Waffle) menu located on the top right corner of the page. The enhanced Monitor interface enables you to observe integration executions, activities, events, and infrastructure health in your SnapLogic environment.
The Monitor pages are categorized under three main groups: Analyze, Observe, and Review.
Reference: Move_from_Dashboard_to_Monitor

The following table lists some common administrative and monitoring tasks for which the Monitor interface can be used.

- Integration Catalog to fetch and display metadata for all integrations in the environment: Monitor -> Analyze -> Integration Catalog. Reference: Integration Catalog
- View of the environment over a time period: Monitor -> Analyze -> Insights. Reference: Insights
- View pipeline and task executions along with statistics, logs, and other details. Stop executions. Download execution details: Monitor -> Analyze -> Execution. Reference: Execution
- Monitor and manage Snaplex services and nodes with graph views for a time period: Monitor -> Analyze -> Infrastructure. Reference: Infrastructure
- View and download metrics for Snaplex nodes for a time period: Monitor -> Analyze -> Metrics, Monitor -> Observe -> API Metrics. Reference: Metrics, API-Metrics
- Review Alert history and Activity logs: Monitor -> Review. Reference: Alert History, Activity Log
- Troubleshooting Snaplex / Node / Pipeline issues. Reference: Troubleshooting

Table 2.0 Monitor App features

Metrics for monitoring

CPU Consumption

CPU consumption can be high (and exceed 90% at times) when pipelines are executing. A high CPU consumption percentage when no pipelines are executing could indicate high CPU usage by other processes on the Snaplex node. Review CPU Metrics under the Monitor -> Metrics, and Monitor -> Infrastructure tabs.
Reference: CPU utilization metrics

System load average (for Unix-based systems)

Load average is a measure of the number of processes that are either actively running on the CPU or waiting in line to be processed by the CPU. For example, in a system with 4 virtual CPUs:
- A load average value of 4.0 means average full use of all CPUs without any idle time or queue.
- A load average value of >4.0 suggests that processes are waiting for CPU time.
- A load average value of <4.0 indicates underutilization.

System load. Monitor -> Metrics tab.

Heap Memory

Heap memory is used by the SnapLogic application to dynamically allocate memory at runtime to perform memory-intensive operations. The JVM can crash with an Out-of-Memory exception if the heap memory limit is reached. High heap memory usage can also impact other application functions such as pipeline execution, metrics collection, etc. The key heap metrics are listed below:
- Heap Size: Amount of heap memory reserved by the OS. This value can grow or shrink depending on usage.
- Used heap: Portion of heap memory in use by the application's Java objects. This value changes constantly with usage.
- Max heap size: Upper heap memory limit. This value is constant and does not change. It can be configured by setting the jcc.heap.max_size property in the global.properties file or as a node property.

Heap memory. Monitor -> Metrics tab.
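For reference, a minimal sketch of what such a max heap setting could look like on a Groundplex node. The file location and the example value are only illustrative assumptions; confirm the exact value format against the heap settings documentation referenced below before changing anything:

# $SNAPLOGIC/etc/global.properties  (location and value shown for illustration only)
# Caps the JVM heap for this Snaplex node; the node typically needs a restart to pick up the change.
jcc.heap.max_size = 8g

The same property can alternatively be set as a node property in the Snaplex configuration instead of editing the file directly.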
Non-heap memory consumption

The JVM reserves additional native memory that is not part of the heap memory. This memory area is called Metaspace, and is used to store class metadata. Metaspace can grow dynamically based on the application's needs. Non-heap memory metrics are similar to heap memory metrics; however, there is no limit on the size of the non-heap memory. In a Snaplex, non-heap size tends to stay somewhat flat or grow slowly over longer periods of time. Non-heap size values larger than 1 GiB should be investigated with help from SnapLogic support.

Note that all memory values are displayed in GiB (Gibibytes).

Non-Heap memory. Monitor -> Analyze -> Metrics (Node)

Swap memory

Swap memory or swap space is a portion of disk used by the operating system to extend the virtual memory beyond the physical RAM. This allows multiple processes to share the computer's memory by "swapping out" some of the RAM used by less active processes to the disk, making more RAM available for the more active processes. Swap space is entirely managed by the operating system, and not by individual processes such as the SnapLogic Snaplex. Note that swap space is not "extra" memory that can compensate for low heap memory. Refer to this document for information about auto and custom heap settings. Reference: Custom heap setting. High swap utilization is an indicator of contention between processes, and may suggest a need for higher RAM.

Additional Metrics

Select the node from Monitor -> Analyze, and navigate to the Metrics tab. Review the following metrics.

Active Pipelines

Monitor the Average and Max active pipeline counts for specific time periods. Consider adding nodes for load balancing and platform stability if these counts are consistently high.

Active Pipelines. Monitor -> Analyze -> Metrics (Node)

Active Threads

Active threads. Monitor -> Analyze -> Metrics (Node)

Every Snap in an active pipeline consumes at least one thread. Some Snaps such as Pipeline Execute, Bulk loaders, and Snaps performing input/output can use a higher number of threads compared to other Snaps. Refer to this Sigma document on community.snaplogic.com: Snaplex Capacity Tuning Guide for additional configuration details.

Disk Utilization

It is important to monitor disk utilization as the lack of free disk space can lead to blocking threads, and can potentially impact essential Snaplex functions such as heartbeats to the Control Plane.

Disk utilization. Monitor -> Analyze -> Metrics (Node)

Additional Reference: Analyze Metrics. Download data in csv format for the individual Metrics graphs.
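As a complementary, OS-level spot check on a Linux Groundplex node, the same signals discussed above (load average relative to CPU count, and free disk space) can be read locally. This is only an illustrative sketch using the Python standard library, not a SnapLogic utility; the path and thresholds are assumptions to adapt to your environment:

import os
import shutil

# Load average vs. available CPUs (Unix only): values consistently above the
# CPU count indicate that processes are queuing for CPU time.
load_1m, load_5m, load_15m = os.getloadavg()
cpus = os.cpu_count() or 1
print(f"load averages: {load_1m:.2f} {load_5m:.2f} {load_15m:.2f} over {cpus} CPUs")
if load_5m > cpus:
    print("warning: 5-minute load average exceeds CPU count")

# Free disk space on the Snaplex installation volume (path is an assumption).
usage = shutil.disk_usage("/opt/snaplogic")
free_gib = usage.free / (1024 ** 3)
print(f"free disk space: {free_gib:.1f} GiB")
if free_gib < 5:  # illustrative threshold only
    print("warning: low free disk space can impact Snaplex functions such as heartbeats")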
Enabling Notifications for Snaplex node events

Event Notifications can be created in the Manager (currently in the Classic Manager) under Settings -> Notifications. The notification rule can be set up to send an alert about a tracked event to multiple email addresses. The alerts can also be viewed on the Manager under the Alerts tab.
Reference: Notification Events, Snaplex Node notifications

Telemetry Integration with third-party observability tools using OpenTelemetry (OTEL)

The SnapLogic platform uses OpenTelemetry (OTEL) to support telemetry data integration with third-party observability tools. Please contact your CSM to enable the OpenTelemetry feature.
Reference: Open Telemetry Integration

Node diagnostics details

The Node diagnostics table includes diagnostic data that can be useful for troubleshooting. For configurable settings, the table displays the Maximum, Minimum, Recommended, and Current values in GiB (Gibibytes) where applicable. The values in red indicate settings outside of the recommended range. Navigate to the Monitor -> Infrastructure -> (Node) -> Additional Details tab.

Example: Node diagnostics table

Identifying pipelines that contribute to a node crash / termination

Monitor -> Activity logs: Filter by category = Snaplex. Make note of the node crash events for a specific time period. Event name text: "Node crash event is reported". Reference: Activity Logs

Monitor -> Execution: Select the execution window in the Calendar, then filter executions by setting these Filter conditions:
- Status: Failed
- Node name: <Enter node name from the crash event>
Reference: Execution
Sort on the Documents column to identify the pipeline executions processing the largest number of documents. Click anywhere on the row to view the execution statistics. You can also view the active pipelines for that time period from the Monitor -> Metrics -> Active pipelines view.

Table 3.0 Pipeline execution review

Additional configurations to mitigate pipeline terminations

The thresholds below can be optimized to minimize pipeline terminations due to Out-of-Memory exceptions. Note that the memory thresholds are based on the physical memory on the node, and not the Virtual / Swap memory.
- Maximum Memory %
- Pipeline termination threshold
- Pipeline restart delay interval
Refer to the table "Table 3.0 Snaplex node memory configurations" in this Sigma document for additional details and recommended values: Snaplex Capacity Tuning

Pipeline Quality Check API

The Linter public API for pipeline quality provides additional rules and generates complete reports for all standard checks, including message levels (Critical / Warning / Info), with actionable message descriptions for pipeline quality.
Reference: Pipeline Quality Check
By applying the quality checks, it is possible to optimize pipelines and improve maintainability. You can also use SnapGPT to analyze pipelines, identify issues, and suggest best practices to improve your pipelines. (SnapGPT_Analyze_Pipelines)

Other third-party profiling tools

Third-party profiling tools such as VisualVM can be used to monitor local memory, CPU, and other metrics. This document will be updated in a later version to include the VisualVM configurations for the SnapLogic application running on a Groundplex.
Java Component Container (jcc) command line utility (for Groundplexes)

The jcc script is a command-line tool that provides a set of commands to manage the Snaplex nodes. This utility is installed in the /opt/snaplogic/bin directory of the Groundplex node. The list below covers the commonly used arguments for the jcc script (jcc.sh on Linux and jcc.bat on Windows). Note that the command lists other arguments as well (for example, try-restart); however, those are mainly included for backward compatibility and are not frequently used.

$SNAPLOGIC refers to the /opt/snaplogic directory on Linux or the <Windows drive>:\opt\snaplogic directory on Windows servers. Run these commands as the root user on Linux and as an Administrator on Windows.

Example: sudo /opt/snaplogic/bin/jcc.sh restart or c:\snaplogic\bin\jcc.bat restart

- status: Returns the Snaplex status. The response string indicates whether the Snaplex Java process is running.
- start: Starts the Snaplex process on the node.
- stop: Stops the Snaplex process on the node.
- restart: Stops and restarts the Snaplex process on the node. Restarts both the monitor and the Snaplex processes.
- diagnostic: Generates the diagnostic report for the Snaplex node. The HTML output file is generated in the $SNAPLOGIC/run/log directory. Resolve any warnings from the report to ensure normal operations.
- clearcache: Clears the cache files from the node. This command must be executed when the JCC is stopped.
- addDataKey: Generates a new key pair and appends it to the keystore in the /etc/snaplogic folder with the specified alias. This command is used to rotate the private keys for Enhanced Account Encryption. Doc reference: Enhanced Account Encryption

The following options are available for a Groundplex on a Windows server:
- install_service: The jcc.bat install_service command installs the Snaplex as a Windows service.
- remove_service: The jcc.bat remove_service command removes the installed Windows service.
Run these commands as an Administrator user.

Table 4.0 jcc script arguments

Example of custom log configuration for a Snaplex node (Groundplex)

Custom log file configuration is occasionally required due to internal logging specifications or to troubleshoot problems with specific Snaps. In the following example, we illustrate the steps to configure the log level of 'Debug' for the Azure SQL Snap pack. The log level can be customized for each node of the Groundplex where the related pipelines are executed, and will be effective for all pipelines that use any of the Azure SQL Snaps (for example, Azure SQL - Execute, Azure SQL - Update, etc.). Note that Debug logging can affect pipeline performance, so this configuration must only be used for debugging purposes.

Configuration Steps

a. Follow steps 1 and 2 from this document: Custom log configuration
Note: You can perform Step 2 by adding the property key and value under the Global Properties section. Example:
Key: jcc.jvm_options
Value: -Dlog4j.configurationFile=/opt/snaplogic/logconfig/log4j2-jcc.xml
The Snaplex node must be restarted for the change to take effect. Refer to the commands in Table 4.0.

b. Edit the log4j2-jcc.xml file configured in Step a.

c. Add a new RollingRandomAccessFile element under <Appenders>. In this example, the element is referenced with a unique name JCC_AZURE. It also has a log size and rollover policy defined. The policy enables generation of up to 10 log files of 1 MB each. These values can be adjusted depending on your requirements.

<Appenders>
  ...
  <RollingRandomAccessFile name="JCC_AZURE"
      fileName="${env:SL_ROOT}/run/log/${sys:log.file_prefix}jcc_azure.json"
      immediateFlush="true" append="true"
      filePattern="${env:SL_ROOT}/run/log/jcc_azure-log-%d{yyyy-MM-dd-HH-mm}.json"
      ignoreExceptions="false">
    <JsonLogLayout properties="true"/>
    <Policies>
      <SizeBasedTriggeringPolicy size="1 MB"/>
    </Policies>
    <DefaultRolloverStrategy max="10"/>
  </RollingRandomAccessFile>
  ...
</Appenders>

d. The next step is to configure a Logger that references the Appender defined in step c. This is done by adding a new <Logger> element. In this example, the Logger is defined with log level = Debug.

<Loggers>
  <Logger name="com.snaplogic.snaps.azuresql" level="debug" includeLocation="true" additivity="false">
    <AppenderRef ref="JCC_AZURE" />
  </Logger>
  ...
  <Root>
    ...
  </Root>
</Loggers>
</Configuration>

The value for the name attribute is derived from the Class FQID value of the associated Snap. The changes to log4j2-jcc.xml are marked by the highlighted text in steps c and d. The complete XML file is also attached for reference. You can refer to the Log4j documentation for more details on the attributes or for additional customization. Log4j reference

Debug log messages and log files

Additional debug log messages will be printed to the pipeline execution logs for any pipeline with Azure SQL Snaps. These logs can be retrieved from Dashboard.
Example:

{"ts": "2023-11-30T20:21:33.490Z", "lvl": "DEBUG", "fi": "JdbcDataSourceRegistryImpl.java:369", "msg": "JDBC URL: jdbc:sqlserver://sltapdb.database.windows.net:1433;database=SL.TAP;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;authentication=sqlPassword;loginTimeout=30;connectRetryCount=3;connectRetryInterval=5;applicationName=SnapLogic (main23721) - pid-113e3955-1969-4541-9c9c-e3e0c897cccd, database server: Microsoft SQL Server(12.00.2531), driver: Microsoft JDBC Driver 11.2 for SQL Server(11.2.0.0)", "snlb": "Azure+SQL+-+Update", "snrd": "5c06e157-81c7-497f-babb-edc7274fa4f6", "plrd": "5410a1bdc8c71346894494a2_f319696c-6053-46af-9251-b50a8a874ff9", "prc": "Azure SQL -

The updated log configuration would also write the custom JCC logs (for all pipelines that have executed the Azure SQL Snaps) to disk under the /opt/snaplogic/run/log directory. The file size for each log file and the number of files would depend on the configuration in the log4j2-jcc.xml file. The changes to log4j2-jcc.xml can be reverted if the additional custom logging is no longer required.

Log level configuration for a Snaplex in Production Orgs

The default log level for a new Snaplex is 'Debug'. This value can be updated to 'Info' in Production Orgs as a best practice. The available values are:
- Trace: Records details of all events associated with the Snaplex.
- Debug: Records all events associated with the Snaplex.
- Info: Records messages that outline the status of the Snaplex and the completed Tasks.
- Warning: Records all warning messages associated with the Snaplex.
- Error: Records all error messages associated with the Snaplex.
Reference: Snaplex logging

PlexFS File Storage considerations

PlexFS, also known as suggest space, is a storage location on the local disk of the JCC node. The /opt/snaplogic/run/fs folder is commonly designated for this purpose. It is used as a data store to temporarily store preview data during pipeline validation, as well as to maintain the state data for Resumable pipelines.

Disk volumes

To address issues that cause disk full errors and to ensure smoother operations of the systems that affect the stability of the Groundplex, you need to have separate mounts on Groundplex nodes. Follow the steps suggested below to create two separate disk volumes on the JCC nodes.
Reference: Disk Volumes
The /opt/snaplogic/run/fs folder location is used for the PlexFS operations.

mount --bind /workspace/fs /opt/snaplogic/run/fs

Folder Structure:

The folders under PlexFS are created with this path structure:
/opt/snaplogic/run/fs/<Environment>/<ProjectSpace>/<Project>/__suggest__/<Asset_ID>
Example: /opt/snaplogic/run/fs/Org1/Proj_Space_1/Project1/__suggest__/aaa5010bc
The files in the sub-folders are created with these extensions: *.jsonl, *.dat

PlexFS File Creation

The files in /opt/snaplogic/run/fs are generated when a user performs pipeline validation. The amount of data in a .dat file is based on the "Preview Document Count" user setting. For Snaps with binary output (such as File Reader), the Snap will stop writing to PlexFS when the next downstream Snap has generated its limit of Preview data.
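If preview data growth needs to be investigated on a Groundplex node, one quick way to see which projects hold the most PlexFS data is to walk the folder structure described above. The following is only an illustrative Python sketch (not a SnapLogic utility), and it assumes the default /opt/snaplogic/run/fs location:

import os
from collections import defaultdict

PLEXFS_ROOT = "/opt/snaplogic/run/fs"  # default location; adjust if a bind mount is used

# Sum file sizes per <Environment>/<ProjectSpace>/<Project> so the largest
# consumers of preview data (.jsonl / .dat files) stand out.
usage = defaultdict(int)
for dirpath, _dirnames, filenames in os.walk(PLEXFS_ROOT):
    rel = os.path.relpath(dirpath, PLEXFS_ROOT)
    project = os.sep.join(rel.split(os.sep)[:3])  # Environment/ProjectSpace/Project
    for name in filenames:
        try:
            usage[project] += os.path.getsize(os.path.join(dirpath, name))
        except OSError:
            pass  # a file may be cleaned up while we scan
for project, size in sorted(usage.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{size / (1024 ** 2):10.1f} MiB  {project}")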
PlexFS File Deletion

The files for a specific pipeline are deleted when the user clicks 'Retry' to perform validation. New data files are generated. Files for a specific user session are deleted when the user logs out of SnapLogic. All PlexFS files are deleted when the Snaplex is restarted. Files in PlexFS are generated with an expiration date. The default expiration date is two days. The files are cleaned up periodically based on the expiration date. It is possible to set a feature flag to override the expiration time, and delete the files sooner.

Recommendations

The temp files are cleaned up periodically based on the default expiration date; however, you might occasionally encounter disk space availability issues due to excessive Preview data being written to the PlexFS file storage. The mount directory location can be configured with additional disk space or shared file storage (e.g. Amazon EFS). Contact SnapLogic support for details on the feature flag configuration to update the expiration time to a shorter duration for faster file clean up. The value for this feature flag is set in seconds.

Secure your AgentCreator App - With SnapLogic API Management
Why Security is Essential for Generative AI Applications

As generative AI applications transition from prototypes to enterprise-grade solutions, ensuring security becomes non-negotiable. These applications often interact with sensitive user data, internal databases, and decision-making logic that must be protected from unauthorized access. Streamlit, while great for quickly developing interactive AI interfaces, lacks built-in access control mechanisms. Therefore, integrating robust authentication and authorization workflows is critical to safeguarding both the user interface and backend APIs.

Overview of the AgentCreator + Streamlit Architecture

This guide focuses on securing a generative AI-powered Sales Agent application built with SnapLogic AgentCreator and deployed via Streamlit. The application integrates Salesforce OAuth 2.0 as an identity provider and secures its backend APIs using SnapLogic API Management. Through this setup, only authorized Salesforce users from a trusted domain can access the application, ensuring end-to-end security for both the frontend and backend.

Understanding the Application Stack

Role of SnapLogic's AgentCreator Toolkit

The SnapLogic AgentCreator Toolkit enables developers and sales engineers to build sophisticated AI-powered agents without having to manage complex infrastructure. These agents operate within SnapLogic pipelines, making it easy to embed business logic, API integrations, and data processing in a modular way. For example, a sales assistant built with AgentCreator and exposed as an API using Triggered Tasks can pull real-time CRM data, generate intelligent responses, and return it via a clean web interface.

Streamlit as User Interface

On the frontend, Streamlit is used to build a simple, interactive web interface for users to query the Sales Agent.

Importance of API Management in AI Workflows

Once these agents are exposed via HTTP APIs, managing who accesses them—and how—is crucial. That's where SnapLogic API Management comes in. It provides enterprise-grade tools for API publishing, securing endpoints, enforcing role-based access controls, and monitoring traffic. These features ensure that only verified users and clients can interact with your APIs, reducing the risk of unauthorized data access or abuse.

However, the real challenge lies in securing both ends:
- The Streamlit UI, which needs to restrict access to authorized users.
- The SnapLogic APIs, exposing the AgentCreator Pipelines, which must validate and authorize each incoming request.

OAuth 2.0 Authentication: Fundamentals and Benefits

What is OAuth 2.0?

OAuth 2.0 is an open standard protocol designed for secure delegated access. Instead of sharing credentials directly, users grant applications access to their resources using access tokens. This model is particularly valuable in enterprise environments, where central identity management is crucial. By using OAuth 2.0, applications can authenticate users through trusted Identity Providers (IDPs) while maintaining a separation of concerns between authentication, authorization, and application logic.

Why Use Salesforce as the Identity Provider (IDP)?

Salesforce is a robust identity provider that many organizations already rely on for CRM, user management, and security. Leveraging Salesforce for OAuth 2.0 authentication allows developers to tap into a pre-existing user base and organizational trust framework.
In this tutorial, Salesforce is used to handle login and token issuance, ensuring that only authorized Salesforce users can access the Streamlit application. This integration also simplifies compliance with enterprise identity policies such as SSO, MFA, and domain-based restrictions.

To address the authentication challenge, we use the OAuth 2.0 Authorization Code Flow, with Salesforce acting as both the Identity and Token Provider. Here is Salesforce's official documentation on OAuth endpoints, which is helpful for configuring your connected app.

🔒 Note: While Salesforce is a logical choice for this example—since the Sales Agent interacts with Salesforce data—any OAuth2-compliant Identity Provider (IDP) such as Google, Okta, or Microsoft Entra ID (formerly Azure AD) can be used. The core authentication flow remains the same, with variations primarily in OAuth endpoints and app registration steps.

Architecture Overview and Security Objectives

Frontend (Streamlit) vs Backend (SnapLogic APIs)

The application architecture separates the frontend interface and backend logic. The frontend is built using Streamlit, which allows users to interact with a visually intuitive dashboard. It handles login, displays AI-generated responses, and captures user inputs. The backend, powered by SnapLogic's AgentCreator, hosts the core business logic within pipelines that are exposed as APIs. This separation ensures flexibility and modular development, but it also introduces the challenge of securing both components independently yet cohesively.

Threat Model and Security Goals

The primary security threats in such a system include unauthorized access to the UI, data leaks through unsecured APIs, and token misuse. To mitigate these risks, the following security objectives are established:
- Authentication: Ensure only legitimate users from a trusted identity provider (Salesforce) can log in.
- Authorization: Grant API access based on user roles and domains, verified via SnapLogic APIM policies.
- Token Integrity: Validate and inspect access tokens before allowing backend communication, using SnapLogic APIM policies.
- Secret Management: Store sensitive credentials (like Client ID and Secret) securely using Streamlit's secret management features.

This layered approach aligns with enterprise security standards and provides a scalable model for future generative AI applications.

Authentication & Authorization Flow

Here's how we securely manage access:

1. Login via Salesforce:
- Users are redirected to Salesforce's login screen.
- After successful login, Salesforce redirects back to the app with an access token.
- The token and user identity info are stored in Streamlit's session state.

2. Calling SnapLogic APIs:
- The frontend sends requests to SnapLogic's triggered task APIs, attaching the Salesforce access token in the Authorization HTTP Header.

3. Securing APIs via SnapLogic Policies:
- Callout Authenticator Policy: Validates the token by sending it to Salesforce's token validation endpoint, as Salesforce tokens are opaque and not self-contained like JWTs.
- AuthorizeByRole Policy: After extracting the user's email address, this policy checks if the domain (e.g., @snaplogic.com) is allowed. If so, access is granted.

Below you can find the complete OAuth 2 Authorization Code Flow enhanced with the Token Introspection & Authorization Flow.

This setup ensures end-to-end security, combining OAuth-based authentication with SnapLogic's enterprise-grade API Management capabilities.
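To make the Callout Authenticator step above more concrete, the check it performs is conceptually similar to the following standalone sketch. This is only an illustration under assumptions: the policy performs this call for you, and the RFC 7662-style introspection endpoint and parameter names shown here should be verified against Salesforce's OAuth documentation before relying on them.

import requests

# Hypothetical values for illustration only.
SF_INTROSPECT_URL = "https://login.salesforce.com/services/oauth2/introspect"
CLIENT_ID = "your_connected_app_consumer_key"
CLIENT_SECRET = "your_connected_app_consumer_secret"

def introspect(access_token: str) -> dict:
    """Ask Salesforce whether an opaque access token is still active."""
    response = requests.post(
        SF_INTROSPECT_URL,
        data={
            "token": access_token,
            "token_type_hint": "access_token",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Example: grant access only when the token is active and the email domain is trusted.
info = introspect("example_opaque_token")  # placeholder token value
if info.get("active") and info.get("username", "").endswith("@snaplogic.com"):
    print("token accepted for", info["username"])
else:
    print("token rejected")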
In the following sections, we'll walk through how to implement each part—from setting up the Salesforce Connected App to configuring policies in SnapLogic—so you can replicate or adapt this pattern for your own generative AI applications.

Step 1: Set Up Salesforce Connected App

Navigate to Salesforce Developer Console

To initiate the OAuth 2.0 authentication flow, you'll need to register your application as a Connected App in Salesforce. Begin by logging into your Salesforce Developer or Admin account. From the top-right gear icon, navigate to Setup → App Manager. Click on "New Connected App" to create a new OAuth-enabled application profile.

Define OAuth Callback URLs and Scopes

In the new Connected App form, set the following fields under the API (Enable OAuth Settings) section:
- Callback URL: This should be the URL of your Streamlit application (e.g., https://snaplogic-genai-builder.streamlit.app/Sales_Agent).
- Selected OAuth Scopes: Include at least openid, email, and profile. You may also include additional scopes depending on the level of access required.
Ensure that the "Enable OAuth Settings" box is checked to make this app OAuth-compliant.

Retrieve Client ID and Client Secret

After saving the app configuration, Salesforce will generate a Consumer Key (Client ID) and a Consumer Secret. These are crucial for the OAuth exchange and must be securely stored. You will use these values later when configuring the Streamlit OAuth integration and environmental settings. Do not expose these secrets in your codebase or version control.

📄 For details on Salesforce OAuth endpoints, see: 👉 Salesforce OAuth Endpoints Documentation

Step 2: Integrate OAuth with Streamlit Using streamlit-oauth

Install and Configure streamlit-oauth Package

To incorporate OAuth 2.0 authentication into your Streamlit application, you can use the third-party package streamlit-oauth. This package abstracts the OAuth flow and simplifies integration with popular identity providers like Salesforce. To install it, run the following command in your terminal:

pip install streamlit-oauth

After installation, you'll configure the OAuth2Component to initiate the login process and handle token reception once authentication is successful.

Handle ClientID and ClientSecret Securely

Once users log in through Salesforce, the app receives an Access Token and an ID token. These tokens should never be exposed in the UI or logged publicly. Instead, store them securely in st.session_state, Streamlit's native session management system. This ensures the tokens are tied to the user's session and can be accessed for API calls later in the flow.

Store Credentials via Streamlit Secrets Management

Storing secrets such as CLIENT_ID and CLIENT_SECRET directly in your source code is a security risk. Streamlit provides a built-in Secrets Management system that allows you to store sensitive information in a .streamlit/secrets.toml file, which should be excluded from version control. Example:

# .streamlit/secrets.toml
SF_CLIENT_ID = "your_client_id"
SF_CLIENT_SECRET = "your_client_secret"

In your code, you can access these securely:

CLIENT_ID = st.secrets["SF_CLIENT_ID"]
CLIENT_SECRET = st.secrets["SF_CLIENT_SECRET"]

Step 3: Manage Environment Settings with python-dotenv

Why Environment Variables Matter

Managing environment-specific configuration is essential for maintaining secure and scalable applications.
In addition to storing sensitive credentials using Streamlit's secrets management, storing dynamic OAuth parameters such as URLs, scopes, and redirect URIs in an environment file (e.g., .env) allows you to keep code clean and configuration flexible. This is particularly useful if you plan to deploy across multiple environments (development, staging, production) with different settings.

Store OAuth Endpoints in .env Files

To manage environment settings, use the python-dotenv package, which loads environment variables from a .env file into your Python application. First, install the library:

pip install python-dotenv

Create a .env file in your project directory with the following format:

SF_AUTHORIZE_URL=https://login.salesforce.com/services/oauth2/authorize
SF_TOKEN_URL=https://login.salesforce.com/services/oauth2/token
SF_REVOKE_TOKEN_URL=https://login.salesforce.com/services/oauth2/revoke
SF_REDIRECT_URI=https://your-streamlit-app-url
SF_SCOPE=id openid email profile

Then, use the dotenv_values function to load the variables into your script:

from dotenv import dotenv_values

env = dotenv_values(".env")
AUTHORIZE_URL = env["SF_AUTHORIZE_URL"]
TOKEN_URL = env["SF_TOKEN_URL"]
REVOKE_TOKEN_URL = env["SF_REVOKE_TOKEN_URL"]
REDIRECT_URI = env["SF_REDIRECT_URI"]
SCOPE = env["SF_SCOPE"]

This approach ensures that your sensitive and environment-specific data is decoupled from the codebase, enhancing maintainability and security.

Step 4: Configure OAuth Flow in Streamlit

Define OAuth2 Component and Redirect Logic

With your environment variables and secrets in place, it's time to configure the OAuth flow in Streamlit using the OAuth2Component from the streamlit-oauth package. This component handles user redirection to the Salesforce login page, token retrieval, and response parsing upon return to your app.

from streamlit_oauth import OAuth2Component

oauth2 = OAuth2Component(
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
    authorize_url=AUTHORIZE_URL,
    token_url=TOKEN_URL,
    redirect_uri=REDIRECT_URI
)

# create a button to start the OAuth2 flow
result = oauth2.authorize_button(
    name="Log in",
    icon="https://www.salesforce.com/etc/designs/sfdc-www/en_us/favicon.ico",
    redirect_uri=REDIRECT_URI,
    scope=SCOPE,
    use_container_width=False
)

This button initiates the OAuth2 flow and handles redirection transparently. Once the user logs in successfully, Salesforce redirects them back to the app with a valid token.

Handle Session State for Tokens and User Data

After authentication, the returned tokens are stored in st.session_state to maintain a secure, per-user context.
Here's how to decode the token and extract user identity details:

if result:
    # decode the id_token and get the user's email address
    id_token = result["token"]["id_token"]
    access_token = result["token"]["access_token"]
    # verify the signature is an optional step for security
    payload = id_token.split(".")[1]
    # add padding to the payload if needed
    payload += "=" * (-len(payload) % 4)
    payload = json.loads(base64.b64decode(payload))
    email = payload["email"]
    username = payload["name"]
    # storing token and its parts in session state
    st.session_state["SF_token"] = result["token"]
    st.session_state["SF_user"] = username
    st.session_state["SF_auth"] = email
    st.session_state["SF_access_token"] = access_token
    st.session_state["SF_id_token"] = id_token
    st.rerun()
else:
    st.write(f"Congrats **{st.session_state.SF_user}**, you are logged in now!")
    if st.button("Log out"):
        cleartoken()
        st.rerun()

This mechanism ensures that the authenticated user context is preserved across interactions, and sensitive tokens remain protected within the session. The username displays in the UI after a successful login. 😀

Step 5: Create and Expose SnapLogic Triggered Task

Build Backend Logic with AgentCreator Snaps

With user authentication handled on the frontend, the next step is to build the backend business logic using SnapLogic AgentCreator. This toolkit lets you design AI-powered pipelines that integrate with data sources, perform intelligent processing, and return contextual responses. You can use pre-built Snaps (SnapLogic connectors) for Salesforce, OpenAI, and other services to assemble your Sales Agent pipeline.

Generate the Trigger URL for API Access

Once your pipeline is tested and functional, expose it as an API using a Triggered Task:
- In SnapLogic Designer, open your Sales Agent pipeline.
- Click on "Create Task" and choose "Triggered Task".
- Provide a meaningful name and set runtime parameters if needed.
- After saving, note the generated Trigger URL—this acts as your backend endpoint to which the Streamlit app will send requests.

This URL is the bridge between your authenticated frontend and the secure AI logic on SnapLogic's platform. However, before connecting it to Streamlit, you'll need to protect it using SnapLogic API Management, which we'll cover in the next section.

Step 6: Secure API with SnapLogic API Manager

Introduction to API Policies: Authentication and Authorization

To prevent unauthorized access to your backend, you must secure the Triggered Task endpoint using SnapLogic API Management. SnapLogic enables policy-based security, allowing you to enforce authentication and authorization using Salesforce-issued tokens. Two primary policies will be applied: Callout Authenticator and Authorize By Role.

The new Policy Editor of SnapLogic APIM 3.0

Add Callout Authenticator Policy

This policy validates the access token received from Salesforce. Since Salesforce tokens are opaque (not self-contained like JWTs), the Callout Authenticator policy sends the token to Salesforce's introspection endpoint for validation. If the token is active, Salesforce returns the user's metadata (email, scope, client ID, etc.).
Example of a valid token introspection response:

{
  "active": true,
  "scope": "id refresh_token openid",
  "client_id": "3MVG9C...",
  "username": "mpentzek@snaplogic.com",
  "sub": "https://login.salesforce.com/id/...",
  "token_type": "access_token",
  "exp": 1743708730,
  "iat": 1743701530,
  "nbf": 1743701530
}

If the token is invalid or expired, the response will simply show:

{
  "active": false
}

Below you can see the configuration of the Callout Authenticator Policy:

Extract the domain from the username (email) returned by the Introspection endpoint after successful token validation, for use in the Authorize By Role Policy.

Add AuthorizeByRole Policy

Once the token is validated, the Authorize By Role policy inspects the username (email) returned by Salesforce. You can configure this policy to allow access only to users from a trusted domain (e.g., @snaplogic.com), ensuring that external users cannot exploit your API. For example, you might configure the policy to check for the presence of "snaplogic" in the domain portion of the email. This adds a second layer of security after token verification and supports internal-only access models.

Step 7: Connect the Streamlit Frontend to the Secured API

Pass Access Tokens in HTTP Authorization Header

Once the user has successfully logged in and the access token is stored in st.session_state, you can use this token to securely communicate with your SnapLogic Triggered Task endpoint. The access token must be included in the HTTP request's Authorization header using the Bearer token scheme.

headers = {
    'Authorization': f'Bearer {st.session_state["SF_access_token"]}'
}

This ensures that the SnapLogic API Manager can validate the request and apply both authentication and authorization policies before executing the backend logic.

Display API Responses in the Streamlit UI

To make the interaction seamless, you can capture the user's input, send it to the secured API, and render the response directly in the Streamlit app. Here's an example of how this interaction might look:

import requests
import streamlit as st

prompt = st.text_input("Ask the Sales Agent something:")

if st.button("Submit"):
    with st.spinner("Working..."):
        data = {"prompt": prompt}
        headers = {
            'Authorization': f'Bearer {st.session_state["SF_access_token"]}'
        }
        response = requests.post(
            url="https://your-trigger-url-from-snaplogic",
            data=data,
            headers=headers,
            timeout=10,
            verify=False  # Only disable in development
        )
        if response.status_code == 200:
            st.success("Response received:")
            st.write(response.text)
        else:
            st.error(f"Error: {response.status_code}")

This fully connects the frontend to the secured backend, enabling secure, real-time interactions with your generative AI agent.

Common Pitfalls and Troubleshooting

Handling Expired or Invalid Tokens

One of the most common issues in OAuth-secured applications is dealing with expired or invalid tokens. Since Salesforce access tokens have a limited lifespan, users who stay inactive for a period may find their sessions invalidated. To address this:
- Always check the token's validity before making API calls.
- Gracefully handle 401 Unauthorized responses by prompting the user to log in again (a sketch of this pattern follows below).
- Implement a token refresh mechanism if your application supports long-lived sessions (requires refresh token configuration in Salesforce).

By proactively managing token lifecycle, you prevent disruptions to user experience and secure API communications.
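One minimal way to apply the 401 guidance above in the Streamlit frontend is sketched below. This is illustrative only: it reuses the imports, the prompt value, and the session keys from the earlier examples, and the re-login flow is simply a cleared session followed by a rerun so the login button renders again.

# a minimal sketch: drop the cached Salesforce token and force a fresh login on 401
def handle_unauthorized():
    for key in ("SF_token", "SF_access_token", "SF_id_token", "SF_user", "SF_auth"):
        st.session_state.pop(key, None)
    st.warning("Your Salesforce session has expired. Please log in again.")
    st.rerun()  # on the next run the app no longer sees a token and shows the login button

response = requests.post(
    url="https://your-trigger-url-from-snaplogic",
    data={"prompt": prompt},
    headers={'Authorization': f'Bearer {st.session_state["SF_access_token"]}'},
    timeout=10,
)
if response.status_code == 401:
    handle_unauthorized()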
Debugging OAuth Redirection Errors

OAuth redirection misconfigurations can block the authentication flow. Here are common issues and their solutions:
- Incorrect Callback URL: Ensure that the SF_REDIRECT_URI in your .env file matches exactly what's defined in the Salesforce Connected App settings.
- Missing Scopes: If the token does not contain expected identity fields (like email), verify that all required scopes (openid, email, profile) are included in both the app config and the OAuth request.
- Domain Restrictions: If access is denied even after successful login, confirm that the user's email domain matches the policy set in the SnapLogic API Manager.
Logging the returned error messages and using browser developer tools can help you pinpoint the issue during redirection and callback stages.

Best Practices for Secure AI Application Deployment

Rotate Secrets Regularly

To reduce the risk of secret leakage and potential exploitation, it's essential to rotate sensitive credentials—such as CLIENT_ID and CLIENT_SECRET—on a regular basis. Even though Streamlit's Secrets Management securely stores these values, periodic rotation ensures resilience against accidental exposure, insider threats, or repository misconfigurations. To streamline this, set calendar reminders or use automated DevSecOps pipelines that replace secrets and update environment files or secret stores accordingly.

Monitor API Logs and Auth Failures

Security doesn't stop at implementation. Ongoing monitoring is critical for identifying potential misuse or intrusion attempts. SnapLogic's API Management interface provides detailed metrics that can help you:
- Track API usage per user or IP address.
- Identify repeated authorization failures or token inspection errors.
- Spot anomalous patterns such as unexpected call volumes or malformed requests.

Extending the Architecture

Supporting Other OAuth Providers (Google, Okta, Entra ID)

While this tutorial focuses on Salesforce as the OAuth 2.0 Identity Provider, the same security architecture can be extended to support other popular providers like Google, Okta, and Microsoft Entra ID (formerly Azure AD). These providers are fully OAuth-compliant and typically offer similar endpoints for authorization, token exchange, and user introspection. To switch providers, update the following in your .env file:
- SF_AUTHORIZE_URL
- SF_TOKEN_URL
- SF_SCOPE (as per provider documentation)
Also, make sure your app is registered in the respective provider's developer portal and configured with the correct redirect URI and scopes.

Adding Role-Based Access Controls

For larger deployments, simple domain-based filtering may not be sufficient. You can extend authorization logic by incorporating role-based access controls (RBAC). This can be achieved by:
- Including custom roles in the OAuth token payload (e.g., via custom claims).
- Parsing these roles in SnapLogic's AuthorizeByRole policy.
- Restricting access to specific APIs or features based on user roles (e.g., admin, analyst, viewer), as sketched below.
RBAC allows you to build multi-tiered applications with differentiated permissions while maintaining strong security governance.
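As an illustration of the last point, a frontend-side role check might look like the following sketch. The "roles" claim name and the role values are purely hypothetical assumptions, and payload refers to the decoded ID token from the login step shown earlier; the authoritative enforcement should still happen in the SnapLogic AuthorizeByRole policy, not only in the UI.

# assumes the decoded ID token payload from the login step is available
roles = payload.get("roles", [])  # "roles" is a hypothetical custom claim name

def require_role(*allowed: str) -> bool:
    """Return True when the logged-in user holds at least one allowed role."""
    return any(role in roles for role in allowed)

if require_role("admin", "analyst"):
    st.subheader("Pipeline execution statistics")  # feature gated to privileged roles
else:
    st.info("You are signed in as a viewer; some features are hidden.")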
Conclusion

Final Thoughts on Secure AI App Deployment

Securing your generative AI applications is no longer optional—especially when they're built for enterprise use cases involving sensitive data, customer interactions, and decision automation. This tutorial demonstrated a complete security pattern using SnapLogic AgentCreator and Streamlit, authenticated via Salesforce OAuth 2.0 and protected through SnapLogic API Management.

By following this step-by-step approach, you ensure only verified users can access your app, and backend APIs are shielded by layered authentication and role-based authorization policies. The same architecture can easily be extended to other providers or scaled across multiple AI workflows within your organization.

Resources for Further Learning
- SnapLogic Resources and Use Cases
- Salesforce Developer Docs
- Streamlit Documentation
- OAuth 2.0 Official Specification

With a secure foundation in place, you're now empowered to build and scale powerful, enterprise-grade AI applications confidently.

Platform Memory Alerts & Priority Notifications for Resource Failures
This is more about platform memory alerts. From my understanding, we have alert metrics in place that trigger an email if any of the nodes hit the specified threshold in the manager. However, I am looking at a specific use case. Consider an Ultra Pipeline that needs to invoke a child pipeline for transformation logic. This child pipeline is expected to run on the same node as the parent pipeline to reduce additional processing time, as it is exposed to the client side. Now, if the child pipeline fails to prepare due to insufficient resources on the node, no alert will be generated since the child pipeline did not return anything in the error view. Is there any feature or discussion underway to provide priority notifications to the organization admin for such failures? Task-level notifications won't be helpful as they rely on the configured error limits at the task level. While I used the Ultra Pipeline as an example, this scenario applies to scheduled and API-triggered pipelines as well. Your insights would be appreciated.

SnapLogic Execution Mode Confusion: LOCAL_SNAPLEX vs SNAPLEX_WITH_PATH with pipe.plexPath

I understand the basic difference between the two execution options for child pipelines:
- LOCAL_SNAPLEX: Executes the child pipeline on one of the available nodes within the same Snaplex as the parent pipeline.
- SNAPLEX_WITH_PATH: Allows specifying a Snaplex explicitly through the Snaplex Path field. This is generally used to run the child pipeline on a different Snaplex.
However, I noticed a practical overlap. Let's say I have a Snaplex named integration-test.
- If I choose LOCAL_SNAPLEX, the child pipeline runs on the same Snaplex (integration-test) as the parent.
- If I choose SNAPLEX_WITH_PATH and set the path as pipe.plexPath, it also resolves to the same Snaplex (integration-test) where the parent is running, so the execution again happens locally.
I tested both options and found:
- The load was distributed similarly in both cases.
- Execution time was nearly identical.
So from a functional perspective, both seem to behave the same when the Snaplex path resolves to the same environment. My question is: What is the actual difference in behavior or purpose between these two options when pipe.plexPath resolves to the same Snaplex? Also, why is using SNAPLEX_WITH_PATH with pipe.plexPath flagged as critical in the pipeline quality check, even though the behavior appears equivalent to LOCAL_SNAPLEX? Curious if anyone has faced similar observations or can shed light on the underlying difference.

Bridging Legacy OPC Classic Servers (DA, AE, HDA) to SnapLogic via OPC UA Wrapper

Despite significant advances in industrial automation, many critical devices still rely on legacy OPC Classic servers (DA, AE, HDA). Integrating these aging systems with modern platforms presents challenges such as protocol incompatibility and the absence of native OPC UA support. Meanwhile, modern integration and analytics platforms increasingly depend on OPC UA for secure, scalable connectivity. This post addresses these challenges by demonstrating how the OPC UA Wrapper can seamlessly bridge OPC Classic servers to SnapLogic. Through a practical use case—detecting missing reset anomalies in saw-toothed wave signals from an OPC Simulation DA Server—you'll discover how to enable real-time monitoring and alerting without costly infrastructure upgrades.

Google Sheets Subscribe questions

Hello all, I'm trying to use the Google Sheets Subscribe snap, and either I misconfigured something, or my expectations about the results are wrong. I can subscribe successfully to a Google Sheet, and my expectation is to get a call sent to my triggered task with the changes that were made by users in the Google Sheet document. After my test, I saw only empty messages sent to my triggered task, so it seems like I'm only receiving a "ping" when there is a change in the document. Is this expected, or am I supposed to get a summary of the change? Also, I'm wondering why there is an expiration for the Google Sheet subscription? What am I supposed to do if I want to monitor the document permanently? Thanks! JF

Need Guidance on Dynamic Excel File Generation and Email Integration

Hello Team, I am currently developing an integration where the data structure in the Mapper includes an array format like [{}, {}, ...]. One of the fields, Sales Employee, contains values such as null, Andrew Johnson, and Kaitlyn Bernd. My goal is to dynamically create separate Excel files for each unique value in the Sales Employee field (including null) with all the records, and then send all the generated files as attachments in a single email. Since the employee names may vary and increase in the future, the solution needs to handle dynamic grouping and file generation. I would appreciate any expert opinions or best practices on achieving this efficiently in SnapLogic. Thanks and Regards,

Inserting large data in servicenow

Hello Team, I am developing a pipeline in SnapLogic where 6,000,000 records are coming from Snowflake, and I have designed my pipeline like this: Parent pipeline: Snowflake Execute -> Mapper (one-to-one field mapping) -> Group By N with a group size of 10000 -> Pipeline Execute with Pool size 5; in the child pipeline I have used a JSON Splitter and ServiceNow Insert. What can I do to optimize the performance and make it execute faster in SnapLogic? Currently it takes a long time to execute. Can someone assist in this regard? Thanks in advance.

The Innovator's Advantage: Becoming an Agent Creator in a Generative AI World

Your roadmap to AI-powered transformation starts here. Download eBook >>
In a fast-paced, AI-first world, waiting is no longer an option. Forward-thinking leaders already leverage generative AI to cut costs, streamline operations, and drive measurable results. This exclusive eBook, The Innovator's Advantage, shows you how to lead the way. Inside, you'll discover:
- Why now is the time to become an agent creator, and what that really means
- A practical blueprint for building and deploying generative AI agents that deliver real business value
- Real-world success stories from organizations transforming IT, finance, and operations
- How to prepare your data, your teams, and your tech stack for scalable, secure AI adoption
Innovation is already happening, and those who act now will own the advantage. SnapLogic's AgentCreator empowers organizations to go beyond theory and put generative AI to work across the enterprise, with speed, simplicity and security. Be the leader who acts. Not the one who catches up. Download The Innovator's Advantage and take the first step to becoming the agent creator.
Getting Started
Here are some links to help you get quickly familiarized with the Integration Nation community platform.
Top Content
Recent Blogs
Why Security is Essential for Generative AI Applications
As generative AI applications transition from prototypes to enterprise-grade solutions, ensuring security becomes non-negotiable. These applications often interact with sensitive user data, internal databases, and decision-making logic that must be protected from unauthorized access. Streamlit, while great for quickly developing interactive AI interfaces, lacks built-in access control mechanisms. Therefore, integrating robust authentication and authorization workflows is critical to safeguarding both the user interface and backend APIs.
Overview of the AgentCreator + Streamlit Architecture
This guide focuses on securing a generative AI-powered Sales Agent application built with SnapLogic AgentCreator and deployed via Streamlit. The application integrates Salesforce OAuth 2.0 as an identity provider and secures its backend APIs using SnapLogic API Management. Through this setup, only authorized Salesforce users from a trusted domain can access the application, ensuring end-to-end security for both the frontend and backend.
Understanding the Application Stack
Role of SnapLogic's AgentCreator Toolkit
The SnapLogic AgentCreator Toolkit enables developers and sales engineers to build sophisticated AI-powered agents without having to manage complex infrastructure. These agents operate within SnapLogic pipelines, making it easy to embed business logic, API integrations, and data processing in a modular way. For example, a sales assistant built with AgentCreator and exposed as API using Triggered Tasks can pull real-time CRM data, generate intelligent responses, and return it via a clean web interface.
Streamlit as User Interface
On the frontend, Streamlit is used to build a simple, interactive web interface for users to query the Sales Agent.
Importance of API Management in AI Workflows
Once these agents are exposed via HTTP APIs, managing who accesses them—and how—is crucial. That’s where SnapLogic API Management comes in. It provides enterprise-grade tools for API publishing, securing endpoints, enforcing role-based access controls, and monitoring traffic. These features ensure that only verified users and clients can interact with your APIs, reducing the risk of unauthorized data access or abuse.
However, the real challenge lies in securing both ends:
The Streamlit UI, which needs to restrict access to authorized users.
The SnapLogic APIs, exposing the AgentCreator Pipelines which must validate and authorize each incoming request.
OAuth 2.0 Authentication: Fundamentals and Benefits
What is OAuth 2.0?
OAuth 2.0 is an open standard protocol designed for secure delegated access. Instead of sharing credentials directly, users grant applications access to their resources using access tokens. This model is particularly valuable in enterprise environments, where central identity management is crucial. By using OAuth 2.0, applications can authenticate users through trusted Identity Providers (IDPs) while maintaining a separation of concerns between authentication, authorization, and application logic.
Why Use Salesforce as the Identity Provider (IDP)?
Salesforce is a robust identity provider that many organizations already rely on for CRM, user management, and security. Leveraging Salesforce for OAuth 2.0 authentication allows developers to tap into a pre-existing user base and organizational trust framework. In this tutorial, Salesforce is used to handle login and token issuance, ensuring that only authorized Salesforce users can access the Streamlit application. This integration also simplifies compliance with enterprise identity policies such as SSO, MFA, and domain-based restrictions.
To address the authentication challenge, we use the OAuth 2.0 Authorization Code Flow, with Salesforce acting as both the Identity and Token Provider.
Here is Salesforce’s official documentation on OAuth endpoints, which is helpful for configuring your connected app.
🔒 Note: While Salesforce is a logical choice for this example—since the Sales Agent interacts with Salesforce data—any OAuth2-compliant Identity Provider (IDP) such as Google, Okta, or Microsoft Entra ID (formerly Azure AD) can be used. The core authentication flow remains the same, with variations primarily in OAuth endpoints and app registration steps.
Architecture Overview and Security Objectives
Frontend (Streamlit) vs Backend (SnapLogic APIs)
The application architecture separates the frontend interface and backend logic. The frontend is built using Streamlit, which allows users to interact with a visually intuitive dashboard. It handles login, displays AI-generated responses, and captures user inputs. The backend, powered by SnapLogic's AgentCreator, hosts the core business logic within pipelines that are exposed as APIs. This separation ensures flexibility and modular development, but it also introduces the challenge of securing both components independently yet cohesively.
Threat Model and Security Goals
The primary security threats in such a system include unauthorized access to the UI, data leaks through unsecured APIs, and token misuse. To mitigate these risks, the following security objectives are established:
Authentication: Ensure only legitimate users from a trusted identity provider (Salesforce) can log in.
Authorization: Grant API access based on user roles and domains, verified via SnapLogic APIM policies.
Token Integrity: Validate and inspect access tokens with SnapLogic APIM policies before allowing backend communication.
Secret Management: Store sensitive credentials (like Client ID and Secret) securely using Streamlit's secret management features.
This layered approach aligns with enterprise security standards and provides a scalable model for future generative AI applications.
Authentication & Authorization Flow
Here’s how we securely manage access:
1. Login via Salesforce:
Users are redirected to Salesforce’s login screen.
After successful login, Salesforce redirects back to the app with an access token.
The token and user identity info are stored in Streamlit’s session state.
2. Calling SnapLogic APIs:
The frontend sends requests to SnapLogic’s triggered task APIs, attaching the Salesforce access token in the Authorization HTTP Header.
3. Securing APIs via SnapLogic Policies:
Callout Authenticator Policy: Validates the token by sending it to Salesforce’s token validation endpoint, as Salesforce tokens are opaque and not self-contained like JWTs.
AuthorizeByRole Policy: After extracting the user’s email address, this policy checks if the domain (e.g., @snaplogic.com) is allowed. If so, access is granted.
Below you can find the complete OAuth 2.0 Authorization Code Flow, enhanced with the token introspection and authorization flow:
This setup ensures end-to-end security, combining OAuth-based authentication with SnapLogic’s enterprise-grade API Management capabilities. In the following sections, we’ll walk through how to implement each part—from setting up the Salesforce Connected App to configuring policies in SnapLogic—so you can replicate or adapt this pattern for your own generative AI applications.
Step 1: Set Up Salesforce Connected App
Navigate to Salesforce Developer Console
To initiate the OAuth 2.0 authentication flow, you’ll need to register your application as a Connected App in Salesforce. Begin by logging into your Salesforce Developer or Admin account. From the top-right gear icon, navigate to Setup → App Manager. Click on “New Connected App” to create a new OAuth-enabled application profile.
Define OAuth Callback URLs and Scopes
In the new Connected App form, set the following fields under the API (Enable OAuth Settings) section:
Callback URL: This should be the URL of your Streamlit application (e.g., https://snaplogic-genai-builder.streamlit.app/Sales_Agent).
Selected OAuth Scopes: Include at least openid, email, and profile. You may also include additional scopes depending on the level of access required.
Ensure that the “Enable OAuth Settings” box is checked to make this app OAuth-compliant.
Retrieve Client ID and Client Secret
After saving the app configuration, Salesforce will generate a Consumer Key (Client ID) and a Consumer Secret. These are crucial for the OAuth exchange and must be stored securely. You will use these values later when configuring the Streamlit OAuth integration and environment settings. Do not expose these secrets in your codebase or version control.
📄 For details on Salesforce OAuth endpoints, see: 👉 Salesforce OAuth Endpoints Documentation
Step 2: Integrate OAuth with Streamlit Using streamlit-oauth
Install and Configure streamlit-oauth Package
To incorporate OAuth 2.0 authentication into your Streamlit application, you can use the third-party streamlit-oauth package. This package abstracts the OAuth flow and simplifies integration with popular identity providers like Salesforce. To install it, run the following command in your terminal:
pip install streamlit-oauth
After installation, you'll configure the OAuth2Component to initiate the login process and handle token reception once authentication is successful.
Handle Access and ID Tokens Securely
Once users log in through Salesforce, the app receives an Access Token and an ID token. These tokens should never be exposed in the UI or logged publicly. Instead, store them securely in st.session_state, Streamlit's native session management system. This ensures the tokens are tied to the user's session and can be accessed for API calls later in the flow.
Store Credentials via Streamlit Secrets Management
Storing secrets such as CLIENT_ID and CLIENT_SECRET directly in your source code is a security risk. Streamlit provides a built-in Secrets Management system that allows you to store sensitive information in a .streamlit/secrets.toml file, which should be excluded from version control.
Example:
# .streamlit/secrets.toml
SF_CLIENT_ID = "your_client_id"
SF_CLIENT_SECRET = "your_client_secret"
In your code, you can access these securely:
import streamlit as st

CLIENT_ID = st.secrets["SF_CLIENT_ID"]
CLIENT_SECRET = st.secrets["SF_CLIENT_SECRET"]
Step 3: Manage Environment Settings with python-dotenv
Why Environment Variables Matter
Managing environment-specific configuration is essential for maintaining secure and scalable applications. In addition to storing sensitive credentials using Streamlit’s secrets management, storing dynamic OAuth parameters such as URLs, scopes, and redirect URIs in an environment file (e.g., .env) allows you to keep code clean and configuration flexible. This is particularly useful if you plan to deploy across multiple environments (development, staging, production) with different settings.
Store OAuth Endpoints in .env Files
To manage environment settings, use the python-dotenv package, which loads environment variables from a .env file into your Python application. First, install the library:
pip install python-dotenv
Create a .env file in your project directory with the following format:
SF_AUTHORIZE_URL=https://login.salesforce.com/services/oauth2/authorize
SF_TOKEN_URL=https://login.salesforce.com/services/oauth2/token
SF_REVOKE_TOKEN_URL=https://login.salesforce.com/services/oauth2/revoke
SF_REDIRECT_URI=https://your-streamlit-app-url
SF_SCOPE=id openid email profile
Then, use the dotenv_values function to load the variables into your script:
from dotenv import dotenv_values
env = dotenv_values(".env")
AUTHORIZE_URL = env["SF_AUTHORIZE_URL"]
TOKEN_URL = env["SF_TOKEN_URL"]
REVOKE_TOKEN_URL = env["SF_REVOKE_TOKEN_URL"]
REDIRECT_URI = env["SF_REDIRECT_URI"]
SCOPE = env["SF_SCOPE"]
This approach ensures that your sensitive and environment-specific data is decoupled from the codebase, enhancing maintainability and security.
Step 4: Configure OAuth Flow in Streamlit
Define OAuth2 Component and Redirect Logic
With your environment variables and secrets in place, it’s time to configure the OAuth flow in Streamlit using the OAuth2Component from the streamlit-oauth package. This component handles user redirection to the Salesforce login page, token retrieval, and response parsing upon return to your app.
from streamlit_oauth import OAuth2Component
oauth2 = OAuth2Component(
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
    authorize_url=AUTHORIZE_URL,
    token_url=TOKEN_URL,
    redirect_uri=REDIRECT_URI
)

# Create a button to start the OAuth2 flow
result = oauth2.authorize_button(
    name="Log in",
    icon="https://www.salesforce.com/etc/designs/sfdc-www/en_us/favicon.ico",
    redirect_uri=REDIRECT_URI,
    scope=SCOPE,
    use_container_width=False
)
This button initiates the OAuth2 flow and handles redirection transparently. Once the user logs in successfully, Salesforce redirects them back to the app with a valid token.
Handle Session State for Tokens and User Data
After authentication, the returned tokens are stored in st.session_state to maintain a secure, per-user context. Here’s how to decode the token and extract user identity details:
import base64
import json

if result:
    # Decode the id_token and extract the user's email address and name
    id_token = result["token"]["id_token"]
    access_token = result["token"]["access_token"]
    # Verifying the signature is an optional extra security step
    payload = id_token.split(".")[1]
    # Add Base64 padding to the payload if needed
    payload += "=" * (-len(payload) % 4)
    payload = json.loads(base64.b64decode(payload))
    email = payload["email"]
    username = payload["name"]
    # Store the token and its parts in the session state
    st.session_state["SF_token"] = result["token"]
    st.session_state["SF_user"] = username
    st.session_state["SF_auth"] = email
    st.session_state["SF_access_token"] = access_token
    st.session_state["SF_id_token"] = id_token
    st.rerun()
else:
    st.write(f"Congrats **{st.session_state.SF_user}**, you are logged in now!")
    if st.button("Log out"):
        cleartoken()
        st.rerun()
This mechanism ensures that the authenticated user context is preserved across interactions, and sensitive tokens remain protected within the session.
The username displays in the UI after a successful login. 😀
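The logout branch above calls a cleartoken() helper that isn't shown in the snippet. Below is a minimal sketch of what such a helper could look like; it assumes you only need to drop the session-scoped values and, optionally, revoke the access token at Salesforce using the SF_REVOKE_TOKEN_URL loaded earlier (the helper name and the revocation step are illustrative, not prescribed by streamlit-oauth).

import requests
import streamlit as st

def cleartoken():
    # Sketch: best-effort revocation of the access token at Salesforce,
    # then removal of all session-scoped values set during login.
    token = st.session_state.get("SF_access_token")
    if token:
        try:
            requests.post(REVOKE_TOKEN_URL, data={"token": token}, timeout=10)
        except requests.RequestException:
            pass  # Revocation is best-effort; always clear the local session below.
    for key in ("SF_token", "SF_user", "SF_auth", "SF_access_token", "SF_id_token"):
        st.session_state.pop(key, None)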
Step 5: Create and Expose SnapLogic Triggered Task
Build Backend Logic with AgentCreator Snaps
With user authentication handled on the frontend, the next step is to build the backend business logic using SnapLogic AgentCreator. This toolkit lets you design AI-powered pipelines that integrate with data sources, perform intelligent processing, and return contextual responses. You can use pre-built Snaps (SnapLogic connectors) for Salesforce, OpenAI, and other services to assemble your Sales Agent pipeline.
Generate the Trigger URL for API Access
Once your pipeline is tested and functional, expose it as an API using a Triggered Task:
In SnapLogic Designer, open your Sales Agent pipeline.
Click on “Create Task” and choose “Triggered Task”.
Provide a meaningful name and set runtime parameters if needed.
After saving, note the generated Trigger URL—this acts as your backend endpoint to which the Streamlit app will send requests.
This URL is the bridge between your authenticated frontend and the secure AI logic on SnapLogic’s platform. However, before connecting it to Streamlit, you'll need to protect it using SnapLogic API Management, which we'll cover in the next section.
Step 6: Secure API with SnapLogic API Manager
Introduction to API Policies: Authentication and Authorization
To prevent unauthorized access to your backend, you must secure the Triggered Task endpoint using SnapLogic API Management. SnapLogic enables policy-based security, allowing you to enforce authentication and authorization using Salesforce-issued tokens. Two primary policies will be applied: Callout Authenticator and Authorize By Role.
The new Policy Editor of SnapLogic APIM 3.0
Add Callout Authenticator Policy
This policy validates the access token received from Salesforce. Since Salesforce tokens are opaque (not self-contained like JWTs), the Callout Authenticator policy sends the token to Salesforce’s introspection endpoint for validation. If the token is active, Salesforce returns the user's metadata (email, scope, client ID, etc.).
Example of a valid token introspection response:
{
  "active": true,
  "scope": "id refresh_token openid",
  "client_id": "3MVG9C...",
  "username": "mpentzek@snaplogic.com",
  "sub": "https://login.salesforce.com/id/...",
  "token_type": "access_token",
  "exp": 1743708730,
  "iat": 1743701530,
  "nbf": 1743701530
}
If the token is invalid or expired, the response will simply show:
{
  "active": false
}
Below you can see the configuration of the Callout Authenticator Policy:
Extract the domain from the username (email) returned by the introspection endpoint after successful token validation, so that it can be used in the Authorize By Role policy.
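If you want to see what the policy receives, you can reproduce the introspection call manually. The sketch below assumes Salesforce's standard OAuth 2.0 token introspection endpoint and that the Connected App's credentials are used for client authentication; the exact endpoint path and authentication requirements depend on your Salesforce configuration.

import requests

# Manual reproduction of the introspection call the Callout Authenticator performs (sketch).
resp = requests.post(
    "https://login.salesforce.com/services/oauth2/introspect",
    data={"token": access_token, "token_type_hint": "access_token"},
    auth=(CLIENT_ID, CLIENT_SECRET),  # Connected App credentials (assumption)
    timeout=10,
)
print(resp.json())  # {"active": true, "username": "...", ...} for a valid token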
Add AuthorizeByRole Policy
Once the token is validated, the Authorize By Role policy inspects the username (email) returned by Salesforce. You can configure this policy to allow access only to users from a trusted domain (e.g., @snaplogic.com), ensuring that external users cannot exploit your API.
For example, you might configure the policy to check for the presence of “snaplogic” in the domain portion of the email. This adds a second layer of security after token verification and supports internal-only access models.
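The check itself is configured in the policy editor rather than in application code, but conceptually it boils down to something like the following (illustrative Python, not the actual policy expression syntax):

def is_allowed(username: str, allowed_domain: str = "snaplogic.com") -> bool:
    # Illustration of the Authorize By Role domain check: take the part of the
    # email after "@" and compare it against the trusted domain.
    domain = username.split("@")[-1].lower()
    return domain == allowed_domain

# is_allowed("mpentzek@snaplogic.com")  -> True
# is_allowed("attacker@example.com")    -> False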
Step 7: Connect the Streamlit Frontend to the Secured API
Pass Access Tokens in HTTP Authorization Header
Once the user has successfully logged in and the access token is stored in st.session_state, you can use this token to securely communicate with your SnapLogic Triggered Task endpoint. The access token must be included in the HTTP request’s Authorization header using the Bearer token scheme.
headers = {
    'Authorization': f'Bearer {st.session_state["SF_access_token"]}'
}
This ensures that the SnapLogic API Manager can validate the request and apply both authentication and authorization policies before executing the backend logic.
Display API Responses in the Streamlit UI
To make the interaction seamless, you can capture the user’s input, send it to the secured API, and render the response directly in the Streamlit app. Here’s an example of how this interaction might look:
import requests
import streamlit as st

prompt = st.text_input("Ask the Sales Agent something:")

if st.button("Submit"):
    with st.spinner("Working..."):
        data = {"prompt": prompt}
        headers = {
            'Authorization': f'Bearer {st.session_state["SF_access_token"]}'
        }
        response = requests.post(
            url="https://your-trigger-url-from-snaplogic",
            data=data,
            headers=headers,
            timeout=10,
            verify=False  # Only disable in development
        )
        if response.status_code == 200:
            st.success("Response received:")
            st.write(response.text)
        else:
            st.error(f"Error: {response.status_code}")
This fully connects the frontend to the secured backend, enabling secure, real-time interactions with your generative AI agent.
Common Pitfalls and Troubleshooting
Handling Expired or Invalid Tokens
One of the most common issues in OAuth-secured applications is dealing with expired or invalid tokens. Since Salesforce access tokens have a limited lifespan, users who stay inactive for a period may find their sessions invalidated. To address this:
Always check the token's validity before making API calls.
Gracefully handle 401 Unauthorized responses by prompting the user to log in again.
Implement a token refresh mechanism if your application supports long-lived sessions (requires refresh token configuration in Salesforce).
By proactively managing token lifecycle, you prevent disruptions to user experience and secure API communications.
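As a minimal sketch of the first two points, reusing the token stored in session state and the cleartoken() helper from earlier (the wrapper function name is illustrative):

import requests
import streamlit as st

def call_sales_agent(url: str, prompt: str):
    headers = {"Authorization": f"Bearer {st.session_state['SF_access_token']}"}
    response = requests.post(url, data={"prompt": prompt}, headers=headers, timeout=10)
    if response.status_code == 401:
        # Token expired or revoked: clear the session and ask the user to log in again.
        cleartoken()
        st.warning("Your session has expired. Please log in again.")
        st.rerun()
    response.raise_for_status()
    return response.text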
Debugging OAuth Redirection Errors
OAuth redirection misconfigurations can block the authentication flow. Here are common issues and their solutions:
Incorrect Callback URL: Ensure that the SF_REDIRECT_URI in your .env file matches exactly what’s defined in the Salesforce Connected App settings.
Missing Scopes: If the token does not contain expected identity fields (like email), verify that all required scopes (openid, email, profile) are included in both the app config and OAuth request.
Domain Restrictions: If access is denied even after successful login, confirm that the user’s email domain matches the policy set in the SnapLogic API Manager.
Logging the returned error messages and using browser developer tools can help you pinpoint the issue during redirection and callback stages.
Best Practices for Secure AI Application Deployment
Rotate Secrets Regularly
To reduce the risk of secret leakage and potential exploitation, it's essential to rotate sensitive credentials—such as CLIENT_ID and CLIENT_SECRET—on a regular basis. Even though Streamlit’s Secrets Management securely stores these values, periodic rotation ensures resilience against accidental exposure, insider threats, or repository misconfigurations.
To streamline this, set calendar reminders or use automated DevSecOps pipelines that replace secrets and update environment files or secret stores accordingly.
Monitor API Logs and Auth Failures
Security doesn’t stop at implementation. Ongoing monitoring is critical for identifying potential misuse or intrusion attempts. SnapLogic’s API Management interface provides detailed metrics that can help you:
Track API usage per user or IP address.
Identify repeated authorization failures or token inspection errors.
Spot anomalous patterns such as unexpected call volumes or malformed requests.
Extending the Architecture
Supporting Other OAuth Providers (Google, Okta, Entra ID)
While this tutorial focuses on Salesforce as the OAuth 2.0 Identity Provider, the same security architecture can be extended to support other popular providers like Google, Okta, and Microsoft Entra ID (formerly Azure AD). These providers are fully OAuth-compliant and typically offer similar endpoints for authorization, token exchange, and user introspection.
To switch providers, update the following in your .env file:
SF_AUTHORIZE_URL
SF_TOKEN_URL
SF_SCOPE (as per provider documentation)
Also, make sure your app is registered in the respective provider’s developer portal and configured with the correct redirect URI and scopes.
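For example, a .env file pointing at Google's published OAuth 2.0 endpoints might look like this (the SF_ variable names are kept only so the rest of the code stays unchanged):

SF_AUTHORIZE_URL=https://accounts.google.com/o/oauth2/v2/auth
SF_TOKEN_URL=https://oauth2.googleapis.com/token
SF_REVOKE_TOKEN_URL=https://oauth2.googleapis.com/revoke
SF_REDIRECT_URI=https://your-streamlit-app-url
SF_SCOPE=openid email profile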
Adding Role-Based Access Controls
For larger deployments, simple domain-based filtering may not be sufficient. You can extend authorization logic by incorporating role-based access controls (RBAC). This can be achieved by:
Including custom roles in the OAuth token payload (e.g., via custom claims).
Parsing these roles in SnapLogic’s AuthorizeByRole policy.
Restricting access to specific APIs or features based on user roles (e.g., admin, analyst, viewer).
RBAC allows you to build multi-tiered applications with differentiated permissions while maintaining strong security governance.
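As a frontend-side illustration of role checking, assuming your IDP adds a custom "roles" claim to the ID token (the claim name and role values here are hypothetical), the decoded payload from Step 4 could be checked like this; the equivalent rule can also be enforced server-side in the AuthorizeByRole policy:

ALLOWED_ROLES = {"admin", "analyst"}

# 'payload' is the decoded ID-token payload from Step 4; the "roles" claim is
# hypothetical and must be configured as a custom claim in your identity provider.
user_roles = set(payload.get("roles", []))
if not user_roles & ALLOWED_ROLES:
    st.error("You are not authorized to use this feature.")
    st.stop()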
Conclusion
Final Thoughts on Secure AI App Deployment
Securing your generative AI applications is no longer optional—especially when they’re built for enterprise use cases involving sensitive data, customer interactions, and decision automation. This tutorial demonstrated a complete security pattern using SnapLogic AgentCreator and Streamlit, authenticated via Salesforce OAuth 2.0 and protected through SnapLogic API Management.
By following this step-by-step approach, you ensure only verified users can access your app, and backend APIs are shielded by layered authentication and role-based authorization policies. The same architecture can easily be extended to other providers or scaled across multiple AI workflows within your organization.
Resources for Further Learning
SnapLogic Resources and Use Cases
Salesforce Developer Docs
Streamlit Documentation
OAuth 2.0 Official Specification
With a secure foundation in place, you’re now empowered to build and scale powerful, enterprise-grade AI applications confidently.
Despite significant advances in industrial automation, many critical devices still rely on legacy OPC Classic servers (DA, AE, HDA). Integrating these aging systems with modern platforms presents challenges such as protocol incompatibility and the absence of native OPC UA support. Meanwhile, modern integration and analytics platforms increasingly depend on OPC UA for secure, scalable connectivity. This post addresses these challenges by demonstrating how the OPC UA Wrapper can seamlessly bridge OPC Classic servers to SnapLogic. Through a practical use case—detecting missing reset anomalies in saw-toothed wave signals from an OPC Simulation DA Server—you’ll discover how to enable real-time monitoring and alerting without costly infrastructure upgrades.
Scalable Analytics Platform: A Data Engineering Journey - Explore SnapLogic's innovative Medallion Architecture approach for handling massive data, improving analytics with S3, Trino, and Amazon Neptune. Learn about cost reduction, scalability, data governance, and enhanced insights.
SnapLogic AutoSync: Your Agile Chopper for Data Integration
In the world of enterprise data, long-haul flights are essential—but sometimes you need to lift off quickly, land precisely, and get the job done without waiting for a runway.
Think of SnapLogic’s Intelligent Integration Platform (IIP) as your data jumbo jet: powerful, scalable, and built for complex, high-volume integrations across global systems. Now imagine you need something faster, more nimble—something that doesn’t require a flight crew to get airborne.
Enter SnapLogic AutoSync—the agile chopper in your integration fleet.
Whether you're syncing Salesforce data after an acquisition, uploading spreadsheets for instant analysis, or automating recurring flows between systems like Marketo and Redshift, AutoSync lifts your data with just a few clicks. It empowers business users to move quickly and experiment safely, without compromising on governance or control.
With AutoSync, you’re not just reducing engineering cycles—you’re accelerating the entire journey from raw data to actionable insight.
In the energy sector, turbine lubrication oil is mission-critical. A drop in oil level or pressure can silently escalate into major failures, unplanned shutdowns, and expensive maintenance windows.
In this blog, we showcase a real-world implementation using SnapLogic and OPC UA, designed to:
🔧 Continuously monitor turbine lubrication oil levels
📥 Ingest real-time sensor data from industrial systems
📊 Store telemetry in data lakes for analytics and compliance
📣 Send real-time Slack alerts to engineers — before failures strike
This IIoT-driven solution empowers energy providers to adopt predictive maintenance practices and reduce operational risk.