# Secure Your AgentCreator App with SnapLogic API Management
## Why Security Is Essential for Generative AI Applications

As generative AI applications transition from prototypes to enterprise-grade solutions, ensuring security becomes non-negotiable. These applications often interact with sensitive user data, internal databases, and decision-making logic that must be protected from unauthorized access. Streamlit, while great for quickly developing interactive AI interfaces, lacks built-in access control mechanisms. Integrating robust authentication and authorization workflows is therefore critical to safeguarding both the user interface and the backend APIs.

### Overview of the AgentCreator + Streamlit Architecture

This guide focuses on securing a generative AI-powered Sales Agent application built with SnapLogic AgentCreator and deployed via Streamlit. The application uses Salesforce OAuth 2.0 as its identity provider and secures its backend APIs with SnapLogic API Management. With this setup, only authorized Salesforce users from a trusted domain can access the application, ensuring end-to-end security for both the frontend and the backend.

## Understanding the Application Stack

### Role of SnapLogic's AgentCreator Toolkit

The SnapLogic AgentCreator Toolkit enables developers and sales engineers to build sophisticated AI-powered agents without having to manage complex infrastructure. These agents operate within SnapLogic pipelines, making it easy to embed business logic, API integrations, and data processing in a modular way. For example, a sales assistant built with AgentCreator and exposed as an API via Triggered Tasks can pull real-time CRM data, generate intelligent responses, and return them through a clean web interface.

### Streamlit as the User Interface

On the frontend, Streamlit is used to build a simple, interactive web interface through which users query the Sales Agent.

### Importance of API Management in AI Workflows

Once these agents are exposed via HTTP APIs, managing who accesses them, and how, is crucial. That's where SnapLogic API Management comes in. It provides enterprise-grade tools for publishing APIs, securing endpoints, enforcing role-based access controls, and monitoring traffic. These features ensure that only verified users and clients can interact with your APIs, reducing the risk of unauthorized data access or abuse.

The real challenge lies in securing both ends:

- The Streamlit UI, which must restrict access to authorized users.
- The SnapLogic APIs exposing the AgentCreator pipelines, which must validate and authorize each incoming request.

## OAuth 2.0 Authentication: Fundamentals and Benefits

### What Is OAuth 2.0?

OAuth 2.0 is an open standard protocol designed for secure delegated access. Instead of sharing credentials directly, users grant applications access to their resources using access tokens. This model is particularly valuable in enterprise environments, where central identity management is crucial. By using OAuth 2.0, applications can authenticate users through trusted Identity Providers (IDPs) while maintaining a separation of concerns between authentication, authorization, and application logic.

### Why Use Salesforce as the Identity Provider (IDP)?

Salesforce is a robust identity provider that many organizations already rely on for CRM, user management, and security. Leveraging Salesforce for OAuth 2.0 authentication allows developers to tap into a pre-existing user base and organizational trust framework.
In this tutorial, Salesforce handles login and token issuance, ensuring that only authorized Salesforce users can access the Streamlit application. This integration also simplifies compliance with enterprise identity policies such as SSO, MFA, and domain-based restrictions. To address the authentication challenge, we use the OAuth 2.0 Authorization Code Flow, with Salesforce acting as both the identity and token provider. Salesforce's official documentation on OAuth endpoints is helpful when configuring your connected app.

🔒 Note: While Salesforce is a logical choice for this example, since the Sales Agent interacts with Salesforce data, any OAuth2-compliant Identity Provider (IDP) such as Google, Okta, or Microsoft Entra ID (formerly Azure AD) can be used. The core authentication flow remains the same, with variations primarily in OAuth endpoints and app registration steps.

## Architecture Overview and Security Objectives

### Frontend (Streamlit) vs. Backend (SnapLogic APIs)

The application architecture separates the frontend interface from the backend logic. The frontend is built with Streamlit, which gives users a visually intuitive dashboard. It handles login, displays AI-generated responses, and captures user inputs. The backend, powered by SnapLogic AgentCreator, hosts the core business logic within pipelines that are exposed as APIs. This separation provides flexibility and modular development, but it also introduces the challenge of securing both components independently yet cohesively.

### Threat Model and Security Goals

The primary security threats in such a system are unauthorized access to the UI, data leaks through unsecured APIs, and token misuse. To mitigate these risks, the following security objectives are established:

- Authentication: Ensure only legitimate users from a trusted identity provider (Salesforce) can log in.
- Authorization: Grant API access based on user roles and domains, verified via SnapLogic APIM policies.
- Token Integrity: Validate and inspect access tokens with SnapLogic APIM policies before allowing backend communication.
- Secret Management: Store sensitive credentials (like the Client ID and Secret) securely using Streamlit's secrets management features.

This layered approach aligns with enterprise security standards and provides a scalable model for future generative AI applications.

### Authentication & Authorization Flow

Here's how we securely manage access:

1. Login via Salesforce: Users are redirected to Salesforce's login screen. After a successful login, Salesforce redirects back to the app with an access token. The token and user identity info are stored in Streamlit's session state.
2. Calling SnapLogic APIs: The frontend sends requests to SnapLogic's Triggered Task APIs, attaching the Salesforce access token in the Authorization HTTP header.
3. Securing APIs via SnapLogic policies:
   - Callout Authenticator policy: Validates the token by sending it to Salesforce's token validation endpoint, as Salesforce tokens are opaque and not self-contained like JWTs.
   - AuthorizeByRole policy: After extracting the user's email address, this policy checks whether the domain (e.g., @snaplogic.com) is allowed. If so, access is granted.

Below you can find the complete OAuth 2.0 Authorization Code Flow, enhanced with the token introspection and authorization flow. This setup ensures end-to-end security, combining OAuth-based authentication with SnapLogic's enterprise-grade API Management capabilities.
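To make the flow concrete, here is a minimal sketch of the two HTTP legs of the Authorization Code Flow against Salesforce's documented OAuth endpoints. The client credentials and redirect URI are placeholders, and in the finished app the streamlit-oauth component shown later performs these steps for you:

```python
import requests
from urllib.parse import urlencode

# Placeholders - supplied by your Salesforce Connected App configuration.
CLIENT_ID = "your_client_id"
CLIENT_SECRET = "your_client_secret"
REDIRECT_URI = "https://your-streamlit-app-url"

# Leg 1: send the user's browser to Salesforce to authenticate.
authorize_url = "https://login.salesforce.com/services/oauth2/authorize?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid email profile",
})

# Leg 2: after the redirect back, exchange the returned ?code=... for tokens.
def exchange_code(code: str) -> dict:
    resp = requests.post(
        "https://login.salesforce.com/services/oauth2/token",
        data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "redirect_uri": REDIRECT_URI,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # contains access_token and id_token among other fields
```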
In the following sections, we'll walk through how to implement each part, from setting up the Salesforce Connected App to configuring policies in SnapLogic, so you can replicate or adapt this pattern for your own generative AI applications.

## Step 1: Set Up the Salesforce Connected App

### Navigate to Salesforce Setup

To initiate the OAuth 2.0 authentication flow, you'll need to register your application as a Connected App in Salesforce. Begin by logging into your Salesforce Developer or Admin account. From the top-right gear icon, navigate to Setup → App Manager, then click "New Connected App" to create a new OAuth-enabled application profile.

### Define OAuth Callback URLs and Scopes

In the new Connected App form, set the following fields under the API (Enable OAuth Settings) section:

- Callback URL: The URL of your Streamlit application (e.g., https://snaplogic-genai-builder.streamlit.app/Sales_Agent).
- Selected OAuth Scopes: Include at least openid, email, and profile. You may also include additional scopes depending on the level of access required.

Ensure that the "Enable OAuth Settings" box is checked to make this app OAuth-compliant.

### Retrieve the Client ID and Client Secret

After saving the app configuration, Salesforce generates a Consumer Key (Client ID) and a Consumer Secret (Client Secret). These are crucial for the OAuth exchange and must be stored securely. You will use these values later when configuring the Streamlit OAuth integration and environment settings. Do not expose these secrets in your codebase or version control.

📄 For details on Salesforce OAuth endpoints, see: 👉 Salesforce OAuth Endpoints Documentation

## Step 2: Integrate OAuth with Streamlit Using streamlit-oauth

### Install and Configure the streamlit-oauth Package

To incorporate OAuth 2.0 authentication into your Streamlit application, you can use the third-party package streamlit-oauth. This package abstracts the OAuth flow and simplifies integration with popular identity providers like Salesforce. To install it, run the following command in your terminal:

```bash
pip install streamlit-oauth
```

After installation, you'll configure the OAuth2Component to initiate the login process and handle token reception once authentication is successful.

### Handle Access and ID Tokens Securely

Once users log in through Salesforce, the app receives an access token and an ID token. These tokens should never be exposed in the UI or logged publicly. Instead, store them securely in st.session_state, Streamlit's native session management system. This ensures the tokens are tied to the user's session and can be accessed for API calls later in the flow.

### Store Credentials via Streamlit Secrets Management

Storing secrets such as CLIENT_ID and CLIENT_SECRET directly in your source code is a security risk. Streamlit provides a built-in secrets management system that lets you store sensitive information in a .streamlit/secrets.toml file, which should be excluded from version control. Example:

```toml
# .streamlit/secrets.toml
SF_CLIENT_ID = "your_client_id"
SF_CLIENT_SECRET = "your_client_secret"
```

In your code, you can access these values securely:

```python
CLIENT_ID = st.secrets["SF_CLIENT_ID"]
CLIENT_SECRET = st.secrets["SF_CLIENT_SECRET"]
```

## Step 3: Manage Environment Settings with python-dotenv

### Why Environment Variables Matter

Managing environment-specific configuration is essential for maintaining secure and scalable applications.
In addition to storing sensitive credentials with Streamlit's secrets management, keeping dynamic OAuth parameters such as URLs, scopes, and redirect URIs in an environment file (e.g., .env) keeps code clean and configuration flexible. This is particularly useful if you plan to deploy across multiple environments (development, staging, production) with different settings.

### Store OAuth Endpoints in .env Files

To manage environment settings, use the python-dotenv package, which loads environment variables from a .env file into your Python application. First, install the library:

```bash
pip install python-dotenv
```

Create a .env file in your project directory with the following format:

```
SF_AUTHORIZE_URL=https://login.salesforce.com/services/oauth2/authorize
SF_TOKEN_URL=https://login.salesforce.com/services/oauth2/token
SF_REVOKE_TOKEN_URL=https://login.salesforce.com/services/oauth2/revoke
SF_REDIRECT_URI=https://your-streamlit-app-url
SF_SCOPE=id openid email profile
```

Then, use the dotenv_values function to load the variables into your script:

```python
from dotenv import dotenv_values

env = dotenv_values(".env")
AUTHORIZE_URL = env["SF_AUTHORIZE_URL"]
TOKEN_URL = env["SF_TOKEN_URL"]
REVOKE_TOKEN_URL = env["SF_REVOKE_TOKEN_URL"]
REDIRECT_URI = env["SF_REDIRECT_URI"]
SCOPE = env["SF_SCOPE"]
```

This approach decouples sensitive and environment-specific data from the codebase, enhancing maintainability and security.

## Step 4: Configure the OAuth Flow in Streamlit

### Define the OAuth2 Component and Redirect Logic

With your environment variables and secrets in place, it's time to configure the OAuth flow in Streamlit using the OAuth2Component from the streamlit-oauth package. This component handles user redirection to the Salesforce login page, token retrieval, and response parsing upon return to your app.

```python
from streamlit_oauth import OAuth2Component

oauth2 = OAuth2Component(
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
    authorize_url=AUTHORIZE_URL,
    token_url=TOKEN_URL,
    redirect_uri=REDIRECT_URI
)

# create a button to start the OAuth2 flow
result = oauth2.authorize_button(
    name="Log in",
    icon="https://www.salesforce.com/etc/designs/sfdc-www/en_us/favicon.ico",
    redirect_uri=REDIRECT_URI,
    scope=SCOPE,
    use_container_width=False
)
```

This button initiates the OAuth2 flow and handles redirection transparently. Once the user logs in successfully, Salesforce redirects them back to the app with a valid token.

### Handle Session State for Tokens and User Data

After authentication, the returned tokens are stored in st.session_state to maintain a secure, per-user context.
Here's how to decode the token and extract user identity details:

```python
import base64
import json

if result:
    # decode the id_token to get the user's identity details
    id_token = result["token"]["id_token"]
    access_token = result["token"]["access_token"]
    # verifying the signature is an optional extra security step
    payload = id_token.split(".")[1]
    # add padding to the payload if needed
    payload += "=" * (-len(payload) % 4)
    # JWT segments are base64url-encoded, so use the urlsafe decoder
    payload = json.loads(base64.urlsafe_b64decode(payload))
    email = payload["email"]
    username = payload["name"]
    # store the token and its parts in session state
    st.session_state["SF_token"] = result["token"]
    st.session_state["SF_user"] = username
    st.session_state["SF_auth"] = email
    st.session_state["SF_access_token"] = access_token
    st.session_state["SF_id_token"] = id_token
    st.rerun()
else:
    st.write(f"Congrats **{st.session_state.SF_user}**, you are logged in now!")
    if st.button("Log out"):
        cleartoken()  # helper (defined elsewhere in the app) that clears the stored tokens
        st.rerun()
```

This mechanism ensures that the authenticated user context is preserved across interactions and that sensitive tokens remain protected within the session. The username is displayed in the UI after a successful login. 😀

## Step 5: Create and Expose a SnapLogic Triggered Task

### Build Backend Logic with AgentCreator Snaps

With user authentication handled on the frontend, the next step is to build the backend business logic using SnapLogic AgentCreator. This toolkit lets you design AI-powered pipelines that integrate with data sources, perform intelligent processing, and return contextual responses. You can use pre-built Snaps (SnapLogic connectors) for Salesforce, OpenAI, and other services to assemble your Sales Agent pipeline.

### Generate the Trigger URL for API Access

Once your pipeline is tested and functional, expose it as an API using a Triggered Task:

1. In SnapLogic Designer, open your Sales Agent pipeline.
2. Click "Create Task" and choose "Triggered Task".
3. Provide a meaningful name and set runtime parameters if needed.
4. After saving, note the generated Trigger URL; this acts as the backend endpoint to which the Streamlit app will send requests.

This URL is the bridge between your authenticated frontend and the secure AI logic on SnapLogic's platform. Before connecting it to Streamlit, however, you'll need to protect it using SnapLogic API Management, which we cover in the next section.

## Step 6: Secure the API with the SnapLogic API Manager

### Introduction to API Policies: Authentication and Authorization

To prevent unauthorized access to your backend, you must secure the Triggered Task endpoint using SnapLogic API Management. SnapLogic enables policy-based security, allowing you to enforce authentication and authorization using Salesforce-issued tokens. Two primary policies will be applied: Callout Authenticator and Authorize By Role.

The new Policy Editor of SnapLogic APIM 3.0

### Add the Callout Authenticator Policy

This policy validates the access token received from Salesforce. Since Salesforce tokens are opaque (not self-contained like JWTs), the Callout Authenticator policy sends the token to Salesforce's introspection endpoint for validation. If the token is active, Salesforce returns the user's metadata (email, scope, client ID, etc.).
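Conceptually, the policy performs a call similar to the sketch below. This is an illustration only; the introspection endpoint and parameters follow Salesforce's token introspection documentation, and CLIENT_ID, CLIENT_SECRET, and access_token are the values introduced earlier:

```python
import requests

# Minimal sketch of what the Callout Authenticator policy does conceptually:
# POST the opaque token to Salesforce's introspection endpoint along with
# the connected app's client credentials.
resp = requests.post(
    "https://login.salesforce.com/services/oauth2/introspect",
    data={
        "token": access_token,
        "token_type_hint": "access_token",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    timeout=10,
)
introspection = resp.json()
if not introspection.get("active"):
    raise PermissionError("Token is expired or invalid")
```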
Example of a valid token introspection response:

```json
{
  "active": true,
  "scope": "id refresh_token openid",
  "client_id": "3MVG9C...",
  "username": "mpentzek@snaplogic.com",
  "sub": "https://login.salesforce.com/id/...",
  "token_type": "access_token",
  "exp": 1743708730,
  "iat": 1743701530,
  "nbf": 1743701530
}
```

If the token is invalid or expired, the response will simply show:

```json
{
  "active": false
}
```

Below you can see the configuration of the Callout Authenticator policy. It extracts the domain from the username (email) returned by the introspection endpoint after successful token validation, for use in the Authorize By Role policy.

### Add the AuthorizeByRole Policy

Once the token is validated, the Authorize By Role policy inspects the username (email) returned by Salesforce. You can configure this policy to allow access only to users from a trusted domain (e.g., @snaplogic.com), ensuring that external users cannot exploit your API. For example, you might configure the policy to check for the presence of "snaplogic" in the domain portion of the email. This adds a second layer of security after token verification and supports internal-only access models.

## Step 7: Connect the Streamlit Frontend to the Secured API

### Pass the Access Token in the HTTP Authorization Header

Once the user has successfully logged in and the access token is stored in st.session_state, you can use this token to securely communicate with your SnapLogic Triggered Task endpoint. The access token must be included in the HTTP request's Authorization header using the Bearer token scheme.

```python
headers = {
    'Authorization': f'Bearer {st.session_state["SF_access_token"]}'
}
```

This ensures that the SnapLogic API Manager can validate the request and apply both authentication and authorization policies before executing the backend logic.

### Display API Responses in the Streamlit UI

To make the interaction seamless, you can capture the user's input, send it to the secured API, and render the response directly in the Streamlit app. Here's an example of how this interaction might look:

```python
import requests
import streamlit as st

prompt = st.text_input("Ask the Sales Agent something:")

if st.button("Submit"):
    with st.spinner("Working..."):
        data = {"prompt": prompt}
        headers = {
            'Authorization': f'Bearer {st.session_state["SF_access_token"]}'
        }
        response = requests.post(
            url="https://your-trigger-url-from-snaplogic",
            data=data,
            headers=headers,
            timeout=10,
            verify=False  # only disable certificate verification in development
        )
        if response.status_code == 200:
            st.success("Response received:")
            st.write(response.text)
        else:
            st.error(f"Error: {response.status_code}")
```

This fully connects the frontend to the secured backend, enabling secure, real-time interactions with your generative AI agent.

## Common Pitfalls and Troubleshooting

### Handling Expired or Invalid Tokens

One of the most common issues in OAuth-secured applications is dealing with expired or invalid tokens. Since Salesforce access tokens have a limited lifespan, users who stay inactive for a period may find their sessions invalidated. To address this:

- Always check the token's validity before making API calls.
- Gracefully handle 401 Unauthorized responses by prompting the user to log in again.
- Implement a token refresh mechanism if your application supports long-lived sessions (this requires refresh token configuration in Salesforce).

By proactively managing the token lifecycle, you prevent disruptions to the user experience and keep API communications secure.
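As a hedged illustration, the API call from Step 7 could be wrapped so that a 401 response clears the stored tokens and prompts a fresh login. TRIGGER_URL is a placeholder for your Triggered Task endpoint, and cleartoken() is the same session-clearing helper assumed earlier:

```python
# Minimal sketch: treat a 401 from the secured API as an expired session.
response = requests.post(url=TRIGGER_URL, data=data, headers=headers, timeout=10)

if response.status_code == 401:
    # The token was rejected by the Callout Authenticator policy.
    cleartoken()  # assumed helper that removes the SF_* keys from session state
    st.warning("Your session has expired. Please log in again.")
    st.rerun()
```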
### Debugging OAuth Redirection Errors

OAuth redirection misconfigurations can block the authentication flow. Here are common issues and their solutions:

- Incorrect Callback URL: Ensure that the SF_REDIRECT_URI in your .env file matches exactly what's defined in the Salesforce Connected App settings.
- Missing Scopes: If the token does not contain expected identity fields (like email), verify that all required scopes (openid, email, profile) are included in both the app config and the OAuth request.
- Domain Restrictions: If access is denied even after a successful login, confirm that the user's email domain matches the policy set in the SnapLogic API Manager.

Logging the returned error messages and using browser developer tools can help you pinpoint the issue during the redirection and callback stages.

## Best Practices for Secure AI Application Deployment

### Rotate Secrets Regularly

To reduce the risk of secret leakage and potential exploitation, rotate sensitive credentials, such as CLIENT_ID and CLIENT_SECRET, on a regular basis. Even though Streamlit's secrets management securely stores these values, periodic rotation ensures resilience against accidental exposure, insider threats, or repository misconfigurations. To streamline this, set calendar reminders or use automated DevSecOps pipelines that replace secrets and update environment files or secret stores accordingly.

### Monitor API Logs and Auth Failures

Security doesn't stop at implementation. Ongoing monitoring is critical for identifying potential misuse or intrusion attempts. SnapLogic's API Management interface provides detailed metrics that can help you:

- Track API usage per user or IP address.
- Identify repeated authorization failures or token introspection errors.
- Spot anomalous patterns such as unexpected call volumes or malformed requests.

## Extending the Architecture

### Supporting Other OAuth Providers (Google, Okta, Entra ID)

While this tutorial focuses on Salesforce as the OAuth 2.0 Identity Provider, the same security architecture can be extended to other popular providers like Google, Okta, and Microsoft Entra ID (formerly Azure AD). These providers are fully OAuth-compliant and typically offer similar endpoints for authorization, token exchange, and user introspection. To switch providers, update the following in your .env file:

- SF_AUTHORIZE_URL
- SF_TOKEN_URL
- SF_SCOPE (as per the provider's documentation)

Also, make sure your app is registered in the respective provider's developer portal and configured with the correct redirect URI and scopes.

### Adding Role-Based Access Controls

For larger deployments, simple domain-based filtering may not be sufficient. You can extend the authorization logic with role-based access controls (RBAC), as illustrated in the sketch below. This can be achieved by:

- Including custom roles in the OAuth token payload (e.g., via custom claims).
- Parsing these roles in SnapLogic's AuthorizeByRole policy.
- Restricting access to specific APIs or features based on user roles (e.g., admin, analyst, viewer).

RBAC allows you to build multi-tiered applications with differentiated permissions while maintaining strong security governance.
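As a minimal sketch of the frontend half of this idea, assume the IDP has been configured to add a custom "roles" claim to the ID token payload (the claim name and setup vary by provider; the server-side check would live in the AuthorizeByRole policy):

```python
# "payload" is the decoded ID token payload from Step 4.
# The "roles" claim is an assumption - configure it in your IDP.
roles = payload.get("roles", [])

if "admin" in roles:
    st.sidebar.success("Admin tools enabled")
elif "analyst" in roles:
    st.sidebar.info("Analyst view")
else:
    st.error("You do not have access to this application.")
    st.stop()  # halt rendering for unauthorized users
```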
## Conclusion

### Final Thoughts on Secure AI App Deployment

Securing your generative AI applications is no longer optional, especially when they're built for enterprise use cases involving sensitive data, customer interactions, and decision automation. This tutorial demonstrated a complete security pattern using SnapLogic AgentCreator and Streamlit, authenticated via Salesforce OAuth 2.0 and protected through SnapLogic API Management. By following this step-by-step approach, you ensure that only verified users can access your app and that backend APIs are shielded by layered authentication and role-based authorization policies. The same architecture can easily be extended to other providers or scaled across multiple AI workflows within your organization.

### Resources for Further Learning

- SnapLogic Resources and Use Cases
- Salesforce Developer Docs
- Streamlit Documentation
- OAuth 2.0 Official Specification

With a secure foundation in place, you're now empowered to build and scale powerful, enterprise-grade AI applications with confidence.

# SnapGPT - Security and Data Handling Protocols
Authors: Aaron Kesler, Jump Thanawut, Scott Monteith

## Security and Data Handling Protocols for SnapGPT

SnapLogic acknowledges and respects the data concerns of our customers. The purpose of this document is to present our data handling and global data protection standards for SnapGPT.

## Overview & SnapLogic's Approach to AI / LLM

SnapLogic utilizes high-quality enterprise Large Language Models (LLMs), selecting the most appropriate one for each specific task. Current support includes Azure OpenAI GPT, Anthropic Claude on Amazon Bedrock, and Google Vertex PaLM.

## Product & Data

### Product Features & Scope

SnapGPT offers a range of features, each designed to enhance user experience and productivity in various aspects of pipeline and SQL query generation:

- Input Prompts: Allows customers to interact directly with the LLM by providing input prompts. These prompts are the primary method through which users specify their requirements or ask questions of the LLM.
- Describe Pipeline: Enables users to obtain a comprehensive description of an existing pipeline, helping them understand and document the pipeline's structure and functionality.
- Analyze Pipeline: Ingests the entire pipeline configuration and analyzes it to make suggestions for optimization and improvement, helping users enhance the efficiency and effectiveness of their pipelines.
- Mapper Configuration: Facilitates the configuration of the Mapper Snap by generating expressions, simplifying the process of mapping input to output.
- Pipeline Generation: Lets users create prototype pipelines from simple input prompts, streamlining the pipeline creation process and making it more accessible and less time-consuming.
- SQL Generation without Schema: Tailored for situations where schema information is not available or cannot be shared; generates SQL queries based solely on the customer's prompt, offering flexibility and convenience.
- SQL Generation with Schema (coming February 2024): Generates SQL queries by taking the schema of the input database into account; particularly useful for creating contextually accurate and efficient SQL queries.

### Data Usage & Opt-Out Options

At SnapLogic, we recognize the importance of data security and user privacy in the rapidly evolving generative AI space. SnapGPT has been designed with these principles at its core, ensuring that customers can leverage the power of AI and machine learning while maintaining control over their data. Our approach prioritizes transparency, gives users the ability to opt out of data sharing, and aligns with industry best practices for data handling. This commitment reflects our dedication to not only providing advanced AI solutions but also ensuring that these solutions meet the highest standards of privacy and data protection.

### Data Usage in SnapGPT

SnapGPT is designed to handle customer data with the utmost care and precision, ensuring that data usage is aligned with the functionality of each feature:

- Customer Input and Interaction: Customer inputs, such as prompts or pipeline configurations, are key to the functionality of SnapGPT. This data is used solely to process specific requests and generate responses or suggestions relevant to the user's query. No data is retained for model training purposes.
- Feature-Specific Data Handling: Each feature/skill of SnapGPT, like pipeline analysis or SQL generation, uses customer data differently.
See the table below for details on each skill.

| Skill Name | Description of the Skill | Data Transferred to LLM |
| --- | --- | --- |
| Input Prompts | Direct input prompts from customers are transferred to the LLM and tracked by SnapLogic analytics. | Prompt details only; these are not stored or used for training by the LLM. |
| Describe & Analyze Pipeline | Allows customers to describe a pipeline, with the entire pipeline configuration relayed to the LLM. | Entire pipeline configuration, excluding account credential information. |
| Mapper Configuration | Sends input schema information within the prompt to the LLM for the "Mapper configuration" feature. | Input schema information, without account credential information. |
| Pipeline Generation | Uses input prompts to create pipeline prototypes by transmitting them to the LLM. | Input prompts only; not stored or used for training by the LLM. |
| SQL Generation w/out Schema | Generates SQL queries based only on the customer's prompt in situations where schema information cannot be shared. | Only the customer's prompt; no schema information is used. |
| SQL Generation w/ Schema (Feb 2024) | Generates accurate SQL queries by considering the schema of the input database. | Schema of the input database, excluding any account credentials, enhancing query accuracy. |

### Future Adaptations

In the near future, we intend to offer customers opt-out options. Choosing to exclude environment-specific data from SnapGPT prompts can impact the quality of SnapGPT's responses, as it will lack additional context. As of the current version, using SnapGPT sends the data from the features listed above to the LLMs. We recommend that customers who are not comfortable with the described data transfers wait for the opt-out option to become available.

### Impact of Opting Out

Choosing to opt out of data sharing may impact the functionality and effectiveness of SnapGPT. For example, opting out of schema retrieval in SQL Generation may lead to less precise query outputs. Users are advised to consider these impacts when setting their data sharing preferences.

## Data Processing

### Architecture

### Data Flow

### Data Retention & Residency

SnapLogic is committed to ensuring the secure handling and appropriate residency of customer data. Our data retention policies are designed to respect customer privacy while providing the necessary functionality of SnapGPT.

Data Retention:

- No Retention for Model Training: SnapGPT is designed to prioritize user privacy. No customer data processed by SnapGPT is retained for the purpose of model training, ensuring that user data is not used in any way to train or refine the underlying AI models.
- Storing Usage Data for Adoption Tracking: While we do not retain data for model training, SnapLogic stores usage data related to SnapGPT in Heap Analytics, strictly for the purpose of tracking product adoption and usage patterns. This usage data helps us understand how customers interact with SnapGPT, enabling us to continuously improve the product and tailor it to user needs.

Data Residency:

- Location-Based Data Storage: Our control planes in the United States and the EMEA region adhere to the specific data residency policies of those locations. We ensure compliance with regional data protection and privacy laws, offering customers the assurance that their data is managed in accordance with local regulations.
### Controls – Admin, Groups, Users

SnapLogic provides robust control mechanisms for administrators, while ensuring that group- and user-level controls align with organizational policies:

- Admin Controls: Administrators have granular control over the use of SnapGPT within their organization. They can determine what data is shared with the LLM and can opt out of data sharing to meet specific data retention and sharing policies. Additionally, admins can control user access to various features and skills, ensuring alignment with organizational needs and security policies.
- Group Controls: Currently, groups do not have specific controls over SnapGPT. Group-level policies are managed by administrators to ensure consistency and security across the organization.
- User Controls: Users can access and utilize the features and skills of SnapGPT to which they are entitled. User entitlements are managed by administrators, ensuring that each user has access to the necessary tools for their role while maintaining data security and compliance.

## Guidelines for Secure and Compliant Use of SnapGPT

At SnapLogic, we understand the critical importance of data security and compliance in today's digital landscape. As such, we are dedicated to providing our customers with the tools and knowledge necessary to use SnapGPT in a way that aligns with their internal information security (InfoSec) and privacy policies. This section offers guidelines to help ensure that your interaction with SnapGPT is both secure and compliant with your organizational standards.

- Customer Data Control: Customers are encouraged to actively manage and control the data they share with SnapGPT. By understanding and utilizing the available admin and user controls, customers can ensure that their use of SnapGPT aligns with their internal InfoSec and privacy policies.
- Best Practices for Data Sharing: We recommend that customers review and follow best practices for data sharing, especially when working with sensitive or confidential information. This includes using anonymization or pseudonymization techniques where appropriate, and sharing only the data in prompts and pipelines that is necessary for the task at hand.
- Integrating with Internal Policies: Customers should integrate their use of SnapGPT with their existing InfoSec and privacy frameworks, ensuring that data handling through SnapGPT remains consistent with the organization's overall data protection strategy.
- Regular Review and Adjustment: Customers are advised to regularly review their data sharing settings and practices with SnapGPT, adjusting them as necessary to remain aligned with evolving InfoSec and privacy requirements.
- Training and Awareness: We also suggest that customers provide regular training and awareness programs for their users about the responsible and secure use of AI tools like SnapGPT, emphasizing the importance of data privacy and protection.

## Compliance

For detailed information on SnapLogic's commitment to compliance with various regulatory standards and data security measures, please visit our comprehensive overview at SnapLogic Security & Compliance (https://www.snaplogic.com/security-standards). This resource provides an in-depth look at how we adhere to global data protection regulations, manage data security, and ensure the highest standards of compliance across all our products, including SnapGPT.
For specific compliance inquiries or more information on how we handle compliance in relation to SnapGPT, please contact the SnapLogic Compliance Team at Security@snaplogic.com. For further details or inquiries regarding SnapGPT or any other SnapLogic AI services, please contact our SnapLogic AI Services Team (ai-services@snaplogic.com). For more information on SnapLogic Security and Compliance, see https://www.snaplogic.com/security-standards.

# Project Structures and Team Rights Guide
## Overview

This document outlines the recommended organizational structure for Project Spaces, Projects, and team rights within your SnapLogic org.

Authors: SnapLogic Enterprise Architecture team

## Integration Storage Hierarchy

In the SnapLogic platform, integrations (pipelines) are managed within the following hierarchy:

- Organization - Multiple customer organizations, configured based on the development environment (e.g., DEV, QA, STAGING, UAT, and PROD)
- Project Space - A workspace where a team in a business unit or business group collaborates and implements integrations in one central location
- Project - Groups pipelines with the integration, aggregation, and reporting required for a given type of function
- Pipeline - A specific implementation of an integration

For example, the path "/MyDevOrg/SLEntArch/SamplePipelines/WorkdayToSnowflake" breaks down into the following components:

- Organization - MyDevOrg
- Project Space - SLEntArch
- Project - SamplePipelines
- Pipeline - WorkdayToSnowflake

## Recommended Hierarchy Naming Conventions

Clarity is one of the most important factors when naming project spaces, projects, and pipelines. Each level of the hierarchy must make sense to all developers and administrators so that integrations can be found quickly and easily. If you use Triggered Tasks, ensure that no characters are used that would violate HTTP URI naming conventions. You may also wish to avoid characters that require URL encoding, such as spaces and commas.

Below is an example naming convention for your project spaces, projects, and pipelines.

Project spaces are named based on business unit or project team:

- Sales_and_Marketing
- EnterpriseBI
- ProfessionalServices

Projects are named based on the business target endpoint:

- Sales_and_Marketing/MarketingForecast
- EnterpriseBI/SalesDashboardReporting
- ProfessionalServices/CommunityExamples

Integrations (pipelines) are named based on business function:

- Sales_and_Marketing/MarketingForecast/Fall_Marketing_New_Customers_To_SalesForce
- EnterpriseBI/SalesDashboardReporting/Supplier_Invoices_To_Workday
- ProfessionalServices/CommunityExamples/Community_27986_Example_JSON_MultiArray_Join

Keep the shallow hierarchy (project space/project) in mind when considering your naming scheme. In most orgs, it is acceptable to assign a project space to each business unit and allow that business unit to create projects within their project space based on integration function or target. However, if you expect a very large number of pipelines to be created by a single business unit, you might want to consider allowing multiple project spaces for that business unit.

## Shared Folders

### Root Shared

A special project named "shared" is added to each SnapLogic Organization (org). Using the org name in the above example, this would be /MyDevOrg/shared. This is commonly referred to as the "root shared" folder. This folder always exists and is automatically assigned Full Access (read, write, execute) for all members of the org's "admins" group and Read-Execute access for all other users. As a best practice, the root shared folder should only contain objects (accounts, files, pipelines, and tasks) that all SnapLogic users in your org should have access to use. Some examples may include:

- SMTP account used for the Email Sender Snap
- Read-only database account used to access common, public tables
- Shared expression libraries that contain global static variables or user-defined functions for common string/date manipulation (see the sketch below)
- Shared pipelines such as error handlers or globally re-usable code
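As a hedged illustration, a shared expression library is a file containing an object of constants and functions written in SnapLogic's expression language; the file name utils.expr and its contents below are hypothetical. Once the file is added under a pipeline's Expression Libraries property, its members are referenced through the lib namespace, e.g. lib.utils.toTitleCase($first_name):

```javascript
// utils.expr - hypothetical shared expression library
{
    // global static variable, usable as lib.utils.DATE_FORMAT
    DATE_FORMAT: "yyyy-MM-dd",

    // user-defined function for common string manipulation
    toTitleCase: s => s.substring(0, 1).toUpperCase() + s.substring(1).toLowerCase()
}
```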
### Project Space Shared

Another special project named "shared" is added to each project space in the org. Using the example path above, this would be /MyDevOrg/SLEntArch/shared. This folder always exists under each project space and inherits the permissions assigned to the project space. As a best practice, the project space shared folder should only contain objects (accounts, files, pipelines, and tasks) that all SnapLogic users with access to the project space should have access to use. Some examples may include:

- Database accounts used to access common tables within your business unit
- Shared expression libraries that contain static variables and user-defined functions common to your business unit
- Shared/reusable pipelines common to your business unit

## User Groups

We recommend that you create the following groups in all of your SnapLogic orgs:

- Operators - contains the users that may need to manually execute a pipeline but do not require Full Access to the projects
- Migrators - contains the users that perform object migrations in your orgs but do not need to execute pipelines

You should also create developer groups specific to each project space and/or project within your org. Using the example project spaces and projects listed in the Naming Conventions section of this document, you might add the following groups.

Groups specific to a Project Space:

- Sales_and_Marketing_Developers
- EnterpriseBI_Developers
- ProfessionalServices_Developers

Groups specific to a Project:

- MarketingForecast_Developers
- SalesDashboardReporting_Developers
- CommunityExamples_Developers

You may choose to let developers see only the objects within the project they are working in, or you could allow read-only access to all projects within their project space to enable some cross-project design sharing.

Typically, the developer groups have Full Access in your development org for the projects they are working in, with Read-Execute access to the project space "shared" folder and Read-Execute access to the root "shared" folder. Developer groups also have Read-Only access in all non-development orgs for the same project space "shared" folder and projects that they can access in your development org.

If you have a larger SnapLogic development community in your organization, you may wish to distribute the administration of projects and create admin groups for each project space. These groups are assigned ownership of the project space, which allows them to create new projects and maintain all permissions within the project space.

## Default Users

We recommend that the following service accounts be added to your org(s):

- snaplogic_admin@<yourdomain> - should own the root SnapLogic shared folder and all/most SnapLogic project spaces in your org(s); add this user to the "admins" group.
- snaplogic_service@<yourdomain> - should own all of your SnapLogic tasks and have the permissions required to execute tasks for all projects. Read-Execute Access is the minimum; Full Access is required if any files are written back to the SLDB of the project during processing. Add this user to the "Operators" group.
Note that during migration of tasks to your non-development org(s), you should either use the snaplogic_service@<yourdomain> user to perform the migration, or use the Update Asset Owner API to change the owner of the task after migration. Tasks are owned by the user that creates them, so if a user in the Migrators group performs the migration, that user will be assigned as the owner and may not have permission to successfully execute the task in the target org(s).

## Hierarchy Permissions

Recommended access to the root "shared" project:

- admin@snaplogic.com - Owner
- "admins" group - Full Access
- "members" group - Read/Execute Access
- "Operators" group - Read/Execute Access
- "Migrators" group - Full Access
- "Support" group - Read/Execute Access

You may wish to limit Execute Access to only certain teams. If so, change the "members" group to Read-Only Access and grant Read/Execute Access to your desired team groups. If you perform migrations only within specific day/time windows, you can add and remove users from the Migrators group with a scheduled task that calls the Groups API to replace all members of the group, either removing all users (closing the migration window) or restoring them (opening the migration window); a sketch of this pattern appears at the end of this guide.

Recommended access to the project space "shared" project:

- admin@snaplogic.com - Owner
- "admins" group - Full Access
- "members" group - Read-Only Access (optional)
- "Operators" group - Read/Execute Access
- "Migrators" group - Full Access
- "<Project>_Admins" group(s) - Full Access in development
- "<Project>_Developers" group(s) - Read/Execute Access in development

You may choose to grant Read-Only access to your <Project>_Admins and <Project>_Developers groups in non-development environments, depending on your support team structure.

Recommended access to the Projects:

- admin@snaplogic.com - Owner
- "admins" group - Full Access
- "members" group - Read-Only Access (optional)
- "Operators" group - Read/Execute Access
- "Migrators" group - Full Access
- "<Project>_Admins" group(s) - Full Access (only in development)
- "<Project>_Developers" group(s) - Full Access (only in development)

You may choose to grant Read-Only access to your <Project>_Admins and <Project>_Developers groups in non-development environments, depending on your support team structure.
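As a rough illustration of the migration-window pattern mentioned under Hierarchy Permissions, a scheduled job could swap the Migrators group membership via the SnapLogic public Groups API. This is a minimal sketch only: the endpoint path, payload shape, and authentication shown here are assumptions, so verify them against the SnapLogic API documentation before use:

```python
import requests

# Hypothetical values - replace with your pod URL, org, group, and credentials.
BASE_URL = "https://elastic.snaplogic.com/api/1/rest/public"
ORG = "MyDevOrg"
GROUP = "Migrators"
AUTH = ("snaplogic_admin@yourdomain.com", "service-account-password")

def set_migrators(members: list[str]) -> None:
    """Replace the Migrators group membership (assumed endpoint and payload)."""
    resp = requests.put(
        f"{BASE_URL}/groups/{ORG}/{GROUP}",
        json={"members": members},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()

# Close the migration window: empty the group.
set_migrators([])

# Open the migration window: restore the approved migrators.
set_migrators(["alice@yourdomain.com", "bob@yourdomain.com"])
```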