Recent Content
Way to lock down in Prod org to "Monitor" only access?
Hello, is there a way to structure access so that when a person logs into the Prod org space, they only see the Monitor tab or view, and the "Designer" and "Manager" areas are hidden? We purely want someone to be able to see how integrations are running, nothing more. Any ideas appreciated. Mike
Common Mistakes Beginners Make in SnapLogic (and How to Avoid Them)
SnapLogic is one of the most powerful Integration Platform as a Service (iPaaS) tools — designed to connect systems, transform data, and automate workflows without heavy coding. But for beginners, it’s easy to get caught up in its simplicity and make mistakes that lead to inefficient, unstable, or unmaintainable pipelines. In this post, we’ll explore the most common mistakes beginners make in SnapLogic, why they happen, and how you can avoid them with best practices.

1. Not Using the Mapper Snap Effectively
❌ The mistake: Beginners often either overuse Mapper Snaps (adding too many unnecessarily) or skip them altogether by hardcoding values inside other Snaps.
💡 Why it’s a problem: This leads to messy pipelines, inconsistent logic, and difficulties during debugging or updates.
✅ How to fix it:
Use a single Mapper Snap per logical transformation.
Name it meaningfully — e.g., Map_Customer_To_Salesforce.
Keep transformation logic and business rules in the Mapper, not inside REST or DB Snaps.
Add inline comments in expressions using // comment.
🖼 Pro tip: Think of your Mapper as the translator between systems — clean, well-organized mapping makes your entire pipeline more readable.

2. Ignoring Error Views
❌ The mistake: Leaving error views disconnected or disabled.
💡 Why it’s a problem: When a Snap fails, you lose that failed record forever — with no log or visibility.
✅ How to fix it:
Always enable error views on critical Snaps (especially REST, Mapper, or File operations).
Route error outputs to a File Writer or Pipeline Execute Snap for centralized error handling.
Capture details like error.reason, error.entity, and error.stacktrace.
🖼 Pro tip: Create a reusable “Error Logging” sub-pipeline for consistent handling across projects.

3. Skipping Input Validation
❌ The mistake: Assuming that incoming data (from JSON, CSV, or API) is always correct.
💡 Why it’s a problem: Invalid or missing fields can cause API rejections, DB errors, or wrong transformations.
✅ How to fix it:
Use a Router Snap or Filter Snap to validate key fields.
Example expression for email validation: $email != null && $email.match(/^[^@]+@[^@]+\.[^@]+$/)
Route invalid data to a dedicated error or “review” path.
🖼 Pro tip: Centralize validation logic in a sub-pipeline for reusability across integrations. (A short standalone sketch of this validation rule appears at the end of this post.)

4. Hardcoding Values Instead of Using Pipeline Parameters
❌ The mistake: Typing static values like URLs, credentials, or file paths directly inside Snaps.
💡 Why it’s a problem: When moving from Dev → Test → Prod, every Snap needs manual editing — risky and time-consuming.
✅ How to fix it:
Define Pipeline Parameters (e.g., baseURL, authToken, filePath).
Reference them in Snaps as _baseURL or _filePath.
Use Project-level Parameters for environment configurations.
🖼 Pro tip: Maintain a single “Config Pipeline” or JSON file for all environment parameters.

5. Not Previewing Data Frequently
❌ The mistake: Running the entire pipeline without previewing data in between.
💡 Why it’s a problem: You won’t know where data transformations failed or what caused malformed output.
✅ How to fix it:
Use Snap Preview after each Snap during development.
Check input/output JSON to verify structure.
Use the “Validate Pipeline” button before full runs.
🖼 Pro tip: Keep sample input data handy — it saves time during design and debugging.

6. Overcomplicating Pipelines
❌ The mistake: Trying to do everything in a single, lengthy pipeline.
💡 Why it’s a problem: Hard to maintain, slow to execute, and painful to debug.
✅ How to fix it:
Break large flows into smaller modular pipelines.
Use Pipeline Execute Snaps to connect them logically.
Follow a naming pattern, e.g., 01_FetchData, 02_Transform, 03_LoadToTarget.
🖼 Pro tip: Treat each pipeline as one clear business function.

7. Not Documenting Pipelines
❌ The mistake: No descriptions, no comments, and cryptic Snap names like “Mapper1”.
💡 Why it’s a problem: Six months later, even you won’t remember what “Mapper1” does.
✅ How to fix it:
Add clear pipeline descriptions under Properties → Documentation.
Use descriptive Snap names: Validate_Email, Transform_Employee_Data.
Comment complex expressions in the Mapper.
🖼 Pro tip: Good documentation is as important as the pipeline itself.

8. Storing Credentials Inside Snaps
❌ The mistake: Manually entering passwords, API keys, or tokens inside REST Snaps.
💡 Why it’s a problem: It’s a major security risk and difficult to rotate credentials later.
✅ How to fix it:
Use Accounts in SnapLogic Manager for authentication.
Link your Snap to an Account instead of embedding credentials.
Manage API tokens and passwords centrally through the Account configuration.
🖼 Pro tip: Never commit sensitive data to version control — use SnapLogic’s vault.

9. Ignoring Schema Validation Between Snaps
❌ The mistake: Assuming the output structure of one Snap always matches the next Snap’s input.
💡 Why it’s a problem: You’ll encounter “Field not found” or missing data during runtime.
✅ How to fix it:
Always check Input/Output schemas in the Mapper.
Use explicit field mapping instead of relying on auto-propagation.
Add “safe navigation” ($?.field) for optional fields.
🖼 Pro tip: Use a JSON Formatter Snap before external APIs to verify structure.

10. Forgetting to Clean Up Temporary Data
❌ The mistake: Leaving test logs, CSVs, or temporary JSON files in the project folder.
💡 Why it’s a problem: Consumes storage and creates confusion during maintenance.
✅ How to fix it:
Store temporary files in a /temp directory.
Add a File Delete Snap at the end of your pipeline.
Schedule cleanup jobs weekly for old files.

🎯 Final Thoughts
SnapLogic makes integration development fast and intuitive — but good practices turn you from a beginner into a professional. Focus on:
Clean, modular pipeline design
Strong error handling
Proper documentation and parameterization
By avoiding these common mistakes, you’ll build SnapLogic pipelines that are scalable, secure, and easy to maintain — ready for enterprise-grade automation.
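To make the validation and error-logging advice in mistakes 2 and 3 concrete, here is a minimal Python sketch (plain Python, outside SnapLogic) of the same email rule and of the kind of error record a reusable error-handling sub-pipeline might capture. The field names error_reason and error_entity are illustrative only.

import re

# Same rule as the Router expression above.
EMAIL_RE = re.compile(r'^[^@]+@[^@]+\.[^@]+$')

def route(record):
    """Return ('valid', record) or ('review', error_record), mirroring a Router Snap."""
    email = record.get('email')
    if email and EMAIL_RE.match(email):
        return 'valid', record
    # Build a small error document, similar to what a reusable
    # "Error Logging" sub-pipeline might write (illustrative field names).
    return 'review', {'error_reason': 'Invalid or missing email', 'error_entity': record}

for rec in [{'email': 'a@b.com'}, {'email': 'not-an-email'}]:
    print(route(rec))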
trace API and proxy calls
Hi! I'm new to SnapLogic and I would like to trace all API and proxy calls in Datadog. Is there a way in SnapLogic to access a list that contains all API and proxy calls that have been made, along with their response codes? Additionally, in order to create a dashboard in Datadog, where can I find the necessary information in SnapLogic to retrieve this data? Thank you for the help!
Real-Time Flow Control Event Analytics and Predictive Maintenance using SnapLogic and OPC UA

Overview
In industrial plants, flow control valves play a critical role in maintaining safe and efficient operations by regulating the flow of steam, gas, or liquids through turbines and auxiliary systems. However, even minor valve performance issues — such as delayed actuation, partial closure, or sensor faults — can trigger cascading operational problems across the system. Without a real-time event detection and analytics mechanism, these issues often remain unnoticed until they cause visible production impact or downtime.
Engineers traditionally rely on manual monitoring or post-failure analysis, which leads to:
Delayed Detection of Flow Anomalies
Lack of Root-Cause Visibility
Unplanned Downtime and Maintenance Costs
No Predictive Maintenance Capability
To overcome these challenges, we implemented a real-time data integration pipeline using SnapLogic and OPC UA, enabling event-driven monitoring, automated data capture, and intelligent analytics in Snowflake.

Use Case Summary
When a flow control valve in the turbine system is triggered, it generates an event in the OPC UA server. The SnapLogic pipeline, built with the OPC UA Subscribe Snap, detects this event instantly. Once the event is received, the SnapLogic pipeline reads live data from sensor OPC UA nodes, including:
Pressure
Temperature
Flow Rate
Controller Status
All these values are combined with the OPC UA server timestamp into a single unified record. The record is then stored in Snowflake for historical tracking, trend analysis, and real-time analytics dashboards.

Workflow
SnapLogic Pipeline Workflow:
Parent Pipeline: Subscribe to Flow Control Data events
Child Pipeline: Capture sensor node details and aggregate data

Subscribe to Flow Control Data events using OPC UA Subscribe
The parent SnapLogic pipeline, “Headless Ultra”, is designed to run continuously (indefinitely) as a background monitoring service. Its primary role is to capture all real-time flow control data events from the OPC UA server using the OPC UA Subscribe Snap.
Pipeline Type (Headless Ultra): The pipeline is deployed in Ultra Task mode without any frontend or manual trigger. It runs as a persistent listener to capture OPC UA events in real time.
Execution Duration (Indefinite): The pipeline never stops unless explicitly terminated. This ensures continuous data monitoring and streaming.
Snap Used (OPC UA Subscribe Snap): This Snap subscribes to specific OPC UA nodes (like flow control valve, pressure, temperature, and controller status) and receives event updates from the OPC UA server.
Publish Interval (1000 milliseconds / 1 second): Defines how often the OPC UA server sends updates to the subscriber. Every 1 second, the Snap receives the latest data from the subscribed nodes.
Monitoring Mode (Reporting): In “Reporting” mode, the Subscribe Snap reports value changes or events whenever an update occurs, ensuring that only meaningful data changes are captured — not redundant values.
Queue Size (2): The number of unprocessed event messages that can be queued at once. A queue size of 2 ensures lightweight buffering while maintaining near real-time responsiveness. If new events arrive faster than processing speed, older ones are replaced, preventing data backlog.
Capture Real-Time node values from Sensor Nodes and load data to Snowflake Warehouse
The child pipeline collects diagnostic context by fetching live data from related OPC UA sensor nodes and consolidates them into a single analytical record before loading it into Snowflake for historical analysis and predictive maintenance.
Select sensor nodes using the OPC UA Node Selector Snap
Read live data from sensor nodes using the OPC UA Read Snap
Group all sensor node values into a single record using the Group By N Snap
Combine the flow control value details and sensor node values into a single record (a small sketch of this merged record appears at the end of this post)
Output:
Pressure Node (Sensor.Pressure): The current pressure is read and stored with the event. Helps determine if overpressure caused the flow control valve to trigger.
Temperature Node (Sensor.Temperature): Captured as part of the same record. High temperature may indicate overheating, cavitation, or pump issues.
Flow Rate Node (Sensor.FlowRate): Logged when the trigger occurs. Confirms whether the actual flow exceeded or dropped below the threshold.
Controller/Motor Status (Controller.Status): Captured to show control logic state (ON, OFF, FAULT). Correlates actuator or PLC state with the trigger condition.
Server Timestamp: Captured from the OPC UA event source. Ensures temporal accuracy for event reconstruction and trend analysis.
Write data into the Snowflake warehouse

📊 Analytics Dashboard Overview
The analytics dashboard is powered by data ingested and processed through the SnapLogic OPC UA pipelines and stored in Snowflake. It provides real-time visibility, trend analytics, and predictive insights to help operations and reliability teams monitor and optimize industrial equipment performance.
Event Stream: Displays real-time flow control valve events captured by the SnapLogic OPC UA Subscribe Snap in the parent Headless Ultra pipeline.
Sensor Trends: Visualizes time-series data from multiple OPC UA sensor nodes related to flow control — including pressure, temperature and flow rate.
Predictive Insights: Highlights machine learning–driven predictions and risk scores derived from historical flow control event data, such as predicted downtime and anomaly scores.
System Health Summary: Displays the overall health and operational status of the monitored flow system.

Conclusion
This use case demonstrates how SnapLogic’s intelligent integration capabilities, combined with OPC UA data streams and Snowflake’s analytical power, can transform raw industrial sensor data into actionable insights. By automating the ingestion, transformation, and visualization of real-time flow control events, temperature, and pressure data, the solution enables engineers to detect anomalies early, predict potential equipment issues, and make informed operational decisions. The analytics dashboard provides a consolidated view through Event Streams, Sensor Trends, Predictive Insights, and System Health Summaries, helping organizations move from reactive monitoring to proactive and predictive maintenance. In essence, this architecture proves how data integration and AI-driven analytics can empower industrial enterprises to enhance reliability, optimize performance, and reduce downtime — paving the way toward truly smart, data-driven operations.
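To illustrate the consolidation step described above, here is a minimal Python sketch (plain Python, outside SnapLogic) of the unified record the child pipeline assembles from the flow control event, the sensor readings, and the server timestamp. The node names, field names, and values are illustrative, not the actual OPC UA node IDs.

def build_flow_control_record(event, sensor_readings):
    """Merge a flow-control valve event with its diagnostic context."""
    record = {
        'event_node': event['node'],
        'event_value': event['value'],
        'server_timestamp': event['server_timestamp'],
    }
    # Equivalent of the Group By N + combine step: flatten the sensor values
    # (Sensor.Pressure, Sensor.Temperature, Sensor.FlowRate, Controller.Status)
    # into the same record that is written to Snowflake.
    for reading in sensor_readings:
        record[reading['node']] = reading['value']
    return record

event = {'node': 'FlowControl.Valve01', 'value': 'TRIGGERED',
         'server_timestamp': '2024-05-01T10:15:00Z'}
sensors = [
    {'node': 'Sensor.Pressure', 'value': 42.7},
    {'node': 'Sensor.Temperature', 'value': 118.3},
    {'node': 'Sensor.FlowRate', 'value': 950.0},
    {'node': 'Controller.Status', 'value': 'ON'},
]
print(build_flow_control_record(event, sensors))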
Pagination Logic Fails After Migrating from REST GET to HTTP Client Snap
Hello everyone,
Three years ago, I developed a pipeline to extract data from ServiceNow and load it into Snowflake. As part of this, I implemented pagination logic to handle multi-page responses by checking for the presence of a "next" page and looping through until all data was retrieved. This job has been running successfully in production without any issues.
Recently, we were advised by the Infrastructure team to replace the REST GET Snap with the HTTP Client Snap, as the former is being deprecated and is no longer recommended. I updated the pipeline accordingly, but the pagination logic that worked with REST GET is not functioning as expected with the HTTP Client Snap.
The logic I used is as follows:
Pagination → Has Next: isNaN($headers['link'].match(/",<([^;"]*)>;rel="next",/))
Override URI → Next URL: $headers['link'].match(/\",<([^;\"]*)>;rel=\"next\",/) ? $headers['link'].match(/\",<([^;\"]*)>;rel=\"next\",/)[1].replace(_servicenow_cloud_base_url, _servicenow_b2b_base_url) : null
However, with the HTTP Client Snap, I’m encountering the following error:
Error Message: Check the spelling of the property or, if the property is optional, use the get() method (e.g., $headers.get('link'))
Reason: 'link' was not found while evaluating the sub-expression '$headers['link']'
This exact logic works perfectly in the existing job using REST GET, with no changes to the properties. It seems the HTTP Client Snap is not recognizing or parsing the link header in the same way.
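For reference, the Link-header logic that the two expressions above implement can be sketched in plain Python (outside SnapLogic); the header value and base URLs below are made up for illustration.

import re

# Made-up example of a ServiceNow-style Link header with a rel="next" entry.
link_header = ('<https://cloud.example.com/api/now/table/incident?sysparm_offset=0>;rel="first",'
               '<https://cloud.example.com/api/now/table/incident?sysparm_offset=100>;rel="next"')

def next_url(link_value):
    """Return the URL tagged rel="next", or None when the last page is reached."""
    match = re.search(r'<([^>]*)>;rel="next"', link_value or '')
    return match.group(1) if match else None

url = next_url(link_header)
if url:
    # Equivalent of the Override URI step: swap the base URL before the next request.
    url = url.replace('https://cloud.example.com', 'https://b2b.example.com')
print(url)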
API Key Authenticator token validation
Hello everyone,
I have a query with respect to the API Key Authenticator configured for an API created by me. After setting the API key to '1234', I expect to receive the API response when auth_token=1234 is passed in the request parameter. However, I notice that I receive a valid API response for any token value except 1234. The expected functionality is the opposite of what is being observed. My expectation is to receive a response only when auth_token is present AND equals the value set in the API key of the policy (e.g., 1234). How do I achieve this in SnapLogic? The corresponding screenshots have been attached. Thanks.
Pipeline Execute Pool size
I am trying to use the Pipeline Execute Snap to launch child pipelines. Currently the throughput of this is relatively slow. To improve it, I would like to increase the pool size to allow multiple children to run at the same time; however, regardless of changing this setting, it seems the run continues to only process one document at a time. For example, please see the screenshots below. In the screenshots above I would expect 10 documents to be in the input section, with 10 instances of the DELME pipeline running, however only 1 is running at a time?
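For what it's worth, the behavior being expected here (a pool of N child executions fed concurrently from the input documents) can be sketched in plain Python with a thread pool; this is only an analogy for the Pipeline Execute pool size, not SnapLogic code.

import time
from concurrent.futures import ThreadPoolExecutor

def run_child(doc):
    """Stand-in for one execution of the DELME child pipeline."""
    time.sleep(1)
    return doc

documents = [{'id': i} for i in range(10)]

# With a pool size of 10, all ten child executions should run at the same time,
# so the whole batch takes roughly one second instead of ten.
start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(run_child, documents))
print(len(results), 'documents in', round(time.time() - start, 1), 'seconds')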
Concat values of a field based on value of another field
Hi All,
I'm working on a requirement where I need to concatenate the values of field A with '|' based on the value of field B of a JSON array. Below is an example.
API response: there are multiple records like the one below.
{
  "impactTimestamp": "2025-10-23T10:47:47ZZ",
  "la": "1.2",
  "lg": "3.4",
  "IR": "12",
  "IA": [
    { "number": "78", "type": "C" },
    { "number": "89", "type": "C" },
    { "number": "123", "type": "A" },
    { "number": "456", "type": "A" }
  ]
}
Desired output:
{
  "impactTimestamp": "2025-10-23T10:47:47ZZ",
  "la": "1.2",
  "lg": "3.4",
  "IR": "12",
  "impactedAs": "123|456",
  "impactedCs": "78|89"
}
I tried multiple ways with the filter, map and join functions on the API response but it doesn't work. Group By and Aggregate Snaps are asked to be avoided as much as possible, so I've been trying with functions. Please suggest anything, either functions or Snaps. Thanks!
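For reference, the grouping and concatenation being described can be sketched in plain Python; the output field names follow the desired output above.

def summarize(record):
    """Concatenate IA numbers with '|' per type, alongside the other fields."""
    by_type = {}
    for item in record.get('IA', []):
        by_type.setdefault(item['type'], []).append(item['number'])
    out = {k: v for k, v in record.items() if k != 'IA'}
    out['impactedAs'] = '|'.join(by_type.get('A', []))
    out['impactedCs'] = '|'.join(by_type.get('C', []))
    return out

record = {
    'impactTimestamp': '2025-10-23T10:47:47ZZ',
    'la': '1.2', 'lg': '3.4', 'IR': '12',
    'IA': [
        {'number': '78', 'type': 'C'}, {'number': '89', 'type': 'C'},
        {'number': '123', 'type': 'A'}, {'number': '456', 'type': 'A'},
    ],
}
print(summarize(record))
# {'impactTimestamp': ..., 'impactedAs': '123|456', 'impactedCs': '78|89'}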
Introducing the Agent Snap

Flashback: What’s an Agent?
“Agents are autonomous LLM-based processes that can interact with external systems to carry out a high-level goal.”
Agents are LLM-based systems that can perform actions based on the user’s request and the scenario, determined by the LLM of the Agent system. A minimal agent consists of 1. an LLM component, and 2. tools that the Agent can use. Think of the Agent as a robot with a brain (LLM) + robotic arms (Tools). Based on the request, the brain can “decide” to do something, and then the arm will carry out the action decided by the brain. Then, depending on the scenario, the brain can determine if more action is needed, or end if the request is complete.

The process of an agent
We previously introduced the “Agent Driver and Agent Worker“ pipeline pattern, which clearly defines every single operation that would occur in an Agent process. The process of the pattern can be described as follows.
Agent Driver
1. Define the instruction of the Agent. (System prompt)
2. Format the user’s request into a conversation. (Messages array)
3. Define tools to make available to the Agent.
4. Send all information above into a “loop“, and run the Agent Worker until the process is complete.
Agent Worker
1. Call the LLM with the instructions, conversation, and tool definitions.
2. LLM decides…
   If it is able to complete the request, end the conversation and go to step 7.
   If tool calls are required, go to step 3.
3. Call the tools.
4. Format the tool result.
5. Add the tool results to the conversation.
6. Go back to step 1.
7. Request is complete, the agent responds.

The rationale
From the Agent Driver and the Agent Worker pipeline, here’s an observation:
The driver pipeline handles all of the “configuration“ of the Agent.
The worker pipeline handles the “operation“ of the Agent.
Now, imagine this: what if we could package the “Agent operation” into a single module, so that we can create Agents just by providing instructions and tools? Wouldn’t this be great?
This is exactly what the Agent Snap does. The Agent Snap combines the PipeLoop Snap and the Agent Worker pipeline, so all of the agent operations happen in a single Snap.

Information and prerequisites
Now, before dreaming about having your own company of agents, since building agents is now so simple, there is some information to know about and conditions to be met before this can happen.
1. Agent Snaps are model-specific
The Agent Snap is a combination of the “loop” and the Agent Worker; therefore, the LLM provider to be used for an Agent Snap is also fixed. This design allows users to stick to their favorite combination of customized model parameters.
2. Function (tool) definitions must be linked to a pipeline to carry out the execution
Previously, in an Agent Worker pipeline, the Tool Calling Snap was connected to Pipeline Execute Snaps to carry out tool calls, but this is no longer the case with the Agent Snap. Instead, a function definition should include the path of the pipeline that carries out the execution if this tool is called. This way, we can ensure every tool call can be performed successfully. If the user does not provide a tool pipeline with the function definition, the Agent Snap will not proceed.
3. Expected Input and Output of a tool pipeline
When a tool call is requested by an LLM, the LLM will provide the name of the tool to call and the corresponding parameters. The Agent Snap will unwrap the parameters and send them directly to the tool pipeline. Here’s an example: I have a tool get_weather, which takes city: string as a parameter.
The LLM decides to call the tool get_weather with the following payload:
{
  "name": "get_weather",
  "parameters": {
    "city": "New York City"
  },
  "sl_tool_metadata": { ... }
}
For this to work, my tool pipeline must be able to accept the input document {"city": "New York City"}. (A minimal stand-in for such a tool pipeline is sketched at the end of this post.) On a side note, the sl_tool_metadata object will also be available to the tool pipeline as the input for APIM and OpenAPI tools.
Now, assume my tool pipeline has successfully retrieved the weather for New York City. It’s time for the Agent Snap to collect the result of this tool call. The Agent Snap will collect everything from the output document of the tool pipeline as the tool call result*, so that the LLM can determine the next steps properly.
*Note: there is one exception: if the output of a “tool pipeline“ contains the field “messages“ or "contents", it will be treated as the conversational history of the “child agent”, which will be filtered and will not be included.

Build an Agent with the Agent Snap
We’ve understood the idea, we’ve gone through the prerequisites, and it’s time to build an Agent. In this example, we have an Agent with 2 tools: a weather tool and a calendar tool. We first start with a prompt generator to format the user input. Then define the tools the Agent can access.
Let’s look into one of the tool definitions. In this example tool, we can see the name of the tool, the description of the tool, the parameters, and the path of the tool pipeline to carry out this task. This satisfies the requirement of a tool to be used by an Agent Snap.
After we have the tools set, let’s look at the Agent Snap, using the Amazon Bedrock Converse API Agent Snap as an example. The configuration of an Agent Snap is similar to its corresponding Tool Calling Snap, except for some extra fields, such as a button to visualize the agent flow, and a section to configure the operation of the Agent, such as the iteration limit and the number of threads for tool pipeline executions. The Agent Snap handles the whole execution process, and terminates when 1. the request is complete (no more tool calls are required) or 2. an error occurs.
Voila! You have created an agent. After the Agent pipeline completes a round of execution, the user can use the “Visualize Agent Flow“ button in the Agent Snap to see the tools that are called by the LLM.

Tips and Tricks for the Agent Snap
Let’s take a look at the features built into the Agent Snap.
Reuse pipelines
Most agentic tool calls are processes that can be reused. To minimize execution load, we can use the “Reuse tool pipeline“ feature. This feature allows tool pipeline instances to be reused, so that the Agent will not need to spawn a pipeline every time a tool is called. To use this feature, the tool pipeline to be reused must be “Ultra compatible“; otherwise, the pipeline execution would hang, and the Agent Snap would eventually time out.
Tool call monitoring
Agents can be long-running; it’s not rare to have an Agent run multiple iterations. To see what’s happening in the process, the Agent Snap has built-in monitoring during validation. The user will be able to see the iteration index, the tool that is currently being called, and the parameters that are used for the tool call in the pipeline statistics status bar. Selecting the “Monitor tool call“ option includes the parameters in the status update; this is an opt-in feature. If the user does not wish to expose the information to SnapLogic, the user should disable this.
Warnings
Agent configuration is a delicate process; a mistake can potentially lead to errors. The Agent Snap has a bunch of built-in warning capabilities, so the user can be better aware of what could go wrong.
1. Agent process completed before all tool calls completed
In the Agent Snap, there is an Iteration limit setting, which limits the number of iterations the Agent can run. If the user provided a smaller limit, which caused the Agent to stop while the LLM is still awaiting tool calls, this warning would pop up to signal to the user that the execution is incomplete.
2. Tool pipeline path is not defined
A function (tool) definition to be used by the Agent Snap should include a tool pipeline path, so the Agent Snap can link to the actual pipeline that carries out the execution. If the pipeline path is not included in the function definition, this warning will pop up to signal to the user that the Agent will not proceed.
3. Duplicate tool naming
As we add more and more tools to the Agent Snap, it is likely that two tools will share the same name. The Agent Snap has the ability to rename the tools being sent to the LLM, and then still link to the correct pipeline. There will also be a warning available in the pipeline statistics to alert the user about the change in behavior.

Release Timeframes
The Agent Snap is the foundation of the next-generation SnapLogic Agent. We will be releasing 4 Agent Snaps in the November release:
Amazon Bedrock Converse API Agent
OpenAI Chat Completions Agent
Azure OpenAI Chat Completions Agent
Google Gemini API Agent
To better use the Agent Snaps, we will be introducing new capabilities to some of our Function Generators as well. Here is the list of Function Generator Snaps that will be modified soon:
APIM Function Generator Snap
OpenAPI Function Generator Snap
MCP Function Generator Snap
We hope you are as excited as we are about this one.
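To make the tool pipeline contract described earlier concrete, here is a minimal Python sketch of a stand-in for the get_weather tool pipeline. This is plain Python outside SnapLogic, with a hard-coded result; the field names in the returned document are illustrative only.

def get_weather_tool(input_document):
    # The Agent Snap unwraps the LLM's parameters and sends them as the input
    # document, e.g. {"city": "New York City"}.
    city = input_document['city']
    # A real tool pipeline would call a weather service here; the values below
    # are hard-coded purely for illustration.
    return {'city': city, 'forecast': 'Sunny', 'temperature_f': 72}

# Everything in the output document is collected as the tool call result,
# except a "messages" or "contents" field, which would be filtered out.
print(get_weather_tool({'city': 'New York City'}))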
JWT Configuration for SnapLogic Public API
This document details the process of configuring JWT authentication for the SnapLogic Public API using self-generated keys, without the use of any third-party JWT providers. It covers key generation, JWKS creation, and SnapLogic configuration.

1. Key Generation and JWKS Creation
1.1 Set up the command prompt
Open CMD.
Navigate to the OpenSSL bin folder.
1.2 Generate the Private Key
Use the following command to generate a 2048-bit RSA private key in PEM format.
openssl genpkey -algorithm RSA -out jwt_private_key.pem -pkeyopt rsa_keygen_bits:2048
Result: A file named jwt_private_key.pem will be created. This key must be kept secret and secure.
1.3 Convert to PKCS#8 Format
JWT generation requires the private key to be in PKCS#8 format for proper decoding, so convert jwt_private_key.pem into PKCS#8 format.
openssl pkcs8 -topk8 -in jwt_private_key.pem -out jwt_private_key_pkcs8.pem -nocrypt
Result: A new file, jwt_private_key_pkcs8.pem, will be created. Use this key in your application for signing JWTs.
1.4 Extract the Public Key
The public key is required for the JWKS document.
openssl rsa -in jwt_private_key_pkcs8.pem -pubout -out jwt_public_key_pkcs8.pem
Result: A file named jwt_public_key_pkcs8.pem will be created.
1.5 Extract Public Key Components for JWKS
Extract the Modulus and Exponent from the public key. These are the core components of your JWKS.
openssl rsa -pubin -in jwt_public_key_pkcs8.pem -text -noout
The output will look like this:
Public-Key: (2048 bit)
Modulus:
    00:d2:e3:23:2c:15:a6:5b:54:c1:89:f7:5f:41:bf:...
Exponent: 65537 (0x10001)

2. JWKS Creation and JWT Endpoint Configuration
2.1 The steps below explain how to create the JWKS JSON within SnapLogic.
2.1.1 Create a new project space and a project "JWKS", or even an API named "JWKS". (This step is just for access control, so the API policy is applied only for this purpose.)
2.1.2 Create the pipeline CreateJWKS.
2.1.3 Update the Modulus and Exponent values in the Mapper, copied from step 1.5 in the section Key Generation and JWKS Creation.
2.1.4 Select the language as Python and replace the default script in the Script snap with the script below.
# Import the interface required by the Script snap.
from com.snaplogic.scripting.language import ScriptHook
import base64
import hashlib

class TransformScript(ScriptHook):
    def __init__(self, input, output, error, log):
        self.input = input
        self.output = output
        self.error = error
        self.log = log

    # Helper function to convert an integer to a big-endian byte string
    # This is a manual implementation of int.to_bytes() for Python 2.7
    def int_to_bytes(self, n):
        if n == 0:
            return '\x00'
        hex_string = "%x" % n
        if len(hex_string) % 2 == 1:
            hex_string = '0' + hex_string
        return hex_string.decode("hex")

    def execute(self):
        self.log.info("Executing Transform script")
        while self.input.hasNext():
            try:
                inDoc = self.input.next()

                # Modulus conversion logic
                hex_input = inDoc['hex_string_field']
                clean_hex_string = hex_input.replace('\n', '').replace(' ', '').replace(':', '')
                modulus_bytes = clean_hex_string.decode("hex")
                modulus_base64url = base64.urlsafe_b64encode(modulus_bytes).rstrip('=')

                # Exponent conversion logic
                exponent_input_str = inDoc['exponent_field']
                import re
                match = re.search(r'^\d+', exponent_input_str)
                if match:
                    exponent_int = int(match.group(0))
                else:
                    raise ValueError("Could not parse exponent value from string.")
                exponent_bytes = self.int_to_bytes(exponent_int)
                exponent_base64url = base64.urlsafe_b64encode(exponent_bytes).rstrip('=')

                # Dynamic Key ID (kid) generation logic
                # Concatenate the Base64url-encoded modulus and exponent
                jwk_string = modulus_base64url + exponent_base64url
                # Compute the SHA-256 hash
                kid_hash = hashlib.sha256(jwk_string).digest()
                # Base64url encode the hash to create the kid
                kid = base64.urlsafe_b64encode(kid_hash).rstrip('=')

                # Prepare the output document with all values
                outDoc = {
                    'modulus_base64url': modulus_base64url,
                    'exponent_base64url': exponent_base64url,
                    'kid': kid
                }
                self.output.write(inDoc, outDoc)
            except Exception as e:
                errDoc = {
                    'error': str(e)
                }
                self.log.error("Error in python script: " + str(e))
                self.error.write(errDoc)
        self.log.info("Script executed")

    def cleanup(self):
        self.log.info("Cleaning up")

hook = TransformScript(input, output, error, log)

2.1.5 Replace the default value in the JSON Generator with:
{
  "keys": [
    {
      "kty": "RSA",
      "alg": "RS256",
      "kid": $kid,
      "use": "sig",
      "e": $exponent_base64url,
      "n": $modulus_base64url
    }
  ]
}
This will return the JWKS JSON.
2.2 The steps below create the public endpoint for the JWKS JSON. They can be done as a standalone API or as a separate project for this JWKS authentication.
2.2.1 Create the pipeline getJWKS.
2.2.2 Paste the JWKS generated in step 2.1.5 above into the JSON Generator:
{
  "keys": [
    {
      "kty": "RSA",
      "alg": "RS256",
      "kid": "vTfx70NbtVbarHnBetDHNqLXsWVr4Ue5oC32TFNSMlc",
      "use": "sig",
      "e": "AQAB",
      "n": "ANLjIywVpltUwYn3X0G_********_3JmpnSh419wDZC_8-Ts"
    }
  ]
}
2.2.3 Follow the config as shown for the JSON Formatter.
2.2.4 Create a Task named jwks.json, follow the task config as shown, and copy the Ultra Task HTTP Endpoint. Select the Snaplex as Cloud, as the endpoint has to be truly public.
2.2.5 Create an API Policy - Anonymous Authenticator and key in the details as shown.
2.2.6 Create an API Policy - Authorize By Role and key in the details as shown.

3. SnapLogic JWT Configuration
This step links SnapLogic to your JWKS. Configure the Admin Manager:
3.1 In the SnapLogic Admin Manager, navigate to Authentication > JWT.
3.1.1 Issuer ID: Enter a unique identifier for your issuer. This can be a custom string.
3.1.2 JWKS Endpoint: Enter the full HTTPS URL where you have hosted the JWKS JSON file, i.e., the Ultra Task HTTP Endpoint copied in step 2.2.4 of the section JWKS Creation and JWT Endpoint Configuration.
3.2 In the SnapLogic Admin Manager, navigate to Allowlists > CORS allowlist.
3.2.1 Add domain: Key in the domain https://*.snaplogic.com in the Domain text box, click on Add Domain, and click on Save.

4. JWT Generation and Structure
The JWT must be created with a header that references your custom kid and a payload with claims that match SnapLogic's requirements.
4.1 Header:
{
  "alg": "RS256",
  "typ": "JWT",
  "kid": "use the key id generated in step 2.1.5 from the section JWKS Creation and JWT Endpoint Configuration"
}
4.2 Payload:
{
  "iat": {{timestampIAT}},
  "exp": {{timestampEXP}},
  "sub": "youremail@yourcompany.com",
  "aud": "https://elastic.snaplogic.com/api/1/rest/public",
  "iss": "issuer id given in section 3.1.1",
  "org": "Your Snaplogic Org"
}
4.3 Sign the JWT: Use jwt_private_key_pkcs8.pem to sign the token with your application's JWT library. (A hedged Python example using the PyJWT library follows at the end of this document.)
4.4 Postman Pre-Request script to automatically generate epoch timestamps for the iat and exp claims:
let now = new Date().getTime();
let iat = now / 1000;
let futureTime = now + (3600 * 1000);
let exp = futureTime / 1000;
// Set the collection variables
pm.collectionVariables.set("timestampIAT", iat);
pm.collectionVariables.set("timestampEXP", exp);
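As a companion to step 4.3, here is a minimal Python sketch of signing such a token with a JWT library. This assumes the PyJWT and cryptography packages (pip install pyjwt cryptography); the issuer, subject, and org values are placeholders, and the kid is the one from the example JWKS in step 2.2.2.

import time
import jwt  # PyJWT; the 'cryptography' package is needed for RS256

# Load the PKCS#8 private key generated in step 1.3.
with open('jwt_private_key_pkcs8.pem') as key_file:
    private_key = key_file.read()

now = int(time.time())
claims = {
    'iat': now,
    'exp': now + 3600,
    'sub': 'youremail@yourcompany.com',
    'aud': 'https://elastic.snaplogic.com/api/1/rest/public',
    'iss': 'my-issuer-id',           # Issuer ID configured in section 3.1.1
    'org': 'Your Snaplogic Org',
}

# The kid must match the kid published in your JWKS.
token = jwt.encode(claims, private_key, algorithm='RS256',
                   headers={'kid': 'vTfx70NbtVbarHnBetDHNqLXsWVr4Ue5oC32TFNSMlc'})
print(token)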