Recent Content
Way to lock down in Prod org to "Monitor" only access?
Hello, is there a way to structure access so that when a person logs into the Prod org, they only see the Monitor tab or view, with the "Designer" and "Manager" areas hidden? We simply want someone to be able to look at how integrations are running, nothing more. Any ideas appreciated. Mike

trace API and proxy calls
Hi! I'm new to SnapLogic and I would like to trace all API and proxy calls in Datadog. Is there a way in SnapLogic to access a list of all API and proxy calls that have been made, along with their response codes? Additionally, where in SnapLogic can I find the information needed to build a dashboard for this data in Datadog? Thank you for the help!

Real-Time Flow Control Event Analytics and Predictive Maintenance using SnapLogic and OPC UA
Overview

In industrial plants, flow control valves play a critical role in maintaining safe and efficient operations by regulating the flow of steam, gas, or liquids through turbines and auxiliary systems. However, even minor valve performance issues, such as delayed actuation, partial closure, or sensor faults, can trigger cascading operational problems across the system. Without a real-time event detection and analytics mechanism, these issues often remain unnoticed until they cause visible production impact or downtime. Engineers traditionally rely on manual monitoring or post-failure analysis, which leads to:

- Delayed detection of flow anomalies
- Lack of root-cause visibility
- Unplanned downtime and maintenance costs
- No predictive maintenance capability

To overcome these challenges, we implemented a real-time data integration pipeline using SnapLogic and OPC UA, enabling event-driven monitoring, automated data capture, and intelligent analytics in Snowflake.

Use Case Summary

When a flow control valve in the turbine system is triggered, it generates an event in the OPC UA server. The SnapLogic pipeline, built with the OPC UA Subscribe Snap, detects this event instantly. Once the event is received, the SnapLogic pipeline reads live data from sensor OPC UA nodes, including:

- Pressure
- Temperature
- Flow Rate
- Controller Status

All these values are combined with the OPC UA server timestamp into a single unified record. The record is then stored in Snowflake for historical tracking, trend analysis, and real-time analytics dashboards.

SnapLogic Pipeline Workflow:

- Parent pipeline: subscribe to flow control data events
- Child pipeline: capture sensor node details and aggregate the data

Subscribe to Flow Control Data events using OPC UA Subscribe

The parent SnapLogic pipeline, "Headless Ultra", is designed to run continuously (indefinitely) as a background monitoring service. Its primary role is to capture all real-time flow control data events from the OPC UA server using the OPC UA Subscribe Snap. Its key parameters:

- Pipeline Type: Headless Ultra. The pipeline is deployed in Ultra Task mode without any frontend or manual trigger; it runs as a persistent listener to capture OPC UA events in real time.
- Execution Duration: Indefinite. The pipeline never stops unless explicitly terminated, which ensures continuous data monitoring and streaming.
- Snap Used: OPC UA Subscribe Snap. Subscribes to specific OPC UA nodes (such as the flow control valve, pressure, temperature, and controller status) and receives event updates from the OPC UA server.
- Publish Interval: 1000 milliseconds (1 second). Defines how often the OPC UA server sends updates to the subscriber; every second, the Snap receives the latest data from the subscribed nodes.
- Monitoring Mode: Reporting. In "Reporting" mode, the Subscribe Snap reports value changes or events whenever an update occurs, ensuring that only meaningful data changes are captured, not redundant values.
- Queue Size: 2. The number of unprocessed event messages that can be queued at once. A queue size of 2 provides lightweight buffering while maintaining near real-time responsiveness; if new events arrive faster than they can be processed, older ones are replaced, preventing a data backlog.
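For illustration, the unified record that ultimately lands in Snowflake might look like the following. This is a minimal sketch only; the actual key names and values are assumptions and depend on the OPC UA nodes you subscribe to and on your Mapper configuration:

{
  "serverTimestamp": "2025-10-23T10:47:47Z",
  "pressure": 18.6,
  "temperature": 212.4,
  "flowRate": 340.2,
  "controllerStatus": "ON"
}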
Capture Real-Time Node Values from Sensor Nodes and Load the Data into the Snowflake Warehouse

The child pipeline collects diagnostic context by fetching live data from related OPC UA sensor nodes and consolidates it into a single analytical record before loading it into Snowflake for historical analysis and predictive maintenance:

- Select sensor nodes using the OPC UA Node Selector Snap
- Read live data from the sensor nodes using the OPC UA Read Snap
- Group all sensor node values into a single record using the Group By N Snap
- Combine the flow control value details and sensor node values into a single record

Output, per node type:

- Pressure Node (Sensor.Pressure): the current pressure is read and stored with the event. Helps determine whether overpressure caused the flow control valve to trigger.
- Temperature Node (Sensor.Temperature): captured as part of the same record. High temperature may indicate overheating, cavitation, or pump issues.
- Flow Rate Node (Sensor.FlowRate): logged when the trigger occurs. Confirms whether the actual flow exceeded or dropped below the threshold.
- Controller/Motor Status (Controller.Status): captured to show the control logic state (ON, OFF, FAULT). Correlates the actuator or PLC state with the trigger condition.
- Server Timestamp: captured from the OPC UA event source. Ensures temporal accuracy for event reconstruction and trend analysis.

Finally, the record is written into the Snowflake warehouse.

📊 Analytics Dashboard Overview

The analytics dashboard is powered by data ingested and processed through the SnapLogic OPC UA pipelines and stored in Snowflake. It provides real-time visibility, trend analytics, and predictive insights to help operations and reliability teams monitor and optimize industrial equipment performance.

- Event Stream: displays real-time flow control valve events captured by the SnapLogic OPC UA Subscribe Snap in the parent Headless Ultra pipeline.
- Sensor Trends: visualizes time-series data from multiple OPC UA sensor nodes related to flow control, including pressure, temperature, and flow rate.
- Predictive Insights: highlights machine learning-driven predictions and risk scores derived from historical flow control event data, such as predicted downtime and anomaly scores.
- System Health Summary: displays the overall health and operational status of the monitored flow system.

Conclusion

This use case demonstrates how SnapLogic's intelligent integration capabilities, combined with OPC UA data streams and Snowflake's analytical power, can transform raw industrial sensor data into actionable insights. By automating the ingestion, transformation, and visualization of real-time flow control events, temperature, and pressure data, the solution enables engineers to detect anomalies early, predict potential equipment issues, and make informed operational decisions. The analytics dashboard provides a consolidated view through Event Streams, Sensor Trends, Predictive Insights, and System Health Summaries, helping organizations move from reactive monitoring to proactive and predictive maintenance. In essence, this architecture proves how data integration and AI-driven analytics can empower industrial enterprises to enhance reliability, optimize performance, and reduce downtime, paving the way toward truly smart, data-driven operations.
Pagination Logic Fails After Migrating from REST GET to HTTP Client Snap

Hello everyone, Three years ago I developed a pipeline to extract data from ServiceNow and load it into Snowflake. As part of this, I implemented pagination logic to handle multi-page responses by checking for the presence of a "next" page and looping until all data was retrieved. This job has been running successfully in production without any issues. Recently, we were advised by the Infrastructure team to replace the REST GET Snap with the HTTP Client Snap, as the former is being deprecated and is no longer recommended. I updated the pipeline accordingly, but the pagination logic that worked with REST GET is not functioning as expected with the HTTP Client Snap. The logic I used is as follows:

Pagination → Has Next:
isNaN($headers['link'].match(/",<([^;"]*)>;rel="next",/))

Override URI → Next URL:
$headers['link'].match(/\",<([^;\"]*)>;rel=\"next\",/) ? $headers['link'].match(/\",<([^;\"]*)>;rel=\"next\",/)[1].replace(_servicenow_cloud_base_url, _servicenow_b2b_base_url) : null

However, with the HTTP Client Snap, I'm encountering the following error:

Error message: Check the spelling of the property or, if the property is optional, use the get() method (e.g., $headers.get('link'))
Reason: 'link' was not found while evaluating the sub-expression '$headers['link']'

This exact logic works perfectly in the existing job using REST GET, with no changes to the properties. It seems the HTTP Client Snap is not recognizing or parsing the link header in the same way.
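One direction to try, following the suggestion in the error message itself, is the null-safe $headers.get('link') accessor with a guard for a missing header. This is only a sketch, and it assumes the HTTP Client Snap exposes the header under the same lower-cased 'link' name:

Has Next:
$headers.get('link') != null && $headers.get('link').match(/",<([^;"]*)>;rel="next",/) != null

Next URL:
$headers.get('link') != null && $headers.get('link').match(/",<([^;"]*)>;rel="next",/) != null ? $headers.get('link').match(/",<([^;"]*)>;rel="next",/)[1].replace(_servicenow_cloud_base_url, _servicenow_b2b_base_url) : null

Note that the guard also changes behavior on the last page: when no link header (or no rel="next" entry) is present, Has Next evaluates to false instead of raising an error.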
Pipeline Execute Pool size

I am trying to use the Pipeline Execute Snap to launch child pipelines. Currently the throughput is relatively slow. To improve it, I would like to increase the Pool Size so that multiple children can run at the same time; however, regardless of changing this setting, the run continues to process only one document at a time. For example, please see the screenshots below. In the screenshots above I would expect 10 documents in the input section, with 10 instances of the DELME pipeline running, yet only 1 is running at a time.
Concat values of a field based on value of another field

Hi All, I'm working on a requirement where I need to concatenate the values of field A with '|' based on the value of field B in a JSON array. Below is an example. The API response contains multiple records like this:

{
  "impactTimestamp": "2025-10-23T10:47:47ZZ",
  "la": "1.2",
  "lg": "3.4",
  "IR": "12",
  "IA": [
    { "number": "78", "type": "C" },
    { "number": "89", "type": "C" },
    { "number": "123", "type": "A" },
    { "number": "456", "type": "A" }
  ]
}

Desired output:

{
  "impactTimestamp": "2025-10-23T10:47:47ZZ",
  "la": "1.2",
  "lg": "3.4",
  "IR": "12",
  "impactedAs": "123|456",
  "impactedCs": "78|89"
}

I tried multiple ways with the filter, map, and join functions on the API response, but it doesn't work. We've been asked to avoid the Group By and Aggregate Snaps as much as possible, so I've been trying with functions. Please suggest anything, either functions or Snaps. Thanks!
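One possible approach, sketched here rather than verified against the poster's pipeline, is a Mapper with one expression per output field, filtering the IA array by type before joining:

impactedAs: $IA.filter(x => x.type == 'A').map(x => x.number).join('|')
impactedCs: $IA.filter(x => x.type == 'C').map(x => x.number).join('|')

If IA can be missing on some records, guard with ($IA || []) before the filter call. The remaining fields (impactTimestamp, la, lg, IR) can be passed through in the same Mapper.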
JWT Configuration for SnapLogic Public API

This document details the process of configuring JWT authentication for the SnapLogic Public API using self-generated keys, without the use of any third-party JWT providers. It covers key generation, JWKS creation, and SnapLogic configuration.

1. Key Generation and JWKS Creation

1.1 Set up the CMD
Open CMD and change to the OpenSSL bin folder.

1.2 Generate the Private Key
Use the following command to generate a 2048-bit RSA private key in PEM format.

BASH
openssl genpkey -algorithm RSA -out jwt_private_key.pem -pkeyopt rsa_keygen_bits:2048

Result: A file named jwt_private_key.pem will be created. This key must be kept secret and secure.

1.3 Convert to PKCS#8 Format
JWT generation requires the private key to be in PKCS#8 format for proper decoding, so convert jwt_private_key.pem to PKCS#8.

BASH
openssl pkcs8 -topk8 -in jwt_private_key.pem -out jwt_private_key_pkcs8.pem -nocrypt

Result: A new file, jwt_private_key_pkcs8.pem, will be created. Use this key in your application for signing JWTs.

1.4 Extract the Public Key
The public key is required for the JWKS document.

BASH
openssl rsa -in jwt_private_key_pkcs8.pem -pubout -out jwt_public_key_pkcs8.pem

Result: A file named jwt_public_key_pkcs8.pem will be created.

1.5 Extract Public Key Components for JWKS
Extract the Modulus and Exponent from the public key. These are the core components of your JWKS.

BASH
openssl rsa -pubin -in jwt_public_key_pkcs8.pem -text -noout

The output will look like this:

Public-Key: (2048 bit)
Modulus: 00:d2:e3:23:2c:15:a6:5b:54:c1:89:f7:5f:41:bf:...
Exponent: 65537 (0x10001)

2. JWKS Creation and JWT Endpoint Configuration

2.1 The steps below explain how to create the JWKS JSON within SnapLogic.

2.1.1 Create a new project space and a project "JWKS", or even an API named "JWKS". (This step is just for access control, so the API policy is applied only for this purpose.)
2.1.2 Create the pipeline CreateJWKS.
2.1.3 Update the Modulus and Exponent values in the Mapper, copied from the output of step 1.5 above.
2.1.4 Select the language as Python and replace the default script in the Script Snap with:
# Import the interface required by the Script snap.
from com.snaplogic.scripting.language import ScriptHook
import base64
import hashlib
import re

class TransformScript(ScriptHook):
    def __init__(self, input, output, error, log):
        self.input = input
        self.output = output
        self.error = error
        self.log = log

    # Helper function to convert an integer to a big-endian byte string.
    # This is a manual implementation of int.to_bytes() for Python 2.7.
    def int_to_bytes(self, n):
        if n == 0:
            return '\x00'
        hex_string = "%x" % n
        if len(hex_string) % 2 == 1:
            hex_string = '0' + hex_string
        return hex_string.decode("hex")

    def execute(self):
        self.log.info("Executing Transform script")
        while self.input.hasNext():
            try:
                inDoc = self.input.next()

                # Modulus conversion logic
                hex_input = inDoc['hex_string_field']
                clean_hex_string = hex_input.replace('\n', '').replace(' ', '').replace(':', '')
                modulus_bytes = clean_hex_string.decode("hex")
                modulus_base64url = base64.urlsafe_b64encode(modulus_bytes).rstrip('=')

                # Exponent conversion logic
                exponent_input_str = inDoc['exponent_field']
                match = re.search(r'^\d+', exponent_input_str)
                if match:
                    exponent_int = int(match.group(0))
                else:
                    raise ValueError("Could not parse exponent value from string.")
                exponent_bytes = self.int_to_bytes(exponent_int)
                exponent_base64url = base64.urlsafe_b64encode(exponent_bytes).rstrip('=')

                # Dynamic Key ID (kid) generation logic:
                # concatenate the Base64url-encoded modulus and exponent,
                # compute the SHA-256 hash, then Base64url encode the hash.
                jwk_string = modulus_base64url + exponent_base64url
                kid_hash = hashlib.sha256(jwk_string).digest()
                kid = base64.urlsafe_b64encode(kid_hash).rstrip('=')

                # Prepare the output document with all values
                outDoc = {
                    'modulus_base64url': modulus_base64url,
                    'exponent_base64url': exponent_base64url,
                    'kid': kid
                }
                self.output.write(inDoc, outDoc)
            except Exception as e:
                errDoc = {'error': str(e)}
                self.log.error("Error in python script: " + str(e))
                self.error.write(errDoc)
        self.log.info("Script executed")

    def cleanup(self):
        self.log.info("Cleaning up")

hook = TransformScript(input, output, error, log)

2.1.5 Replace the default value in the JSON Generator with:

{
  "keys": [
    {
      "kty": "RSA",
      "alg": "RS256",
      "kid": $kid,
      "use": "sig",
      "e": $exponent_base64url,
      "n": $modulus_base64url
    }
  ]
}

This will return the JWKS JSON.

2.2 The steps below create the public endpoint for the JWKS JSON. They can be done as a standalone API or as a separate project dedicated to JWKS authentication.

2.2.1 Create the pipeline getJWKS.
2.2.2 Paste the JWKS generated in step 2.1.5 into the JSON Generator:

{
  "keys": [
    {
      "kty": "RSA",
      "alg": "RS256",
      "kid": "vTfx70NbtVbarHnBetDHNqLXsWVr4Ue5oC32TFNSMlc",
      "use": "sig",
      "e": "AQAB",
      "n": "ANLjIywVpltUwYn3X0G_********_3JmpnSh419wDZC_8-Ts"
    }
  ]
}

2.2.3 Follow the configuration shown for the JSON Formatter.
2.2.4 Create a Task named jwks.json, follow the task configuration as shown, and copy the Ultra Task HTTP endpoint. Select the Snaplex as Cloud, as the endpoint has to be truly public.
2.2.5 Create an Anonymous Authenticator API policy and key in the details as shown.
2.2.6 Create an Authorize By Role API policy and key in the details as shown.

3. SnapLogic JWT Configuration

This step links SnapLogic to your JWKS in the Admin Manager.

3.1 In the SnapLogic Admin Manager, navigate to Authentication > JWT.
3.1.1 Issuer ID: Enter a unique identifier for your issuer. This can be a custom string.
3.1.2 JWKS Endpoint: Enter the full HTTPS URL where the JWKS JSON file is hosted, i.e. the Ultra Task HTTP endpoint copied in step 2.2.4.

3.2 In the SnapLogic Admin Manager, navigate to Allowlists > CORS allowlist.
3.2.1 Add domain: Enter https://*.snaplogic.com in the Domain text box, click Add Domain, and click Save.

4. JWT Generation and Structure

The JWT must be created with a header that references your custom kid and a payload whose claims match SnapLogic's requirements.

4.1 Header (JSON):

{
  "alg": "RS256",
  "typ": "JWT",
  "kid": "use the key ID generated in step 2.1.5"
}

4.2 Payload (JSON):

{
  "iat": {{timestampIAT}},
  "exp": {{timestampEXP}},
  "sub": "youremail@yourcompany.com",
  "aud": "https://elastic.snaplogic.com/api/1/rest/public",
  "iss": "issuer ID given in step 3.1.1",
  "org": "Your SnapLogic Org"
}

4.3 Sign the JWT: Use jwt_private_key_pkcs8.pem to sign the token with your application's JWT library (a sketch follows step 4.4 below).

4.4 Postman pre-request script to automatically generate epoch timestamps for the iat and exp claims:

let now = new Date().getTime();
let iat = now / 1000;
let futureTime = now + (3600 * 1000);
let exp = futureTime / 1000;

// Set the collection variables
pm.collectionVariables.set("timestampIAT", iat);
pm.collectionVariables.set("timestampEXP", exp);
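To illustrate step 4.3 outside Postman, here is a minimal signing sketch using the PyJWT library (pip install pyjwt cryptography). The kid, issuer ID, org, and email values are placeholders you must replace with your own:

import time
import jwt  # PyJWT

# Load the PKCS#8 private key generated in step 1.3
with open("jwt_private_key_pkcs8.pem") as f:
    private_key = f.read()

now = int(time.time())
payload = {
    "iat": now,
    "exp": now + 3600,  # token valid for one hour
    "sub": "youremail@yourcompany.com",
    "aud": "https://elastic.snaplogic.com/api/1/rest/public",
    "iss": "your-issuer-id",  # Issuer ID from step 3.1.1
    "org": "Your SnapLogic Org",
}

# "kid" must match the key ID published in your JWKS (step 2.1.5)
token = jwt.encode(payload, private_key, algorithm="RS256",
                   headers={"kid": "your-kid-value"})
print(token)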
Unable to preview records

Hello! I'm new to SnapLogic and have a strange issue. I built a simple pipeline that reads data from a CSV file I uploaded to SnapLogic. The pipeline validates fine. When I click the preview button between Snaps, I see the preview and it shows the headers from my CSV, but I don't see the records themselves, regardless of whether I switch to Table, Raw, or JSON. What's strange is that my colleagues CAN see the records when they click the preview button. Would appreciate any guidance. Thank you!

Welcome to the Gold Star to the Winner Challenge - Halloween 2025 Edition! ⭐️
From time to time I send out to my team at SnapLogic fun pipeline-building challenges that Expression Enthusiasts may enjoy solving. We have decided to open it up to the broader SnapLogic Community. The Gold Star to the Winner Challenge, Halloween 2025 Edition, is the spookiest challenge of the year. Your job will be to cast a powerful spell in the form of an expression to tame some monstrously messy data. As usual, this challenge is from a real-world use case. It centers on schemalessly transforming 'somewhat' structured data into a perfectly structured, "OCD-approved" format.

The Details:

In the following dataset, there are two keys: "Name" and "Path". The Trick is to craft an expression that can magically break apart the Path string into separate keys, numbering them sequentially (pathelement_1, pathelement_2, etc.). For example, a path with 3 elements in it would transform into 3 JSON keys.

Input JSON:
{ "Path": "my drive/matt/customers" }

Output JSON:
{
  "pathelement_1": "my drive",
  "pathelement_2": "matt",
  "pathelement_3": "customers"
}

Here's the raw input to be put in a JSON Generator:

[{"Name":"Fred","Path":"spooky/graveyard/tombstones/fog/cackles/witches/brewing/potions/spells/hauntedhouse.jpg"},{"Name":"Wilma","Path":"kids/yard ornaments/ghosts/goblins/monsters/jack o lantern/leaves/cocoa/chill/candysacks/excitement/pumpkins/tricks/treats.png"},{"Name":"Pebbles","Path":"shadows/bats/moonlight/screams/night/costumes/party.mp4"},{"Name":"Dino","Path":"creepy/cornfields/scarecrows/spiders/webs.gif"}]

And the expected output:

[{"pathelement_1":"spooky","pathelement_2":"graveyard","pathelement_3":"tombstones","pathelement_4":"fog","pathelement_5":"cackles","pathelement_6":"witches","pathelement_7":"brewing","pathelement_8":"potions","pathelement_9":"spells","pathelement_10":"hauntedhouse.jpg","Name":"Fred"},{"pathelement_1":"kids","pathelement_2":"yard ornaments","pathelement_3":"ghosts","pathelement_4":"goblins","pathelement_5":"monsters","pathelement_6":"jack o lantern","pathelement_7":"leaves","pathelement_8":"cocoa","pathelement_9":"chill","pathelement_10":"candysacks","pathelement_11":"excitement","pathelement_12":"pumpkins","pathelement_13":"tricks","pathelement_14":"treats.png","Name":"Wilma"},{"pathelement_1":"shadows","pathelement_2":"bats","pathelement_3":"moonlight","pathelement_4":"screams","pathelement_5":"night","pathelement_6":"costumes","pathelement_7":"party.mp4","Name":"Pebbles"},{"pathelement_1":"creepy","pathelement_2":"cornfields","pathelement_3":"scarecrows","pathelement_4":"spiders","pathelement_5":"webs.gif","Name":"Dino"}]

Solution approaches: There are many ways to skin this cat, highlighting the flexibility of the SnapLogic platform. My solution contains a single expression in a Mapper. Others (the purists) have solved this by configuring and connecting many Transform Snaps. All solutions are good as long as they match the expected output above and are done in a completely schemaless way.

The Prize: The winner will receive recognition in the form of SnapLogic swag (👕🥤🍾 🎁...).

The rules:
- To keep the playing field level, send solutions directly to me via email (msager@snaplogic.com) and attach your pipeline .slp file. (i.e., we don't want to give out solutions on this post for others to see.)
- The contest ends on 10/31/2025.

Good luck to all! I look forward to seeing your solutions.

SnapLogic Product Release - October 2025
This week we released the SnapLogic October 2025 Release. This update brings key enhancements across AI, automation, and observability, plus an important change to how you monitor your pipelines.

Dashboard Retirement & New Monitor Training

As of this release, the legacy Dashboard has been officially retired. All execution, health, and observability functions are now available in Monitor, which is your primary and default app going forward. To help people get started, a new on-demand training video is available that walks through the Monitor layout, key features, and customization options. Just follow the link here to watch: Monitor Overview & Training Video. You can already read more about SnapLogic Monitor by checking out the Monitor community post.

October 2025 Release Highlights

AgentCreator
- Introduced LLM-agnostic Function Generator Snaps for building reusable agent functions across OpenAI, Azure OpenAI, Google GenAI, and Amazon Bedrock.
- Added GPT-5 and Claude 4 model support.
- Prompt Composer now features adjustable panels for a more flexible workspace.

AutoSync
- Added Google Service Account JSON authentication for BigQuery endpoints.
- Enhanced error visibility and reliability for integrations that previously stalled in a "running" state.

Snaps
- PostgreSQL Multi Execute Snap for multiple write operations in one transaction.
- In-memory OAuth2 Accounts improve HTTP Client Snap performance.
- AWS Signature V4 and Redshift Snaps enhanced for IAM and cross-account access.

Monitor
- The new destination for monitoring and metrics.
- New usability improvements: search within filters, scrollable execution tables, and status icons that now include descriptive text for clarity.

Platform and Snaplex Update

We recommend upgrading to Snaplex version main-36396 - 4.42.2.0 to benefit from performance fixes and enhanced reliability in Triggered Tasks and Snaplex node logging.

For full release details, visit the October 2025 Release Notes.