Recent Content
API Key Authenticator token validation
Hello everyone, I have a question about the API Key Authenticator policy configured for an API I created. After setting the API key to '1234', I expect to receive the API response when auth_token=1234 is passed as a request parameter. However, I notice that I receive a valid API response for any token value except 1234; the observed behavior is the opposite of what I expect. My expectation is to receive a response only when auth_token is present AND equals the value set in the API key of the policy (e.g., 1234). How do I achieve this in SnapLogic? The corresponding screenshots are attached. Thanks.
Pipeline Execute Pool size
I am trying to use the Pipeline Execute Snap to launch child pipelines. Currently the throughput is relatively slow; to improve it, I would like to increase the Pool Size so that multiple child pipelines can run at the same time. However, regardless of how I change this setting, the run continues to process only one document at a time. For example, please see the screenshots below: I would expect 10 documents in the input section, with 10 instances of the DELME pipeline running, yet only one is running at a time.
trace API and proxy calls
Hi! I'm new to SnapLogic and I would like to trace all API and proxy calls in Datadog. Is there a way in SnapLogic to access a list of all API and proxy calls that have been made, along with their response codes? Additionally, where in SnapLogic can I find the information needed to retrieve this data and build a Datadog dashboard from it? Thank you for the help!
Pagination Logic Fails After Migrating from REST GET to HTTP Client Snap
Hello everyone, Three years ago I developed a pipeline to extract data from ServiceNow and load it into Snowflake. As part of this, I implemented pagination logic to handle multi-page responses by checking for the presence of a "next" page and looping until all data was retrieved. This job has been running successfully in production without any issues. Recently, the Infrastructure team advised us to replace the REST GET Snap with the HTTP Client Snap, as the former is deprecated and no longer recommended. I updated the pipeline accordingly, but the pagination logic that worked with REST GET is not functioning as expected with the HTTP Client Snap. The logic I used is as follows:

Pagination → Has Next:
isNaN($headers['link'].match(/",<([^;"]*)>;rel="next",/))

Override URI → Next URL:
$headers['link'].match(/\",<([^;\"]*)>;rel=\"next\",/) ? $headers['link'].match(/\",<([^;\"]*)>;rel=\"next\",/)[1].replace(_servicenow_cloud_base_url, _servicenow_b2b_base_url) : null

However, with the HTTP Client Snap, I'm encountering the following error:

Error Message: Check the spelling of the property or, if the property is optional, use the get() method (e.g., $headers.get('link')) Reason: 'link' was not found while evaluating the sub-expression '$headers['link']'

This exact logic works perfectly in the existing job using REST GET, with no changes to the properties. It seems the HTTP Client Snap is not recognizing or parsing the link header in the same way.
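A note on the error: the subscript form $headers['link'] fails hard when the key is absent, and the HTTP Client Snap may expose the header under a different key (or casing) than REST GET did. Below is a minimal sketch of the same logic rewritten with the null-safe get() accessor that the error message itself suggests; it assumes the header, when present, still carries the same rel="next" format.

Has Next:

```js
$headers.get('link') != null && $headers.get('link').match(/",<([^;"]*)>;rel="next",/) != null
```

Next URL:

```js
$headers.get('link') != null && $headers.get('link').match(/",<([^;"]*)>;rel="next",/) != null
    ? $headers.get('link').match(/",<([^;"]*)>;rel="next",/)[1].replace(_servicenow_cloud_base_url, _servicenow_b2b_base_url)
    : null
```

If the snap delivers the header as 'Link' rather than 'link', the lookup key would need to change accordingly; validating the snap and inspecting the raw $headers map in a preview is a quick way to confirm the actual key.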
Concat values of a field based on value of another field
Hi All, I'm working on a requirement where I need to concatenate the values of field A with '|' based on the value of field B within a JSON array. Below is an example. API response (there are multiple records like this):

```json
{
  "impactTimestamp": "2025-10-23T10:47:47ZZ",
  "la": "1.2",
  "lg": "3.4",
  "IR": "12",
  "IA": [
    { "number": "78", "type": "C" },
    { "number": "89", "type": "C" },
    { "number": "123", "type": "A" },
    { "number": "456", "type": "A" }
  ]
}
```

Desired output:

```json
{
  "impactTimestamp": "2025-10-23T10:47:47ZZ",
  "la": "1.2",
  "lg": "3.4",
  "IR": "12",
  "impactedAs": "123|456",
  "impactedCs": "78|89"
}
```

I tried multiple combinations of the filter, map, and join functions on the API response, but it doesn't work. We have been asked to avoid the Group By and Aggregate Snaps as much as possible, so I have been trying with expression functions. Please suggest an approach, either with functions or with Snaps. Thanks!
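For reference, the filter/map/join route the poster describes can be expressed in a Mapper with two target paths (impactedAs and impactedCs), with the remaining fields passed through. A minimal sketch, assuming the input document matches the example above:

```js
$IA.filter(x => x.type == "A").map(x => x.number).join("|")
```

mapped to impactedAs, and

```js
$IA.filter(x => x.type == "C").map(x => x.number).join("|")
```

mapped to impactedCs. Both expressions yield an empty string when no entries of that type exist, so a ternary guard can be added if the key should be omitted instead.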
Introducing the Agent Snap

Flashback: What's an Agent?
"Agents are autonomous LLM-based processes that can interact with external systems to carry out a high-level goal."
Agents are LLM-based systems that can perform actions based on the user's request and the scenario, as determined by the LLM of the Agent system. A minimal agent consists of 1. an LLM component, and 2. tools that the Agent can use. Think of the Agent as a robot with a brain (the LLM) plus robotic arms (the tools). Based on the request, the brain can "decide" to do something, and the arm then carries out the action. Depending on the scenario, the brain can determine whether more action is needed, or end once the request is complete.

The process of an agent
We previously introduced the "Agent Driver and Agent Worker" pipeline pattern, which clearly defines every operation that occurs in an Agent process. The pattern works as follows:

Agent Driver
1. Define the instruction of the Agent. (System prompt)
2. Format the user's request into a conversation. (Messages array)
3. Define the tools to make available to the Agent.
4. Send all of the information above into a "loop", running the Agent Worker until the process is complete.

Agent Worker
1. Call the LLM with the instructions, conversation, and tool definitions.
2. The LLM decides: if it can complete the request, end the conversation and go to step 7; if tool calls are required, go to step 3.
3. Call the tools.
4. Format the tool results.
5. Add the tool results to the conversation.
6. Back to step 1.
7. The request is complete; the agent responds.

The rationale
From the Agent Driver and Agent Worker pipelines, here's an observation: the driver pipeline handles all of the "configuration" of the Agent, while the worker pipeline handles the "operation" of the Agent. Now, imagine this: what if we could package the "Agent operation" into a single module, so that we can create Agents just by providing instructions and tools? Wouldn't that be great? This is exactly what the Agent Snap does. The Agent Snap combines the PipeLoop Snap and the Agent Worker pipeline, so all of the agent operations happen in a single Snap.

Information and prerequisites
Now, before dreaming about having your own company of agents, since building agents is now so simple, there is some information to know and some conditions to be met.

1. Agent Snaps are model-specific
The Agent Snap is a combination of the "loop" and the Agent Worker; therefore, the LLM provider used by an Agent Snap is also fixed. This design allows users to stick to their favorite combination of customized model parameters.

2. Function (tool) definitions must be linked to a pipeline that carries out the execution
Previously, in an Agent Worker pipeline, the Tool Calling Snap was connected to Pipeline Execute Snaps to carry out tool calls, but this is no longer the case with the Agent Snap. Instead, a function definition should include the path of the pipeline that carries out the execution if the tool is called. This way, we can ensure every tool call can be performed successfully. If the user does not provide a tool pipeline with the function definition, the Agent Snap will not proceed.

3. Expected input and output of a tool pipeline
When a tool call is requested by an LLM, the LLM provides the name of the tool to call and the corresponding parameters. The Agent Snap unwraps the parameters and sends them directly to the tool pipeline. Here's an example: I have a tool get_weather, which takes city: string as a parameter.
The LLM decides to call the tool get_weather with the following payload:

```json
{
  "name": "get_weather",
  "parameters": {
    "city": "New York City"
  },
  "sl_tool_metadata": { ... }
}
```

For this to work, my tool pipeline must be able to accept the input document {"city": "New York City"}. On a side note, the sl_tool_metadata object will also be available to the tool pipeline as part of the input for APIM and OpenAPI tools. Now, assume my tool pipeline has successfully retrieved the weather for New York City; it's time for the Agent Snap to collect the result of this tool call. The Agent Snap collects everything from the output document of the tool pipeline as the tool call result*, so that the LLM can determine the next steps properly. For instance, an output document such as {"city": "New York City", "forecast": "sunny"} would be passed back to the LLM as-is (the field names here are illustrative).

*Note: with one exception. If the output of a "tool pipeline" contains the field "messages" or "contents", it will be treated as the conversational history of a "child agent", which will be filtered out and not included.

Build an Agent with Agent Snap
We've understood the idea and gone through the prerequisites; it's time to build an Agent. In this example, we have an Agent with two tools: a weather tool and a calendar tool. We first start with a prompt generator to format the user input, then define the tools the Agent can access. Let's look at one of the tool definitions. In this example tool, we can see the name of the tool, the description of the tool, the parameters, and the path of the tool pipeline that carries out this task. This satisfies the requirements of a tool to be used by an Agent Snap. After we have the tools set, let's look at the Agent Snap, using the Amazon Bedrock Converse API Agent Snap as an example. The configuration of an Agent Snap is similar to its corresponding Tool Calling Snap, except for some extra fields, such as a button to visualize the agent flow and a section to configure the operation of the Agent, such as the iteration limit and the number of threads for tool pipeline executions. The Agent Snap handles the whole execution process and terminates when either (1) the request is complete (no more tool calls are required) or (2) an error occurs. Voila! You have created an agent. After the Agent pipeline completes a round of execution, the user can use the "Visualize Agent Flow" button in the Agent Snap to see the tools that were called by the LLM.

Tips and Tricks for the Agent Snap
Let's take a look at the features built into the Agent Snap.

Reuse pipelines
Most agentic tool calls are processes that can be reused. To minimize execution load, we can use the "Reuse tool pipeline" feature. This feature allows tool pipeline instances to be reused, so that the Agent does not need to spawn a new pipeline every time a tool is called. To use this feature, the tool pipeline to be reused must be Ultra compatible; otherwise, the pipeline execution would hang and the Agent Snap would eventually time out.

Tool call monitoring
Agents can be long-running; it's not rare for an Agent to run multiple iterations. To see what's happening in the process, the Agent Snap has built-in monitoring during validation. The user can see the iteration index, the tool that is currently being called, and the parameters used for the tool call in the pipeline statistics status bar. Selecting the "Monitor tool call" option includes the parameters in the status update; this is an opt-in feature. If the user does not wish to expose this information to SnapLogic, they should leave it disabled.
Warnings
Agent configuration is a delicate process; a mistake can potentially lead to errors. The Agent Snap has several built-in warnings, so the user can be better aware of what could go wrong.

1. Agent process completed before all tool calls completed
The Agent Snap has an Iteration limit setting, which caps the number of iterations the Agent can run. If the user sets a limit small enough that the Agent stops while the LLM is still awaiting tool calls, this warning pops up to signal that the execution is incomplete.

2. Tool pipeline path is not defined
A function (tool) definition used by the Agent Snap should include a tool pipeline path, so the Agent Snap can link to the actual pipeline that carries out the execution. If the pipeline path is not included in the function definition, this warning pops up to signal that the Agent will not proceed.

3. Duplicate tool naming
As more and more tools are added to the Agent Snap, it becomes likely that two tools share the same name. The Agent Snap can rename the tools being sent to the LLM while still linking to the correct pipelines. A warning also appears in the pipeline statistics to alert the user to this change in behavior.

Release Timeframes
The Agent Snap is the foundation of the next-generation SnapLogic Agent. We will be releasing four Agent Snaps in the November release:
Amazon Bedrock Converse API Agent
OpenAI Chat Completions Agent
Azure OpenAI Chat Completions Agent
Google Gemini API Agent

To make better use of the Agent Snaps, we will also be introducing new capabilities to some of our Function Generators. Here is the list of Function Generator Snaps that will be modified soon:
APIM Function Generator Snap
OpenAPI Function Generator Snap
MCP Function Generator Snap

We hope you are as excited as we are about this one.
JWT Configuration for SnapLogic Public API
This document details the process of configuring JWT authentication for the SnapLogic Public API using self-generated keys, without the use of any third-party JWT providers. It covers key generation, JWKS creation, and SnapLogic configuration.

1. Key Generation and JWKS Creation

1.1 Set up the command prompt
Open CMD and change to the OpenSSL bin folder.

1.2 Generate the Private Key
Use the following command to generate a 2048-bit RSA private key in PEM format.

```bash
openssl genpkey -algorithm RSA -out jwt_private_key.pem -pkeyopt rsa_keygen_bits:2048
```

Result: A file named jwt_private_key.pem will be created. This key must be kept secret and secure.

1.3 Convert to PKCS#8 Format
JWT generation requires the private key to be in PKCS#8 format for proper decoding, so convert jwt_private_key.pem into PKCS#8 format.

```bash
openssl pkcs8 -topk8 -in jwt_private_key.pem -out jwt_private_key_pkcs8.pem -nocrypt
```

Result: A new file, jwt_private_key_pkcs8.pem, will be created. Use this key in your application for signing JWTs.

1.4 Extract the Public Key
The public key is required for the JWKS document.

```bash
openssl rsa -in jwt_private_key_pkcs8.pem -pubout -out jwt_public_key_pkcs8.pem
```

Result: A file named jwt_public_key_pkcs8.pem will be created.

1.5 Extract Public Key Components for JWKS
Extract the Modulus and Exponent from the public key. These are the core components of your JWKS.

```bash
openssl rsa -pubin -in jwt_public_key_pkcs8.pem -text -noout
```

The output will look like this:

```
Public-Key: (2048 bit)
Modulus:
    00:d2:e3:23:2c:15:a6:5b:54:c1:89:f7:5f:41:bf:
    ...
Exponent: 65537 (0x10001)
```

2. JWKS Creation and JWT Endpoint Configuration

2.1 The steps below explain how to create the JWKS JSON within SnapLogic.

2.1.1 Create a new project space and a project "JWKS", or even an API named "JWKS". (This step is just for access control, so the API policy applies only to this purpose.)
2.1.2 Create the pipeline CreateJWKS.
2.1.3 Update the Modulus and Exponent values in the Mapper, copied from step 1.5 in Section 1 (Key Generation and JWKS Creation).
2.1.4 Select Python as the language and replace the default script in the Script Snap with:

```python
# Import the interface required by the Script snap.
from com.snaplogic.scripting.language import ScriptHook
import base64
import hashlib
import re


class TransformScript(ScriptHook):
    def __init__(self, input, output, error, log):
        self.input = input
        self.output = output
        self.error = error
        self.log = log

    # Helper function to convert an integer to a big-endian byte string.
    # This is a manual implementation of int.to_bytes() for Python 2.7.
    def int_to_bytes(self, n):
        if n == 0:
            return '\x00'
        hex_string = "%x" % n
        if len(hex_string) % 2 == 1:
            hex_string = '0' + hex_string
        return hex_string.decode("hex")

    def execute(self):
        self.log.info("Executing Transform script")
        while self.input.hasNext():
            try:
                inDoc = self.input.next()

                # Modulus conversion logic
                hex_input = inDoc['hex_string_field']
                clean_hex_string = hex_input.replace('\n', '').replace(' ', '').replace(':', '')
                modulus_bytes = clean_hex_string.decode("hex")
                modulus_base64url = base64.urlsafe_b64encode(modulus_bytes).rstrip('=')

                # Exponent conversion logic
                exponent_input_str = inDoc['exponent_field']
                match = re.search(r'^\d+', exponent_input_str)
                if match:
                    exponent_int = int(match.group(0))
                else:
                    raise ValueError("Could not parse exponent value from string.")
                exponent_bytes = self.int_to_bytes(exponent_int)
                exponent_base64url = base64.urlsafe_b64encode(exponent_bytes).rstrip('=')

                # Dynamic Key ID (kid) generation logic:
                # concatenate the Base64url-encoded modulus and exponent,
                # compute the SHA-256 hash, then Base64url encode the hash.
                jwk_string = modulus_base64url + exponent_base64url
                kid_hash = hashlib.sha256(jwk_string).digest()
                kid = base64.urlsafe_b64encode(kid_hash).rstrip('=')

                # Prepare the output document with all values
                outDoc = {
                    'modulus_base64url': modulus_base64url,
                    'exponent_base64url': exponent_base64url,
                    'kid': kid
                }
                self.output.write(inDoc, outDoc)
            except Exception as e:
                errDoc = {'error': str(e)}
                self.log.error("Error in python script: " + str(e))
                self.error.write(errDoc)
        self.log.info("Script executed")

    def cleanup(self):
        self.log.info("Cleaning up")


hook = TransformScript(input, output, error, log)
```

2.1.5 Replace the default value in the JSON Generator with:

```json
{
    "keys": [
        {
            "kty": "RSA",
            "alg": "RS256",
            "kid": $kid,
            "use": "sig",
            "e": $exponent_base64url,
            "n": $modulus_base64url
        }
    ]
}
```

This will return the JWKS JSON.

2.2 The steps below create the public endpoint for the JWKS JSON. They can be done as a standalone API or as a separate project dedicated to this JWKS authentication.

2.2.1 Create the pipeline getJWKS.
2.2.2 Paste the JWKS generated in step 2.1.5 above into the JSON Generator:

```json
{
    "keys": [
        {
            "kty": "RSA",
            "alg": "RS256",
            "kid": "vTfx70NbtVbarHnBetDHNqLXsWVr4Ue5oC32TFNSMlc",
            "use": "sig",
            "e": "AQAB",
            "n": "ANLjIywVpltUwYn3X0G_********_3JmpnSh419wDZC_8-Ts"
        }
    ]
}
```

2.2.3 Configure the JSON Formatter as shown.
2.2.4 Create a Task named jwks.json, follow the task configuration as shown, and copy the Ultra Task HTTP endpoint. Select Cloud as the Snaplex, as the endpoint has to be truly public.
2.2.5 Create an API Policy - Anonymous Authenticator and key in the details as shown.
2.2.6 Create an API Policy - Authorize By Role and key in the details as shown.

3. SnapLogic JWT Configuration
This step links SnapLogic to your JWKS. Configure the Admin Manager:

3.1 In the SnapLogic Admin Manager, navigate to Authentication > JWT.
3.1.1 Issuer ID: Enter a unique identifier for your issuer. This can be a custom string.
3.1.2 JWKS Endpoint: Enter the full HTTPS URL where you have hosted the JWKS JSON file, i.e., the HTTP endpoint copied in step 2.2.4 of Section 2 (JWKS Creation and JWT Endpoint Configuration).
3.2 In the SnapLogic Admin Manager, navigate to Allowlists > CORS allowlist.
3.2.1 Add domain: Key in the domain https://*.snaplogic.com in the Domain text box, click Add Domain, and click Save.

4. JWT Generation and Structure
The JWT must be created with a header that references your custom kid and a payload with claims that match SnapLogic's requirements.

4.1 Header:

```json
{
    "alg": "RS256",
    "typ": "JWT",
    "kid": "use the key ID generated in step 2.1.5 of Section 2 (JWKS Creation and JWT Endpoint Configuration)"
}
```

4.2 Payload:

```json
{
    "iat": {{timestampIAT}},
    "exp": {{timestampEXP}},
    "sub": "youremail@yourcompany.com",
    "aud": "https://elastic.snaplogic.com/api/1/rest/public",
    "iss": "issuer ID given in step 3.1.1",
    "org": "Your Snaplogic Org"
}
```

4.3 Sign the JWT: Use jwt_private_key_pkcs8.pem to sign the token with your application's JWT library.

4.4 Postman pre-request script to automatically generate epoch timestamps for the iat and exp claims:

```javascript
// Current time in milliseconds since the epoch
let now = new Date().getTime();
// iat and exp must be whole seconds, so truncate the millisecond fraction
let iat = Math.floor(now / 1000);
// Expire the token one hour from now
let futureTime = now + (3600 * 1000);
let exp = Math.floor(futureTime / 1000);

// Set the collection variables referenced as {{timestampIAT}} and {{timestampEXP}}
pm.collectionVariables.set("timestampIAT", iat);
pm.collectionVariables.set("timestampEXP", exp);
```
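To illustrate steps 4.1 through 4.3 end to end, here is a minimal signing sketch in Node.js using the jsonwebtoken package. The library choice is an assumption (any RS256-capable JWT library works, per step 4.3), and the kid and issuer values are placeholders for the values produced in steps 2.1.5 and 3.1.1:

```javascript
const fs = require('fs');
const jwt = require('jsonwebtoken'); // assumed library; any RS256-capable JWT library works

// PKCS#8 private key generated in step 1.3
const privateKey = fs.readFileSync('jwt_private_key_pkcs8.pem');

const now = Math.floor(Date.now() / 1000); // whole seconds since the epoch

const token = jwt.sign(
  {
    iat: now,
    exp: now + 3600, // one hour from now
    sub: 'youremail@yourcompany.com',
    aud: 'https://elastic.snaplogic.com/api/1/rest/public',
    iss: 'your-issuer-id',     // placeholder: issuer ID from step 3.1.1
    org: 'Your Snaplogic Org'
  },
  privateKey,
  {
    algorithm: 'RS256',
    keyid: 'your-kid-here'     // placeholder: kid generated in step 2.1.5; goes into the JWT header
  }
);

console.log(token);
```

The resulting token is then sent in the Authorization header (Bearer <token>) when calling the SnapLogic Public API.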
Unable to preview records
Hello! I'm new to SnapLogic and have a strange issue. I built a simple pipeline that reads data from a CSV file I uploaded to SnapLogic. The pipeline validates fine. When I click the preview button between Snaps, I see the preview and it shows the headers from my CSV, but I don't see the records themselves, regardless of whether I switch to Table, Raw, or JSON. What's strange is that my colleagues CAN see the records when they click the preview button. I would appreciate any guidance. Thank you!
Welcome to the Gold Star to the Winner Challenge - Halloween 2025 Edition! ⭐️
From time to time I send my team at SnapLogic fun pipeline-building challenges that Expression Enthusiasts may enjoy solving. We have decided to open this one up to the broader SnapLogic Community. The Gold Star to the Winner Challenge, Halloween 2025 Edition, is the spookiest challenge of the year. Your job will be to cast a powerful spell in the form of an expression to tame some monstrously messy data. As usual, this challenge comes from a real-world use case. It centers on schemalessly transforming 'somewhat' structured data into a perfectly structured, "OCD-approved" format.

The Details: In the following dataset, there are two keys: "Name" and "Path". The Trick is to craft an expression that can magically break apart the Path string into separate keys, numbering them sequentially (pathelement_1, pathelement_2, etc.). For example, a path with 3 elements in it would transform into 3 JSON keys.

Input JSON:

```json
{ "Path": "my drive/matt/customers" }
```

Output JSON:

```json
{
  "pathelement_1": "my drive",
  "pathelement_2": "matt",
  "pathelement_3": "customers"
}
```

Here's the raw input to be put in a JSON Generator:

```json
[{"Name":"Fred","Path":"spooky/graveyard/tombstones/fog/cackles/witches/brewing/potions/spells/hauntedhouse.jpg"},{"Name":"Wilma","Path":"kids/yard ornaments/ghosts/goblins/monsters/jack o lantern/leaves/cocoa/chill/candysacks/excitement/pumpkins/tricks/treats.png"},{"Name":"Pebbles","Path":"shadows/bats/moonlight/screams/night/costumes/party.mp4"},{"Name":"Dino","Path":"creepy/cornfields/scarecrows/spiders/webs.gif"}]
```

And the expected output:

```json
[{"pathelement_1":"spooky","pathelement_2":"graveyard","pathelement_3":"tombstones","pathelement_4":"fog","pathelement_5":"cackles","pathelement_6":"witches","pathelement_7":"brewing","pathelement_8":"potions","pathelement_9":"spells","pathelement_10":"hauntedhouse.jpg","Name":"Fred"},{"pathelement_1":"kids","pathelement_2":"yard ornaments","pathelement_3":"ghosts","pathelement_4":"goblins","pathelement_5":"monsters","pathelement_6":"jack o lantern","pathelement_7":"leaves","pathelement_8":"cocoa","pathelement_9":"chill","pathelement_10":"candysacks","pathelement_11":"excitement","pathelement_12":"pumpkins","pathelement_13":"tricks","pathelement_14":"treats.png","Name":"Wilma"},{"pathelement_1":"shadows","pathelement_2":"bats","pathelement_3":"moonlight","pathelement_4":"screams","pathelement_5":"night","pathelement_6":"costumes","pathelement_7":"party.mp4","Name":"Pebbles"},{"pathelement_1":"creepy","pathelement_2":"cornfields","pathelement_3":"scarecrows","pathelement_4":"spiders","pathelement_5":"webs.gif","Name":"Dino"}]
```

Solution approaches: There are many ways to skin this cat, highlighting the flexibility of the SnapLogic platform. My solution contains a single expression in a Mapper. Others (the purists) have solved it by configuring and connecting many Transform Snaps. All solutions are good as long as the solution matches the expected output above and is done in a completely schemaless way.

The Prize: The winner will receive recognition in the form of SnapLogic swag (👕🥤🍾 🎁...).

The rules: To keep the playing field level, send solutions directly to me via email (msager@snaplogic.com) and attach your pipeline .slp file (i.e., we don't want to give solutions away on this post for others to see). The contest ends on 10/31/2025. Good luck to all! I look forward to seeing your solutions.
SnapLogic Test Automation with Robot Framework: A Complete Testing Solution

Introduction
In today's fast-paced integration landscape, ensuring the reliability and performance of your SnapLogic pipelines is crucial. We're excited to introduce a comprehensive test automation framework that combines the power of Robot Framework with SnapLogic's APIs to deliver a robust, scalable, and easy-to-use testing solution. This approach leverages snaplogic-common-robot, a PyPI-published library that provides prebuilt Robot Framework keywords for interacting with the SnapLogic Public APIs, integrated within a Docker-based environment. This lets teams spin up full SnapLogic environments on demand, including Groundplex, databases, and messaging services, so tests run the same way everywhere.

This blog post explores two key components of our testing ecosystem:
snaplogic-common-robot: A PyPI-published library (https://pypi.org/project/snaplogic-common-robot/) providing reusable Robot Framework keywords for SnapLogic automation
snaplogic-robotframework-examples: A public repository providing a complete testing framework with baseline test suites and Docker-based infrastructure for comprehensive end-to-end pipeline validation

Key Features and Benefits

1. Template-Based Testing
The framework supports template-driven test cases, allowing you to:
Define reusable test patterns
Parameterize test execution
Maintain consistency across similar test scenarios

2. Intelligent Environment Management
The framework automatically:
Loads environment variables from multiple .env files
Auto-detects JSON values and converts them to appropriate Robot Framework variables
Validates required environment variables before test execution

Why Robot Framework for SnapLogic Testing?
Robot Framework offers several advantages for SnapLogic test automation:
Human-readable syntax: Tests are written in plain English, making them accessible to both technical and non-technical team members
Keyword-driven approach: Promotes reusability and reduces code duplication
Extensive ecosystem: Integrates seamlessly with databases, APIs, and various testing tools
Comprehensive reporting: Built-in HTML reports with detailed execution logs
CI/CD friendly: Easy integration with Jenkins, GitLab CI, and other automation platforms

The Power of Docker-Based Testing Infrastructure
One of the most powerful features of our framework is its Docker-based architecture.
Isolated Test Environments: Each test run operates in its own containerized environment
Groundplex Control: Automatically spin up and tear down Groundplex instances for testing
Database Services: Pre-configured containers for Oracle, PostgreSQL, MySQL, SQL Server, DB2, and more
Message Queue Systems: Integrated support for Kafka, ActiveMQ, and other messaging platforms
Storage Services: MinIO for S3-compatible object storage testing

This architecture enables the following capabilities:
Test in production-like environments without affecting actual production systems
Quickly provision and tear down complete testing stacks
Run parallel tests with isolated resources
Ensure consistency across different testing environments

snaplogic-common-robot Library

Installation
The snaplogic-common-robot library is published on PyPI (https://pypi.org/project/snaplogic-common-robot/), making installation straightforward:

```bash
pip install snaplogic-common-robot
```

Core Components
The library provides the following components:
SnapLogic APIs: Low-level keywords for direct API interactions
SnapLogic Keywords: High-level, business-oriented keywords for common operations
Common Utilities: Database connections, file operations, and utility functions
Dependency Libraries: All necessary dependency libraries to run Robot Framework tests for SnapLogic, supporting API testing, database operations, Docker container testing, JMS messaging, and AWS integration tools. These libraries are automatically installed as dependencies when you install snaplogic-common-robot, providing comprehensive API support. This library ecosystem continues to expand as we add support for additional features and capabilities.

snaplogic-robotframework-examples Repository
The snaplogic-robotframework-examples repository (https://github.com/SnapLogic/snaplogic-robotframework-examples) demonstrates how to build a complete testing framework using the snaplogic-common-robot library.

Framework Overview
Note: This project structure is continuously evolving! We're actively working to make the framework easier and more efficient to use, and the structure is subject to change as we iterate on improvements to enhance developer experience and framework efficiency.
The framework follows a modular architecture with a clear separation of concerns:

Configuration Layer
.env and .env.example manage environment variables for sensitive credentials and URLs
env_files/ holds all the details required for creating accounts
Makefile provides a central command interface for all build and test operations
docker-compose.yml orchestrates the entire multi-container environment with a single command

Build Automation
makefiles/ contains modular scripts organized by service type (databases, messaging, mocks)
Each service category has dedicated makefiles for independent lifecycle management

Infrastructure
docker/ holds Docker configurations for all services (Groundplex, Oracle, PostgreSQL, Kafka)
env_files/ stores service-specific environment variables to isolate configuration
The containerized approach ensures reproducible test environments across all systems

Test Organization
test/suite/ contains Robot Framework test suites organized by pipeline functionality
test/test_data/ provides input files and expected outputs for validation
Tests are grouped logically (Oracle, PostgreSQL+S3, Kafka) for easy maintenance

Pipeline Assets
src/pipelines/ stores the actual SnapLogic SLP files being tested
src/tools/ includes helper utilities and a requirements.txt with Python dependencies
The snaplogic-common-robot library is installed via requirements.txt, providing reusable keywords

Test Reporting
Robot Framework automatically generates comprehensive test reports after each execution:
report.html provides a high-level summary with pass/fail statistics and an execution timeline
log.html offers detailed step-by-step execution logs with keyword-level information
output.xml contains structured test results in XML format for CI/CD integration
Reports include screenshots, error messages, and detailed traceability for debugging
All reports are timestamped and can be archived for historical analysis

Supporting Components
travis_scripts/ enables CI/CD automation for continuous testing
README/ holds project documentation and setup guides

Key Architecture Benefits
Modular design allows independent service management
Docker isolation ensures consistent test environments
Makefile automation simplifies complex operations
A clear directory structure improves maintainability
CI/CD integration enables automated testing workflows

Integration with CI/CD Pipelines
One of the most powerful aspects of our Robot Framework testing solution is its seamless integration with CI/CD pipelines. This enables continuous testing, ensuring that every code change is automatically validated against your SnapLogic integrations.

Why CI/CD Integration Matters
In modern DevOps practices, manual testing becomes a bottleneck. By integrating our Robot Framework tests into your CI/CD pipeline, you achieve:
Automatic Test Execution: Tests run automatically on every commit, pull request, or scheduled interval
Early Bug Detection: Issues are caught immediately, not days or weeks later in production
Consistent Testing: The same tests run every time, eliminating human error and variability
Fast Feedback Loop: Developers know within minutes if their changes broke existing integrations
Quality Gates: Prevent deployments if tests fail, ensuring only validated code reaches production

Jenkins is one of the most popular CI/CD tools, and integrating our Robot Framework tests is straightforward. How does it work?
Stage 1: Prepare Environment - Install the snaplogic-common-robot library and required dependencies
Stage 2: Start Docker Services - Launch the Groundplex, Oracle DB, Kafka, and MinIO containers
Stage 3: Run Robot Framework Tests - Execute test suites in parallel across 4 threads using pabot
Stage 4: Publish Test Results - Generate HTML reports, XML results, and test artifacts, optionally uploading them to S3
Stage 5: Send Notifications - Distribute test results via Slack
Post: Cleanup - Tear down containers, remove temp files, archive logs

Slack Notifications
Our CI/CD pipeline automatically sends detailed test execution reports to Slack, providing immediate visibility into test results for the entire team.

HTML Reports
Robot Framework automatically generates comprehensive HTML reports after each test execution, providing detailed insights into test results, performance, and execution patterns.

Real-World Benefits
Here's what this means for your team:

For Developers
Push code with confidence - tests run automatically
Get feedback in minutes - no waiting for QA cycles
Fix issues immediately - while the code is still fresh in your mind

For QA Teams
Focus on exploratory testing - let automation handle regression
Better test coverage - tests run on every single change
Clear reports - see exactly what's tested and what's not

Future Enhancements
We're continuously improving the framework; planned features include:
Enhanced support for more endpoints
Integration with cloud storage services
Advanced performance testing capabilities
Enhanced security testing features

Conclusion
The combination of the snaplogic-common-robot library and the snaplogic-robotframework-examples framework provides a powerful, scalable solution for SnapLogic test automation. By leveraging Docker's containerization capabilities, Robot Framework's simplicity, and SnapLogic's robust APIs, teams can:
Reduce testing time from hours to minutes
Increase test coverage with automated end-to-end scenarios
Improve reliability through consistent, repeatable tests
Enable continuous testing in CI/CD pipelines

Whether you're testing simple pipeline transformations or complex multi-system integrations, this framework provides the tools and patterns needed for comprehensive SnapLogic testing.

Getting Involved
We welcome contributions from the SnapLogic community! Here's how you can get involved:
Try the Framework: Install snaplogic-common-robot and run the example tests
Report Issues: Help us improve by reporting bugs or suggesting enhancements
Contribute Code: Submit pull requests with new keywords or test patterns
Share Your Experience: Let us know how you're using the framework in your organization

Resources
snaplogic-common-robot on PyPI: pip install snaplogic-common-robot (https://pypi.org/project/snaplogic-common-robot/)
snaplogic-robotframework-examples repo: https://github.com/SnapLogic/snaplogic-robotframework-examples
Documentation: Comprehensive HTML documentation available after installation via the README folder
Community Support: Join the discussion in the SnapLogic Community forums

Start automating your SnapLogic tests today and experience the power of comprehensive, containerized test automation!

Questions? We're Here to Help!
We hope this comprehensive guide helps you get started with automated testing for your SnapLogic integrations. The combination of snaplogic-common-robot and Docker-based infrastructure provides a powerful foundation for building reliable, scalable test automation.
Have questions or need assistance implementing this framework? The SLIM (SnapLogic Intelligent Modernization) team is here to support you! We'd love to hear about your use cases, help you overcome any challenges, or discuss how this framework can be customized for your specific needs.

Contact the SLIM Team:
Reach out to us directly through the SnapLogic Community forums
Tag us in your questions with @slim-team
Email us at slim-team@snaplogic.com

We're committed to helping you achieve testing excellence and look forward to seeing how you leverage this framework to enhance your SnapLogic automation journey!

Happy Testing!
The SLIM Team