Improve Your Pipelines: An Intro to SnapGPT Analysis

Building a data pipeline is one thing; ensuring it's efficient, scalable, and error-free is another. This video introduces the SnapGPT Pipeline Analysis feature, a powerful tool designed to help you continuously improve your pipelines. In this quick tutorial, we walk you through a complete analysis and explain the color-coded warning system so you can prioritize what to fix.

https://youtu.be/RCz43UsGbF8?si=ePOZ0o8BFybiiq4o
Hi Giuseppe - When you disable snaps in your pipeline:
Disabled snaps appear with a circle and slash icon overlay
Downstream snaps from disabled ones turn a light shade of gray
Upstream snaps that execute successfully turn green
All snaps (enabled, disabled, and downstream) still appear in the statistics table
Available Filtering Options

The Snap Statistics tab currently provides these filtering capabilities:
ERROR - Filter to show only snaps with error status
WARN - Filter to show only snaps with warning status
Sorting options - Sort alphabetically by snap name or by dataflow order
Search functionality - Use the search box to find specific snaps
Workarounds for Better Visibility

While you can't filter out disabled snaps entirely, here are some approaches to get a clearer view:
Use the search box to quickly locate specific active snaps
Sort by "Dataflow" to see snaps in execution order, which may help you focus on the active portion of your pipeline
Look for visual indicators - Active snaps will show actual execution statistics (CPU %, Memory, Duration) while disabled snaps will have minimal or no statistics
Hope this helps
Hi Paul - SnapGPT uses a combination of models, notably Anthropic's Claude 2 and Claude Sonnet via AWS Bedrock.
SnapLogic February 2026 Release is Here!

We've just rolled out our latest updates, focusing on a more streamlined user experience and enhanced GenAI capabilities. Here are the highlights:

🎨 Major UI/UX Refresh
New Waffle Menu: One-click access to Development, Mission Control, and Resources.
Project Manager: A new dedicated space for managing projects and assets (transitioning away from Classic Manager).
Redesigned Designer: A modern header and toolbar for a more consistent building experience.
🤖 GenAI & AgentCreator
Pipeline Tagging: Easily categorize pipelines as MCP Servers or Tools for better AI agent integration.
Dynamic Accounts: Support for OpenAPI and APIM services as tools using dynamic account parameters.
Improved Logs: A new sl_agent_log field to better track LLM reasoning.
🔌 New & Enhanced Snaps
Databricks Execute: New Snap for batch execution (previous Snap renamed to Multi Execute).
HTML to PDF Converter: Easily generate PDF documents from HTML sources.
SAP Upgrade: Updated JCo drivers (3.1.12) for better connectivity.
⚙️ Platform & Infrastructure
JRE 17 Support: Snaplex now supports JRE 17 bytecode.
Upgrade Recommended: We recommend upgrading your Snaplex nodes to version 4.44 GA to utilize all new features.
⚠️ Note for Admins: Classic Manager functionality is being migrated to Project Manager, Admin Manager, and Monitor ahead of its retirement in May 2026.

🔗 Full Release Notes & Details: https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/4414636040/February+2026+Release+Notes+-+UAT#Highlights
Hi Sneha - This is a challenging situation that involves both Git operations and file system integrity.

Why File Content Was Lost After Restoring

When files are deleted during a Git commit operation and moved to the Recycle Bin, the restoration process may not fully recover the file content or metadata. This can happen because:
Git Operations vs. File System: Git operations work at a different level than simple file deletion/restoration
Metadata Corruption: The file metadata or internal references may have been corrupted during the Git operation
Incomplete Restoration: The Recycle Bin restoration may not have fully reconstructed the file's internal structure
Why You Cannot Delete the Files

The inability to delete these corrupted files, even with admin access, suggests:
File System Lock: The files may be in a locked or corrupted state
Metadata Issues: The file system metadata is inconsistent
Reference Problems: Internal SnapLogic references to these files may be broken
Recommended Solutions

1. Use Git Operations to Restore Properly

Instead of relying on Recycle Bin restoration, try using Git operations:
Discard Changes: Use the Git discard changes functionality to restore files from the repository
Hard Reset: Perform a hard reset to the last known good commit
Checkout Previous Version: Check out the files from a previous commit where they were working
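On the command line (outside SnapLogic's Git integration), these three options correspond to standard Git commands. This is a minimal sketch assuming a plain local clone; the repository setup and the pipeline.slp file name are only illustrative:

```shell
# Illustrative setup: a repo with one known-good commit
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "pipeline definition v1" > pipeline.slp
git add pipeline.slp
git commit -qm "known good state"

# Simulate losing the working copy (e.g. a failed restore)
rm pipeline.slp

# Option 1: discard local changes / restore the tracked file
git checkout -- pipeline.slp

# Option 2: hard reset the branch to the last known good commit
git reset -q --hard HEAD

# Option 3: check out the file from a specific earlier commit
git checkout HEAD -- pipeline.slp

cat pipeline.slp
```

All three end with the file content restored from the repository rather than from the operating system's Recycle Bin.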
2. Manual Cleanup Process

For the corrupted files that cannot be deleted:
Contact SnapLogic Support: This level of file system corruption typically requires backend intervention
Database Cleanup: The references to these files may need to be removed from SnapLogic's internal database
3. Preventive Measures

To avoid this issue in the future:
Always commit working files before performing Git operations
Use Git's built-in restore mechanisms rather than file system operations
Create backups before major Git operations
Test Git operations in a development environment first
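One lightweight way to take a backup before a risky Git operation is to tag the current commit, so you can always hard-reset back to it. A minimal sketch (file names and commit messages are illustrative, not SnapLogic-specific):

```shell
# Illustrative setup: a repo in a known working state
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "v1" > pipeline.slp
git add pipeline.slp
git commit -qm "working state"

# Create a backup point before the risky operation
git tag backup-point

# The risky operation goes wrong...
echo "broken" > pipeline.slp
git add pipeline.slp
git commit -qm "bad change"

# ...so roll back to the tagged backup
git reset -q --hard backup-point
cat pipeline.slp
```

Tags are cheap and local, so creating one before every major operation costs nothing and gives you a named rollback target.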
4. Immediate Workaround

If you need to continue working:
Create new versions of the affected files with different names
Import content from backups if available
Recreate the pipelines if the content is not recoverable
Next Steps
Document the affected files and their paths
Contact SnapLogic Support with details about the Git operation that caused the issue
Check if you have project backups that can be used for restoration
This type of issue typically requires support intervention because it involves both the file system and SnapLogic's internal metadata consistency. The corrupted state of the files suggests that the restoration process didn't properly reconstruct the file structure that SnapLogic expects.
Boehringer Ingelheim has an immediate need for an agile Program Manager, Platform Delivery, to help scale their Data, Application, Analytics, and AI/Agentic platform teams to the next level. The role is based in Germany (Barcelona, Spain is also possible).
Hi Rachel - Search results don't reveal a specific known bug for this issue, but here are some possible causes:

1. Pipeline Validation/Refresh Issues
Sometimes during pipeline validation or when the canvas refreshes, snap configurations can revert to default states
This might happen if there are temporary connectivity issues with the SnapLogic platform
2. Browser/Session Issues
Browser cache problems or session timeouts might cause configuration changes to not persist properly
Multiple users editing the same pipeline simultaneously could cause conflicts
3. Platform Updates
Occasionally, platform updates might affect how snap configurations are stored or retrieved
Recommended Solutions

Immediate Actions:
Document Your Settings: Keep a record of which parameters should have expression toggles enabled
Version Control: Use SnapLogic's project versioning to track when changes occur
Preventive Measures:
Clear Browser Cache: Regularly clear your browser cache and cookies
Single User Editing: Avoid having multiple users edit the same pipeline simultaneously
If the Problem Persists:

Contact SnapLogic Support: Since this affects multiple parameters simultaneously and happens with snaps that haven't been opened recently, this could be a platform issue that needs investigation.

Provide Details: When contacting support, include:
Specific snap types affected
Timeline of when the issue occurs
Browser and version information
Whether it happens across different projects/environments
New SnapLogic Sigma Framework: Continuous Data Processing with Headless Ultra Tasks

SnapLogic Ultra Tasks offer the speed and scalability needed for critical integrations that demand high availability, high throughput, and continuous execution. This feature ensures that data reaches its destination in real time, regardless of data volume, endpoint variety, or integration complexity.

This document serves as a reference guide detailing the configurations and steps needed to successfully deploy Headless Ultra Tasks on the SnapLogic platform. It is targeted at SnapLogic developers and administrators who are familiar with the platform, tasks, and pipelines.

Author: Ram Bysani, SnapLogic Enterprise Architecture team
Hi James H. - Clarifying how Python libraries get installed when using the Remote Python Executor (RPE).

How Python Libraries Are Installed in RPE

The installation process depends on which approach you're using:

1. Using the Standard RPE Image (Pre-built)

When you use the standard RPE Docker image with:
docker pull snaplogic/rpe:main-5
docker run --memory-swap="-1" --restart=always -dti -p 5301:5301 -e "REMOTE_PYTHON_EXECUTOR_TOKEN=" -v /opt/remote_python_executor_log/:/opt/remote_python_executor_log/ --name=rpe snaplogic/rpe:main-5

the libraries are automatically installed at runtime using the SLTool.ensure() method in your Python script. You don't need to run pip commands manually. Here's how it works:
from snaplogic.tool import SLTool as slt
# Libraries are automatically installed when ensure() is called
slt.ensure("scikit-learn", "0.20.0")
slt.ensure("keras", "2.2.4")
slt.ensure("tensorflow", "1.5.0")

2. Using a Custom RPE Image

If you want to pre-install libraries or have more control, you can:
Download the custom RPE package (contains Dockerfile and requirements.txt)
Add your required libraries to requirements.txt
Build your custom Docker image:
docker build --no-cache -t snaplogic_custom_rpe .
Run the custom container:
docker run --memory-swap="-1" --restart=always -dti -p 5301:5301 -e "REMOTE_PYTHON_EXECUTOR_TOKEN=" -v /opt/remote_python_executor_log/:/opt/remote_python_executor_log/ --name=rpe snaplogic_custom_rpe
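For step 2, requirements.txt uses standard pip requirement syntax. Pinning the same illustrative versions as the runtime SLTool.ensure() example above, it might look like:

```text
scikit-learn==0.20.0
keras==2.2.4
tensorflow==1.5.0
```

These packages then get installed once during the Docker build, so the container starts with them already available.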
Key Points:
No manual pip commands needed - The RPE handles library installation automatically
Runtime installation happens when you use SLTool.ensure() in your Python scripts
Custom images allow you to pre-install libraries via requirements.txt during the Docker build process
The requirements.txt approach is used when building custom RPE images, not for runtime installation
The confusion likely comes from the fact that there are two different workflows - the standard image uses runtime installation via SLTool.ensure(), while custom images use the traditional requirements.txt + Docker build approach.
