How to integrate a 3rd-party job scheduler (e.g. Cisco Tidal) to kick off and monitor SnapLogic pipeline execution
This document demonstrates how to integrate a 3rd-party job scheduler (e.g. Cisco Tidal) to kick off and monitor SnapLogic pipeline execution.

Invoke:
- Deploy the SnapLogic integration pipeline as a triggered task.
- Invoke the SnapLogic pipeline execution from Tidal through the task URL (Ground or Cloud).
- Design long-running pipelines to be asynchronous, to avoid timeouts (firewall, SnapLogic control plane).

Monitor:
- Use the SnapLogic Pipeline Monitoring API to check the execution status. You can find the SnapLogic Pipeline Monitoring API here.

Case Study:
1. A master pipeline to accept all Tidal requests and distribute them accordingly (Triggered):
- Call a different SnapLogic triggered task based on the request, e.g. https://elastic.snaplogic.com:443/api/1/rest/slsched/feed/organization_name/projects/project_name/pipeline_name
- Log the Tidal Job ID and the SnapLogic pipeline runtime ID into a database for status checks. The database table should include the key fields: Tidal Job ID, SnapLogic Pipeline Runtime ID, Status.
- Respond to the Tidal request to close the connection and avoid a time-out.
2. A monitoring pipeline to update Tidal on the execution status (Scheduled):
- Query the database for running pipelines.
- Obtain the pipeline execution status via a REST call using the SnapLogic pipeline runtime ID, e.g. https://elastic.snaplogic.com/api/1/rest/public/runtime/organization_name/ruuid?
- Update Tidal on the execution status using the Tidal Job ID.
- Update the database with the new status.
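As a rough sketch, the invoke-and-monitor calls above might look like this in Python. The organization/project names, the bearer-token authorization, and the set of terminal runtime states are assumptions for illustration; check the SnapLogic Pipeline Monitoring API documentation for the authoritative details.

```python
# Hedged sketch of the invoke-and-monitor flow. URLs mirror the examples in
# this article; names, the auth scheme, and the terminal-state list are
# placeholders/assumptions, not verified values.
import json
import urllib.request

BASE = "https://elastic.snaplogic.com"

def trigger_task_url(org, project, pipeline):
    """Build the triggered-task URL shown in the article."""
    return f"{BASE}/api/1/rest/slsched/feed/{org}/projects/{project}/{pipeline}"

def runtime_status_url(org, ruuid):
    """Build the pipeline-runtime monitoring URL shown in the article."""
    return f"{BASE}/api/1/rest/public/runtime/{org}/{ruuid}"

# Assumed terminal states -- confirm against the Pipeline Monitoring API docs.
TERMINAL_STATES = {"Completed", "Failed", "Stopped"}

def is_finished(state):
    """True once a runtime state means the execution is over."""
    return state in TERMINAL_STATES

def kickoff(org, project, pipeline, bearer_token):
    """POST to the triggered task and return the parsed response body."""
    req = urllib.request.Request(
        trigger_task_url(org, project, pipeline),
        method="POST",
        headers={"Authorization": f"Bearer {bearer_token}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

The master pipeline would call something like `kickoff(...)`, store the Tidal Job ID and runtime ID, and return immediately; the scheduled monitoring pipeline would then GET `runtime_status_url(...)` for each running entry and call `is_finished(...)` on the reported state.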
What happens during the Snaplex Upgrade

When the Snaplex is upgraded:

a) JCC nodes go through a rolling restart. As one JCC restarts, the others continue processing pipelines on the older version. Each JCC waits up to 15 minutes (a value configurable at the Snaplex level) for its running pipelines to finish before it restarts.

b) FeedMasters (FMs) go through a rolling restart. One FM stays up and running while the other restarts. The running FM queues incoming messages in its inbound queue, up to 10 GB. When an FM restarts, it persists the messages from its queue to disk, and they are restored to the queue afterwards. Once the JCCs are back up, the FM resumes sending queued messages for processing.
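The drain rule in (a) can be pictured with a small sketch. This is illustrative logic only, not SnapLogic's actual implementation; the `poll` callback and the 1-second polling interval are assumptions.

```python
# Illustrative sketch of "wait up to a configurable maximum for running
# pipelines to finish, then restart anyway". Not SnapLogic's real code.
import time

MAX_DRAIN_SECONDS = 15 * 60  # default 15 minutes, configurable per Snaplex

def drain_node(running_pipelines, poll, max_wait=MAX_DRAIN_SECONDS,
               sleep=time.sleep, clock=time.monotonic):
    """Wait until the node has no running pipelines or max_wait elapses.

    Returns True if all pipelines finished in time, False if the wait was
    cut short (the node restarts either way).
    """
    deadline = clock() + max_wait
    while running_pipelines:
        if clock() >= deadline:
            return False
        sleep(1)
        running_pipelines = poll()  # re-check what is still running
    return True
```

The `sleep` and `clock` parameters are injected only so the logic can be exercised without real waiting.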
How to migrate existing production environment to new servers

For clients moving from one set of Snaplexes to a new set (e.g. their own AWS Groundplexes rather than a regular data warehouse): when they migrate pipelines from the old Snaplexes to the new ones, they have to manually verify that the defined tasks (not the jobs that have already run) point to the new Groundplex nodes. The following two pipelines read all the tasks from a project, and from a project space as well. They can be used to migrate an existing production environment to new servers in a more automated fashion.

List or Read tasks from project space_2018_06_05.slp (4.2 KB)
To get the task details for project_2018_06_05.slp (4.3 KB)
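Once a task-listing pipeline like the ones attached has produced the task metadata, a quick check for tasks still pointing at the old nodes might look like this. The field names (`task_name`, `snaplex_path`) and the plex paths are placeholders; adjust them to match whatever your task-listing pipeline actually emits.

```python
# Hedged sketch: flag tasks whose Snaplex assignment still points at the old
# Groundplex. Field names and paths are illustrative assumptions.
OLD_PLEX = "/myorg/shared/old-groundplex"   # placeholder path
NEW_PLEX = "/myorg/shared/new-groundplex"   # placeholder path

def tasks_needing_migration(tasks, old_plex=OLD_PLEX):
    """Return the names of tasks still assigned to the old Snaplex."""
    return [t["task_name"] for t in tasks if t.get("snaplex_path") == old_plex]
```

Running this over the exported task list gives a worklist of tasks to repoint before cutting over, instead of eyeballing each task by hand.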
Tip: Recovering Deleted Pipelines from Execution History in Dashboard

From @cstewart in the thread https://community.snaplogic.com/t/pipeline-versioning/2445:

If a pipeline has been deleted or changed but it has an execution record, click on the pipeline name in Dashboard (the Dashboard shows executions up to 60 days back). When the pipeline opens in Designer, it shows the version that was actually executed. Since that pipeline no longer exists, you can't edit or save it. However, you can export it! Once exported, you can import it back in. This gives you an extra 30 days after the pipeline is purged from the recycle bin.