Forum Discussion
This is what we have seen and implemented in the field.
Workflow
· Schedule a JOB (a triggered task run as a scheduled job via an external job utility, e.g. Control-M) and capture the run_id from the response (shown below)
· Pass it to the monitoring JOB as an HTTP query string parameter
· For multiple JOBs, follow the same process (see the sketch after the sample response below)
Sample Response
[
{"pipeline_name":"Test pipeline","run_id":"a2890a41-5dc9-4f48-80cd-453fcb25ba0b","status":"Started"}
]
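
A minimal sketch of this trigger-and-capture step, assuming a Python/requests script; the task URLs and credentials below are placeholders for whatever your own Triggered Tasks expose, not values from this post:

import requests

# Hypothetical endpoints: substitute the URLs of your own Triggered Tasks
TRIGGER_URL = "https://elastic.snaplogic.com/api/1/rest/slsched/feed/ConnectFasterInc/projects/shared/TestPipelineTask"
MONITOR_URL = "https://elastic.snaplogic.com/api/1/rest/slsched/feed/ConnectFasterInc/projects/shared/MonitorPipelineTask"
AUTH = ("user@example.com", "password")  # placeholder credentials

# 1. Kick off the job and capture the run_id from the response shown above
run_id = requests.post(TRIGGER_URL, auth=AUTH).json()[0]["run_id"]

# 2. Pass the run_id to the monitoring job as a query string parameter
requests.post(MONITOR_URL, params={"run_id": run_id}, auth=AUTH)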
In your task's pipeline, add a Mapper snap with an open output view; the Mapper needs two properties, and you can add it as an independent snap in the pipeline.
pipe.ruuid returns the runtime UUID; map it to run_id and set status = 'Started'.
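As a rough illustration of those two Mapper rows (the target paths simply match the field names in the sample response above):
Expression: pipe.ruuid    Target path: $run_id
Expression: 'Started'     Target path: $status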
Sample Response
[
{"run_id":"a487e390-296e-4c97-9417-28611d82ddd2","status":"Completed"}
]
More details on the Monitoring API provided by SnapLogic
To get pipeline status:
Ex: you can invoke this URL to get the status of all pipelines running in an org, in this case ConnectFasterInc.
The state param takes a comma-separated list; valid values are [NoUpdate, Prepared, Started, Queued, Stopped, Stopping, Completed, Failed] (case sensitive).
This in turn returns JSON with details of the relevant pipelines; from there you filter out (with a Mapper) the pipe_id (the unique ruuid) and pass it on to another call to stop/start the pipeline or fetch its error logs. This is another way of getting the pipe ruuid.
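
A minimal sketch of that status call; the GET endpoint path is an assumption based on the base path of the stop call below and should be checked against the Monitoring API docs linked at the end:

import requests

ORG = "ConnectFasterInc"
AUTH = ("user@example.com", "password")  # placeholder elastic credentials

# Assumed status endpoint (same base path as the stop call below); state is a comma-separated filter
resp = requests.get(
    "https://elastic.snaplogic.com/api/1/rest/public/runtime/" + ORG,
    params={"state": "Started,Queued"},
    auth=AUTH,
)
print(resp.json())  # pick out the pipe_id (ruuid) of the pipeline you care about from this JSON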
POST call
https://elastic.snaplogic.com/api/1/rest/public/runtime/stop/ConnectFasterInc/pipe_id
pipe_id is a variable that contains the run_id of a given pipeline.
Both of these calls require you to pass your elastic username/password as HTTP basic auth.
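
A corresponding sketch of the stop call, with the same placeholder credentials; pipe_id carries the run_id captured earlier (the sample value below is the one from the first response above):

import requests

AUTH = ("user@example.com", "password")  # placeholder elastic credentials
pipe_id = "a2890a41-5dc9-4f48-80cd-453fcb25ba0b"  # run_id of the pipeline to stop

# POST to the stop endpoint shown above
resp = requests.post(
    "https://elastic.snaplogic.com/api/1/rest/public/runtime/stop/ConnectFasterInc/" + pipe_id,
    auth=AUTH,
)
print(resp.status_code)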
More details: http://doc.snaplogic.com/monitoring-api