Re: How do we use a pipeline parameter in a snap to pass NO account?

In my current project, I actually used the expression _accountSftp == '' ? null : _accountSftp in a File Reader snap to allow an easy switch between SFTP and SLDB.

Re: Rerun Pipeline until success

Yes, what you described can be done by a couple of pipelines utilizing the Pipeline Monitoring API. The basic idea is to capture parameters in the runtime history and rerun the failed pipeline with the captured parameters.

This solution only works for the subset of pipelines that meet the following prerequisites:

- The pipeline in question is parameterizable.
- You need to check the Capture checkbox for the pipeline in question. Selecting this checkbox sends the value of the parameter to the pipeline runtime history if the pipeline is run through a task, or through another pipeline with PipeExecute or ForEach. This means that the existing way you schedule this pipeline will need to change a little bit.

Sample pipelines:

- An example pipeline in question, with pipeline parameters.
- A pipeline used to discover one or many failed pipelines within a given folder within the last given hours. The output is a list of ruuids. In your case, you may use "Failed" as the state.
- A pipeline that takes a ruuid, reads the captured parameters from the runtime history, and reruns the same pipeline with the captured parameters.

Here's what you would see in the Dashboard for the testing.

As a general note, I am not going into the detail of how you define success.

Job-DoSomething_2017_09_19 (1).slp (2.4 KB)
Runtime-List_2017_09_19.slp (4.6 KB)
pipeRerunByRuuid_2017_09_19.slp (5.7 KB)

Re: Pipeline Parameter being last run's date

Eric, I have built a similar solution, called RollingJobRunner. There you go.
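The core idea behind both the rerun and the rolling-date solutions is to query the runtime history for a previous execution and feed what it captured into the next run. A minimal Python sketch of that query, assuming the public runtime endpoint; the org name, the parameter name lastRunStartMs, and the create_time field are placeholders/assumptions, not confirmed by the thread:

```python
# Sketch: build the runtime-history query for the most recent run in a
# given state, and turn its start time into a pipeline parameter for the
# next run. The actual HTTP call (with auth) is out of scope here.
from urllib.parse import urlencode

BASE = "https://elastic.snaplogic.com/api/1/rest/public/runtime"

def runtime_query_url(org, state="Completed", last_hours=24, limit=1):
    """Build the runtime-history query URL for the most recent run."""
    params = {"state": state, "last_hours": last_hours, "limit": limit}
    return BASE + "/" + org + "?" + urlencode(params)

def next_run_parameters(last_run):
    """Map the previous run's start time (epoch ms, field name assumed)
    to the parameter the next run would receive (name is hypothetical)."""
    return {"lastRunStartMs": str(last_run["create_time"])}

url = runtime_query_url("MyOrg")                       # "MyOrg" is a placeholder
params = next_run_parameters({"create_time": 1505836800000})
```

For the rerun case, the same query with state="Failed" yields the ruuids whose captured parameters get replayed.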
RollingJobRunner_2017_09_19.slp (11.8 KB)
Runtime-LastCompleted_2017_09_19.slp (12.0 KB)
Job-DoSomething_2017_09_19.slp (4.4 KB)

Snaplex Monitoring API - timespan

In https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1438923/Snaplex+Monitoring+APIs, it reads: "If a timespan is not set, it will return information for the last hour." What is the definition of the timespan in this context? It sounds like there is a way to set a timespan. If that is the case, how can we set one?

Re: What are the purposes of $property_map.input and $property_map.output in a *.slp file

That makes sense. This is not my pipeline and it is huge; I failed to notice what is obvious. Thanks.

What are the purposes of $property_map.input and $property_map.output in a *.slp file

I have downloaded a pipeline as a *.slp file and opened it in a text editor. I noticed that both $property_map.input and $property_map.output refer to certain snaps in the pipeline. What is the purpose of the following two objects:

$property_map.input
$property_map.output

Why are those snaps singled out, for example "Mapper_FailedCertificatesList - output0"?
Following is the snippet:

"property_map": {
    "info": null,
    "input": {
        "7536ff22-1c0d-44c6-ada7-718de5a98634_input0": {
            "label": {
                "value": "Conditional_Certificates_Offers_Length - input0"
            },
            "view_type": {
                "value": "document"
            }
        }
    },
    "settings": {
        "param_table": {
            "value":
        },
        "imports": {
            "value": []
        }
    },
    "output": {
        "ac14846d-7856-4e1d-9ca0-b519741a25b6_output0": {
            "view_type": {
                "value": "document"
            },
            "label": {
                "value": "Error 400 Union - output0"
            }
        },
        "2bfe809b-a4b7-4f9b-9007-5262ea6b0b58_output0": {
            "label": {
                "value": "Mapper_FailedCertificatesList - output0"
            },
            "view_type": {
                "value": "document"
            }
        },
        "36e341cd-bb8d-42d9-94f9-24eb72a812c8_output0": {
            "view_type": {
                "value": "document"
            },
            "label": {
                "value": "Mapper_FailedCertificatesList - output0"
            }
        }
    },

I did notice that the snap "Mapper_FailedCertificatesList - output0" has a populated io_stats object, while most other snaps in the pipeline monitor API output don't. Here's an example of the io_stats value:

[
    {
        "send_duration": 49923,
        "remote": "pa23sl-fmsv-ux02007.fsac5.snaplogic.net/172.29.66.28:8089",
        "bytes_recv": 0,
        "start_time": 1499885665780,
        "error_duration": 0,
        "bytes_sent": 1370,
        "recv_duration": 0,
        "error_count": 0,
        "type": "socket"
    },
    {
        "send_duration": 149004,
        "remote": "pa23sl-fmsv-ux02000.fsac5.snaplogic.net/172.29.65.214:8089",
        "bytes_recv": 0,
        "start_time": 1499885663392,
        "error_duration": 0,
        "bytes_sent": 1178,
        "recv_duration": 0,
        "error_count": 0,
        "type": "socket"
    }
]

This behavior can be seen in SnapRuntimeFlashlight-AllSnaps-RedeemV2.2.xlsx (80.0 KB), an aggregated view of the pipeline monitor API output of many ruuids for the same pipeline. Also attached is the .slp file: Redeem_Pl_V2.2_2017_07_12.slp.txt (212.8 KB)

P.S. After posting this, I saw this in the Dashboard, and am now reading https://doc.snaplogic.com/wiki/display/SD/Check+Pipeline+Execution+Statistics

Re: Map data to json structure

It looks like you are facing a transformation issue; normal snaps can do the job.
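Conceptually, the reshaping collects the flat status records into one document holding an array. A rough Python equivalent of that grouping, with field names taken from the sample input that follows; the target structure and the "statuses" key are assumptions, not the snaps' actual output:

```python
# Sketch: mimic what a grouping snap plus a mapper do -- collect flat
# records into a single document whose "statuses" array nests each
# record's fields (output field names are assumptions).
def group_records(records):
    """Group all incoming flat records into one nested document."""
    return {
        "statuses": [
            {"s": r["S"], "description": r["STATUSDESCRIPTION"], "priority": r["P"]}
            for r in records
        ]
    }

flat = [
    {"S": "1", "STATUSDESCRIPTION": "Testing 1.", "P": "urgent"},
    {"S": "2", "STATUSDESCRIPTION": "Testing 2.", "P": "normal"},
]
grouped = group_records(flat)
```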
Assume your input data is flat, like what's in the following picture. It would look like this in a snap preview:

[
    {
        "S": "1",
        "STATUSDESCRIPTION": "Testing 1.",
        "P": "urgent"
    },
    {
        "S": "2",
        "STATUSDESCRIPTION": "Testing 2.",
        "P": "normal"
    }
]

A pipeline like this will produce the expected output. The Group By N snap can group individual input records into an array. The Mapper snap should look like this:

Re: Not able to use toggle endpoint to enable an ultra task

@akidave, perfect. I should have read the document before this post. I am trying with the Metadata now.

Re: Not able to use toggle endpoint to enable an ultra task

Just noticed that I should use PUT instead of POST. I am using PUT now and am getting a different error.

Not able to use toggle endpoint to enable an ultra task

I am trying to use the following toggle endpoint to enable or disable an ultra task:

snap = REST POST
url = "https://elastic.snaplogic.com/api/1/rest/slsched/job/toggle%2F" + $snode_id
http entity = {"snode_id": $snode_id, "enabled": true}

I am getting:

{
    "error_entity": {
        "query_string": "",
        "path": "/api/1/rest/slsched/job/toggle/5935c7e1bc9ffa5b8bd6883e",
        "response_map": {
            "error_list": [
                {
                    "message": "Expecting body argument for endpoint toggle_job: enabled"
                }
            ]
        },
        "http_status": "400 Bad Request",
        "http_status_code": 400
    }
}

What could I be doing wrong?
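Putting the thread's pieces together, the adjusted call uses PUT rather than POST, with "enabled" in a JSON body. A sketch of assembling that request; the snode_id value is taken from the error above, auth and any further required body fields are unresolved in the thread and omitted here:

```python
# Sketch: assemble the toggle request as corrected in the thread --
# PUT (not POST) to .../slsched/job/toggle/<snode_id>, JSON body with
# snode_id and enabled. Sending it (with credentials) is not shown.
import json

def build_toggle_request(snode_id, enabled=True):
    """Return the method, URL, and JSON body for the toggle endpoint."""
    url = "https://elastic.snaplogic.com/api/1/rest/slsched/job/toggle/" + snode_id
    body = json.dumps({"snode_id": snode_id, "enabled": enabled})
    return "PUT", url, body

method, url, body = build_toggle_request("5935c7e1bc9ffa5b8bd6883e")
```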