How can we pass a dynamically assigned variable from one end of a pipeline to the other?

Our organization has decided that they only want a single pipeline for each integration we develop (no Pipeline Execute snap passing parameters to a child pipeline). They say this is due to the way their error handling works: they only want one pipeline referenced when an error occurs.

I explain the above because people will probably suggest assigning the variable in a parent pipeline and passing it as a parameter to a child.

I need to pass variables assigned earlier in a pipeline to snaps that occur later in the same pipeline. For instance, I assign a filename variable like:


I use it in a File reader snap at the beginning of the pipeline and need to use it again multiple times near the end of the pipeline. At the end of the pipeline I move the file to an archive location and need the file name to do that.

I can’t set parameters dynamically in “Edit Pipeline” or within the pipeline, and it is difficult to pass a variable from one end of the pipeline to the other. In searching this forum I find that others have this same issue but I didn’t see any solutions other than using parent/child pipelines. Is there an easy way to set a variable early in a pipeline and use it in other snaps that come later in the same pipeline? Other workarounds…?



You probably need to enable passthru on most of the snaps in between.

Using passthru results in some messy JSON that can complicate things by the time the document reaches the end of the pipeline, and some snaps don’t have a passthru option at all.

I’m not sure how I can use `jsonpath($, "$..PARAMNAME")` for this issue. Can I somehow pass the variable using this?


You define PARAMNAME at the beginning of the pipeline, use passthru where possible, and when you need PARAMNAME later, you use jsonpath to retrieve it.
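A minimal sketch of how the two pieces fit together, assuming the value is written into the document early on (the field name `PARAMNAME` and the Mapper expression here are illustrative, not from the original post):

```
Early Mapper snap:
  expression:   "test_" + Date.now().toLocaleDateTimeString({"format":"yyyyMMdd"}) + ".csv"
  target path:  $PARAMNAME

Later snap (after passthru may have nested the original document):
  jsonpath($, "$..PARAMNAME")[0]
```

The `$..PARAMNAME` path uses recursive descent, so it should find the field however deeply passthru has wrapped the original document; `jsonpath()` returns an array of matches, hence the `[0]`.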

Another possibility is to get the PARAMNAME value and then use a Copy snap with multiple output views. Then use a Join snap to bring the value back into the document stream at the later points where you need it.

Depending on what you’re trying to do later in your pipeline, the approaches that don’t involve Pipeline Execute are either awkward or not possible.

With Pipeline Execute, this becomes straightforward. Urge your organization to reconsider their decision to avoid one of our product’s most powerful and useful features.

@swright Yes, this is one of the challenging parts of SnapLogic that I faced during our SOA-to-SnapLogic migration. There are workarounds, but they make your pipeline lengthy.

  1. Use a Copy snap at the point where you have the exact file name. You may need multiple output views, depending on the requirement.
  2. Use a Gate snap just before the snap where you want to reuse the file name from the File Reader.
  3. After the Gate, either use a Mapper or use the file name directly in your File Writer snap.

Hey Scott,
As mentioned already, there are several somewhat complex options that can be used as workarounds. Those mentioned are as good as any, so I’ll refrain from adding more. Also, I agree with Patrick that it may behoove your organization to reconsider its position on the Pipeline Execute snap.

However, based solely on your example of needing a dynamically generated date stamp constant throughout your pipeline, this may be a less clunky option for you:

Try using `pipe.startTime` (or a formatted derivative of it) as part of your filename expression:

```
"test_" + pipe.startTime.toLocaleDateTimeString({"format":"yyyyMMdd"}) + ".csv"
```

I believe `pipe.startTime` remains constant throughout the pipeline run, so this may provide exactly what you need for your specific use case.

This won’t solve other pass-through scenarios, though, which is really what the title of your post is about. However, another option for those scenarios is to place your dynamic expression in a pipeline parameter and use the eval() function to evaluate it in later snaps. I have used this technique before; as long as the expression’s result doesn’t change during the run, it should work.
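As a sketch of that technique (the parameter name `filename_expr` is hypothetical): define a pipeline parameter whose value is the expression text, then evaluate it wherever the value is needed.

```
Pipeline parameter (Edit Pipeline):
  filename_expr = '"test_" + pipe.startTime.toLocaleDateTimeString({"format":"yyyyMMdd"}) + ".csv"'

Any later snap's expression field:
  eval(_filename_expr)
```

Because `pipe.startTime` is fixed for the whole pipeline run, every eval() of the parameter should yield the same file name.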

Thanks for the explanation! This sounds like a useful technique.

Thanks. I think that this is good advice.

Another good method! Thanks!

Thanks Del! I haven’t used pipe.startTime yet, but now that you’ve brought it to my attention I’m sure I will use it in the future. I’ll look into the eval() function method too, which also sounds like it will be useful.

@swright I’ve used @Supratim’s approach to this with a Gate snap. You might also be able to use a 1:1 Join; the resulting document ends up cleaner.

What we’ve done is build “parent” pipelines to generate this type of thing, then pass it to the child so it’s all available as a pipeline parameter to be reused. This also lets us parallelize multiple jobs.

e.g., we have one parent and one child pipeline for Salesforce CDC: the parent grabs the “job” record from our DB that holds the Case object, which connection to use, the storage path, and the date-range filters, and then passes them all to a single child pipeline for execution at the same time.

We’ve done this for other cases like yours as well, just to keep things clean and know where to look, and we’ve found it a bit easier to manage than chasing the JSON through passthru.