
How can we pass a dynamically assigned variable from one end of a pipeline to the other?

swright
New Contributor III

Our organization has decided they want only a single pipeline for each integration we develop (no Pipeline Execute snap passing parameters to another pipeline). They say this is because of the way their error handling works: they want only one pipeline referenced when an error occurs.

I explain the above because people will probably suggest assigning the variable in a parent pipeline and passing it as a parameter to a child.

I need to pass variables assigned earlier in a pipeline to snaps that occur later in the same pipeline. For instance, I assign a filename variable like:

"test_" + Date.now().toLocaleDateTimeString({"format":"yyyyMMdd"}) + ".csv"

I use it in a File Reader snap at the beginning of the pipeline and need to use it again multiple times near the end, where I move the file to an archive location and need the same file name to do that.
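To illustrate why simply repeating the expression later doesn't work (hypothetical timestamps): Date.now() is re-evaluated at each snap that uses it, so a run that crosses midnight gets two different names:

File Reader at 23:59:58 evaluates to "test_20240101.csv"
Archive snap at 00:00:02 evaluates to "test_20240102.csv"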

I can't set parameters dynamically in "Edit Pipeline" or within the pipeline, and it is difficult to pass a variable from one end of the pipeline to the other. Searching this forum, I see that others have the same issue, but I didn't find any solutions other than parent/child pipelines. Is there an easy way to set a variable early in a pipeline and use it in later snaps in the same pipeline? Any other workarounds?

Thanks!

12 REPLIES

swright
New Contributor III

Thanks Del! I haven't used pipe.startTime yet, but now that you've brought it to my attention I'm sure I'll use it in the future. I'll look into the eval() function method too, which also sounds useful.
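For anyone who finds this thread later, here's a minimal sketch of the pipe.startTime idea as I understand it (using the same format string as my original expression). Because pipe.startTime is fixed for the entire execution, every snap that evaluates this expression gets the identical value:

"test_" + pipe.startTime.toLocaleDateTimeString({"format":"yyyyMMdd"}) + ".csv"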

acesario
Contributor II

@swright I've used @Supratim's approach to this with a Gate snap. You might also be able to use a 1:1 Join; the resulting document ends up cleaner.
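Roughly, the pattern looks like this (the field name $filename is just an example): Copy the stream after the File Reader, compute the name once in a Mapper on one branch, then Join 1:1 so every downstream snap can reference the field instead of recomputing the date.

Mapper on the side branch, target path $filename:

"test_" + pipe.startTime.toLocaleDateTimeString({"format":"yyyyMMdd"}) + ".csv"

After the Join, later snaps (e.g., the archive move) just reference $filename.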

gmcclintock
New Contributor

What we've done is create "parent" pipelines to generate this type of value, then pass it to the child so it's available as a pipeline parameter for reuse. This also lets us parallelize multiple jobs.

E.g., we have one parent and one child pipeline for Salesforce CDC: the parent grabs the "job" record from our DB that has the case object, which connection to use, the storage path, and the date-range filters, and then passes all of those to a single child pipeline for execution at the same time.

We've done this for other cases like yours as well, just to keep things clean and know where to look, and we've found it a bit easier to manage than chasing the JSON through pass-through.
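As a rough sketch of how we wire it up (the parameter name filename and the archive path are just examples): the parent computes the value once in its Pipeline Execute snap's parameter list, and the child references it anywhere with a leading underscore.

Parent, Pipeline Execute parameter value for filename:

"test_" + pipe.startTime.toLocaleDateTimeString({"format":"yyyyMMdd"}) + ".csv"

Child, any expression-enabled field:

"/archive/" + _filename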