Forum Discussion
The `$_debug` object is only available in the output document; it isn't available for pagination because of how the paging processing works. I would suggest using `snap.out.totalCount` to keep track of your paging (noting that `snap.out.totalCount` will be `0` while the first page is being fetched).
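For example, a rough sketch of how this could look in a REST Get Snap's pagination settings (the URL, the offset query parameter, the page size of 100, and the $records field are assumptions for illustration, not from the original post):

Next URL: "https://api.example.com/records?limit=100&offset=" + snap.out.totalCount
Has next: $records.length == 100

Since snap.out.totalCount is 0 while the first page is being fetched, the offset starts at the beginning of the result set.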
- Supratim · 5 years ago · Contributor III
@swright Yes, this is one of the challenging parts of SnapLogic that I ran into during our SOA to SnapLogic migration. There are workarounds, but they make your pipeline lengthy.
- Use a Copy Snap where you have the exact name of the file. You might need multiple output views, depending on the requirement.
- Then use a Gate Snap just before the snap where you want to reuse the file name from the File Reader.
- After the Gate Snap, you can either use a Mapper or use the file name directly on your Writer Snap.
- swright · 5 years ago · New Contributor III
Another good method! Thanks!
- wpenfold · 5 years ago · Contributor
You probably need to specify passthru on most snaps.
- swright · 5 years ago · New Contributor III
Using passthru results in some messy JSON that can complicate things by the time it gets to the end, and it seems that some snaps don't have the passthru option.
I'm not sure how I can use jsonpath($, "$..PARAMNAME") for this issue. Can I pass the variable somehow using this?
Thanks.
- wpenfold · 5 years ago · Contributor
You define PARAMNAME at the beginning of the pipeline, use passthru where possible, and when you want to use PARAMNAME later, you use jsonpath to get it.
Another possibility is to get the PARAMNAME value, then use a Copy Snap with multiple outputs, and use a Join Snap to bring the value back into the pipeline at the later points where you need it.
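For example, a rough sketch of the jsonpath approach (the DATESTAMP name is just a placeholder): have an early Mapper write the value into the document (e.g. map the pipeline parameter _DATESTAMP to a target path of $DATESTAMP), enable pass-through on the snaps in between, and then pull it back out in a later expression with:

jsonpath($, "$..DATESTAMP")[0]

The descendant (..) path means you don't have to know how deeply the pass-through wrapping has nested the original field.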
- ptaylor · 5 years ago · Employee
Depending on what you’re trying to do later in your pipeline, the approaches that don’t involve Pipeline Execute are either awkward or not possible.
With Pipeline Execute, this becomes straightforward. Urge your organization to reconsider their decision to avoid one of our product’s most powerful and useful features.
- swright · 5 years ago · New Contributor III
Thanks. I think that this is good advice.
- del · 5 years ago · Contributor III
Hey Scott,
As mentioned already, there are several complex options that can be used as a workaround. Those mentioned already are as good as any, so I'll refrain from adding more. I also agree with Patrick that it may behoove your organization to reconsider their position on the Pipeline Execute Snap. However, based solely on your example of needing a dynamically generated date stamp constant throughout your pipeline, this may be a less clunky option for you:
Try using pipe.startTime (or a formatted derivative) as part of your filename expression.
"test_" + pipe.startTime.toLocaleDateTimeString({"format":"yyyyMMdd"}) + ".csv"
I believe pipe.startTime should remain constant throughout the pipeline run, and this might give you what you need for your specific use case. This won't solve other pass-through scenarios, though, which is really what the title of your post is about. However, another option for those scenarios might be to place your dynamic expression in a pipeline parameter and use the eval() function to evaluate the parameter. I have used this technique before; as long as the expression's result won't change over the course of the run, it should work.
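As a rough illustration of the eval() idea (the parameter name fname_expr is hypothetical): define a pipeline parameter fname_expr whose value is the expression text

"test_" + pipe.startTime.toLocaleDateTimeString({"format":"yyyyMMdd"}) + ".csv"

and then, wherever the file name is needed, evaluate it with

eval(_fname_expr)

Because pipe.startTime is fixed for the run, every snap that evaluates the parameter should produce the same name.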
- swright · 5 years ago · New Contributor III
Thanks Del! I haven't used pipe.startTime yet, but now that you've brought it to my attention I'm sure I will use it in the future. I'll look into the eval() function method too, which also sounds like it will be useful.
- gmcclintock · 5 years ago · New Contributor
What we've done is build “parent” pipelines to generate this type of thing; then we pass it to the child, where it's all available as a pipeline parameter to be reused. This also lets us parallelize multiple jobs.
E.g., we have one parent and one child pipeline for Salesforce CDC: the parent grabs the “job” record from our DB that holds the case object, which connection to use, the path for storage, and the date range filters, and then passes all of those to a single child pipeline for execution, all at the same time.
We've done this for other cases like yours as well, just to keep things clean and know where to look, and we've found it a bit easier to manage than chasing the JSON through pass-through.
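As a small sketch of that pattern (the parameter names run_date and file_path are just examples): the parent computes the values once, passes them in via the Pipeline Execute Snap's pipeline parameters, and the child references them directly in any expression, e.g.:

_file_path + "/test_" + _run_date + ".csv"

Since pipeline parameters are constant for the child's execution, nothing has to be carried through the document stream.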