Keep data "through" snaps which don't provide original input under $original in their output

JoeDyndale
New Contributor

SnapLogic isn’t consistent in how data is passed through various snaps. It’s often necessary to keep data obtained early in the flow (e.g. from the source) for use later (e.g. when writing to the target). However, some snaps, like the XML Parser, require that only the data to be parsed is passed as input, and they don’t support binary headers or similar mechanisms for forwarding data from one side of the snap to the other - effectively removing everything except the data they care about from the stream.
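To make the problem concrete, here’s a rough sketch (the field names are just made up for illustration) of what gets lost when such a document passes through the XML Parser:

```python
# Hypothetical document arriving at the XML Parser (field names invented for illustration).
incoming = {
    "sourceId": "ORDER-42",                         # metadata we'd like to keep for the target write
    "receivedAt": "2023-05-01T10:00:00Z",
    "payload": "<order><item>ABC</item></order>",   # only this part gets parsed
}

# Roughly what comes out the other side: just the parsed structure,
# with sourceId and receivedAt gone from the stream.
after_parser = {
    "order": {"item": "ABC"},
}
```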

There’s an enhancement request for fixing this posted somewhere on this forum, and we’ve raised it with our SnapLogic contacts as well, so hopefully the following work-around won’t be necessary for very long. Here it is:

Move the “problem” snap to a child pipeline and call it via the Pipeline Execute snap, making sure “Reuse executions to process documents” is not checked (this won’t work if it is). If needed, at the start of the child pipeline, remove any data not to be used by the “problem” snap. The Pipeline Execute snap will output the original input data under $original (as the “problem” snap should have done).
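And here’s a rough sketch of what the Pipeline Execute output then looks like (again, the field names are made up; only the $original behavior comes from the snap itself):

```python
# Hypothetical output of the Pipeline Execute snap wrapping the child pipeline
# that contains the XML Parser: the parsed result and the original input travel together.
pipeline_execute_output = {
    "order": {"item": "ABC"},                            # result produced by the child pipeline
    "original": {                                        # full input document, preserved by Pipeline Execute
        "sourceId": "ORDER-42",
        "receivedAt": "2023-05-01T10:00:00Z",
        "payload": "<order><item>ABC</item></order>",
    },
}
```

Downstream snaps can then reference e.g. $original.sourceId when writing to the target.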

1 REPLY

joel_bourgault
New Contributor II

Hello,

Very useful, thanks!

One addition though: `original` is only appended if both of the following conditions are met:

  • "Reuse executions to process documents" is *not* checked
  • "Batch size" is 1

However, this comes at the price of significantly slower execution. It’s still possible to set the Pool Size, though.

Too bad there’s no "pass-through" option for the Pipeline Execute snap.