So, what I ended up doing, and what I humbly feel is the best solution, is to take the max count from the combination of a dummy record with a zero count and the real rows (if they exist). Then I check whether maxcount > 0: if it is, I do a second select from Oracle to get the actual data (without the dummy row, of course); otherwise, I simply end the pipeline, doing nothing other than writing to our batch history table that the pipeline ran, and so on.
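For anyone who wants to picture that first Oracle Execute query, here's a rough sketch of the shape of it, as I read the pattern. The table and column names (`exception_log`, `run_date`) are placeholders, not my actual schema:

```sql
-- Union the real rows with a dummy zero-count record so the query
-- always returns exactly one row, even when there are no exceptions.
SELECT MAX(cnt) AS maxcount
FROM (
    SELECT 1 AS cnt                  -- one row per real exception, if any
    FROM   exception_log
    WHERE  run_date = TRUNC(SYSDATE)
    UNION ALL
    SELECT 0 FROM dual               -- dummy record with zero count
)
```

With the dummy row in place, the snap always emits a single document where maxcount is either 0 (no exceptions) or greater, so the downstream comparison can simply test maxcount > 0. (Depending on how Oracle returns the column name, it may show up in the pipeline as $MAXCOUNT.)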
I appreciate your feedback and the other possible solutions. The format-to-JSON and Binary to Document approach would probably work, but that's two snaps, on top of the Oracle snap, that would always need to run before my 'Is there data?' comparison, whether or not there are any exceptions. With the design I went with, although there is one additional Oracle Execute snap, that upper pipeline path will only run maybe 2-3% of the time at most, since exceptions happen very rarely.
For anyone interested, the Filter snap is there so I can toggle whether to send an email when there are no exceptions, mainly for testing purposes, and of course it's controlled by a pipeline parameter. Also, the Pipeline Execute snaps simply record when the pipeline starts and stops in our batch history table.
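The Filter expression itself is just a check against that parameter, something like this (the parameter name here is only illustrative, not the one I actually use):

```
_sendNoExceptionEmail == "true"
```

Pipeline parameter values come through as strings in SnapLogic, hence the string comparison rather than a boolean.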