07-01-2020 04:09 AM
Hi,
I am looking for some help with the DIFF Snap.
Below is the scenario I am working on:
I have two source DBs, A and B. The DIFF Snap is used to identify the records eligible for Insert, Update, and Delete, and the corresponding operations are then performed on B as the target DB.
For example, if A contains 10 records and B contains 20, DIFF will identify 10 records for deletion from B.
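To illustrate the failure mode, here is a minimal sketch (plain Python, not SnapLogic; record shape and the "id" key are illustrative assumptions) of how a key-based diff classifies records. Note that when the source set comes back empty, every target record falls into the delete bucket:

```python
# Minimal sketch of a key-based diff (illustrative, not SnapLogic's
# actual implementation). Records are assumed to be dicts keyed by "id".
def diff(source, target):
    src = {r["id"]: r for r in source}
    tgt = {r["id"]: r for r in target}
    inserts = [src[k] for k in src.keys() - tgt.keys()]
    deletes = [tgt[k] for k in tgt.keys() - src.keys()]
    updates = [src[k] for k in src.keys() & tgt.keys() if src[k] != tgt[k]]
    return inserts, updates, deletes

# If the source read silently returns nothing, every target row is
# classified for deletion:
a = []                                # source A came back empty
b = [{"id": i} for i in range(20)]    # 20 records in target B
ins, upd, dels = diff(a, b)
print(len(dels))                      # -> 20, i.e. B would be wiped
```

A diff by itself cannot tell "source is legitimately empty" apart from "source read failed", which is why a separate guard is needed.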
The problem I am facing is this: if, due to a data issue or a connection failure, no records come from source A, DIFF routes every record from B to the delete output, which ends up deleting all the data from B.
I don't want this to happen. I want my pipeline to stop/fail when such an error occurs in the source, so that the target can never end up empty.
I am using an Error Pipeline, which tracks the error fine but does not stop the pipeline.
Is there a way to track the error and stop the pipeline in such scenarios of connection failure or data issues?
Quick help would be really appreciated.
Thanks in Advance,
Payal Srivastava

07-02-2020 11:12 AM
Try using an Oracle Account that has batch size set to 1. I think you’re running into record batching, which is great for making large volume database updates more efficient, but doesn’t allow you to fail quickly in this case.
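The batching effect described above can be sketched in plain Python (names are illustrative, not SnapLogic internals): with a batch size of N, a failing record is only executed, and can only fail the pipeline, once N records have been buffered; with a batch size of 1 every record is executed, and can fail, immediately.

```python
# Sketch of batched execution (illustrative). `execute` stands in for
# the database call; errors can only surface when it runs, i.e. at
# flush time, not when a record is merely buffered.
def process(records, batch_size, execute):
    batch = []
    for r in records:
        batch.append(r)
        if len(batch) >= batch_size:
            execute(batch)   # errors surface here, once per flush
            batch = []
    if batch:
        execute(batch)       # flush the final partial batch

flushed = []
process(range(5), 2, flushed.append)
print(flushed)               # -> [[0, 1], [2, 3], [4]]
```

With `batch_size=1` every record is its own flush, so a poison record fails on the spot instead of after dozens of buffered rows.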
07-02-2020 10:41 PM
Thanks @koryknick
I tried this as well; it was the most recent thing I did. I set the batch size to 1, expecting the error to reach the Exit Snap and stop my pipeline, but unfortunately that didn't work either. I don't know if I am missing something or if SnapLogic is behaving oddly, because in the logs I can see the record flowing to the Exit Snap, yet the pipeline still does not stop 😢
07-03-2020 06:52 AM
@koryknick @Spiro_Taleski
Thanks for your valuable comments 🙂 🙏
I just want to share the good news that I was finally able to crack this.
Here are the workarounds I used to make it work:
- Updated the error pipeline to insert error records into the DB.
- Filtered for connection-specific errors.
- Used an Exit Snap with the threshold value set to 0.
- Created a separate account with a batch size of 1 for this error pipeline.
With this, my error pipeline stops the parent pipeline only for those particular errors and lets the pipeline continue on all other data failures.
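The Exit-Snap step above can be sketched outside SnapLogic as a simple guard (hypothetical names; this assumes the threshold means "fail once more than this many documents arrive", which is how it is used here with a threshold of 0):

```python
# Rough analogue of an Exit Snap with threshold 0 on the filtered
# error path: raise as soon as any error document arrives.
# All names here are illustrative, not SnapLogic APIs.
class PipelineExit(Exception):
    """Stands in for the Exit Snap failing the parent pipeline."""

def exit_on_error(error_docs, threshold=0):
    seen = 0
    for doc in error_docs:
        seen += 1
        if seen > threshold:
            raise PipelineExit(f"error document received: {doc}")

exit_on_error([])  # no filtered errors -> pipeline continues
# exit_on_error([{"reason": "connection failure"}])  # raises PipelineExit
```

Because only connection-specific errors are routed onto this path, ordinary per-record data failures never reach the guard and the parent pipeline keeps running.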
Thanks,
Payal Srivastava
07-01-2021 12:52 PM
Hi Payal @PayalS,
I am new to SnapLogic. Can you help me with a sample Snap showing how you were able to solve this issue?
Thanks
Pallavi

07-04-2020 05:01 AM
Well done @PayalS! I’m sorry that my advice didn’t get you to this sooner, but I’m very glad for you that you found such a workable solution. And that you didn’t need to update 70 pipelines!
