SnapLogic pipeline failing when triggering another pipeline using the Pipeline Execute Snap

Hi, we have a pipeline that triggers another pipeline using the Task Execute Snap, and it stopped working a few days ago.

I was told that the Task Execute Snap has been deprecated, so we have tried to run the pipeline with the Pipeline Execute Snap instead, but we are facing a lot of issues:

  1. It does not process the documents in a batch; the child pipeline executes once for every document, like a loop.
  2. We are facing deadlock issues.
  3. The pipeline keeps executing without producing a result.

Could you please let me know how to effectively execute a pipeline inside another pipeline?

Thanks in advance.
Gopi k

I believe you want to enable the “Reuse executions to process documents” setting on the Pipeline Execute Snap to make it behave the way you expect.

Thanks for your reply.
I enabled “Reuse”, but still no success. After I enabled it, the snap reported that the “child Pipeline must have one unlinked input view and one unlinked output view”. I corrected that as well by adding an unlinked input view and output view to each of the child pipelines.

But my child pipeline is still executing continuously, whereas it usually takes no more than a few minutes.

Please share your thoughts.


Can you please elaborate on what you mean by that?

My child pipeline has been executing for close to 30 minutes, whereas the whole pipeline usually takes at most one or two minutes to complete.
Please find the screenshot attached.

Please share your thoughts.


After close to two hours of execution, the pipeline failed with the error below.

The UPDATE statement is the one that updates the status in the table for all of the incoming documents.

I have not changed anything else in the pipeline other than replacing the Task Execute Snap with the Pipeline Execute Snap.

Could you please share your thoughts if any.


You may need to read through the documentation on the Pipeline Execute Snap, as it isn’t an exact replacement for the Task Execute Snap.
The deadlock behavior you are experiencing is not related to the Pipeline Execute Snap. This is a database deadlock, so it relates to how the UPDATE statement is being executed. You may wish to use “Validate Pipeline” to view the documents as they are passed to the child and determine whether the child will process the records appropriately.
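As background on the database side of this: SQL Server reports a transaction that was chosen as a deadlock victim with error number 1205, and a common application-level mitigation (separate from fixing the root cause) is to retry the statement with a short backoff. Below is a minimal, hedged sketch of that pattern in Python; the `error_number` attribute and the way the operation is supplied are illustrative assumptions, not SnapLogic or driver-specific code — a real driver such as pyodbc surfaces the error code differently.

```python
import time

# SQL Server error number for "chosen as deadlock victim" (a documented value).
DEADLOCK_ERROR = 1205

def run_with_deadlock_retry(operation, max_attempts=3, backoff_seconds=1.0):
    """Call `operation`; retry only when it raises an error flagged as a deadlock.

    `operation` is any zero-argument callable that performs the database work.
    How the deadlock code is attached to the exception is an assumption here
    (an `error_number` attribute); inspect your driver's exception for the
    real location of the error code.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as exc:
            is_deadlock = getattr(exc, "error_number", None) == DEADLOCK_ERROR
            if not is_deadlock or attempt == max_attempts:
                raise  # non-deadlock errors, or out of attempts: propagate
            time.sleep(backoff_seconds * attempt)  # back off before retrying
```

This only papers over the symptom; as noted above, the deadlock itself should still be investigated (e.g. lock ordering and indexing on the table the UPDATE targets).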

Also remember that with “Reuse executions” enabled on the Pipeline Execute Snap, the pipeline parameters passed to the child are evaluated only for the first document, which I believe differs from the Task Execute behavior.
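The difference described above can be made concrete with a small sketch. This is not SnapLogic code, just a conceptual Python illustration: without reuse, the parameter expression is re-evaluated for each document; with reuse, it is captured once from the first document and the same values are applied to every subsequent one.

```python
# Conceptual illustration only (not SnapLogic code): how parameter
# evaluation differs when a child execution is reused.

def run_per_document(docs, make_params):
    """Simulate a fresh child execution per document."""
    results = []
    for doc in docs:
        params = make_params(doc)      # re-evaluated for every document
        results.append((doc["id"], params))
    return results

def run_reused(docs, make_params):
    """Simulate a reused child execution."""
    params = make_params(docs[0])      # evaluated only for the first document
    return [(doc["id"], params) for doc in docs]
```

So if the child pipeline relies on a parameter that varies per document, reuse will silently apply the first document's value to all of them, which is worth checking when migrating from the Task Execute Snap.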

Thanks for the information. I will validate it with my pipeline.


The combination of your posts:



indicates to me that the error is within your SQL Server and has nothing to do with the Task Execute Snap being deprecated. I think the “deadlock victim” error needs to be evaluated first with your DBA or DBD before trying to fix the issue through pipeline modifications. Then move on to the Pipeline Execute Snap once the database error is fixed.

(note: I typed this up before I fully read Kory’s post, so it may be redundant; but I submitted anyway in case it helps)