Yes, a key gap in Pipeline Execute is the simple Batch function that the Execute Task has. I figured out how to simulate that functionality by adding a Group By N snap and a splitter to the sub-pipeline. This should be basic functionality for Pipeline Execute, imho.
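To make the workaround concrete, here is a minimal Python sketch of the batching behavior the Group By N snap plus splitter simulates. This is purely illustrative pseudologic, not SnapLogic's API; the function names `group_by_n` and `split` are my own.

```python
from typing import Iterable, Iterator, List

def group_by_n(docs: Iterable[dict], n: int) -> Iterator[List[dict]]:
    """Collect incoming documents into batches of at most n,
    mimicking what the Group By N snap does before the sub-pipeline."""
    batch: List[dict] = []
    for doc in docs:
        batch.append(doc)
        if len(batch) == n:
            yield batch
            batch = []
    if batch:  # emit the final, possibly short, batch
        yield batch

def split(batches: Iterable[List[dict]]) -> Iterator[dict]:
    """Fan each batch back out into individual documents,
    like the splitter inside the sub-pipeline."""
    for batch in batches:
        yield from batch
```

Each call to the sub-pipeline then handles one batch instead of one document at a time, which is what keeps the long-running executions under the timeout.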
Anyway, while this solves the timeout issue, I discovered the next issue: once you start a pipeline execution, even if your Snaplex has multiple nodes, all executions stay on the same node. So, to force SnapLogic to do workload management, I created a simple way to split the work in half and created two separate tasks that use a pipeline parameter to call different groups of data.
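For anyone wanting to replicate the split: the idea is that each task passes a different value for a pipeline parameter, and the pipeline filters to only its share of the data. A hedged sketch of that partitioning logic (the parameter name `group` and the modulo-on-id scheme are my own illustration, not anything SnapLogic-specific):

```python
from typing import List

def select_group(docs: List[dict], group: int, total_groups: int = 2) -> List[dict]:
    """Keep only the documents belonging to this task's group.
    Task 1 runs with group=0, task 2 with group=1, so the two
    tasks process disjoint halves of the data."""
    return [d for d in docs if d["id"] % total_groups == group]

docs = [{"id": i} for i in range(6)]
first_half = select_group(docs, group=0)   # ids 0, 2, 4
second_half = select_group(docs, group=1)  # ids 1, 3, 5
```

Any deterministic partition key works; the point is just that the two tasks never touch the same records.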
The problem I have now is that even when I start the two tasks a couple of minutes apart, they both land on the exact same node. I have tested multiple times, and occasionally the work does get spread across the two nodes, but it isn't consistent. I need the platform to consistently recognize that one node is at 80+% utilization and use the node that is at 5%…
Any ideas?