Limit the number of pipelines running on a node at a time

Is there an option in Snaplex to limit the number of running pipelines? I currently only see an option to limit the number of Snaps, but with multiple different pipelines running this would not be ideal; plus, I believe it's strictly for manual runs and not for tasks.

You’re right about the Snap limit – the Snaplex has a setting called Maximum Slots that prevents more than that number of Snaps from running concurrently on a node. This setting applies both to manually started pipelines and to pipelines started by Tasks. A related setting, Reserved slot %, lets you set aside a percentage of a node’s slots for interactive pipeline executions, such as when you’re building a pipeline and validating it. However, neither setting limits the number of pipelines running on a node.

Could you tell me more about why you want to limit the number of pipelines running?

Thanks for the explanation @tlikarish. The reason I’d like to limit the number of pipelines is to get a handle on available resources.
Instead of potentially crashing the node because several CPU-intensive pipelines happen to run at the same time, we would limit it to maybe 2, or at most 3, pipelines running concurrently on the node.
The maximum memory property in the Snaplex, which currently defaults to 85, doesn’t stop the node from crashing, possibly because the crashes are caused by CPU rather than memory.

There is no pre-configured way to do this, and I don’t have any great solutions to offer up, but I’d like to list a couple of options in case they’re helpful to you or someone else with these kinds of issues.

If the pipelines had a somewhat consistent number of Snaps, you could reduce Maximum Slots to 3 * average_number_of_snaps, but you’ve probably already thought of that.
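To make the arithmetic concrete, here is a minimal sketch with hypothetical numbers – the average Snap count is something you’d measure from your own pipelines, not a SnapLogic default:

```python
def slot_cap(avg_snaps_per_pipeline: int, max_concurrent_pipelines: int) -> int:
    """Rough Maximum Slots value that would (on average) allow only the
    desired number of pipelines to run at once on a node."""
    return avg_snaps_per_pipeline * max_concurrent_pipelines

# Assumption: pipelines average ~20 Snaps each, and we want at most 3 at a time.
print(slot_cap(20, 3))  # 60
```

The obvious weakness, as noted above, is that the cap drifts whenever pipelines are modified and their Snap counts change.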

Another option might be an external pipeline or script that monitors the Snaplex and the pipelines running on it via the Pipeline Monitoring API and the Snaplex Monitoring API. With those APIs you can monitor CPU usage, if you suspect that’s related, as well as the number of pipelines currently running. If you detect things getting into a bad state, you could stop pipelines until the node recovers. This might be complex to get right, though, and more work than you’d like to invest.
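A rough sketch of what such a watchdog could look like. Everything here is an assumption: the metric field names, thresholds, and the shape of the fetch/stop calls are placeholders for whatever the Pipeline Monitoring API and Snaplex Monitoring API actually return, so the API interactions are injected as plain callables to keep the decision logic separate:

```python
def node_is_overloaded(running_pipelines: int, cpu_pct: float,
                       max_pipelines: int = 3, cpu_limit: float = 90.0) -> bool:
    """Decide whether the node is in a bad state: too many pipelines
    running, or CPU at/above a chosen limit. Thresholds are illustrative."""
    return running_pipelines > max_pipelines or cpu_pct >= cpu_limit

def watchdog_once(fetch_metrics, stop_pipeline) -> bool:
    """One polling cycle.

    fetch_metrics: callable returning e.g. {"running": [ids...], "cpu_pct": 95.0}
                   (hypothetical shape -- wrap the real monitoring APIs here).
    stop_pipeline: callable that stops one pipeline runtime by id.
    Returns True if a pipeline was stopped this cycle.
    """
    metrics = fetch_metrics()
    running = metrics["running"]
    if running and node_is_overloaded(len(running), metrics["cpu_pct"]):
        # One possible policy: stop the most recently started pipeline first.
        stop_pipeline(running[-1])
        return True
    return False
```

In practice you would call `watchdog_once` from a loop with a sleep interval, and you’d likely want a smarter stop policy (priority, age) than “last in the list”.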

The new API Management feature has a Client Throttling Policy, so if there are Triggered or Ultra Tasks that you suspect are causing the load, throttling them might be something to check out.

If you haven’t already opened a support ticket, then definitely reach out and we can help debug your specific issue further.

Thanks @tlikarish. Yes, I’ve thought about using the average, but the margin before a crash is still too large for production use, and it would be a pain whenever we have to modify existing pipelines.
The other option might help, but like you said it’ll take some work, and I would rather find the underlying issue and resolve it from there.
I have a ticket open about the crashes, so I’ll follow up on that.
Thanks for the help.