
Limit the number of pipelines running on a node at a time

winosky
New Contributor III

Is there an option in the Snaplex to limit the number of running pipelines? I currently only see an option to limit the number of Snaps, but with multiple different pipelines running that wouldn't be ideal, and I believe it applies strictly to manual runs and not to Tasks.

4 REPLIES

tlikarish
Employee

You're right about the Snap limit: the Snaplex has a setting called Maximum Slots that prevents more than that number of Snaps from ever running concurrently on a node. This setting applies both to manually started pipelines and to pipelines started by Tasks. A related setting, Reserved slot %, lets you set aside a percentage of a node's slots for interactive pipeline execution, such as when you're building a pipeline and validating it. However, neither setting limits the number of pipelines running on a node.
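To make the two settings concrete, here is a minimal sketch of how Maximum Slots and Reserved slot % could divide a node's capacity. The straight-percentage arithmetic and the rounding are my assumption, not confirmed behavior; check the SnapLogic docs for how the product actually computes this.

```python
def slot_breakdown(max_slots: int, reserved_pct: int) -> dict:
    """Split a node's slot capacity into slots reserved for interactive
    (design-time) executions and slots available for everything else.

    Assumes Reserved slot % is a straight percentage of Maximum Slots,
    rounded down -- a hypothetical model of the setting, not the
    documented algorithm.
    """
    reserved = max_slots * reserved_pct // 100
    return {"reserved": reserved, "general": max_slots - reserved}

# With 100 slots and 15% reserved:
# slot_breakdown(100, 15) -> {"reserved": 15, "general": 85}
```

The key point either way: both numbers cap Snaps, not pipelines, so a few very large pipelines can still consume the whole node.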

Could you tell me more about why you want to limit the number of pipelines running?

winosky
New Contributor III

Thanks for the explanation @tlikarish. The reason I'd like to limit the number of pipelines is to get a handle on available resources.
Instead of potentially crashing the node when several CPU-intensive pipelines happen to run at the same time, we would limit it to maybe 2, or at most 3, pipelines running concurrently on the node.
The maximum memory property in the Snaplex, which currently defaults to 85%, doesn't stop the node from crashing, possibly because the crashes are CPU-related.

> Thanks for the explanation @tlikarish. The reason I'd like to limit the number of pipelines is to get a handle on available resources.
> Instead of potentially crashing the node when several CPU-intensive pipelines happen to run at the same time, we would limit it to maybe 2, or at most 3, pipelines running concurrently on the node.

There is no pre-configured way to do this, and I don't have any great solutions to offer. I'd like to list a couple of options in case that's helpful to you or someone else with this kind of issue.

If the pipelines had a somewhat consistent number of Snaps, you could reduce Maximum Slots to 3 * average_number_of_snaps, but you've probably already thought of that.
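That heuristic can be sketched in a couple of lines. The function name and signature are illustrative; the caveat in the comment is the same one as above -- it only works when Snap counts per pipeline are fairly uniform.

```python
def suggested_max_slots(avg_snaps_per_pipeline: float,
                        max_concurrent_pipelines: int) -> int:
    """Pick a Maximum Slots value so that roughly max_concurrent_pipelines
    pipelines fit on the node at once.

    Rough heuristic only: a pipeline with many more Snaps than average
    will consume several "pipeline-equivalents" of slots, and one with
    fewer will leave room for extra pipelines.
    """
    return round(max_concurrent_pipelines * avg_snaps_per_pipeline)

# Pipelines averaging 12 Snaps, capped at ~3 concurrent:
# suggested_max_slots(12, 3) -> 36
```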

Another option might be an external pipeline or script that monitors the Snaplex and the pipelines running on it via the Pipeline Monitoring API and Snaplex Monitoring API. With those APIs you can watch CPU usage, if you suspect that's related, as well as the number of pipelines currently running. If you detect things getting into a bad state, you could stop pipelines until the node recovers. This might be complex to get right, though, and more work than you'd like to invest.
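A rough shape for such a watchdog is below. The endpoint path, response fields, and auth header are placeholders, not the real Monitoring API contract -- substitute the actual URLs and fields from the SnapLogic API documentation. The decision logic is pulled out into a pure function so the thresholds are easy to test and tune.

```python
import json
import urllib.request

def should_shed_load(running_pipelines: int, cpu_pct: float,
                     max_pipelines: int = 3, cpu_limit: float = 90.0) -> bool:
    """Decide whether the node looks unhealthy: too many pipelines
    running, or CPU at/above the configured ceiling."""
    return running_pipelines > max_pipelines or cpu_pct >= cpu_limit

def poll_and_react(base_url: str, token: str) -> None:
    """One polling cycle of the watchdog. All URLs/fields are placeholders."""
    req = urllib.request.Request(
        f"{base_url}/runtime",  # placeholder, not the real Monitoring API path
        headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        status = json.load(resp)
    if should_shed_load(status["running"], status["cpu_pct"]):
        # Here you would call the stop-pipeline endpoint for the
        # lowest-priority runs until the node recovers.
        pass
```

Run a loop around `poll_and_react` on a schedule (cron, or a scheduled pipeline) rather than continuously, to keep the watchdog itself cheap.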

The new API Management feature has a Client Throttling Policy, so if there are Triggered or Ultra Tasks that you suspect are causing the load, throttling them might be something to check out.

> The maximum memory property in the Snaplex, which currently defaults to 85%, doesn't stop the node from crashing, possibly because the crashes are CPU-related.

If you haven't already opened a support ticket, then definitely reach out and we can help debug your specific issue further.

winosky
New Contributor III

Thanks @tlikarish. Yes, I've thought about using the average, but the margin for a crash is still too large for production use, and it would be a pain whenever we have to modify existing pipelines.
The other option might help, but like you said it'll take some work, and I'd rather find the underlying issue and resolve it from there.
I have a ticket open about the crashes, so I'll follow up on that.
Thanks for the help.