Maximum Slots in a Plex and performance

The default Maximum slots setting for a Snaplex is 4,000.
I have a pipeline that calls a large number of child pipelines (>1,900), and the total number of snaps combined across all the child pipelines is likely to be well over 15,000.
This pipeline is taking a very long time, so I wanted to know whether the Maximum slots setting could be a bottleneck.
Is there any way of finding out? The CPU and memory usage of the nodes don't go above 60%.

A slot roughly corresponds to a thread and the maximum number of threads-per-process for a JCC on Linux is 4096, so this maximum is intended to prevent an overload where too many threads are created.

A snap consumes one slot as long as it is running, and the slot is freed up as soon as it finishes. Presumably, you are not running 1,900 child executions simultaneously, so you are probably not going to reach the limit. Are you executing all child pipelines locally (i.e. the Snaplex setting in Pipeline Execute is blank) or are you spreading executions across other nodes in the Snaplex?
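The slot accounting described above can be pictured as a counting semaphore: each running snap holds one permit, and the permit is returned the moment the snap finishes. Here is a minimal Python sketch of that model (purely illustrative; this is not SnapLogic's actual implementation, and `run_snap` is a hypothetical stand-in for a snap's work):

```python
import threading
import time

MAX_SLOTS = 4000  # default "Maximum slots" setting for a Snaplex

# Each running snap holds one permit; a finished snap releases it.
slots = threading.Semaphore(MAX_SLOTS)

def run_snap(name, work_seconds):
    slots.acquire()                # blocks if all slots are in use
    try:
        time.sleep(work_seconds)   # stand-in for the snap's actual work
    finally:
        slots.release()            # slot is freed as soon as the snap finishes
```

The point of the model: what matters is the number of snaps *running at the same instant*, not the total snap count across all 1,900 children, so sequential child executions reuse the same slots.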

In the Dashboard, you can view the active thread count by going to a node, clicking the dropdown arrow, and selecting the ‘Additional Information’ menu item. That should open a dialog with some more stats; in particular, check the “Active Threads” count to see how high it is. If it really is close to 4,000, then it might be a bottleneck.
You can also check the parent executions to see if there are any status messages on the Pipeline Execute snap; they will indicate whether it had to wait for resources to become available. Note that you will need to bump up the limits for the JCC process before you can safely increase the Max Slots in the Snaplex settings.
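Before raising Max Slots, it is worth checking what the operating system will actually allow the JCC process. A small Python sketch for inspecting the relevant limits on a Linux node (the per-user limit is what `ulimit -u` reports; threads count against it on Linux — paths and limit names assume a typical Linux setup):

```python
import resource

# Per-user process/thread limit (what `ulimit -u` shows).
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("ulimit -u (soft/hard):", soft, hard)

# System-wide ceiling on the total number of threads.
with open("/proc/sys/kernel/threads-max") as f:
    print("kernel.threads-max:", f.read().strip())
```

If the soft limit is at or below the Max Slots value you intend to set, raise it (e.g. via `limits.conf` or the service unit) before changing the Snaplex setting.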

In general, if you’re looking for bottlenecks in your pipeline executions, look at the “Duration” column in the execution statistics window. In particular, look for long blue bars, which indicate that the snap is actively working on something (see the Check Pipeline Execution Statistics documentation for more information).

@tstack for ultra tasks … what is the slot consumption for a live ultra task that is not currently processing any documents?

The slot consumption should correspond to the number of snaps in the ultra pipeline since they will most likely all be running.

@tstack … some of our ultra pipelines may have ~50 snaps… just doing the math… in the worst case one groundplex would reach capacity after running ~80 tasks executing such pipelines.
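That back-of-the-envelope math can be written out explicitly (numbers taken from this thread: the 4,000 figure is the default Maximum slots, and ~50 snaps per ultra pipeline is the worst case mentioned above):

```python
max_slots = 4000        # default Snaplex Maximum slots
snaps_per_ultra = 50    # approximate snap count per ultra pipeline

# Each snap in a live ultra pipeline holds one slot while the task is active,
# so the slot ceiling caps the number of such tasks per node:
max_tasks_per_node = max_slots // snaps_per_ultra
print(max_tasks_per_node)  # 80 tasks before the node hits the slot ceiling
```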

How do you advise we structure our pipelines to be able to have more ultra pipeline tasks active on a groundplex?

Is it possible to change this model where every snap consumes one slot (i.e. one thread)? Or to configure the groundplex to increase capacity?

The recommended maximum number of concurrently executing pipelines on a standard node is around 5, and the recommended maximum number of concurrently executing Ultra tasks per node is 20. You should size your Snaplex accordingly.
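Given those per-node ceilings, sizing reduces to dividing the workload by the recommended maxima and taking the larger demand. A small sketch (the ceilings are from the reply above; the workload numbers in the example are hypothetical):

```python
import math

ULTRA_TASKS_PER_NODE = 20     # recommended maximum Ultra tasks per node
STD_PIPELINES_PER_NODE = 5    # recommended maximum standard pipelines per node

def nodes_needed(ultra_tasks, standard_pipelines):
    # Size the Snaplex for whichever workload demands more nodes.
    return max(math.ceil(ultra_tasks / ULTRA_TASKS_PER_NODE),
               math.ceil(standard_pipelines / STD_PIPELINES_PER_NODE))

print(nodes_needed(80, 10))  # 80 Ultra tasks + 10 standard pipelines -> 4 nodes
```

In practice you would also leave headroom for restarts and failover rather than sizing exactly to the recommended maxima.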