Converting pipelines to ultra

Suppose that I have two pipelines - one which stores the incoming document in a specified queue and the other one which picks up the document from that queue and processes it. According to my understanding, the easiest way to turn these into ultra tasks would be:

  1. eliminate the first pipeline totally (as the queue is already present in ultra)
  2. eliminate the ‘picking up the document from the queue’ parts from the second pipeline
    and either
  3. call the second pipeline using the ‘pipeline execute’ snap from a new pipeline (if snaps are not compatible with ultra).
  4. construct an ultra task using the new pipeline.
    or
  5. use the second pipeline as ultra pipeline if all snaps are compatible.

Please let me know if my understanding is correct.

Also, if some comparatively time-consuming snaps are present in the ultra pipeline (assume tasks like 10+ SQLServer Execute snaps and half a dozen REST Posts (ServiceNow create/update, other POSTs, etc.)), my understanding is that the caller will time out waiting for the response, but the ultra task will run to completion. In this case, how can the caller know whether it succeeded or not? In the original scenario that I’ve described, once the document has been put into the specified queue it is assumed that it will be picked up, and the caller gets a response immediately.

What you describe are two different design patterns. Depending on whether your needs are synchronous or asynchronous, each of your two approaches serves a different purpose.

If you take the design pattern with an Ultra Task listening for incoming requests, it accepts the incoming data, maybe does some basic validation, stores it (database, queue), and then responds to the requester indicating the message has been received and may be processed downstream. This has the benefit that the requester gets a positive response that the message was accepted and will be processed, but does not have to hang around waiting for the process to complete. A second pipeline can then be run (again potentially as an Ultra Task) with no open input, reading from the queue, which asynchronously processes the message. This can optionally send some notification/callback on successful completion.
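Outside of SnapLogic, this accept-then-process pattern can be sketched in plain Python. This is only an illustration of the flow, not SnapLogic code: the in-memory queue and worker thread stand in for the durable queue and the downstream pipeline, and the `accept` function and status codes are made up for the example.

```python
import queue
import threading

inbox = queue.Queue()    # stands in for the durable queue the Ultra Task writes to
processed = []           # records what the downstream "pipeline" completed

def accept(document):
    """Ultra Task role: validate, store, and acknowledge immediately."""
    if "id" not in document:
        return {"status": 400, "body": "missing id"}   # basic validation
    inbox.put(document)                                # store for downstream processing
    return {"status": 202, "body": "accepted"}         # requester is not kept waiting

def worker():
    """Downstream pipeline role: read from the queue and process asynchronously."""
    while True:
        doc = inbox.get()
        processed.append(doc["id"])    # the (possibly slow) real work happens here
        inbox.task_done()              # an optional completion callback would go here

threading.Thread(target=worker, daemon=True).start()

ack = accept({"id": 42})
print(ack["status"])    # 202: caller got an immediate acknowledgement
inbox.join()            # wait only so the demo can show the async work finished
print(processed)        # [42]
```

The key property is that `accept` returns before any real processing happens, which is exactly why the requester never has to wait for the slow snaps downstream.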

The other option, where you synchronously process the message with an ultra task, relies on the calling application waiting around for completion. If it times out (15 minutes), the process may still run to completion, but you’d then have to send a separate notification to report success.
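That timeout behaviour can be demonstrated with a small Python sketch: the caller gives up waiting, but the work itself still runs to completion. The thread pool stands in for the Ultra Task, and the short timeout stands in for the 15-minute limit; none of this is SnapLogic API.

```python
import concurrent.futures
import time

done = []

def slow_pipeline(doc_id):
    """Stands in for a long-running Ultra pipeline (e.g. many SQL/REST snaps)."""
    time.sleep(0.5)        # longer than the caller is willing to wait
    done.append(doc_id)    # the work completes regardless of the caller
    return doc_id

with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(slow_pipeline, 7)
    try:
        future.result(timeout=0.1)          # caller's timeout budget
        caller_saw = "success"
    except concurrent.futures.TimeoutError:
        caller_saw = "timeout"              # caller cannot tell success from failure

print(caller_saw)   # timeout
print(done)         # [7] -- the task still ran to completion
```

This is the gap described above: after a timeout the caller knows nothing about the outcome, so success has to be reported out-of-band (notification, callback, or a status endpoint the caller can poll).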

If you use a Pipeline Execute in an Ultra Task, it is optimised to have the metadata cached on the executing Snaplex node, enabling faster execution, use of non-ultra-compatible snaps (aggregation, for instance), and resilience.


Thanks for the clear explanation. One more question that I have is, does guaranteed delivery (as mentioned in the advantages of using ultra) still apply if an ultra task is used for the asynchronous scenario above?

The guarantee of delivery then becomes the responsibility of the developer, in the pipelines that process the message downstream.
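In code terms, that developer responsibility usually means acknowledging a message only after it has been processed successfully, and re-queueing it otherwise. A minimal at-least-once consumer sketch in Python (the names, the simulated failure, and the retry policy are all illustrative, not SnapLogic behaviour):

```python
import queue

q = queue.Queue()
delivered = []

def process(doc):
    """Stands in for the downstream pipeline; fails on the first attempt."""
    if doc["attempts"] == 0:
        raise RuntimeError("transient failure")
    delivered.append(doc["id"])

def consume(q, max_attempts=3):
    """At-least-once consumer: only a successful process() removes the message."""
    while not q.empty():
        doc = q.get()
        try:
            process(doc)
        except RuntimeError:
            doc["attempts"] += 1
            if doc["attempts"] < max_attempts:
                q.put(doc)    # re-queue instead of losing the message
        finally:
            q.task_done()

q.put({"id": 1, "attempts": 0})
consume(q)
print(delivered)    # [1] -- delivered despite the transient failure
```

The design choice to note is that the message is only gone from the queue once `process` succeeds (or the retry budget is exhausted), which is what "guaranteed delivery becomes the developer's responsibility" amounts to in practice.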