09-20-2017 08:11 AM
Suppose that I have two pipelines - one which stores the incoming document in a specified queue and the other one which picks up the document from that queue and processes it. According to my understanding, the easiest way to turn these into ultra tasks would be:
Please let me know if my understanding is correct.
Also, if some comparatively time-consuming snaps are present in the ultra pipeline processing (assume tasks like 10+ SQL Server Execute snaps and half a dozen REST POSTs: ServiceNow create/update, other POSTs, etc.), my understanding is that the caller will time out waiting for the response, but the ultra task will run to completion. In this case, how can the caller know whether it succeeded? In the original scenario that I’ve described, once the document has been put into the specified queue, it is assumed that it will be picked up, and the caller gets a response immediately.
09-20-2017 11:59 AM
What you describe are two different design patterns. Depending on whether your needs are synchronous or asynchronous, your two approaches will serve those different needs.
If you take the design pattern where an Ultra Task listens for incoming requests, accepts the incoming data, perhaps does some basic validation, stores it (in a database or queue), and then responds to the requester indicating that the message has been received and will be processed downstream, the requester gets a positive response that the message was accepted and will be processed, without having to wait for the process to complete. A second pipeline (again potentially an Ultra Task) can then run with no open input, reading from the queue and processing the message asynchronously. That pipeline can optionally send a notification or callback on successful completion.
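To make the asynchronous pattern concrete, here is a minimal Python sketch (not SnapLogic configuration): an "accept" step that validates, enqueues, and responds immediately, and a separate worker that drains the queue. The names `accept_request` and `worker`, and the in-memory `queue.Queue`, are illustrative stand-ins for the Ultra Task and the durable queue/database.

```python
import queue
import threading

# In-memory stand-in for the durable queue or database used in the pattern.
task_queue = queue.Queue()
results = {}

def accept_request(doc_id, payload):
    """Ultra-task side: validate, store, and respond immediately."""
    if not payload:                            # basic validation
        return {"status": "rejected", "id": doc_id}
    task_queue.put((doc_id, payload))          # durable store in a real system
    return {"status": "accepted", "id": doc_id}  # caller is not kept waiting

def worker():
    """Second pipeline: reads from the queue and processes asynchronously."""
    while True:
        doc_id, payload = task_queue.get()
        results[doc_id] = payload.upper()      # placeholder for real processing
        task_queue.task_done()
        # optionally send a notification/callback on success here

threading.Thread(target=worker, daemon=True).start()

print(accept_request("doc-1", "hello"))  # caller sees acceptance right away
task_queue.join()                        # demo only: wait for async processing
print(results["doc-1"])
```

The key property is that `accept_request` returns as soon as the message is safely stored; the caller's success signal means "accepted for processing", not "processed".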
The other option, where you synchronously process the message with an Ultra Task, relies on the calling application waiting for completion. If it times out (after 15 minutes), the process may still run to completion, but you would have to send a notification on success.
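The caller-side consequence of that timeout can be sketched as follows, assuming (purely for illustration) a long-running task submitted to a thread pool: the caller gives up after its timeout, but the work continues and must signal success through some other channel.

```python
import concurrent.futures
import time

def long_process(doc):
    # Stands in for the long chain of SQL executes and REST POSTs.
    time.sleep(0.2)
    return doc.upper()

executor = concurrent.futures.ThreadPoolExecutor()
future = executor.submit(long_process, "hello")

try:
    # The caller's timeout (15 minutes for an Ultra Task; shortened here).
    caller_result = future.result(timeout=0.05)
except concurrent.futures.TimeoutError:
    caller_result = None  # caller gave up, but the task keeps running

# The task still runs to completion; in the pattern above, this is where
# a success notification (email, webhook, status record) would be sent.
final_result = future.result()
executor.shutdown()
```

After the timeout, `caller_result` is `None` even though `final_result` eventually holds the completed value, which is why the out-of-band notification is needed.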
If you use a Pipeline Execute in an Ultra Task, it is optimised so that the metadata is cached on the executing Snaplex node, enabling faster execution, the use of non-Ultra-compatible snaps (aggregation, for instance), and resilience.
09-21-2017 04:15 AM
Thanks for the clear explanation. One more question that I have is, does guaranteed delivery (as mentioned in the advantages of using ultra) still apply if an ultra task is used for the asynchronous scenario above?
10-15-2017 08:32 AM
The guarantee of delivery then becomes the responsibility of the developer, in the pipelines that process the message downstream.
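One common way the downstream developer takes on that responsibility is to acknowledge (remove) a message only after it has been processed successfully, re-queueing it on failure. A minimal sketch, using an in-memory queue and a simulated transient failure (both illustrative, not SnapLogic APIs):

```python
import queue

q = queue.Queue()
q.put("doc-1")
processed = []
attempts = {"count": 0}

def process(doc):
    """Simulates downstream processing that fails once, then succeeds."""
    attempts["count"] += 1
    if attempts["count"] == 1:
        raise RuntimeError("transient downstream error")
    processed.append(doc)

# Developer-implemented delivery guarantee: on failure the message goes
# back on the queue instead of being lost; it is only gone once processed.
while not q.empty():
    doc = q.get()
    try:
        process(doc)
    except RuntimeError:
        q.put(doc)  # message preserved for retry
    finally:
        q.task_done()
```

In a real deployment the re-queue step would also need a retry limit or dead-letter queue, and the processing should be idempotent since a message may be handled more than once.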