What you describe are two different design patterns. Depending on whether your needs are synchronous or asynchronous, one or the other approach will be the better fit.
If you take the design pattern where an Ultra Task listens for incoming requests, it accepts the incoming data, maybe does some basic validation, stores it (database, queue), and then responds to the requester indicating the message has been received and may be processed downstream. This has the benefit that the requester gets a positive response that the message was accepted and will be processed, but does not have to hang around waiting for the processing to complete. A second pipeline can then run (again, potentially as an Ultra Task) with no open input, reading from the queue and processing the message asynchronously. This can optionally send a notification/callback on successful completion.
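To make the accept-and-acknowledge pattern concrete, here is a minimal generic sketch in Python (not SnapLogic code): a front "listener" validates and enqueues, returning immediately, while a separate worker drains the queue asynchronously. All names (`accept_request`, `worker`, `work_queue`) are hypothetical stand-ins for the two pipelines.

```python
import queue
import threading

# In-memory stand-ins for the two pipelines; in SnapLogic these would be
# separate pipelines backed by a real queue or database.
work_queue: "queue.Queue" = queue.Queue()
completed = []

def accept_request(payload: dict) -> dict:
    """Front pipeline: validate, store, and acknowledge immediately."""
    if "id" not in payload:
        return {"status": 400, "body": "missing id"}   # basic validation
    work_queue.put(payload)                            # store for later processing
    return {"status": 202, "body": "accepted"}         # requester is free to go

def worker() -> None:
    """Back pipeline: no open input, just drains the queue asynchronously."""
    while True:
        item = work_queue.get()
        if item is None:                               # shutdown sentinel
            break
        completed.append({**item, "processed": True})  # the actual downstream work
        work_queue.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

print(accept_request({"id": 1, "data": "hello"}))  # acknowledged immediately
print(accept_request({"data": "no id"}))           # rejected up front
work_queue.put(None)                               # stop the worker
t.join()
print(completed)
```

The key point is that `accept_request` returns before any real work happens; the requester only learns the message was safely stored.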
The other option, where you synchronously process the message with an Ultra Task, relies on the calling application hanging around for the completion. If the call times out (15 minutes), the process may still run to completion, but you'd then have to send a notification on success.
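That timeout behaviour can be sketched generically too: the caller waits with a deadline, and a completion callback fires even if the caller has already given up. Again this is plain Python to illustrate the idea, not SnapLogic; `long_job` and `notifications` are hypothetical names.

```python
import concurrent.futures
import time

notifications = []

def long_job(seconds: float) -> str:
    """Stands in for the synchronous pipeline doing the real work."""
    time.sleep(seconds)
    return "done"

with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(long_job, 0.2)
    # Completion notification fires whether or not the caller is still waiting.
    future.add_done_callback(lambda f: notifications.append(f.result()))
    try:
        print(future.result(timeout=0.05))  # caller's patience runs out here
    except concurrent.futures.TimeoutError:
        print("caller timed out; job keeps running")
# Leaving the with-block waits for the job; the callback has fired by now.
print(notifications)
```

The caller times out, but the job still completes and the notification is delivered, which is exactly why a success notification is needed in the synchronous pattern.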
If you use a Pipeline Execute snap in an Ultra Task, it is optimised to have the metadata cached on the executing Snaplex node, enabling faster execution, use of non-Ultra-compatible snaps (aggregation, for instance), and resilience.