Forum Discussion
Can I ask why you’re doing it this way instead of using PipeExec in the JMS pipeline?
- KumarR, 7 years ago, New Contributor
Hi Tim,
As of now I have an Ultra pipe, say X, with no open views, with Kafka snaps as both source and destination. When any error happens in the pipe, I want to capture the original Kafka message in the error pipeline (snap.original.load()) so I can reprocess the message, but snap.original.load() only works in a one-input/one-output Ultra pipeline.
To work around this, I want to keep the source Kafka snap in Ultra pipe X and, as the next step, call another Ultra pipe Y (with open input/output views) via a REST Post snap. That way I can get the original message of pipe Y in the error pipeline whenever an error occurs in pipe Y.
But high throughput is key with Kafka, and the REST Post snap is not processing documents fast enough, even with parallel processing.
- tstack, 7 years ago, Former Employee
Are you using the Confluent Kafka snaps?
Are you sure it’s the REST Post snap and not the Ultra pipeline that is the bottleneck here? How many instances of the Ultra are running?
- KumarR, 7 years ago, New Contributor
I tried sending the payload to the Ultra pipe from JMeter; JMeter was able to send ~6k docs per minute to the Ultra pipe.
- KumarR, 7 years ago, New Contributor
@tstack
@skidambi
@cstewart
Is there any workaround for this? I can’t find a way to get the Kafka snap’s message in the error pipe when the main Ultra pipe (no input, no output, unlinked views) fails, so I’m using another Ultra pipe with one input and one output and sending the Kafka message to it, which lets me get the payload in the error pipe.
But parallel processing is not working while calling this Ultra pipe from the REST Post snap.
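For reference, the requirement being worked around here, keeping the original Kafka record available for reprocessing when downstream processing fails, maps to a common consumer-side pattern outside SnapLogic as well. Below is a minimal sketch using the plain Java kafka-clients library, not the SnapLogic snaps themselves; the topic names ("source-topic", "source-topic.retry"), group id, and the process() stub are hypothetical placeholders for illustration only.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class RetryOnErrorConsumer {

    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        consumerProps.put("group.id", "retry-demo");                 // assumed group id
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("enable.auto.commit", "false");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {

            consumer.subscribe(Collections.singletonList("source-topic"));   // assumed topic

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> rec : records) {
                    try {
                        process(rec.value());   // stand-in for the work pipe Y would do
                    } catch (Exception e) {
                        // The original record is still in hand here, so it can be
                        // forwarded unchanged to a retry topic for later reprocessing.
                        producer.send(new ProducerRecord<>("source-topic.retry",   // assumed topic
                                rec.key(), rec.value()));
                    }
                }
                // Commit only after every record in the batch has been either
                // processed or parked on the retry topic.
                consumer.commitSync();
            }
        }
    }

    private static void process(String payload) {
        // Placeholder for real transformation/delivery logic.
        if (payload == null || payload.isEmpty()) {
            throw new IllegalArgumentException("empty payload");
        }
    }
}

The design point mirrors what snap.original.load() is being used for above: the error path forwards the record exactly as consumed (key and value untouched), and offsets are committed only once each record has been handled one way or the other, so nothing is lost on failure.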