06-07-2024 04:54 AM - edited 06-07-2024 05:05 AM
I have a requirement to process 1000k (1 million) records from an SLDB (.csv) file, process them in parallel using Pipeline Execute, and push all 1000k records to the target system by preparing a CSV file in SnapLogic and sending that file to the target system.
Child pipeline:
In the child pipeline, I'm invoking another system over HTTP to get a response and map the required fields.
I have tried different approaches, but they all take too long. Please help me process this many records more efficiently.
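For context, the pattern I'm trying to achieve looks roughly like this plain-Python sketch (not SnapLogic code): split the records into batches and hand each batch to a parallel worker, the way Pipeline Execute with a pool size fans out to child pipelines. Here `call_target_system` is only a hypothetical stand-in for the child pipeline's HTTP call, and the batch/pool sizes are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

def call_target_system(batch):
    # Stand-in for the child pipeline's HTTP call:
    # pretend the response lets us map the required fields per record.
    return [{"id": rec["id"], "status": "ok"} for rec in batch]

def chunk(records, size):
    # Yield fixed-size batches so each worker handles a batch,
    # rather than making one HTTP call per record.
    for i in range(0, len(records), size):
        yield records[i:i + size]

records = [{"id": n} for n in range(1000)]  # small stand-in for the 1000k rows

# max_workers plays the role of the Pipeline Execute pool size.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = []
    for batch_result in pool.map(call_target_system, chunk(records, 100)):
        results.extend(batch_result)

print(len(results))  # every record accounted for
```

My question is essentially how to get this batching/fan-out behavior in SnapLogic so the per-record HTTP overhead stops dominating the runtime.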
I appreciate any help you can provide.