Processing 1000K records of CSV data

New Contributor

I have a requirement to process 1000K records from a source (.csv) file, process them in parallel using Pipeline Execute, and push all 1000K records to the target system by preparing a CSV file in SnapLogic and sending that file to the target system.

Child pipeline: 

In the child pipeline, I'm invoking another system over HTTP to get a response and map the required fields onto each record.
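For reference, the logic I'm trying to achieve looks roughly like the sketch below (outside SnapLogic, in plain Python): read the CSV, fan the per-record lookups out across a worker pool so slow responses overlap, then collect the enriched rows. The `enrich` function is a hypothetical stand-in for the child pipeline's HTTP call; the field names and pool size are assumptions, not my actual configuration.

```python
import csv
import io
from concurrent.futures import ThreadPoolExecutor

def enrich(record):
    # Placeholder for the per-record HTTP lookup the child pipeline performs;
    # in reality this would call the other system and map fields from its response.
    record["status"] = "enriched"
    return record

def process_batch(records, workers=20):
    # Overlap the (slow) lookups across a thread pool instead of one at a time
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(enrich, records))

# Tiny stand-in for the 1000K-row source CSV
sample = io.StringIO("id,name\n1,a\n2,b\n3,c\n")
rows = list(csv.DictReader(sample))
out = process_batch(rows)
print(len(out), out[0]["status"])
```

The idea is the same as what I'm attempting with Pipeline Execute: split the input into batches and run the lookups concurrently rather than record by record.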

I have tried several approaches, but they all take too long. Please help me process this many records more efficiently.

I appreciate any help you can provide.