
Processing 1000k CSV data

satya
New Contributor

I have a requirement to process 1000k (one million) records from an sldb (.csv) file, process them in parallel using Pipeline Execute, and push all 1000k records to the target system by preparing a CSV file in SnapLogic and sending that file to the target system.
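To make the setup concrete, here is a rough sketch of the pattern I mean. It is plain Python rather than SnapLogic, and the batch size, pool size, file names, and output fields are only assumptions for illustration: the parent reads the 1000k-row CSV in batches and hands each batch to a parallel worker, the way Pipeline Execute does with a pool size greater than 1.

```python
import csv
from concurrent.futures import ThreadPoolExecutor

BATCH_SIZE = 10_000   # assumed number of rows per child invocation
POOL_SIZE = 8         # assumed number of parallel child runs

def read_batches(path, batch_size=BATCH_SIZE):
    """Yield the CSV in fixed-size chunks instead of loading all rows at once."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        batch = []
        for row in reader:
            batch.append(row)
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:
            yield batch

def process_batch(rows):
    """Stand-in for one child-pipeline run (HTTP call + field mapping per row)."""
    return [{"id": r.get("id"), "status": "processed"} for r in rows]  # placeholder mapping

def run(in_path, out_path):
    """Fan the batches out to a worker pool and collect one output CSV."""
    with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
        mapped_batches = pool.map(process_batch, read_batches(in_path))
        with open(out_path, "w", newline="") as out:
            writer = csv.DictWriter(out, fieldnames=["id", "status"])
            writer.writeheader()
            for batch in mapped_batches:
                writer.writerows(batch)
```

In the real pipeline, process_batch corresponds to one execution of the child pipeline described below.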

Child pipeline: 

In the child pipeline, I'm invoking another system over HTTP to get a response and map the required fields.
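Roughly, each child run does the equivalent of the sketch below. Again this is only illustrative Python: the URL, payload shape, and response field names are assumptions, not the actual endpoint I'm calling.

```python
import requests

TARGET_URL = "https://example.com/api/lookup"  # assumed endpoint, not the real one

def call_target_and_map(record):
    """Send one record over HTTP and keep only the fields needed downstream."""
    resp = requests.post(TARGET_URL, json=record, timeout=30)
    resp.raise_for_status()
    body = resp.json()
    # Field names below are assumptions for illustration only.
    return {
        "source_id": record.get("id"),
        "lookup_value": body.get("value"),
        "status": body.get("status"),
    }
```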

I have tried different approaches, but they all take too long. Please help me process this many records more efficiently.

I appreciate any help you can provide.

 

 
