I am trying to insert 70K records in a loop until a condition is false. The looping logic works fine, but each cycle takes about 1 hour to insert the 70K records. PFA the pipeline I am executing. Is there a way to perform the insertion faster? Any tips would be helpful.
Hi Rajesh, I have the batch size and pool size in the Pipeline Execute snap set to 1. The total record count for each loop cycle is 70K. Any tips on what the batch size and pool size should be so the pipeline execution completes sooner? Please let me know.
You can filter your records using a Filter snap before the Redshift snap, and then use the Bulk Load snap to load all of the filtered records into the target table. Batch size is irrelevant for the Bulk Load snap; it just loads all the records at once.
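As a rough illustration of why batch size 1 is so slow compared to a bulk load, here is a minimal sketch (not SnapLogic- or Redshift-specific; `sqlite3` stands in for the target database, and the table name `t` is hypothetical). Committing one row at a time pays per-statement and per-commit overhead 70,000 times, while a single batched insert amortizes that cost across the whole set:

```python
import sqlite3

# 70K sample rows, mirroring the record count per loop cycle in the post.
ROWS = [(i, f"name_{i}") for i in range(70_000)]

def insert_one_at_a_time(conn):
    """Insert and commit row by row, as with batch size 1."""
    cur = conn.cursor()
    for row in ROWS:
        cur.execute("INSERT INTO t VALUES (?, ?)", row)
        conn.commit()  # per-row commit overhead, 70,000 times

def insert_batched(conn):
    """Insert all rows in one statement and commit once."""
    cur = conn.cursor()
    cur.executemany("INSERT INTO t VALUES (?, ?)", ROWS)
    conn.commit()  # a single commit for the whole set

def run(fn):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
    fn(conn)
    count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return count

print(run(insert_one_at_a_time))  # both load the same 70,000 rows;
print(run(insert_batched))        # the batched path is dramatically faster
```

On a real network-attached database the gap is even larger, because each per-row statement also pays a network round trip. The Redshift Bulk Load snap goes further still: it stages the data and loads it in one operation rather than issuing INSERT statements at all.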
I already have a filter condition before the Bulk Load snap to restrict records. It has still been executing for 43 hours. The total record count in the target table is close to 2.5 million records.