Forum Discussion
Personally, I would simply read from the Excel file and load it into the database without trying to “chunk” the file. Your database account should have a “batch size” setting that commits records in checkpoints. If there is a concern that the pipeline may error out or time out before completing, you can use a pipeline parameter to set the “start row” property on the parser, so you can recover from the failure point without reprocessing records that have already been loaded into the table.
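Outside of SnapLogic, the same idea can be sketched roughly like this: stream rows from the Excel file into a database, commit every N rows, and accept a “start row” parameter so a rerun skips rows that were already committed. This is only an illustrative sketch, assuming openpyxl and sqlite3; the file name, table, and columns are placeholders, not anything from the original pipeline.

```python
# Sketch: streamed Excel-to-database load with checkpoint commits and a
# resumable "start row" parameter. Assumes openpyxl and sqlite3; the file,
# table, and columns are placeholders.
import sqlite3
from openpyxl import load_workbook

BATCH_SIZE = 1000   # commit every N rows (the "batch size" on the DB account)

def load_excel(path: str, start_row: int = 1) -> None:
    wb = load_workbook(path, read_only=True)   # stream rows, don't load the whole file
    ws = wb.active
    conn = sqlite3.connect("target.db")
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS target (col_a TEXT, col_b TEXT)")

    loaded = 0
    # min_row=2 skips the header; start_row lets a rerun skip already-loaded rows
    for i, row in enumerate(ws.iter_rows(min_row=2, values_only=True), start=1):
        if i < start_row:
            continue
        cur.execute("INSERT INTO target (col_a, col_b) VALUES (?, ?)", (row[0], row[1]))
        loaded += 1
        if loaded % BATCH_SIZE == 0:
            conn.commit()                      # checkpoint: rows up to here survive a failure
            print(f"committed through source row {i}")
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load_excel("input.xlsx", start_row=1)
```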
Looping is a procedural concept… SnapLogic works in data streams. However, if you feel strongly about chunking the data load, you can use Pipeline Execute and call a child pipeline to process a sub-portion of the input file. Keep in mind that each child execution will re-read the file from the beginning to reach the record number it starts from.
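For illustration only, the parent/child chunking pattern looks roughly like the sketch below: a “parent” loops over chunk boundaries and a “child” handles one slice per call, re-reading the file and skipping forward each time (the overhead noted above). The chunk size and the `process_record` stub are placeholders, not SnapLogic configuration.

```python
# Sketch of the Pipeline Execute chunking approach, assuming openpyxl.
from openpyxl import load_workbook

CHUNK_SIZE = 5000

def process_record(row) -> None:
    pass                                 # placeholder for the real per-record work

def child_pipeline(path: str, start_row: int, chunk_size: int) -> int:
    """Process one chunk of the file; returns how many rows it handled."""
    wb = load_workbook(path, read_only=True)
    ws = wb.active
    handled = 0
    for i, row in enumerate(ws.iter_rows(min_row=2, values_only=True), start=1):
        if i < start_row:
            continue                     # skipping forward still re-reads earlier rows
        if handled >= chunk_size:
            break
        process_record(row)
        handled += 1
    return handled

def parent_pipeline(path: str) -> None:
    start = 1
    while True:
        n = child_pipeline(path, start, CHUNK_SIZE)
        if n == 0:
            break                        # no more rows to process
        start += n

if __name__ == "__main__":
    parent_pipeline("input.xlsx")
```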
- mithsrini, 6 years ago, New Contributor II
Kory,
Thanks for your support. I used Pipeline Execute and GroupByN to make the data groups, and it works wonderfully, except that I am not able to gather the response data that comes back into a single File Writer output, since CSV and XLSX do not support the “append” action.
Thanks,
Mithila JT
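For reference, the append behavior that the File Writer lacks for CSV/XLSX can be mimicked outside SnapLogic by writing each group’s response to one file in append mode, emitting the header only on the first write. This is a minimal, hypothetical sketch in Python; the column names and response shape are made up.

```python
# Sketch: accumulate per-group response records into one CSV by appending,
# writing the header only if the file doesn't exist yet.
import csv
import os

def append_responses(path: str, responses: list[dict]) -> None:
    fieldnames = ["group_id", "status", "message"]   # placeholder columns
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if write_header:
            writer.writeheader()
        writer.writerows(responses)

# Example: append each batch's response as it arrives
append_responses("responses.csv", [{"group_id": 1, "status": "ok", "message": ""}])
```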