07-07-2020 12:00 AM
07-07-2020 12:03 AM
Did you look at the Excel Parser “Start row” and “End row” properties?
07-07-2020 12:23 AM
Sure, I will try that. Thanks for the quick response.
Thanks,
Mithila
07-07-2020 12:46 AM
But that still pulls all the records in the Excel file. Basically, how do I read all the rows repeatedly by looping? I want to chunk the data load, but I still need to read all of the data.
07-07-2020 01:02 AM
Personally, I would simply read from the Excel file and load into the database without trying to “chunk” the file. Your database account should have a “batch size” configuration that provides record checkpoint commits. If there is a concern that the pipeline may error or time out before it completes, you can use a pipeline parameter to set the “Start row” property on the parser, so you can recover from the failure point without reprocessing records that have already loaded into the table.
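To make the idea concrete, here is a minimal Python sketch of the same pattern outside SnapLogic, using openpyxl and sqlite3. The table name `records`, the columns `col_a`/`col_b`, and the two-column sheet layout are assumptions for illustration only; the point is the batch-sized checkpoint commits plus a configurable start row for resuming after a failure.

```python
# Hedged sketch: bulk-load an Excel sheet with checkpoint commits and a
# resumable start row. Table/column names below are hypothetical.
import sqlite3
from openpyxl import load_workbook

BATCH_SIZE = 1000   # analogous to the database account's "batch size"
START_ROW = 2       # analogous to the parser's "Start row" (row 1 = header)

def load_excel(path, db_path, start_row=START_ROW):
    wb = load_workbook(path, read_only=True)
    ws = wb.active
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS records (col_a TEXT, col_b TEXT)")
    pending = 0
    for row_num, row in enumerate(
            ws.iter_rows(min_row=start_row, values_only=True), start=start_row):
        cur.execute("INSERT INTO records (col_a, col_b) VALUES (?, ?)", row[:2])
        pending += 1
        if pending >= BATCH_SIZE:
            conn.commit()                                  # checkpoint commit
            print(f"committed through row {row_num}")      # recovery point
            pending = 0
    conn.commit()
    conn.close()
```

On a failed run, the last “committed through row N” message tells you what value to pass as `start_row` (or, in SnapLogic, as the pipeline parameter feeding “Start row”) on the retry.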
Looping is a procedural concept… SnapLogic works in data streams. However, if you feel strongly about chunking the data load, you can use Pipeline Execute and call a child pipeline to process a sub-portion of the input file. Keep in mind that the file will be read multiple times, since each child sub-process has to skip ahead to the particular record number it starts from.
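For comparison, here is a hedged sketch of the chunked, parent/child approach in plain Python. The chunk size and function names are assumptions; the sketch exists mainly to show the caveat above: every chunk re-opens the workbook and scans forward to its own start row.

```python
# Hedged sketch: a "parent" driver handing row ranges to a "child" processor,
# mirroring Pipeline Execute calling a child pipeline per chunk.
from openpyxl import load_workbook

CHUNK_SIZE = 5000

def process_chunk(path, start_row, end_row):
    """Stand-in for a child pipeline handling rows start_row..end_row."""
    wb = load_workbook(path, read_only=True)   # the file is re-read per chunk
    ws = wb.active
    for row in ws.iter_rows(min_row=start_row, max_row=end_row, values_only=True):
        pass  # load this slice into the target table

def process_in_chunks(path, total_rows, chunk_size=CHUNK_SIZE, first_data_row=2):
    """Stand-in for the parent pipeline driving one child call per chunk."""
    start = first_data_row
    while start <= total_rows:
        end = min(start + chunk_size - 1, total_rows)
        process_chunk(path, start, end)
        start = end + 1
```

The repeated `load_workbook` call inside `process_chunk` is the cost you pay for chunking this way, which is why the single streaming load with checkpoint commits is usually the simpler choice.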