07-07-2020 12:00 AM
07-07-2020 12:03 AM
Did you look at the Excel Parser "Start row" and "End row" properties?
07-07-2020 12:23 AM
Sure, I will try that. Thanks for the quick response.
Thanks,
Mithila
07-07-2020 12:46 AM
But that still reads all the records in the Excel file at once. Basically, how do I process all the rows repeatedly in a loop? I want to chunk the data load, but I still need to read all the data.
07-07-2020 01:02 AM
Personally, I would simply read from the Excel file and load into the database without trying to "chunk" the file. Your database account should have a "batch size" configuration that provides record checkpoint commits. If there is a concern that the pipeline may error or time out before completing, you can use a pipeline parameter to set the "Start row" property on the parser so you can recover from the failure point without reprocessing records that have already been loaded into the table.
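To make the checkpoint/recovery idea concrete, here is a plain-Python sketch (not SnapLogic itself) of streaming all rows once, committing in batches, and tracking the last committed row so a rerun can resume via a "start row"-style parameter. The function and variable names are hypothetical stand-ins, not SnapLogic APIs.

```python
def load_with_checkpoints(rows, batch_size, start_row=0):
    """Stream rows once, committing every batch_size records.

    start_row plays the role of the parser's "Start row" pipeline
    parameter: on a rerun, rows before it are assumed already loaded.
    Returns the checkpoint (count of committed rows) to use as the
    next run's start_row if this run is interrupted.
    """
    committed = start_row          # rows already in the table from a prior run
    batch = []
    for row in rows[start_row:]:
        batch.append(row)
        if len(batch) == batch_size:
            # a real pipeline would commit the batch to the database here
            committed += len(batch)
            batch = []
    if batch:                      # flush the final partial batch
        committed += len(batch)
    return committed
```

For example, if a first run died after committing 4 of 10 rows, the rerun would call `load_with_checkpoints(rows, batch_size, start_row=4)` and skip straight past the already-loaded records.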
Looping is a procedural concept; SnapLogic works in data streams. However, if you feel strongly about chunking the data load, you can use Pipeline Execute and call a child pipeline to process a sub-portion of the input file. Keep in mind that it will read the file multiple times to get to the particular record number each child sub-process starts from.
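As a rough illustration of the Pipeline Execute pattern described above (again in plain Python, not SnapLogic): the parent fires one "child" per chunk, and each child must re-read the file from the top to skip ahead to its start row. The returned read count makes the repeated-read cost visible. All names here are illustrative.

```python
def child_pipeline(rows, start, chunk_size):
    # A real child pipeline would re-open and re-parse the Excel file
    # here, skipping to `start` -- that re-parse is the hidden cost.
    return rows[start:start + chunk_size]

def parent_pipeline(rows, chunk_size):
    """Invoke a child per chunk; return (all processed rows, file reads)."""
    reads = 0
    processed = []
    for start in range(0, len(rows), chunk_size):
        processed.extend(child_pipeline(rows, start, chunk_size))
        reads += 1                 # the source file is parsed once per child
    return processed, reads
```

With 10 rows and a chunk size of 4, every row is processed exactly once, but the file is parsed three times, which is the trade-off the reply warns about.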