Head/Tail snap dynamically setting Offset value

Hello,

I am working on loading data from a big Excel file into a database. I want to use the Head/Tail snaps and dynamically take the “Documents offset*” value from upstream. Please suggest if there is an alternative to this approach.

Thanks,
Mithila

Did you look at the Excel Parser “Start row” and “End row” properties?

Sure, I will try that. Thanks for the quick response.

Thanks,

Mithila

But that only reads a fixed range of records from the Excel file. Basically, how can I repeatedly read all the rows in a loop? I want to chunk the data load, but I still need to read all of the data.

Personally, I would simply read from the Excel file and load into the database without trying to “chunk” the file. Your database account should have a “batch size” configuration that provides record checkpoint commits. If there is a concern that the pipeline may error or time out before completing, you can use a pipeline parameter to set the “Start row” property on the parser so you can recover from the fail point without reprocessing records that have already loaded into the table.
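The recovery idea above can be sketched in plain Python. This is a hypothetical illustration, not SnapLogic code: `load_rows`, `start_row`, and `batch_size` are made-up names standing in for the parser's “Start row” property and the database account's batch-size setting.

```python
def load_rows(rows, start_row=0, batch_size=3):
    """Load rows beginning at start_row, committing every batch_size rows.

    Returns (committed_rows, next_start_row). If a run fails, re-running
    with start_row=next_start_row resumes from the last checkpoint
    without reprocessing rows that were already committed.
    """
    committed = []
    batch = []
    next_start = start_row
    for index, row in enumerate(rows):
        if index < start_row:          # skip rows loaded by a previous run
            continue
        batch.append(row)
        if len(batch) == batch_size:   # checkpoint commit every batch_size rows
            committed.extend(batch)
            next_start = index + 1     # recovery point for a re-run
            batch = []
    if batch:                          # commit any trailing partial batch
        committed.extend(batch)
        next_start = start_row + len(committed)
    return committed, next_start
```

The point is that the checkpoint (`next_start`) advances only on commit, so a crash mid-batch never skips uncommitted rows on the retry.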

Looping is a procedural concept… SnapLogic works in data streams. However, if you feel strongly about chunking the data load, you can use Pipeline Execute and call a child pipeline to process a sub-portion of the input file. Keep in mind that it will be reading the file multiple times to get to the particular record number you are starting from in each child sub-process.
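To make the caveat concrete, here is a hypothetical Python sketch (not SnapLogic's Pipeline Execute API) of why chunking re-reads the file: each simulated child must scan from row 0 just to reach its starting record, so the total rows scanned across all children grows well beyond the file size.

```python
def child_run(rows, offset, chunk_size):
    """Simulate one child pipeline: scan from the start of the file up to
    offset, then process the next chunk_size rows.
    Returns (rows_scanned, chunk_processed)."""
    scanned = 0
    chunk = []
    for index, row in enumerate(rows):
        scanned += 1                   # every row before the offset is still read
        if index >= offset:
            chunk.append(row)
            if len(chunk) == chunk_size:
                break
    return scanned, chunk

def parent_run(rows, chunk_size):
    """Simulate the parent pipeline: launch one child per chunk and total
    the rows scanned across all child runs."""
    total_scanned = 0
    chunks = []
    for offset in range(0, len(rows), chunk_size):
        scanned, chunk = child_run(rows, offset, chunk_size)
        total_scanned += scanned
        chunks.append(chunk)
    return total_scanned, chunks
```

For a 9-row file split into chunks of 3, the three children scan 3 + 6 + 9 = 18 rows to process 9, which is the overhead the answer above warns about.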

Kory,

Thanks for your support. I used Pipeline Execute and Group By N to make the data groups. Wonderful! The only issue is that I am not able to gather the response data returned by the File Writer, since the CSV and XLSX formats do not support the ‘append’ action.

Thanks,

Mithila JT

Per File Writer snap documentation: Append is supported for file, FTP, FTPS and SFTP protocols only.

I’m not sure what you mean that you can’t gather the response data returned by the File Writer. The output view from the File Writer snap (once enabled) provides a document with information on the filename, result, and original binary header data.

If you can provide a simple example pipeline with the issues you are facing, it might make it more clear to me so I can help further.