I am trying to use the Postgres Bulk Load snap, but my table has a serial column (id). I tried a couple of things, but the snap fails with various errors because my input data does not include the id column. I was expecting the snap to behave like the Postgres COPY command. I would like to know how others have used the Postgres Bulk Load snap in such cases.
Please let me know if anyone has a tip to share here.
I haven’t encountered this problem, but the first thing that comes to mind: why don’t you add the ID to your input data?
You can easily do this with a Mapper snap using the expression “snap.in.totalCount”. This counts how many records have passed through the Mapper; simply map that value into the desired field.
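A minimal sketch of what that Mapper configuration might look like (the target field name `$id` is an assumption based on the table described above):

```
# Mapper snap — Mapping table (sketch)
Expression           Target path
snap.in.totalCount   $id
```

This gives each document a sequential number within a single pipeline run, which is the limitation discussed below.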
Here is a link from the SnapLogic documentation:
Thanks for the response, Pero.
However, this solution would cause duplicate entries to be inserted into the serial id column across multiple runs of the pipeline. The Postgres COPY command allows you to avoid this: you leave the id column out and explicitly specify the names of the other columns in the command, and Postgres fills the serial column from its sequence.
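For reference, here is roughly what that COPY invocation looks like (table and column names are hypothetical):

```sql
-- The serial column "id" is omitted from the column list,
-- so Postgres populates it from the column's sequence automatically.
COPY my_table (name, email)
FROM '/tmp/data.csv'
WITH (FORMAT csv, HEADER);
```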
I tried the same thing using the Columns field in the Postgres Bulk Load snap: I listed the other columns explicitly and left the serial column out. But it throws errors, as it still tries to load the serial id column incorrectly.
For the sake of discussion, assuming that adding the extra field does resolve your issue:
If you are concerned about duplicate ids, you can always do a quick lookup on the table to get the most recent id, then use snap.in.totalCount + the looked-up value.
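A sketch of the lookup step that suggestion implies (table and alias names are assumptions; the result would then be added to snap.in.totalCount in a downstream Mapper):

```sql
-- Fetch the current highest id once, before the load;
-- COALESCE covers the case where the table is still empty.
SELECT COALESCE(max(id), 0) AS last_id
FROM my_table;
```

Note that this is only safe if no other pipeline or client is inserting into the table concurrently; the serial sequence itself does not have that limitation.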
Thanks Pero for your helpful suggestions.
It should not be necessary to include the serial column’s data in the input document.
I was able to fix this problem by doing the following…
- specified the list of columns (excluding the serial id column) in the Columns list in the snap
- selected the ‘Text’ option for the Format field in the snap
- unchecked the ‘Header provided’ setting in the snap.
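In plain COPY terms, the settings above correspond roughly to the following (table and column names are hypothetical; the snap streams the data rather than reading a server-side file):

```sql
-- Explicit column list with the serial "id" omitted,
-- text format, and no header row in the input.
COPY my_table (name, email)
FROM STDIN
WITH (FORMAT text);
```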