06-03-2020 02:11 PM
Hi SnapLogic gurus,
Quick question regarding the PostgreSQL Bulk Load snap: when the input is binary (the output of a File Reader reading a CSV), does the psql command also need to be set up on the nodes for it to work? Something like the SQL Server Bulk Load snap, which relies on bcp.exe.
Thanks,
EmEm
06-04-2020 11:01 AM
The PostgreSQL Bulk Load snap uses the COPY command available in PostgreSQL. COPY runs over JDBC like the regular PostgreSQL snaps, so no additional setup is required to use the snap. The same is true for the Redshift bulk snaps.
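Conceptually, the snap streams the incoming CSV into a statement along these lines (a sketch only, not the exact statement the snap generates; staging_table is a placeholder name):

-- Sketch of the COPY form issued over JDBC; staging_table is a placeholder
COPY staging_table FROM STDIN WITH (FORMAT csv, HEADER true);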
06-05-2020 10:02 AM
Thanks akidave, a few more questions…
Is there a certain format the CSV file needs to follow? e.g. quoted characters, row terminator LF or CRLF?
Since I’m using a CSV, it should be OK not to specify any columns, right (“Header provided” is checked)? Or does that only apply to the streaming document input?
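For example, would the file need to match PostgreSQL’s documented CSV defaults, something like this (staging_table is just a placeholder; as far as I can tell COPY FROM accepts LF, CR, or CRLF line endings):

-- PostgreSQL’s CSV defaults: comma delimiter, double-quote quoting
COPY staging_table FROM STDIN WITH (FORMAT csv, HEADER true, DELIMITER ',', QUOTE '"');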
06-05-2020 12:47 PM
How does the COPY command use the headers when the format is CSV?
I’m getting an error on column record_id, which is the first column and a serial column, because my CSV doesn’t include record_id and my table’s columns start with record_id, then effective_timestamp.
To allow the serial column to be filled in automatically, should it be omitted from the column list in the COPY command? e.g.:
COPY table_name (column_names) FROM 'myfile' WITH (FORMAT csv)
…myfile row 2
…myfile row 3
…
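As a concrete (hypothetical) sketch for the table above, the serial column is simply left out of the column list, and PostgreSQL fills it from its sequence since COPY applies column defaults to omitted columns; the columns after effective_timestamp are placeholder names:

-- record_id is serial, so it is omitted and defaults to nextval()
COPY my_table (effective_timestamp, col_3, col_4)
FROM STDIN WITH (FORMAT csv, HEADER true);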
06-05-2020 12:50 PM
I’m doing this because there are about 500 tables that need to be staged, each with a minimum of 200 columns… I don’t want to declare those columns in the snap ☹️
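If I do end up having to list columns, I suppose I could at least generate the lists from the catalog instead of typing them, something like this (untested sketch; my_table and record_id stand in for each real table and its serial column):

-- Build a comma-separated column list, skipping the serial column
SELECT string_agg(column_name::text, ', ' ORDER BY ordinal_position)
FROM information_schema.columns
WHERE table_name = 'my_table'
  AND column_name <> 'record_id';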