Forum Discussion
The PostgreSQL bulk load snap uses the COPY command available in PostgreSQL. COPY runs over JDBC like the regular PostgreSQL snaps, so no additional setup is required to use the snap. The same is true for the Redshift bulk snaps.
How does the COPY command use the headers when the format is CSV?
I’m getting an error on the column record_id, which is the first column and a serial column, because my CSV doesn’t contain record_id and my table’s columns start with record_id, then effective_timestamp.
To allow COPY to insert into a table with a serial column, should that column be omitted from the column list?
COPY table_name (column names) FROM myfile WITH CSV
…myfile row 2
…myfile row 3
…
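For reference, one way this is commonly handled: with FORMAT csv and HEADER, COPY simply skips the first line of the file; it does not match columns by header name. Listing only the columns present in the CSV lets PostgreSQL fill the serial column from its sequence. A minimal sketch, assuming the table_name, record_id, and effective_timestamp names from the question and a hypothetical second data column:

```sql
-- record_id is serial, so it is left out of the column list;
-- PostgreSQL fills it from the column's sequence.
-- HEADER only skips the first line of the file -- it does NOT
-- map CSV columns to table columns by name, so the column list
-- must be in the same order as the CSV.
COPY table_name (effective_timestamp, some_other_column)
FROM '/path/to/myfile.csv'
WITH (FORMAT csv, HEADER true);
```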
- bojanvelevski · 4 years ago · Valued Contributor
The pipeline below contains a script that flattens the data completely by iterating through every level of the JSON structure. You’ll probably need to include a counter to stop (or start) at a certain point, to adapt the solution to your issue.
- bojanvelevski · 4 years ago · Valued Contributor
This is a kind of partial flattening of the data, which can be achieved with a Script. If this is the only part that needs to be modified, then it can also be transformed with an expression; but if the data contains more objects that need to be flattened out, then I suggest you use a Script.