Forum Discussion
The PostgreSQL Bulk Load Snap utilizes the COPY command available in PostgreSQL. COPY runs over JDBC like the regular PostgreSQL Snaps, so no additional setup should be required to use the Snap. The same is true for the Redshift bulk Snaps.
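For anyone curious what COPY over JDBC looks like outside the Snap, the PostgreSQL JDBC driver exposes it through its CopyManager API. This is only a minimal sketch under that assumption (the connection string, credentials, file path, and target_table name are all made up), not a description of how the Snap itself is implemented:

import java.io.FileReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class CopyCsvExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details and file path -- substitute your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             Reader csv = new FileReader("/tmp/input.csv")) {
            CopyManager copyManager = new CopyManager((BaseConnection) conn);
            // FORMAT csv applies standard CSV quoting rules; HEADER true skips the first line.
            long rows = copyManager.copyIn(
                "COPY target_table FROM STDIN WITH (FORMAT csv, HEADER true)", csv);
            System.out.println("Loaded " + rows + " rows");
        }
    }
}

The FORMAT csv and HEADER options are standard COPY behavior in PostgreSQL, which is relevant to the header and quoting questions below.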
Thanks akidave, a few more questions…
Is there a certain format that the CSV file needs to follow? E.g. quoted characters, row terminator LF or CRLF?
Since I’m using a CSV, is it OK not to specify any columns (“Header provided” is checked), or does that setting only apply to streaming documents?
@swright your input JSON doesn’t look correct. If you want to reference two variables, they need to be within the same JSON object. For instance, this is an example of an input document to the REST PATCH Snap that would allow referencing $serviceUrl and $requestBody in the Snap settings:
[ { "requestBody": { "ExternalIdentifierNumber": "002500012" }, "serviceUrl": "https://" } ]
If you wanted two PATCH requests to be sent (sequentially, to different URLs, etc.), the input body would look like:
[{ "requestBody": { "ExternalIdentifierNumber": "002500012" }, "serviceUrl": "https://someurl.com" }, { "requestBody": { "ExternalIdentifierNumber": "ABCD" }, "serviceUrl": "https://anotherurl.com" }]
Hi Robin,
I’m having some trouble getting it into that format. I’m using a Union, and that is the output I get from the Union. The target path of Mapper2 is $requestBody.ExternalIdentifierNumber and the target path of Mapper3 is $serviceUrl:
Thanks,
Scott
@swright A Union will combine two streams of documents into a single stream.
What you want to do is combine the actual document contents themselves (a document in one stream is combined with the content of a document in the other stream). To do this, use the Join Snap where the left and right paths resolve to the same constant value.
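To illustrate with made-up values (the joinKey field name and its value are arbitrary, just something each Mapper can emit as a constant), the Join would take a left document like
[ { "joinKey": 1, "requestBody": { "ExternalIdentifierNumber": "002500012" } } ]
and a right document like
[ { "joinKey": 1, "serviceUrl": "https://someurl.com" } ]
and, joining on $joinKey from both sides, produce a single document containing both fields in the shape the REST PATCH Snap expects:
[ { "joinKey": 1, "requestBody": { "ExternalIdentifierNumber": "002500012" }, "serviceUrl": "https://someurl.com" } ]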