Insert error with JDBC insert and PGSQL

cesar_bolanos
New Contributor

I have a pipeline that reads from Oracle and writes to PGSQL. When inserting into PGSQL, I get this error:

Batch entry 0 insert into "pg_temp"."crmi002b_ut_pprs001" ("end_date", "opportunity_type", "sales_person", "opportunity_status_type", "sr_partition_id", "program_id", "promotion_id", "contact_id", "created_by", "last_updated_by", "sr_checksum", "opportunity_id", "sr_modification_time", "entity_code", "created_on", "last_updated_on", "sr_creation_time", "start_date") values (cast(NULL as varchar), 'LOV_OT_INTERACTION', 'C_NACAR', 'LOV_OS_INTERACTION', '22'::numeric, cast(NULL as varchar), cast(NULL as varchar), '151000000'::numeric, 'C_NACAR', 'C_NACAR', 'f1503e2c1ad94f56d9deef140da28ead', '122000000'::numeric, '1569965343729'::numeric, cast(NULL as varchar), cast('2010-12-11 10:51:24-06' as timestamp), cast('2010-12-11 10:51:24-06' as timestamp), '1569965343729'::numeric, cast(NULL as varchar)) was aborted: ERROR: column "end_date" is of type timestamp without time zone but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
Position: 364 Call getNextException to see other errors in the batch., error code: 0, SQL state: 42804
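For reference, the driver's hint about getNextException can be followed in plain JDBC to see every failed entry in the batch. This is a minimal sketch outside SnapLogic; the URL, credentials, and statement are placeholders:

```java
import java.sql.*;

public class BatchErrors {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/db", "user", "pass");
             Statement stmt = conn.createStatement()) {
            stmt.addBatch("insert into t (end_date) values (cast(NULL as varchar))");
            stmt.executeBatch();
        } catch (BatchUpdateException e) {
            // Each chained SQLException reports one failure in the batch.
            for (SQLException next = e; next != null; next = next.getNextException()) {
                System.err.println(next.getSQLState() + ": " + next.getMessage());
            }
        }
    }
}
```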

I am unsure why the snap is trying to cast NULLs as varchar for date and int fields. I assume it is because it does not see a good first example with data, so it cannot infer the type the way it does for the fields that do have data.

Is there a way to bypass this casting, or to get this fixed? This is not the only pipeline that runs from Oracle to PGSQL, and it is not the first one with null values in non-text fields. I find it hard to believe that in every other pipeline like this one the first record always has data in all fields.
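For illustration, here is roughly what that difference looks like in plain JDBC. This is a sketch, not the snap's actual code; the table and column names are taken from the error above, and the connection details are placeholders:

```java
import java.sql.*;

public class NullBinding {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/db", "user", "pass");
             PreparedStatement ps = conn.prepareStatement(
                 "insert into crmi002b_ut_pprs001 (end_date) values (?)")) {
            // What the failing statement effectively does: the null is declared
            // as varchar, which Postgres won't implicitly cast to timestamp
            // (SQLSTATE 42804).
            // ps.setNull(1, Types.VARCHAR);

            // Declaring the null with the column's own type avoids the error.
            ps.setNull(1, Types.TIMESTAMP);
            ps.executeUpdate();
        }
    }
}
```

A null bound this way reaches the server as a typed parameter, so no inference from a "first example with data" is needed.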


jaybodra
New Contributor III

Weird! Open up an SL support ticket.

Also, you can try using the generic JDBC Snap and configuring the appropriate driver for the Postgres version you are using. That should work, but it might perform worse.
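On the Postgres driver specifically: pgjdbc has a documented connection property, stringtype=unspecified, that sends string-bound parameters as untyped so the server infers the column type itself, which is one known way around "expression is of type character varying" errors. Whether it also covers the snap's null handling, and whether your JDBC account settings let you add URL properties, would need testing. A minimal sketch outside SnapLogic, with placeholder connection details:

```java
import java.sql.*;

public class StringTypeTest {
    public static void main(String[] args) throws Exception {
        // stringtype=unspecified: string parameters are sent without a declared
        // type, so Postgres coerces them to the target column's type.
        String url = "jdbc:postgresql://localhost:5432/db?stringtype=unspecified";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             PreparedStatement ps = conn.prepareStatement(
                 "insert into crmi002b_ut_pprs001 (end_date) values (?)")) {
            ps.setString(1, null); // would raise 42804 with the default stringtype
            ps.executeUpdate();
        }
    }
}
```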

bgiles
Former Employee

Could you submit a ticket for that? Ideally it should include the table's schema, a sample input document, and any errors that show the generated SQL.

For background, we perform a database metadata query to learn what the database expects or provides, and we have a few dozen classes that convert between the database types and Document field types. In theory it should be easy to make the correct choice, but even among the standard field types there are subtle differences, sometimes even between major versions of a single product, that can cause problems with the conversions.

Off the top of my head, the only reason we would try to use a varchar instead of a date or int is that the database metadata is reporting an unrecognized field type. That might be due to a new field type in a recently released version, a locally installed extension (e.g., PostGIS) that defines its own types, or simply an oversight. In some of those cases we can add the value to the internal lookup tables in the next release, but we need to know what to add.
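For what it's worth, the metadata lookup described above can be approximated in plain JDBC, which is also a quick way to see what type the driver reports for the problem columns. A sketch, assuming direct access to the database; connection details are placeholders and the table name is taken from the error above:

```java
import java.sql.*;

public class ColumnTypes {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/db", "user", "pass")) {
            DatabaseMetaData md = conn.getMetaData();
            // DATA_TYPE is a java.sql.Types constant; TYPE_NAME is the
            // database's own name for the type. An unexpected value here is
            // what the conversion tables would need to learn.
            try (ResultSet rs = md.getColumns(null, null, "crmi002b_ut_pprs001", "%")) {
                while (rs.next()) {
                    System.out.printf("%-25s DATA_TYPE=%d TYPE_NAME=%s%n",
                            rs.getString("COLUMN_NAME"),
                            rs.getInt("DATA_TYPE"),
                            rs.getString("TYPE_NAME"));
                }
            }
        }
    }
}
```

If end_date comes back as DATA_TYPE 93 (Types.TIMESTAMP), the metadata is being reported correctly and the problem lies in the conversion step; anything else points at an unrecognized type.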

Thanks