
Insert error with JDBC insert and PGSQL

cesar_bolanos
New Contributor

I have a pipeline that reads from Oracle and writes to PGSQL. When inserting to PGSQL, I get this error

Batch entry 0 insert into "pg_temp"."crmi002b_ut_pprs001" ("end_date", "opportunity_type", "sales_person", "opportunity_status_type", "sr_partition_id", "program_id", "promotion_id", "contact_id", "created_by", "last_updated_by", "sr_checksum", "opportunity_id", "sr_modification_time", "entity_code", "created_on", "last_updated_on", "sr_creation_time", "start_date") values (cast(NULL as varchar), 'LOV_OT_INTERACTION', 'C_NACAR', 'LOV_OS_INTERACTION', '22'::numeric, cast(NULL as varchar), cast(NULL as varchar), '151000000'::numeric, 'C_NACAR', 'C_NACAR', 'f1503e2c1ad94f56d9deef140da28ead', '122000000'::numeric, '1569965343729'::numeric, cast(NULL as varchar), cast('2010-12-11 10:51:24-06' as timestamp), cast('2010-12-11 10:51:24-06' as timestamp), '1569965343729'::numeric, cast(NULL as varchar)) was aborted: ERROR: column "end_date" is of type timestamp without time zone but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
Position: 364 Call getNextException to see other errors in the batch., error code: 0, SQL state: 42804

I am unsure why the snap is trying to cast NULLs as varchar for date and int fields. I assume it is because it does not see a first record with data in those fields, so it cannot translate them properly the way it does for the fields that do have data.
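For what it's worth, the failure reproduces in plain SQL, independent of the snap: PostgreSQL will not implicitly convert varchar to timestamp on assignment, so a NULL that arrives typed as varchar is rejected, while an untyped NULL or a NULL cast to the column's own type is accepted. A minimal sketch, using a hypothetical one-column table:

-- Hypothetical table standing in for crmi002b_ut_pprs001
CREATE TABLE t (end_date timestamp without time zone);

-- Fails with SQLSTATE 42804, the same error as above:
-- the NULL arrives typed as character varying.
INSERT INTO t (end_date) VALUES (cast(NULL as varchar));

-- Both of these succeed: an untyped NULL, or a NULL cast to the column type.
INSERT INTO t (end_date) VALUES (NULL);
INSERT INTO t (end_date) VALUES (cast(NULL as timestamp));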

Is there a way to bypass this casting or to have this fixed? This is not the only pipeline that runs from Oracle to PGSQL, and not the first one that has null values in non-text fields. I find it hard to believe that in all the other pipelines like this one, the first record always has data in every field.
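If the snap's account settings allow extra JDBC URL properties, one thing that may be worth experimenting with (an assumption on my part, not something confirmed in this thread) is pgjdbc's stringtype=unspecified option, which makes the driver send string parameters with no declared type so the server infers the type from the target column. Host, port, and database below are placeholders:

jdbc:postgresql://dbhost:5432/mydb?stringtype=unspecified

Whether it helps here depends on how the snap builds its statements; the error above shows literal casts inlined into the SQL text, which this option would not rewrite.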


cesar_bolanos
New Contributor

Unsure if this is the reason. We switched schemas from pg_temp to a regular schema, and it works fine. This is the first time we are trying to use pg_temp, and it does not work with the snap. Is the snap trying to get the table definitions through a different connection? It would not see the table in the case of pg_temp.
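That would be consistent with how pg_temp works: each connection gets its own private pg_temp schema, so a temp table created on one connection is invisible on every other connection. A minimal sketch, run as two separate sessions:

-- Session 1 (connection A): the temp table lives in A's private pg_temp schema.
CREATE TEMP TABLE crmi002b_ut_pprs001 (opportunity_id numeric);

-- Session 2 (connection B): the same name resolves to B's own, empty pg_temp schema.
SELECT * FROM pg_temp.crmi002b_ut_pprs001;
-- ERROR: relation "pg_temp.crmi002b_ut_pprs001" does not exist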

It's… complicated. You should generally assume that different snaps use different connections. In rare situations it looks like a connection is passed from one snap to the next, but there should be a reset between snaps. E.g., with PostgreSQL we reset the default schemas.

Within an individual snap it's… complicated. There's usually a single connection used during both validation and execution. However, if the snap is at the end of a pipeline with a long delay between launch and the first documents hitting the snap, it's possible that it will use a different connection. It's theoretically possible that a connection will be changed during execution, but I don't think that happens in practice. We shouldn't do anything during validation that's required during execution, but something could have slipped through.

Could you write up two tickets? One with enough details about this pipeline for us to try to duplicate it, and a more general one asking for support for pg_temp? We can check a few things in the current snaps but full support for pg_temp (e.g., guaranteeing the same connection in all snaps) will require a lot of research into its feasibility.

jaybodra
New Contributor III

@cesar.bolanos did you try defining TIMESTAMP without time zone?

This also happens with int fields.
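For reference, an illustration (not the snap's actual generated SQL) of what the statement would need to look like if each NULL were cast to its target column's type, assuming program_id is one of the int/numeric columns described above:

-- Illustrative only: NULLs cast to each target column's type instead of varchar.
INSERT INTO "pg_temp"."crmi002b_ut_pprs001" ("end_date", "program_id", "start_date")
VALUES (cast(NULL as timestamp), cast(NULL as numeric), cast(NULL as timestamp));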