Re: Insert error with JDBC insert and PGSQL
This also happens with int fields.

Re: Insert error with JDBC insert and PGSQL
Unsure if this is the reason. We switched schemas from pg_temp to a regular schema, and it works fine. This is the first time we are trying to use pg_temp, and it does not work with the snap. Is the snap trying to get the table definitions through a different connection? It would not see the table in the case of pg_temp.

Insert error with JDBC insert and PGSQL
I have a pipeline that reads from Oracle and writes to PGSQL. When inserting into PGSQL, I get this error:

Batch entry 0 insert into "pg_temp"."crmi002b_ut_pprs001" ("end_date", "opportunity_type", "sales_person", "opportunity_status_type", "sr_partition_id", "program_id", "promotion_id", "contact_id", "created_by", "last_updated_by", "sr_checksum", "opportunity_id", "sr_modification_time", "entity_code", "created_on", "last_updated_on", "sr_creation_time", "start_date") values (cast(NULL as varchar), 'LOV_OT_INTERACTION', 'C_NACAR', 'LOV_OS_INTERACTION', '22'::numeric, cast(NULL as varchar), cast(NULL as varchar), '151000000'::numeric, 'C_NACAR', 'C_NACAR', 'f1503e2c1ad94f56d9deef140da28ead', '122000000'::numeric, '1569965343729'::numeric, cast(NULL as varchar), cast('2010-12-11 10:51:24-06' as timestamp), cast('2010-12-11 10:51:24-06' as timestamp), '1569965343729'::numeric, cast(NULL as varchar)) was aborted: ERROR: column "end_date" is of type timestamp without time zone but expression is of type character varying Hint: You will need to rewrite or cast the expression. Position: 364 Call getNextException to see other errors in the batch., error code: 0, SQL state: 42804

I am unsure why the snap is trying to cast NULLs as varchar for date and int fields. I assume it is because it does not see a good first example with data, so it cannot infer the type the way it does for the fields that do have data. Is there a way to bypass this casting, or to have this fixed? This is not the only pipeline that runs from Oracle to PGSQL, and not the first one that has null values in non-text fields. I find it hard to believe that in all the other pipelines like this one the first record always has data for every field.

Re: Pipeline Parameters now trimmed in Dashboard
No. I got no response to this post either.

Pipeline Parameters now trimmed in Dashboard
Before the new SnapLogic release (2019/05, 4.17, 2019.2), we were able to get the full pipeline parameters from the dashboard (including queries and lists of partitions), which in some cases were large (> 2048 bytes). Now the values appear trimmed, which hinders our ability to troubleshoot issues. Is there a way to return to the previous behavior of showing the full runtime parameter value in the dashboard?

Configure SMTP snap account to connect to Amazon SES
We need to connect SnapLogic to Amazon SES to send emails. We have been provided a user name and password pair, plus an email address to use as the "from". We have tried all possible combinations:
- FROM email + SES password (we get invalid credentials)
- SES user name as the from and SES password (invalid FROM email)
- FROM email and base64 encoding of SES_user:SES_Password (invalid credentials)
Has anyone been able to connect the Email snap to Amazon SES? Thanks
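To rule the credentials themselves in or out, it can help to test them outside of SnapLogic first. Below is a minimal sketch using Python's smtplib, assuming the standard SES SMTP interface (regional endpoint, port 587 with STARTTLS); the region, addresses, and credential values are placeholders, and the "from" address must be an SES-verified identity. Note that SES SMTP credentials are generated in the SES console and are not the same as an IAM user's console password.

```python
import smtplib
from email.mime.text import MIMEText

# Placeholders / assumptions: us-east-1 endpoint, port 587 with STARTTLS.
SMTP_HOST = "email-smtp.us-east-1.amazonaws.com"
SMTP_PORT = 587
SMTP_USER = "AKIAXXXXXXXXXXXXXXXX"    # SES SMTP user name (placeholder)
SMTP_PASS = "your-ses-smtp-password"  # SES SMTP password (placeholder)

msg = MIMEText("SES connectivity test")
msg["Subject"] = "SES SMTP test"
msg["From"] = "sender@example.com"    # must be an SES-verified identity
msg["To"] = "recipient@example.com"

with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
    server.starttls()                 # SES requires TLS before authentication
    server.login(SMTP_USER, SMTP_PASS)
    server.sendmail(msg["From"], [msg["To"]], msg.as_string())
```

If this succeeds but the snap account still fails, the credentials are fine and the problem is in how the account is configured; if it fails the same way, the credentials or the identity verification are the more likely cause.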
Handling Oracle non-standard CSV exports
We have worked on several projects where Oracle DBAs create direct table extracts using standard Oracle tools. Those tools do not generate properly escaped CSV files. As a result, lines like

"123","quote","test"
"345","Mc"Donalds","test2"

cause SnapLogic's CSV parser to reject them. Does anyone have any recommendations on how to deal with this type of file? Thanks
PS: We even have a case in which a database column contains JSON text, which compounds the unescaped-value problem even further.

What is the purpose of the "Max in-flight" parameter in an ultra task?
The documentation description, "the maximum number of documents that can be processed by an instance at any one time", does not say a lot. The default value is 200, but it is not clear whether the task will shut down after 200 documents, or whether 200 is the maximum number of requests that can be held in the request queue.

Want to be able to extract full runtime information from pipelines
We can see in the dashboard that most pipelines have snap statistics (like snap execution time) that are not included in the pipeline runtime information exposed by the public API. We are not setting level=summary, but it seems the API response is always summarized. Is there a way to extract this full detail of the pipeline execution? If so, how?

Re: How to reduce the manual mapping in mapper? without using SmartLink option
Sometimes we use JSON Generators for this. We generate the direct mappings in the generator using a spreadsheet or editor, then use a Mapper for the simple ones.
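For the JSON Generator approach in the mapping reply just above, the point is that the bulk of the one-to-one mappings can be produced mechanically from a spreadsheet instead of being clicked out one by one. A rough sketch follows, assuming the spreadsheet is exported as a two-column CSV of source and target field names; the file name, column order, and the "$source" reference form are illustrative assumptions, not a SnapLogic-defined format.

```python
import csv

def generator_body(mapping_csv: str) -> str:
    """Build the text to paste into a JSON Generator from source/target pairs.

    The "$source" reference form is illustrative; adjust it to whatever
    expression syntax your generator and downstream Mapper actually expect.
    """
    pairs = []
    with open(mapping_csv, newline="") as f:
        for source, target in csv.reader(f):  # assumes exactly two columns
            pairs.append(f'  "{target}": ${source}')
    return "{\n" + ",\n".join(pairs) + "\n}"

print(generator_body("column_mappings.csv"))
```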
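Returning to the Oracle non-standard CSV question above: one possible workaround is to repair the quoting before the file reaches the CSV parser. The sketch below assumes every field is wrapped in double quotes and that no field legitimately contains the "," delimiter sequence, which is exactly why the embedded-JSON case is harder; the script and file names are placeholders.

```python
import sys

def repair_line(line: str) -> str:
    """Re-escape a fully quoted CSV line whose inner quotes were not doubled.

    Assumes every field is wrapped in double quotes and that no field
    contains the literal sequence "," (embedded JSON usually breaks this).
    """
    line = line.rstrip("\r\n")
    if not (line.startswith('"') and line.endswith('"')):
        return line
    fields = line[1:-1].split('","')
    # Double any stray inner quotes so the output follows RFC 4180.
    return ",".join('"' + f.replace('"', '""') + '"' for f in fields)

if __name__ == "__main__":
    # Usage sketch: python repair_csv.py < extract.csv > repaired.csv
    for raw in sys.stdin:
        print(repair_line(raw))
```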