07-10-2018 02:50 PM
We're using the Oracle Insert Snap with an Oracle Thin Account. The account's Fetch Size and Batch Size properties are set to 5000, and we've been successfully loading millions of rows into different target tables with this configuration. Suddenly we are sporadically getting the following error. Here's the error view (minus the original document). If we attempt to load the failed document again using the same pipeline, it inserts successfully, so there doesn't appear to be an issue with the data. Has anybody else encountered this, or does anyone know why this error would be produced?
{
  "error": "SQL operation failed",
  "stacktrace": "java.sql.BatchUpdateException: ORA-01438: value larger than specified precision allowed for this column\n\n\tat oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10500)\n\tat oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:230)\n\tat com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:128)\n\tat com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)\n\tat org.jooq.tools.jdbc.DefaultStatement.executeBatch(DefaultStatement.java:93)\n\tat org.jooq.impl.BatchSingle.executePrepared(BatchSingle.java:239)\n\tat org.jooq.impl.BatchSingle.execute(BatchSingle.java:182)\n\tat com.snaplogic.snap.api.sql.operations.JdbcOperationsImpl.flushBatch(JdbcOperationsImpl.java:1285)\n\tat com.snaplogic.snap.api.sql.operations.JdbcOperationsImpl.insert(JdbcOperationsImpl.java:724)\n\tat com.snaplogic.snaps.sql.SimpleSqlInsertSnap.processData(SimpleSqlInsertSnap.java:131)\n\tat com.snaplogic.snaps.sql.SimpleSqlWriteSnap.processDocument(SimpleSqlWriteSnap.java:100)\n\tat com.snaplogic.snaps.sql.SimpleSqlSnap.process(SimpleSqlSnap.java:256)\n\tat com.snaplogic.snap.api.ExecutionUtil.process(ExecutionUtil.java:93)\n\tat com.snaplogic.snap.api.ExecutionUtil.execute(ExecutionUtil.java:105)\n\tat com.snaplogic.snap.api.ExecutionUtil.execute(ExecutionUtil.java:73)\n\tat com.snaplogic.snap.api.SimpleSnap.execute(SimpleSnap.java:67)\n\tat com.snaplogic.cc.snap.common.SnapRunnableImpl.executeSnap(SnapRunnableImpl.java:718)\n\tat com.snaplogic.cc.snap.common.SnapRunnableImpl.execute(SnapRunnableImpl.java:493)\n\tat com.snaplogic.cc.snap.common.SnapRunnableImpl.doRun(SnapRunnableImpl.java:779)\n\tat com.snaplogic.cc.snap.common.SnapRunnableImpl.access$000(SnapRunnableImpl.java:109)\n\tat com.snaplogic.cc.snap.common.SnapRunnableImpl$1.run(SnapRunnableImpl.java:336)\n\tat com.snaplogic.cc.snap.common.SnapRunnableImpl$1.run(SnapRunnableImpl.java:332)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)\n\tat com.snaplogic.cc.snap.common.SnapRunnableImpl.call(SnapRunnableImpl.java:331)\n\tat com.snaplogic.cc.snap.common.SnapRunnableImpl.call(SnapRunnableImpl.java:109)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n",
  "reason": "ORA-01438: value larger than specified precision allowed for this column\n, error code: 1438, SQL state: 22003",
  "resolution": "Please check for valid Snap properties and input data.",
  "status_code": "error",
  "status": -2,
  "snap_details": {
    "label": "ABC",
    "instance_id": "",
    "class_id": "com-snaplogic-snaps-oracle-insert",
    "build_tag": "snapsmrc486",
    "views": {
      "in": {
        "inputRows": { "count": 5500 }
      },
      "out": {
        "output101": { "count": 5000 }
      },
      "error": {
        "error0": { "count": 0 }
      }
    }
  },
  "original": {}
}
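For context on what ORA-01438 means: an Oracle NUMBER(p, s) column accepts at most p - s digits before the decimal point, so the error fires when a value has too many integer digits for the declared precision. A minimal sketch of that rule in Java (the class name and helper are hypothetical, not part of the Snap or driver):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch of Oracle's NUMBER(precision, scale) rule: a value fits only if,
// after rounding to 'scale' decimal places, it has at most
// (precision - scale) digits before the decimal point.
public class PrecisionCheck {
    static boolean fitsNumber(BigDecimal value, int precision, int scale) {
        BigDecimal rounded = value.setScale(scale, RoundingMode.HALF_UP);
        // precision() - scale() = number of digits left of the decimal point
        return rounded.abs().precision() - rounded.scale() <= precision - scale;
    }

    public static void main(String[] args) {
        // A NUMBER(5,2) column holds values up to 999.99
        System.out.println(fitsNumber(new BigDecimal("999.99"), 5, 2));  // true
        System.out.println(fitsNumber(new BigDecimal("1000.00"), 5, 2)); // false -> ORA-01438
    }
}
```

A guard like this run over the input documents can flag the offending row before the batch ever reaches the database.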
07-10-2018 10:32 PM
As the error says, a value is larger than the precision defined for the column. Either widen the column's precision/size limit on the Oracle table, or trim the value in the pipeline (e.g. with a substring or rounding expression in the Snap) so it stays within the column's limit.
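The trimming half of that advice could look like the following sketch (a hypothetical helper, not a SnapLogic expression). Note that substring only helps for character columns; for numeric columns the fix is widening NUMBER(p, s) or rounding the value:

```java
// Hypothetical pre-insert guard: truncate a string value to a column's
// character limit so the insert cannot overflow the column.
public class TrimToColumn {
    static String trimTo(String value, int maxChars) {
        if (value == null || value.length() <= maxChars) {
            return value; // already fits
        }
        return value.substring(0, maxChars); // drop the overflow
    }

    public static void main(String[] args) {
        System.out.println(trimTo("ABCDEFGH", 5)); // ABCDE
    }
}
```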
07-12-2018 04:03 PM
If there's one document in the batch that will cause the SQL operation (insert) to fail, then none of the documents in the batch will be inserted. This makes sense. A document is written to the error view for every document in the failed batch. This also makes sense. What we've seen is that the first error encountered in the batch operation is attached to every document written to the error view for that batch, regardless of whether the error applies to that specific document. Does anyone know if this behavior comes from the JDBC driver or from SnapLogic? I understand the need for an error that indicates why the document failed in the batch operation, but it would be helpful to associate the database error only with the document that caused the failure and attach a generic "batch SQL operation failed" error to the other documents. This would make it easier to identify the document that produced the error.
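For what it's worth, the JDBC API does allow per-statement attribution: a driver may attach an update-count array to the BatchUpdateException, in which Statement.EXECUTE_FAILED (-3) marks the entries that failed. Whether that array is populated usefully depends on the driver (Oracle's driver often stops at the first failure, which would match the behavior described above). This sketch constructs the exception manually just to show the API shape, not any SnapLogic internals:

```java
import java.sql.BatchUpdateException;
import java.sql.Statement;

// Demonstrates how JDBC can, in principle, report which statement in a
// batch failed: getUpdateCounts() entries equal to Statement.EXECUTE_FAILED
// mark the failed positions.
public class BatchFailureReport {
    public static void main(String[] args) {
        // Pretend the 3rd statement (index 2) in a batch of 4 failed.
        int[] counts = {1, 1, Statement.EXECUTE_FAILED, 1};
        BatchUpdateException e = new BatchUpdateException(
                "ORA-01438: value larger than specified precision", counts);
        for (int i = 0; i < e.getUpdateCounts().length; i++) {
            if (e.getUpdateCounts()[i] == Statement.EXECUTE_FAILED) {
                System.out.println("statement " + i + " failed");
            }
        }
    }
}
```

If the caller (here, the Snap's batch-flush code) inspected this array when the driver provides it, the database error could be attached only to the failing document.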
10-29-2018 10:57 AM
Hello PSAmmirata,
I am facing the same issue right now. My database account's batch size is set to 50, and if a single record has an error, the entire batch errors out with the error message from the first failing record, even though the other 49 records are valid and contain no erroneous data (tried and tested by manually removing the one erroneous record).
Setting the batch size to 1 resolved the issue, but now the pipeline takes 20 times longer than before ☹️
If you have come across any better solution, kindly suggest it; that would help me a lot.
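One middle ground between batch size 50 and batch size 1 is to insert in large batches and, only when a batch fails, replay that one batch row by row to isolate the bad record. I don't believe the Snap exposes this directly, so this is a hypothetical sketch of the pattern; the insertBatch and insertOne hooks stand in for the real JDBC calls:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Batch-with-fallback loading: fast path inserts whole batches; the slow
// row-by-row path runs only for batches that fail, so only the true
// culprit rows are reported as failed.
public class BatchWithFallback {
    static <T> List<T> loadIsolatingFailures(List<T> rows, int batchSize,
                                             Predicate<List<T>> insertBatch,
                                             Predicate<T> insertOne) {
        List<T> failed = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            List<T> batch = rows.subList(i, Math.min(i + batchSize, rows.size()));
            if (!insertBatch.test(batch)) {  // whole batch rolled back
                for (T row : batch) {        // replay this batch one row at a time
                    if (!insertOne.test(row)) {
                        failed.add(row);     // only the genuinely bad rows
                    }
                }
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(1, 2, 999, 4, 5, 6);
        Predicate<Integer> ok = r -> r < 100; // pretend 999 violates the precision
        List<Integer> bad = loadIsolatingFailures(rows, 3,
                batch -> batch.stream().allMatch(ok), ok);
        System.out.println(bad); // [999]
    }
}
```

When errors are rare, almost every batch takes the fast path, so throughput stays close to the large-batch case.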
12-06-2019 10:07 AM
Just adding to the chorus here. I am having the same issue. I am working to replace sqlldr, so I am able to compare the same import between the two methods. SnapLogic loads fine if the data is perfect. If there is any issue (in my case, foreign key violations), it errors out a lot of good records with the same error code. The error clearly does not apply to the majority of the records (the parent does exist). The rejects do include the records that are appropriately rejected.