Google Analytics Query Explorer Error

marilitza
New Contributor

Hello,

I have a โ€œfor eachโ€ snap that calls a pipeline to connect to Google Analytics query explorer and extracts the data for 20 different websites. The problem that I encounter is that when I execute the โ€œfor eachโ€ for multiple interactions, I get the following:

Error copying data from S3 to Redshift.

Resolution:
Fix issue which caused copy to fail.

Reason:
[Amazon]JDBC Parameters cannot be used with normal Statement objects, use PreparedStatements instead.

The error always happens at a random website connection, and only when I execute the "for each" for more than one day.

Each โ€œfor eachโ€ interaction is doing 20 different connections per day for x amount of dates.


mbowen
Employee

It looks like this error can be raised by either the Redshift BulkLoad or S3 Upsert snap. It seems the generated SQL contains a placeholder, or something that is being interpreted as a parameter, in a Statement object. These snaps use both PreparedStatements and plain Statement objects.

A stack trace would be super helpful to investigate this.

I'm not sure if this is the stack trace that you are looking for. If not, can you lead me to where I can get it?

Thanks!


OK, it looks like the exception is getting raised from the BulkUpdate snap (which inherits from BulkLoad). It doesn't like the INSERT SQL. A question mark (?) in the SQL will be interpreted as a SQL parameter placeholder, even in a comment it seems ([Amazon][JDBC](11220) Parameters cannot be used with normal Statement objects, use PreparedStatement...).
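
For what it's worth, here is a minimal Java sketch of that failure mode. The JDBC URL, table, and column names are made up, and whether a plain Statement rejects a literal '?' depends on the driver; per the 11220 error above, the Amazon Redshift JDBC driver appears to. The point is that the question mark itself is the trigger, and a PreparedStatement with an explicitly bound parameter removes the ambiguity.

// Sketch only: hypothetical table/column names, placeholder JDBC URL.
// Shows why a literal '?' in SQL text can trip a driver that scans
// Statement SQL for parameter markers, and how PreparedStatement avoids it.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class QuestionMarkDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:redshift://example-cluster:5439/dev", "user", "pass")) {

            // A '?' embedded in the SQL text -- here inside a comment -- may be
            // scanned as a parameter marker. Run through a plain Statement with
            // no parameters bound, this can raise:
            // "[Amazon][JDBC](11220) Parameters cannot be used with normal
            //  Statement objects, use PreparedStatements instead."
            String sqlWithLiteralQuestionMark =
                "INSERT INTO target_table (site, visits) "
              + "SELECT site, visits FROM temp_table /* why? */";
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate(sqlWithLiteralQuestionMark); // may fail as above
            }

            // The safe pattern: let '?' be a real parameter and bind the value,
            // so the driver never has to guess what the question mark means.
            String parameterized =
                "INSERT INTO target_table (site, visits) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(parameterized)) {
                ps.setString(1, "example.com");
                ps.setLong(2, 42L);
                ps.executeUpdate();
            }
        }
    }
}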

This SQL is actually logged, and that would tell us exactly what's triggering this. The SQL is logged as a DEBUG message of this form:

"Running insert from temp table to target table : <INSERT-SQL>"

To get this log, we need to download the JCC logs. Unfortunately, you aren't able to do this; however, our snap support engineers can. The logs do roll over, but if you log a support ticket shortly after the exception occurs, the log message would likely still be available.

If you don't want to do this, then think about whether there's a question mark (?) somewhere that's getting included in the INSERT SQL and tripping up the interpretation. Obviously, you don't control this directly, but maybe there are some hints in the source data/attributes... I'm grasping a bit here. The logged SQL would tell us exactly why. A rough sketch of that kind of check follows.
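
Purely as an illustration, here is a small Java sketch that scans exported field values for a literal '?' (page paths with query strings are a common source in Google Analytics data), so you could spot which site or attribute might be feeding one into the generated INSERT. The field names and records here are invented; they are not what your pipeline actually produces.

// Hypothetical sanity check: report any field value containing a literal '?'
// before the data flows into the Redshift load. Field names are made up.
import java.util.List;
import java.util.Map;

public class QuestionMarkScan {
    public static void report(List<Map<String, String>> records) {
        for (int i = 0; i < records.size(); i++) {
            for (Map.Entry<String, String> field : records.get(i).entrySet()) {
                String value = field.getValue();
                if (value != null && value.contains("?")) {
                    System.out.printf("record %d, field %s contains '?': %s%n",
                            i, field.getKey(), value);
                }
            }
        }
    }

    public static void main(String[] args) {
        // e.g. a GA page path with a query string is an easy way to pick up '?'
        report(List.of(Map.of(
                "site", "example.com",
                "pagePath", "/search?q=shoes")));
    }
}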