Ingesting multiple AWS S3 files into a database
I have an Amazon S3 bucket containing multiple files that I'd like to extract and read into a database. The files are all .GZ (gzip) files. The file names change each day, but the files all have the same format and contents once unzipped. I was thinking the pipeline would be: S3 Browser → Mapper → S3 Reader → JSON Parser → database target. But this fails validation at the JSON Parser step with:

Failure: Cannot parse JSON data, Reason: Unable to create json parser for the given input stream, Illegal character ((CTRL-CHAR, code 31)): only regular white space (\r, \n, \t) is allowed between tokens

After the S3 Reader step, I can preview the data and see the list of files that would be imported, but not the contents of the files themselves. Any suggestions for the right way to read in the contents of several files in an S3 bucket at once? Thank you!
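A quick way to see what that error actually means: CTRL-CHAR code 31 is 0x1f, the first byte of the gzip magic number, so the JSON Parser is almost certainly being handed the still-compressed bytes. A minimal Python sketch (outside SnapLogic, purely to illustrate the failure mode):

```python
import gzip
import io
import json

# Simulate a .gz object coming back from S3: compressed JSON bytes.
compressed = gzip.compress(b'{"name": "example"}')

# The very first byte is 0x1f (decimal 31) -- exactly the "CTRL-CHAR,
# code 31" the JSON parser complains about when fed raw gzip data.
print(compressed[0])  # 31

# Decompressing first yields bytes the JSON parser can handle.
with gzip.open(io.BytesIO(compressed), "rt") as f:
    print(json.load(f))  # {'name': 'example'}
```

In pipeline terms, this suggests some decompression step is needed between the S3 Reader and the JSON Parser so the parser sees the unzipped content rather than the gzip stream.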
Updated S3 Snap Pack and Selecting Accounts

I'm playing around with some of the new S3 snaps that were recently released, and they are not letting me use my existing S3 accounts. Do we have to create new S3 accounts for these snaps every time? For instance, when I open the S3 Upload snap, click the Accounts tab, and try to add an existing account, nothing shows up, even though I know I have S3 accounts that should be available in my project space and project. When I click the Add Account button and then Continue, the dialog box opens to add a new account. I'm on the latest snap pack version.
Snaplogic Redshift Copy Command

Hi Team, when I read data from a file/CSV and bulk load it into Redshift, I know that at the backend SnapLogic copies the data via a COPY command to do the bulk load. I need to know the full set of options the SnapLogic code used for that Redshift COPY command. Where can I get those details? Thanks in advance. Makesh
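One way to answer this empirically (an assumption about the approach, not something documented by SnapLogic): Redshift records every statement it executes in its system tables, so the exact COPY text, options included, can be read back from stl_query/stl_querytext after a load runs. A sketch of the query, to be run in any SQL client connected to the cluster:

```python
# Hedged sketch: Redshift stores executed SQL in stl_querytext in
# 200-character chunks, keyed by query id and ordered by "sequence".
# Reassembling the chunks with LISTAGG recovers the full COPY statement
# that was actually issued during the bulk load.
FIND_RECENT_COPY = """
SELECT q.query,
       q.starttime,
       LISTAGG(t.text) WITHIN GROUP (ORDER BY t.sequence) AS full_sql
FROM stl_query q
JOIN stl_querytext t ON t.query = q.query
WHERE t.text ILIKE '%copy%'
GROUP BY q.query, q.starttime
ORDER BY q.starttime DESC
LIMIT 10;
"""
print(FIND_RECENT_COPY)
```

Running this shortly after the pipeline executes should show the COPY command, with whatever options SnapLogic appended, in the full_sql column.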
Redshift bulk load

Hi team, looking for a suggestion/enhancement to achieve the scenario below. I read a CSV file that may contain both \r\n (Windows) and \n (Unix) line endings. How do I fix this? Today, when I read the file and do the bulk load using the Redshift Bulk Load snap, the data gets loaded with the \r\n included. How can I escape these characters? When I look at the properties of the snap, I can see the options below, but they are not working.
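For context on what normalizing the line endings could look like before the data reaches the bulk load: Python's csv module already treats \r\n and \n uniformly when the file is opened with newline="", and any stray carriage returns left inside field values can be stripped explicitly. A minimal sketch (the file contents here are made up for illustration):

```python
import csv
import io

# A CSV with Windows-style \r\n line endings, as bytes.
raw = b"id,comment\r\n1,hello\r\n2,world\r\n"

# newline="" lets the csv module handle \r\n and \n the same way;
# .replace() then removes any carriage returns embedded in field values.
reader = csv.reader(io.TextIOWrapper(io.BytesIO(raw), newline=""))
rows = [[field.replace("\r", "") for field in row] for row in reader]
print(rows)  # [['id', 'comment'], ['1', 'hello'], ['2', 'world']]
```

Data cleaned this way carries no \r characters into the load, regardless of whether the source file came from Windows or Unix.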
Writing Zip Files to S3

I am encountering some severe sluggishness when writing zip files to S3. Writing a 76 MB file takes 12 minutes, versus 16 seconds when writing to a local destination. I think the problem is in transferring from ground to cloud. This process is part of a generic file transport solution, so the read-file snap is executed on a Groundplex and the write-file snap is executed on the Cloudplex; this switch is done by a Pipeline Execute snap specifying execution on the Cloudplex. I'm thinking the issue may be caused by the conversion from binary to document and then back to binary once the document stream is passed into the child pipeline. Has anyone else run into similar issues? I am happy to provide an outline of the pipeline if that helps. Thanks.
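For scale, the numbers in the post work out to roughly a 45x slowdown:

```python
# Effective throughput implied by the figures above (76 MB file).
size_mb = 76
cloud_rate = size_mb / (12 * 60)  # ~0.106 MB/s writing to S3
local_rate = size_mb / 16         # ~4.75 MB/s writing locally
print(round(local_rate / cloud_rate))  # 45
```

0.1 MB/s is far below typical ground-to-cloud bandwidth, which supports the suspicion that the overhead lies in the binary/document conversions rather than raw transfer speed.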
SE Server-side encryption

We have tightened up our bucket policy and specified AES256 as our server-side encryption. As a result, our pipeline now fails with:

Reason: You may not have access right to bucket: {bucketname}, detail: Access Denied Resolution: Check for valid bucket name, AWS credential and permission.

This is using a SnapLogic cross-account IAM role. Any suggestions on what we can do to make this work?
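For anyone hitting this: a common cause (an assumption about this setup, not a confirmed diagnosis) is a bucket policy that denies any PutObject request lacking the x-amz-server-side-encryption: AES256 header, which S3 surfaces as a generic Access Denied. Such a policy typically looks like this (the bucket name is a placeholder):

```python
import json

# Hedged sketch: a standard S3 bucket policy that rejects unencrypted
# uploads. If the bucket enforces a policy like this, every PutObject
# call must send the x-amz-server-side-encryption header, which the
# writing snap may not be doing.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedPuts",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-bucket/*",
        "Condition": {
            "StringNotEquals": {
                "s3:x-amz-server-side-encryption": "AES256"
            }
        }
    }]
}
print(json.dumps(policy, indent=2))
```

If the bucket policy matches this shape, the fix is on the writer's side: the upload has to include the AES256 server-side-encryption header, not just valid credentials.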