Re: Reading and parsing multiple files from s3 bucket

Would I use a single file reader in that case or a multi file reader? I wasn't sure what the difference was.

Reading and parsing multiple files from s3 bucket

Hey All,

I've been able to read a single CSV file from my S3 bucket, parse it with the CSV Parser, map it, and load it into my Snowflake DB with some additional join logic. What I need to do now is repeat this for multiple files in the S3 bucket. I.e., if the S3 bucket has 3 files, each file should go through this same process.

I've tried using the Multi File Reader, but I'm not sure how to make it go through all the files, or whether it does that automatically. The pipeline works if I have only a single file in my S3 bucket; I just need it to work for all the files. I'm also not sure whether the parsers/mappers will automatically run for multiple files. (A rough sketch of the per-file loop I'm after appears after these posts.)

Here is my pipeline setup

Loading Null values into Snowflake DB via Bulk Load Snap

Hey All,

I'm trying to load certain columns from a CSV file into Snowflake, but I'm facing some issues. I have the file, I've parsed it properly with the CSV Parser, and I've used a Mapper to map the columns. However, I need SnapLogic to tell Snowflake that if the entry in a row says "(null)" or "(None)", it should be treated as a NULL insert in the bulk load. Is there a way to do this?

For example, say I have a Snowflake table with a column that accepts only 1 character, a Y or N. A row from the dataset has "(null)" as the entry for that column, which causes an error since it is more than 1 character. Is there a way for SnapLogic to understand that "(null)" means NULL and insert a NULL into Snowflake via the Bulk Load snap? (See the null-normalization sketch below.)

Thank you!

Selecting only certain columns from CSV using Mapper

Hey All,

I'm having a little bit of trouble with this. I currently have a CSV file I've pulled from S3 and loaded into SnapLogic via the S3 File Reader. I've used a CSV Parser to look at the file and it looks good. From here, I need to pull only 2 columns from the entire CSV (let's say user and login ID) so that I can insert them into my Snowflake DB instance. (See the column-projection sketch below.)

Does anyone know what the Mapper setup should look like for this?

Thank you!

Getting list of csv files through directory browser and parsing by date

Hey All,

I've been trying to develop a pipeline that reads a bunch of files from an S3 bucket, checks which ones are no more than a week old, and then sends those on to be parsed and eventually uploaded into Snowflake. I've already been able to read a single CSV file from my S3 bucket, parse it, and insert it into Snowflake. I'm now having trouble setting up a file filter within the Directory Browser snap to send only the files that are a week old or younger. (See the date-filter sketch below.)

Any ideas on what filter expression I can use?
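A minimal Python/boto3 sketch of the per-file loop from the multi-file question above; the bucket name, key prefix, and process_csv helper are hypothetical placeholders, not part of the original pipeline:

```python
import csv
import io

import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"   # hypothetical bucket name
PREFIX = "incoming/"   # hypothetical prefix where the CSV files live

def process_csv(rows):
    """Stand-in for the parse/map/load-to-Snowflake steps."""
    for row in rows:
        pass  # map columns and load into Snowflake here

# List every object under the prefix and push each file through
# the same parse/map/load steps. (Fine for a handful of files;
# use a paginator for buckets with more than 1000 keys.)
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
for obj in resp.get("Contents", []):
    text = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read().decode("utf-8")
    process_csv(csv.DictReader(io.StringIO(text)))
```

This per-matched-file fan-out is, presumably, what the Multi File Reader plus the downstream parser and mapper snaps are meant to perform.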
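For the Bulk Load question, the fix boils down to normalizing the sentinel strings to a real NULL before the load reaches Snowflake. A minimal sketch, assuming the sentinel set below covers the data; in SnapLogic the same per-column check could live in a Mapper expression ahead of the Bulk Load snap:

```python
NULL_SENTINELS = {"(null)", "(None)"}  # assumed sentinel spellings

def normalize(row):
    # Map sentinel strings to None so the loader writes SQL NULL
    # instead of the literal text "(null)" / "(None)".
    return {k: (None if v in NULL_SENTINELS else v) for k, v in row.items()}

print(normalize({"flag": "(null)", "user": "jdoe"}))
# {'flag': None, 'user': 'jdoe'}
```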
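For the Mapper question, the setup amounts to projecting two fields and dropping the rest. The equivalent logic in Python; the header names user and login_id are assumed spellings of the post's example columns:

```python
import csv
import io

# Stand-in for the CSV already parsed from S3.
CSV_TEXT = "user,login_id,email\njdoe,42,jdoe@example.com\n"

reader = csv.DictReader(io.StringIO(CSV_TEXT))
# Keep only the two columns that should reach Snowflake.
selected = [{"user": r["user"], "login_id": r["login_id"]} for r in reader]
print(selected)  # [{'user': 'jdoe', 'login_id': '42'}]
```

In a Mapper this should correspond to two rows in the expression table, $user to user and $login_id to login_id, with pass-through left unchecked so the unmapped columns are dropped.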
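For the Directory Browser question, the filter is "last modified within the past 7 days". A boto3 sketch of the same check, with the bucket and prefix again hypothetical; S3's LastModified comes back as a timezone-aware datetime, so the cutoff is computed in UTC:

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
cutoff = datetime.now(timezone.utc) - timedelta(days=7)

resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="incoming/")
recent_keys = [
    obj["Key"]
    for obj in resp.get("Contents", [])
    if obj["LastModified"] >= cutoff  # a week old or younger
]
print(recent_keys)
```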