Record Count in CSV
Is there an easier way to find the number of records in an incoming CSV? I want to log this information before processing the file. I appreciate the help, but I am looking for a setting or a simpler way to get the count.

Propagate Schema downstream
We are trying to write files into S3, but we want to write them in a specific JSON format so that there is no impact on the systems that already consume the existing structure. We are trying to use a Mapper to map the source to this JSON format. Where and how can I specify the format I am looking for so that I can map the values in the Mapper? Is there an alternative approach to achieve this?

Re: JSON Lines with each entity in individual line - Json formatter
Hi, based on the information you have provided, have you tried reading the contents of the file, doing a JSON Splitter on the records, and then calling the API?

GroupN - Failing for a specific case
We query the DB (which collects the events generated by the Parent system) and group the records based on the field "Type". Using GroupN, I get all the records for that particular type. Using this list of recordValues, we retrieve the rest of the information from the Parent system. This design works well because GroupN lets us retrieve all the records in a single query for all the recordValues returned by that run of the SQL. However, there is one scenario where, for a particular type of event, additional information comes along in the event. When we apply GroupN to this event type, grouping happens on the individual fields, and the correlation between recordValue and the additional information is lost. The workaround we used is to issue an individual query to the Parent system just for this event type, but this approach degrades performance because the rest of the snaps wait for this event type to complete.

[
  {
    "record_id": 53538,
    "event_type": "GROCERIES",
    "objectInfo": "Milk"
  },
  {
    "record_id": 53539,
    "event_type": "SPICES",
    "objectInfo": "Paprika",
    "noLongerCarry": "Salt"
  }
]

GroupN works when the object does not have "noLongerCarry": I can query all the information based on objectInfo and create an output document that carries the information for multiple record_ids in one single document. But when a noLongerCarry object is generated for any of the event_types, GroupN loses the correlation. This forces us to process such events individually, making multiple trips to the source system and generating a document for each entry. Is there a mapping technique that solves this issue?

Re: Array functions
That's a clean trick. Thanks for sharing it.

Re: Array functions
slice will work if I have a static length of JSON. Wouldn't it be more work to calculate the length and then remove the first entry?

Re: Array functions
Yes, thanks for that. It solved my use case. However, I bumped into the next problem. Now that I have an array, when I try to insert it into PostgreSQL where the column is varchar, I get the following error:

PostgreSQL - Insert - Merge[5d2c0dcc9b085f032f5b6977_4f9e9ef4-21e4-4cc1-a7f8-3698051ca592 – bb2ca011-824e-40ed-ab8d-1377b1f37dde] com.snaplogic.snap.api.SnapDataException: class java.util.ArrayList cannot be cast to class java.sql.Array (java.util.ArrayList is in module java.base of loader 'bootstrap'; java.sql.Array is in module java.sql of loader 'platform')

Do I have to explicitly cast it, and if so, what should the cast be?
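A minimal sketch of one possible workaround for the error above, assuming the target column is varchar and it is acceptable to store the array as plain text: add a Mapper just before the PostgreSQL - Insert snap and flatten the array into a string with an expression such as

    JSON.stringify($uris)
    $uris.join(",")

Here $uris is a placeholder for whichever field actually holds the array. The first expression keeps the JSON array notation; the second produces a comma-separated list. Either way the Insert snap receives a string, so it no longer tries to bind a java.util.ArrayList to a java.sql.Array.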
Array functions
Is there a way to delete the first entry in an array if the length of the array is more than 1? If the JSON looks like this:

{
  "content": {
    "type": "SHOP_ITEMS",
    "uris": [
      "Groceries",
      "Cleaning Products",
      "Household Items"
    ]
  }
}

I want the final output to be:

{
  "content": {
    "type": "SHOP_ITEMS",
    "uris": [
      "Cleaning Products",
      "Household Items"
    ]
  }
}

(A sketch of one possible expression for this appears after the last post below.)

Re: Amazon S3 account
I was able to identify the error. The pipeline had the S3 parameters only up to the CSV Formatter step, and the error occurred while writing the file to S3. I had to enter the S3 location values again in the File Writer snap, and then things started working.

Re: Amazon S3 account
We encountered a similar error. The S3 pattern is as specified. It worked well in QA, but when deployed to Prod it gives us this error. Any insights are highly appreciated.
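For the Array functions question above, a minimal sketch of one possible Mapper expression, assuming the array always lives at $content.uris and that pass-through is enabled so the rest of the document is preserved:

    $content.uris.length > 1 ? $content.uris.slice(1) : $content.uris

mapped to the target path $content.uris. When the array has more than one element, slice(1) returns everything after the first entry; otherwise the array is passed through unchanged.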