Re: Array Rename

@del Thank you so much for all your help. I think this is a better solution, and I will not need the Pivot or even the Data Validator if I go with it. It will also make the pipeline more dynamic: any new regex I need, I can simply add to the expression library. I hope the performance is as good as what I have seen with the Data Validator, but this is the best solution for my use case. I appreciate your help and will update the post with the final solution and performance metrics once I complete the code and testing.

Re: Array Rename

@del Thank you so much… this is exactly what I was looking for. I will verify the performance and update you. Could you provide an example, if possible, of how this can be done with an expression library? Since the Data Validator does not allow expression libraries or parameters, I am not sure how this can be achieved in a Mapper using an expression library.

Re: Array Rename

Thank you @del… This works, but in my case the number and names of the fields will differ from one file to another, as I am building a generic pipeline. Is there any way to make the number of fields and the field names in the Pivot snap dynamic? I am also unsure about performance, since this method splits each record into x records based on the number of fields. I appreciate you looking into the use case and providing directions.

Re: Array Rename

@cjhoward18 @bojanvelevski Any directions on how to approach the above use case?

Re: Array Rename

@cjhoward18 Thank you. This renames the arrays, but because more than one field in the object ends up with the same key name, the output keeps only the last instance.

@bojanvelevski Thank you. My requirement is to validate each field against a different regex. I have used the Data Validator with all the regexes I have, and I was trying to rename the fields to the corresponding regex names provided in the Data Validator snap so that each field is validated against its associated regex. The regex name will always be the second and last element of the array. Attaching a sample pipeline: data-validation-regex_2021_08_05 (1).slp (15.3 KB). If there is another recommended approach to achieve this, please let me know.

Array Rename

Hi Team, I am trying to achieve the transformation below and would appreciate directions on how to do it in a Mapper or any other way.

Source:

{
  "UBER_ID": ["1", "Integer_pattern"],
  "First_name": ["Majid", "TextOnly_pattern"],
  "Last_name": ["", "TextOnly_pattern"]
}

Target: I would like to rename the fields based on the pattern name in each array. Below is the output I am looking for.

{
  "Integer_pattern": ["1"],
  "TextOnly_pattern": ["Majid"],
  "TextOnly_pattern": [""]
}
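For reference, here is a minimal sketch, in plain Python rather than the SnapLogic expression language, of the logic discussed in this thread: a dictionary standing in for the expression library maps each pattern name (the second element of every field's array) to a regex, and each field is validated against the regex it names. The names PATTERN_LIBRARY, Integer_pattern, and TextOnly_pattern, and the regexes themselves, are illustrative assumptions, not the actual library contents. Validating in place, as below, also avoids the duplicate-key collision that a straight rename produces when several fields share the same pattern name.

import re

# Hypothetical stand-in for the expression library: pattern name -> regex.
PATTERN_LIBRARY = {
    "Integer_pattern": r"^\d+$",
    "TextOnly_pattern": r"^[A-Za-z]*$",
}

def validate_record(record: dict) -> dict:
    """Validate each field's value against the regex named in its array.

    Input shape (as in the question): {"UBER_ID": ["1", "Integer_pattern"], ...}
    Returns a per-field pass/fail report instead of renaming, so duplicate
    pattern names cannot overwrite each other.
    """
    report = {}
    for field, (value, pattern_name) in record.items():
        regex = PATTERN_LIBRARY.get(pattern_name)
        report[field] = {
            "value": value,
            "pattern": pattern_name,
            "valid": bool(regex and re.fullmatch(regex, value)),
        }
    return report

if __name__ == "__main__":
    source = {
        "UBER_ID": ["1", "Integer_pattern"],
        "First_name": ["Majid", "TextOnly_pattern"],
        "Last_name": ["", "TextOnly_pattern"],
    }
    print(validate_record(source))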
Re: Create Avro Dynamic Schema

Thank you @ptaylor… I am looking for directions on ways to generate the Avro schema dynamically from the data file or copybook structure. Once the schema is generated and stored in SLDB, I can use the above method to pass it to the child pipeline so that the Avro Formatter can use it while creating the Avro target file in S3.

Create Avro Dynamic Schema

Hi Team, I would like some recommendations/directions regarding a possible solution to the requirement below. I have created a generic pipeline to read 800 different mainframe VSAM binary files with different formats, parse them using the associated copybooks, and validate all the data fields against specified regular expressions; if a file passes validation, I would like to create a target Avro file. The Avro Formatter requires the Avro schema to be predefined. As my pipelines are generic and can work with any VSAM file as long as the S3 file location and associated copybook are provided, I would like to know if there is any way to generate the Avro schema dynamically based on the copybook structure or on the output of the Copybook parser (see the schema-inference sketch at the end of this page). Please note the source files have different record types, each with a different structure. I would appreciate any directions.

Re: Join using between clause

Thank you so much @koryknick. This is working as expected, and the performance issue is also resolved with this approach. I had earlier tried to use a Sequence generator to produce multiple documents by passing start and end values as parameters, but that approach took a lot of time because it triggered millions of child pipelines.

Re: Join using between clause

Hi @koryknick, I am not able to do the join at the DB level because my source files are mainframe binary-format VSAM files with COMP-3 fields in AWS S3. I am using the Copybook parser to convert the files from EBCDIC to ASCII with the corresponding copybook before I can join them.
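To make the "between" join concrete, here is a small sketch, assuming a hypothetical range table, of the range-lookup idea: each source value is matched to the row whose start/end brackets it, rather than expanding every range into individual documents with a Sequence generator. The field names (start, end, region, id) are illustrative, not from the actual pipeline.

import bisect

# Hypothetical lookup side of the "between" join: non-overlapping
# [start, end] rows, sorted by start.
ranges = [
    {"start": 1,   "end": 100,  "region": "A"},
    {"start": 101, "end": 500,  "region": "B"},
    {"start": 501, "end": 1000, "region": "C"},
]
starts = [r["start"] for r in ranges]

def lookup(value: int):
    """Return the range row where start <= value <= end, or None."""
    i = bisect.bisect_right(starts, value) - 1
    if i >= 0 and ranges[i]["start"] <= value <= ranges[i]["end"]:
        return ranges[i]
    return None

# Join each source record to its matching range instead of generating
# one document per value in the range.
records = [{"id": 7}, {"id": 250}, {"id": 999}]
joined = [{**rec, **(lookup(rec["id"]) or {})} for rec in records]
print(joined)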
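On the Avro schema question above: a minimal sketch, assuming the schema can be inferred from the Python types of one sample record coming out of the Copybook parser. The sample record, field names, and type mapping are illustrative assumptions, not actual copybook output; a real solution would also need per-record-type schemas and decimal/COMP-3 handling before the generated schema is stored in SLDB for the child pipeline and the Avro Formatter.

import json

# Hypothetical sample document, e.g. one record out of the Copybook parser.
sample = {"UBER_ID": 1, "FIRST_NAME": "Majid", "BALANCE": 10.5, "ACTIVE": True}

# Very small Python-type -> Avro-type mapping; extend as needed.
AVRO_TYPES = {bool: "boolean", int: "long", float: "double", str: "string"}

def infer_avro_schema(record: dict, name: str = "DynamicRecord") -> dict:
    """Build an Avro record schema from one sample record's field types."""
    fields = []
    for key, value in record.items():
        avro_type = AVRO_TYPES.get(type(value), "string")
        # Allow nulls so empty or missing values do not break serialization.
        fields.append({"name": key, "type": ["null", avro_type], "default": None})
    return {"type": "record", "name": name, "fields": fields}

if __name__ == "__main__":
    print(json.dumps(infer_avro_schema(sample), indent=2))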