How to Safely Append New Blackout Datetime to Pipeline Schedule Without Overwriting Existing Ones?

Hi community, I'd like to ask how I can set blackout dates for a specific pipeline. Recently the blackout date format changed: it is now possible to specify not only the date but also the time. According to the metadata, the blackout date fields are:

    jsonPath($, "$schedule.repeat.blackout_datetimes[*].startDate")
    jsonPath($, "$schedule.repeat.blackout_datetimes[*].endDate")

My questions are:

How can I add a new blackout datetime if no blackout is currently defined (i.e. the fields don't exist yet, and concat fails)?

When a blackout already exists, adding a new one overwrites the existing start and end date/time.

I'm looking for a universal approach to append one new blackout datetime (start + end) to the list, without creating duplicates, whether or not blackout dates already exist. Thank you very much in advance!

Re: split target csv file into more smaller CSV's

Hi, thank you for your answer. That is exactly how I did it: I split the rows into multiple files.

split target csv file into more smaller CSV's (Solved)

Hi SnapLogic experts, my pipeline creates one big CSV file (50k rows) as its result. Colleagues asked me whether it is possible to split this big CSV file into smaller ones (CSV files with a thousand rows each). My question is: how do I produce multiple CSV files as output? My idea is:

get the number of all output rows, using the Math.floor function
get how many iterations are needed
then loop in order to split the rows into CSV files: first file rows 1-1000, second file rows 1001-2000, and so on

Or is there another, better approach? Thank you

Re: Best Approach for Handling CSV Import with Truncate in Pipeline: Ensuring Correct Execution Order

Thank you very much, it works as I want 🙂

Best Approach for Handling CSV Import with Truncate in Pipeline: Ensuring Correct Execution Order (Solved)

I want to ask: I'm attaching a screenshot of the pipeline that should read a CSV file, truncate SQL tables, and, after the successful truncate, insert all the records from the CSV into the truncated table. It seems that the pipeline doesn't work correctly: it reads the CSV file (100k rows), then does the truncate, and inserts the new rows. Shouldn't there be a Gate or a sequence before the truncate step, to make sure that the truncate happens only once, after all the rows are loaded? What's the best approach? Thank you.

Logging mechanism in order to get parent pipeline name in logging pipeline

Hi SnapLogic experts, I have a general question concerning logging. Let's say a pipeline with a logging function can be called by 5 parent pipelines:

parent_pipeline_A
parent_pipeline_B
parent_pipeline_C
..

Then I have child_pipeline_A, child_pipeline_B, and pipelineWithLogging(). The workflow is: a parent calls a child, and the child calls another child, pipelineWithLogging():

parent_pipeline_A -> child_pipeline_A -> pipelineWithLogging()
parent_pipeline_B -> child_pipeline_B -> pipelineWithLogging()
parent_pipeline_C -> pipelineWithLogging()

Question: what is the best approach to get the pipeline names from the parent pipelines into pipelineWithLogging() in order to log them? Is it a good approach to get rootRuuid and parentRuuid, pair them later on, and so be able to read a log message like "FileA from parent_pipeline_A processed successfully"? (It is just dummy text :)) Thank you
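For the logging question, one approach that avoids pairing ruuids after the fact is to pass the caller's name down explicitly at every Pipeline Execute hop. A sketch, assuming a pipeline parameter named PARENT_NAME is declared on each child and on pipelineWithLogging(), and that intermediate children simply forward the value; $filename is a hypothetical field. pipe.label is the built-in label of the currently running pipeline.

    PARENT_NAME value in the parent's Pipeline Execute snap, expression enabled:
        pipe.label
    PARENT_NAME value in an intermediate child's Pipeline Execute snap:
        _PARENT_NAME
    log-message expression inside pipelineWithLogging():
        $filename + " from " + _PARENT_NAME + " processed successfully"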
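On the truncate-ordering question, the Gate the poster suspects is indeed one way to serialize the steps: the Gate snap waits for all upstream documents and emits a single document, so a truncate wired after it runs exactly once, after the whole file has been read. A sketch, assuming the Gate collects its first input view under the key input0 (field name to be verified against the snap's actual output):

    $input0.length    (row count the Gate collected; the truncate branch fires once, on this single document)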
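For the CSV-splitting question, a sketch that follows the Math.floor idea: give each row a running counter (e.g. with a Sequence snap; $row_number is an assumed field name), derive a bucket per 1000 rows in a Mapper, and use the bucket in an expression-enabled File Writer filename. A Group By N snap with a group size of 1000 is a common alternative to the manual counter.

    Mapper, target field $bucket (rows 1-1000 -> 0, rows 1001-2000 -> 1, ...):
        Math.floor(($row_number - 1) / 1000)
    File Writer filename, expression enabled:
        'output_part_' + $bucket + '.csv'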
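For the blackout-datetime question, a minimal sketch in the SnapLogic expression language, assuming the new window arrives in hypothetical fields $newStart and $newEnd: .get() falls back to an empty list when blackout_datetimes doesn't exist yet, the filter drops a pre-existing duplicate of the same window, and concat appends the new entry, so one expression covers both the empty and the populated case.

    $schedule.repeat.get('blackout_datetimes', [])
        .filter(b => !(b.startDate == $newStart && b.endDate == $newEnd))
        .concat([{ startDate: $newStart, endDate: $newEnd }])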
Re: Split string into rows and save it to file row by row, not in one string

Hi Alexandar, I am curious whether the scenario below is possible to do in SnapLogic.

Scenario: the first row has a fixed length of 10 fields:

    "Audi";"5558";"TypeA8";"2022";"Diesel";"";test1;"";"";"Germany";

The second row has 5:

    "Mercedes";"";"TypeS";"2022";"India";

There can be empty fields in the first row as well as in the second row. Is it possible, instead of this:

    "Audi";"5558";"TypeA8";"2022";"Diesel";"";test1;"";"";"Germany";
    "Mercedes";"";"TypeS";"2022";"India";"";"";"";"";"";

to get this (i.e. to get rid of the empty fields from field 6 to field 10)?

    "Audi";"5558";"TypeA8";"2022";"Diesel";"";test1;"";"";"Germany";
    "Mercedes";"";"TypeS";"2022";"India";

Thank you

Re: Split string into rows and save it to file row by row, not in one string

This works for my example really well 🙂 Sorry, but one last question:

Re: Split string into rows and save it to file row by row, not in one string

Looks good, but one question: in my example, one row has more fields than the other. Is it possible to "connect"/export the two lines into one file, but without the empty field at the end of the shorter row? (Mercedes is shorter, and the CSV Formatter creates a last empty field.) Instead of this:

    "Mercedes";"6658";"TypeS";"2022";""
    "Audi";"5558";"TypeA8";"2022";"Diesel"

get this:

    "Mercedes";"6658";"TypeS";"2022";
    "Audi";"5558";"TypeA8";"2022";"Diesel"

Thank you

Split string into rows and save it to file row by row, not in one string (Solved)

Let's say I have this input string in my Mapper:

    "Mercedes;6658;TypeS;2022;\r\nAudi;5558;TypeA8;2022;Diesel"

How can I save it to a CSV file as single rows? In this example it should be:

    Mercedes;6658;TypeS;2022;
    Audi;5558;TypeA8;2022;Diesel

I've tried myString.split('\r\n'). It creates two strings that I cannot export/save to a file, because the result is a list; or the strings ended up comma-separated and stored as one line in the export file. Thank you for your help
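On the original split question, a sketch of one common pattern: split the string into an array in a Mapper, fan the array out to one document per row with a JSON Splitter, and let a downstream formatter plus File Writer write each document as its own line. The expression assumes the input string is in $myString:

    Mapper, target field $rows:
        $myString.split('\r\n')

Pointing a JSON Splitter at $rows then yields one document per line, which avoids the "list stored as one comma-separated line" problem from the original attempt.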
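For the follow-up about the CSV Formatter padding the shorter row with a final "", a post-formatting regex sketch, assuming each formatted line sits in a hypothetical field $line:

    $line.replace(/;""$/, ';')

This turns "Mercedes";"6658";"TypeS";"2022";"" into "Mercedes";"6658";"TypeS";"2022"; and leaves the full-width Audi row unchanged, since that row does not end in ;"".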
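For the variable-width scenario asked earlier in the thread (dropping only the trailing run of empty fields while keeping the embedded ones), a similar sketch, again with the line in a hypothetical $line:

    $line.replace(/("";)+$/, '')

This strips a run of "";"";... only at the end of a line, so the padded Mercedes row shrinks back to five fields while the empty fields in the middle of either row stay untouched.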