Contributions

Re: How to remove columns with null values alone in pipeline?
Thanks again, Cole. For the first expression, all the non-null columns are coming back as an array for each row. Is that expected?

Re: How to remove columns with null values alone in pipeline?
Thanks, Cole. Kindly share the other expression as well.
Regards, Amar.

How to remove columns with null values alone in pipeline?
Hi,
I have a scenario where some columns have null values and some do not. The objective is to not bring in the columns with null values for each record.

Source:

id   a     b     c     d     e
abc  null  1     2     null  null
def  11    null  22    33    null
ijk  111   222   null  333   444
lmo  null  null  null  null  5555

Expected (for record abc):

id   b  c
abc  1  2

and the same thing for the other records.

Does anyone know a trick for this? Please let me know.
Regards, Amar.

Re: How to do insertion faster for Redshift
I stopped the execution as it had been running for 49 hours.

Re: How to do insertion faster for Redshift
Hi Aleung, PFA. Kindly let me know if this is what you are looking for.

Re: How to do insertion faster for Redshift
Run ID: 595db8dd17f60c4b21d64513_b6b70371-72d5-40e1-9760-cfdb1b3f00be

Re: How to do insertion faster for Redshift
I already have a filter condition before the Bulk Load snap to restrict the records, and it is still executing after 43 hours. The total record count in the target table is close to 2.5 million records. Is there a way to reduce the execution time?

Re: How to do insertion faster for Redshift
Hi Rajesh,
I have the batch size and pool size in the Pipeline Execute snap set to 1, and the record count for each loop cycle is 70K. Any tips on what the batch size and pool size should be to make this pipeline execution complete sooner? Please let me know.
Regards, Amar.

Re: How to do insertion faster for Redshift
Hi Rajesh,
Yes, it is the Redshift Bulk Load snap. The pipeline checks a condition in a Redshift Select snap, and if it is true, it executes itself again until the condition is false. So the set of snaps in this pipeline is executed again and again in a loop until the condition is false. Other than updating the batch and pool size in the account settings, is there a way to address this slow execution?
Regards, Amar.

How to do insertion faster for Redshift
I am trying to insert 70K records in a loop until a condition is false. The looping logic works fine, but it takes 1 hour to insert the 70K records each time. PFA the pipeline I am executing. Is there a way to perform the insertion faster? Any tips would be helpful.
Regards, Amar.
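For reference, here is a minimal sketch of the per-record transformation being asked for in the null-columns thread above: drop every field whose value is null, keeping only the populated columns. This is plain Python to illustrate the logic only; it is not the SnapLogic expression shared by Cole in the thread (which is not quoted here), and the sample data simply mirrors the source table from the question.

```python
# Sketch only: remove null-valued fields from each record, keeping the rest.
records = [
    {"id": "abc", "a": None, "b": 1,    "c": 2,    "d": None, "e": None},
    {"id": "def", "a": 11,   "b": None, "c": 22,   "d": 33,   "e": None},
    {"id": "ijk", "a": 111,  "b": 222,  "c": None, "d": 333,  "e": 444},
    {"id": "lmo", "a": None, "b": None, "c": None, "d": None, "e": 5555},
]

def drop_null_columns(record):
    """Return a copy of the record containing only its non-null fields."""
    return {key: value for key, value in record.items() if value is not None}

for record in records:
    print(drop_null_columns(record))
# First record prints: {'id': 'abc', 'b': 1, 'c': 2}
```

The result stays a per-record mapping of column name to value rather than a bare array, which is one way to keep track of which columns survived for each row.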
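For context on the Redshift thread: Redshift ingests data far faster through a single COPY of a staged file in S3 than through many small inserts, and fewer, larger batches generally beat many 70K-record loop cycles. The sketch below shows that general staging-plus-COPY pattern in plain Python (boto3 and psycopg2); it is not the internals of the SnapLogic Redshift Bulk Load snap, and the bucket name, IAM role, table, and connection details are placeholders.

```python
# General Redshift bulk-load pattern (illustrative sketch, placeholder settings):
# stage the batch as a CSV in S3, then issue one COPY for the whole batch.
import boto3
import psycopg2

def bulk_load(local_csv_path, bucket, key, table):
    # 1. Stage the batch file in S3.
    boto3.client("s3").upload_file(local_csv_path, bucket, key)

    # 2. Issue a single COPY so Redshift loads the whole batch in one pass.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
        port=5439,
        dbname="dev",
        user="awsuser",
        password="...",
    )
    with conn, conn.cursor() as cur:
        cur.execute(
            f"COPY {table} FROM 's3://{bucket}/{key}' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "  # placeholder
            "FORMAT AS CSV;"
        )
    conn.close()

bulk_load("batch_70k.csv", "my-staging-bucket", "loads/batch_70k.csv", "target_table")
```

Since the Bulk Load snap is generally understood to rely on the same S3 staging and COPY approach, per-iteration overhead in a looping pipeline tends to come from how many times the loop invokes the load, so consolidating iterations into larger batches is usually the first thing to try.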