Re: ELT Execute query execution flow

sainig, your assumption is correct. They run serially.

Re: ELT Snaps in ELT Pipelines do not process data in Design Mode

@murphy-brian, the ELT DML Snaps (namely LOAD, INSERT-SELECT, MERGE INTO, SCD2, and EXECUTE) were intentionally designed not to execute in Validation (Design) mode, for the following reasons:

- These Snaps could inadvertently corrupt or invalidate data on production target tables if someone accidentally ran validation on those pipelines. We want to prevent such mishaps from occurring.
- Validation (Design) mode runs fast when the DML Snaps are not writing into the target tables; otherwise it would not only add performance overhead but also increase CDW costs.

To understand your problem better, can you provide some example use cases? That will help us address it.

Re: Can AZURE SQL database be used with ELT snappacks

Unfortunately, Azure SQL is not supported in the ELT Snap Packs.

Re: Getting connection timeout error in XML Parser

Hello @GBekkanti, it is not an issue with the input data itself; it is an issue with where the input data is coming from. That is why I mentioned the problem could be either the File Reader Snap or any other Snap that reads the input data and passes it to the XML Parser. For example, if the Snap reading the data does so from a network or a database that is not constantly up, or that is intermittently slow, it will not be able to pass that data on to the XML Parser Snap, and the XML Parser Snap will eventually time out waiting for the data. It would help if you could share your pipeline, or at least a screenshot of it, so we can see how the data is arriving.

Re: Populate Records

Hello @Ajay_Chawda, your initial use case is different from the one you sent me later. In the latest use case, there is no need to switch columns to rows, and hence no need to use UNION, etc.
I have attached the pipeline and the two CSV files, input1.txt and input2.txt, that address your latest use case. You will need to modify the File Reader Snaps' configuration to point to wherever you save the input files, as they all currently point to my local system. Hope this helps.

Community_Case2_2019_05_25.slp (13.9 KB) input1.txt (110 Bytes) input2.txt (219 Bytes)

Re: Getting connection timeout error in XML Parser

You may want to check the File Reader Snap, or whichever Snap precedes the XML Parser. The XML source reader may be having trouble; to me this suggests some kind of network error.

Re: Populate Records

You can do the following. I have written it in SQL for ease of representation, but you can translate it into Snaps:

    SELECT Account_Summary, ID, Concatenated_ID, Cost_Center_ID, Company_ID,
           Location_ID, Business_Unit_ID, year, unpivot1.amount
    FROM input2,
         (SELECT tax000 FROM input1
          UNION ALL SELECT tax100 FROM input1
          UNION ALL SELECT tax020 FROM input1
          UNION ALL SELECT tax030 FROM input1) unpivot1 (amount)

I have attached a sample pipeline image that shows how the above SQL can be translated into Snaps. The FileReader through FileReader3 Snaps read from the input2 table, and the FileReader4 Snap reads from the input1 table. Hope this helps.

Re: Effect of Pipeline failure

You could try the following: add an Error View to the Pipeline Execute (pipexec) Snap so that any error coming from the child pipeline is logged and execution continues. That way the parent pipeline will not terminate.

Re: Date filter in MongoDB snap

I am assuming that when you say it is not working, you mean it gives a syntax error. I tried the exact same syntax against my local MongoDB instance through MongoDB Compass (a front-end tool that connects to MongoDB directly), and it gives a syntax error there as well. I think it does not like the letter "T" that separates the date from the time component.
You can simply remove it and try again; it should work. For example, in my table I had a column create_time that I used as shown below:

    {$and: [{"create_time": {$gte: "2019-02-12 20:53:31.000"}},
            {"create_time": {$lte: "2019-02-12 20:53:31.000"}}]}

Hope this helps.

Re: Snaplogic SQLServer Insert Snap

I am not sure what is going on in the pipeline, since I cannot see it. But based on the error message, I can say that the target table of the insert, which happens to be the foreign table (since it has FileId as the foreign key), does not seem to have the column "FileId". The reason your JSON Generator might be working is that it does not pass the column name explicitly in the insert statement. Databases normally allow a default value to be inserted into a column that the insert statement does not name: rather than looking for that column in the statement, the database fills it, by position, with its default value, which may be NULL or whatever default the table defines. If you can send the SQL statement, I can take a look.
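The default-value behavior described above can be sketched with SQLite (the table and column names here are hypothetical illustrations, not the poster's actual SQL Server schema): a column that an INSERT never names is filled with its default, while explicitly naming a column that does not exist in the target table raises exactly the kind of error reported.

```python
import sqlite3

# Hypothetical schema for illustration only -- not the poster's tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT, FileId INTEGER DEFAULT NULL)")

# Insert that never mentions FileId: succeeds, and the database fills
# FileId with its default (NULL here).
conn.execute("INSERT INTO files (name) VALUES ('a.txt')")

# Insert that explicitly names a column missing from the target table:
# fails, analogous to the error message in the question.
try:
    conn.execute("INSERT INTO files (name, MissingCol) VALUES ('b.txt', 1)")
except sqlite3.OperationalError as e:
    print("error:", e)

row = conn.execute("SELECT name, FileId FROM files").fetchone()
print(row)  # ('a.txt', None)
```

The same principle applies in SQL Server: only columns named in the INSERT column list must exist, and omitted columns fall back to their declared defaults (or NULL).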