Contributions

Re: Excel Formatter - need date datatype with this format MM/DD/YYYY
This doesn't work. The Excel Formatter snap does not let you change the datatype of a column, no matter what datatype you send it; it always defaults to General. I used the Oracle PL/SQL package linked below to generate an Excel document instead. That package has a similar quirk in that it defaults to "Custom", but for our purposes the API we send the file to treats Custom and Date datatypes equally where a date is concerned; it just doesn't like General. I don't know why. I recommend, somehow, allowing the Excel Formatter snap to set column datatypes. I don't need a response to this unless you have a fix that doesn't require PL/SQL.

Reference: Create an Excel-file with PL/SQL, AMIS Data Driven Blog (Oracle & Microsoft Azure), 19 Feb 2011. The post describes a PL/SQL package, adapted from the author's (IR) Report to Excel (xlsx) APEX plug-in, for creating an Excel 2007 file with a few lines of PL/SQL.

Excel Formatter - need date datatype with this format MM/DD/YYYY
I'm using the Excel Formatter and selecting from Oracle. I need the column in Excel to be a date datatype with the format MM/DD/YYYY and no time on the end. I can get the date into that format if I do to_char(mydate, 'MM/DD/YYYY') in the select, but the datatype is always General, whether or not I check "Translate Date and Time Types". If I leave the column as a date in the select, as it naturally is from the table datatype, or I do to_date(to_char(mydate, 'MM/DD/YYYY'), 'MM/DD/YYYY'), it is still always General regardless of the checkbox, and it puts the time on the end. I can't win. When I export from Oracle SQL Developer to Excel, that tool produces a date datatype with the format MM/DD/YYYY with no problem.

Re: Is there a way to generate Excel files and not write them to disk?
That's what I'm doing, but I don't need to save the file on disk; that seems like a waste of time. It seems like I should be able to pass a variable to the REST Post snap that points to the file in memory, instead of writing it with a File Writer and then reading it back from disk in the REST Post. But, OK.

Is there a way to generate Excel files and not write them to disk?
I have an Excel Formatter and a REST Post snap, and I want to send the Excel file in the REST Post snap. Why do I have to write the Excel file to disk first? Can I create the Excel file in memory and pass it to the REST Post snap somehow, without going to disk first?

[Solved] Re: Get current JSON document being inserted
I had to do JSON.stringify($original), but that worked. Using $original by itself caused insertion problems into a VARCHAR2 or CLOB because it wasn't the same datatype. Your recommendation to put it in the Mapper snap of the error pipeline worked out great, because I only had to change the one error pipeline and not all 20 or so pipelines that call the error pipeline. Thanks.

Re: Get current JSON document being inserted
I have a Mapper in the error pipeline. Let me try that.

Re: Get current JSON document being inserted
I put $original as the expression populating an error pipeline parameter in the pipeline properties of the pipeline. When I run the pipeline it complains: "Reason: Value referenced in the sub-expression '$original' is null". Why?

Get current JSON document being inserted
I have a pipeline with an Oracle Insert snap and an error pipeline. Everything works. When something in the pipeline fails, I pass variables to the error pipeline and store them in a table. I want to pass the JSON document that failed to insert. What variable do I use to get the current JSON document that failed, without having to concatenate all the values into a string myself? I know it's $.something, but what?
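For reference, here is a minimal sketch of the approach that ended up working in this thread: the error pipeline's Mapper maps JSON.stringify($original) to a field, and the Oracle Insert snap writes that field into a CLOB column so the stringified document fits. The table and column names below are hypothetical, just to illustrate the shape of the target table.

    -- Hypothetical error-log table for the error pipeline's Oracle Insert snap.
    -- failed_doc holds the JSON.stringify($original) output from the Mapper;
    -- it is a CLOB rather than VARCHAR2 so large documents do not cause
    -- length or datatype problems.
    create table pipeline_error_log (
        logged_at     timestamp default systimestamp,
        pipeline_name varchar2(200),
        error_reason  varchar2(4000),
        failed_doc    clob
    );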
[Solved] Implied commit after oracle delete snap
I have an ETL pipeline. I want to do: delete from table; insert into table; commit. But the Oracle Delete snap has an implied commit after it, so what I actually get is: delete; commit; insert; commit. My table sits there blank, with no records, until all the inserting is finished, because of the implied commit after the Oracle Delete snap, which I cannot turn off. Is there a way to prevent this?

Controlling Auto Commit
I have a pipeline with a JDBC Execute running a select against the source (Postgres), an Oracle Delete of the target table, and an Oracle Insert into the target table. I turn auto commit off in the target Oracle account, but for some reason SnapLogic still commits after the Oracle Delete snap, and the target table has zero records when queried from an outside session. I want the target table to have the old data and then, within a millisecond or so, have the new data. There should be no intermittent period where the table has zero records visible from an outside session. How do I make the pipeline delete and insert records in its own session and then commit, flipping the table data from old to new immediately? If I wrote this in PL/SQL I could do: delete from target_table; insert … into target_table; commit;
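For comparison, this is roughly what that single-transaction version looks like as a PL/SQL block. It is only a sketch: the column list and the source of the rows are placeholders (in the actual pipeline the rows come from Postgres via the JDBC Execute snap, not from a staging table), and it assumes the whole block is executed as one statement against Oracle rather than through the separate Delete and Insert snaps.

    -- Sketch: delete and reload inside one transaction, so sessions outside
    -- this one see either the old rows or the new rows, never an empty table.
    begin
      delete from target_table;

      insert into target_table (col1, col2)    -- placeholder column list
      select col1, col2                        -- placeholder source of new rows
      from   source_staging_table;

      commit;  -- the single commit flips the visible data from old to new
    end;
    /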