EDWH to Azure Blob Storage and Azure Data Lake
Submitted by @stodoroska from Interworks

Selects data warehouse data from an on-premises Oracle table, converts it to CSV, and stores the file in both Azure Blob Storage and Azure Data Lake.

Screenshot of pipeline

Configuration
To make this pipeline work, you need to configure the child pipeline that reads the table; because it runs on premises, it is invoked through a Pipeline Execute Snap. You also need to configure the Azure accounts and the Azure paths for Blob Storage and Data Lake. A sketch of the equivalent flow in code appears below.

Sources: Oracle table
Targets: Azure Blob Storage and Azure Data Lake
Snaps used: Oracle Select, Pipeline Execute, CSV Formatter, File Writer, File Reader

Downloads
Attach pipelines and any necessary resources such as expression libraries or source files.
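For readers who want to prototype the same flow outside SnapLogic, here is a minimal Python sketch, assuming the python-oracledb, azure-storage-blob, and azure-storage-file-datalake packages. The connection strings, table name, container, and file names are placeholders, not values from the original pipeline.

```python
import csv
import io

import oracledb
from azure.storage.blob import BlobServiceClient
from azure.storage.filedatalake import DataLakeServiceClient

# Placeholder credentials and names -- assumptions, not from the original post.
ORACLE_DSN = "dbhost:1521/orclpdb"
BLOB_CONN_STR = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."
ADLS_CONN_STR = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."

# 1. Select the data warehouse rows from the on-premises Oracle table.
with oracledb.connect(user="edwh", password="secret", dsn=ORACLE_DSN) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM edwh_table")  # table name is a placeholder
        headers = [col[0] for col in cur.description]
        rows = cur.fetchall()

# 2. Format the result set as CSV in memory.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(headers)
writer.writerows(rows)
data = buf.getvalue().encode("utf-8")

# 3. Write the CSV file to Azure Blob Storage.
blob = (BlobServiceClient.from_connection_string(BLOB_CONN_STR)
        .get_blob_client(container="edwh-export", blob="edwh_table.csv"))
blob.upload_blob(data, overwrite=True)

# 4. Write the same CSV file to Azure Data Lake.
adls_file = (DataLakeServiceClient.from_connection_string(ADLS_CONN_STR)
             .get_file_system_client("edwh-export")
             .get_file_client("edwh_table.csv"))
adls_file.upload_data(data, overwrite=True)
```

In the pipeline itself, these steps correspond roughly to the Oracle Select, CSV Formatter, and File Writer Snaps listed above.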
CDC Delta Load Oracle
Submitted by @stodoroska from Interworks

Delta load from a staging table to an Oracle DWH table. The pipeline detects whether a record is an update, delete, or insert according to a flag in the Mlog table, Oracle's native table for storing transaction logs. Based on those flags and the record id, the pipeline decides which action to perform: update, delete, or insert. A sketch of this routing logic in code appears below.

Screenshot of pipeline

Configuration
In each of the Snaps, define your respective queries and DB accounts.

Sources: Oracle DB table
Targets: Oracle DB table
Snaps used: Oracle Execute, Router, Mapper, Aggregate, Unique, Join, Copy

Downloads
Attach pipelines and any necessary resources such as expression libraries or source files.
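As a rough illustration of the routing logic, here is a minimal Python sketch using python-oracledb. The connection details, table names, and the flag and id column names (modeled on the DMLTYPE$$ column of Oracle materialized-view logs) are assumptions, not taken from the original pipeline.

```python
import oracledb

# Placeholder connection details -- assumptions, not from the original post.
conn = oracledb.connect(user="dwh", password="secret", dsn="dbhost:1521/orclpdb")

with conn.cursor() as cur:
    # Oracle materialized-view logs expose the change type in DMLTYPE$$
    # ('I' insert, 'U' update, 'D' delete); table and column names assumed.
    cur.execute("SELECT dmltype$$, id FROM mlog$_staging")
    changes = cur.fetchall()

with conn.cursor() as cur:
    for flag, rec_id in changes:
        if flag == "I":
            # Insert: copy the staged record into the DWH table.
            cur.execute(
                "INSERT INTO dwh_target (id, payload) "
                "SELECT id, payload FROM staging WHERE id = :id",
                id=rec_id)
        elif flag == "U":
            # Update: overwrite the DWH record with the staged version.
            cur.execute(
                "UPDATE dwh_target SET payload = "
                "  (SELECT payload FROM staging WHERE id = :id) "
                "WHERE id = :id",
                id=rec_id)
        elif flag == "D":
            # Delete: remove the record from the DWH table.
            cur.execute("DELETE FROM dwh_target WHERE id = :id", id=rec_id)

conn.commit()
```

In the actual pipeline, this branching is what the Router Snap performs on the flag values, with the individual DML statements handled by Oracle Execute Snaps.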