01-24-2019 01:36 PM
Submitted by @stodoroska from Interworks
Select data warehouse data from an on-premises Oracle table, convert it into CSV, and store it in both Azure Blob Storage and Azure Data Lake.
Screenshot of pipeline
To make this pipeline work, you need to configure the child pipeline that reads the on-premises table; because it runs on-premises, it is invoked through a Pipeline Execute Snap. You also need to configure the Azure accounts and target paths for Blob Storage and Data Lake.
Sources: Oracle table
Targets: Azure Blob Storage and Azure Data Lake
Snaps used: Oracle Select, Pipeline Execute, CSV Formatter, File Writer, File Reader
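Outside SnapLogic, the CSV Formatter step of this pattern can be sketched in plain Python: the function below turns a list of row dictionaries (as an Oracle Select Snap would emit them) into CSV text ready to be uploaded to both targets. This is an illustrative sketch, not the Snap's implementation; the sample rows and the Azure upload calls mentioned in the comments are assumptions.

```python
import csv
import io

def rows_to_csv(rows):
    """Mimic the CSV Formatter Snap: convert a list of dicts into CSV text
    with a header row taken from the first document's keys."""
    if not rows:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical rows, as an Oracle Select might emit them
rows = [
    {"ID": 1, "NAME": "Alice"},
    {"ID": 2, "NAME": "Bob"},
]
csv_text = rows_to_csv(rows)

# The resulting text would then be written to both targets, e.g. with the
# Azure SDKs: BlobClient.upload_blob(...) for Blob Storage and
# DataLakeFileClient.upload_data(...) for Data Lake (not shown here, since
# both require live accounts and credentials).
```

The same CSV string can be written twice, once per target, which mirrors how the pipeline fans out a single formatted payload to both Azure destinations.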