SQL Server to Azure SQL Data Migration Pattern
Created by @asharifian, SnapLogic

This pattern migrates table data from SQL Server and loads it into Azure SQL. Business case: SQL Server is the transactional database and encounters high data usage; Azure SQL is used for analytics and BI reporting. This use case is for a one-time data migration. Thereafter, if necessary, a delta pipeline can be created by modifying these pipelines.

Configuration

Prerequisites: Tables can either be created on the fly, using the "Create table if not present" option in the Azure SQL - Bulk Load Snap in the child pipeline, or, if your tables have specific data type requirements, you can create the tables prior to executing the pipeline. You will need accounts for the Email Sender, Azure SQL, and SQL Server Snaps. The pipeline can accommodate the Azure SQL Dynamic account and SQL Server Dynamic account, in case you want the integration to be portable across differing databases.

Sources: SQL Server tables
Targets: Azure SQL tables
Snaps used: JSON Generator, Pipeline Execute, Tail, Email Sender, Azure SQL - Execute, SQL Server - Execute, Azure SQL - Bulk Load

Downloads
SQL Server-Azure SQL-Data Migration.slp (8.9 KB)
z_SQL Server-Azure SQL-Data Migration.slp (6.8 KB)

Salesforce Contacts - Database Replication (SQL Server-MongoDB-Oracle)
Created by @asharifian

This pattern provides an example pipeline to replicate data from Salesforce to multiple database types: SQL Server, MongoDB, and Oracle. Data replication is useful for disaster recovery and analytics, and improves overall system resilience and reliability.

Configuration

Two approaches are covered in this pattern:
- Data replication upon success of an insert into a dependent database, such as data first being inserted into MongoDB and, upon success, inserted into Oracle.
- Data replication in parallel, such as data being inserted into MongoDB and SQL Server at the same time.

Sources: Salesforce Contacts
Targets: Database tables for customer contact information in SQL Server, MongoDB, and Oracle
Snaps used: Salesforce Read, Mapper, Copy, MongoDB - Insert, SQL Server - Insert, Oracle - Insert

Downloads
Salesforce Contacts-Database Replication (SQL Server-MDB-Oracle).slp (14.5 KB)

Ingest data from SQL Server (RDBMS) to AWS Cloud Storage (S3)
Contributed by @SriramGopal from Agilisium Consulting

The pipeline is designed to fetch records on an incremental basis from any RDBMS system and load them to cloud storage (Amazon S3 in this case) with partitioning logic. This use case is applicable to Cloud Data Lake initiatives. The pipeline also includes date-based data partitioning at the storage layer and a data validation trail between source and target.

Parent Pipeline
- Control Table check: Gets the last run details from the Control table.
- ETL Process: Fetches the incremental source data based on the Control table and loads the data to S3.
- Control Table write: Updates the latest run data to the Control table for tracking.

S3 Writer Child Pipeline

Audit Update Child Pipeline

Control Table - Tracking

The Control table is designed in such a way that it holds the source load type (RDBMS, FTP, API, etc.) and the corresponding object name. Each object load records the load start/end times and the records/documents processed for every load. The source record fetch count and target table load count are calculated for every run. Based on the status ('S' for success or 'F' for failure) of the load, automated notifications can be triggered to the technical team.

Control Table Attributes:
- UID – Primary key
- SOURCE_TYPE – Type of source: RDBMS, API, Social Media, FTP, etc.
- TABLE_NAME – Table name or object name
- START_DATE – Load start time
- END_DATE – Load end time
- SRC_REC_COUNT – Source record count
- RGT_REC_COUNT – Target record count
- STATUS – 'S' (Success) or 'F' (Failed), based on the source/target load

Partitioned Load

For every load, the data is partitioned automatically based on the transaction timestamp in the storage layer (S3).

Configuration

Sources: RDBMS database, SQL Server table
Targets: AWS storage (S3)
Snaps used:
- Parent pipeline: Sort, File Writer, Mapper, Router, Copy, JSON Formatter, Redshift Insert, Redshift Select, Redshift - Multi Execute, S3 File Writer, S3 File Reader, Aggregate, Pipeline Execute
- S3 Writer child pipeline: Mapper, JSON Formatter, S3 File Writer
- Audit Update child pipeline: File Reader, JSON Parser, Mapper, Router, Aggregate, Redshift - Multi Execute

Downloads
IM_RDBMS_S3_Inc_load.slp (43.6 KB)
IM_RDBMS_S3_Inc_load_S3writer.slp (12.2 KB)
IM_RDBMS_S3_Inc_load_Audit_update.slp (18.3 KB)

Schema Bulk Load from SQL Server to Snowflake
Created by @ebarner

This pipeline loads the schema from the specified SQL Server table into the Snowflake table.

Configuration

Sources: SQL Server table
Targets: Snowflake table
Snaps used: SQL Server - Select, Snowflake - Bulk Load

Downloads
Schema Bulk Load.slp (4.5 KB)

Excel to SQL Server after Clearing the Lookup Table
Submitted by @stodoroska from Interworks

This pipeline reads data from an Excel spreadsheet, deletes all records from the corresponding SQL Server lookup table, and then re-populates the same lookup table with the data that was previously read.

Configuration

The Join Snap waits until all records have been deleted from the database table; the table is then re-populated.

Sources: Excel file on a file share system
Targets: SQL Server database table
Snaps used: File Reader, Excel Parser, Copy, Filter, SQL Server - Delete, Mapper, Join, SQL Server - Execute

Downloads
Re-populate.SQLServer.LookupTable.slp (10.5 KB)

Read data from a file in SnapLogic Database, transform the data in Workday and load data into SQL Server
Created by @sreddi

This pipeline pattern extracts data from a file in the SnapLogic Database, transforms the data in Workday, and loads the data into a target such as SQL Server.

Configuration
- File Reader Snap
- Filter Snap
- Mapper Snap
- SQL Server - Bulk Load Snap

Sources: JSON file in the SnapLogic Database
Targets: Workday Read to Staffing, SQL Server Bulk Load to create a new table
Snaps used: File Reader, JSON Parser, Filter, Mapper, Workday Read, SQL Server - Bulk Load

Downloads
Workday_PM.slp (13.0 KB)
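As a rough illustration of the read-filter-map-bulk-load flow this last pattern implements, here is a minimal Python sketch. It is an assumption-laden stand-in, not the pipeline itself: the record fields and target table are hypothetical, and the standard-library sqlite3 module stands in for the SQL Server - Bulk Load target.

```python
import json
import sqlite3

# Hypothetical input records, standing in for the JSON file read by the
# File Reader and JSON Parser Snaps.
raw = json.loads("""[
  {"workerId": "W-001", "name": "Ada Lovelace", "status": "Active"},
  {"workerId": "W-002", "name": "Alan Turing",  "status": "Terminated"}
]""")

# Filter Snap equivalent: keep only the records of interest.
active = [r for r in raw if r["status"] == "Active"]

# Mapper Snap equivalent: reshape source fields into target-table columns.
rows = [(r["workerId"], r["name"]) for r in active]

# Bulk Load equivalent (sqlite3 as a stand-in for SQL Server): create the
# table if not present, then insert all mapped rows in a single batch.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS staffing (worker_id TEXT PRIMARY KEY, full_name TEXT)"
)
conn.executemany("INSERT INTO staffing VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM staffing").fetchone()[0])  # prints 1
```

The same filter-then-map shape applies regardless of the target: only the final batched insert changes when swapping sqlite3 for a SQL Server driver.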