CLI for BCP (SQL SVR) and dynamic file names
The scenario I needed to accomplish was to pull data from SQL Server, group the data sets together based on different fields, and then dynamically define the file name based on those groupings. Performance was the #1 consideration: competitive tool sets were able to complete the action in under 2 hours with a 300M record set. For my testing and documentation below, I’m using a local data set of 3.6M records.

Initial attempts at using the SQL Read and SQL Execute snaps quickly ruled those out based on required throughput (I was getting somewhere between 12k and 18k records per second over the network, 26k records per second locally). Calculated out, that would have taken 7-10 hours just for the query to complete.

The final solution ended up being broken down into 3 main steps:

1. Command line execution to call a BAT file which invokes the BCP command (a rough sketch of this invocation appears below, after the execution results)
2. Reading the dataset created from step 1, performing the GROUP BY, and a Mapper to prep the data
3. Document2Binary snap and then a File Writer which utilizes the $['content-location'] value to dynamically set the name

Attached to this post is the ZIP export of the project, which contains 3 pipelines along with a SAMPLE bat file which starts the bcp utility. You will need to edit the bat file for your specific database requirements, the pipeline parameter for where the output of the bat file is stored, and the location of the file you want written out.

Pipeline: 1_Run_BCP_CLI

NOTE: There is a pipeline parameter for the BCP output file that gets generated.

- ‘Reference to BAT’ points to a file on the local drive which executes the actual BCP process. (My files are located in c:\temp\bcp_testing)
- ‘DEL BCP Out’ will delete the bcp file if it already exists (optional, as the bcp process will just overwrite the file anyway)
- ‘cli to run’ renames the bat key value (originally done as I was testing the cli and bcp execute - could be removed)
- ‘remove nulls’ will clear out the results from ‘DEL BCP Out’ since they are not part of the command line that needs to be executed
- ‘Execute CLI’ is a Script snap which will kick off the bat file and, once completed, return a single record with the results
- ‘Process BCP Out’ is a Pipeline Execute which calls 2_BULK_DAT_READ and passes the pipeline parameter for the file to read in the next step

Pipeline: 2_BULK_DAT_READ

- ‘BCP Out File Read’ will use the pipeline parameter value specified for which file to read
- ‘CSV Parser’ is self-explanatory - the data file does NOT have any headers (to enhance the pipeline, you could add the second input and define the file format with column names and types)
- ‘Group by Fields’ takes the first 2 field names (field001 and field002) and will create groupings for each set. These are the initials for both first and last name from the BCP SQL query
- ‘Mapper’ will convert the JSON payload to $content as well as define $['content-location'] based on the grouped-by fields. The expression is $groupBy.field001+"_"+$groupBy.field002
- ‘Pipeline Execute’ will provide both $content and $['content-location'] to the next pipeline

Pipeline: 3_DynamicFile_Write

- ‘Document to Binary’ with the option for ‘Document’ as the Encode or Decode setting allows the JSON records to be output as a binary stream
- ‘File Writer’ will build the protocol, folder specification and the file name based on the provided $['content-location'] value from before

Execution Results

The 3.6M records were processed in 16 seconds. The BCP process itself took 24 seconds.
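For context, the heart of the SAMPLE bat file is a single bcp command, and the ‘Execute CLI’ Script snap simply shells out to it and reports the result. A minimal sketch of that idea in Python, assuming a local bcp utility and SQL authentication (the server, login, query and output path are placeholders, not the values from the attached project):

```python
import subprocess

# Placeholder values -- edit the attached SAMPLE bat file with your own
# server, credentials, query, and output path instead of these.
bcp_out = r"C:\temp\bcp_testing\bcp_out.dat"
query = "SELECT LEFT(FirstName, 1), LEFT(LastName, 1), FirstName, LastName FROM dbo.People"

# bcp "<query>" queryout <file> -S <server> -U <user> -P <password> -c -t,
#   -c  = character mode
#   -t, = comma field terminator (so the CSV Parser in pipeline 2 can read it)
cmd = [
    "bcp", query, "queryout", bcp_out,
    "-S", "localhost",   # server name (placeholder; -T instead of -U/-P would use Windows auth)
    "-U", "bcp_user",    # SQL auth login (placeholder)
    "-P", "secret",      # password (placeholder)
    "-c",
    "-t,",
]

# Mirrors the ‘Execute CLI’ Script snap: run the command, wait for it to finish,
# and surface the outcome as a single record.
result = subprocess.run(cmd, capture_output=True, text=True)
print({"returncode": result.returncode, "stdout": result.stdout, "stderr": result.stderr})
```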
My group by was based on the first character of both First Name and Last Name. This process ended up creating 294 files (locally), using the naming convention <first initial>_<last initial>.json. Sample screen cap of the A_A.json file:

Notes and File

- The file created contains a KEY for ‘content’ and is not pretty-printed JSON. For the screen cap above, I’m utilizing the JSTool → JSFormat plug-in for Notepad++.
- This approach will only create JSON formatted data (not CSV or other formatter options).
- BCP is required to be installed, and this was only tested on a WINDOWS Groundplex.
- A plain-code sketch of the grouping and dynamic file-write logic from pipelines 2 and 3 is included at the end of this post.

EricBarner-SQLSvr_BCP_CLI_Community.zip (4.4 KB)
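For readers who want to see the logic of pipelines 2 and 3 outside SnapLogic, here is a minimal sketch, assuming the bcp output is a headerless, comma-delimited file whose first two columns are the first-name and last-name initials (the paths and column handling are placeholders, not taken from the attached pipelines):

```python
import csv
import json
import os
from collections import defaultdict

bcp_out = r"C:\temp\bcp_testing\bcp_out.dat"   # BCP output file (placeholder path)
out_dir = r"C:\temp\bcp_testing\out"           # target folder for the JSON files (placeholder path)
os.makedirs(out_dir, exist_ok=True)

# ‘Group by Fields’: group every record by its first two columns (field001, field002).
groups = defaultdict(list)
with open(bcp_out, newline="") as f:
    for row in csv.reader(f):
        record = {f"field{i + 1:03d}": value for i, value in enumerate(row)}
        groups[(record["field001"], record["field002"])].append(record)

# ‘Mapper’ + ‘File Writer’: content-location is field001 + "_" + field002, and each
# group is written out as <content-location>.json (e.g. A_A.json).
for (field001, field002), records in groups.items():
    content_location = f"{field001}_{field002}"
    with open(os.path.join(out_dir, content_location + ".json"), "w") as out:
        json.dump({"content": records}, out)   # single ‘content’ key, not pretty-printed
```

This is only an illustration of the grouping and naming behavior; the attached pipelines remain the working implementation.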
SQL Server account connection setup failing with multiple errors

Hello Community,

We are trying to set up a SQL Server account in SnapLogic; however, we are getting the errors below.

Invalid username/password (we are able to log in with Windows authentication in Microsoft SQL Server Management Studio):

Failed to validate account: Failed to retrieve a database connection. Cause: Login failed for user ‘\empid’. ClientConnectionId:27dc1f0c-b783-4327-8d10-c77c82f6c198 (Reason: Login failed for user ‘\empid’. ClientConnectionId:27dc1f0c-b783-4327-8d10-c77c82f6c198; Resolution: Ensure credentials are valid, multiple attempts with invalid credentials may result into account getting locked)

If we use domainname/instancename in HOSTNAME, we get the error below:

Failed to validate account: Failed to retrieve a database connection. Cause: The TCP/IP connection to the host /, port 1433 has failed. Error: “/. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.”. (Reason: The TCP/IP connection to the host /, port 1433 has failed. Error: “/. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.”.; Resolution: Address the reported issue.)

Version: 13.0.5

Here are our account settings:
- Hostname: abc.bot.com/mydatabase
- Port: 1433
- DB name: emp_name
- Username: abc\123
- Password: <>

Also, please share sample account settings for our reference. Thank you in advance.
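One way to narrow down whether a failure like this comes from the credentials or from the host/instance settings is to try the same values outside SnapLogic. A minimal sketch using pyodbc and the Microsoft ODBC driver, comparing SQL authentication against Windows (integrated) authentication - the driver name, host, database and credentials here are placeholders, not the values from the post above:

```python
import pyodbc

# Placeholders -- substitute the real host, database, and credentials.
host = "abc.bot.com"     # host only; a named instance is normally written as host\instance
database = "emp_name"
user = "sql_login"       # a SQL authentication login (a DOMAIN\user value usually implies Windows auth)
password = "secret"

# SQL authentication against host,port. When an explicit port is supplied,
# the instance name is typically not needed.
sql_auth = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    f"SERVER={host},1433;DATABASE={database};UID={user};PWD={password}"
)

# Windows (integrated) authentication -- the mode that works in Management Studio here.
win_auth = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    f"SERVER={host},1433;DATABASE={database};Trusted_Connection=yes"
)

for label, conn_str in [("SQL auth", sql_auth), ("Windows auth", win_auth)]:
    try:
        with pyodbc.connect(conn_str, timeout=5) as conn:
            row = conn.cursor().execute("SELECT @@VERSION").fetchone()
            print(label, "OK:", row[0].splitlines()[0])
    except pyodbc.Error as exc:
        print(label, "failed:", exc)
```

If SQL authentication also fails here, the login itself (rather than the SnapLogic account settings) is the first thing to check.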
Ingest data from SQL Server (RDBMS) to AWS Cloud Storage (S3)

Contributed by @SriramGopal from Agilisium Consulting

The pipeline is designed to fetch records on an incremental basis from any RDBMS system and load them to cloud storage (Amazon S3 in this case) with partitioning logic. This use case is applicable to Cloud Data Lake initiatives. The pipeline also includes date-based data partitioning at the storage layer and a data validation trail between source and target.

Parent Pipeline
- Control Table check: Gets the last run details from the Control table.
- ETL Process: Fetches the incremental source data based on the Control table and loads the data to S3.
- Control Table write: Updates the latest run data to the Control table for tracking.

S3 Writer Child Pipeline

Audit Update Child Pipeline

Control Table - Tracking

The Control table is designed in such a way that it holds the source load type (RDBMS, FTP, API etc.) and the corresponding object name. Each object load will have the load start/end times and the records/documents processed for every load. The source record fetch count and target table load count are calculated for every run. Based on the status (S-success or F-failure) of the load, automated notifications can be triggered to the technical team.

Control Table Attributes:
- UID – Primary key
- SOURCE_TYPE – Type of source: RDBMS, API, Social Media, FTP etc.
- TABLE_NAME – Table name or object name
- START_DATE – Load start time
- ENDDATE – Load end time
- SRC_REC_COUNT – Source record count
- RGT_REC_COUNT – Target record count
- STATUS – ‘S’ Success and ‘F’ Failed based on the source/target load

Partitioned Load

For every load, the data gets partitioned automatically based on the transaction timestamp in the storage layer (S3). A sketch of this incremental-fetch and date-partitioning logic follows the downloads list below.

Configuration

Sources: RDBMS Database, SQL Server Table
Targets: AWS Storage

Snaps used:
- Parent Pipeline: Sort, File Writer, Mapper, Router, Copy, JSON Formatter, Redshift Insert, Redshift Select, Redshift - Multi Execute, S3 File Writer, S3 File Reader, Aggregate, Pipeline Execute
- S3 Writer Child Pipeline: Mapper, JSON Formatter, S3 File Writer
- Audit Update Child Pipeline: File Reader, JSON Parser, Mapper, Router, Aggregate, Redshift - Multi Execute

Downloads
- IM_RDBMS_S3_Inc_load.slp (43.6 KB)
- IM_RDBMS_S3_Inc_load_S3writer.slp (12.2 KB)
- IM_RDBMS_S3_Inc_load_Audit_update.slp (18.3 KB)
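To make the control-table-driven incremental fetch and the date-based S3 partitioning concrete, here is a minimal sketch, assuming a control table with the attributes listed above and a boto3 S3 client (the bucket, table name, change-tracking column, and query are placeholders, not values from the pipelines):

```python
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
bucket = "my-data-lake"        # placeholder bucket
table_name = "dbo.Orders"      # placeholder source table

# 1. Control Table check -- the last successful ENDDATE drives the incremental filter.
last_run_end = "2024-01-01 00:00:00"   # would come from the control table (ENDDATE where STATUS = 'S')
incremental_query = (
    f"SELECT * FROM {table_name} "
    f"WHERE last_modified > '{last_run_end}'"   # placeholder change-tracking column
)

# 2. ETL Process -- fetch the rows (query execution omitted here) and write them to a
#    date-partitioned key, mirroring the S3 Writer child pipeline.
rows = [{"order_id": 1, "last_modified": "2024-01-02 10:15:00"}]   # stand-in for fetched records
now = datetime.now(timezone.utc)
key = f"{table_name}/year={now:%Y}/month={now:%m}/day={now:%d}/load_{now:%H%M%S}.json"
s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(rows).encode("utf-8"))

# 3. Control Table write -- record the counts and status for the audit trail.
control_row = {
    "SOURCE_TYPE": "RDBMS",
    "TABLE_NAME": table_name,
    "START_DATE": last_run_end,
    "ENDDATE": now.strftime("%Y-%m-%d %H:%M:%S"),
    "SRC_REC_COUNT": len(rows),
    "RGT_REC_COUNT": len(rows),
    "STATUS": "S",
}
print("would insert into control table:", control_row)
```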
Schema Bulk Load from SQL Server to Snowflake

Created by @ebarner

This pipeline loads the schema from the specified SQL Server table into the Snowflake table.

Configuration

Sources: SQL Server table
Targets: Snowflake table
Snaps used: SQL Server - Select, Snowflake - Bulk Load

Downloads
- Schema Bulk Load.slp (4.5 KB)
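The pattern itself is just two snaps, but the same movement can be sketched in plain code for readers who want to see what a select-then-bulk-load flow involves. This is a rough sketch under assumed connection details (server, account, table names and file path are placeholders) that stages the rows through a CSV file before loading, which is one common way a bulk load into Snowflake is performed - it is not a reproduction of the attached pipeline:

```python
import csv

import pyodbc
import snowflake.connector

# Placeholder connections -- none of these values come from the pattern.
src = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost,1433;"
    "DATABASE=source_db;UID=loader;PWD=secret"
)
snow = snowflake.connector.connect(
    user="loader", password="secret", account="my_account",
    warehouse="LOAD_WH", database="TARGET_DB", schema="PUBLIC",
)

# SQL Server - Select: pull the source rows (placeholder table).
rows = src.cursor().execute("SELECT * FROM dbo.Orders").fetchall()

# Stage the rows as a local CSV file.
with open("/tmp/orders.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Snowflake - Bulk Load equivalent: PUT the file to the table stage, then COPY INTO.
# Assumes the target table ORDERS already exists with a compatible column layout.
cur = snow.cursor()
cur.execute("PUT file:///tmp/orders.csv @%ORDERS OVERWRITE = TRUE")
cur.execute("COPY INTO ORDERS FILE_FORMAT = (TYPE = CSV)")
```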