The scenario I needed to accomplish was to pull data from SQL Server, group the data sets based on different fields, and then dynamically define the file names based on those groupings.
Performance was the #1 consideration and competitive tool sets were able to complete the action in under 2 hours with a 300M record set. For my testing and documentation below, I’m using a local data set of 3.6M records.
Initial attempts used the SQL Read and SQL Execute snaps, but those were quickly ruled out based on the required throughput (I was getting somewhere between 12k and 18k records per second over the network, and 26k records per second locally). Calculated out against the 300M record set, that would have taken 7-10 hours just for the query to complete.
The final solution ended up being broken down into 3 main steps:
Command line execution to call a BAT file which invokes the BCP command
Reading the dataset created in step 1, performing the GROUP BY, and using a Mapper to prep the data
Document2Binary snap and then a File Writer which utilizes the $['content-location'] value to dynamically set the name.
Attached to this post is the ZIP export of the project, which contains 3 pipelines along with a SAMPLE bat file that starts the bcp utility. You will need to edit the bat file for your specific database requirements, the pipeline parameter for where the output of the bat file is stored, and the location of the file you want written out.
NOTE: There is a pipeline parameter for the BCP output file that gets generated
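For reference, a minimal sketch of what the bat file might contain is below - the attached SAMPLE file is the one to start from, and the query, server name, and output path here are placeholders that need to match your environment and the pipeline parameter noted above.

    @echo off
    REM Hypothetical bcp call - edit server, database, query, and output path for your environment.
    REM -c = character mode, -t"," = comma field terminator, -T = trusted (Windows) authentication.
    bcp "SELECT LEFT(FirstName,1), LEFT(LastName,1), FirstName, LastName FROM MyDatabase.dbo.People" queryout c:\temp\bcp_testing\bcp_out.csv -c -t"," -S localhost -T

The first two columns returned by the query are what become field001 and field002 when the headerless file is parsed downstream.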
'Reference to BAT' - points to a file on the local drive which executes the actual BCP process. (My files are located in c:\temp\bcp_testing)
'DEL BCP Out' will delete the bcp file if it already exists (optional, as the bcp process will just overwrite the file anyway)
'cli to run' renames the bat key value (originally added while I was testing the CLI and BCP execution - it could be removed)
'remove nulls' will clear out the results from 'DEL BCP Out', since they are not part of the command line that needs to be executed.
'Execute CLI' is a Script snap which kicks off the bat file and, once it completes, returns a single record with the results.
'Process BCP Out' is a Pipeline Execute which calls 2_BULK_DAT_READ and passes the pipeline parameter for the file to read in the next step.
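As a general illustration of that hand-off (the actual parameter name is defined in the attached project, so the name below is a placeholder): pipeline parameters are referenced in SnapLogic expressions with a leading underscore, so the parent's Pipeline Execute passes its parameter value down and the child pipeline's file reader simply uses the parameter as its path.

    Pipeline Execute (parent)  ->  pipeline parameter value:   _bcpOutputFile
    File Reader (child)        ->  File (expression enabled):  _bcpOutputFile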
'BCP Out File Read' will use the pipeline parameter value to determine which file to read
'CSV Parser' - self-explanatory; the data file does NOT have any headers (to enhance the pipeline, you could add the second input view and define the file format with column names and types)
'Group By Fields' takes the first 2 field names (field001 and field002) and creates a grouping for each set. These fields hold the first and last name initials returned by the BCP SQL query.
'Mapper' will convert the JSON payload to content as well as define $['content-location'] based on the grouped-by fields (the exact expression is included in the attached pipelines; a rough sketch follows below).
'Pipeline Execute' will provide both content and $['content-location'] to the next pipeline
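Since the Mapper expression did not survive the copy/paste above, here is a sketch of the idea. It assumes the Group By Fields snap emits the two key fields alongside the grouped records under a target field named group - check the attached pipeline for the exact field names and expression.

    Incoming document (assumed shape):
      { "field001": "A", "field002": "A", "group": [ { ...record... }, { ...record... } ] }

    Mapper (sketch):
      $group                                  ->  $content
      $field001 + "_" + $field002 + ".json"   ->  $['content-location']

Depending on how the attached project is configured, the .json extension may instead be appended by the File Writer; either way, the grouping key (e.g., A_A) drives the final file name.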
'Document to Binary', with 'Document' selected as the Encode or Decode setting, allows the JSON records to be output as a binary stream.
'File Writer' will build the protocol, folder specification, and file name based on the $['content-location'] value provided earlier.
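A hedged example of what that File Writer filename expression might look like, assuming a local output folder (the folder path below is a placeholder; only the use of $['content-location'] comes from the pipeline):

    "file:///c:/temp/bcp_testing/output/" + $['content-location']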
The 3.6M records were actually processed in 16 seconds. The BCP process itself took 24 seconds.
My group by was based on the first character of both the first name and the last name. This process ended up creating 294 files (locally) and used the naming convention of FirstInitial_LastInitial.json (e.g., A_A.json).
Sample screen cap of the A_A.json file:
Notes and File
The file created contains a KEY for 'content' and is not pretty-printed JSON. For the screen cap above, I'm utilizing the JSTool -> JSFormat plug-in for Notepad++.
This approach will only create JSON-formatted data (not CSV or other formatter options)
BCP is required to be installed, and this was only tested on a Windows Groundplex.