Re: SQL Server Bulk Load

Arun, I have confirmed with our PM organization that customization (installing client utilities, etc.) of the Cloudplex environment is not possible. Previously I thought this was an option and I apologize for leading you down the wrong path. You may want to try turning off AutoCommit to see if that improves performance at all.

Re: SQL Server Bulk Load

Arun, to use the BCP utility (BULK LOAD) for SQL Server, the SQL Server client tools (or command line utilities) do need to be installed on the SnapLogic execution node. You mention a Cloudplex in your post, so please reach out to SnapLogic Support and they can work to provision that on the cloud node(s) for you.
Eric B.

Re: Snaplogic connect to MS-Access MDB database

The accdb file must be somewhere that can be accessed from the Groundplex execution node (not within the SnapLogic Database). In your example, it's looking for a file in the folder /tmp. Hope that helps.
Eric B.

Re: Snaplogic connect to MS-Access MDB database

If you are using the UCanAccess driver, that JDBC URL doesn't look correct, since the driver won't know about the SLDB environment. The URL should be close to what the original post stated, but pointing to the file location on the node.

CLI for BCP (SQL SVR) and dynamic file names

The scenario I needed to accomplish was to pull data from SQL Server, group the data sets together based on different fields, and then dynamically define the file name based on those groupings. Performance was the #1 consideration; competitive tool sets were able to complete the action in under 2 hours with a 300M record set. For my testing and documentation below, I'm using a local data set of 3.6M records.

Initial attempts at using the SQL Read and SQL Execute snaps quickly excluded those based on the required throughput (I was getting somewhere between 12k and 18k records per second over the network, 26k records per second locally). Calculated out, that would have taken 7-10 hours just for the query to complete.

The final solution ended up being broken down into 3 main steps:

1. Command line execution to call a BAT file which invokes the BCP command
2. Reading the dataset created from step 1, performing the GROUP BY, and a Mapper to prep the data
3. Document to Binary snap and then a File Writer which utilizes the $['content-location'] value to dynamically set the name

Attached to this post is the ZIP export of the project, which contains 3 pipelines along with a SAMPLE bat file which starts the bcp utility. You will need to edit the bat file for your specific database requirements, the pipeline parameter for where the output of the bat file is stored, and the location of the file you want written out.

Pipeline: 1_Run_BCP_CLI

NOTE: There is a pipeline parameter for the BCP output file that gets generated.

- 'Reference to BAT' points to a file on the local drive which executes the actual BCP process. (My files are located in c:\temp\bcp_testing)
- 'DEL BCP Out' will delete the bcp output file if it already exists (optional, as the bcp process will just overwrite the file anyway).
- 'cli to run' renames the bat key value (originally done while I was testing the CLI and bcp execution - could be removed).
- 'remove nulls' will clear out the results from 'DEL BCP Out', since they are not part of the command line that needs to be executed.
- 'Execute CLI' is a Script snap which will kick off the bat file and, once completed, return a single record with the results (a minimal sketch of such a bat file is shown below).
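For reference, the bat file that 'Execute CLI' kicks off can be as simple as a single bcp call. The sketch below is illustrative only - the query, table, database, and server names are placeholders rather than the contents of the attached SAMPLE file; only the output folder (c:\temp\bcp_testing) matches the setup described above.

@echo off
REM Minimal illustrative bcp call: export a query result as a character-mode, comma-delimited file
REM Replace the query, database, and server name with your own values
bcp "SELECT LEFT(FirstName,1), LEFT(LastName,1), FirstName, LastName FROM MyDatabase.dbo.People" queryout c:\temp\bcp_testing\bcp_out.dat -c -t, -S MySqlServer -T

The -T switch uses a trusted connection; swap in -U / -P if you authenticate with a SQL login, and adjust the field terminator to match what your CSV Parser expects.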
- 'Process BCP Out' is a Pipeline Execute which calls 2_BULK_DAT_READ and passes the pipeline parameter for the file to read in the next step.

Pipeline: 2_BULK_DAT_READ

- 'BCP Out File Read' will use the pipeline parameter value specified for which file to read.
- 'CSV Parser' is self explanatory - the data file does NOT have any headers (to enhance the pipeline, you could add the second input view and define the file format with column names and types).
- 'Group by Fields' takes the first 2 field names (field001 and field002) and will create groupings for each set. These are the first and last name initials from the BCP SQL query.
- 'Mapper' will convert the JSON payload to $content as well as define $['content-location'] based on the grouped-by fields. The expression is $groupBy.field001+"_"+$groupBy.field002
- 'Pipeline Execute' will provide both $content and $['content-location'] to the next pipeline.

Pipeline: 3_DynamicFile_Write

- 'Document to Binary' with the 'Document' option for the Encode or Decode setting allows the JSON records to be output as a binary stream.
- 'File Writer' will build the protocol, folder specification, and file name based on the $['content-location'] value provided earlier.

Execution Results

The 3.6M records were processed in 16 seconds. The BCP process itself took 24 seconds. My group by was based on the first character of both the first name and the last name. This process ended up creating 294 files (locally) and used the naming convention of <FirstInitial>_<LastInitial>.json.

Sample screen cap of the A_A.json file:

Notes and File

- The file created contains a KEY for 'content' and is not pretty-printed JSON. For the screen cap above, I'm utilizing the JSTool → JSFormat plug-in for Notepad++.
- This approach will only create JSON formatted data (not CSV or other formatter options).
- BCP is required to be installed, and this was only tested on a WINDOWS Groundplex.

EricBarner-SQLSvr_BCP_CLI_Community.zip (4.4 KB)

RDBMS - BLOB Data Type

The post below was designed for the BLOB data type in SQL Server (2014), but similar concepts should work for other data sources as well.

Setup in SQL Server Mgmt Studio:

CREATE TABLE [BLOBS](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Data] [varbinary](max) NULL,
    [type] [varchar](10) NULL,
    CONSTRAINT [PK_BLOBS] PRIMARY KEY CLUSTERED ( [Id] ASC )
) ON [PRIMARY]

Insert a few records based on the data types you want to try:

INSERT BLOBS( Data, type ) SELECT *, 'jpg' FROM OPENROWSET (BULK 'C:\FOLDER\FILENAME.jpg', SINGLE_BLOB) rs
INSERT BLOBS( Data, type ) SELECT *, 'csv' FROM OPENROWSET (BULK 'C:\FOLDER\FILENAME.csv', SINGLE_BLOB) rs
…

The data itself, when you try to look at it in SQL Server, is not very meaningful:

With a SnapLogic pipeline, though, you can control what you want to have happen with it in a simple pipeline that selects the data and converts $data into $content.

SQL Select:
data to content:
Router
Document to Binary (both same configuration)
CSV Parser (data preview):
File Writer

I built in the bottom path as a future catch bucket for other data types added to my BLOB storage table.

ServiceNow - Outbound REST with a Business Rule

Using ServiceNow (SNOW) Outbound REST and Business Rules with SnapLogic. This document defines the steps required to create a Business Rule and REST endpoint in ServiceNow that will call a SnapLogic pipeline. Business rules allow the user to define conditions that will trigger an Outbound REST Message when an expression or expressions are evaluated.
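To show where this is headed: once everything is wired up, the Business Rule effectively makes ServiceNow issue an HTTP POST against the SnapLogic triggered task URL, roughly equivalent to the call sketched below. The org, project space, project, task name, token, and incident number are placeholders for illustration; the real URL and Bearer Token come from the triggered task created in the next step.

curl -X POST "https://elastic.snaplogic.com/api/1/rest/slsched/feed/MyOrg/MyProjectSpace/MyProject/SNOW_Incident_Task?Inc_Number=INC0012345" -H "Authorization: Bearer <bearer_token>"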
Using the sample pipeline as a template, create a pipeline and triggered task (associating your SNOW account with the ServiceNow Query snap).

NOTE 1: This pipeline is set up to accept a POST and is ASYNC - it sends back just the RUUID, though no additional processing is done against it in SNOW once returned.
NOTE 2: A pipeline parameter called 'Inc_Number' has been set up and is used in the Query snap.

Copy the Bearer Token and URL from the newly created task for use in the next step:

Next, create an Outbound REST Message in ServiceNow:

- Go to SNOW and type "REST" in the Navigator search.
- Under 'Outbound', click 'REST Message'.
- Create a New Message, pointing to the triggered task URL + "?" + query parameters.
- NOTE: You will need to remember the Name (eg. EB – Snaplogi – Send Incident) and variable (ex. ${IncidentNumber}), as both will be used in the script for the Business Rule.
- This should create the HTTP Methods for you. I have POST only, but you could also use other methods as needed based on the use case. GET would have also worked in this scenario since I'm only passing query parameters to the endpoint.

The next step is to create a Business Rule which references the created REST Message (under System Definition):

- Set up a new Business Rule and select the table / SNOW object (eg. Incident).
- Because we are scripting the REST call, you'll need to select "Advanced" in the top section.
- Select the desired 'When to run' options (eg. before / insert / update / delete) and any filter conditions needed.
- In the Advanced area on the bottom, you'll need to add the following script, substituting your specific values from above for the RESTMessageName and VARIABLE_NAME:

function onBefore(current, previous) {
    // This function will be automatically called when this rule is processed.
    var incnum = current.number;
    var r = new RESTMessage('<RESTMessageName>', 'post');
    r.setStringParameter('<VARIABLE_NAME>', incnum);
    var response = r.execute();
}

Done! Test the rule by editing an Incident in ServiceNow and seeing if the pipeline was triggered.

Here is a dashboard view of the pipeline execution:

And the file created by the middle SNOW Query path:

Using Windows Authentication with SQL Server Accounts

The topic of providing a Windows ID / password to SQL Server has come up a few different times, so below is a solution that I found to work in most instances.

Scenario:

- Service accounts with native SQL Server authentication are NOT supported (internal security rules, regulatory compliance, etc.).
- Running SnapLogic with "IntegratedSecurity = true" would require every database to grant access to an individual robot / service account and didn't allow specific user tracking.

Solution: a 3rd party JDBC driver called jTDS (http://jtds.sourceforge.net/) allows you to specify URL properties and pass in account credentials. You will need to upload this jar file into the SLDB and reference it within the account settings (see screen caps below).

On the SQL Server, you can see the Windows Authentication selection for ID "SQLSvr_TESTING":

Account settings within SnapLogic for the "SQL Server Account" type:

Account settings that need to be defined
Referencing the JAR files uploaded to the SLDB

The JDBC Driver Class should be set to net.sourceforge.jtds.jdbc.Driver

URL Properties defined:

- useNTLMv2 = true
- domain = [YourDomain]
- instance = [InstanceName - OPTIONAL]

NOTE: It's important to stipulate that if the user's Windows password changes, they will need to come into SnapLogic and adjust the password in the account as well.
By default, the SQL Server snaps will attempt to reconnect 3 times, so you may get into a situation where the ID becomes locked out.
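For reference, the same settings can also be expressed directly on a jTDS JDBC URL. A rough sketch, where the host, port, and database name are placeholders (append ;instance=... instead of a port if you connect to a named instance):

jdbc:jtds:sqlserver://dbhost:1433/MyDatabase;domain=YourDomain;useNTLMv2=true

The Windows account name and password themselves still go into the standard Username and Password fields of the SnapLogic account.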