Join two arrays, one array with the same unique ID

Hi, I am trying to join two arrays, but one array contains multiple records per unique ID and the other does not. Let me explain below. One array comes from a SQL select with these objects:

{ "personid": 433817, "customernumber": 10796670, "firstname": "Jens", "lastname": "Lam" }

The other SQL select returns an array of these objects:

{ "personid": 433817, "customernumber": 10796670, "media": "email", "mediaaddress": "info@tesla.de" },
{ "personid": 433817, "customernumber": 10796670, "media": "phone", "mediaaddress": "0484848484" }

Now I want to join them together based on personid, but in the second select you have multiple records with the same personid, and I want the result to look like this:

{ "personid": 433817, "customernumber": 10796670, "firstname": "Jens", "lastname": "Lam", "media": "email", "mediaaddress": "info@tesla.de", "media": "phone", "mediaaddress": "0484848484" }

Does anyone know a solution? I found one where I use a Union and then group by the field personid, but that makes it more complicated to get at all the fields later in different ETLs.

Regards,
Jens
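A JSON object cannot actually carry the media/mediaaddress keys twice, so in practice the joined record needs the media rows collected into a nested list per person. Outside of the SnapLogic Snaps themselves, the join logic can be sketched in Python (a minimal sketch using the field names from the post; the nested-list key is an assumption):

```python
from collections import defaultdict

def join_person_media(persons, media_records):
    """Join person rows to their media rows on personid, collecting
    each person's media entries into a nested list."""
    media_by_person = defaultdict(list)
    for rec in media_records:
        media_by_person[rec["personid"]].append(
            {"media": rec["media"], "mediaaddress": rec["mediaaddress"]}
        )
    joined = []
    for person in persons:
        row = dict(person)  # keep personid, customernumber, firstname, lastname
        row["mediaaddresses"] = media_by_person.get(person["personid"], [])
        joined.append(row)
    return joined
```

In SnapLogic terms this roughly corresponds to a Join on personid followed by grouping the media rows, which avoids the Union-then-group-by workaround.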
"Connection is closed" error with MSSQL service account

I'm running into a strange problem connecting a pipeline to one of my company's MS SQL servers. My pipeline requires two service accounts, as I'm connecting to servers on two different domains. I tested it using accounts linked to lower environments in each domain, and it worked perfectly. Then I added accounts for our prod environments. One of these works fine, but the other keeps giving me the error "Connection is closed." Here's the stack trace:

com.snaplogic.api.ExecutionException: Could not retrieve a connection to database.
    at com.snaplogic.snap.api.sql.operations.JdbcOperationsImpl.acquireConnection(JdbcOperationsImpl.java:351)
    at com.snaplogic.snaps.sql.SimpleSqlSelectSnap.defineOutputSchema(SimpleSqlSelectSnap.java:433)
    at com.snaplogic.cc.util.SnapUtil.defineSchema(SnapUtil.java:286)
    at com.snaplogic.cc.snap.common.SnapRunnableImpl.configureSnap(SnapRunnableImpl.java:746)
    at com.snaplogic.cc.snap.common.SnapRunnableImpl.executeForSuggest(SnapRunnableImpl.java:624)
    at com.snaplogic.cc.snap.common.SnapRunnableImpl.doRun(SnapRunnableImpl.java:865)
    at com.snaplogic.cc.snap.common.SnapRunnableImpl.call(SnapRunnableImpl.java:436)
    at com.snaplogic.cc.snap.common.SnapRunnableImpl.call(SnapRunnableImpl.java:120)
    at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.sql.SQLException: Connection is closed
    at com.zaxxer.hikari.pool.ProxyConnection$ClosedConnection.lambda$getClosedConnection$0(ProxyConnection.java:502)
    at com.sun.proxy.$Proxy172.getAutoCommit(Unknown Source)
    at com.zaxxer.hikari.pool.HikariProxyConnection.getAutoCommit(HikariProxyConnection.java)
    at com.snaplogic.snap.api.sql.connection.ConnectionProxy.getAutoCommit(ConnectionProxy.java:127)
    at com.snaplogic.snap.api.sql.operations.JdbcOperationsImpl.handleAutoCommit(JdbcOperationsImpl.java:2460)
    at com.snaplogic.snap.api.sql.operations.JdbcOperationsImpl.acquireConnection(JdbcOperationsImpl.java:346)
    ... 13 more
Reason: Connection is closed
Resolution: Please address the reported issue.

This is a general service account for accessing this server, and it works perfectly well for another project. I copied all the configuration information exactly, and the account validates. Is this a problem within SnapLogic, or something I need to take up with my company's database team?
SQL Server Stored Procedure Snap hangs when returning data

Hello, I'm working with a stored procedure that has no input variables and returns about 4,800 rows of data. When I build out my pipeline and validate, the procedure returns the first 50 rows successfully (as expected). However, when I run an actual execution, the pipeline hangs on that particular Snap. Given that my pipeline works when validating, I don't think there is a connectivity issue. Is there some action I need to take in order to handle larger data sets? Any information or insight here would be very helpful.

Thank you,
Dan

Edit: Here is a screenshot of the pipeline execution in the dashboard. It hangs at exactly 1,024 docs each time.
SQL Bulk Load truncating data

Hi Community, I am trying to use SQL Bulk Load with the Create Table option. When it runs, it errors on string truncation. I do not understand how to use the Mapper with only one side as a second input into the pipeline, and I do not want to have to map all the fields/objects: I have 1,900+ fields and 167+ tables. SQL Insert inserts with varchar(8000), and that does not work because varchar does not support Unicode characters. Thanks for any help.
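One way to see which fields will overflow the target columns before the bulk load runs is to scan a sample of the documents for over-long strings. A minimal sketch (the function name is hypothetical; the 8000 default mirrors the varchar(8000) sizing mentioned above):

```python
def truncation_risks(documents, limit=8000):
    """Report (row index, field, length) for every string value that
    exceeds the target column size, i.e. what a bulk load would truncate."""
    risks = []
    for i, doc in enumerate(documents):
        for key, value in doc.items():
            if isinstance(value, str) and len(value) > limit:
                risks.append((i, key, len(value)))
    return risks
```

Running this over a representative batch shows exactly which of the 1,900+ fields need wider (or nvarchar) columns, without mapping each one by hand.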
Table creation in SQL Server with specific data types using data from an Excel file

Hi all, thanks in advance for reading and for any help here. I have a pipeline where I read an Excel file containing column names and a table name; first I read the Excel file and then parse it. This is a sample of the Excel file:

[Excel sample screenshot]

So I want to create a table AAA (if it does not exist) with the columns 0XX5, 0XX6, 0XX7, 0XX8, 0XX9, each with its respective data type. How can I do that?
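Once the column names and data types are parsed out of the spreadsheet rows, the create-if-not-exists statement can be generated rather than hand-written. A sketch (the data types shown are placeholders, since the screenshot with the real ones is not reproduced here):

```python
def build_create_table(table, columns):
    """Build a SQL Server create-if-not-exists statement from
    (column name, data type) pairs parsed from the Excel rows."""
    cols = ",\n  ".join(f"[{name}] {dtype}" for name, dtype in columns)
    return (
        f"IF OBJECT_ID(N'{table}', N'U') IS NULL\n"
        f"CREATE TABLE [{table}] (\n  {cols}\n);"
    )
```

The generated statement can then be passed to a SQL Execute Snap, one statement per table row in the Excel file.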
SQL Insert from JSON: change type, and only type

Hi all, I have a pipeline that takes data from a REST API and loads it into a SQL database. This is all working, and we are able to create the table without having to define the schema in a Mapper (we have hundreds of tables with thousands of fields, and it would be very difficult to map them all individually). My current issue is that when converting from JSON to SQL Server, the type is automatically converted from string to varchar(8000). We have Japanese characters, etc., that are being lost because of this conversion. I need it to be nvarchar(4000), or nvarchar(max) if the string is longer than 4,000 characters. The question is how I make that conversion without having to map/define each and every field, while at the same time making sure I don't truncate anything. Thanks for your help.
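The nvarchar(4000)-or-nvarchar(max) decision can be derived per field from the data itself instead of mapping each field by hand. A minimal sketch over a batch of documents (function name hypothetical):

```python
def infer_nvarchar_columns(documents):
    """Infer an NVARCHAR type per string field across a batch of documents:
    nvarchar(4000) while every value fits, nvarchar(max) beyond that."""
    max_lens = {}
    for doc in documents:
        for key, value in doc.items():
            if isinstance(value, str):
                max_lens[key] = max(max_lens.get(key, 0), len(value))
    return {
        key: ("nvarchar(4000)" if n <= 4000 else "nvarchar(max)")
        for key, n in max_lens.items()
    }
```

Note that sizing from a sample can still under-estimate later rows, so defaulting known free-text fields straight to nvarchar(max) is the safer choice when truncation must never happen.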
SQL Server account connection setup failing with multiple errors

Hello Community, we are trying to set up a SQL Server account in SnapLogic, but we are getting the errors below.

Invalid username/password (we are able to log in with Windows authentication in Microsoft SQL Server Management Studio):

Failed to validate account: Failed to retrieve a database connection. Cause: Login failed for user '\empid'. ClientConnectionId:27dc1f0c-b783-4327-8d10-c77c82f6c198 (Reason: Login failed for user '\empid'. ClientConnectionId:27dc1f0c-b783-4327-8d10-c77c82f6c198; Resolution: Ensure credentials are valid; multiple attempts with invalid credentials may result in the account getting locked)

If we use domainname/instancename in the Hostname field, we get this error instead:

Failed to validate account: Failed to retrieve a database connection. Cause: The TCP/IP connection to the host /, port 1433 has failed. Error: "/. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.". (Reason: The TCP/IP connection to the host /, port 1433 has failed. Error: "/. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".; Resolution: Address the reported issue.)

Version: 13.0.5

Here are our account settings:
Hostname: abc.bot.com/mydatabase
Port: 1433
DB name: emp_name
Username: abc\123
Password: <>

Also, please share sample account settings for our reference. Thank you in advance.
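For what it's worth, the "TCP/IP connection to the host /" error and the Hostname value abc.bot.com/mydatabase suggest the database name ended up inside the hostname. In JDBC terms, the database belongs in the databaseName property and a named instance is joined to the host with a backslash. A sketch of how the pieces assemble (illustrative only, not SnapLogic's internal code):

```python
def sqlserver_jdbc_url(host, database, port=1433, instance=None):
    """Assemble a SQL Server JDBC URL: the database goes in the
    databaseName property, never in the hostname, and a named
    instance is appended to the host with a backslash."""
    server = f"{host}\\{instance}" if instance else host
    return f"jdbc:sqlserver://{server}:{port};databaseName={database}"
```

Applied to the settings above, the hostname would be just abc.bot.com and the database emp_name would go in its own field.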
SQL Server Snap - Unexpected Re-throwing connection error

Hi Community, I was trying to access SQL Server via SnapLogic but wasn't successful, as I was facing an issue with the connection. Below are the two error messages I got when trying with different Snaplexes.

Error while validating with a Groundplex:

Failed to validate account: Failed to retrieve a database connection. Cause: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Unexpected rethrowing". (Reason: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Unexpected rethrowing".; Resolution: Address the reported issue.)

Error while validating with a Cloudplex:

Failed to validate account: Failed to retrieve a database connection. Cause: The TCP/IP connection to the host ***********************, port 1433 has failed. Error: "null. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.". (Reason: The TCP/IP connection to the host ****************************, port 1433 has failed. Error: "null. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".; Resolution: Address the reported issue.)

JDBC drivers that I tried:
mssql-jdbc-9.4.0.jre8
mssql-jdbc-7.4.1.jre8
mssql-jdbc-10.2.0.jre8.jar
mssql-jdbc-7.4.1.jre11.jar

I would really appreciate it if anyone could help me with this. Thank you.
Change the key to the value it contains, for identifying columns

Hello, I have a dataset that looks like this:

{ "Name": "testvalue1", "Name": "testvalue2" }

But I want to change the key 'Name' into the value it contains. Output:

{ "testvalue1": "testvalue1", "testvalue2": "testvalue2" }

This is needed because afterwards I want to fill all the keys with the right data and do a SQL Insert with the keys as the column names.
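Since a JSON object cannot actually repeat the 'Name' key, the records presumably arrive as a list of single-key documents. Re-keying them is then a short transform; a minimal sketch (function name hypothetical):

```python
def rekey_by_value(records, key="Name"):
    """Turn [{'Name': v1}, {'Name': v2}, ...] into {v1: v1, v2: v2, ...}
    so the values can serve as SQL column names downstream."""
    return {rec[key]: rec[key] for rec in records}
```

The resulting object's keys can then be filled with the real row data before the SQL Insert.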
CLI for BCP (SQL SVR) and dynamic file names

The scenario I needed to accomplish was to pull data from SQL Server, group the data sets together based on different fields, and then dynamically define the file name based on those groupings. Performance was the #1 consideration, and competing tool sets were able to complete the action in under 2 hours with a 300M-record set. For my testing and the documentation below, I'm using a local data set of 3.6M records.

Initial attempts at using the SQL Read and SQL Execute Snaps were quickly excluded based on required throughput (I was getting somewhere between 12k and 18k records per second over the network, 26k records per second locally). Calculated out, that would have taken 7-10 hours just for the query to complete.

The final solution ended up being broken down into 3 main steps:

1. Command line execution to call a BAT file which invokes the BCP command.
2. Reading the data set created in step 1, performing the GROUP BY, and running a Mapper to prep the data.
3. A Document2Binary Snap and then a File Writer which uses the $['content-location'] value to dynamically set the name.

Attached to this post is the ZIP export of the project, which contains 3 pipelines along with a SAMPLE bat file that starts the bcp utility. You will need to edit the bat file for your specific database requirements, the pipeline parameter for where the output of the bat file is stored, and the location of the file you want written out.

Pipeline: 1_Run_BCP_CLI
NOTE: There is a pipeline parameter for the BCP output file that gets generated.
- 'Reference to BAT' points to a file on the local drive which executes the actual BCP process. (My files are located in c:\temp\bcp_testing.)
- 'DEL BCP Out' deletes the bcp file if it already exists (optional, as the BCP process will just overwrite the file anyway).
- 'cli to run' renames the bat key value (originally done while I was testing the CLI and BCP execution; it could be removed).
- 'remove nulls' clears out the results from 'DEL BCP Out', since they are not part of the command line that needs to be executed.
- 'Execute CLI' is a Script Snap which kicks off the bat file and, once completed, returns a single record with the results.
- 'Process BCP Out' is a Pipeline Execute which calls 2_BULK_DAT_READ and passes the pipeline parameter for the file to read in the next step.

Pipeline: 2_BULK_DAT_READ
- 'BCP Out File Read' uses the pipeline parameter value specified for which file to read.
- 'CSV Parser' is self-explanatory; the data file does NOT have any headers (to enhance the pipeline, you could add the second input and define the file format with column names and types).
- 'Group by Fields' takes the first 2 field names (field001 and field002) and creates groupings for each set. These are the initials of both first and last names from the BCP SQL query.
- 'Mapper' converts the JSON payload to $content and defines $['content-location'] based on the grouped-by fields. The expression is $groupBy.field001+"_"+$groupBy.field002.
- 'Pipeline Execute' provides both $content and $['content-location'] to the next pipeline.

Pipeline: 3_DynamicFile_Write
- 'Document to Binary' with the option 'Document' as the Encode or Decode setting allows the JSON records to be output as a binary stream.
- 'File Writer' builds the protocol, folder specification, and file name based on the provided $['content-location'] value from before.

Execution results: the 3.6M records were actually processed in 16 seconds. The BCP process itself took 24 seconds. My group-by was based on the first characters of both first name and last name. This process ended up creating 294 files (locally) and used the naming convention of <first-initial>_<last-initial>.json.

Sample screen cap of the A_A.json file: [screenshot]

Notes:
- The file created contains a KEY for 'content' and is not pretty-printed JSON. For the screen cap above, I'm using the JSTool → JSFormat plug-in for Notepad++.
- This approach will only create JSON-formatted data (no CSV or other formatter options).
- BCP is required to be installed, and this was only tested on a WINDOWS Groundplex.

Attachment: EricBarner-SQLSvr_BCP_CLI_Community.zip (4.4 KB)
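The grouping and dynamic-file-name logic of pipelines 2 and 3 can be sketched outside SnapLogic as follows (a simplified stand-in for the Group by Fields, Mapper, and File Writer Snaps; field names follow the headerless CSV convention above, and the function name is hypothetical):

```python
import json
from collections import defaultdict

def write_grouped_json(records, out_dir):
    """Group records by field001/field002 (the first- and last-name
    initials in the BCP extract) and write one FIELD1_FIELD2.json file
    per group, mirroring the $['content-location'] expression."""
    groups = defaultdict(list)
    for rec in records:
        groups[(rec["field001"], rec["field002"])].append(rec)
    paths = []
    for (a, b), rows in sorted(groups.items()):
        path = f"{out_dir}/{a}_{b}.json"
        with open(path, "w") as fh:
            json.dump({"content": rows}, fh)  # 'content' key, not pretty-printed
        paths.append(path)
    return paths
```

With initials as the grouping key, 3.6M input records collapse into at most 26×26 output files, which is consistent with the 294 files observed above.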