Dynamic DB accounts & dynamic SQL queries

Hi everyone,

I am very new to SnapLogic. Recently I was given a task to design a triggered task that connects to Oracle with different accounts and then extracts data using dynamic SQL.

Let me clarify my issues.

Basically the pipeline would be something like this

Read data from Oracle (various accounts) --> Transform --> Output to target dbs

First, I need to connect to various Oracle DBs as various users (each of them will have a different account) and extract data. I know I should use parameters for my pipeline, but how should I set this up?

Second, can I pass a dynamic SQL query each time I connect to a different database and select what I need? Where should I use a parameter here, and how should I pass dynamic parameter values?

Thanks in advance!

@tonyzero Yes, you can do this with a dynamic reference to an account and to the SQL via pipeline parameters, which are then bound to the query parameters (sharing the same names) of the triggered task URL.

Once you have set up your triggered task, you can call the task URL from an HTTP client (e.g. Postman) and pass the account and the SQL to be used for that particular request. (Do note that the account needs to exist already and that you have the correct path to the account defined.)
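For example, a request URL could be built like this (the task URL, parameter names, and account path below are made up for illustration; use your own org/project path and the parameter names defined on your pipeline):

```python
from urllib.parse import urlencode

# Hypothetical triggered-task URL -- substitute your own org/project/task path.
task_url = "https://elastic.snaplogic.com/api/1/rest/slsched/feed/MyOrg/MyProject/ExtractTask"

# Pipeline parameters (defined on the pipeline) are bound to query
# parameters of the same name on the triggered-task URL.
params = {
    "account": "../shared/Oracle_HR_Account",  # path to an account that already exists
    "sql": "SELECT * FROM employees WHERE dept_id = 10",
}

request_url = f"{task_url}?{urlencode(params)}"
print(request_url)
```

You would then issue a GET/POST against `request_url` from Postman, curl, or whatever client triggers the task.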

@robin Thank you so much!

I have a follow-up question regarding dynamic databases.

What if I have different types of DBs (Oracle, MS-SQL, etc.)? Can I use one pipeline to connect to all of them?

My assumption is no, because each type of Snap only works with a specific type of database: the Oracle Snaps only work with Oracle, the MySQL Snaps only work with MySQL, and so on.

How about files? Can I use one pipeline to read either a CSV or an XLSX file? I am guessing no, because when I tried, the CSV parser would not work on XLSX files and vice versa.
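(From a bit of digging, I think I see why: an .xlsx file is actually a ZIP archive of XML parts rather than plain text, so a CSV parser sees it as binary garbage. A quick stdlib-only sketch, where the internal sheet path is just illustrative:)

```python
import io
import zipfile

# An .xlsx workbook is a ZIP container of XML parts (the sheet path below
# is illustrative), while a .csv file is plain delimited text.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("xl/worksheets/sheet1.xml", "<worksheet/>")

magic = buf.getvalue()[:2]
print(magic)  # ZIP archives begin with the bytes b'PK' -- not parseable CSV text
```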

@tonyzero

What if I have different types of DBs (Oracle, MS-SQL, etc.)? Can I use one pipeline to connect to all of them? My assumption is no, because each type of Snap only works with a specific type of database: the Oracle Snaps only work with Oracle, the MySQL Snaps only work with MySQL, and so on.

Well, we do have a JDBC Snap Pack that is designed for generic database operations, but a fully dynamic configuration spanning a wide variety of databases isn't something I would recommend.

Instead, for both the multi-database and the multi-filesystem scenario, you could consider a routed parent-child pipeline approach using the Router Snap together with a Pipeline Execute ("PipeExec") Snap.

The Router would examine some field's value (e.g. db_name or file_protocol) to determine which DB engine or file type is in use, and then route the document flow to a Pipeline Execute Snap that calls a child pipeline designed for dynamic configuration of that particular database, filesystem, or file type.
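In plain Python, the routing decision amounts to something like the sketch below. (The field name `db_name` and the child pipeline names are invented for illustration; in SnapLogic you would express the same conditions as expressions on the Router Snap's output views.)

```python
# Plain-Python sketch of the Router -> Pipeline Execute pattern: inspect a
# field on the incoming document and pick the child pipeline to invoke.

def route_document(doc: dict) -> str:
    """Return the name of the child pipeline that should process `doc`."""
    routes = {
        "oracle": "Child_Oracle_Extract",
        "mssql": "Child_MSSQL_Extract",
        "mysql": "Child_MySQL_Extract",
    }
    engine = doc.get("db_name", "").lower()
    # Documents matching no route would go to a default/error view.
    return routes.get(engine, "Child_Unsupported")

print(route_document({"db_name": "Oracle", "sql": "SELECT 1 FROM dual"}))
# -> Child_Oracle_Extract
```

Each child pipeline then holds the dynamic account and query configuration for its one database type, keeping the parent pipeline simple.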

It would look something like this:

Perhaps give it a try and see what you can come up with.

@robin Thank you so much!

@robin Another problem occurred to me while working on the actual project. You mentioned in your first reply: "do note the account needs to exist already and you have the correct path to the account defined"

So based on your reply, whenever I run into a new account, I need to add it to my Account References first so that it is an "existing account"; only then can I connect to it dynamically with parameters.

Can I use the dynamic connection without "pre-setting" a new account? I mean, can I create an account with "Oracle Thin Dynamic Account" and parameterize the Hostname, Database Name, Username, and Password? If that is possible, I could call my triggered task and pass all the required info above, both creating the account and executing the SQL at the same time.
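What I have in mind is passing something like this on the task URL (all values and parameter names below are placeholders, just to show which pieces I would want to parameterize):

```python
from urllib.parse import urlencode

# Placeholder values -- the pieces I would want an "Oracle Thin Dynamic
# Account" to pick up from pipeline parameters, plus the query itself.
params = {
    "hostname": "db01.example.com",
    "dbname": "ORCL",
    "username": "hr_reader",
    "password": "s3cret",  # a password probably should not travel as a plain query parameter
    "sql": "SELECT * FROM employees",
}

query_string = urlencode(params)
print(query_string)
```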

Thanks in advance