12-02-2020 12:23 AM
Hi everyone,
I am very new to SnapLogic. Recently I was given a task to design a triggered task that connects to Oracle with different accounts and then extracts data with dynamic SQL.
Let me clarify my issues.
Basically the pipeline would be something like this
Read data from Oracle (various accounts) → Transform → Output to target DBs
First, I need to connect to various Oracle DBs as various users (of course they will have different accounts) and extract data. I know I should use parameters for my pipeline, but how should I do it?
Second, can I pass a dynamic SQL query each time I connect to a different database and select what I need? Where should I use parameters here, and how should I pass dynamic parameter values?
Thanks in advance!
12-03-2020 03:23 PM
@tonyzero Yes, you can do this with a dynamic reference to an account and to the SQL via pipeline parameters, which are then bound to the query parameters (sharing the same names) of the triggered task URL.
Once you have set up your triggered task, you can call the Task URL from an HTTP client (e.g. Postman) and pass the account and the SQL to be used for that particular request. Do note that the account needs to exist already and that you have the correct path to the account defined.
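For illustration, here is a minimal sketch in Python of what that HTTP call could look like. The task URL, bearer token, and the parameter names accountPath and sql are assumptions; the query parameter names must match whatever pipeline parameters you actually define on your pipeline.

import requests

# Hypothetical triggered task URL and bearer token; substitute your own values.
TASK_URL = "https://elastic.snaplogic.com/api/1/rest/slsched/feed/MyOrg/MyProject/ExtractTask"
TOKEN = "<your-bearer-token>"

# Query parameter names must match the pipeline parameter names defined on the
# pipeline ("accountPath" and "sql" are assumed names, not required ones).
params = {
    # The account must already exist at this path.
    "accountPath": "/MyOrg/shared/Oracle_Prod_Account",
    "sql": "SELECT * FROM employees WHERE hire_date > SYSDATE - 30",
}

response = requests.get(
    TASK_URL,
    params=params,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()
print(response.text)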
12-03-2020 11:52 PM
@robin Thank you so much!
I do have a follow-up question regarding dynamic databases.
What if I have different types of DBs (Oracle, MS-SQL, etc.)? Can I use one pipeline to connect to all of them?
My assumption is no, because each type of Snap only works with a specific type of database: the Oracle Snap only works with Oracle, the MySQL Snap only works with MySQL, and so on.
How about files? Can I use one pipeline to read either a CSV or an XLSX file? I am guessing no, because when I tried, the CSV parser would not work on XLSX files and vice versa.
12-07-2020 03:15 PM
What if I have different types of DBs (Oracle, MS-SQL, etc.)? Can I use one pipeline to connect to all of them? My assumption is no, because each type of Snap only works with a specific type of database: the Oracle Snap only works with Oracle, the MySQL Snap only works with MySQL, and so on.
Well, we do have a JDBC Snap Pack that is designed for generic database operations, but a fully dynamic configuration covering a wide variety of databases probably isn’t what I would recommend.
Instead, for both the multi-database and multi-filesystem scenarios, you could consider a routed parent-child pipeline approach using the Router Snap together with a Pipeline Execute (“PipeExec”) Snap.
The Router would examine some field’s value (e.g. db_name or file_protocol) to determine which DB engine or file type was being used, and then route the document flow to a Pipeline Execute Snap that would call a particular child pipeline designed for dynamic configuration of that specific database, filesystem, or file type.
It would look something like this:
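Since the Router and Pipeline Execute Snaps are configured visually, I can only sketch the idea; here is a rough Python analogue of the routing logic the parent pipeline would implement (the field names, route table, and child pipeline names are all invented for illustration):

def run_child_pipeline(child_name: str, doc: dict) -> None:
    """Stand-in for a Pipeline Execute Snap invoking a child pipeline."""
    print(f"Executing child pipeline {child_name!r} with document {doc}")

# Hypothetical route table: field value -> child pipeline per engine/file type.
ROUTES = {
    "oracle": "Child_Oracle_Extract",
    "mssql":  "Child_MSSQL_Extract",
    "csv":    "Child_CSV_Read",
    "xlsx":   "Child_XLSX_Read",
}

def route(doc: dict) -> None:
    # The Router Snap would branch on db_name / file_protocol like this.
    key = doc.get("db_name") or doc.get("file_protocol")
    child = ROUTES.get(key)
    if child is None:
        raise ValueError(f"No route defined for {key!r}")
    run_child_pipeline(child, doc)

# Example document flowing through the parent pipeline:
route({"db_name": "oracle", "sql": "SELECT 1 FROM dual"})

Each child pipeline then owns the engine-specific Snap and account, while the parent stays generic.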
Perhaps give it a try and see what you can come up with.
12-08-2020 04:58 AM
@robin Thank you so much!