Contributions

Dynamic Table Loads

Trying to do ETL to move source data to a target db. I don't want to create a pipeline for every single table; there are hundreds of tables. To start, I want to do this simply and assume a truncate and reload of all the tables. I will worry about incremental loads later. I want to run a SQL Execute snap with this code to get a list of all the tables:

```sql
SELECT s.name AS srcSchema,
       t.name AS srcTbl,
       'mynewschema' AS targSchema,
       t.name AS targTbl
FROM MyDb.sys.schemas s
JOIN MyDb.sys.tables t ON s.schema_id = t.schema_id
WHERE s.name IN ('dbo')
ORDER BY s.name, t.name;
```

I then want the pipeline to go through the list of tables, truncate the target, select the source data, and BCP the data into the target staging area (a sketch of the per-table statements appears further down this page). Once the data is there, I have a SQL Merge routine already built that will pull the differences into our lake, and I can easily call that with a SQL Execute. I'm struggling to find an example of this in any documentation or online. Creating hundreds of pipelines is not going to work.

Error Pipeline Help

Can anyone point me to a comprehensive video or explanation of Error Pipelines? The documentation leaves out a lot of details. If an error occurs, I want to pass in the value of $ProcessId (as one example) to the error pipeline so that I can flag the process as no longer running in my logs. I want to log the error. I want to send a custom email. An example video that showed building the error pipeline, building a normal pipeline, pointing it to an error pipeline, passing the parameters, and so on would help. (A sketch of the logging statement also appears further down this page.)

Re: Salesforce Oauth Redirect URL for Snaplogic

I am having the same issue. I think the connection authorizes when I set it up, but when I go to execute, I get the same error: "Error occurred while querying the daily job limits with url".

Re: Salesforce data types not coming over when loading SQL Server

I am extracting data for the purpose of loading a data warehouse using the Bulk API, as stated in my original post. The documentation for the match data types property is not clear; it only applies in certain circumstances. I have checked and unchecked it and run the pipeline through a dozen times, and it is not behaving consistently. If I have to manually create the tables to get the data types to be correct, that eliminates the usefulness of the create table if not present checkbox. I do not have a way to generate a create table script from Salesforce. I can't be the only person who has experienced this issue.

Re: Salesforce data types not coming over when loading SQL Server

I have turned this on and off several times. The documentation is confusing. That property only seems to apply in certain cases, but the UI isn't disabling it when it doesn't apply.

Salesforce data types not coming over when loading SQL Server

I am grabbing an object in Salesforce using the Salesforce Read snap. I am using SQL Bulk Load, and it is flagged to create the table if it doesn't exist. My numeric types are not being created in SQL; almost everything is coming over as varchar(8000). I have read the documentation on this snap many times, and it does not explain how to get the data types to come across and create the tables correctly. Has anyone had this issue? Do I use XML/JSON? The Bulk API makes sense to use because I am loading a data warehouse. I'm stuck.
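For the Salesforce data-type posts above: until the auto-created varchar(8000) columns behave, one workaround is to pre-create the staging table with explicit types so that "create table if not present" never fires and Bulk Load only appends. A sketch for a few standard Account fields; the table name and column list are examples only.

```sql
-- Pre-created staging table with explicit SQL Server types, so the snap's
-- auto-create (and its varchar(8000) defaults) is never used.
CREATE TABLE dbo.Account_stg (
    Id                varchar(18)   NOT NULL,  -- Salesforce record Id
    Name              nvarchar(255) NULL,
    AnnualRevenue     numeric(18,2) NULL,      -- currency field as numeric
    NumberOfEmployees int           NULL,
    IsDeleted         bit           NULL
);
```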
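For the Dynamic Table Loads post above: one common pattern is a parent pipeline that runs the table-list query and hands each row to a child pipeline (for example via a Pipeline Execute snap), where a SQL Execute builds the per-table statements from the row's fields. Below is a minimal sketch of what the child could run for one table; the staging schema, table name, and the usp_MergeToLake procedure name are placeholders, not tested configuration.

```sql
-- Per-table statements, with srcSchema/srcTbl/targSchema/targTbl substituted
-- from one row of the parent's table-list query (all names are placeholders).

-- 1. Truncate the staging copy of the target table.
TRUNCATE TABLE mynewschema.MyTable;

-- 2. Bulk copy the source rows into staging. In the pipeline this would be
--    BCP or a bulk load snap; the INSERT...SELECT is shown only as the
--    logical equivalent.
INSERT INTO mynewschema.MyTable
SELECT * FROM dbo.MyTable;

-- 3. Call the existing merge routine to pull the differences into the lake
--    (usp_MergeToLake is a hypothetical name for that routine).
EXEC dbo.usp_MergeToLake @schema = N'mynewschema', @table = N'MyTable';
```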
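For the Error Pipeline Help post above: once the $ProcessId value reaches the error pipeline (for example as a pipeline parameter), flagging the process as stopped is a single statement that a SQL Execute could run. This is a sketch only; the dbo.ProcessLog table and its columns are assumptions for illustration.

```sql
-- Mark the failed process as no longer running and record the error text.
-- Table and column names are hypothetical; the ProcessId and message would
-- come from parameters passed into the error pipeline.
UPDATE dbo.ProcessLog
SET    IsRunning    = 0,
       ErrorMessage = 'error text passed from the failed pipeline',
       EndedAtUtc   = SYSUTCDATETIME()
WHERE  ProcessId    = 12345;  -- the value of $ProcessId
```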
Re: Bit is converted to 'true' and 'false'

It chokes because the source is bit and the destination is bit. It is converting bit to string, and then when it tries to insert into the target, it fails. I have it working with a mapper, but it seems like they should stop converting bit to string.

Re: Bit is converted to 'true' and 'false'

Thank you for the suggestions. I am doing the mapper solution now. Why should I have to cast a field as int when it is already a bit and therefore not a string? I don't really want to do either of these options, but it seems I have no choice. I just want them to stop converting my bit data to a string. If they can't handle bit, at least automate it to int instead of a string. It creates extra work for me with the "no code" solution (rolling eyes). (The cast-in-source option is sketched at the end of this page.)

Bit is converted to 'true' and 'false'

Every time I pull data from SQL Server, Salesforce, etc., if the data type is bit, it returns a string of 'true' or 'false' that I then have to convert back to a 0 or 1. How do I get this to behave differently?

Migrating Oracle Number to SQL Server Numeric

I have data I need to move from Oracle to SQL Server. In Oracle the data type is just "NUMBER", with no precision or scale specified. I know from analysis that values like these appear in some of the rows for one of the columns:

.201025955675839502926546893128461404242
239.41789473684210526315789473684210526

SQL Server has a Numeric data type with a maximum precision of 38, and I cannot define a numeric data type that will work with both values. If I did something like Numeric(12,26), it won't hold the first value. If I did something like Numeric(38,38), it won't hold the second value. I want to just lop off some of the decimals so that I never exceed 26 digits to the right of the decimal point. I have been trying to do this in the mapper and am stuck. The toPrecision function doesn't do what I want. If x = 239.41789473684210526315789473684210526, then x.toPrecision(26) returns 239.41789473684210065584921. It starts changing everything instead of just rounding to 26 decimal places. Any ideas on what I can do here?
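For the Migrating Oracle Number question above: toPrecision counts significant digits, not decimal places, and the value is handled as a binary double, which carries only about 15 to 17 significant digits; that is where the garbage tail in 239.41789473684210065584921 comes from. One way to lop off the decimals without going through a double is to round in the Oracle extract query itself. The column and table names here are examples.

```sql
-- Round in Oracle's decimal arithmetic so no value ever has more than 26
-- digits to the right of the decimal point.
SELECT ROUND(my_number_col, 26) AS my_number_col
FROM   my_table;
```

With at most 26 decimal digits, a target column of numeric(38,26) leaves 12 digits for the integer part, which is enough for both example values.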
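For the bit-conversion thread above: the cast-in-source suggestion mentioned in the replies keeps the snap from ever seeing a bit column, so nothing gets converted to 'true'/'false'. Table and column names are examples.

```sql
-- Cast bit columns in the source SELECT so they arrive as integers (0/1)
-- instead of being turned into the strings 'true' and 'false'.
SELECT CAST(IsActive  AS tinyint) AS IsActive,
       CAST(IsDeleted AS tinyint) AS IsDeleted
FROM dbo.MyTable;
```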