Snowflake Execute behaving strangely

I have a pipeline that ends in a Snowflake Execute snap with an update query like the one below (the query is, of course, made up):

"UPDATE table_schema.Employees SET Salary = '" + $salary + "' WHERE EmpID = '" + $empId + "'"

The issue is that every employee's Salary gets updated to the salary of the last empId. The snap before the Snowflake Execute snap is a Mapper with distinct values for both $salary and $empId. The queries being generated (as shown in the Output Preview of the Snowflake Execute snap's expression builder) are also properly formed: each has its own empId and salary value, and as many queries are generated as there are documents from the previous snap. But when I check the table, I see the same salary for every EmpID. In other words, $empId was somehow resolved and held correctly for each document, while $salary in the SET clause (outside the WHERE) took its value from the last document. What is this sorcery? I have read all the documentation I could find about Snowflake Execute, but I couldn't work out what is going on. Could you please give any hint or clue? (A minimal reproduction sketch appears at the end of this page.)

REST POST snap not working: says certificate error

Hi there,

I am trying to post a simple JSON payload to our backend API through a REST POST snap. Everything was working; then we changed to a new URL and I updated the URL in the snap. Now the request is simply not reaching the new API server. The pipeline even completes successfully, with all snaps turning green. Only when I open the pipeline's properties do I see that the REST POST snap has failed, with the error shown in the attached screenshot. Please note that we are able to reach the new API server via Postman or any other REST client. Nothing has changed at all except the URL.

My questions: How do I debug this further? What exactly is the issue here? Can you please help me with this situation? (A certificate-inspection sketch appears at the end of this page.)

Snowflake Bulk Upsert: how to have conditional key columns?

I have two pipelines; let's call them the Parent pipeline and the Child pipeline. In the Child pipeline, I am using a Snowflake Bulk Upsert snap as shown below. The key column, _KEY_COLUMN, is passed from the Parent pipeline via Pipeline Execute snap parameters.

Now I want to add multiple key columns (consider the second image), but the trick is that I want to be able to decide which ones actually take effect. Is there a way to tell the child from the parent that _KEY_COLUMN_1 is not required? There can of course be any number of key columns, but I want to start with a finite set (say 4) for the moment.

More context about the nature of the pipelines:

- The Child pipeline is a generic pipeline that accepts a dataset and a table name; it upserts the dataset into that table and also creates the audit entries.
- The Parent pipeline can be any other pipeline responsible for a specific dataset; when it comes to upserting that dataset into Snowflake, it calls the Child pipeline. Some datasets have one key column, some have two, some have more.

How can I control this generically? Any suggestions are appreciated. (A sketch of one possible approach appears below.)
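For the Snowflake Execute question: below is a minimal Python sketch of how the per-document update statements get built from string concatenation. The table name and document values are made up; the point is that each generated statement is distinct, which matches what the expression-builder preview shows and makes the "last salary wins" result in the table the surprising part.

```python
# Minimal reproduction of the per-document statement construction.
# The table, columns, and document values are all hypothetical.
docs = [
    {"empId": "E001", "salary": "55000"},
    {"empId": "E002", "salary": "60000"},
]

for doc in docs:
    stmt = (
        "UPDATE table_schema.Employees SET Salary = '" + doc["salary"]
        + "' WHERE EmpID = '" + doc["empId"] + "'"
    )
    print(stmt)

# Expected: two distinct statements, one per document (which is what the
# Output Preview shows). Observed in the table instead: every matched row
# ends up with Salary = '60000', as if only the last SET value was applied.
```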
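For the REST POST certificate error: one way to debug further is to inspect the certificate the new server actually presents, independently of both SnapLogic and Postman (some REST clients skip certificate verification, which would explain why Postman succeeds while the snap fails). Here is a sketch using only Python's standard library; the hostname is a placeholder for the host in the new URL.

```python
# Inspect the TLS certificate a server presents, with full verification on.
import socket
import ssl

host = "new-api.example.com"  # placeholder: use the host from the new URL
ctx = ssl.create_default_context()

try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        # The handshake (and certificate verification) happens here.
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("Subject:", cert.get("subject"))
            print("Issuer:", cert.get("issuer"))
            print("Valid until:", cert.get("notAfter"))
except ssl.SSLCertVerificationError as err:
    # If this fires, the chain is incomplete, self-signed, expired, or the
    # name doesn't match -- the same class of failure the snap reports.
    print("Verification failed:", err)
```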
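For the conditional key columns question: since the key columns ultimately determine the match condition of the upsert, one generic pattern is to pass a single comma-separated parameter from the parent and split it in the child, so one parameter covers any number of keys with no per-column on/off switches. Below is a Python sketch of the idea; the parameter name _KEY_COLUMNS and the column names are hypothetical, and whether the snap's Key columns property can evaluate a dynamic expression like this needs verifying against your SnapLogic version.

```python
# Sketch: derive the upsert's match condition from one delimited parameter
# instead of a fixed set of _KEY_COLUMN_n parameters (names are hypothetical).
def build_on_clause(key_columns_param: str) -> str:
    """Turn 'EmpID,Region' into 't.EmpID = s.EmpID AND t.Region = s.Region'."""
    keys = [k.strip() for k in key_columns_param.split(",") if k.strip()]
    if not keys:
        raise ValueError("at least one key column is required")
    return " AND ".join(f"t.{k} = s.{k}" for k in keys)

print(build_on_clause("EmpID"))         # one key column
print(build_on_clause("EmpID,Region"))  # two key columns, same parameter
```

The design point is that the parent decides the key set by what it puts in the one parameter; the child never needs to know how many columns there are, which avoids the "is _KEY_COLUMN_1 required?" signaling problem entirely.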