Forum Discussion
Tagging the experts to get some help on this one.
Sorry, I’m finding this a very confusing post.
Is that related to the paragraph before it or the one after it? I’m not seeing anything else about joins.
What do you mean by “going blank”? What’s supplying that data?
We need more context to make sense of this. What’s a brand new record? From where? It sounds like you might want to change the snap that updates Salesforce to Execute only
so that validation doesn’t perform updates that you only want to happen during execution.
Please use “validate” and “execute”, not “compile” and “run”. There’s nothing in SnapLogic that’s really equivalent to “compiling” so that’s a confusing term.
- darshthakkar · 4 years ago · Valued Contributor
Sincere apologies, @ptaylor, for the confusion. Here is how I understand the three buttons in SnapLogic:
- Validating a pipeline: Compile time
- Executing a pipeline: Run time
- Saving a pipeline with the last snap having snap execution as “Validate & Execute”: Compile time
I'll avoid "compile time" and "run time" henceforth to prevent confusion. Let me explain again what the issue is and what I'm trying to achieve.
Upstream system: Snowflake
Downstream system: Salesforce
Issue: The data in a snap's output preview differs when the pipeline is saved or validated. I did a sanity test, and an ID record (i.e. col A) that should have some data (i.e. in col H) is coming through with null values. The data I'm expecting comes from Joins, though.
This behavior does not occur when the pipeline is executed; during execution, the data comes through as expected.
Definition of brand-new records: with the help of an Inner Join, I take the IDs from Snowflake that are not in Salesforce and ingest those NEW records into Salesforce with this pipeline (see the sketch below).
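For context, here is a minimal Python sketch of that record-selection logic; the field names and values are illustrative, not my actual schema:

```python
# Minimal sketch of the "brand new records" selection (illustrative field
# names and values, not the actual pipeline): keep only the Snowflake IDs
# that are absent from Salesforce, then hand those records to the Upsert.
snowflake_rows = [
    {"Id": 100, "Phone": "123456"},
    {"Id": 101, "Phone": "987654"},
]
salesforce_ids = {101}  # IDs already present in Salesforce

new_records = [row for row in snowflake_rows if row["Id"] not in salesforce_ids]
print(new_records)  # [{'Id': 100, 'Phone': '123456'}] -> candidate for insert
```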
Salesforce Create and Salesforce Update don't work efficiently (from what I've observed), so I've been using Salesforce Upsert for this operation.
The concerning piece is that the last snap of my pipeline is a Salesforce Upsert with snap execution set to "Validate & Execute". Whenever I make some minute changes and save my pipeline, those changes flow to the downstream system (i.e. Salesforce), which is expected behavior. However, when saving/validating the pipeline, the data from the Joins is NOT consistent (as explained above), and the Upsert then inserts the record (if it's a new record) or updates it (if it's an existing record).
When the pipeline is later executed, that same record won't be inserted, as it's no longer a NEW record, and it won't be updated either, as it hasn't received any update from the upstream system. This is the data SnapLogic should have calculated while saving/validating the pipeline, but it wasn't able to!
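A rough sketch of why such a record gets stranded (illustrative, not the actual snaps): the next execution treats the ID as "not new" and skips it, so the null value inserted during validation is never repaired.

```python
# Rough sketch (illustrative, not the actual snaps) of a record stranded by
# validation: execution's new-record filter treats the ID as "not new" and
# skips it, so the null Phone inserted during validation is never repaired.
salesforce = {100: {"Id": 100, "Phone": None}}  # bad record from validation
snowflake = [{"Id": 100, "Phone": "123456"}]

# New-record filter: only IDs absent from Salesforce pass through.
new_records = [r for r in snowflake if r["Id"] not in salesforce]
print(new_records)      # [] -> ID 100 is skipped this time
print(salesforce[100])  # {'Id': 100, 'Phone': None} -> null persists
```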
What I'm trying to achieve: a consistent data flow from Snowflake to Salesforce (with the help of Joins, as those are needed anyhow).
Happy to clarify further questions if any!
Solution: For now, I've already changed my Salesforce Upsert snap's setting to "Execute only" so that even when I save/validate my pipeline, those updates don't flow downstream. I would have expected the Joins to work the same in all scenarios (save/validate/execute). Is this a limitation of the tool, or am I doing something wrong here? I wouldn't be surprised if I'm missing a crucial step; I'm still learning SnapLogic, so by all means feel free to point me in the right direction (I won't be offended).
Apologies again for the confusion and looking forward to your thoughts on this one.
Best Regards,
Darsh

Remember that a Join will only join the data that's available from its input views – the output of the upstream snaps attached to its inputs. During validation, those data sets are also constrained by the Preview Document Count. So a Join that might normally find many matches when the complete data is available during an execution might find no matches when it's dealing with only a 50-record subset from each input view. This may not be obvious: it's easy to get the misimpression that the Preview Document Count only limits how much data is displayed while the rest of the data is still processed. That's not the case. When a snap's output reaches that count, the snap actually stops and doesn't produce any more data, even invisibly, no matter how much more is available. This is important to understand.
Could that explain what you’re seeing?
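To make that concrete, here's a small Python simulation; the 50-document limit matches the Preview Document Count described above, but the data, field names, and the left-style join (so a missing match shows up as a null, mirroring what you described) are illustrative:

```python
# Simulation of a Join under the Preview Document Count (illustrative data).
# During validation each input view is truncated, so a match that exists in
# the full data set can be missing from the preview.
PREVIEW_LIMIT = 50

ids = [{"Id": i} for i in range(1, 1001)]                                  # left input
phones = [{"Id": i, "Phone": f"555-{i:04d}"} for i in range(1000, 0, -1)]  # right input, reversed

def join(left, right):
    lookup = {r["Id"]: r for r in right}
    return [{**l, "Phone": lookup.get(l["Id"], {}).get("Phone")} for l in left]

full = join(ids, phones)                                     # "execution": all data
preview = join(ids[:PREVIEW_LIMIT], phones[:PREVIEW_LIMIT])  # "validation": 50 each

print(full[49])     # {'Id': 50, 'Phone': '555-0050'} -> match found
print(preview[49])  # {'Id': 50, 'Phone': None} -> no match in truncated inputs
```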
- darshthakkar · 4 years ago · Valued Contributor
It does, a bit. However, as I mentioned before, when validating/saving the pipeline, ID = 100 (for instance) will display Phone Number as null, whereas during pipeline execution it will display Phone Number as 123456.
As I had access to the upstream systems, I knew ID = 100 has the phone number 123456 (retrieved via the Join in SnapLogic), but I don't get this while validating my pipeline. If ID = 100 weren't in the output preview at all, I would have been fine with that, assuming it simply didn't fall within the first 50 records. However, it shows ID = 100 with Phone Number = null, and that is what concerns me!
Moreover, because the Join didn't work as expected, validating the pipeline inserted a new record in Salesforce for ID = 100 (as ID = 100 was a new record) with Phone Number = null (the Salesforce Upsert snap's execution was set to "Validate & Execute"), which is weird behavior in my honest opinion. The same happens when saving the pipeline if snap execution is "Validate & Execute".
I wouldn't have realized this, as I wasn't checking the output preview, but when I validated the records in Salesforce I saw a lot of null values. That made me go to the upstream system, randomly pick 10 IDs, and check their values. To my surprise, those IDs had data and were still arriving as null. After investigating, I concluded that making minute changes to the pipeline and then saving it was causing the issue, so I quickly disabled the Salesforce Upsert snap, made all the relevant changes, and changed the snap execution of Salesforce Upsert from "Validate & Execute" to "Execute only". I still disable the Salesforce Upsert whenever the requirements change and I have to modify my pipeline, because I feel the Joins don't function during validating/saving the pipeline even though they do during execution (which is a bit absurd).