Forum Discussion
Remember that a Join only joins the data available on its input views – the output of the upstream snaps attached to those inputs. During validation, those data sets are also constrained by the Preview Document Count. So a Join that finds many matches during an execution, when the complete data is available, might find no matches when it's only working with a 50-record subset from each input view.

I don't think this is obvious. It's easy to get the misimpression that the Preview Document Count only limits how much data is displayed while all the rest is still processed. That's not the case: when a snap reaches that count on its output, it actually stops and doesn't output any more data, even invisibly, no matter how much more is available. This is important to understand.
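To make that concrete, here's a rough sketch of the effect in plain Python (the data is made up; it just stands in for your two inputs):

```python
# Illustrative only: simulate the Preview Document Count truncating
# both inputs of a Join during validation. All data here is made up.

PREVIEW_DOC_COUNT = 50

# Full upstream data: the overlapping IDs start past the first 50 rows.
left_full = [{"ID": i} for i in range(1, 1001)]
right_full = [{"ID": i, "Phone": f"555-{i:04d}"} for i in range(500, 1501)]

def inner_join(left, right):
    """Join left records to right records on ID."""
    phones = {r["ID"]: r["Phone"] for r in right}
    return [dict(l, Phone=phones[l["ID"]]) for l in left if l["ID"] in phones]

# Execution sees everything: 501 matches (IDs 500 through 1000).
print(len(inner_join(left_full, right_full)))

# Validation stops each input at 50 documents, so the two previews
# never overlap and the Join finds nothing at all.
left_preview = left_full[:PREVIEW_DOC_COUNT]
right_preview = right_full[:PREVIEW_DOC_COUNT]
print(len(inner_join(left_preview, right_preview)))  # 0
```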
Could that explain what you’re seeing?
It does, a bit. However, as I mentioned before, during validation/saving of the pipeline, ID = 100, for instance, displays Phone Number as “Null”, while during pipeline execution it displays Phone Number as 123456.
Since I have access to the upstream systems, I knew ID=100 has the phone number 123456 (the two are combined via a Join in SnapLogic), but I don't get that while validating my pipeline. If ID=100 simply weren't present in the output preview, I'd have been fine with that, on the assumption that it didn't fall within the first 50 records. Instead, the preview shows ID=100 with Phone Number = “Null”, and that is what concerns me!
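To illustrate, here is roughly what I see for that record in the two modes (shape simplified; 100 and 123456 are the real values, everything else is approximate):

```python
# Simplified sketch of the same output document in the two modes.

# Output preview during validation/save -- the phone is lost
# (consistent with an outer Join whose right side was cut off by the
# preview limit, though I haven't confirmed that in the snap settings):
validated = {"ID": 100, "PhoneNumber": None}

# Output during an actual pipeline execution -- the phone is joined in:
executed = {"ID": 100, "PhoneNumber": "123456"}
```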
Worse, because the Join didn't behave as expected, validating the pipeline inserted a new record for ID=100 into Salesforce with Phone Number = “Null” (ID=100 was a new record there, and the Salesforce Upsert snap had its snap execution set to Validate & Execute). In my honest opinion, that is weird behavior. The same thing happens when saving the pipeline if snap execution is Validate & Execute.
I wouldn't have noticed this, since I wasn't checking the output preview, but when I checked the records in Salesforce I saw a lot of “Null” values. That sent me back to the upstream system, where I randomly picked 10 IDs and checked their values. To my surprise, those IDs had data yet were still landing as Null.

After investigating, I concluded that making small changes to the pipeline and then saving it was causing the issue, so I quickly disabled the Salesforce Upsert snap, made all the relevant changes, and switched its snap execution from Validate & Execute to Execute Only. I still disable the Salesforce Upsert whenever requirements change and I have to modify the pipeline, because it feels like Joins don't function while validating/saving a pipeline but do while executing it (which is a bit absurd).
By ‘multiple inputs’, do you mean there are multiple input documents that you want to combine into a single XML document? In other words, are the two SQL snaps to the left of the XML Formatter generating multiple output documents that you want to merge into a single XML doc? If so, you are correct that this is not directly supported in the top-level ultra pipeline. Instead, you'll need to put those snaps in a child pipeline and use PipelineExecute to do a local execution of the child. The child will then execute the SQL queries and generate the XML doc that is the response.
T, thanks for responding!
No, not separate input documents. Just multiple fields.
Here is a screenshot of my input JSON…
… and here is my errored XML Formatter snap.
“IncidentDescription” is my second field on the input side. ☹️
Ha… I figured it out.
The error said to verify that the document is XML convertible… so I played around with my mapping in the preceding Mapper snap.
I added a common root to all of my fields… “1.”… and that satisfied the XML Formatter snap.
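In other words, the Mapper output went from several top-level fields to a single rooted structure, roughly like this (values made up; only "IncidentDescription" is a real field name from my pipeline):

```python
# Sketch of the shape change in the Mapper output.

# Before: multiple top-level fields. XML needs a single root element,
# so the XML Formatter errored, asking me to verify the document is
# XML convertible.
before = {
    "IncidentNumber": "INC0001234",        # hypothetical field
    "IncidentDescription": "Server down",  # real field, made-up value
}

# After: everything nested under one common root (I used "1"), which
# gives the formatter a single root element to build the XML around.
after = {
    "1": {
        "IncidentNumber": "INC0001234",
        "IncidentDescription": "Server down",
    }
}
```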
It's always simple once you know what to do. 😉 Regards - David.