Forum Discussion
@darshthakkar - the reason I was asking about the number of fields is that the Group By N is giving me the set of fields per object. If you increase that to consume the entire document, then creating the object would fail because you wind up with duplicate names in the object.
But I’m glad this is working for you!
- nganapathiraju · 8 years ago · Former Employee
You have to be careful with the reuse checkbox.
From documentation:
Parameters
Pipeline parameter values can only be changed if this flag is not enabled. In other words, reusable executions cannot have different pipeline parameter values for different documents.
Pipeline name
When this property is an expression, the Snap will need to contact the SnapLogic cloud servers to load the pipeline information for execution. Also, if reuse is enabled, the result of the expression cannot change between documents.
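That constraint can be illustrated with a rough sketch. This is hypothetical Python, not SnapLogic internals: with reuse enabled, the child execution is fixed by the first document, so the pipeline-name expression must evaluate to the same value for every subsequent document.

```python
# Hypothetical sketch (not SnapLogic's actual implementation) of why
# Pipeline Execute rejects a changing pipeline-name expression when
# reuse is enabled: the child execution is created once, so every
# document must resolve to the same pipeline.

def run_pipe_exec(documents, pipeline_expr, reuse=True):
    """Route documents, evaluating pipeline_expr per document."""
    active_pipeline = None  # the single reused child execution
    results = []
    for doc in documents:
        target = pipeline_expr(doc)  # expression evaluated per document
        if reuse:
            if active_pipeline is None:
                active_pipeline = target  # first document fixes the child
            elif target != active_pipeline:
                raise ValueError(
                    "The pipeline path changed between documents "
                    "when reuse was enabled"
                )
        results.append((target, doc))
    return results

docs = [{"pipeline": "p1"}, {"pipeline": "p1"},
        {"pipeline": "p1"}, {"pipeline": "p2"}]  # 4th document differs
# run_pipe_exec(docs, lambda d: d["pipeline"])  # raises ValueError
```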
As you can see from the error, it has a problem with the expression value being changed when the reuse is enabled.
This is happening for the 4th document in your case.
“reason”: “The pipeline path changed between documents when reuse was enabled”
“resolution”: “Change the pipeline expression so that the same value is returned for every input document”
“snap_details”: {“label”: “Pipeline Execute”}

What is your use case here? What are you trying to achieve?
- krupalibshah · 8 years ago · Contributor
The result I want to achieve is shown in the output view of the “sort” snap: the documents are sorted by pipeline name and timestamp. This input will be sent to Pipeline Execute, and we want 3 instances, one for each unique pipeline, with each instance processing the payload field one by one.
Example,
if I send 5 input documents, each document has the name of the pipeline to be executed: 2 documents have p1-pipeline and 3 have p2-pipeline. Here I would like both p1 and p2 to start running in parallel, but each should process its payloads in sequence.
- nganapathiraju · 8 years ago · Former Employee
Can you run it without the reuse checkbox and see if that is what you want to achieve?
Yes, the pool size governs all executions, regardless of the type of pipeline being executed.
As mentioned in another reply, if you enable reuse, the child pipeline to execute cannot change from one document to the next. Instead, you might try collecting all of the documents for a given child into a single document using the Group By Fields snap. The output of that snap can be sent to a PipeExec with reuse disabled and the pipeline to execute as an expression. Then, in the child pipeline, the first snap can be a JSON Splitter that breaks up the group into separate documents. This design should work in a fashion similar to what you are thinking.
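The suggested design can be sketched in plain Python. This is only an illustration of the data flow, not SnapLogic code: group documents by child pipeline name (the Group By Fields step), run each group's child in parallel, and have each child split its group and process payloads one by one (the JSON Splitter step).

```python
# Illustrative sketch of the grouping design; names are hypothetical,
# not SnapLogic APIs.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def group_by_pipeline(documents):
    """Group By Fields equivalent: one grouped document per pipeline."""
    groups = defaultdict(list)
    for doc in documents:
        groups[doc["pipeline"]].append(doc["payload"])
    return groups

def child_pipeline(name, payloads, sink):
    """Child run: split the group and process payloads sequentially."""
    for payload in payloads:  # JSON Splitter equivalent
        sink.append((name, payload))

docs = [
    {"pipeline": "p1", "payload": 1},
    {"pipeline": "p1", "payload": 2},
    {"pipeline": "p2", "payload": 3},
    {"pipeline": "p2", "payload": 4},
    {"pipeline": "p2", "payload": 5},
]
processed = []
groups = group_by_pipeline(docs)
with ThreadPoolExecutor(max_workers=len(groups)) as pool:
    for name, payloads in groups.items():  # p1 and p2 run in parallel
        pool.submit(child_pipeline, name, payloads, processed)
```

With this arrangement, p1 and p2 run concurrently, yet each child still sees its own payloads in their original order.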
- krupalibshah · 8 years ago · Contributor
Thank you, it worked 🙂