Forum Discussion
Thanks for the valuable inputs.
So do you mean that if I keep the pool size at 1, it will initiate one instance of the pipeline per input document, and it will not even allow another instance of a different pipeline to start?
My understanding was that if I send 5 input documents, each naming the pipeline to be executed, of which 2 documents have p1-pipeline and 3 have p2-pipeline, then both p1 and p2 would start running in parallel but process their inputs in sequence.
Is this how it should work? If not, can you help me enable such processing? I tried the below:
I am extracting documents from the DB and, after sorting them by pipeline name and timestamp, sending them to Pipeline Execute with “reuse” enabled.
Here is what my Pipeline Execute will receive as input, and below is the configuration.
I get an error in Pipeline Execute:
{
    "error": "Invalid configuration",
    "stacktrace": "com.snaplogic.snap.api.SnapDataException: Invalid configuration\n at com.snaplogic.snaps.flow.PipeExec.process(PipeExec.java:722)\n at com.snaplogic.snaps.flow.PipeExec.processSafely(PipeExec.java:699)\n at com.snaplogic.snaps.flow.PipeExec.execute(PipeExec.java:628)\n at com.snaplogic.snaps.flow.PipeExec.executeForSuggest(PipeExec.java:1273)\n at com.snaplogic.cc.snap.common.SnapRunnableImpl.executeSnap(SnapRunnableImpl.java:677)\n at com.snaplogic.cc.snap.common.SnapRunnableImpl.executeForSuggest(SnapRunnableImpl.java:556)\n at com.snaplogic.cc.snap.common.SnapRunnableImpl.doRun(SnapRunnableImpl.java:735)\n at com.snaplogic.cc.snap.common.SnapRunnableImpl.access$000(SnapRunnableImpl.java:105)\n at com.snaplogic.cc.snap.common.SnapRunnableImpl$1.run(SnapRunnableImpl.java:330)\n at com.snaplogic.cc.snap.common.SnapRunnableImpl$1.run(SnapRunnableImpl.java:326)\n at java.security.AccessController.doPrivileged(Native Method)\n at javax.security.auth.Subject.doAs(Subject.java:422)\n at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)\n at com.snaplogic.cc.snap.common.SnapRunnableImpl.run(SnapRunnableImpl.java:325)\n at com.snaplogic.snap.threadpool.SnapExecutorService$SnapRunnableWrapper.run(SnapExecutorService.java:86)\n at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n at java.util.concurrent.FutureTask.run(FutureTask.java:266)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n",
    "reason": "The pipeline path changed between documents when reuse was enabled",
    "resolution": "Change the pipeline expression so that the same value is returned for every input document",
    "snap_details": {
        "label": "Pipeline Execute"
    },
    "original": {
        "RUNID": "3716be8b-aa2b-4158-be89-792302c2a570",
        "PIPELINENAME": "CheckRunID",
        "PAYLOAD": "{pipelineName=P_NewExceptionTest}",
        "ERRORTYPE": "Technical",
        "ERRORTIMESTAMP": "2017-09-07T08:19:05.883Z",
        "EXTRAPARAMETERS": "NONE"
    }
}
Awaiting your inputs.
Yes, the pool size governs all executions, regardless of the type of pipeline being executed.
As mentioned in another reply, if you enable reuse, the child pipeline to execute cannot change from one document to the next. Instead, you might try collecting all of the documents for a given child into a single document using the Group By Fields snap. The output of that snap can be sent to a PipeExec with reuse disabled and the pipeline to execute as an expression. Then, in the child pipeline, the first snap can be a JSON Splitter that breaks up the group into separate documents. This design should work in a fashion similar to what you are thinking.
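To illustrate (a rough sketch; it assumes grouping on $PIPELINENAME with the snap's target field set to group, and reuses the p1/p2 names from your earlier message), the five documents would collapse into two:

{ "PIPELINENAME": "p1-pipeline", "group": [ /* the 2 p1 documents */ ] }
{ "PIPELINENAME": "p2-pipeline", "group": [ /* the 3 p2 documents */ ] }

The Pipeline Execute snap can then use $PIPELINENAME as its pipeline expression with reuse disabled, and the child pipeline's first snap would be a JSON Splitter with its JSON path pointed at $group.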
Hi Rakesh,
Pipeline parameter values are always treated as string literals, so in the not-working version the "".concat(_Primary_Key) expression is basically just concatenating the literal text "$Emp_ID, $Group_ID".
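Concretely (an illustration based on your not-working example), if the parameter _Primary_Key holds $Emp_ID, $Group_ID, then:

"".concat(_Primary_Key)    // "$Emp_ID, $Group_ID" (the literal text, not the field values)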
If you want, you can instead use a different approach, like split and join:
in your pipeline parameter, set the fields you want to concatenate, e.g. Emp_ID|Group_ID,
then in the child pipeline you can do a split and join:
_Primary_Key.split('|').map((x) => $.get(x, '')).join()
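For example (a hedged illustration; the parameter and field names come from the expression above, and the sample values E100 and G7 are made up), with _Primary_Key set to Emp_ID|Group_ID and an input document {"Emp_ID": "E100", "Group_ID": "G7"}, the expression evaluates step by step:

_Primary_Key.split('|')           // ["Emp_ID", "Group_ID"]
    .map((x) => $.get(x, ''))     // ["E100", "G7"]; missing fields fall back to ''
    .join()                       // "E100,G7" (join() with no argument uses a comma)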
hope this helps 🙂
- rakesh_2004, 6 years ago, New Contributor
I used eval and it seems it will work, but the documentation also says it will significantly impact performance. Is there a better way to do it?