Forum Discussion

ahouston
New Contributor II
2 years ago

Help With Reduce Function in Mapper

I'm having trouble using the reduce function in a Mapper to subtotal two values across a two-field compound key. For example, take the following input:

"myarray": [{
    "key1": 1,
    "key2": "A",
    "value1": 5,
    "value2": 10
}, {
    "key1": 1,
    "key2": "A",
    "value1": 10,
    "value2": 20
}, {
    "key1": 2,
    "key2": "A",
    "value1": 15,
    "value2": 30
}, {
    "key1": 2,
    "key2": "B",
    "value1": 20,
    "value2": 40
}]

 

My desired result is as follows:

"myarray": [{
    "key1": 1,
    "key2": "A",
    "value1": 15,
    "value2": 30
}, {
    "key1": 2,
    "key2": "A",
    "value1": 15,
    "value2": 30
}, {
    "key1": 2,
    "key2": "B",
    "value1": 20,
    "value2": 40
}]

Unfortunately, because this runs in an Ultra pipeline, I cannot use a Group By Snap and must solve this within a Mapper. I'd appreciate any guidance from the community on how best to solve this.
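For reference, the subtotal-by-compound-key can be sketched with reduce in plain JavaScript. SnapLogic's expression language is JavaScript-like, so the same shape should translate to a Mapper expression, but verify which array methods (reduce, find, spread) your SnapLogic version supports; this is a sketch, not a tested Mapper expression.

```javascript
// Group by the compound key (key1, key2) and subtotal value1/value2.
const myarray = [
  { key1: 1, key2: "A", value1: 5,  value2: 10 },
  { key1: 1, key2: "A", value1: 10, value2: 20 },
  { key1: 2, key2: "A", value1: 15, value2: 30 },
  { key1: 2, key2: "B", value1: 20, value2: 40 },
];

const result = myarray.reduce((acc, cur) => {
  // Look for an accumulator entry with the same compound key.
  const match = acc.find(e => e.key1 === cur.key1 && e.key2 === cur.key2);
  if (match) {
    // Key already seen: add this document's values to the running subtotal.
    match.value1 += cur.value1;
    match.value2 += cur.value2;
  } else {
    // First occurrence of this key: copy the document into the accumulator.
    acc.push({ ...cur });
  }
  return acc;
}, []);
// result: three documents, with value1/value2 summed per (key1, key2) pair.
```

In a Mapper, the equivalent expression would operate on `$myarray` instead of a local variable.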

Thanks!

 

1 Reply

  • Hi @pjanapati and welcome to the SnapLogic Community! 🙂

    If the Snaps you are using do not support Pass Through, you can add a Copy Snap and a Gate Snap after the Salesforce Snap.

    That way, the Gate Snap will wait until it has collected all of the documents, in this case until the Salesforce Snap completes.

    Afterwards, just split the documents coming directly from the Copy Snap.

    Skeleton Flow:

    Sample Pipeline:
    SL-Comm-Sequentially_2023_02_28.slp (8.8 KB)

    You can also achieve the same result with several other approaches, such as pipeline nesting with a Pipeline Execute Snap, or a Copy Snap followed by a Join Snap.

    Let me know if this helps you.

    BR,
    Aleksandar.

    • kindminis
      New Contributor

      Hi Aleksandar,

      Can you provide some more information on the other approaches you mentioned to ensure the pipeline runs sequentially? I am looking to call some other pipelines from a single pipeline and want to make sure that everything downstream waits until the Pipeline Execute snap finishes completely. Thanks.