Forum Discussion

kokimura
New Contributor
6 years ago
Solved

How to stop snap execution

I have a pipeline which along the way checks for the presence of a particular value. If there is no value, it should exit without errors but not execute the rest of the pipeline. How do I achieve that?
For example:

  1. MySQL Snap reads a value from somewhere → 2. MySQL Snap uses the value and writes it to another table
    Imagine that the first snap’s read returns null, i.e. there is no value. How can I check for that and make the pipeline stop without an error, so that the second snap does not execute?
    I tried with Filter and Router but that did not work.
    I tried with Router and Exit but the Exit throws an error and does not stop the second snap from executing either.
    Thanks!
  • Hi @kokimura,

    Perhaps you could use the object.hasPath(field) function to check for the existence of a field in a Filter Snap, in order to stop the invalid documents from propagating further down the pipeline and causing errors in the downstream snaps.
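
    For example, if the field is named value (a made-up name here; substitute your own), the Filter Snap expression could be something like:

        $.hasPath('value') && $value != null

    Only documents where the field exists and is non-null pass through to the downstream snaps; everything else is dropped and the pipeline finishes without an error.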

    BR,
    Dimitri

2 Replies

  • nickhumble - my apologies - I missed your original response to my question.

    If there are two input documents that you want to combine into a single document, I would recommend using the Gate snap in this case. Then follow with a Mapper snap with the following expression:

        jsonPath($, "$[*][*][*]")

    jsonPath is a powerful function that lets you rip through a JSON structure looking for specific sub-elements and return the results as an array. This specific syntax tells jsonPath to return all elements buried under the top three arrays.
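
    To illustrate with made-up data (not from the original thread): given a combined input document like

        [ [ [ {"a": 1}, {"a": 2} ], [ {"a": 3} ] ] ]

    that expression returns the flattened array

        [ {"a": 1}, {"a": 2}, {"a": 3} ]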

    Note that the Gate snap accumulates all of its input documents into a single output document in memory, so it needs some consideration when processing larger volumes of data, but for this case it works very nicely. Also, the jsonPath documentation has some external links with additional details on how to work with it, along with some good examples.

    Hope this helps!

      nickhumble
      New Contributor II

      Thanks Koryknick - this has worked for me. My dataset is larger than what I shared, but not enormous, and I think it should be OK.