One major omission we’ve found in SnapLogic is that there is no standard framework for testing Pipelines. One approach has been to add a JSON Generator Snap with some test data, but this means changing the Pipeline definition in order to run the test. Ideally we should be able to test a Pipeline without changing it, so that if it passes, we can promote that same version to Production as-is. Otherwise there is a risk that something breaks when the test data is removed.
An alternative is to create a Triggered Task for each Pipeline and use an external test tool (e.g. JUnit) to submit test data to it. This works well, and is particularly useful for automated test execution and performance testing, but it requires users to have Java development skills to write the tests.
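Since a Triggered Task exposes the Pipeline as an authenticated HTTP endpoint, the external harness doesn’t strictly have to be JUnit; any language that can POST JSON will do. Here is a minimal sketch using only the Python standard library — the task URL and bearer token below are placeholders, not real SnapLogic values:

```python
import json
import urllib.request


def build_pipeline_request(task_url: str, token: str, payload: list) -> urllib.request.Request:
    """Build (but do not send) a POST request carrying test documents
    to a Pipeline's Triggered Task endpoint. The URL and token are
    supplied by the caller; nothing here is a real endpoint."""
    return urllib.request.Request(
        task_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )


# Example: two test documents destined for the Pipeline's input view.
req = build_pipeline_request(
    "https://example.invalid/api/1/rest/slsched/feed/MyOrg/MyProject/MyTask",
    "PLACEHOLDER_TOKEN",
    [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}],
)
# urllib.request.urlopen(req) would actually submit it; assertions on
# the response body then live in the test harness.
```

The same shape translates directly to a JUnit test with Java’s `HttpClient` — the point is only that the Pipeline itself never has to change to be tested this way.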
We’ve requested (Service Request #18186) a feature set including:
- Ability to validate and execute Pipelines using test data without changing the Pipeline definition (e.g. ‘run in test mode’)
- Ability to set specific Snaps to ‘No Execute in Production’ or ‘No Execute in Test’, so that test-only (or production-only) logic can be included in a Pipeline where necessary
- Ability to mock Snap outputs and ‘flick a switch’ between mock results and real results without changing the Pipeline definition
- Ability to create suites of automated tests which randomly submit from a pool of test data and can be scheduled
- Ability to perform performance, soak and smoke testing using pre-defined automated tests
- Ability to compare test results (Pipeline outputs) against expected results and notify only where there are discrepancies
- Ability to capture, browse and compare test results over an extended period
I’d be keen to hear any thoughts or feedback from the community - is this a feature you would like to see too?