10-03-2022 05:45 AM
I have a pipeline that calls an API, flattens the JSON records, writes a CSV file with a .tmp extension, verifies the file was written, and then renames the .tmp to .csv. This pipeline is in production.
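For context, here is a rough sketch of the write/verify/rename sequence the pipeline performs (Python, purely illustrative; the function name, paths, and CSV structure are my own assumptions, not the actual snaps):

```python
import csv
import os

def write_csv_atomically(rows, dest_path):
    """Write rows to a .tmp file, verify it exists, then rename it to the final .csv.

    Hypothetical helper mirroring the pipeline's write -> verify -> rename flow.
    """
    tmp_path = dest_path + ".tmp"
    with open(tmp_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

    # Verify the temporary file was actually written before renaming it.
    if not os.path.exists(tmp_path) or os.path.getsize(tmp_path) == 0:
        raise RuntimeError(f"Temporary file missing or empty: {tmp_path}")

    os.rename(tmp_path, dest_path)  # .tmp -> .csv
```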
The pipeline was copied and one API parameter was changed. That copy ran fine. However, when snaps (such as the email snaps) are enabled or disabled, the pipeline fails on the file-rename snap because the .tmp file is not found.
Another corruption involved the changed parameter: the pipeline ran fine, but when the same, unchanged pipeline was run the next day, it failed with an error that the changed parameter was not defined.
A third corruption occurred in a copy the day after it ran successfully, with NO CHANGE made, when the MetaAggregator threw: Failure: Invalid number in the input data at %s, Reason: Character N is neither a decimal digit number, decimal point, nor "e" notation exponential mark., Resolution: Please make sure input data at parseInt(jsonPath($, "metadata[*].entity.paging.count").toString()) be a number type.
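That error suggests a non-numeric value (for example "NaN" or null) is reaching the parseInt step. A minimal sketch of the kind of guard I mean, in Python; the JSON shape and field names are assumptions based only on the error text:

```python
import json

# Hypothetical payload shaped like the jsonPath in the error message.
payload = json.loads('{"metadata": [{"entity": {"paging": {"count": "NaN"}}}]}')

raw_counts = [m["entity"]["paging"]["count"] for m in payload["metadata"]]

counts = []
for value in raw_counts:
    try:
        counts.append(int(str(value)))
    except (TypeError, ValueError):
        # Mirrors the "Character N is neither a decimal digit..." failure:
        # a non-numeric count reaches the parse step.
        print(f"Non-numeric paging count encountered: {value!r}")
```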
Why would this pipeline, which runs in Production, be getting corrupted so badly and so unpredictably in the Test environment?