We are using SnapLogic as the integration platform for a trading platform that runs several marketplaces simultaneously. We would like a generic, dynamically configurable logging pipeline whose logging parameters can be adjusted on the fly. This logging pipeline, which all pipelines invoke via Pipeline Execute, relies on the parent pipeline passing only the Document being processed. When more detailed logging is needed (by extracting more information from the Document sent to the logging pipeline) for one marketplace, one specific customer, or some other criterion or level added later, we would like to turn this on and change the logging output for exactly that marketplace, customer, etc. The logging pipeline currently writes its output to AWS S3, and we use a configuration file stored on the Cloudplex server.
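To make the idea concrete, here is a hypothetical sketch of what such a configuration file could look like; all field names and values here are illustrative assumptions, not our actual format:

```json
{
  "default": {
    "logLevel": "INFO",
    "fields": ["timestamp", "pipelineName", "marketplace"]
  },
  "overrides": [
    {
      "match": { "marketplace": "marketplace-eu" },
      "logLevel": "DEBUG",
      "fields": ["timestamp", "pipelineName", "marketplace", "orderId", "payload"]
    },
    {
      "match": { "customerId": "customer-123" },
      "logLevel": "TRACE"
    }
  ]
}
```

The logging pipeline would apply the `default` block unless an entry in `overrides` matches attributes of the incoming Document, in which case that entry's level and field list take effect.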
The configuration file is read with a File Reader snap every time the pipeline executes, so that any changes to the parameters are picked up. However, we find the File Reader quite slow: it takes approximately 2 seconds to read the file. Even the snap that writes much more data to S3 is a little faster. Since this logging pipeline is executed very frequently, we would like to optimize it.
So, the questions are:
Any ideas for how to store global parameters that can easily be changed on the fly, without requiring a file read?
Or, even better: a suggestion for how to build such a generic, dynamically configurable logging pipeline?