Hi,
You’re right, the Public API just provides access to the log data captured in Dashboard, which doesn’t include any document data from the pipelines.
One of our SnapLogic users solved this by creating a reusable ‘Logger’ Pipeline which is invoked via a Pipeline Execute at the relevant points within their Pipelines. The Logger is very lightweight: it captures some basic document data and pushes it to our centralised logging, in our case Logstash.
However, we need to be careful how much we use it, particularly in production, as it adds the overhead of an additional pipeline execution each time it’s called. At the very least it should be designed so that you can ‘Reuse Executions’. We also don’t consider it good practice to capture significant amounts of document data in our logs, as:
a) it will fill up log storage unnecessarily, since for most successful transactions we’ll never look at it
b) it presents a security / privacy risk if any sensitive business data is included (e.g. employee date of birth) because logs are usually not as tightly secured
Generally we’ll just log the unique identifiers (e.g. employee ID) or, if you’re using one, a correlation ID.
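To illustrate the “identifiers only” idea, here’s a minimal sketch (in Python, outside SnapLogic) of what the Logger’s output document might look like. The field names, pipeline name, and employee ID are hypothetical, and the actual shipping to Logstash (e.g. via its HTTP input) is left as a comment:

```python
import json
from datetime import datetime, timezone

def build_log_entry(pipeline, event, correlation_id=None, **identifiers):
    """Build a minimal log document: identifiers only, no business payload.

    Only pass unique identifiers (employee ID, order number, etc.) as
    keyword arguments -- never the full record, and never sensitive
    fields like date of birth.
    """
    entry = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "pipeline": pipeline,
        "event": event,
        **identifiers,
    }
    if correlation_id:
        entry["correlation_id"] = correlation_id
    return entry

# Hypothetical usage: log only the employee ID, not the employee record.
entry = build_log_entry(
    "employee-sync",          # hypothetical pipeline name
    "record-processed",
    correlation_id="abc-123", # hypothetical correlation ID
    employee_id="E0042",      # identifier only -- no DOB, no payload
)
print(json.dumps(entry))
# This JSON document would then be POSTed to your Logstash HTTP input.
```

The point is simply that the log line lets you trace the transaction end-to-end (via the correlation ID) and look up the record at the source if needed, without duplicating business data into log storage.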
Cheers,
C.J.