I have a pipeline made up of the following Snaps:
- List all files in a given Box directory
- Filter the files according to a mask
- Read the matching Excel file into the pipeline
- Convert the first Worksheet to CSV
- Write a CSV file to an archive folder in Box
- Delete the original Excel file
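For readers outside SnapLogic, the six Snaps above can be sketched as a local-filesystem analogy in plain Python. The folder layout, mask, and date-prefixed file name are all assumptions for illustration, and the Excel-to-CSV conversion is stubbed out:

```python
import re
import tempfile
from pathlib import Path

# Local-filesystem analogy of the six Snaps (paths and names are assumed).
inbox = Path(tempfile.mkdtemp())
archive = inbox / "archive"
archive.mkdir()

# A sample incoming file with a date prefix.
(inbox / "2024-05-01_report.xlsx").write_text("fake excel content")

mask = re.compile(r"^\d{4}-\d{2}-\d{2}_report\.xlsx$")  # assumed mask

for src in list(inbox.iterdir()):           # step 1: list files
    if not mask.match(src.name):            # step 2: apply the mask
        continue
    data = src.read_bytes()                 # step 3: read the matching file
    # step 4 (stub): convert the first worksheet to CSV here
    dest = archive / src.with_suffix(".csv").name
    dest.write_bytes(data)                  # step 5: write to the archive
    src.unlink()                            # step 6: delete the original
```

The sketch shows why steps 5 and 6 need step 2's output: `dest` is derived from the matched file name, and the delete targets the original file.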
Everything works up to step 5, where I need to access the output fields of step 2 again: the FileName is needed to build the matching file name in the archive folder (this time with a .csv extension), and the FileId is needed in step 6 to delete the original Excel file.
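The file-name transformation needed in step 5 amounts to swapping the extension. In plain Python (illustration only, not SnapLogic expression syntax; the sample file name is hypothetical) it would be:

```python
import re

def archive_name(file_name: str) -> str:
    # Replace a trailing .xls/.xlsx with .csv, keeping the date prefix intact.
    return re.sub(r"\.xlsx?$", ".csv", file_name, flags=re.IGNORECASE)

print(archive_name("2024-05-01_report.xlsx"))  # 2024-05-01_report.csv
```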
The business reason for this is that the file arriving in the Box folder has a date prefix at the front of its name. There is no set frequency for this file; it arrives at effectively random intervals. I therefore have no way to hard-code the actual file name into the pipeline and must check dynamically for a new file every day.
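Because of the unpredictable date prefix, the mask in step 2 has to match a pattern rather than an exact name. A minimal sketch, assuming an ISO date prefix and a fixed report name (both assumptions):

```python
import re

# Assumed pattern: a YYYY-MM-DD prefix followed by a fixed report name.
mask = re.compile(r"^\d{4}-\d{2}-\d{2}_report\.xlsx$")

listing = ["2024-05-01_report.xlsx", "2024-05-01_report.csv", "notes.txt"]
new_files = [name for name in listing if mask.match(name)]
print(new_files)  # ['2024-05-01_report.xlsx']
```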
How can I store the output fields from step 2 somewhere in memory so that the final two Snaps in the pipeline can access them?