Forum Discussion
Thank you for the options/insight.
I think we’re probably looking at the first scenario. The files are generated daily and saved in the Snaplogic folder (SLDB?).
On certain days of the month they need to be copied to a target server.
I think whatever approach we decide on, we’ll need to (re)read the files based on their file types and then use the corresponding file writer snap to write to the target server.
Appreciate everyone’s input.
A scheduled process/task that runs on certain days of the month will work: scan the SLDB project/directory for files (using the Directory Browser Snap), read them with the File Reader Snap, and write them to the target system with the File Writer Snap. You should also build in logic so that files already copied from SLDB are not picked up again on the next execution.
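The "don't pick the same file twice" idea above could be handled several ways (moving processed files, comparing timestamps, keeping a manifest). As one illustrative sketch, here is the manifest approach in plain Python; the directory and manifest names are made up for the example, and this is only a model of the logic, not SnapLogic configuration:

```python
import json
from pathlib import Path

# Hypothetical local stand-ins for the SLDB project directory and a
# manifest file that records what earlier runs already copied.
SOURCE_DIR = Path("sldb_project")
MANIFEST = SOURCE_DIR / "copied_manifest.json"

def load_manifest() -> set:
    """Return the set of filenames copied in earlier runs."""
    if MANIFEST.exists():
        return set(json.loads(MANIFEST.read_text()))
    return set()

def files_to_copy() -> list:
    """Browse the source directory, skipping the manifest itself
    and anything already recorded as copied."""
    already_copied = load_manifest()
    return [p for p in SOURCE_DIR.iterdir()
            if p.is_file()
            and p.name != MANIFEST.name
            and p.name not in already_copied]

def mark_copied(names) -> None:
    """Record copied filenames so the next execution ignores them."""
    done = load_manifest() | set(names)
    MANIFEST.write_text(json.dumps(sorted(done)))
```

After a successful write to the target, `mark_copied` is called with the filenames just processed; the next scheduled run then sees an empty (or shorter) `files_to_copy` list.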
Regards,
Spiro Taleski
- SusanB · 4 years ago · New Contributor II
Hi Spiro,
Thanks for your additional input. Luckily the filenames already include a timestamp.
Knowing this, the new pipeline will include a parameter listing the days of the month on which files need to be copied, e.g. “1,15” (the first and fifteenth of each month).
The pipeline will have a Router that compares today’s day of the month to the parameter list. If today is not in the list of values, we exit. If it is, we perform the copy steps:
Find and filter the files whose names contain today’s date, route them into zip vs. non-zip streams, then read and write them (adding the option to Overwrite and/or Ignore if the file already exists in the target directory; zip files currently have an issue with Ignore, so we’ll just overwrite those).
We will then schedule the task to run daily.
When needed for special requests we can easily add another temporary scheduled task for other days.
As time allows, I may attempt to make this more flexible by adding a parameter for specific file name prefixes.
For now this should eliminate the need for a human to manually copy files.