Zip Snap file structure

I have a pipeline which reads data from an S3 location and writes to an SMB location with no transformations. The pipeline involves two main snaps: an S3 Reader connected to a Zip Write snap. The Zip Write snap recreates the whole folder structure of the S3 location inside the Zip file, which I would like to avoid; instead I want to place all the files in the root directory, or in a directory of my choice, rather than copying the S3 structure.
As there is no transformation, I do not want to parse the files and then write them back out again. Is there a way of doing this?

Hi @SuryaReddy,

I don’t understand why you have to write a Zip file and read it afterward. Why not a plain S3 read and a write to SMB? Are you using the same path from S3 to set the file path on SMB? I’m just guessing. Example:

s3:/// => smb://

If this is the case, you can easily take the filename only and define the target path on your own. If not, please share some more details.

Hi @bojanvelevski ,
The pipeline does this:
Read files (there are multiple files to read) from an S3 bucket ==> write them to an SMB location as a zipped file.
When doing this, the zip file has the same folder structure as S3, which is very deep. I am looking to avoid this deep structure and produce a much flatter one.

S3 file folder structure: Bucket → Folder1 → Folder2 → Folder3 → File
Zip file structure: Zip file name (User defined) → Bucket → Folder1 → Folder2 → Folder3 → File
Required: Zip file name (User defined) → Bucket → Folder1 → File
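The flattening described above can be sketched as plain string logic. This is a hypothetical Python sketch (not a SnapLogic snap), and the sample path is invented for illustration; it keeps only the bucket, the first folder, and the filename from a deep S3 key:

```python
def flatten(key: str) -> str:
    # Split the S3 key into its path segments,
    # e.g. ["Bucket", "Folder1", "Folder2", "Folder3", "File"]
    parts = key.split("/")
    # Keep only the bucket, the first folder, and the filename
    return "/".join([parts[0], parts[1], parts[-1]])

print(flatten("Bucket/Folder1/Folder2/Folder3/File"))
# Bucket/Folder1/File
```

In SnapLogic this kind of logic would live in an expression on the target file path, as discussed below.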

That’s because you are passing the content location as the target file path. Use the following expression to get the filename only:

$['content-location'].split('/')[$['content-location'].split('/').length -1]

then you can define the path however you like. Example:

"smb://" + $['content-location'].split('/')[$['content-location'].split('/').length -1]

Don’t forget to enable the expressions functionality.
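For clarity, here is what that expression evaluates to, sketched in Python. The `content-location` value below is made up for illustration; the expression simply takes the last path segment (the bare filename) and prepends the SMB prefix:

```python
# Equivalent of:
# "smb://" + $['content-location'].split('/')[$['content-location'].split('/').length - 1]
content_location = "s3://bucket/Folder1/Folder2/Folder3/report.csv"

parts = content_location.split("/")
filename = parts[len(parts) - 1]   # last segment, same as parts[-1]
target = "smb://" + filename

print(target)   # smb://report.csv
```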

Test pipeline 0_2021_10_01.slp (3.0 KB)
I have uploaded a stripped-down version of the pipeline; can you advise how to change the path you mentioned above?

Your current version of the pipeline will write the zip file to SLDB under the name “” . What I am suggesting is to use the expression above to get the actual filename you’re reading from S3, and use that filename to write the file to SMB. Example: