Writing Zip Files to S3

I am encountering severe sluggishness when writing zip files to S3. Writing a 76 MB file takes 12 minutes to complete, versus 16 seconds when writing to a local destination.

I think the problem is in the transfer from ground to cloud. This process is part of a generic file transport solution, so the read file snap is executed on a groundplex and the write file snap is executed on the cloudplex. The switch is done by a Pipeline Execute snap specifying execution on the cloudplex. I'm thinking the issue may be caused by the conversion from binary to document, and then back to binary once the document stream is passed into the child pipeline.
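To make the suspected overhead concrete (this is only an assumption about how the binary payload might be carried as documents, not how SnapLogic actually implements it internally): if the zip content is base64-encoded into document records on the way into the child pipeline and decoded back to binary on the way out, a 76 MB file grows to roughly 101 MB of text and every byte is copied and re-encoded twice before it ever reaches S3. A minimal sketch in Python of that round trip:

```python
import base64
import io

CHUNK_SIZE = 1 * 1024 * 1024  # 1 MB chunks; an arbitrary choice for this sketch


def binary_to_documents(stream):
    """Wrap a binary stream into base64 'documents' (hypothetical representation)."""
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        yield {"content": base64.b64encode(chunk).decode("ascii")}


def documents_to_binary(documents, sink):
    """Decode the 'documents' back into the original binary stream."""
    for doc in documents:
        sink.write(base64.b64decode(doc["content"]))


# A 76 MB payload becomes roughly 101 MB of base64 text plus per-document
# overhead, and the data is fully re-encoded and re-decoded before the
# downstream writer ever sees plain binary again.
# "archive.zip" is a hypothetical file name for illustration only.
with open("archive.zip", "rb") as src, io.BytesIO() as dst:
    documents_to_binary(binary_to_documents(src), dst)
```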

Has anyone else run into similar issues? I am happy to provide an outline of the pipeline if that helps.

Thanks.

Your groundplex should normally also have access to S3. Have you tried configuring your child pipeline to use the groundplex as well? Also, it is always more performant to keep the process in a single pipeline instead of a parent/child setup.

Hi,

Yes, we can configure the groundplex to access S3. However, this process is designed to be generic enough to move files from locations accessible only from the groundplex to locations accessible from the cloudplex, and vice versa. The S3 scenario is just an example.

For anyone interested, I have received a reply from SnapLogic support stating that going through the control plane for large binary streams like this will result in a serious performance hit.
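For contrast, a direct upload from the node that already has the file, streamed straight to S3 as a multipart upload, avoids the control plane entirely. A minimal sketch, assuming Python with boto3 and hypothetical bucket/key names (this is not SnapLogic code, just the equivalent direct transfer):

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Hypothetical bucket and key names for illustration only.
BUCKET = "my-transfer-bucket"
KEY = "incoming/archive.zip"

s3 = boto3.client("s3")

# Multipart upload with 8 MB parts; boto3 streams the file in chunks
# directly from this node to S3 without routing it anywhere else.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,
    multipart_chunksize=8 * 1024 * 1024,
)

with open("archive.zip", "rb") as f:
    s3.upload_fileobj(f, BUCKET, KEY, Config=config)
```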