Re: Writing Zip Files to S3

Yes, we did have to run everything from the ground.

Python Dictionary Conversion in Script Snap

Hello,

I am trying to run a Python script in the Script snap that mocks up some test data for unit testing pipelines. The script relies on Python data types such as list and dict to iterate through the incoming JSON document and manipulate it. However, the documents coming in are of type java.util.LinkedHashMap. When trying to convert to a Python dict using dict(in_doc), it doesn't process the document into a dict correctly. Initial googling returned a number of Jython bugs around this issue. Is there any workaround for this that I can implement?

You can see the script here: test-file-gen/snap-file-gen.py at master · jskrable/test-file-gen · GitHub

Thanks,
Jack

Re: Accessing Dashboard Insight data via api

@tlikarish are any of those standard dashboard statistics available as @tsemd described?

Re: Writing Zip Files to S3

Hi,

Yes, we can configure the groundplex to access S3. However, this process is designed to be generic enough to move files from locations accessible only from the groundplex to locations accessible from the cloudplex, and vice versa. The S3 scenario is just an example.

For anyone interested, I have received a reply from SnapLogic support stating that going through the control plane for large binary streams like this will result in a serious performance hit.

Writing Zip Files to S3

I am encountering some severe sluggishness in writing zip files to S3. When writing a 76 MB file, it takes 12 minutes to complete, versus 16 seconds when writing to a local destination. I think the problem is in transferring from ground to cloud. This process is part of a generic file transport solution, so the read file snap is being executed on a groundplex, and the write file snap is being executed on the cloudplex. This switch is done by a pipeline execute snap specifying execution on the cloudplex.
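As a rough illustration of why routing binary data through the document plane is expensive: SnapLogic's internal binary-to-document conversion isn't documented here, but if the binary stream is wrapped into a JSON-safe document by base64-encoding it (an assumption for this sketch), the payload inflates by about a third before it ever crosses the network, on top of any control-plane round trips.

```python
import base64

# Hypothetical illustration only: base64 is an assumption about how a
# binary stream might be made document-safe, not a statement about
# SnapLogic's actual wire format.
chunk = b"\x00" * (1024 * 1024)               # a 1 MB sample binary chunk
encoded = base64.b64encode(chunk)             # text-safe document form
inflation = len(encoded) / float(len(chunk))  # base64 expands binary ~4/3
print(round(inflation, 2))
```

For a 76 MB file that alone adds roughly 25 MB of transfer, though it would not by itself explain a 12-minute runtime; the per-document processing overhead is the more likely culprit.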
I'm thinking it is possible the issues are caused by the conversion from binary to document and then back to binary once the document stream is passed into the child pipeline. Has anyone else run into similar issues? I am happy to provide an outline of the pipeline if that helps.

Thanks.

Re: About the Asset, User, & Security Management category

Related question: how can we provision a user to have read access to pipelines and access to view their executions on the dashboard, but no execute or write access? Thanks.

Re: Script Snap Data Structures

I wasn't aware of that snap, thanks for bringing it to my attention.

Re: S3 File as Email Attachment

Thanks Charlie. I've managed this for now by simply writing to SLDB, sending the message with the attachment, then deleting the file afterwards. I'll let you know if we'd like to talk about a product enhancement down the line.

Best,
Jack

Script Snap Data Structures

Hello,

I have written a script to create a JSON object with counters of a parameterized length, with the desired structure like this:

[
  { "counter": 1 },
  { "counter": 2 },
  { "counter": 3 },
  ...
]

However, when I do this through a loop in a Script snap, it puts the result in an extraneous array, like this:

[
  [
    { "counter": 1 },
    { "counter": 2 },
    { "counter": 3 },
    ...
  ]
]

Can anyone let me know why this is happening? I can't get rid of the extra array using a structure or mapper snap either. My script looks like this:

def execute(self):
    self.log.info("Executing Transform script")
    list = []
    for i in range(1, 277):
        entry = {}
        entry["counter"] = i
        list.append(entry)
    output = list
    try:
        # Read the next document, wrap it in a map and write out the wrapper
        self.output.write(list)
    except Exception as e:
        errWrapper = {'errMsg': str(e.args)}
        self.log.error("Error in python script")
        self.error.write(errWrapper)
    self.log.info("Finished executing the Transform script")

Has anyone run into something like this before?
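One likely explanation, sketched below: if each self.output.write call emits one document downstream (an assumption about the Script snap's behavior), then self.output.write(list) produces a single document whose value is the entire list, which the preview then shows wrapped in its own array. Writing each entry separately would yield a flat stream. FakeOutput and emit_counters are hypothetical names used only to make the sketch runnable outside SnapLogic.

```python
class FakeOutput:
    """Hypothetical stand-in for the Script snap's output view."""
    def __init__(self):
        self.docs = []

    def write(self, doc):
        # Assumed behavior: each write() call emits one document.
        self.docs.append(doc)

def emit_counters(output, n):
    # Writing each entry separately yields a flat stream of documents;
    # a single write(whole_list) call would instead emit one document
    # whose value is the list, which previews as a nested array.
    for i in range(1, n + 1):
        output.write({"counter": i})

out = FakeOutput()
emit_counters(out, 3)
print(out.docs)
```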
Thanks,
Jack

Re: S3 File as Email Attachment

Hi Charlie,

Thanks for your detailed response. I've been able to manipulate the body of the email with a lot of flexibility, but my question revolves around using the attachment feature to attach an S3 file to the email. My pipeline is already writing the pertinent error data into the body of the email, but I want to keep a more detailed record of the error and stack trace in a file attached to the alert message. This is where I ran into trouble attaching an S3 file, while I've been successful using a file from the native SnapLogic file system. Is this something that is possible? Or does the Email Sender snap only support files from SnapLogic?

Hope this clears up my question.

Thanks,
Jack
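Circling back to the LinkedHashMap question earlier in this feed: one workaround sketch is to avoid the single dict(in_doc) call and instead walk the structure recursively, rebuilding plain Python dicts and lists as you go. This assumes the incoming java.util.LinkedHashMap exposes dict-style items() under Jython 2.7's Java collection integration (worth verifying against your Jython version; entrySet() iteration is the fallback if it does not). Plain dicts stand in for LinkedHashMap below so the sketch runs anywhere.

```python
def to_py(obj):
    # Rebuild plain Python containers recursively instead of relying on
    # the dict() constructor, which Jython may not handle correctly for
    # java.util.LinkedHashMap. Assumes map-like objects expose items().
    if hasattr(obj, "items"):
        return dict((k, to_py(v)) for k, v in obj.items())
    if isinstance(obj, (list, tuple)):
        return [to_py(v) for v in obj]
    return obj

# Plain dicts as a stand-in for the incoming LinkedHashMap documents.
doc = {"user": {"name": "jack"}, "counts": [1, {"n": 2}]}
print(to_py(doc))
```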