Calculate bytes written or read by a snap?

There's a way to calculate the total number of input or output documents, which is monumentally handy. However, is there a way to calculate the number of bytes read or written? This would be particularly useful for file chunking, i.e. if you want to segment files written to S3 into 100 MB files rather than guessing at the number of documents you need to read. I can roughly imagine a way to do this right now by building a Script snap that calculates every document's flattened JSON size, but I worry that would actually slow the pipeline down a lot, since it requires either traversing each document dynamically or re-serializing it just to produce a byte count. Is there another way to get the number of bytes read or written by a snap?
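To make the question concrete, the Script-snap approach I'm imagining boils down to something like the sketch below. This is plain Python for the size-estimation idea only, not the Script snap's actual hook API; the incoming document stream, the 100 MB threshold, and the chunking helper are all hypothetical.

    import json

    CHUNK_LIMIT = 100 * 1024 * 1024  # target size per S3 object (assumed ~100 MB)

    def estimated_size(doc):
        # Re-serialize the document and measure the encoded length - this is
        # the per-document cost I'm worried about adding to the pipeline.
        return len(json.dumps(doc).encode("utf-8"))

    def chunk_documents(docs):
        # Group documents so each chunk's serialized size stays under the limit.
        chunk, chunk_bytes = [], 0
        for doc in docs:
            size = estimated_size(doc)
            if chunk and chunk_bytes + size > CHUNK_LIMIT:
                yield chunk
                chunk, chunk_bytes = [], 0
            chunk.append(doc)
            chunk_bytes += size
        if chunk:
            yield chunk

Even something this simple serializes every document an extra time just to get a length, which is exactly the overhead I'd like to avoid by getting a byte count from the snap itself.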
Re: Soap Execute snap - Account creds in header

Awesome, glad to hear it! I'll come running back here if I ever have any issues 😉

Re: Soap Execute snap - Account creds in header

That would actually be really fantastic, because I'd love to completely scrap my custom SOAP setup. While I have your ear, is there any chance you can also file a bug about the names of custom columns? When you enter raw text for a custom column, the definition changes to [object Object] after saving and reopening the snap instead of what you entered. The only way to save it properly is to click through the auto-completed column name once the snap has processed the SOAP API, which is extremely, extremely painful since it takes a long time to load every time.

Re: Soap Execute snap - Account creds in header

FYI, you can view the WSDL that defines nullFieldList here: https://webservices.na1.netsuite.com/xsd/platform/v2016_1_0/core.xsd
The main WSDL is here: https://webservices.na1.netsuite.com/wsdl/v2016_1_0/netsuite.wsdl

Re: Soap Execute snap - Account creds in header

ptaylor:

    <SOAP-ENV:Body>
      <ns0:update>
        <ns0:record ns2:type="ns1:Job" internalId="481756">
          <ns1:customFieldList>
            <ns4:customField ns4:scriptId="custentity777" ns4:internalId="489" ns2:type="ns4:StringCustomFieldRef">
              <ns4:value/>
            </ns4:customField>
            <ns4:customField ns4:scriptId="custentity15" ns4:internalId="119" ns2:type="ns4:DateCustomFieldRef">
              <ns4:value/>
            </ns4:customField>
          </ns1:customFieldList>
        </ns0:record>
      </ns0:update>
    </SOAP-ENV:Body>

Your body is invalid per the SOAP API's spec: null fields have to be declared in a separate section. Your statement should be:

    <SOAP-ENV:Body>
      <ns0:update>
        <ns0:record ns2:type="ns1:Job" internalId="481756">
          <ns1:customFieldList>
            <ns4:customField ns4:scriptId="custentity777" ns4:internalId="489" ns2:type="ns4:StringCustomFieldRef">
              <ns4:value/>
            </ns4:customField>
          </ns1:customFieldList>
          <ns3:nullFieldList ns2:type="ns3:NullField">
            <ns3:name>custentity15</ns3:name>
          </ns3:nullFieldList>
        </ns0:record>
      </ns0:update>
    </SOAP-ENV:Body>

EDIT: actually, even this is wrong - both of your values should be in the nullFieldList:

    <SOAP-ENV:Body>
      <ns0:update>
        <ns0:record ns2:type="ns1:Job" internalId="481756">
          <ns3:nullFieldList ns2:type="ns3:NullField">
            <ns3:name>custentity15</ns3:name>
            <ns3:name>custentity777</ns3:name>
          </ns3:nullFieldList>
        </ns0:record>
      </ns0:update>
    </SOAP-ENV:Body>

Re: Soap Execute snap - Account creds in header

IIRC I tried to report this as part of asking for help with a workaround. I think there was another issue relating to null fields still not being updated properly, which is also why I did this.

I mean, I asked this over a year ago, lol. It's probably worth checking out, as it is definitely still a problem, or is at least ticketed - I almost did this workaround again recently for a much more complicated series of NetSuite SOAP calls, but gave up when I realized I didn't actually have to change the number of date columns we're already splitting calls across, and it doesn't need proper updates since we only expect to create new NetSuite objects with this process.

Re: Soap Execute snap - Account creds in header

Date types are not parsed properly in custom body fields, so if you want to use the standard snaps you cannot use null values.

Queueing Data for a Pipeline Execute

I want to call an external service from a Pipeline Execute that can only run a single call at a time, but the documents I would call it with can be aggregated and passed as a batch call. Is there any way to queue documents up and collapse them while the Pipeline Execute is still running, then submit them to the Pipeline Execute once it finishes? Currently the only way I can think to do this is to group documents into a predefined batch size, but the Pipeline Execute that sends data to the endpoint could conceivably hand off much more than that batch size at a time.

Re: Parallel Reused Pipeline Executes Uneven Load

Yep, the Snaplex property is empty. There's only one node in this specific instance, too. I think in this case I don't need to reuse executions - it's been a while since I reexamined what I was doing in the intermediate Pipeline Execute, and it doesn't really need to be parallelized any more, so I might end up just getting rid of "reuse pipeline executions" at this point!

Parallel Reused Pipeline Executes Uneven Load

I've been trying to build a pipeline that reuses Pipeline Executes across a pool size of 3, spreading input documents evenly between them to take advantage of multiple cores doing similar work on different input documents. In practice, though, it seems to distribute the work unevenly - in one example I'm looking at right now, two of the child executions got 8 documents each and the third got 25. My input data is super simple - each document just passes in a day to run against, which the internal logic uses to do a bunch of self-contained work before completing. The actual work is not at all simple, but I would expect the work to be distributed as evenly as possible. Is there a way to guarantee the work gets evenly distributed without pre-aggregating the data passed into each Pipeline Execute thread? Pre-aggregation does distribute the work evenly, but it's annoying as a pattern, and it doesn't take advantage of the fact that some days might run significantly faster than others.
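For reference, the pre-aggregation workaround I mention looks roughly like the sketch below: statically bin the days round-robin into one group per child execution and pass each group to its Pipeline Execute. This is plain Python just to illustrate the pattern; the pool size of 3 and the 41-day window (matching the 8/8/25 example above) are assumptions.

    from datetime import date, timedelta

    POOL_SIZE = 3  # matches the Pipeline Execute pool size (assumed)

    def round_robin_bins(items, bins):
        # Statically assign items to bins in round-robin order so each bin
        # ends up with the same count, give or take one.
        groups = [[] for _ in range(bins)]
        for i, item in enumerate(items):
            groups[i % bins].append(item)
        return groups

    # Hypothetical input: one document per day over a 41-day window.
    days = [date(2019, 1, 1) + timedelta(days=i) for i in range(41)]
    for group in round_robin_bins(days, POOL_SIZE):
        print(len(group), group[0], "->", group[-1])

The counts come out even (14/14/13 here), but the assignment is fixed up front, so a bin that finishes early just sits idle while a slower one keeps working - which is why I'd rather the even distribution happened dynamically.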