Contributions

Re: How to set the execution user

For triggered tasks, the username field shows the user who created the task; the pipeline actually runs as the task owner. A change to show the task invoker username was made a few releases back. It is displayed as a new property called Remote User in the pipeline runtime info dialog, under the Extra Details tab. If the task was invoked with basic auth, this property shows the invoker username. If the task was invoked with some other authentication, like a bearer token, this property will not show up, since we do not have the invoker details in that case.

Re: Fetching 1k records from table using "Snowflake select snap"

The database lookup pattern works well with an OLTP database, which has low query startup costs and is optimized for single-record operations. With a data warehouse like Snowflake, which is optimized for analytical queries, the query startup cost makes id-based lookups less performant than on an OLTP database. You could fetch the required column of the data set from Snowflake using a select query and then use the In-Memory Lookup snap; if the lookup table fits in memory, that will be more performant. If the data set is too large to fit in memory, an OLTP database would be the better choice for such a use case.

Re: Fetching 1k records from table using "Snowflake select snap"

The Snowflake Lookup snap covers this functionality. It allows lookups on any number of ids and internally batches as required. The Lookup snap also uses bind parameters, avoiding the SQL injection issues that can arise when a SQL statement string is constructed in the Select snap (a short parameterized-query sketch appears below, after the next reply).

Re: PipeLine Execute- File Writter- Getting Batch Number

Using the Group By N snap works fine for smaller document sizes and document counts. Since grouping combines multiple documents into larger documents in memory, that approach is not recommended when document sizes or counts are large. The batching option in the PipeExec snap does not support automatically passing a batch number. The parent pipeline can use an expression like ((snap.in.totalCount + 1000) / 1000).toFixed() to generate a batch number to pass to the child pipeline; the child can use that value to generate the file name. See the attached parent and child pipelines: pparent_2022_03_03.slp (parent) and pchild_2022_03_03.slp (child).
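For reference, here is a minimal plain-Python sketch of the same batch-numbering idea (the batch size of 1000 comes from the expression above; the file-name pattern is just an illustration):

    BATCH_SIZE = 1000

    def batch_number(doc_count):
        # Documents 1..1000 -> batch 1, 1001..2000 -> batch 2, and so on.
        return (doc_count + BATCH_SIZE - 1) // BATCH_SIZE

    # The child pipeline could derive a file name from the batch number:
    for count in (1, 1000, 1001, 2500):
        print("doc %d -> batch_%d.csv" % (count, batch_number(count)))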
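And to illustrate the bind-parameter point from the Snowflake Lookup reply above, here is a minimal sketch using the Snowflake Python connector. The connection details, table, and column names are placeholders; this shows the general bind-parameter technique, not the Lookup snap's internal implementation:

    import snowflake.connector

    # Placeholder connection details.
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="my_password",
        warehouse="my_wh", database="my_db", schema="public",
    )

    ids = [101, 102, 103]
    # One placeholder per id; the driver binds the values separately,
    # so they are never spliced into the SQL text.
    placeholders = ", ".join(["%s"] * len(ids))
    sql = "SELECT id, name FROM customers WHERE id IN (%s)" % placeholders
    cur = conn.cursor()
    try:
        cur.execute(sql, ids)
        rows = cur.fetchall()
    finally:
        cur.close()

    # Unsafe alternative (what to avoid): building the id values into the
    # SQL string directly, e.g. "... IN (" + ",".join(map(str, ids)) + ")".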
Re: Groundplex network rules and security

A load balancer needs to be provisioned for scenarios where APIs on the Snaplex nodes are triggered externally. The Snaplex nodes should not be exposed to the external network; all requests to the Snaplex should come through the load balancer. The load balancer requirement is the same with or without the API Management (APIM) feature enabled. Enabling the APIM feature provides additional authentication and authorization mechanisms to protect the API endpoint.

Re: Groundplex without internet

For Ultra pipeline instances that are already running (and do not have any dependency on the control plane or other network resources), the pipelines will continue processing documents for up to two days if the heartbeat with the control plane is not happening. For other functionality, like scheduled pipelines and Snaplex triggers, short network disruptions are tolerated; longer disruptions will result in failures. New pipeline execution information is cached in memory and pushed to the control plane when the network is restored. How much data can be cached depends on the pipeline load on the node and the memory available. Any node that cannot talk to the control plane for two days will restart itself to try to re-establish communication.

Re: Snaplogic project metadata property question

The metadata field is used internally to store information about assets, including asset recycling (after delete) and asset search related info. It is persisted within the org, but it will not be persisted across project export and migration. The metadata field is not currently exposed for external use; we could open an enhancement request to expose and document it. The metadata does have limits in terms of data size: keeping external metadata under 16 entries, with each value under 1 KB in size, would be recommended.

Re: Cannot preview data

The issue with special characters causing preview problems was fixed in a previous release. If the preview is not showing up, it is possible that one of the snaps is set to “Execute only” mode. Changing it to “Validate & Execute” will make the snap execute when validation is done on the pipeline. For Writer snaps and other snaps that can have side effects, the default is “Execute only”, so the mode has to be changed for those snaps.

Re: Create Key Value Pair Array

There is an experimental API here. Given one or more input documents and the same number of desired output documents, the API tries to generate an expression that performs the desired transformation. In this case, set the input to

    [ { "washCare": [ "Machine Wash, WARM", "Hand Wash Only" ] } ]

and the output to

    [ { "washCare": [ { "instruction": "Machine Wash, WARM" }, { "instruction": "Hand Wash Only" } ] } ]

and click Synthesize. The generated result is

    { washCare: $.washCare.map(elem => { instruction: elem }) }

Note: this is experimental right now and works for limited inputs only. (A plain-Python equivalent of the generated expression is sketched at the end of this page.)

Re: Unable to find proper resource to which can help import external libraries in script snap

The script runs in the context of the JVM, so it is a Jython script rather than a pure Python script. Additional Python dependencies cannot be installed, but Python standard library functions can be used. In this case, use urllib2 instead of the requests library. For more complex use cases, external Java libraries can be loaded for use from the Jython script; see this post for an example.
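To make the urllib2 suggestion concrete, here is a minimal sketch of an HTTP GET from a Script snap (the URL and header are placeholders; Jython is Python 2, so urllib2 comes from the standard library):

    import urllib2

    # Placeholder endpoint -- substitute the service you need to call.
    url = "https://example.com/api/status"
    request = urllib2.Request(url, headers={"Accept": "application/json"})
    response = urllib2.urlopen(request, timeout=30)
    try:
        body = response.read()
    finally:
        response.close()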
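And, going back to the Create Key Value Pair Array reply above, the generated expression maps each string in the array to an object. The same transformation in plain Python, using the example document from that reply:

    doc = {"washCare": ["Machine Wash, WARM", "Hand Wash Only"]}

    # Equivalent of: { washCare: $.washCare.map(elem => { instruction: elem }) }
    result = {"washCare": [{"instruction": elem} for elem in doc["washCare"]]}

    print(result)
    # {'washCare': [{'instruction': 'Machine Wash, WARM'},
    #               {'instruction': 'Hand Wash Only'}]}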