Recent Discussions
Platform Memory Alerts & Priority Notifications for Resource Failures
This is more about platform memory alerts. From my understanding, we have alert metrics in place that trigger an email when any node hits the threshold specified in Manager. However, I am looking at a specific use case. Consider an Ultra Pipeline that needs to invoke a child pipeline for transformation logic. This child pipeline is expected to run on the same node as the parent pipeline to avoid additional processing time, since it is exposed to the client side. Now, if the child pipeline fails to prepare because of insufficient resources on the node, no alert is generated, since the child pipeline did not return anything in the error view. Is there any feature or discussion underway to provide priority notifications to the organization admin for such failures? Task-level notifications won't help here, as they rely on the error limits configured at the task level. While I used an Ultra Pipeline as the example, this scenario applies to scheduled and API-triggered pipelines as well. Your insights would be appreciated.

— Ranjith
SnapLogic Execution Mode Confusion: LOCAL_SNAPLEX vs SNAPLEX_WITH_PATH with pipe.plexPath

I understand the basic difference between the two execution options for child pipelines:

- LOCAL_SNAPLEX: executes the child pipeline on one of the available nodes within the same Snaplex as the parent pipeline.
- SNAPLEX_WITH_PATH: lets you specify a Snaplex explicitly through the Snaplex Path field; this is generally used to run the child pipeline on a different Snaplex.

However, I noticed a practical overlap. Say I have a Snaplex named integration-test:

- If I choose LOCAL_SNAPLEX, the child pipeline runs on the same Snaplex (integration-test) as the parent.
- If I choose SNAPLEX_WITH_PATH and set the path to pipe.plexPath, it also resolves to the same Snaplex (integration-test) where the parent is running, so execution again happens locally.

I tested both options and found that the load was distributed similarly in both cases and the execution time was nearly identical. So from a functional perspective, both seem to behave the same when the Snaplex path resolves to the same environment.

My questions: What is the actual difference in behavior or purpose between these two options when pipe.plexPath resolves to the same Snaplex? And why is using SNAPLEX_WITH_PATH with pipe.plexPath flagged as critical in the pipeline quality check, even though the behavior appears equivalent to LOCAL_SNAPLEX? Curious whether anyone has faced similar observations or can shed light on the underlying difference.

— Ranjith
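Not knowing the platform internals, one way to picture the observed overlap is as a toy model. Everything here (the function name resolve_target, the path strings) is made up for illustration and says nothing about how SnapLogic actually schedules work:

```python
def resolve_target(mode, parent_plex, snaplex_path=None):
    """Toy model: which Snaplex does the child pipeline run on? Illustrative only."""
    if mode == "LOCAL_SNAPLEX":
        # Always the parent's own Snaplex.
        return parent_plex
    if mode == "SNAPLEX_WITH_PATH":
        # The Snaplex Path expression is evaluated at runtime; pipe.plexPath
        # yields the parent's Snaplex path, so the result coincides with above.
        return snaplex_path
    raise ValueError(f"unknown mode: {mode}")

parent = "/myorg/shared/integration-test"
# SNAPLEX_WITH_PATH with pipe.plexPath degenerates to LOCAL_SNAPLEX:
assert resolve_target("SNAPLEX_WITH_PATH", parent, parent) == \
       resolve_target("LOCAL_SNAPLEX", parent)
```

One plausible reading of the quality-check flag is that an expression-driven Snaplex path is resolved per execution and can silently change where the child runs, whereas LOCAL_SNAPLEX states the intent declaratively; that is a guess worth confirming with SnapLogic support.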
Need Guidance on Dynamic Excel File Generation and Email Integration

Hello Team,

I am currently developing an integration where the data structure in the Mapper is an array of objects, [{}, {}, ...]. One of the fields, Sales Employee, contains values such as null, Andrew Johnson, and Kaitlyn Bernd. My goal is to dynamically create a separate Excel file for each unique value in the Sales Employee field (including null), containing all of that value's records, and then send all the generated files as attachments in a single email. Since the employee names may vary and grow over time, the solution needs to handle the grouping and file generation dynamically. I would appreciate any expert opinions or best practices on achieving this efficiently in SnapLogic.

Thanks and Regards,
deepanshu_1
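The dynamic-grouping half of the problem (before any Excel formatting) can be sketched outside SnapLogic. Everything below — the function names, the "Unassigned" fallback for null, the filename scheme — is illustrative, not a SnapLogic API:

```python
from collections import defaultdict

def group_by_field(records, field):
    """Group a list of row dicts by one field; None values form their own group."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec.get(field)].append(rec)
    return dict(groups)

def filename_for(value):
    """Derive one workbook name per group; null becomes 'Unassigned' (our choice)."""
    return f"sales_{value or 'Unassigned'}.xlsx".replace(" ", "_")

rows = [
    {"Sales Employee": None, "amount": 10},
    {"Sales Employee": "Andrew Johnson", "amount": 20},
    {"Sales Employee": "Kaitlyn Bernd", "amount": 30},
    {"Sales Employee": "Andrew Johnson", "amount": 40},
]
groups = group_by_field(rows, "Sales Employee")
for value, recs in groups.items():
    print(filename_for(value), len(recs))
```

In a pipeline, the same shape is typically a Group By Fields Snap on Sales Employee feeding a child pipeline (via Pipeline Execute) that writes one workbook per group, with the email Snap attaching the collected files at the end — but the exact Snap choices depend on your environment.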
Inserting large data into ServiceNow

Hello Team,

I am developing a pipeline in SnapLogic where 6,000,000 records come from Snowflake, and I have designed it like this:

Parent pipeline: Snowflake Execute -> Mapper (one-to-one field mapping) -> Group By N with a group size of 10,000 -> Pipeline Execute with a pool size of 5. In the child pipeline I use a JSON Splitter and a ServiceNow Insert.

What can I do to optimize the performance and make this execute faster in SnapLogic? It currently takes a long time to run. Can someone assist?

Thanks in advance.
deepanshu_1
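The batching-plus-pool structure of that pipeline can be sketched as plain Python; the insert_batch stub and the record counts are illustrative stand-ins, not ServiceNow calls:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import islice

def batches(iterable, size):
    """Yield lists of up to `size` records (mirrors the Group By N Snap)."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def insert_batch(batch):
    # Stand-in for the child pipeline (JSON Splitter -> ServiceNow Insert).
    return len(batch)

records = range(25_000)  # pretend Snowflake output
# Pool size 5, as in the Pipeline Execute configuration:
with ThreadPoolExecutor(max_workers=5) as pool:
    inserted = sum(pool.map(insert_batch, batches(records, 10_000)))
print(inserted)  # 25000
```

The usual levers are the batch size, the pool size relative to what the ServiceNow instance tolerates concurrently, and whether the target supports a bulk-style load (for example import set tables) instead of row-by-row inserts; it is worth changing one variable at a time rather than assuming bigger is always faster.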
How to convert a MS Word file to PDF

What are my options for converting a file from one format to another? In my particular case, I need to read a Word file (.docx) and write it out as a .pdf file. Any suggestion on how to accomplish this is welcome.

Thank you,
Agron Bauta
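One common approach, assuming LibreOffice is installed on the node running the pipeline (an assumption — this is not a built-in SnapLogic Snap; it would be invoked from a Script Snap or an external script), is a headless soffice conversion. The sketch below only builds the command line:

```python
from pathlib import Path

def build_convert_cmd(docx_path, out_dir):
    """Argument list for a headless LibreOffice .docx -> .pdf conversion."""
    return [
        "soffice", "--headless",
        "--convert-to", "pdf",
        "--outdir", str(out_dir),
        str(docx_path),
    ]

cmd = build_convert_cmd(Path("report.docx"), Path("out"))
print(" ".join(cmd))
# To actually run it (requires LibreOffice on the node):
#   import subprocess; subprocess.run(cmd, check=True)
```

Hosted conversion APIs are an alternative when installing LibreOffice on the Snaplex node is not an option; fidelity of complex documents should be spot-checked either way.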
Quick Vote for SnapLogic in the DBTA Readers’ Choice Awards

Calling on our Integration Nation Community: this one’s for you! We’re in the running for Best Data Integration Solution at the DBTA Readers’ Choice Awards, but we need your vote to win.

✅ It’s quick.
✅ It’s easy.
✅ It makes a difference.

Vote now 👉 https://lnkd.in/e7hiSGr

— Scott
Trying to connect to an external SFTP

I have generated an SSH key pair, shared the public key with the client, and set up a Binary SSH account in Manager in order to connect to the client's SFTP server. Additionally, I have had the Groundplex's external IPs whitelisted on the client side and on our side as well. After all this, I get the following error when I try to browse the path using the Directory Browser Snap:

    error: Unable to create filesystem object for sftp://....
    stacktrace: Caused by: com.jcraft.jsch.JSchException: Session.connect: java.net.SocketException: Connection reset
    Caused by: java.net.SocketException: Connection reset
    reason: Failed to get SFTP session connected
    resolution: Please check all properties and credentials

I am stuck completing the solution because of this error, so any help is very much appreciated. Thanks!

— vgautam64
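A "Connection reset" during Session.connect happens at the TCP level, below SSH, so it can help to first confirm plain TCP reachability from the Groundplex node itself. This probe is a generic sketch (host and port are placeholders you would replace with the client's SFTP endpoint):

```python
import socket

def tcp_probe(host, port, timeout=5.0):
    """Return True if a plain TCP connection to host:port succeeds.

    A reset this early usually means something between the Groundplex and the
    SFTP server (firewall, NAT, or the server itself) accepted and then dropped
    the connection, so testing below the SSH layer helps isolate the culprit.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If TCP connects but SSH still resets, the usual suspects are an intermediate firewall inspecting the session, a key-exchange/cipher mismatch between the server and the jsch client, or the private key being in a format the account type does not accept (e.g. a newer OpenSSH format rather than classic PEM).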
Data reconciliation solutions?

One of my company's use cases for SnapLogic today is replication of data from Salesforce into internal Kafka topics for use throughout the enterprise. There have been various instances of internal consumers of the Kafka data reporting missing records. Investigations have found multiple causes for these data drops; some of the causes are related to behavior that Salesforce describes as "Working As Designed". Salesforce has recommended other replication architectures, but there are various concerns within my company about using them (license cost, platform load), and we might still end up with missing data.

So, we're looking into data reconciliation / auditing solutions. Are there any recommendations on a tool that can:

* Identify records where the record in Salesforce does not have a matching record (e.g. same timestamp) in Kafka
* Generate a message containing relevant metadata (e.g. record Id, Salesforce object, Kafka topic) to be sent to a REST endpoint / message queue for reprocessing

— feenst
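The core comparison step that most reconciliation tools perform can be sketched with plain set logic; the field names and message shape below are illustrative, not any specific tool's format:

```python
def find_missing(source_keys, sink_keys):
    """IDs present in the source (Salesforce) but absent from the sink (Kafka)."""
    return sorted(set(source_keys) - set(sink_keys))

def to_reprocess_msgs(missing_ids, sf_object, topic):
    """Build reprocessing messages for a REST endpoint or queue (shape is illustrative)."""
    return [
        {"recordId": rid, "sfObject": sf_object, "kafkaTopic": topic}
        for rid in missing_ids
    ]

sf_ids = ["001A", "001B", "001C"]   # e.g. queried in bulk from Salesforce
kafka_ids = ["001A", "001C"]        # e.g. consumed from the topic
missing = find_missing(sf_ids, kafka_ids)
msgs = to_reprocess_msgs(missing, "Account", "sf.accounts")
print(msgs)
```

In practice the source snapshot needs a watermark (for example, only records older than the maximum consumer lag) so that in-flight records are not falsely reported as missing; matching on a last-modified timestamp, as mentioned above, adds a second comparison on top of the key match.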
Cloning Data Stream: Copy Snap vs Router Snap for Efficiency and Performance

Hi Team,

In my pipeline, I need to clone the incoming data stream for further transformation. I have two options: use the Copy Snap, or use the Router Snap with both conditions set to true. Which option is more suitable for cloning large volumes of data, and which one is more efficient in terms of CPU and memory usage?

— Ranjith
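As a toy model of the difference (not SnapLogic internals): Copy simply duplicates each document to every output view, while Router evaluates an expression per document per route, even when both expressions are the constant true:

```python
from itertools import tee

def copy_snap(stream):
    """Copy: duplicate every document to each of two output views."""
    return tee(stream, 2)

def router_snap(stream, pred_a, pred_b):
    """Router: each document is tested against every route's expression."""
    out_a, out_b = [], []
    for doc in stream:
        if pred_a(doc):   # one expression evaluation per document...
            out_a.append(doc)
        if pred_b(doc):   # ...per route, even if the expression is just `true`
            out_b.append(doc)
    return out_a, out_b

docs = [{"n": i} for i in range(4)]
ra, rb = router_snap(docs, lambda d: True, lambda d: True)
c1, c2 = copy_snap(iter(docs))
print(len(ra), len(rb), len(list(c1)), len(list(c2)))  # 4 4 4 4
```

That per-document expression evaluation is the usual argument for preferring Copy for pure cloning; Router earns its cost only when the route conditions genuinely differ (or when "First match only" semantics are wanted).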