Reltio API calls on Bulk Load and Read Object snaps
Hello all, One of AstraZeneca's requirements is that they need to take a bulk export from Reltio MDM using SnapLogic. They are looking for clarification on whether the Reltio Bulk Export and Reltio Read (Object) snaps perform their export with a single API call (all records in one request) or with multiple calls (one request per record). I am not sure how it was designed, and we do not document this on our side. Is anyone aware of how we handle the API calls for these Reltio snaps? Thanks! Rob
Pipeline queuing and Pipeline Execute
Hello team, We have documentation regarding queueing of pipelines here: https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1439392/Pipeline+Queueing

It notes: The SnapLogic control plane and data plane were improved in the Winter 2016/4.4 release to more gracefully handle certain overload conditions on a Snaplex. This change introduces a new "Queued" state for pipeline executions and adds properties to the Snaplex configuration to set the resource thresholds used to detect an overload. When an execution is in the Queued state, the control plane will try to start the execution when resources become available on the Snaplex. A pipeline will not stay in the Queued state forever; it will time out after a while. A scheduled pipeline will time out before the next scheduled execution or after 4 hours, whichever comes first. A ForEach execution expires after 5 minutes. Not all methods of execution will result in a pipeline being put into the Queued state. Only executions started by a Scheduled Task or a ForEach Snap.

My customer would like to know, regarding the last sentence: what about executions that are started by a Pipeline Execute snap - will that method of execution result in the pipeline being put into the Queued state? I believe so, but I wanted to verify. Thanks, Rob
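For reference, a minimal sketch of the expiry rules quoted above, using only the behavior the documentation states (scheduled executions expire at the next scheduled run or after 4 hours, whichever comes first; ForEach executions expire after 5 minutes). The function and argument names are illustrative, not part of the product, and the open question about Pipeline Execute is deliberately left as the unhandled case:

```python
from datetime import datetime, timedelta
from typing import Optional

def queued_expiry(queued_at: datetime, started_by: str,
                  next_scheduled_run: Optional[datetime] = None) -> datetime:
    """Illustrative only: encodes the queue-expiry rules quoted from the docs."""
    if started_by == "scheduled_task":
        # Expires at the next scheduled execution or after 4 hours, whichever comes first.
        cutoff = queued_at + timedelta(hours=4)
        return min(next_scheduled_run, cutoff) if next_scheduled_run else cutoff
    if started_by == "foreach":
        # A ForEach execution expires after 5 minutes in the queue.
        return queued_at + timedelta(minutes=5)
    # Per the documentation, only the two methods above are queued; whether a
    # Pipeline Execute child is queued is exactly the question being asked here.
    raise ValueError(f"no documented queueing behavior for {started_by!r}")
```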
Join snap hangs, but adding sort snaps on the input views fixes it
Crown has a pipeline that gets stuck on the Join snap and never finishes. The pipeline (CrownInsiteDev/User Projects/Kim Miesse/load_fact_daily_hour_meter) will complete and return the expected data if we configure the Join snap with a join type of Left outer and the data as unsorted; the pipeline completes in about a minute in that scenario. The pipeline will not complete with a join type of Merge on the same snap, even after running for 30 minutes. There were no Sort snaps before the Join input views. I was able to get the pipeline to complete in less than 30 seconds by adding Sort snaps before each of the Join snap input views. The customer acknowledged that this works, but since the Join snap in question was set to unsorted, they think it should work without sorts.

My recommendation was that this behavior is due to the Merge algorithm. The Merge algorithm is the most efficient way to join two very large sets of data that are both sorted on the join key. The Merge join simultaneously reads a row from each input and compares them using the join key. If there is a match, the rows are returned. Otherwise, the row with the smaller value can be discarded because, since both inputs are sorted, the discarded row will not match any other row in the other set of data. This repeats until one of the inputs is exhausted. Even if there are still rows in the other input, they will clearly not match any rows in the fully-scanned one, so there is no need to continue. Since both inputs can potentially be scanned, the maximum cost of a Merge join is the sum of both inputs, or in terms of complexity: O(N+M). Generally speaking, if we sort the data prior to the Merge join, we are more efficient because of the way the Merge algorithm works.

The customer explained that they are using the Join as a way to wait for all rows in a stream before continuing processing, i.e., they want to wait for all rows to be inserted into a table before querying that table. They do not want to concatenate two data sets together as a Union snap would, and they want to wait for both input streams to finish before outputting, which the Union does not do. The documentation for the Join snap, https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1439005/Join, says "If you select Merge, the documents from the input views are merged into one document. You do not have to specify any other join properties when merging documents." This tells them it doesn't matter what the join criteria are on a merge and that it doesn't look for matches, and therefore it's even less important for the data to be sorted. So they do not feel they should have to sort the data before the joins.

Does anyone have additional technical recommendations or suggestions around why using a Sort snap improves efficiency and, in cases like this, avoids the "hung" condition on the Join snap? Thanks! Rob
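To make the sorted-merge reasoning above concrete, here is a minimal sketch in Python (illustrative only, not the Join snap's actual implementation). Both inputs must already be ordered on the join key; the smaller key is discarded at each step, which is only safe because the inputs are sorted. Feed it unsorted input and it will silently skip matches or sit waiting on rows that never arrive in the expected order, which is consistent with the hang the customer is seeing:

```python
def merge_join(left, right, key):
    """Sorted-merge join sketch: both inputs must already be sorted on `key`.

    One-to-one matching only, to keep the sketch short; the point is the
    single forward pass over each input, i.e. O(N + M) total work.
    """
    i = j = 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][key], right[j][key]
        if lk == rk:
            yield {**left[i], **right[j]}
            i += 1
            j += 1
        elif lk < rk:
            # Safe to discard only because the input is sorted: this left row
            # can never match anything later on the right side.
            i += 1
        else:
            j += 1
    # Any rows remaining on either side cannot match the fully-scanned side.

rows = list(merge_join(
    [{"id": 1, "a": "x"}, {"id": 3, "a": "y"}],
    [{"id": 1, "b": "p"}, {"id": 2, "b": "q"}, {"id": 3, "b": "r"}],
    key="id",
))  # two joined rows, for id 1 and id 3
```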
Outside Community
Continuing the discussion from Stackoverflow – Snaplogic tag created: @dmiller @cstewart I have a strong feeling that no posts should be entertained outside of our community. We should keep our platform tightly knit so that information is not easily available publicly outside it. If posts and solutions are allowed outside, it would become polluted. There will be opponents to this idea who believe in spreading the message widely across the internet from a marketing point of view, but we have marketing to handle that. One example I would give is Oracle and Workday. Oracle: you can find anything out there on Google. Workday: good luck finding any relevant posting about its technology, or even a mere discussion of its platform, publicly. They only expose information through their community. Whereas our documentation is public, Workday documentation is not; it is only available to its customers and partners. Just my two cents on it.
Response Needed: Testing the "Solved" plugin
I want to test the ability for users to accept answers as a solution before I announce that the functionality has been enabled in the categories. Can I get a couple of people to post to this internal-only thread just so I have a few items to select from?
Snaplex Monitoring API - timespan
In https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1438923/Snaplex+Monitoring+APIs, it reads: "If a timespan is not set, it will return information for the last hour." What is the definition of the timespan in this context? It sounds as if there is a way to set a timespan. If that is the case, how can we set one?
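If the API does accept explicit time bounds, the call would presumably look something like the sketch below. To be clear, the URL and the start_time/end_time parameter names are assumptions for illustration only; they are not documented Snaplex Monitoring API fields, so check the page above for the actual parameter names before using this:

```python
import time
import requests

# Hypothetical request: the URL path and the parameter names/units are
# placeholders, not documented Snaplex Monitoring API fields.
now_ms = int(time.time() * 1000)
resp = requests.get(
    "https://elastic.snaplogic.com/api/1/rest/public/snaplex/monitoring",  # placeholder
    params={"start_time": now_ms - 6 * 3600 * 1000, "end_time": now_ms},   # last 6 hours
    headers={"Authorization": "Bearer <token>"},
)
print(resp.status_code, resp.json())
```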
Rest post cookies
Continuing the discussion from Request Body in Rest API Pipeline: @tstack @dmiller Can you take a look at this? This request needs a Cookie to be sent across and authenticated via our REST POST snap. I have tried the same request in SoapUI and it works. I also tried to get it to work in Postman, but even with the Interceptor that handles cookies I was unable to do it. Can you suggest what else could be done? Here is my pipeline: https://elastic.snaplogic.com/sl/designer.html?pipe_snode=598b5f21a92066355787c461&active_org=ConnectFasterInc
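Outside of the snap itself, the equivalent request is simply a POST that carries a Cookie header. A minimal sketch with Python requests (the URL, cookie name, and payload are placeholders) may help confirm the service actually accepts the cookie before wiring the same header into the REST POST snap's HTTP header settings:

```python
import requests

# Placeholder URL, cookie name, and payload; substitute the real values.
resp = requests.post(
    "https://example.com/api/endpoint",
    json={"field": "value"},
    cookies={"JSESSIONID": "<session-id-from-login>"},
    # Equivalently, send it as an explicit header instead:
    # headers={"Cookie": "JSESSIONID=<session-id-from-login>"},
)
print(resp.status_code, resp.text)
```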
Community folder in ConnectFasterInc
Folks, we can create sample pipelines for any solutions we build for SnapLogic Community questions. In ConnectFasterInc, I have created a project space/folder called _Community/SnapLogic. This will help other SnapLogicians look at the same solution in case they want to refer to it or enhance it.
Expression to remove non-valid XML chars
Hello all, I am trying to create a Mapper expression to remove any non-valid XML characters. The need behind the request: the Workday Write snap can only handle valid XML characters. If we have content in a field such as "Name": "Firstname Lastname mbH\u000b (NVG)", the Workday Write will fail because of the non-valid XML \u000b, with the below error:

Failure: An error occurred while parsing the request document, Reason: An invalid XML character (Unicode: 0xb) was found in the element content of the document., Resolution: Please verify the validity of the XML document

We know from Extensible Markup Language (XML) 1.0 (Fifth Edition) that valid XML characters include: Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF] /* any Unicode character, excluding the surrogate blocks, FFFE, and FFFF */

We could just replace the \u000b in a Mapper snap, but my customer would like a way to cover all non-valid XML chars, as opposed to just this one, as they cannot anticipate what other non-valid characters could possibly occur. So I am working on an expression to remove non-valid XML before the Workday Write snap, and I thought someone may have already done this before. Thanks for any help! Rob
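One way to approach it, sketched in Python for clarity: negate the XML 1.0 Char production quoted above and strip everything that falls outside it. The same character class should carry over to a Mapper expression via a regex replace (assuming the expression language's regex support handles these Unicode escapes), so treat this as a starting point rather than a finished mapper expression:

```python
import re

# Anything outside the XML 1.0 Char production quoted above is removed:
# #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
_INVALID_XML_CHARS = re.compile(
    "[^\u0009\u000A\u000D\u0020-\uD7FF\uE000-\uFFFD\U00010000-\U0010FFFF]"
)

def strip_invalid_xml_chars(text: str) -> str:
    """Drop every character that is not valid in an XML 1.0 document."""
    return _INVALID_XML_CHARS.sub("", text)

# Example: the \u000b (vertical tab) that broke the Workday Write snap is removed.
print(strip_invalid_xml_chars("Firstname Lastname mbH\u000b (NVG)"))
# -> "Firstname Lastname mbH (NVG)"
```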
What are the purposes of $property_map.input and $property_map.output in a *.slp file
I have downloaded a pipeline as a *.slp file and opened it in a text editor. I noticed that both $property_map.input and $property_map.output refer to certain snaps in the pipeline. What is the purpose of the following two objects:

$property_map.input
$property_map.output

Why are those snaps singled out, for example "Mapper_FailedCertificatesList - output0"? Following is the snippet:

```json
"property_map": {
    "info": null,
    "input": {
        "7536ff22-1c0d-44c6-ada7-718de5a98634_input0": {
            "label": { "value": "Conditional_Certificates_Offers_Length - input0" },
            "view_type": { "value": "document" }
        }
    },
    "settings": {
        "param_table": { "value": },
        "imports": { "value": [] }
    },
    "output": {
        "ac14846d-7856-4e1d-9ca0-b519741a25b6_output0": {
            "view_type": { "value": "document" },
            "label": { "value": "Error 400 Union - output0" }
        },
        "2bfe809b-a4b7-4f9b-9007-5262ea6b0b58_output0": {
            "label": { "value": "Mapper_FailedCertificatesList - output0" },
            "view_type": { "value": "document" }
        },
        "36e341cd-bb8d-42d9-94f9-24eb72a812c8_output0": {
            "view_type": { "value": "document" },
            "label": { "value": "Mapper_FailedCertificatesList - output0" }
        }
    },
```

I did notice that the snap "Mapper_FailedCertificatesList - output0" has an io_stats object with values, while most other snaps in the pipeline monitor API output do not. Here is an example of the io_stats value:

```json
[
    {
        "send_duration": 49923,
        "remote": "pa23sl-fmsv-ux02007.fsac5.snaplogic.net/172.29.66.28:8089",
        "bytes_recv": 0,
        "start_time": 1499885665780,
        "error_duration": 0,
        "bytes_sent": 1370,
        "recv_duration": 0,
        "error_count": 0,
        "type": "socket"
    },
    {
        "send_duration": 149004,
        "remote": "pa23sl-fmsv-ux02000.fsac5.snaplogic.net/172.29.65.214:8089",
        "bytes_recv": 0,
        "start_time": 1499885663392,
        "error_duration": 0,
        "bytes_sent": 1178,
        "recv_duration": 0,
        "error_count": 0,
        "type": "socket"
    }
]
```

This behavior can be seen in SnapRuntimeFlashlight-AllSnaps-RedeemV2.2.xlsx (80.0 KB), an aggregated view of the pipeline monitor API output for many ruuids of the same pipeline. Also attached is the .slp file Redeem_Pl_V2.2_2017_07_12.slp.txt (212.8 KB).

P.S. After posting this, I saw this in the dashboard (screenshot not included) and am now reading https://doc.snaplogic.com/wiki/display/SD/Check+Pipeline+Execution+Statistics
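Since .slp exports are plain JSON, a small sketch like the one below can list exactly which views a given pipeline singles out under property_map.input and property_map.output. It assumes property_map sits at the top level of the exported document, as the snippet above suggests, and the filename is just the attachment mentioned in the post:

```python
import json

# An exported pipeline (.slp) is a JSON document.
with open("Redeem_Pl_V2.2_2017_07_12.slp") as f:
    pipeline = json.load(f)

prop_map = pipeline["property_map"]
for section in ("input", "output"):
    print(f"property_map.{section}:")
    for view_id, view in prop_map.get(section, {}).items():
        # Each entry maps a view id to its label and view type.
        print(f"  {view_id}: {view['label']['value']} ({view['view_type']['value']})")
```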