Creating APIs with SnapLogic Pipelines and Ultra Tasks
Overview

An API (Application Programming Interface) is an old concept repurposed to mean a modern web service based on the REST architectural style and increasingly using the JSON data format. These modern APIs have become the lingua franca of the digital economy, facilitating lightweight, performant communication between applications across an enterprise or across the internet. Typically, RESTful APIs perform operations on “resources” (Customers, Orders, People, etc.). By convention, the type of operation is identified using the most common HTTP verbs: POST (create), GET (read), PUT (update), and DELETE (delete).

SnapLogic provides Ultra Tasks as the means by which a Pipeline can be exposed as a secure, high-availability, low-latency, sub-second request/response API. For example, the following is a Pipeline that embodies a Customer API exposed using an Ultra Task.

Once the Ultra Task is enabled, the associated Pipeline stays resident in memory on the Snaplex node(s) it was configured to execute on. The number of Instances can be configured to accommodate the expected concurrent API request volume. The API can then be called securely from an external application (the Postman REST client in this case). The following is an example of an API GET (read) request for a specific Customer, identified by the ID “1001”.

Designing the Pipeline

Ultra Tasks deliver the components of the HTTP request message to the associated Pipeline as fields in the input JSON document:

- content: The request body for POST or PUT requests.
- headers: The HTTP request headers. For example, the ‘User-Agent’ header can be referenced in the input document as $['user-agent'].
- uri: The original URI of the request.
- method: The HTTP request method.
- query: The parsed version of the query string. The value of this field is an object whose fields correspond to query string parameters, each containing a list of all the values for that parameter.
For example, the following query string:

foo=bar&foo=baz&one=1

will result in a query object that looks like:

{ "foo": ["bar", "baz"], "one": ["1"] }

- task_name: The name of the Ultra Task.
- path_info: The part of the path after the Ultra Task URL.
- server_ip: The IP address of the feed-master that received the request.
- server_port: The TCP port of the feed-master that received the request.
- client_ip: The IP address of the client that sent the request.
- client_port: The TCP port of the client that sent the request.

In the Customer API example above:

- A Mapper Snap is used to parse the ID from the portion of the URL after the base path provided by the Ultra Task (demo-fm.snaplogic.io/api/1/rest/feed-master/queue/MyOrg/MyProjectSpace/API/Customers), in this case “/1001”.
- A Router Snap is used to conditionally direct execution to the appropriate subflows, designed in accordance with the RESTful CRUD operations described above.

Additional Reference

https://en.wikipedia.org/wiki/Representational_state_transfer
http://doc.snaplogic.com/ultra-tasks
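The query-object shape described above can be reproduced locally for testing with Python's standard library: `urllib.parse.parse_qs` yields the same lists-of-values structure, and the resource ID can be recovered from `path_info` with a simple split. This is only an illustration of the parsing behavior, not SnapLogic code; the helper name is mine.

```python
from urllib.parse import parse_qs

# parse_qs produces the same lists-of-values shape that the Ultra Task
# delivers in the $query field of the input document.
query = parse_qs("foo=bar&foo=baz&one=1")
# query == {"foo": ["bar", "baz"], "one": ["1"]}

def id_from_path_info(path_info: str) -> str:
    """Recover the resource ID from the part of the URL after the
    task's base path, e.g. "/1001" -> "1001"."""
    return path_info.lstrip("/").split("/")[0]

customer_id = id_from_path_info("/1001")
```

In the pipeline itself, the equivalent extraction is what the Mapper Snap performs on $path_info before the Router directs the request to the right CRUD subflow.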
Sending XML payload to Ultra pipeline

I’m trying to execute an Ultra pipeline with an XML payload, but I’m getting the error below:

Document input views only accept JSON-encoded data
Resolution: Please raise a defect with the following information
Pipeline Ruuid: d3c6dcc2-a498-4517-8df9-54910d504959
Reason: Snap failed unexpectedly and did not provide any reason

What steps can be taken to make Ultra parse an XML payload?
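Since document input views accept only JSON, one workaround is to convert the XML to JSON on the client before calling the Ultra Task (another avenue worth checking in the SnapLogic docs is a binary input view feeding an XML Parser Snap inside the pipeline). A minimal client-side sketch; the attribute and text conventions (`@` prefix, `#text`) are my assumptions, not a SnapLogic standard:

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(element):
    """Recursively convert an XML element into a plain dict so it can
    be sent to the Ultra Task as JSON. Attributes are prefixed with
    '@'; repeated child tags collapse into lists; leaf text is
    returned directly."""
    node = {"@" + k: v for k, v in element.attrib.items()}
    for child in element:
        value = xml_to_dict(child)
        if child.tag in node:
            if not isinstance(node[child.tag], list):
                node[child.tag] = [node[child.tag]]
            node[child.tag].append(value)
        else:
            node[child.tag] = value
    text = (element.text or "").strip()
    if text and not node:
        return text          # leaf element: just its text
    if text:
        node["#text"] = text
    return node

payload = xml_to_dict(ET.fromstring("<customer><id>1001</id><name>Acme</name></customer>"))
body = json.dumps(payload)   # JSON-encoded request body for the task
```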
How to find incoming data format (JSON, XML) dynamically in pipeline input view?

How can I detect the incoming data format dynamically? For example:

- If the incoming data is JSON, I need to perform certain validation and follow the JSON flow.
- If the incoming data is XML, I need to perform certain other operations.

Can anyone suggest how to detect the incoming data format at run-time, using a single input view for the pipeline?
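One pragmatic approach is first-character sniffing: JSON payloads begin with '{' or '[', XML payloads with '<'. The check below is sketched in Python only to show the logic; in a pipeline the equivalent test could live in a Router Snap condition on the raw content (an assumption about pipeline design, not a built-in SnapLogic feature):

```python
def detect_format(raw: str) -> str:
    """Guess the payload format from its first non-whitespace
    character: '<' suggests XML, '{' or '[' suggests JSON."""
    stripped = raw.lstrip()
    if stripped.startswith("<"):
        return "xml"
    if stripped[:1] in ("{", "["):
        return "json"
    return "unknown"
```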
Capture runtime document in Ultra pipeline for debugging purposes

Sometimes you want to capture the input or output document of a Snap at runtime in an Ultra pipeline for debugging purposes. Here’s a quick way:

- Don’t use the File Writer Snap, as it breaks the lineage in an Ultra pipeline.
- Use a Pipeline Execute Snap instead.
- Send the logging content as one or more parameters to the child pipeline.
- Make sure the child pipeline does nothing heavyweight.
- Make sure the child pipeline captures the parameter in question.
- View the captured parameter in the Dashboard.

You can copy the captured parameter and paste it into a JSON pretty-print tool for better viewing.
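The "send the logging content as a parameter" step implies serializing the document to a string, and keeping it small helps the child pipeline stay lightweight. A sketch of that serialization step in Python; the size limit is an assumption, not a documented SnapLogic constraint:

```python
import json

MAX_PARAM_LEN = 2000  # assumed cap; keep captured parameters small

def to_debug_param(document: dict, limit: int = MAX_PARAM_LEN) -> str:
    """Serialize a document so it can be passed as a Pipeline Execute
    parameter, truncating so the child pipeline stays lightweight.
    default=str keeps non-JSON values (dates, etc.) from failing."""
    return json.dumps(document, default=str)[:limit]
```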
Converting pipelines to Ultra

Suppose that I have two pipelines: one stores the incoming document in a specified queue, and the other picks up the document from that queue and processes it. According to my understanding, the easiest way to turn these into Ultra Tasks would be:

- Eliminate the first pipeline entirely (as the queue is already present in Ultra).
- Eliminate the ‘picking up the document from the queue’ parts from the second pipeline, and then either:
  - call the second pipeline using the Pipeline Execute Snap from a new pipeline (if some Snaps are not compatible with Ultra) and construct the Ultra Task using the new pipeline, or
  - use the second pipeline as the Ultra pipeline if all Snaps are compatible.

Please let me know if my understanding is correct.

Also, if some comparatively time-consuming Snaps are present in the Ultra pipeline (assume tasks like 10+ SQL Server Executes and half a dozen REST POSTs: ServiceNow create/update, other POSTs, etc.), my understanding is that the caller will time out waiting for the response, but the Ultra Task will run to completion. In this case, how can the caller know whether it succeeded or not? In the original scenario that I described, once the document has been put into the specified queue, it is assumed that it will be picked up, and the caller gets a response immediately.
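On the timeout concern: one common pattern is for the caller to attach its own correlation ID to each request, treat a timeout as "outcome unknown", and reconcile later by querying the downstream systems for that ID. A sketch of the caller side; all names here are hypothetical, and the HTTP call itself is omitted:

```python
import json
import uuid

def build_request(document: dict):
    """Wrap a document with a client-generated correlation ID so the
    outcome can be reconciled later if the HTTP call times out."""
    correlation_id = str(uuid.uuid4())
    payload = json.dumps({"correlation_id": correlation_id,
                          "data": document})
    return correlation_id, payload

cid, body = build_request({"order": 42})
# On timeout: record `cid` as "outcome unknown" and reconcile later,
# e.g. by querying the downstream systems for that correlation ID.
```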
Ultra Task with Salesforce Read Snap: merge output into a single document

I have a question similar to the one asked here: Merge documents into one big document.

Specifically, I have a pipeline with a Salesforce Read Snap which returns 0 or more documents as the result. I’d like the output of the Read Snap to become a single document containing an array with the results returned by the Read Snap. Based on my reading of the forum posting:

- I cannot use the Group By N Snap, as it isn’t supported in Ultra Tasks.
- It is possible to use the Script Snap in an Ultra pipeline to do this.

I’ve attached a pipeline that can do this: SFReadUltraPipeline.slp (13.6 KB)

The pipeline returns the following output from a triggered (non-Ultra) task:

[
  {
    "theOriginal": {
      "Id": "0036A00000Vatj6QAB",
      "FirstName": "Andrew",
      "LastName": "Ramirez",
      "hed__AlternateEmail__c": "aramirez@destinsolutions.com",
      "original": { "FirstName": "Andrew", "Email": "aramirez@destinysolutions.com" }
    },
    "duplicates": [
      {
        "Id": "0036A00000AbbdpQAB",
        "FirstName": "Andrew",
        "LastName": "Ramirez",
        "hed__AlternateEmail__c": "aramirez@destinysolutions.com",
        "original": { "FirstName": "Andrew", "Email": "aramirez@destinysolutions.com" }
      },
      {
        "Id": "0036A00000AbbeOQAR",
        "FirstName": "Andrew",
        "LastName": "Mayzak",
        "hed__AlternateEmail__c": "4541474710648498@destinysolutions.com",
        "original": { "FirstName": "Andrew", "Email": "aramirez@destinysolutions.com" }
      },
      {
        "Id": "0036A00000Vatj6QAB",
        "FirstName": "Andrew",
        "LastName": "Ramirez",
        "hed__AlternateEmail__c": "aramirez@destinsolutions.com",
        "original": { "FirstName": "Andrew", "Email": "aramirez@destinysolutions.com" }
      }
    ]
  }
]

with the Dashboard view demonstrating that documents are merged correctly. A request sent to an Ultra Task based on the pipeline times out, with the Dashboard indicating that the Script Snap never returns. I suspect that the issue is that document lineage is not being maintained by the Script Snap.
However, I don’t know how to maintain document lineage and also get the Snap to output a single document with an array containing the results of the Salesforce Read Snap. Can someone tell me how to alter this pipeline to make it Ultra-compatible while still returning a single output document containing the results of the Salesforce Read Snap?
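One way to think about the required output shape, independent of the Script Snap's actual API (which this sketch deliberately does not use): rather than emitting a brand-new document, keep the fields of the original request document and attach the accumulated Read results to it, so the single output still corresponds to the triggering input. In plain Python, the merge logic amounts to:

```python
def attach_results(request_doc: dict, results) -> dict:
    """Return a copy of the original request document with all
    accumulated Read results attached as an array, so the single
    output document still corresponds to the triggering input."""
    out = dict(request_doc)          # shallow copy; keep request fields
    out["results"] = list(results)   # every document the Read emitted
    return out

merged = attach_results({"path_info": "/1001"},
                        [{"Id": "0036A00000Vatj6QAB"},
                         {"Id": "0036A00000AbbdpQAB"}])
```

How this maps onto the Script Snap's document-handling hooks (and whether it satisfies Ultra's lineage check) would still need to be verified against the SnapLogic Script Snap documentation.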
How to automatically disable and enable Ultra Tasks through APIs

Hello,

We have a requirement to automatically recycle Ultra Task threads every day at a certain time. Are there any APIs to disable and enable an Ultra Task? Or is there any alternative way to recycle threads when an Ultra Task is hung?

Thank you in advance.
Harish
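SnapLogic does expose public REST APIs for managing assets, so a scheduled script toggling the task off and back on is plausible; the exact endpoint path and payload field below are assumptions that must be verified against the SnapLogic API documentation. Only the request shape is sketched:

```python
import json
import urllib.request

BASE = "https://elastic.snaplogic.com"  # your pod's URL (assumption)

def set_task_enabled(task_path: str, enabled: bool, token: str) -> urllib.request.Request:
    """Build a request that toggles a task on or off. The endpoint
    path and the 'enabled' field are hypothetical; check the
    SnapLogic public API docs for the real ones."""
    return urllib.request.Request(
        BASE + task_path,
        data=json.dumps({"enabled": enabled}).encode(),
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
        method="PUT",
    )

# Hypothetical task path; a scheduler would call this, wait, then
# build the matching enabled=True request to bring the task back.
req = set_task_enabled("/api/1/rest/.../my-ultra-task", False, "my-token")
```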
I wanted to create multiple arrays from a single array based on a specific field

I have the below input:

"AT_Pricehash": [
  { "@TermsListID": "Test1", "ORGID": "1000", "@MinimumQuantity": 1 },
  { "@TermsListID": "Test2", "ORGID": "1001", "@MinimumQuantity": 1 },
  { "@TermsListID": "Test3", "ORGID": "1000", "@MinimumQuantity": "1" }
]

I want to generate two arrays based on the ORGID; the expected output is below:

"1000Array": [
  { "@TermsListID": "Test1", "ORGID": "1000", "@MinimumQuantity": 1 },
  { "@TermsListID": "Test3", "ORGID": "1000", "@MinimumQuantity": "1" }
]
"1001Array": [
  { "@TermsListID": "Test2", "ORGID": "1001", "@MinimumQuantity": 1 }
]

Since this requirement is part of an Ultra pipeline, I cannot use the straightforward way, i.e. using the Split, Sort, and Group By Fields Snaps to generate arrays based on ORGID. We could use a child pipeline, but I would like to see if we can get this done directly in the same pipeline. Is there any way to achieve this, such as a direct expression or an expression library? Any help would be appreciated.
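The grouping itself is a single pass over the array. Sketched in Python to show the logic (the function name is mine); a Mapper expression using the SnapLogic expression language's array methods could express the same idea, but that mapping should be checked against the expression-language docs:

```python
def group_by_org(items, key="ORGID"):
    """Split one array into named arrays keyed by ORGID value,
    producing e.g. {"1000Array": [...], "1001Array": [...]}."""
    groups = {}
    for item in items:
        groups.setdefault(item[key] + "Array", []).append(item)
    return groups

pricehash = [
    {"@TermsListID": "Test1", "ORGID": "1000", "@MinimumQuantity": 1},
    {"@TermsListID": "Test2", "ORGID": "1001", "@MinimumQuantity": 1},
    {"@TermsListID": "Test3", "ORGID": "1000", "@MinimumQuantity": "1"},
]
result = group_by_org(pricehash)
```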