Creating APIs with SnapLogic Pipelines and Ultra Tasks

Overview

API (Application Programming Interface) is an old concept repurposed to mean a modern web service based on the REST architectural style, increasingly using the JSON data format. These modern APIs have become the lingua franca of the digital economy, facilitating lightweight, performant communication between applications across an enterprise or across the internet.

Typically, RESTful APIs perform operations on “resources” (Customers, Orders, People, etc.). By convention, the type of operation is identified using the most common HTTP verbs: POST (create), GET (read), PUT (update), and DELETE (delete).

SnapLogic provides Ultra Tasks as the means by which a Pipeline can be exposed as a secure, high-availability, low-latency, sub-second request/response API.

For example, the following is a Pipeline that embodies a Customer API exposed using an Ultra Task:

Once the Ultra Task is enabled, the associated Pipeline stays resident in memory on the Snaplex node(s) it was configured to execute on. The number of instances can be configured to accommodate the expected concurrent API request volume.

The API can then be called securely from an external application (Postman REST client in this case):

This is an example of an API GET (read) request for a specific Customer identified by the ID “1001”.
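The same GET request can be issued programmatically from any HTTP client. The following is a minimal sketch using the Python standard library; the endpoint URL and bearer token are hypothetical placeholders, not real values from the example:

```python
import urllib.request

# Hypothetical Ultra Task endpoint and bearer token -- substitute your own.
BASE_URL = "https://example.snaplogic.io/api/1/rest/feed/ExampleOrg/CustomerAPI/CustomerUltraTask"
BEARER_TOKEN = "example-token"

def build_customer_request(customer_id):
    """Build a GET request for a single Customer resource."""
    return urllib.request.Request(
        f"{BASE_URL}/{customer_id}",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        method="GET",
    )

req = build_customer_request("1001")
# urllib.request.urlopen(req) would send the request and return the response.
print(req.get_full_url())
```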

Designing the Pipeline

Ultra Tasks deliver the components of the HTTP request message to the associated Pipeline as fields in the input JSON document:

  • content: The request body for POST or PUT requests
  • headers: The HTTP request headers, with header names lower-cased. For example, the ‘User-Agent’ HTTP header can be referenced in the input document as $headers['user-agent']
  • uri: The original URI of the request.
  • method: The HTTP request method.
  • query: The parsed version of the query string. The value of this field is an object whose fields correspond to query string parameters, with each parameter mapped to a list of all the values supplied for it. For example, the following query string:

    foo=bar&foo=baz&one=1

    will result in a query object that looks like:

    {
      "foo": ["bar", "baz"],
      "one": ["1"]
    }
  • task_name: The name of the Ultra task.
  • path_info: The part of the path after the Ultra task URL.
  • server_ip: The IP address of the feed-master that received the request.
  • server_port: The TCP port of the feed-master that received the request.
  • client_ip: The IP address of the client that sent the request.
  • client_port: The TCP port of the client that sent the request.
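To make the field list concrete, the following sketch simulates the input document a Pipeline might receive for the example GET request above. The field values are illustrative assumptions, and the query parsing mirrors the parameter-to-list-of-values behavior described above:

```python
from urllib.parse import parse_qs

# Parse a query string the same way the Ultra Task does: each parameter
# maps to a list of all the values supplied for it.
query = parse_qs("foo=bar&foo=baz&one=1")
# query == {"foo": ["bar", "baz"], "one": ["1"]}

# An illustrative input document (all values are made up for this sketch).
input_document = {
    "content": None,                                  # no body on a GET request
    "headers": {"user-agent": "PostmanRuntime/7.26.8"},
    "uri": "/feed/ExampleOrg/CustomerAPI/CustomerUltraTask/1001?foo=bar&foo=baz&one=1",
    "method": "GET",
    "query": query,
    "task_name": "CustomerUltraTask",
    "path_info": "/1001",
    "server_ip": "10.0.0.5",
    "server_port": 8084,
    "client_ip": "192.0.2.10",
    "client_port": 52431,
}
```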

In the above Customer API example:

Additional Reference


http://doc.snaplogic.com/ultra-tasks


Thanks for this post - it is helpful and aligns with one of the primary uses of SnapLogic for us at Davidson College. I have a few questions…

  1. Is the Ultra Task necessary to meet the general requirements of creating an API resource/endpoint, or could we use a Triggered Task? In other words, does the Ultra Task only address the “low latency” characteristic?

  2. One bearer token per task / endpoint is a bit cumbersome when considering building out an API-based architecture. For example, we want to provide access to multiple resources / endpoints for a single application or consumer via a single bearer token. Is there a tweak to your design pattern which would accommodate this?

Thanks again for this post. I hope to hear back!
~Nick

  1. Ultra Tasks allow for low latency; they also allow data processing to continue when communication with the control plane is broken for some reason, providing higher availability.

  2. Using JSON Web Tokens (through the use of the JWT Snaps in the pipeline) allows for more control over the tokens. Tokens can be configured with an expiration time, there can be multiple tokens per task, a single token can be shared across tasks in a project or project space, etc.
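For readers unfamiliar with how a JWT carries an expiration time, the following standalone Python sketch builds and checks an HS256 token using only the standard library. It illustrates the token format the JWT Snaps work with; it is not the Snaps' implementation, and the secret and claims are made-up values:

```python
import base64, hashlib, hmac, json, time

SECRET = b"example-shared-secret"  # made-up value for this sketch

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def generate_jwt(claims: dict) -> str:
    """Build a signed HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(SECRET, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def validate_jwt(token: str) -> dict:
    """Verify the signature and the 'exp' claim; raise ValueError on failure."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = hmac.new(SECRET, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    pad = "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload + pad))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims

token = generate_jwt({"sub": "customer-api-client", "exp": int(time.time()) + 3600})
print(validate_jwt(token)["sub"])  # prints the subject claim if valid
```

A single secret validates any token it signed, so one consumer can hold one token that is accepted by every task sharing that secret, which addresses the one-token-per-task concern.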

Thank you…

Do we have sample pipelines to demonstrate “2”?

The documentation pages for the Snaps have sample pipelines; see JWT Generate and JWT Validate.