
Ultra Pipeline Functionality

parvathy_r
New Contributor II

One of our customers would like to use the Ultra Pipeline functionality as a REST endpoint. In this regard, we would like to get some confirmations:

  1.    What are the limitations on the TPS it can handle? How do we scale, e.g., by increasing Feed Servers and GroundPlex nodes?
    
  2.   If we design our pipelines and data storage well, what kind of SLA can we commit to? We would like to understand the time required for instantiating and running a pipeline.
    
  3.   Is there any mechanism to throttle requests?
    
  4.   If the number of requests is higher, will they queue at the Feed Server level, or what will be returned?
    
  5.   How do we get URL mapping? Have you got some thoughts from existing clients?
    
  6.   What kind of security mechanisms can be configured for authentication and the handshake?
    
    

Currently, only a bearer token is provided.


tstack
Former Employee

I’ll try to answer your questions:

  1. The transaction rate really depends on what you’re going to be doing in the pipeline. If you are contacting local database/REST servers as part of handling requests, you can probably sustain a pretty high rate. But, if the pipeline needs to talk to cloud services, you’ll be bounded by the amount of time it takes to communicate over the internet, which can be quite high.
    There is, of course, going to be overhead incurred by the ultra pipeline infrastructure. The request message is written to disk so that we can replay it in case of a node failure in the GroundPlex. There is an extra hop from the Feed-Master to the GroundPlex node so that we can scale by adding more nodes and be resilient to a GroundPlex node failing. The overhead is relatively constant, though, so it should work if you can tolerate that cost.

  2. Ultra Tasks are intended to be resilient to pipeline and whole GroundPlex node failures. The pipeline instances that will be handling requests are spread across nodes, so that if one goes down the other instances will continue processing. The pipeline instances on those failed nodes will then be respawned on other nodes to get you back to capacity.
    As you mention, pipeline design does play a part. The ultra pipeline instances should be able to survive an outage of the SnapLogic cloud servers, but if your pipeline is reading from or writing to SLFS, you’ll experience problems.

  3. Not really. You will need a load balancer to spread load across the Feed-Masters, so we would recommend setting up throttling there when possible. The Feed-Master will start rejecting requests if its space for storing requests fills up.
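Since the platform itself has no built-in throttling, another option besides load-balancer rules is a small limiter on the client side. A token-bucket sketch (the rate and burst values are illustrative assumptions, not SnapLogic defaults):

```python
import time

class TokenBucket:
    """Client-side rate limiter: allow at most `rate` requests/second,
    with short bursts up to `burst` requests."""

    def __init__(self, rate, burst):
        self.rate = rate          # tokens added per second
        self.capacity = burst     # maximum stored tokens
        self.tokens = burst       # start full
        self.stamp = time.monotonic()

    def allow(self):
        """Return True if a request may be sent now, False if it should wait."""
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A caller would check `bucket.allow()` before each POST to the Feed-Master and sleep briefly when it returns False, keeping the request rate below whatever the load balancer (or the Feed-Master's queue) can absorb.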

  4. Requests will queue up on the Feed-Master until they can be serviced by one of the ultra pipeline instances. A 503 will be returned by the Feed-Master if it has no more space for requests.
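Because the Feed-Master answers 503 when its request store is full, a client can treat that status as "retry later." A minimal sketch using only the standard library (the URL, token, and backoff timings are assumptions, not SnapLogic specifics):

```python
import time
import urllib.error
import urllib.request

# Hypothetical Ultra task URL and bearer token -- substitute your own values.
ULTRA_URL = "https://example.snaplogic.io/api/1/rest/feed/MyOrg/proj/my_task"
TOKEN = "my-bearer-token"

def backoff_delays(base=0.5, factor=2, retries=4):
    """Exponential backoff schedule, e.g. [0.5, 1.0, 2.0, 4.0] seconds."""
    return [base * factor ** i for i in range(retries)]

def post_with_retry(body, retries=4):
    """POST to the Feed-Master, retrying only when it answers 503 (queue full)."""
    req = urllib.request.Request(
        ULTRA_URL,
        data=body,
        headers={"Authorization": "Bearer " + TOKEN,
                 "Content-Type": "application/json"},
        method="POST",
    )
    for delay in backoff_delays(retries=retries):
        try:
            return urllib.request.urlopen(req)
        except urllib.error.HTTPError as err:
            if err.code != 503:
                raise           # other errors are not queue-full; don't retry
            time.sleep(delay)   # back off and try again
    return urllib.request.urlopen(req)  # final attempt; errors propagate
```

The backoff keeps a burst of rejected requests from hammering a Feed-Master that is already out of queue space.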

  5. I’m not quite sure what you mean here. The document sent into the pipeline contains almost everything from the original request, so you should be able to do what you want.

  6. The pipeline should be able to do all aspects of request handling, including authentication. For example, you can parse the ‘Authorization’ header and check against a DB to see if it’s valid, responding with a 401 if it’s not or passing the doc down the rest of the pipeline to do the real handling.
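As a rough illustration of that in-pipeline check (e.g., the logic a Script snap might run; the token set here stands in for the DB lookup and is purely an assumption):

```python
# Stand-in for the database of valid tokens mentioned above.
VALID_TOKENS = {"secret-token-1"}

def authorize(doc):
    """Return a (status, body) pair that the pipeline would map onto the
    HTTP response: 401 for a bad token, or 200 plus the original document
    to pass down the rest of the pipeline."""
    header = doc.get("Authorization", "")
    if not header.startswith("Bearer "):
        return 401, {"error": "missing bearer token"}
    token = header[len("Bearer "):]
    if token not in VALID_TOKENS:
        return 401, {"error": "invalid token"}
    return 200, doc  # authenticated: continue with the real handling
```

In a real pipeline the same branching would typically be done with a Router snap and a DB lookup rather than inline code, but the shape of the decision is the same.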

parvathy_r
New Contributor II

Thanks for the answers…

We want to provide a REST interface to our data using ultra pipelines. For example, an application will ask for all products in a category, or for details about a specific customer ID that is sent, and the pipeline has to respond with the required details. Do you think that will work as we intend? Have any of your clients used ultra pipelines to support this kind of requirement?

Bhavin
Former Employee

Adding to what Tim has said: yes, you can use Ultra for such requirements. Not only that, you can also create a Swagger, RAML, or API Blueprints based spec and make it available to internal and external consumers. For example, here you’ll find a SnapLogic pipeline exposed as API Blueprints: Order API · Apiary. I used Apiary (https://apiary.io/) to create this spec; I could also do something similar using swagger.io - Build, Collaborate & Integrate APIs | SwaggerHub.
Both of these specs use ultra pipelines under the hood.

parvathy_r
New Contributor II

Hi,
Thanks for your inputs. We have a simple pipeline that POSTs data to an ultra pipeline and waits for the response. But it is not completing: it waits for a response from the ultra pipeline, which is always running, and the first pipeline fails at timeout without getting the actual response. We can see that the ultra pipeline does the task, but it doesn’t send back any response. It would be helpful for us, as a starting point, if you could share your ultra pipeline .slp files.
Regards,
Parvathy