The SnapLogic Mainframe Accelerator is a collection of fully functional pipelines that read and write COBOL Copybooks, DB2, and file systems. It is now available as a free download from the AWS Marketplace. Learn more in this video.
The question was, "Can SnapLogic move 30 terabytes of data from Oracle to Redshift, and how long would it take?" The question came from a current customer who asked for performance numbers. To fulfill this request, we needed to spin up an Enterpr...
Created by @rdill
This pipeline pattern allows users to respond to a REST POST event triggered by a third-party HR solution (e.g., Jobvite, Glassdoor, LinkedIn) and insert new employee data into Workday.
Configuration
This pipeline require...
Created by @rdill
This pipeline pattern is ideal for organizations that require a complete view of their employees. It runs a scheduled, periodic update of employee data from Oracle into Workday. The pipeline will ...
Contributed by @rdill
This pattern reads an Excel file that contains the necessary attributes to create a user in an existing SnapLogic org, with the Mapper Snap defining the user and setting permissions.
Configuration
The user who runs this pipel...
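Outside of SnapLogic, the mapping step in this pattern (spreadsheet row in, user-creation payload out) can be sketched roughly as below. The column names and payload fields are hypothetical stand-ins, and CSV is used in place of Excel to keep the sketch self-contained:

```python
import csv
import io

# Hypothetical columns; a real file would follow the org's Excel template.
SAMPLE = """email,first_name,last_name,role
jdoe@example.com,Jane,Doe,admin
bsmith@example.com,Bob,Smith,member
"""

def rows_to_users(text):
    """Turn each spreadsheet row into a user-creation payload,
    mirroring what the Mapper Snap does in the pattern."""
    return [
        {
            "userId": row["email"],
            "displayName": f'{row["first_name"]} {row["last_name"]}',
            # Permission flag derived from the (assumed) role column.
            "isAdmin": row["role"] == "admin",
        }
        for row in csv.DictReader(io.StringIO(text))
    ]
```

In the actual pattern an Excel Parser Snap produces the rows and the Mapper Snap holds this field-by-field logic.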
If this is a valid copybook file, we have a COBOL Copybook parser that can do this with a single Snap. Ref: https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/2386591995/Copybook+Snap+Pack. Other Snaps of interest would be the transcoder and fixe...
Also, confirm that the source does not impose API limits. Some endpoints throttle traffic, capping the number of calls per minute or the rows of data per 24 hours. Make sure the source is not silently ignoring your requests.
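When an endpoint does throttle, retrying with exponential backoff is the usual defense. A minimal sketch, where `RateLimitError` and `fetch` are placeholders for whatever error and client call your endpoint actually uses:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for whatever throttling signal your API client raises."""

def call_with_backoff(fetch, max_retries=5, base_delay=1.0):
    """Retry `fetch` on throttling, waiting 1s, 2s, 4s, ... plus jitter."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except RateLimitError:
            # Exponential backoff with random jitter to avoid thundering herds.
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)
    raise RuntimeError(f"source still throttling after {max_retries} retries")
```

If the call keeps failing after the retries, that is your evidence the source is rate-limiting you rather than the pipeline being at fault.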
You want to use the HTTP Client Snap for this. It handles pagination more easily than the REST Snap Pack and should be used instead of the REST Snaps for all new development. Ref: https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/2614591489/HTT...
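The pagination the HTTP Client Snap automates boils down to following a cursor until the endpoint stops returning one. A hedged sketch of that loop, where `get_page(cursor)` stands in for one HTTP call and is assumed to return `(records, next_cursor)` with `None` at the end:

```python
def fetch_all(get_page):
    """Drain a cursor-paginated endpoint by following next-page tokens."""
    records, cursor = [], None
    while True:
        page, cursor = get_page(cursor)
        records.extend(page)
        if cursor is None:  # no next-page token: we have everything
            return records
```

In the Snap you express the same idea declaratively: a "has next" condition and a "next cursor" expression, and the Snap runs this loop for you.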
Instead of doing it in a single pipeline, use multiple pipelines and partition the reads. Depending on the API's performance, you could spin up 10 or 20 pipelines, have them read at the same time, and finish much faster.
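The partitioning idea can be sketched as plain concurrent range reads, where `read_range(start, end)` is a placeholder for the slice one child pipeline would fetch:

```python
from concurrent.futures import ThreadPoolExecutor

def partitioned_read(read_range, total_rows, partitions):
    """Split [0, total_rows) into equal slices and read them concurrently.

    Each slice corresponds to the range one pipeline instance would read.
    """
    size = -(-total_rows // partitions)  # ceiling division: rows per slice
    bounds = [(i, min(i + size, total_rows)) for i in range(0, total_rows, size)]
    with ThreadPoolExecutor(max_workers=partitions) as pool:
        chunks = pool.map(lambda b: read_range(*b), bounds)
    return [row for chunk in chunks for row in chunk]
```

In SnapLogic the equivalent is a parent pipeline passing each (start, end) pair to a child pipeline via Pipeline Execute; the key is that the source query or API request must accept a range filter so the slices do not overlap.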
Follow these steps to learn more:
In the Dashboard, find the instance of the pipeline that failed and click the pipeline name in the first column. This loads it into the Designer. If the pipeline has been altered since it ran, you may see a warnin...