Redshift Bulk Load on Cloudplex
We have a Redshift cluster that we would like to load data into. Using the Redshift - Insert Snap is simple, but prohibitively slow, and it isn't the recommended way to load data into Redshift. We would like to use the Redshift - Bulk Load Snap, but we are running into a few issues when setting up the Redshift Account in SnapLogic. Our understanding is that using IAM Roles for authentication is NOT possible on a Cloudplex. Is this true? If so, this is a huge issue. If we can't use an IAM Role for authentication, the only other option is an AWS Access Key with its secret and token. The main issue with this approach is that the tokens are temporary and last a few hours at most. How can we use an AWS Access Key with its secret and token without having to refresh the token every 15 minutes? This doesn't seem workable. Any help would be great. Thanks!
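For context on why that token keeps expiring: the access key / secret / token triple is a set of temporary AWS STS session credentials, and STS caps their lifetime. A minimal sketch of how such credentials are minted, assuming boto3 and long-lived IAM user credentials already configured; nothing here reflects SnapLogic internals:

```python
# Sketch of how a temporary access-key/secret/token triple is minted via AWS
# STS. Assumes boto3 is installed and long-lived IAM user credentials are
# already configured; nothing here reflects SnapLogic internals.
import boto3

sts = boto3.client("sts")

# DurationSeconds is capped by STS (for get_session_token with IAM user
# credentials, at most 129600 seconds / 36 hours).
resp = sts.get_session_token(DurationSeconds=3600)

creds = resp["Credentials"]
print(creds["AccessKeyId"])      # the "AWS Access Key" value
print(creds["SecretAccessKey"])  # the "secret" value
print(creds["SessionToken"])     # the "token" value
print(creds["Expiration"])       # when all three stop working
```

Whatever lifetime is requested, the credentials carry a hard expiration, which is why an account configured with them needs periodic refreshing unless a role-based mechanism is available.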
Oracle to Redshift - Dynamic Metadata
Created by Chris Ward, SnapLogic

This Pipeline replicates table data obtained from an Oracle database and upserts the data into Redshift. First, the parent Pipeline reads the source and target schemas & tables, plus a where clause used to filter the source data, from an expression file, "oracle_to_redshift__c.expr" (modify this file to correspond with the tables & schemas you are looking to target). It then obtains the primary key constraints for each target table in Redshift. The Pipeline then constructs an array of documents for up to 10 key columns that will be used to determine the unicity of the data when it is upserted into Redshift (a stand-alone sketch of this metadata lookup follows after the Downloads list). Each document is then passed into the child Pipeline, which reads the data from the source Oracle table and upserts it into Redshift. A Router Snap is used between the Oracle read and the Redshift upsert to achieve concurrency for large data volumes. The pattern used within this Pipeline could potentially be reworked to operate with different source and target databases, e.g. SQL Server to Google BigQuery.

Configuration: To make this Pipeline reusable, use the expression file to store the source and target tables & schemas.
Sources: Oracle table
Targets: Redshift table
Snaps used: Mapper, JSON Splitter, Redshift Execute, Group By Fields, Pipeline Execute, Oracle Select, Router, Redshift Upsert
Downloads:
Oracle to Redshift - Dynamic Metadata.slp (15.8 KB)
Oracle_to_Redshift_02.slp (11.3 KB)
oracle_to_redshift_c.expr (3.2 KB)
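For readers adapting the pattern, the primary-key lookup it performs against Redshift can be reproduced stand-alone. A rough sketch assuming psycopg2 and placeholder connection details; note that Redshift primary keys are informational only, so this relies on them actually being declared on the target tables:

```python
# Rough stand-alone version of the primary-key lookup, assuming psycopg2.
# Connection details, schema, and table are placeholders. Redshift primary
# keys are informational only, so the target tables must actually declare them.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.example.us-west-2.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="etl_user", password="change-me",
)
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT kcu.column_name
        FROM information_schema.table_constraints tc
        JOIN information_schema.key_column_usage kcu
          ON tc.constraint_name = kcu.constraint_name
        WHERE tc.constraint_type = 'PRIMARY KEY'
          AND tc.table_schema = %s
          AND tc.table_name = %s
        ORDER BY kcu.ordinal_position
        """,
        ("target_schema", "target_table"),
    )
    # These become the key columns driving the upsert's unicity check;
    # the pattern caps them at 10.
    key_columns = [row[0] for row in cur.fetchall()][:10]
conn.close()
print(key_columns)
```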
Purple to Redshift-WiFi Analytics
Created by @asharifian, SnapLogic

This pattern integrates Purple with Redshift. Purple is a WiFi analytics application that builds detailed visitor profiles and tracks visitors at your venues, such as conferences, workshops, and gatherings. At Purple, all of the collected data is stored within a centralized, enterprise-class reporting suite, available for analysis. This pattern retrieves the analytics data from Purple and bulk upserts it into Redshift for reporting and additional data analytics purposes.

Configuration: A child Pipeline is used to generate the HMAC SHA256 key required by the Purple APIs (a stand-alone sketch follows after the Downloads list).
Sources: Purple venues and visitors
Targets: Redshift database tables that store the venue and visitor data
Snaps used: Mapper, Pipeline Execute, REST Get, JSON Splitter, Copy, Redshift Bulk Upsert, Script, Join
Downloads:
Purple to Redshift-WiFi Analytics.slp (21.4 KB)
HMAC_SHA256.slp (18.5 KB)
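For reference, what the HMAC_SHA256 child Pipeline produces can be expressed in a few lines of Python. The string-to-sign below is a placeholder; Purple's API documentation defines the real components:

```python
# Minimal Python equivalent of what the HMAC_SHA256 child Pipeline produces.
# The string_to_sign below is a placeholder; Purple's API documentation
# defines the real components (method, date, request path, and so on).
import base64
import hashlib
import hmac

private_key = "your-purple-private-key"  # placeholder
string_to_sign = "GET\n/api/company/v1/venues\nMon, 01 Jan 2024 00:00:00 GMT"

digest = hmac.new(
    private_key.encode("utf-8"),
    string_to_sign.encode("utf-8"),
    hashlib.sha256,
).digest()

signature = base64.b64encode(digest).decode("ascii")
print(signature)  # sent with the request, typically in an Authorization header
```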
Snaplogic Redshift Copy Command
Hi Team, when I read data from a file/CSV and do a bulk load into Redshift, I know that in the backend it copies the data to perform the bulk load. I need to know the full set of options the SnapLogic code used for the Redshift COPY command in its backend. Where can I get those details? Thanks in advance. Makesh
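The options SnapLogic itself passes are exactly what this question asks, and are not reproduced here. For orientation only, a representative Redshift COPY of the kind a bulk-load step issues after staging a CSV to S3 looks like the following; every name and option is a placeholder, not SnapLogic's actual command:

```python
# NOT SnapLogic's actual command -- a representative Redshift COPY of the
# kind a bulk-load step issues after staging a CSV to S3. Every name and
# option here is a placeholder.
copy_sql = """
COPY my_schema.my_table
FROM 's3://my-staging-bucket/my-prefix/part-'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
FORMAT AS CSV
IGNOREHEADER 1
TIMEFORMAT 'auto'
REGION 'us-west-2';
"""

# Runnable through any Postgres-compatible driver, e.g. with psycopg2:
#   with conn.cursor() as cur:
#       cur.execute(copy_sql)
print(copy_sql)
```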
Redshift bulk load
Hi team, looking for a suggestion/enhancement to achieve the scenario below. I read a CSV file that may contain \r\n (Windows) and \n (Unix) line endings. How do I fix this? Today, when I read the file and do the bulk load using the Redshift Bulk Load Snap, the data gets loaded with the \r\n intact. How can I escape these characters? When I look at the properties of the Snap, I can see the settings below, but they are not working.
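One workaround, if the Snap's own properties don't strip the characters, is to normalize line endings before the bulk load. A minimal sketch with placeholder file paths:

```python
# One workaround outside the Snap: normalize line endings in the CSV before
# the bulk load so no stray \r reaches Redshift. File paths are placeholders.
# Note this also rewrites any \r\n that appears inside quoted field values.
with open("input.csv", "rb") as src:
    data = src.read()

# Collapse \r\n (Windows) to \n (Unix); handle a lone \r as well.
data = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

with open("output.csv", "wb") as dst:
    dst.write(data)
```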
Calling a Redshift Stored procedure in Snaplogic
Hi All, has anyone been able to call a Redshift stored procedure using SnapLogic? A couple of items I read on the community mentioned that it was not possible and that there is an enhancement request for it. If anyone has an idea of how to achieve this, please let me know in the comments. Thanks and Regards, Anubhav
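If the Snap itself cannot issue CALL, one workaround is to invoke the procedure through any Postgres-compatible driver, for example from a Script Snap or an external job. A sketch assuming psycopg2, with placeholder connection details and procedure name:

```python
# Workaround sketch: invoke the procedure through a Postgres-compatible
# driver. Connection details and the procedure name are placeholders.
# Redshift invokes stored procedures with CALL.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.example.us-west-2.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="etl_user", password="change-me",
)
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("CALL my_schema.my_stored_proc(%s, %s);", ("arg1", 42))
conn.close()
```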
How to Grant Access to SnapLogic's IPs for Amazon Redshift
In order for SnapLogic's Snaplex application to be able to read and write data to your Amazon Redshift cluster, you need to allow external connections from the SnapLogic IP addresses for inbound access.

1. Open the Clusters page within the Amazon Redshift console and choose the cluster you wish to integrate with. Click the "Properties" tab.
2. In the "Network and security" section, click the "Edit publicly accessible" button and change it to "Enabled" (if it is not already). Click the link to the security group for the cluster.
3. Under the "Inbound rules" tab, click the "Edit inbound rules" button and add the following entries, clicking the "Add Rule" button after each one (you may customize the Description column value as you see fit):

Type     | Protocol | Port | Source
Redshift | TCP      | 5439 | 52.11.8.103/32
Redshift | TCP      | 5439 | 34.208.181.167/32
Redshift | TCP      | 5439 | 52.10.35.99/32
Redshift | TCP      | 5439 | 52.36.97.11/32
Redshift | TCP      | 5439 | 34.208.230.181/32
Redshift | TCP      | 5439 | 34.209.24.34/32

SnapLogic should now be able to connect to your Redshift cluster directly. If you experience any issues, follow the Amazon Redshift documentation for assistance, or contact us at trial-support@snaplogic.com.
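The same rules can also be added programmatically. A sketch assuming boto3, with a placeholder security-group ID:

```python
# The same six inbound rules added programmatically. Assumes boto3; the
# security-group ID is a placeholder for the cluster's actual group.
import boto3

SNAPLOGIC_IPS = [
    "52.11.8.103/32", "34.208.181.167/32", "52.10.35.99/32",
    "52.36.97.11/32", "34.208.230.181/32", "34.209.24.34/32",
]

ec2 = boto3.client("ec2", region_name="us-west-2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5439,
        "ToPort": 5439,
        "IpRanges": [
            {"CidrIp": ip, "Description": "SnapLogic"} for ip in SNAPLOGIC_IPS
        ],
    }],
)
```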
Unable to execute Redshift Script with comments in it using Redshift Execute
Hi All, I am using Redshift Execute to run a Redshift script. I am able to run the script when every Redshift query is enclosed in double quotes, but the script fails when I add comments to it. I would like to know how to write comments in a Redshift script so that they are accepted by the Redshift Execute Snap.
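For reference, Redshift itself accepts standard SQL comment syntax (-- to end of line, and /* */ blocks), so if a commented script is rejected, the likely culprit is how the Snap splits the script into statements. A sketch of both styles, with placeholder table names:

```python
# Redshift accepts standard SQL comments: "--" to end of line and "/* */"
# blocks. If a commented script fails in the Snap, the statement-splitting
# step is the more likely culprit. Table names are placeholders.
script = """
-- single-line comment: refresh the staging table
TRUNCATE TABLE my_schema.stage_orders;

/* block comment:
   reload the staging table from the raw table */
INSERT INTO my_schema.stage_orders
SELECT * FROM my_schema.raw_orders;
"""
print(script)
```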