Re: Using Github as a code repository for SnapLogic artifacts

Mike/Sudhendu, glad that you found it useful; these pipelines can easily be invoked via a Jenkins/TeamCity pipeline. Before we jump into Jenkins/TeamCity or any CI-CD tool chain, we need to understand what kind of "artifacts" SnapLogic generates. At a high level you have a SnapLogic project that resides in an ORG/SPACE/PROJECT hierarchy: ORG is the tenant, a space could be mapped to an org unit (for example BI, DEV, or any project name), and within each space you can have more than one project.

When it comes to artifacts you have:
- Pipelines
- Jobs (Scheduled, Triggered, Ultra)
- Accounts
- Files (could be anything - xml, json, script files, xslt, csv and so on - which your pipelines depend upon)

Pipelines are the actual workhorse and are invoked via jobs. For lack of a better term, pipelines are interpreted by the execution engine (JCC aka node), so there is not much to "build". With this architecture a typical CI-CD workflow may not directly fit; nevertheless I have seen customers who still want to leverage their favorite CI/CD tool chain to automate as much as they can, with tasks like:
- Sharing SnapLogic assets via GitHub
- Promoting projects from one env to another
- Using Jenkins as the proverbial "rug" that ties the room together :)

Here is an actual implementation: we invoke SnapLogic pipelines (triggered tasks) from a Jenkins job as an HTTP call.

Tools required:
- Jenkins v2.28
- Jenkins Http_request plugin (HTTP Request)
- Access to the GitHub REST API - GitHub REST API - GitHub Docs
- Code-migrate pipelines - attached

Pipeline design and things to know:
There are two projects involved:
- Project_Promotion = utilizes the Meta Snap pack to promote assets within the same and across different environments.
- Github_Integration = utilizes the Meta Snap pack and the REST Snap pack to promote assets within the same and across different environments.

We are utilizing the GitHub REST APIs, and they are invoked via basic_auth, i.e. your GitHub login account.

Pipelines: Project_Promotion has "01 Main - Migrate Project" exposed as a triggered task; it calls the rest of the pipelines via Pipeline Execute.

Pipeline parameters:
source_proj, target_proj, include_account, target_org, source_space, include_pipeline, update_task, target_space, update_account, include_task, source_org

Example values:
account_org = ConnectFasterInc
account_space = LCM
account_proj = Artifacts
source_org = ConnectFasterInc
source_space = LCM
source_proj = DEV
target_org = ConnectFasterPOC
target_space = BK
target_proj = PROD
GH_Owner = snapsrepo or your github account username
GH_Repo = reponame ex: cicd-demo
GH_Source_Path = relative path to repo ex: BK/DEV (case sensitive)
include_pipeline = true or false
include_account = true or false
include_task = true or false
update_account = true or false

The finished product is a Jenkins pipeline that uses the HTTP Request plugin to invoke SnapLogic pipelines that read/write to GitHub and also promote projects from one env to another in SnapLogic. Using pipeline params we decouple the source and target locations, along with which SnapLogic artifacts to "include", at Jenkins job invocation time.

CICD-SnapLogic-Projects.zip (31.2 KB)
snaplogic-jenkins.docx (344.9 KB)
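If you want to see the shape of that HTTP call outside of Jenkins, here is a minimal sketch in Python. It is an illustration only: the task URL and bearer token are placeholders you would copy from the triggered task's details in Manager, and the parameter values simply reuse the example values above.

import requests

# Placeholder URL - copy the real one for "01 Main - Migrate Project" from Manager
TASK_URL = "https://elastic.snaplogic.com/api/1/rest/slsched/feed/<ORG>/<SPACE>/<PROJECT>/01%20Main%20-%20Migrate%20Project"
BEARER_TOKEN = "<task-bearer-token>"  # placeholder

# Pipeline parameters are passed as query-string parameters on the task URL
params = {
    "source_org": "ConnectFasterInc",
    "source_space": "LCM",
    "source_proj": "DEV",
    "target_org": "ConnectFasterPOC",
    "target_space": "BK",
    "target_proj": "PROD",
    "include_pipeline": "true",
    "include_account": "true",
    "include_task": "true",
    "update_account": "true",
    "update_task": "true",
}

resp = requests.get(
    TASK_URL,
    params=params,
    headers={"Authorization": "Bearer " + BEARER_TOKEN},
    timeout=300,
)
resp.raise_for_status()
print(resp.status_code, resp.text)

The Jenkins http_request step makes the same call; Jenkins just adds the scheduling, credentials handling and the "rug" around it.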
Re: Using Github as a code repository for SnapLogic artifacts

Integration with GitHub is achieved via the GitHub REST API - GitHub REST API - GitHub Docs. From a design perspective this is how it works:
- Create a SnapLogic pipeline that uses the Meta Snaps - https://doc.snaplogic.com/wiki/display/SD/SnapLogic+Metadata+Snap+Pack
- Get a list of SnapLogic assets (pipelines, tasks, files and accounts)
- Invoke the GitHub REST API (uses HTTP basic auth - https://doc.snaplogic.com/wiki/display/SD/Basic+Auth )
- Read or write to GitHub

The pipelines use pipeline params to decouple runtime parameters from the actual implementation logic, so when you invoke them you can specify which SnapLogic projects to read, which assets to check in (cin) to GitHub, which repo to use on the GitHub side, and so on. We have implemented a bi-directional flow, i.e. you can check source code in and out (cin/cout) of GitHub.

The attached SnapLogic project export has all the required files. Please note that this is a custom solution; to use it you'll need to keep your GitHub creds ready (repo name, uname and pwd), create a basic auth account in SnapLogic and pass it on to the pipelines. You may struggle a bit, but don't give up - keep pounding and eventually you'll crack it 🙂 Please import the attached project export using these steps - https://doc.snaplogic.com/wiki/display/SD/How+to+Import+and+Export+Projects

Now for this part: "And that this can be further used to move code from one environment to another (code migration)" - try this API.

API detail:
Syntax = https://elastic.snaplogic.com:443/api/1/rest/public/project/migrate/ORG/SPACE/PROJECT
Authorization header = Basic Auth, pass on your SnapLogic uname/pwd
Body = application/json

Example:
https://elastic.snaplogic.com:443/api/1/rest/public/project/migrate/ConnectFasterInc/BK/DEV
{
  "dest_path": "/tacobell/projects/bk",
  "asset_types": ["File", "Job", "Account", "Pipeline"],
  "async": "true",
  "duplicate_check": "false"
}

Response:
{
  "response_map": {
    "status_token": "6e6600cd-2992-4423-95c3-ffb94293a3bd",
    "status_url": "http://elastic.snaplogic.com/api/1/rest/public/project/migrate/6e6600cd-2992-4423-95c3-ffb94293a3bd"
  },
  "http_status_code": 200
}

This runs as an async call and will migrate (copy) everything from ConnectFasterInc/BK/DEV to /tacobell/projects/bk; you can check the status of the migration by visiting status_url. If the project already exists and duplicate_check is set to false, it will create another project with the same name appended by (NUMBER), ex: if bk already exists inside /tacobell/projects then subsequent runs will add bk(1), bk(2) and so on. I wish we had an "overwrite" or "merge" parameter option, but nevertheless this is much easier than the META snaps (IMO).

BK-Github Integration.zip (12.1 KB)
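If you prefer to script this migrate call rather than use Postman/curl, here is a minimal sketch in Python. It assumes the endpoint takes a POST with the JSON body shown above and basic auth with your SnapLogic username/password; the credentials and paths are placeholders.

import requests

# Placeholder credentials - use your own SnapLogic login
SL_USER = "you@example.com"
SL_PASSWORD = "your-password"

# ORG/SPACE/PROJECT of the source project, as in the example above
url = "https://elastic.snaplogic.com:443/api/1/rest/public/project/migrate/ConnectFasterInc/BK/DEV"

body = {
    "dest_path": "/tacobell/projects/bk",
    "asset_types": ["File", "Job", "Account", "Pipeline"],
    "async": "true",
    "duplicate_check": "false",
}

resp = requests.post(url, json=body, auth=(SL_USER, SL_PASSWORD))
resp.raise_for_status()

status = resp.json()["response_map"]
print(status["status_token"])
# Poll status_url (same basic auth) to track the async migration
print(status["status_url"])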
Re: Passing parameters/session variables between nested pipelines/snaps

File Reader does accept $ and _ variables - did you toggle the = button? (see the toggle-me screenshot)

Re: Block communication for On-Premises HTTP URL

My bad ☹️ I should have double checked - I do not think we let you remove the HTTP URL; looks like a feature request to me.

Re: Block communication for On-Premises HTTP URL

Not aware of any global setting, but when you create a task just clear out the "On Premise URL" field - delete the http:// URL and that would stop anyone invoking this service via HTTP.

Re: Passing parameters/session variables between nested pipelines/snaps

_params cannot be set using a Mapper or any other means; they are pipeline parameters. Think of them as global params that you set at the pipeline level: you can use them across pipelines, but they can only be set at the global/pipeline level. When it comes to a Mapper, or any other snap that lets you set variables via the $varName notation, these are more like local vars: their values are set via the Mapper and are available to the immediately following snap.

"When I type _mypipelineparam as the Target Path in a Mapper, it gets corrected to $_mypipelineparam" - this is the expected behavior. Now let's say you are getting some value from an external call, ex: you invoke a triggered task and pass a "fileName" param to it, which is then used in the pipeline as fileName + timeStamp + .extn in your File Writer snap. In that case you can use something like this in your File Writer snap settings:

'/some/project/path' + _fileName + $timeStamp + '.json'

where timeStamp is set via a Mapper (see the mapper screenshot). You could also have created a $fileToWriteTo variable in a Mapper with this expression:

'/some/project/path' + _fileName + '_' + Date.now().toLocaleDateTimeString() + '.json'

Hope it makes sense!

Re: Passing parameters/session variables between nested pipelines/snaps

"Does not support pipeline _parameters" - on the contrary, the Script snap does support pipeline params. They can be accessed in a script snap via $_pipeline_param_name.

Example:
Python: data["pipelineparam"] = $_pipeline_param
JavaScript: new_data.pipelineparam = $_pipeline_param;

Also documented over here - https://doc.snaplogic.com/wiki/display/SD/Parameters+and+Fields

Re: Is it possible to host Soap service in SnapLogic

Anything's possible, Prakash 🙂 I'll send you a sample soon.

Re: XML parser generating an '@' prefix before each parsed element

How about applying XSLT to the incoming XML? Check this pipeline out - I am getting some errors on XSLT parsing, but I think we are still able to apply this XSL and move all attributes to elements. Attached are the pipeline and the xsl file (source - Convert XML Attributes To Elements XSLT - Stack Overflow). For the xsl file, rename the extension to .xsl.

convert-xml-attrib-to-elem.xsl.txt (469 Bytes)
PARSE-ORDER_2017_06_27.slp (5.0 KB)
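If you want to see what an attribute-to-element stylesheet does before wiring it into the XSLT snap, here is a rough local sketch in Python using lxml. This is only an illustration of the idea with a generic stylesheet - the attached .xsl is what the pipeline actually uses.

from lxml import etree

# A generic attribute-to-element stylesheet: for every element, each of its
# attributes is rewritten as a child element with the attribute's name.
XSL = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="*">
    <xsl:element name="{name()}">
      <xsl:for-each select="@*">
        <xsl:element name="{name()}"><xsl:value-of select="."/></xsl:element>
      </xsl:for-each>
      <xsl:apply-templates/>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>"""

xml = b'<order id="42"><item sku="A1">widget</item></order>'

transform = etree.XSLT(etree.XML(XSL))
result = transform(etree.XML(xml))
# prints something like: <order><id>42</id><item><sku>A1</sku>widget</item></order>
print(str(result))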
Re: How to use 3rd party python libraries in python Script

AFAIK you can call 3rd party Java libs inside the Script snap using Python as the scripting language. Here is a sample script that uses the AWS Java SDK and gets a list of S3 objects from a bucket.

# Script begin
# Import the interface required by the Script snap.
import java.util
import json
import sys

# Make the AWS Java SDK jar visible to the Jython interpreter
sys.path.append('/opt/snaplogic/userlibs/aws-java-sdk-1.9.6.jar')

from com.amazonaws import AmazonClientException
from com.amazonaws import AmazonServiceException
from com.amazonaws.regions import Region
from com.amazonaws.regions import Regions
from com.amazonaws.services.s3 import AmazonS3
from com.amazonaws.services.s3 import AmazonS3Client
from com.amazonaws.services.s3.model import Bucket
from com.amazonaws.services.s3.model import GetObjectRequest
from com.amazonaws.services.s3.model import ListObjectsRequest
from com.amazonaws.services.s3.model import ObjectListing
from com.amazonaws.services.s3.model import PutObjectRequest
from com.amazonaws.services.s3.model import S3Object
from com.amazonaws.services.s3.model import S3ObjectSummary
from com.snaplogic.scripting.language import ScriptHook


class TransformScript(ScriptHook):
    def __init__(self, input, output, error, log):
        self.input = input
        self.output = output
        self.error = error
        self.log = log

    # The "execute()" method is called once when the pipeline is started
    # and allowed to process its inputs or just send data to its outputs.
    def execute(self):
        self.log.info("Executing Transform script")
        while self.input.hasNext():
            try:
                # Read the next document, wrap it in a map and write out the wrapper
                in_doc = self.input.next()
                wrapper = java.util.HashMap()
                # bucket is a property set in a mapper that precedes the script snap
                # and holds the bucket name
                bucket = in_doc.get("bucket")
                # $_bucketPipelineParam is a pipeline param and holds the bucket name
                bucketParam = $_bucketPipelineParam
                s3 = AmazonS3Client()
                usEast1 = Region.getRegion(Regions.US_EAST_1)
                s3.setRegion(usEast1)
                objectListing = s3.listObjects(ListObjectsRequest().withBucketName(bucket))
                # Collect key -> size for every object in the bucket
                for objectSummary in objectListing.getObjectSummaries():
                    wrapper.put(objectSummary.getKey(), objectSummary.getSize())
                self.output.write(in_doc, wrapper)
            except Exception as e:
                errWrapper = {
                    'errMsg': str(e.args)
                }
                self.log.error("Error in python script")
                self.error.write(errWrapper)
        self.log.info("Finished executing the Transform script")


# The Script Snap will look for a ScriptHook object in the "hook"
# variable. The snap will then call the hook's "execute" method.
hook = TransformScript(input, output, error, log)
# Script end

I am on Windows, so I've copied the aws-java-sdk-1.9.6.jar file to c:/opt/snaplogic/userlibs on all of the plex nodes. You need to copy 3rd party jars to all of the plex nodes and make sure to save them at a consistent, identical path on every node. Also, on my nodes I have edited/created a credentials file located at C:\Users\Bkukadia\.aws\credentials, which contains these key=value pairs:

aws_access_key_id=AWSKEYAKIAIGFUBXI
aws_secret_access_key=AWSSECRET8++Kg6QNMX6

I think you can also use IAM roles, but I am not much familiar with them. For more details on the AWS Java SDK check this out - AWS SDK for Java

Invoke aws java sdk via py_2017_06_27.slp (6.0 KB)