Forum Discussion
Integration with GitHub is achieved via the GitHub REST API (GitHub REST API - GitHub Docs).
From a design perspective, this is how it works:
Create a SnapLogic pipeline that uses the Metadata Snaps - https://doc.snaplogic.com/wiki/display/SD/SnapLogic+Metadata+Snap+Pack - to get a list of SnapLogic assets (pipelines, tasks, files and accounts)
Invoke the GitHub REST API (uses HTTP basic auth - https://doc.snaplogic.com/wiki/display/SD/Basic+Auth )
Read from or write to GitHub
The pipelines use pipeline parameters to decouple runtime configuration from the actual implementation logic, so when you invoke them you can specify which SnapLogic projects to read, which assets to check in to GitHub, which repo to use on the GitHub side and so on.
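For illustration, here is a rough Python sketch of the kind of GitHub REST call the pipeline ends up making when checking a file in. The variable names stand in for pipeline parameters and are placeholders, not the actual names used in the attached project:

import base64
import requests

# Placeholder values standing in for pipeline parameters (hypothetical names)
github_user = "my-user"
github_password = "my-password-or-token"
github_repo = "my-repo"
file_path = "pipelines/MyPipeline.slp"

# GitHub "create or update file contents" endpoint; add a "sha" field to the
# payload when updating a file that already exists in the repo.
url = "https://api.github.com/repos/%s/%s/contents/%s" % (github_user, github_repo, file_path)

with open("MyPipeline.slp", "rb") as f:
    payload = {
        "message": "Check in pipeline export from SnapLogic",
        "content": base64.b64encode(f.read()).decode("ascii"),
    }

resp = requests.put(url, json=payload, auth=(github_user, github_password))
resp.raise_for_status()
print(resp.json()["content"]["html_url"])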
We have implemented a bi-directional flow, i.e. you can check in and check out source code from GitHub.
The attached SnapLogic project export has all the required files. Please note that this is a custom solution; to use it you'll need to have your GitHub credentials ready (repo name, username and password), create a Basic Auth account in SnapLogic and pass it to the pipelines.
You may struggle a bit, but don’t give up, keep pounding and eventually you’ll crack it 🙂
Attached is the SnapLogic project export; please import it using these steps - https://doc.snaplogic.com/wiki/display/SD/How+to+Import+and+Export+Projects
Now for this
And that this can be further used to move code from one environment to another (code migration).
Try this API:
API Detail:
Syntax = https://elastic.snaplogic.com:443/api/1/rest/public/project/migrate/ORG/SPACE/PROJECT
Authorization Header = Basic Auth; pass your SnapLogic username/password
Body = application/json
Example:
https://elastic.snaplogic.com:443/api/1/rest/public/project/migrate/ConnectFasterInc/BK/DEV
{
"dest_path":"/tacobell/projects/bk",
"asset_types":["File","Job","Account","Pipeline"],
"async":"true",
"duplicate_check":"false"
}
Response:
{
"response_map": {
"status_token": "6e6600cd-2992-4423-95c3-ffb94293a3bd",
"status_url": "http://elastic.snaplogic.com/api/1/rest/public/project/migrate/6e6600cd-2992-4423-95c3-ffb94293a3bd"
},
"http_status_code": 200
}
This runs as an async call and will migrate (copy) everything from ConnectFasterInc/BK/DEV to /tacobell/projects/bk; you can check the status of the migration by visiting the status_url.
If a project already exists and duplicate_check is set to false, the call will create another project with the same name appended with a number, e.g. if bk already exists inside /tacobell/projects then subsequent runs will add bk(1), bk(2) and so on. I wish we had an "overwrite" or "merge" parameter option, but nevertheless this is much easier than the Metadata Snaps (IMO).
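If it helps, here is a minimal Python sketch of the same call, assuming your own org/space/project and credentials; the exact fields in the status payload aren't documented here, so just fetch status_url and inspect the response:

import requests

# Assumed values - replace with your own org/space/project and SnapLogic credentials
base = "https://elastic.snaplogic.com:443/api/1/rest/public/project/migrate"
auth = ("snaplogic-user@example.com", "snaplogic-password")

body = {
    "dest_path": "/tacobell/projects/bk",
    "asset_types": ["File", "Job", "Account", "Pipeline"],
    "async": "true",
    "duplicate_check": "false",
}

resp = requests.post(base + "/ConnectFasterInc/BK/DEV", json=body, auth=auth)
resp.raise_for_status()
status_url = resp.json()["response_map"]["status_url"]

# Because async is "true", the copy runs in the background; poll status_url to
# see how far it has got (field names in the status response may vary).
print(requests.get(status_url, auth=auth).json())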
BK-Github Integration.zip (12.1 KB)
Many thanks for such a lot of detail and depth, @Bhavin! I will pass this on to our project team; it's a bit too intricate for me :).
Are there any plans to add native support for GitHub/TFS into the product in the future? And integrations with TeamCity/Jenkins/etc.?
- vsunilbabu · 5 years ago · New Contributor II
Because I was stuck and unable to come up with a list of column names to pass into Pipeline Execute as parameters, I initially used a CSV Generator to test my logic, but I could not get it to work.
Any help is appreciated. The pipeline should look at the first row and change the data type of each column accordingly.
Thanks in advance. Regards,
Sunil
The CSV Parser has some functionality for doing type conversion, but the types are expected to be in a separate file (see the Input Views section of the doc).
If you are not able to get your data in that form, I’m attaching an example pipeline that might do what you want. This pipeline uses a Router snap to split the first row off and then a Join to merge it back in with all the remaining rows. A Mapper snap is then used to do the type conversion with the following expression:
$.mapValues( (value, key) => match $types.get(key) { 'char' => value, 'integer' => parseInt(value), 'date' => Date.parse(value, "dd/mm/YY"), _ => value } )
Since that's a little involved, I'll go into some more detail. First, the mapValues() method is used to rewrite the value of each property in the input document. That method takes a callback that does the actual work. The callback uses the match operator to check the type of each property and then executes the conversion expression (e.g. the type of "Priority" is "integer", so the match arm with parseInt(value) is executed).
TypeConversion_2019_09_16.slp (10.5 KB)
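For anyone who prefers to see the logic outside of a pipeline, here is a plain-Python sketch of the same idea, using made-up sample data; the real pipeline does this split-and-convert with Router, Join and Mapper snaps:

from datetime import datetime
import csv, io

# First data row carries the type of each column; the rest are the real rows.
sample = """Name,Priority,Due
char,integer,date
Write report,2,16/09/19
Review PR,1,17/09/19
"""

rows = list(csv.DictReader(io.StringIO(sample)))
types, data = rows[0], rows[1:]   # the Router snap performs this split in the pipeline

def convert(value, kind):
    if kind == "integer":
        return int(value)
    if kind == "date":
        return datetime.strptime(value, "%d/%m/%y")
    return value                  # 'char' and anything unrecognised pass through unchanged

converted = [{k: convert(v, types[k]) for k, v in row.items()} for row in data]
print(converted)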
- vsunilbabu · 5 years ago · New Contributor II
Thank you @tstack . This worked perfectly.
- smanoharan · 5 years ago · Employee
@vsunilbabu If the Create new table if not present option is selected without providing the schema in a secondary input view, varchar will be used for all the columns' data types.
In your use case, you can provide the schema of the table that you want to create in a second input view to get exactly the data types that you want in the Snowflake table.
The first example mentioned in the Snowflake Bulk Load snap’s documentation covers a similar use case with an example: https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1438549/Snowflake+-+Bulk+Load
cc: @dmiller