Re: What if No input is coming to Mapper

Sandeep,

Many snaps will do nothing if there is no input at all from a previous snap. To get around this, you can create a JSON Generator to produce a dummy document and merge it with your normal input using a Join snap. Check the link below for an example.

Performing an Action when there is no data (Designing Pipelines)
A common integration pattern is to do something when no data is received. For example, we might read a file, parse it, and find that no records meet some filter criteria. As a result, we might send an email, or insert a ticket into a ticket management system like ServiceNow. However, in SnapLogic, this can be somewhat more difficult than it seems initially because of the streaming architecture. In fact, many snaps will not execute without input documents - rather hard to accomplish when there i…
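To make the pattern concrete, here is a plain-Python analogy of what the JSON Generator + Join combination accomplishes. This is illustrative only; the __dummy__ flag and the handle() helper are made up for the sketch, not part of SnapLogic:

```python
# Plain-Python analogy of the "merge a dummy document" pattern:
# guarantee at least one document flows downstream even when the
# real input is empty, and flag which document is the dummy.
def handle(records):
    dummy = {"__dummy__": True}        # JSON Generator equivalent
    merged = records + [dummy]         # Join/merge equivalent
    real = [r for r in merged if not r.get("__dummy__")]
    if not real:
        # The "no data" branch: send an email, open a ticket, etc.
        print("no records received - alerting")
    return real

handle([])              # only the dummy arrives, so the alert branch fires
handle([{"id": 1}])     # a real record passes through untouched
```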
Re: Triggered vs Ultra Task

Sandeep,

When you call a triggered task, there is overhead in initiating the pipeline, executing the pipeline, and tearing the pipeline down. That overhead of initiation and teardown is removed by using Ultra Pipelines: an Ultra Pipeline starts once and then remains running, listening for new calls. In the event of a failure, there is a type of node called a FeedMaster that is always watching and will attempt to start the pipeline again so it's ready for the next call. You can have multiple executions of a pipeline running on multiple nodes, so you can tune it to meet your throughput needs.

Teradata Snap Pack on a Windows Node?

Hello everyone,

We are attempting to use the Teradata TPT snap from the Teradata Snap Pack and are running into a roadblock in giving the snap the location of tbuild.exe in the "TBUILD location" field. I can't get it to find the file no matter what combination of slashes and folder paths I can think of: "C:\folder", "C:/folder/", "C://folder//", etc… Has anybody else gotten this to work? The documentation doesn't specify the format, and support says they haven't thoroughly tested it on Windows yet.

Thanks in advance,
Brett

Re: User created with incorrect privileges

As an admin… Manager > Settings > Manage Password Logins.

Re: Profile Badges for SnapLogic employees?

That's perfect! Thanks for the quick solution and reply!

Profile Badges for SnapLogic employees?

Is it possible to get some sort of indicator in our profiles that can distinguish a regular user from a SnapLogic employee? Perhaps a badge or something added to the avatar? It would help provide context when reading a post.

Re: Citizen Integrator Solution?

Thanks for the thought-out responses!

Our environment currently consists of 3 Orgs (Dev, QA, Prod). We have 2 Snaplexes (Cloud and Ground) that are shared at the Org level, and we have defined Project Spaces to align with line-of-business development groups:

- Ecommerce
- Corporate Systems
- Store Systems
- etc…

Marketing is the first set of users that isn't purely a development group, but it will not be the last, so this needs to be a practical and sustainable pattern going forward. Seeing as how Orgs cost money, a separate Org won't be accepted very well for departments with one or two users.

The solution we're leaning toward is to (a) create a new Project Space called Marketing and (b) create a new Snaplex called Marketing within the Marketing Project Space's share. We have 4 additional nodes pending installation; as far as I know, we should be able to assign one or more of those nodes to the Marketing Snaplex to isolate the server resources for that group.

The problem that remains is keeping the Marketing users from running their pipelines on the Cloud or Ground Snaplexes. I see two options here. First, as Tim mentioned, we could move the existing Snaplexes down from the Org share to a lower level; the problem is that we have many Project Spaces that would all need Snaplexes created. The second option is to remove access from "all users" on the Org-shared folder and instead put every user in a group and grant those groups access, leaving Marketing in their own group that only has access to their Project Space. We'd just need to move the common shared objects that we want the Marketing group to have down into their Project Space as well. We do have an Integration Success Team in place to facilitate this sort of management within SnapLogic.

Side note: it seems a bit strange that a Snaplex, which is merely a run-time environment, is shared alongside pipelines and accounts rather than being its own type of entity that can have access granted outside of the project structure.

Citizen Integrator Solution?

Hello all,

We're in the process of on-boarding our first Citizen Integrator on the SnapLogic platform. We've previously federated the development of production integrations to LOB developers, but the Citizen Integrator has a different set of requirements, and I'd like to start a discussion on how best to accomplish this. Below are the criteria we're trying to meet:

- Enable power users on our Marketing team to build simple, low-risk integrations without involving IT
- Allow the Marketing users to leverage some of our existing shared pipelines and patterns
- Isolate the Marketing team's resources enough that they cannot affect our production integration performance (new users could do some crazy stuff)
- Allow access to specific sets of production data, but not all shared accounts; they will not have the concept of dev, QA, and production, as digging through production data is their world

We're considering a few options, each with advantages and drawbacks. We could make a new Project Space, as our current development teams each have their own. This gives the Marketing team access to shared assets, but puts them into the same production environment as mission-critical integrations. Or we could create an entirely new Org. This would keep them isolated, and I think we could assign a node specifically to that Org to manage resources, but they wouldn't be able to see shared objects unless we manually move them over; I'm also not sure whether there is a cost involved.

I'd love to hear from anyone else who has already gone down this road. What methods did you try? What worked, and what didn't?

Thanks in advance!
Brett

Monitoring Pipeline Status with an External Scheduler

In our organization, SnapLogic is one of many tools used to integrate data. We use an enterprise scheduler to manage execution, alerting, and dependencies across all platforms and technologies: Cisco's Tidal Enterprise Scheduler executes our SnapLogic pipelines, SSIS packages, Informatica workflows, FTP file movements, command-line executables, etc.

To expose a pipeline to an external scheduler, we create a triggered task and give the exposed API URL to the web service adapter within Tidal. Tidal executes the pipeline and gets a response of "200 - OK" because the pipeline task triggered successfully. This doesn't tell us that the pipeline finished successfully, just that it kicked off successfully.
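To make that gap concrete (and to preview the approach described below), here is a rough Python sketch of the scheduler-side trigger-then-poll wrapper I have in mind. The status endpoint path, the response fields, and how the ruuid is obtained are assumptions I still need to verify against SnapLogic's Runtime API documentation, not a tested implementation:

```python
# Sketch: trigger a SnapLogic task, then poll its runtime status.
import time
import requests

BASE = "https://elastic.snaplogic.com"
ORG = "MyOrg"                                    # hypothetical org name
TRIGGER_URL = f"{BASE}/api/1/rest/slsched/feed/{ORG}/MyProject/MyTask"
HEADERS = {"Authorization": "Bearer <task-token>"}

def run_and_wait(poll_secs=30, timeout_secs=3600):
    # The trigger call returns 200 as soon as the task kicks off;
    # it says nothing about whether the pipeline ultimately succeeds.
    resp = requests.post(TRIGGER_URL, headers=HEADERS, timeout=60)
    resp.raise_for_status()

    # We surface the execution's ruuid ourselves (e.g. via a Mapper
    # with an open output view returning pipe.ruuid), since the
    # trigger response alone doesn't identify the run. The response
    # shape and field name here are assumed.
    ruuid = resp.json()[0]["ruuid"]
    status_url = f"{BASE}/api/1/rest/public/runtime/{ORG}/{ruuid}"

    deadline = time.time() + timeout_secs
    while time.time() < deadline:
        state = (requests.get(status_url, headers=HEADERS, timeout=60)
                 .json().get("response_map", {}).get("state"))
        if state == "Completed":
            return                               # mark the job successful
        if state in ("Failed", "Stopped"):
            raise RuntimeError(f"pipeline ended in state {state!r}")
        time.sleep(poll_secs)                    # still running; retry
    raise TimeoutError("pipeline did not finish before the deadline")
```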
In order to catch failures, we use System Center Operations Manager to call the summary pipeline status API. It returns one or more failures, which are sent to our IT Operations team, who triage and notify the responsible parties. We've been running this way for a while, and it's been working well enough.

Now we're exposing SnapLogic to more projects and more development groups, and as a result the demands around successful executions and downstream dependencies have increased. We need our scheduler to know when jobs succeed, fail, or run long, and we need each team to be notified of their own pipeline failures.

From here on I'm talking theory; I'm very interested in what others have come up with as a solution to enterprise scheduling. Since the only response the scheduler gets back from the REST API call is "200 - OK", we can't rely on it to determine whether the job was successful. SnapLogic has published a set of APIs that return the status of an individual pipeline. If we can make our scheduler dependent on the result of a subsequent status call, then we should be able to alert accordingly. To accomplish this, I'm attempting to implement the following (I haven't connected all the dots yet):

1. Add a Mapper to each parent pipeline that has an open output and returns the URL used to monitor that pipeline (+pipe.ruuid).
2. Create a Tidal job (a) to call the initial pipeline task that does the actual integration.
3. Create a Tidal job (b), dependent on (a)'s success, that calls the monitoring URL returned from (a) repeatedly at a short interval and logs the return code to a Tidal variable. If (b) returns "Running", keep trying; if (b) returns "Failed", fail the job; if (b) returns success, mark the job as successful.
4. Create a Tidal job (c), the next actual integration, that is dependent on both the success of (b) and a value of "Success" in the Tidal variable.

This is quite a bit of tedium just to handle the success or failure of a job, and while I've not yet successfully implemented this solution, I feel like it's within reach. What solutions have others come up with for managing dependencies and alerting across your enterprise?

Re: How are pipeline executions distributed across a Snaplex?

Thank you Robin, this makes a lot of sense with regard to evenly spreading the load. While we try to make our servers equal in hardware and software specifications, there will inevitably come a time when that's not practically possible (e.g., the minimum CPU availability of new servers rises across the board) and newer servers will inherently be more powerful. At that point we would be looking at either upgrading some servers to make them comparable, or replacing servers.

Are there any plans to support different hardware configurations through some sort of load-balancing configuration setting? ServerA = 1.0x, ServerB = 1.0x, ServerC = 1.5x capacity, etc. (a toy sketch of the idea follows below).

Thanks!
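To illustrate what I mean by a capacity setting (purely a toy model, not a claim about how the Snaplex scheduler actually works), a per-node weight could feed into a "most weighted headroom" dispatch decision:

```python
# Toy model of capacity-weighted dispatch: send each new pipeline to
# the node with the lowest load relative to its capacity weight.
nodes = {"ServerA": 1.0, "ServerB": 1.0, "ServerC": 1.5}   # weights
running = {name: 0 for name in nodes}                      # active count

def pick_node():
    # Lowest normalized load = most spare capacity for its weight.
    return min(nodes, key=lambda n: running[n] / nodes[n])

for _ in range(7):                  # dispatch seven pipelines
    running[pick_node()] += 1

print(running)   # {'ServerA': 2, 'ServerB': 2, 'ServerC': 3}
```

Over many dispatches, the heavier-weighted ServerC would carry proportionally more of the work, which is the behavior we'd want from mixed hardware.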