Re: MySQL Insert Can't Insert Special Characters
THANK YOU jaybodra!!! Turns out our workaround broke in MySQL 5.7, and we stumbled upon our own question, which now has your solution that worked. A bunch of employees with special characters in their names will be very grateful!

Re: How to integrate 3rd party job scheduler (e.g. Cisco Tidal) to kickoff and monitor SnapLogic pipeline execution
Okay, we're going to try this instead: have a triggered task that performs a Pipeline Execute on different pipelines. It will return the runtime id of the pipeline (ruuid), and then we'll check the SnapLogic Pipeline Monitoring API every 5 minutes for the ruuid status (a rough polling sketch appears after these posts). Hopefully this method works, unless we can somehow increase the gateway timeout threshold.

Re: How to integrate 3rd party job scheduler (e.g. Cisco Tidal) to kickoff and monitor SnapLogic pipeline execution
I'm getting a "504 Gateway Time-out" HTML response from triggered tasks right at the 15-minute mark. The pipeline is running a MySQL stored procedure that takes more than 15 minutes, so there aren't any documents coming out during that time. Any ideas? Is there a way to increase the timeout?

Re: Enable or disable a task
Yeah, it looks like there's some type of pipeline run queue... so if you disable a task at the end of its own pipeline, there's still a chance it runs again because it's already in the queue... ☹️ FIX: I added a Task Reader snap and a filter for "$parameters.enabled == true" at the start of the pipeline, so if the task has been disabled, execution stops.

Re: Enable or disable a task
This is very helpful. I'm trying to build a "Job Scheduling" system that enables and disables tasks when they are ready to run, based on another system's scheduling. I have each task "retry" 5 times, so that if the pipeline fails it will automatically retry... but if it succeeds, it disables its own task and shouldn't run again. Unfortunately, even after the task has been disabled, the pipeline runs a second time, so it appears that scheduled runs are added to a queue, and if a run is within 1-2 minutes of starting, disabling the task doesn't remove it from the queue. Does anyone have any ideas? I thought about using the Task Execute snap, but the documentation says it's being deprecated. I don't want to use Pipeline Execute because I don't want all the executions related; I'd end up with one master scheduler pipeline related to every single execution. EDIT: Maybe it's that I've got the retry interval set to 3 minutes? I'm changing it to 5 and will see if that fixes it.

Re: Merging fields of a document
Don't query all of the data at the same time. Use a Pipeline Execute that passes parameters, so that your two Salesforce snaps only select and join a small subset of data (use SystemModStamp?).

Re: Single pipeline to call multiple pipelines
We use Pipeline Executes, as they are less error-prone than just dragging in the pipeline call. You can make something like the screenshot below, which will also email any errors and continue execution if there is a failure (if you want that).

File Delete Snap: Bulk S3 File Deletion
Is there any way to delete all files within an S3 bucket directory? Rather than making 1000 File Deletes, it would be awesome to have one snap execution delete a directory and/or all files within it. (A boto3 prefix-delete sketch, outside SnapLogic, appears after these posts.)

Re: Data Loading Strategy
RE: Option 1: Wouldn't renaming the tables be faster? I.e., rename table to table_old and table_temp to table? That's what we do for MySQL.
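A minimal sketch of that table-swap load pattern, assuming a staging table has already been fully loaded; the connection details and table names are placeholders, and mysql-connector-python is used only for illustration:

```python
import mysql.connector

cnx = mysql.connector.connect(
    host="db.example.com",      # placeholder connection details
    user="loader",
    password="secret",
    database="analytics",
)
cur = cnx.cursor()

# Clear out the previous generation so the rename below cannot collide.
cur.execute("DROP TABLE IF EXISTS my_table_old")

# Both renames happen in one atomic statement, so readers never see the
# table missing or half-loaded.
cur.execute("RENAME TABLE my_table TO my_table_old, my_table_temp TO my_table")

cur.close()
cnx.close()
```

Because MySQL's multi-table RENAME is atomic, the swap avoids the window where the target table is empty or partially loaded, which is why it tends to be faster and safer than truncating and reloading in place.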
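On the bulk S3 deletion question above: as a workaround outside SnapLogic, boto3 can remove everything under a key prefix in batched calls. A hedged sketch, with the bucket and prefix names as placeholders:

```python
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("my-bucket")          # placeholder bucket name

# objects.filter(Prefix=...) pages through every key under the "directory",
# and .delete() issues batched DeleteObjects requests (up to 1000 keys each).
bucket.objects.filter(Prefix="exports/archive/").delete()
```

The batched DeleteObjects calls mean a "directory" of thousands of files is removed in a handful of requests rather than one call per file.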
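For the scheduler-integration workaround above (kick off a triggered task, capture the child run's ruuid, then poll the Pipeline Monitoring API every 5 minutes), here is a rough Python sketch. The endpoint paths, bearer-token auth, and response field names are assumptions and placeholders, not confirmed SnapLogic API details:

```python
import time
import requests

# Assumptions / placeholders: triggered-task URL, runtime API path,
# bearer-token auth, and response shapes are illustrative only.
TASK_URL = "https://elastic.snaplogic.com/api/1/rest/slsched/feed/MyOrg/MyProject/MyTask"
RUNTIME_URL = "https://elastic.snaplogic.com/api/1/rest/public/runtime/MyOrg"
HEADERS = {"Authorization": "Bearer <token>"}

# 1. Kick off the triggered task; the parent pipeline is assumed to return
#    the ruuid of the child run it started via Pipeline Execute.
resp = requests.get(TASK_URL, headers=HEADERS, timeout=60)
resp.raise_for_status()
ruuid = resp.json()[0]["ruuid"]  # assumed response shape

# 2. Poll the runtime API every 5 minutes until the run leaves an active state.
while True:
    status = requests.get(f"{RUNTIME_URL}/{ruuid}", headers=HEADERS, timeout=60)
    status.raise_for_status()
    state = status.json().get("response_map", {}).get("state")  # assumed field
    print(ruuid, state)
    if state not in ("Queued", "Prepared", "Started", "Running"):
        break
    time.sleep(300)  # 5 minutes
```

Any terminal state ends the loop; a run that is still queued or executing simply waits another interval, which keeps the external scheduler's connection short-lived and avoids the gateway timeout entirely.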
Re: How to Fetch all records while validating snap
Why do you need all the records during validation? Pretty sure the preview limit of 50 documents on validation can't be changed. You could add an email snap to send the output to yourself, or write it to a database or file.