Contributions

Re: How are pipeline executions distributed across a Snaplex?
We are having an issue where all our traffic tends to go to a single node. If I restart the Snaplexes, it helps for a few days, but then it settles back into a steady state around a single server. This effectively leaves us with only one node, and we are running into memory issues. I do have a ticket in with support, but they haven't been much help. Curious how many others are having this same problem. I believe our Snaplex nodes are configured identically (CPU, memory, etc.). This load-balancing strategy doesn't seem to work, and it would be nice to have other options to select, like round-robin. At least with round-robin we'd get something running on our other node.

Re: How to copy data multiple times
It's not clear how your countryIso and sessionId fields enter the pipeline (are they constants? pipeline parameters?), but I've attached an example that's pretty simple and doesn't involve JavaScript expressions. It uses a Splitter to break out the lines, then a Mapper to add the countryIso and sessionId (orderId?), then a Group By to put it all back together. You can use a JSON Formatter to spit it back out, depending on what you are doing with it. Hope this gives you some ideas anyway.
walkbackCopyData_2021_10_28.slp (7.7 KB)

Re: Doing a Lookup In SnapLogic? [Help]
One thing you could do: instead of trying to join on those four different fields, use a Mapper to create a single 'key' field (just concatenate all the values that make up the unique key). Do this for both sources and then join on that single key field (a sketch of the key expression follows at the end of these posts). You can add a Sort in there for good measure if you want, though I think joins support unsorted data. If you need to, write the output just before the join into a file and make sure the keys match exactly between the two data sets. All it takes is one tiny difference between the values you are joining on for it to be off, including whether a field is an integer or a string.

Re: JSON Query - Workday
=/ Ha, worked this out on my own not long ago. Wish I had read this first; it would have saved me some time!

Re: Line breaking during CSV Parser
Actually, if the offending line breaks are consistently in the very last field, here is the logic I would start with to build out the regular expression: replace every newline (\n) character with a space, except one that is followed by a pipe (|) before the next newline is reached. Or, put the other way around: replace a newline if another newline comes before the next pipe. They amount to the same thing (a regex sketch follows at the end of these posts).

Re: Line breaking during CSV Parser
I think it is possible, but a little tricky. You want to replace the extra line breaks, but not the ones that delimit the end of a record. I think you could do this with a regular expression ... but in the end, having line breaks in the CSV that way is not valid.

Re: Dynamic filename in s3
It worked for me with SFTP. Example (_sftpPath is a pipeline parameter): _sftpPath + '/' + $original.username + '.jpg'

Has anyone developed an image resize snap or is there a way to do that with existing snaps?
Looking for a way to resize an image in my pipeline. Anyone done this yet before I try to build a custom snap? Maybe there's an easy way to do it with the Script snap? Thanks
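A quick follow-up on the lookup/join answer above: here is a minimal sketch of the kind of Mapper expression I mean. The field names ($custId, $region, $orderDate, $lineNo) are hypothetical placeholders; substitute whatever fields make up your unique key. Map the result to a new field (say, $joinKey) in both sources, then join on that one field:

    $custId + '|' + $region + '|' + $orderDate + '|' + $lineNo

String concatenation should coerce numeric values to strings, which also irons out the integer-vs-string mismatch mentioned above. The pipe delimiter is just an example; pick a character that can't appear in the data, or different records could produce colliding keys.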
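And a sketch of the regex logic from the CSV line-break posts, assuming the raw file content is sitting in a hypothetical $text field and records end with plain \n (fold \r\n down to \n first if the file uses CRLF line endings). The lookahead matches a newline only when another newline is reached before any pipe, i.e., a stray break inside the last field, and replaces it with a space:

    $text.replace(/\n(?=[^|\n]*\n)/g, ' ')

This is untested against your data, so write the result to a file and eyeball a few records before feeding it to the CSV Parser.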
State Department Per Diem Rates to Workday Import Expense Rate Table
Pipeline sample that grabs per diem rate data from the State Department website and imports it into Workday Rate Tables. It gets a little complex as far as the data mapping and filters go, but it cuts down on tedious manual maintenance of these tables. We run this once a month. There is definitely room for improvement, like adding some validation checks. One big issue is that the import is all or nothing: if a single record doesn't match between the State Dept. spreadsheet and Workday, it fails. Also, the Workday call itself always returns as if it succeeded, so you have to search for 'import expense rate table' in Workday to see whether there were any errors and what they were. You will likely have to play around with the mapping and filters to get the Excel data to match the Spend_Data_ID fields for locations in Workday. And the State Dept. adds a new location every couple of months, which will cause the integration to fail until you enter the new location in Workday. So, not perfect, but it certainly saves HR a few steps.
Erik Pearson, Senior Software Engineer, Bowdoin College
HR_PerDiemRates_HR_Import.zip (7.8 KB)

Re: SSO with ADFS missing Claim Rules
Did you figure this out? I'm running into the same issue.