How To Generate Rank with Partitions
Hi, I need to generate a rank for the data below. The data should be partitioned by No and by month of Date, ordered by Value, and I want to get the top 3 ranked categories for each No and each month.

Input:

No  Category  Date        Value
1   Dog       01-01-2022  32
1   Rabbit    01-01-2022  95
1   Fish      01-02-2022  4
1   Ox        01-02-2022  23
2   Cat       01-01-2022  4
2   Mouse     01-01-2022  12
2   Woman     01-02-2022  66
2   Man       01-02-2022  56
3   Bird      01-01-2022  54
3   Bee       01-01-2022  43
3   Cow       01-02-2022  32
3   Pig       01-02-2022  89

Expected Output:

No  Category  Date        Value  Rank
1   Rabbit    01-01-2022  95     1
1   Ox        01-02-2022  23     1
1   Dog       01-01-2022  32     2
1   Fish      01-02-2022  4      2
2   Mouse     01-01-2022  12     1
2   Woman     01-02-2022  66     1
2   Cat       01-01-2022  4      2
2   Man       01-02-2022  56     2
3   Bird      01-01-2022  54     1
3   Cow       01-02-2022  32     1
3   Bee       01-01-2022  43     2
3   Pig       01-02-2022  89     2
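In SQL this kind of partitioned rank is usually written as `RANK() OVER (PARTITION BY ... ORDER BY Value DESC)`. As a runnable illustration of the same logic, here is a plain-Python sketch, with the sample rows copied from the input above and the date format assumed to be DD-MM-YYYY:

```python
from collections import defaultdict
from datetime import datetime

# Sample rows copied from the question's input: (No, Category, Date, Value).
rows = [
    (1, "Dog", "01-01-2022", 32), (1, "Rabbit", "01-01-2022", 95),
    (1, "Fish", "01-02-2022", 4), (1, "Ox", "01-02-2022", 23),
    (2, "Cat", "01-01-2022", 4), (2, "Mouse", "01-01-2022", 12),
    (2, "Woman", "01-02-2022", 66), (2, "Man", "01-02-2022", 56),
    (3, "Bird", "01-01-2022", 54), (3, "Bee", "01-01-2022", 43),
    (3, "Cow", "01-02-2022", 32), (3, "Pig", "01-02-2022", 89),
]

def month_key(date_str):
    # Assumes DD-MM-YYYY; the partition is on (year, month).
    d = datetime.strptime(date_str, "%d-%m-%Y")
    return (d.year, d.month)

# Partition by (No, month of Date).
groups = defaultdict(list)
for no, category, date, value in rows:
    groups[(no, month_key(date))].append((no, category, date, value))

# Rank within each partition by Value descending and keep the top 3.
# (This is ROW_NUMBER-style ranking; the sample data has no ties.)
ranked = []
for members in groups.values():
    members.sort(key=lambda r: r[3], reverse=True)
    for rank, (no, category, date, value) in enumerate(members, start=1):
        if rank <= 3:
            ranked.append((no, category, date, value, rank))
```

Each partition here happens to contain only two rows, so every row survives the top-3 filter; with ties you would want true RANK semantics rather than positional numbering.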
Keep data "through" snaps which don't provide original input under $original in their output

SnapLogic isn't consistent in how data is passed through various snaps. It's often necessary to keep some data obtained early in the flow (e.g. from the source) for use later (e.g. when writing to the target). However, some snaps, like the XML Parser, require that only the data to be parsed is passed as input, while also not supporting binary headers or similar mechanisms to forward data from one side of the snap to the other. This effectively removes everything except the data the snap cares about from the stream. There's an enhancement request for fixing this posted here somewhere, and we've written about this to our SnapLogic contacts, so hopefully the following workaround won't be necessary for very long, but here it is:

1. Move the "problem" snap to a child pipeline and call it via the Pipeline Execute snap, making sure "Reuse executions to process documents" is not checked (the workaround won't work if it is).
2. If needed, at the start of the child pipeline, remove any data not to be used by the "problem" snap.
3. The Pipeline Execute snap will output the original input data under $original (as the "problem" snap should have done).
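The steps above can be modeled in a few lines. This is an illustrative sketch of the document shape you end up with, not SnapLogic internals; everything except the `original` key is a made-up name:

```python
# Models the workaround's effect: the child pipeline runs the "problem" snap
# on just the payload it needs, and Pipeline Execute hands back the child's
# output alongside the untouched parent input under "original".
def pipeline_execute(child_fn, input_doc, payload_field):
    child_output = child_fn(input_doc[payload_field])  # child sees only the payload
    return {**child_output, "original": input_doc}

doc = {"source_id": 42, "xml": "<a>1</a>"}
result = pipeline_execute(lambda xml: {"parsed": {"a": "1"}}, doc, "xml")
# result carries both the parsed data and everything obtained earlier in the flow
```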
Prefixing/Suffixing multiple Input Schema in Mapping table expressions (i.e. Target Path) under a mapper snap

Hi Team,

Can we prefix/suffix multiple field names in the Mapping table expressions under a Mapper snap in a single shot? As per the above screenshot, I can either:

1. Select all fields as-is, or
2. Manually select the required field names, do the necessary transformations (if any), and then save each with the same or a different name in the Target Path.

Let's assume I have n (where n >= 10) Snowflake tables to read, and eventually I would be joining them. Each table has 300+ columns, and I need to prefix/suffix those column names in my SnapLogic pipeline ONLY with something that lets me distinguish each field name. Is it possible to do that inside a Mapper snap? I'm fine with keeping the same field names as the Input Schema; the only addition would be a prefix/suffix, saving manual effort on 300+ columns for each of the n tables I read. The screenshot below shows how I would want each field name to appear with reduced manual effort (it could be either a prefix or a suffix; I'm not looking for a combination at this point).

P.S.: If there is any snap other than the Mapper that does the job for me, I would appreciate the help and leads. Thanking in advance.

Best Regards,
DT
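The renaming itself is a single key-mapping pass over each record. In a Mapper, an object-level expression along the lines of `$.mapKeys((value, key) => "EMP_" + key)` against Target Path `$` is the usual way to do this in one shot (check your release's expression-language docs for `mapKeys`). The Python sketch below illustrates the transformation itself; table names, prefixes, and sample fields are made up:

```python
# Prefix every field of a record with a per-table tag so columns stay
# distinguishable after the join. Sample data and prefixes are illustrative.
def prefix_fields(record, prefix):
    return {prefix + name: value for name, value in record.items()}

employees = {"ID": 7, "NAME": "Ada"}
departments = {"ID": 3, "NAME": "R&D"}

# After prefixing, the two tables' identically named columns no longer collide.
joined = {**prefix_fields(employees, "EMP_"),
          **prefix_fields(departments, "DEPT_")}
```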
The following patterns are available to help you build out your HR workflow:

Employee Journey: Recruiting: Position Creation
Employee Journey: Employee Onboarding/Offboarding
Employee Journey: Employee Data Management: Box Folder Management
Employee Journey: People Analytics
Employee Journey: Create or Update Skilljar User
Employee Journey: Insert new employee into Workday
Employee Journey: Scheduled employee data batch update from Oracle into Workday
REST API service endpoint returned error result: Status Code = 403

When configuring a REST Post snap against the Mailjet API, it reports the following message:

Error: Reason: REST API service endpoint returned error result: status code = 403, reason phrase = Forbidden, refer to the error_entity field in the error view document for more details
Resolution: Please check the values of Snap properties.
Error Fingerprint[0] = efp:com.snaplogic.snap.api.rest.89xhsFt7

When performing the same request in Postman, the execution succeeds.
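A 403 when the same call works in Postman usually points at authentication details Postman sends but the snap doesn't. Mailjet's v3 API expects HTTP Basic auth built from the API key and secret, so check that both are set in the snap's account. As a sketch of what the request must carry (placeholder credentials, network call left commented out):

```python
import base64
import urllib.request

# Placeholder credentials: substitute your own Mailjet API key pair.
api_key, api_secret = "YOUR_API_KEY", "YOUR_API_SECRET"

# Mailjet uses HTTP Basic auth: base64("key:secret") in the Authorization header.
token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()

req = urllib.request.Request(
    "https://api.mailjet.com/v3.1/send",
    data=b'{"Messages": []}',
    headers={
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # actual send omitted in this sketch
```

If the header is absent or built from the wrong pair, Mailjet answers 403 Forbidden, which matches the snap error above.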
Multiple file loads in single pipeline

Hi Team,

I have a use case where I need to process multiple files (e.g. A, B, C) and load them into tables (e.g. X, Y, Z). I can set up this configuration as a control table: when file A is available, load it into table X; for file B, load it into table Y; for file C, load it into table Z. Please let me know the best possible ways, or an optimized sample pipeline, to do this load into Oracle tables.
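The control-table idea can be sketched simply: a small mapping drives which target table each incoming file is loaded into. In SnapLogic this might be a lookup whose result feeds the table name into the load snap via a pipeline parameter; the mapping and names below are illustrative:

```python
# Illustrative control table: file stem -> target Oracle table.
control_table = {"A": "X", "B": "Y", "C": "Z"}

def route(filename):
    """Return the target table for an incoming file, or fail loudly."""
    stem = filename.split(".")[0].upper()
    try:
        return control_table[stem]
    except KeyError:
        raise ValueError(f"No target table configured for {filename}")

# route("A.csv") → "X"
```

Keeping the mapping in an actual database table (rather than hard-coded) means new file-to-table routes need only a row insert, not a pipeline change.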
Writing flat file

Hi Team,

For my current use case, I need to write a flat file. I am using the following pipeline to write CSV. The input to the JSON Formatter is a document of arrays, each having a different number of properties. Can you please suggest a way to generate the CSV file ignoring all null values and writing only what is present?

Note: each input document to the CSV Formatter has a different set of properties.

Thanks in advance.
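One way to handle documents with differing properties, sketched here in plain Python with made-up sample documents: take the union of all keys as the header, then leave cells empty where a value is missing or null instead of writing the literal "None":

```python
import csv
import io

# Sample documents with differing property sets (illustrative data).
docs = [
    {"id": 1, "name": "alpha", "score": None},
    {"id": 2, "colour": "red"},
]

# Header = union of all keys, in first-seen order.
fieldnames = []
for doc in docs:
    for key in doc:
        if key not in fieldnames:
            fieldnames.append(key)

buf = io.StringIO()
# restval="" leaves absent properties as empty cells.
writer = csv.DictWriter(buf, fieldnames=fieldnames, restval="")
writer.writeheader()
for doc in docs:
    # Drop null values so they also become empty cells, not "None".
    writer.writerow({k: v for k, v in doc.items() if v is not None})

print(buf.getvalue())
```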
We Need You: Influence Our Next Big Thing

Hey there, I'm Jackie Curry, a new member of the user experience team at SnapLogic. We're working on some exciting improvements to SnapLogic and I need your help to influence our upcoming products through user research. We'd like to show you some early concepts of what we're working on to get your feedback and input. We want to know if these changes would help you do your job more efficiently. Plus, as a thank you, we're offering a $100 gift card incentive to all participants who complete the study. If you're interested, please email me at designresearch@snaplogic.com and I'll set up a time to chat. Don't worry, I promise to keep it brief!

Thank you,
Jackie

Jackie Curry
Principal User Experience Designer
jcurry@snaplogic.com
CSV to Workday Tenant

Submitted by @stodoroska from Interworks

This pipeline reads a CSV file and parses the content; the Workday Write snap then calls the web service operation Put_Applicant to write the data into a Workday tenant.

Configuration
If there is no match in the SQL Server lookup table, MKD is used as the default country code.

Sources: CSV file on the file share system
Targets: Workday tenant
Snaps used: File Reader, CSV Parser, SQL Server - Lookup, Mapper, Union, Workday Write

Downloads
CSV2Workday.slp (17.3 KB)
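The lookup-with-default step can be sketched in a few lines. The lookup contents below are illustrative; only the MKD fallback comes from the pipeline description:

```python
# Resolve a country code from a lookup table, falling back to "MKD" when
# there is no match, mirroring the SQL Server - Lookup configuration.
country_codes = {"Germany": "DEU", "France": "FRA"}  # illustrative entries

def country_code(name, default="MKD"):
    return country_codes.get(name, default)

# country_code("France") → "FRA"; country_code("Atlantis") → "MKD"
```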