How To Generate Rank with Partitions
Hi, I need to generate the rank for the data below, where the data should be partitioned by No and month of Date and ordered by Value, and I want to get the top 3 ranked categories for each No and each month.

Input:

No  Category  Date        Value
1   Dog       01-01-2022  32
1   Rabbit    01-01-2022  95
1   Fish      01-02-2022  4
1   Ox        01-02-2022  23
2   Cat       01-01-2022  4
2   Mouse     01-01-2022  12
2   Woman     01-02-2022  66
2   Man       01-02-2022  56
3   Bird      01-01-2022  54
3   Bee       01-01-2022  43
3   Cow       01-02-2022  32
3   Pig       01-02-2022  89

Expected output:

No  Category  Date        Value  Rank
1   Rabbit    01-01-2022  95     1
1   Ox        01-02-2022  23     1
1   Dog       01-01-2022  32     2
1   Fish      01-02-2022  4      2
2   Mouse     01-01-2022  12     1
2   Woman     01-02-2022  66     1
2   Cat       01-01-2022  4      2
2   Man       01-02-2022  56     2
3   Bird      01-01-2022  54     1
3   Cow       01-02-2022  32     1
3   Bee       01-01-2022  43     2
3   Pig       01-02-2022  89     2
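In SQL, this is a window-function `RANK()` (or `DENSE_RANK()`) over a partition of No plus the month of Date, ordered by Value descending, with a filter keeping ranks 1 to 3. A runnable sketch using Python's built-in sqlite3 on a subset of the rows above; the table name is illustrative, and the dates are assumed to be DD-MM-YYYY, so `substr(Date, 4, 7)` extracts the month-year:

```python
import sqlite3

rows = [
    (1, "Dog", "01-01-2022", 32), (1, "Rabbit", "01-01-2022", 95),
    (1, "Fish", "01-02-2022", 4), (1, "Ox", "01-02-2022", 23),
    (2, "Cat", "01-01-2022", 4), (2, "Mouse", "01-01-2022", 12),
    (2, "Woman", "01-02-2022", 66), (2, "Man", "01-02-2022", 56),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (No INTEGER, Category TEXT, Date TEXT, Value INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", rows)

# Rank within each (No, month) partition, highest Value first,
# then keep only the top 3 per partition.
ranked = con.execute("""
    SELECT No, Category, Date, Value, rnk
    FROM (
        SELECT *, RANK() OVER (
                   PARTITION BY No, substr(Date, 4, 7)  -- month-year, assuming DD-MM-YYYY
                   ORDER BY Value DESC
               ) AS rnk
        FROM t
    )
    WHERE rnk <= 3
    ORDER BY No, substr(Date, 4, 7), rnk
""").fetchall()
```

The same `PARTITION BY ... ORDER BY ... DESC` clause carries over to Oracle, Snowflake, or any engine with window functions.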
Keep data "through" snaps which don't provide original input under $original in their output

SnapLogic isn’t consistent in how data is passed through various snaps. It’s often necessary to keep data obtained early in the flow (e.g. from the source) for use later (e.g. when writing to the target). However, some snaps, like the XML Parser, require that only the data to be parsed is passed as input, while also not supporting binary headers or similar mechanisms for forwarding data from one side of the snap to the other, effectively removing everything except the data the snap cares about from the stream. There’s an enhancement request for fixing this posted here somewhere, and we’ve written about this to our SnapLogic contacts, so hopefully the following workaround won’t be necessary for very long, but here it is:

1. Move the “problem” snap to a child pipeline and call it via the Pipeline Execute snap, making sure “Reuse executions to process documents” is not checked (the workaround won’t work if it is).
2. If needed, at the start of the child pipeline, remove any data not to be used by the “problem” snap.
3. The Pipeline Execute snap will output the original input data under $original (as the “problem” snap should have done).
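Conceptually, the workaround does something like the following plain-Python sketch of the data flow (not SnapLogic code; `child` is a stand-in for the child pipeline containing the problem snap):

```python
def pipeline_execute(doc, child_pipeline):
    """Mimic Pipeline Execute without execution reuse: run the child
    pipeline on the document, then attach the caller's full input
    document under "original", like the snap's $original field."""
    result = child_pipeline(doc)
    return {**result, "original": doc}

# Stand-in child pipeline: pretend to parse XML and, like the
# XML Parser snap, drop everything except what it produces.
def child(doc):
    return {"parsed": doc["xml"].upper()}

out = pipeline_execute({"xml": "<a/>", "source_id": 42}, child)
```

The key point is that the wrapping happens outside the problem snap, so nothing the snap discards is lost.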
Prefixing/Suffixing multiple Input Schema in Mapping table expressions (i.e. Target Path) under a mapper snap

Hi Team,

Can we prefix/suffix multiple field names in the Mapping table expressions under a Mapper snap in a single shot? As per the above screenshot, I can either:

- Select all fields as-is, or
- Manually select the required field names, do any necessary transformations, and then save each with the same or a different name in the Target Path.

Let’s assume I have n (where n >= 10) Snowflake tables to read and eventually join. Each table has 300+ columns, and I need to prefix/suffix those column names in my SnapLogic pipeline with something that lets me distinguish each field name. Is it possible to do that inside a Mapper snap? I’m fine using the same field names as the Input Schema with only a prefix/suffix added, to save manual effort on 300+ columns across the n tables I read. The screenshot below shows how I would want each field name to appear with reduced manual effort (it could be either a prefix or a suffix; I’m not looking for a combination at this point).

P.S.: If any snap other than the Mapper does the job, I would appreciate the help and leads. Thanking in advance.

Best Regards,
DT
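One way to avoid per-column mapping is to rename all keys at once. In the SnapLogic expression language, objects support a `mapKeys` method, so mapping `$` to something along the lines of `$.mapKeys((value, key) => "t1_" + key)` should rename every field in one expression (check the expression-language docs for the exact signature on your release). The same idea in plain Python, with a hypothetical `t1_` prefix:

```python
def prefix_keys(doc: dict, prefix: str) -> dict:
    # Rename every top-level field by prepending a table-specific
    # prefix, leaving the values untouched.
    return {prefix + key: value for key, value in doc.items()}

row = {"ID": 1, "NAME": "Alice"}
prefixed = prefix_keys(row, "t1_")
```

Using a different prefix per source table keeps the 300+ column names distinguishable after the join without touching any column individually.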
REST API service endpoint returned error result: Status Code = 403

When configuring a REST Post snap against the Mailjet API, it reports the following message:

Reason: REST API service endpoint returned error result: status code = 403, reason phrase = Forbidden, refer to the error_entity field in the error view document for more details
Resolution: Please check the values of Snap properties.
Error Fingerprint[0] = efp:com.snaplogic.snap.api.rest.89xhsFt7

When performing the same request in Postman, the execution succeeds.
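A 403 from an endpoint that works in Postman usually points at authentication or headers that Postman sends and the snap does not. Mailjet's REST API uses HTTP Basic auth with the API key as username and the API secret as password; a small Python sketch of the Authorization header the endpoint expects (the key/secret values are placeholders), which you can compare against what the snap's account settings produce:

```python
import base64

def mailjet_auth_header(api_key: str, api_secret: str) -> dict:
    # HTTP Basic auth: base64 of "key:secret" in an Authorization header.
    token = base64.b64encode(f"{api_key}:{api_secret}".encode("ascii")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

headers = mailjet_auth_header("my_api_key", "my_api_secret")
```

Comparing the snap's outgoing request (via the error view's error_entity, or a proxy) against Postman's request is usually the fastest way to find the missing piece.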
Multiple file loads in single pipeline

Hi Team,

I have a use case where I need to process multiple files (like A, B, C) and load them into tables (like X, Y, Z). I can set up this configuration as a control table: when file A is available, load it into table X; for file B, load it into table Y; for file C, load it into table Z. Please let me know the best possible ways, or an optimized sample pipeline, to do this load into Oracle tables.
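One common pattern is to read the control table once and route each incoming file name to its target table, failing fast when a file has no entry. A plain-Python sketch of that lookup (the file-to-table mapping is taken from the example above):

```python
# Control table: file name -> target Oracle table.
CONTROL_TABLE = {"A": "X", "B": "Y", "C": "Z"}

def target_table(filename: str) -> str:
    """Return the target table for a file, or raise if the control
    table has no entry (so bad files are caught before loading)."""
    try:
        return CONTROL_TABLE[filename]
    except KeyError:
        raise ValueError(f"no control-table entry for file {filename!r}")
```

In a pipeline, the same lookup can drive a parameterized child pipeline (one generic load flow, with the table name passed as a parameter) instead of one branch per file.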
Writing flat file

Hi Team,

For my current use case, I need to write a flat file. I am using the following pipeline to write CSV. The input to the JSON Formatter is a document of arrays, each having a different number of properties. Can you please suggest a way to generate the CSV file ignoring all null values and writing only what is present?

Note: each input document to the CSV Formatter has a different set of properties.

Thanks in advance.
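Outside the CSV Formatter, the requirement amounts to: use the union of keys across all documents as the header, and leave a cell empty wherever a document lacks a key or holds null. A sketch with Python's standard csv module:

```python
import csv
import io

def to_csv(docs):
    # Header = union of keys across all documents, in first-seen order.
    fields = []
    for doc in docs:
        for key in doc:
            if key not in fields:
                fields.append(key)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, restval="")
    writer.writeheader()
    for doc in docs:
        # Drop null values so they come out as empty cells,
        # like the missing keys do via restval.
        writer.writerow({k: v for k, v in doc.items() if v is not None})
    return buf.getvalue()

text = to_csv([{"a": 1, "b": None}, {"b": 2, "c": 3}])
```

The same shape works in a pipeline: merge the key sets first (or configure the formatter's header from a known superset), then strip nulls in a Mapper before formatting.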
CSV with header, detail, and trailer records on S3

I need to create a pipe-delimited text file. The file needs to have a single header record row with two columns, any number of detail record rows, and a single trailer record row with three columns:

00|File Type Description
01|Data1|Data2|Data3|Data4|Data5|Data6|Data7|Data8|Data9|Data10
99|1|EOF

I thought I could use multiple CSV parsers + File Writer snaps and then just APPEND the detail records and the trailer record, but it looks like the APPEND function isn’t supported on S3. Any ideas on how to accomplish this?
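Since appending isn't available on S3, one workaround is to assemble the complete file content first and perform a single write. A Python sketch of that assembly (the header and trailer fields are taken verbatim from the example above; a short detail row stands in for the 10-column one):

```python
def build_flat_file(header_fields, detail_rows, trailer_fields):
    # Assemble header, details, and trailer into one pipe-delimited
    # string so a single write (e.g. one File Writer pass) suffices.
    lines = ["|".join(header_fields)]
    lines += ["|".join(row) for row in detail_rows]
    lines.append("|".join(trailer_fields))
    return "\n".join(lines) + "\n"

content = build_flat_file(
    ["00", "File Type Description"],
    [["01", "Data1", "Data2"]],
    ["99", "1", "EOF"],
)
```

In a pipeline, the equivalent is to union the three record streams in order (header, details, trailer) into one document stream before a single formatter + File Writer, rather than writing three times.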
JSON Generator with 'Or' operator

Hi All,

I am reading an Excel file that has multiple worksheets. I need to check whether each worksheet name matches the sheet name specified in the JSON Generator snap. The customer has recently come back and said that one of the worksheet names can be prefixed and suffixed with ‘|’, while in some source files the worksheet name comes without ‘|’:

|Dashboard|
Dashboard

As of now, in the JSON Generator snap I am using the code below. Is it possible to include an ‘OR’ in the same JSON code? If not, how can we process worksheet names both with and without the ‘|’ character?

Cheers,
Vinny
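Rather than enumerating both spellings with an OR, one option is to normalize the worksheet name before comparing, i.e. strip any leading/trailing ‘|’ characters. A Python sketch of the check (in a SnapLogic expression the same idea might look like `$sheetName.replace(/^\|+|\|+$/g, '') == 'Dashboard'`, though the exact expression depends on your pipeline fields):

```python
def sheet_matches(actual: str, expected: str) -> bool:
    # Treat "|Dashboard|" and "Dashboard" as the same worksheet name:
    # trim whitespace, then any leading/trailing pipe characters.
    return actual.strip().strip("|") == expected

names = ["|Dashboard|", "Dashboard", "Summary"]
matches = [n for n in names if sheet_matches(n, "Dashboard")]
```

Normalizing keeps the JSON Generator's sheet list to one canonical name per sheet instead of one entry per spelling.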
Database Poller

Hi Team,

I have a near-real-time requirement where data from an Oracle database has to be polled/picked and inserted/updated in MongoDB every 3 seconds. Is there any possibility of doing this in an Ultra pipeline, where the polling is done from the Oracle database? Could you please help us with a design or an approach to solve the problem?

Thanks,
Mohammed Suhail
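Whatever orchestrates it (an Ultra pipeline fed by a trigger, a scheduled task, or a script), the core is a watermark-based poll loop: remember the last key or timestamp seen, fetch only newer rows, and upsert them. A plain-Python sketch, where `fetch_since` and `apply` are hypothetical stand-ins for the Oracle query and the MongoDB upsert:

```python
import time

def poll_loop(fetch_since, apply, interval=3.0, max_cycles=None):
    """Repeatedly fetch rows newer than the last watermark and hand
    them to apply(); sleep `interval` seconds between cycles.
    fetch_since(watermark) -> (rows, new_watermark)."""
    watermark = None
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        rows, watermark = fetch_since(watermark)
        if rows:
            apply(rows)
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval)
```

The Oracle side needs a monotonic column (a sequence ID or last-modified timestamp) to serve as the watermark; without one, every 3-second poll degenerates into a full-table comparison.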