Fetch data whose date is less than 10hrs from the existing date!
Hi Team, I'm trying to build a filter condition but haven't had any luck so far. My data has a field called Last_Updated, which stores dates in the format 2022-06-23 03:54:45. I want to keep only those records whose date is less than 10 hours before the current date. How can I achieve this? Should I use a Mapper or a Filter snap? I would really appreciate it if the logic behind this could be shared. If the format of the date stored in Last_Updated needs to be transformed as well, please let me know. Thanks!

Regards,
Darsh
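SnapLogic expressions aren't directly runnable here, so here is a hedged sketch of the filtering logic in plain Python. The field name and timestamp format are taken from the post; the `now` value and sample records are hypothetical:

```python
from datetime import datetime, timedelta

def within_last_10_hours(last_updated, now):
    """True when Last_Updated falls inside the 10 hours before `now`."""
    ts = datetime.strptime(last_updated, "%Y-%m-%d %H:%M:%S")
    return timedelta(0) <= now - ts < timedelta(hours=10)

# Hypothetical "current" time and input records.
now = datetime(2022, 6, 23, 12, 0, 0)
records = ["2022-06-23 03:54:45", "2022-06-22 01:00:00"]
kept = [r for r in records if within_last_10_hours(r, now)]
# kept -> ["2022-06-23 03:54:45"]
```

In a Filter snap, the analogous check would compare Date.parse($Last_Updated) against Date.now(), with the 10-hour window expressed in milliseconds; the exact expression depends on how the date is stored.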

Need to convert yyyymmdd to yyyy-mm-dd

Hi, I have an element:

jsonPath($, "$Z_ORD.IDOC.DK02[].DATE")

and its value comes through as 20210521 (yyyymmdd). I need to convert it to 2021-05-21 and map it to the target element. I am using a Mapper. I tried:

Date.parse(jsonPath($, "$Z_ORD.IDOC.DK02[].DATE", "yyyMMdd")).toLocaleDateTimeString({"format":"yyyy-MM-dd"}) || null

but I get the error:

Not-a-number (NaN) does not have a method named: toLocaleDateTimeString, found in: ...:"yyyy-MM-dd"}). Perhaps you meant: toString, toExponential, toPrecision, toFixed
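One likely cause of the NaN error is that the format string ended up as an argument to jsonPath rather than to Date.parse, though that can't be confirmed without the pipeline. As a hedged sketch, the yyyymmdd-to-yyyy-mm-dd conversion itself looks like this in plain Python:

```python
from datetime import datetime

def reformat_date(raw):
    """Convert a 'yyyymmdd' string such as '20210521' to 'yyyy-mm-dd';
    return None for missing or empty input (mirroring the `|| null`)."""
    if not raw:
        return None
    return datetime.strptime(raw, "%Y%m%d").strftime("%Y-%m-%d")

result = reformat_date("20210521")
# result -> "2021-05-21"
```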

Write expression for conditional check

Hi, I want to express the following XSLT logic as a SnapLogic expression. I need to map TestID:

<xsl:variable name="TestID">
  <xsl:if test="exists(node()/IDOC/A1[VW='F']/NO)">
    <xsl:copy-of select="node()/IDOC/A1[VW='F']/NO"></xsl:copy-of>
  </xsl:if>
  <xsl:if test="not(exists(node()/IDOC/A1[VW='F']/NO))">
    <xsl:copy-of select="node()/IDOC/A1[VW='F']/NP"></xsl:copy-of>
  </xsl:if>
</xsl:variable>

How do I do it?
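The XSLT boils down to an exists/fallback selection: take NO from the A1 segment with VW='F' when present, otherwise take NP. A hedged Python sketch of that logic, assuming a hypothetical dict-shaped IDOC:

```python
def pick_test_id(doc):
    """Return NO from the first A1 segment where VW == 'F' when present,
    otherwise NP -- mirroring the exists()/fallback in the XSLT."""
    for seg in doc.get("IDOC", {}).get("A1", []):
        if seg.get("VW") == "F":
            no = seg.get("NO")
            return no if no is not None else seg.get("NP")
    return None

doc = {"IDOC": {"A1": [{"VW": "F", "NO": None, "NP": "fallback"}]}}
result = pick_test_id(doc)
# result -> "fallback"
```

In a Mapper, the same shape is usually a ternary expression along the lines of `NO != null ? NO : NP` applied to the matching segment; the exact jsonPath depends on the document structure.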

CSV with header, detail, and trailer records on S3

I need to create a pipe-delimited text file. The file needs a single header record with two columns, an arbitrary number of detail records, and a single trailer record with three columns:

00|File Type Description
01|Data1|Data2|Data3|Data4|Data5|Data6|Data7|Data8|Data9|Data10
99|1|EOF

I thought I could use multiple CSV Formatter + File Writer snaps and then just APPEND the detail records and the trailer record, but it looks like the APPEND operation isn't supported on S3. Any ideas on how to accomplish this?
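One way around the missing APPEND is to assemble the whole file content first and perform a single write. A hedged Python sketch of that assembly (the trailer's middle field is assumed here to be the detail-record count, which is a guess from the sample):

```python
def build_flat_file(details):
    """Build header + pipe-delimited detail rows + trailer as one string,
    so only a single (non-append) write is needed."""
    header = "00|File Type Description"
    body = ["01|" + "|".join(row) for row in details]
    trailer = "99|%d|EOF" % len(details)   # count field is an assumption
    return "\n".join([header] + body + [trailer])

content = build_flat_file([["a", "b"], ["c", "d"]])
```

In a pipeline, the equivalent idea is to union the header, detail, and trailer streams in order ahead of a single formatter and File Writer, rather than writing three times.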

Join Merge on all rows

Hi folks, I found that using a Join snap with Merge to append data to another set only appends the merged element onto the first row of the second set. Is there a way to merge onto all rows, so that every row is augmented with the merged value? For example, merging aaa into the set bbb, ccc currently gives:

bbb aaa
ccc

The desired result is:

bbb aaa
ccc aaa

Thanks,
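The desired "broadcast" behavior — copying the merged fields onto every row instead of only the first — looks like this in a hedged Python sketch:

```python
def broadcast_merge(rows, extra):
    """Copy every field of `extra` onto each row, augmenting all rows
    rather than only the first."""
    return [dict(row, **extra) for row in rows]

rows = [{"v": "bbb"}, {"v": "ccc"}]
merged = broadcast_merge(rows, {"tag": "aaa"})
# merged -> [{"v": "bbb", "tag": "aaa"}, {"v": "ccc", "tag": "aaa"}]
```

Inside SnapLogic, a common workaround is an inner Join on a constant key added to both streams (e.g. a dummy field set to 1 in a Mapper on each side), which pairs the single merged document with every row; treat the details as version-dependent.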

Writing flat file

Hi Team, for my current use case I need to write a flat file. I am using the following pipeline to write CSV. The input to the JSON Formatter is a document of arrays, each having a different number of properties. Can you please suggest a way to generate the CSV file that ignores all null values and writes only what is present?

Note: each input document to the CSV Formatter has a different set of properties. Thanks in advance!
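A hedged Python sketch of the formatting step: drop null-valued keys per document, then build the header from the union of the remaining keys, leaving blank cells where a document lacks a field (whether blanks are acceptable depends on the downstream consumer):

```python
import csv
import io

def to_csv(docs):
    """Format documents with differing property sets as CSV, omitting
    null values; missing fields come out as empty cells."""
    cleaned = [{k: v for k, v in d.items() if v is not None} for d in docs]
    fields = sorted({k for d in cleaned for k in d})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(cleaned)
    return buf.getvalue()

output = to_csv([{"a": 1, "b": None}, {"b": 2, "c": 3}])
```

In a pipeline, the null-dropping half can often be done in a Mapper before the CSV Formatter, for instance by filtering each document's entries; the exact expression depends on the document shape.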

Pass parameter from child pipeline to parent pipeline

I have a parent pipeline that executes a child pipeline. The child pipeline's sole purpose is to write a single row to a logging table. After writing to the logging table, the child pipeline runs a Select snap to get the newly inserted log_id. My question: is it possible to pass the log_id from the child pipeline back to the parent pipeline as a parameter? I know it's possible to pass it as a document (and join it with other documents in my parent pipeline), but that causes all sorts of other issues. The main question: is it possible to set a parent's parameter from a child pipeline?

Prefixing/Suffixing multiple Input Schema in Mapping table expressions (i.e. Target Path) under a mapper snap

Hi Team, can we prefix/suffix multiple field names in the Mapper snap's mapping-table expressions in a single shot? As per the above screenshot, I can either:

- select all fields as-is, or
- manually select the required field names, apply any necessary transformations, and save each with the same or a different name in the target path.

Let's assume I have n (n >= 10) Snowflake tables to read and eventually join, each with 300+ columns, and I need to prefix/suffix those column names in my SnapLogic pipeline with something that distinguishes each table's fields. Is it possible to do that inside a Mapper snap? I'm fine keeping the same field names as the Input Schema; the only addition would be a prefix or suffix, saving the manual effort on 300+ columns for each of the n tables I read. The second screenshot shows how I want each field name to appear (either a prefix or a suffix; I'm not looking for a combination at this point).

P.S.: If there is any snap other than Mapper that does the job, I would appreciate the help and leads. Thanks in advance.

Best Regards,
DT
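The transformation being asked for is a bulk key rename; a hedged Python sketch (the prefix string is hypothetical):

```python
def prefix_keys(doc, prefix):
    """Rename every field in a document in one shot by prepending a
    table-specific prefix (e.g. 'cust_')."""
    return {prefix + k: v for k, v in doc.items()}

row = {"id": 1, "name": "x"}
renamed = prefix_keys(row, "cust_")
# renamed -> {"cust_id": 1, "cust_name": "x"}
```

In a Mapper, this kind of bulk rename would typically be expressed with an object method such as mapKeys, if your SnapLogic version provides it (an expression along the lines of `$.mapKeys((value, key) => "cust_" + key)` targeting `$`); treat that as an assumption to verify against the documentation.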

Time difference calculation excluding weekends

Does anyone have a pattern for determining the difference between two timestamps and adding that duration to a third timestamp, with both calculations excluding weekends? There are three inputs and one output:

1. Baseline start timestamp
2. Baseline end timestamp
3. Target start timestamp

The logic finds the time difference between 1 and 2, excluding weekends, and adds that duration to 3, again excluding weekends, to land on the target end timestamp. For example, with a reference range of last Friday 1/28 through Thursday 2/3 and a target start of Friday 2/4: the 4-day difference would be calculated and added to 2/4; excluding the coming weekend, the result would be Thursday 2/10.

Thanks!
Adam
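A hedged Python sketch of the two weekend-excluding steps, at whole-day granularity (matching the example; intraday precision would need more care):

```python
from datetime import date, timedelta

def business_days_between(start, end):
    """Count weekday steps from start to end, skipping Sat/Sun."""
    days, d = 0, start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:        # Mon=0 .. Fri=4
            days += 1
    return days

def add_business_days(start, n):
    """Advance n weekdays past `start`, skipping Sat/Sun."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:
            n -= 1
    return d

# The post's example: Fri 2022-01-28 .. Thu 2022-02-03, target start Fri 2022-02-04.
diff = business_days_between(date(2022, 1, 28), date(2022, 2, 3))   # 4
target_end = add_business_days(date(2022, 2, 4), diff)
# target_end -> date(2022, 2, 10), i.e. Thursday 2/10
```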