SnapLogic DateTime Conversion Guidelines
The SnapLogic Elastic Integration Platform's native DATETIME data type is time zone-aware. For service endpoints that expose date, datetime, time, and timestamp data types that are not time zone-aware, the SnapLogic Expression Language can be used to easily convert to the supported 3rd party types. The matrix below provides sample conversions between the SnapLogic DATETIME variable, $snapDateTime, and several known 3rd party data types normalized to the US/Pacific time zone.

NOTE: For 3rd party DATETIME / TIMESTAMP data types that do not support time zones, the samples assume that these values are being stored normalized to the UTC (Coordinated Universal Time) time zone. In addition, links to reference documentation for 3rd party data types have been provided where applicable.

| Service/Endpoint | Data Type | Sample Conversion Expression |
| --- | --- | --- |
| Amazon Redshift | DATE | LocalDate.parse($snapDateTime.toLocaleDateString('{"timeZone":"US/Pacific"}')) |
| Amazon Redshift | TIMESTAMP [1] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"UTC"}')) |
| MongoDB | DATETIME [3] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"US/Pacific"}')) |
| MySQL | DATE | LocalDate.parse($snapDateTime.toLocaleDateString('{"timeZone":"US/Pacific"}')) |
| MySQL | DATETIME [1] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"UTC"}')) |
| MySQL | TIME [1] | LocalTime.parse($snapDateTime.toLocaleTimeString('{"timeZone":"US/Pacific"}')) |
| MySQL | TIMESTAMP [2] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"US/Pacific"}')) |
| Oracle | DATE | LocalDate.parse($snapDateTime.toLocaleDateString('{"timeZone":"US/Pacific"}')) |
| Oracle | TIMESTAMP [1] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"UTC"}')) |
| Oracle | TIMESTAMP WITH TIME ZONE [3] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"US/Pacific"}')) |
| Oracle | TIMESTAMP WITH LOCAL TIME ZONE | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"US/Pacific"}')) |
| PostgreSQL | DATE | LocalDate.parse($snapDateTime.toLocaleDateString('{"timeZone":"US/Pacific"}')) |
| PostgreSQL | TIME [1] | LocalTime.parse($snapDateTime.toLocaleTimeString('{"timeZone":"UTC"}')) |
| PostgreSQL | TIME WITH TIME ZONE [3] | LocalTime.parse($snapDateTime.toLocaleTimeString('{"timeZone":"US/Pacific"}')) |
| PostgreSQL | TIMESTAMP [1] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"UTC"}')) |
| PostgreSQL | TIMESTAMP WITH TIME ZONE [3] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"US/Pacific"}')) |
| SAP HANA | DATE | LocalDate.parse($snapDateTime.toLocaleDateString('{"timeZone":"US/Pacific"}')) |
| SAP HANA | TIME | LocalTime.parse($snapDateTime.toLocaleTimeString('{"timeZone":"US/Pacific"}')) |
| SAP HANA | SECONDDATE | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"UTC"}')) |
| SAP HANA | TIMESTAMP [1] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"UTC"}')) |
| Salesforce | DATE | LocalDate.parse($snapDateTime.toLocaleDateString('{"timeZone":"US/Pacific"}')) |
| Salesforce | DATETIME [1] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"UTC"}')) |
| Salesforce | TIME | LocalTime.parse($snapDateTime.toLocaleTimeString('{"timeZone":"US/Pacific"}')) |
| SQL Server | DATETIME [1] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"UTC"}')) |
| SQL Server | DATETIME2 [1] | LocalDateTime.parse($snapDateTime.toLocaleDateTimeString('{"timeZone":"UTC"}')) |

[1] Does not provide time zone support. If provided, the time zone offset will be stripped.
[2] Supports time zone. The time zone is not stored internally; by default, on a SELECT, the value is normalized to the time zone set for the user's local session.
[3] Supports time zone.

Also see Database Data Types in the documentation.
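Every row in the matrix follows the same pattern: render the time zone-aware DATETIME in the target zone, then drop the offset so the result parses as a Local* type. A sketch of the same idea in Python (the sample instant is made up for illustration):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def to_local_naive(dt: datetime, tz_name: str) -> datetime:
    """Render an aware datetime in the target zone, then strip the offset,
    mirroring LocalDateTime.parse($snapDateTime.toLocaleDateTimeString(...))."""
    return dt.astimezone(ZoneInfo(tz_name)).replace(tzinfo=None)

snap_dt = datetime(2023, 3, 14, 18, 30, tzinfo=timezone.utc)
print(to_local_naive(snap_dt, "US/Pacific"))  # 2023-03-14 11:30:00
print(to_local_naive(snap_dt, "UTC"))         # 2023-03-14 18:30:00
```

This is why the footnote [1] types use "UTC" in the option object: with no offset stored, normalizing every write to one zone keeps the values comparable.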
How to remove blank rows from an input flat file?

Hi Team,

How can I remove all the blank rows from an input flat file? I cannot use a condition restricted to a single column, as the other records are needed. All I want to do is filter out all rows that don't have any data in any column (like rows 4, 5, 7, and 8 shown in the screenshot below). The desired final output is shown in the second screenshot.

Thanking in advance.

Regards,
DT
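One way to express the filter described above: keep a row unless every one of its fields is empty. A Python sketch of that test (the field names and sample rows are illustrative, not from the actual file):

```python
rows = [
    {"id": "1", "name": "Alice", "city": "Austin"},
    {"id": "", "name": "", "city": ""},        # fully blank: drop
    {"id": "2", "name": "Bob", "city": ""},    # partially filled: keep
    {"id": None, "name": None, "city": None},  # fully blank: drop
]

def is_blank(row: dict) -> bool:
    # A row is blank when every value is None or whitespace-only.
    return all(v is None or str(v).strip() == "" for v in row.values())

kept = [r for r in rows if not is_blank(r)]
print(kept)  # keeps only the Alice and Bob rows
```

In a pipeline the same predicate would live in a Filter snap, applied over all of the document's values rather than one named column.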
Capitalize First Letter of Every Word in a String

Is there a good way to capitalize the first letter of every word in a string, with the rest lowercase, natively in a Mapper or other Snap? For example:

fort worth => Fort Worth
SAN ANTONIO => San Antonio

I see the upperFirst() and capitalize() functions, but they only work on the first character of the string, not every word.
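When no single built-in covers it, a common workaround is split-capitalize-join: split on spaces, capitalize each word, and rejoin (in the expression language this would look something like $text.split(' ').map(w => w.capitalize()).join(' ') — treat that as a sketch, not verified syntax). The same logic in Python:

```python
def capitalize_words(s: str) -> str:
    # Lowercase the tail of each word so SAN ANTONIO -> San Antonio,
    # then uppercase the first letter of each space-separated word.
    return " ".join(w[:1].upper() + w[1:].lower() for w in s.split(" "))

print(capitalize_words("fort worth"))   # Fort Worth
print(capitalize_words("SAN ANTONIO"))  # San Antonio
```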
How to merge all objects in an array

I have an array named "group" that includes object items, as in the picture below. How can I merge all the object items into one object? My expected result is shown in the second picture. But bear in mind that the number of items is unknown; it can be 3, 4, 5, ... (n).

Thank you!
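Because the item count is unknown, the merge has to be a fold over the array rather than a fixed expression. In Python the equivalent is a reduce over dict merges, where keys from later items win on collision (the sample array is illustrative):

```python
from functools import reduce

group = [{"a": 1}, {"b": 2}, {"c": 3}]

# Fold the array of objects into a single object.
merged = reduce(lambda acc, obj: {**acc, **obj}, group, {})
print(merged)  # {'a': 1, 'b': 2, 'c': 3}
```

The empty dict as the initial accumulator means the fold also behaves sensibly when the array is empty.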
Cannot format date correctly to YYYY-MM-DD

Hi there,

I've looked at other articles addressing this, but none has helped me so far. I'm looking to convert a date from DD/MM/YYYY format to YYYY-MM-DD. The date will always be populated. I've tried the following expressions, but they won't process correctly:

Date.parse($original['Due Date'], "yyyy MM dd")
LocalDate.parse($original['Due Date'], "yyyy/MM/dd")

If I use the above expressions and the date is 06/04/2021, the output date becomes 0006-04-18. It's inexplicable.
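The 0006-04-18 output is the giveaway: the pattern passed to the parser doesn't describe the input, so the leading 06 is read as the year. The fix is two steps: parse with a pattern that matches the data (dd/MM/yyyy), then reformat to ISO. The same two steps in Python:

```python
from datetime import datetime

due_date = "06/04/2021"  # day/month/year

# Parse with a pattern matching the input, then format as YYYY-MM-DD.
parsed = datetime.strptime(due_date, "%d/%m/%Y")
print(parsed.strftime("%Y-%m-%d"))  # 2021-04-06
```

In the expression language the likely equivalent is passing the matching pattern to the parse call, e.g. something like Date.parse($original['Due Date'], "dd/MM/yyyy") before formatting — check the pattern letters against the platform's docs.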
Transformation rules inside Mapper snap not exported to output file

Hi Team,

I have found that one of the transformation rules applied on a date field is not getting exported to the final output file (i.e. an Excel file) when the pipeline is executed. The output preview on the snap does show the right (i.e. expected) data, and pipeline validation also shows the right data; however, the file generated after pipeline execution doesn't contain those details.

I have tried the following:
- Validated the pipeline via Shift+Validate
- Tweaked the transformation rule, saved the pipeline, and checked the output in the preview
- Validated the preview file (with the default 50 records)
- Deleted the file generated by "validate & execute" on the File Writer (with the default 50 records) so that a new instance of the data comes in
- Executed the pipeline with different file names multiple times

It's a simple pipeline consisting of Snowflake Execute --> Mapper --> Excel Formatter --> File Writer.

Sharing some screenshots to help understand it better:
(1) Pipeline with 4 snaps
(2) Snowflake Execute output preview in JSON format (the highlighted yellow fields need transformation)
(3) Transformation rules on date fields
(4) Transformation rule: $Expire_Date.toString().replace('{Timestamp=',"").replace('.000+0000}',"Z")
(5) Output preview after the transformation rule
(6) Mapper output preview shows different data -> I've observed at certain times that the transformation rules won't work, so I go inside the Mapper, modify the rule a bit, and bring it back to the default rule so that I can save it (Save is not enabled if there is no change, thus this workaround)
(7) As Shift+Validate didn't work, I had to change the transformation rule and save it; this is the output preview on the Mapper snap
(8) Settings under the Excel Formatter snap for reference
(9) Settings under the File Writer snap for reference
(10) File generated with the default 50 records due to "validate & execute" on the File Writer snap
(11) Downloaded the file generated in step 10; below is the (expected) output
(12) Executed pipeline
(13) Output file of the executed pipeline

I've seen this happening numerous times, particularly with fields containing timestamps, in other pipelines too. Am I doing something wrong here? Are there any settings that need to be changed in the Mapper, Excel Formatter, or File Writer snap?

Thanking in advance for your valuable time and suggestions on this one.

Best Regards,
DT
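For reference, the rule in step (4) is a plain string cleanup: strip the {Timestamp=...} wrapper and swap the .000+0000 suffix for Z. The same logic in Python (the sample value is inferred from the rule's patterns, not taken from the actual data):

```python
expire_date = "{Timestamp=2023-03-14T00:00:00.000+0000}"

# Remove the wrapper prefix, then replace the millisecond/offset
# suffix with a Zulu marker, exactly as the Mapper rule chains replace().
cleaned = (expire_date
           .replace("{Timestamp=", "")
           .replace(".000+0000}", "Z"))
print(cleaned)  # 2023-03-14T00:00:00Z
```

Note this kind of rule is brittle: it only fires when the value carries exactly the .000+0000 suffix, which is one plausible reason a preview and an executed run can disagree.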
Remove JSON Key when value is "" or null

I have a JSON:

[
  {
    "opportunity_Id": "Abc1",
    "sales_account_id": "Abc2",
    "account_name": "Pitney Bowes Inc.",
    "city": "Shelton",
    "address": "27 Waterview Dr",
    "requested_by": "Abc3@Abc3.com",
    "postal_code": 6484,
    "fax": "",
    "entitlement_group_id": "",
    "message_source": "Snaplogic",
    "contacts": [
      {
        "last_name": "T",
        "first_name": "",
        "email": "Abc4@Abc4.com",
        "contact_role": "AdminContact",
        "contact_id": "Abc5"
      }
    ],
    "entitlements": [
      {
        "quote_line_id": "Abc6",
        "transaction_type": "Quote",
        "status": "Active",
        "start_date": { "_snaptype_datetime": "2023-03-14T00:00:00.000 UTC" },
        "sku": "Abc7",
        "sfdc_product_id": "Abc8",
        "product_type": "Primary",
        "product_name": "Abc9",
        "product_id": "Abc10",
        "license_type": "",
        "license_group_id": "",
        "license_features": [ { "unit_of_measure": "HOST", "quantity": 1 } ],
        "is_trial": "false",
        "entitlement_contacts": [
          {
            "last_name": "T",
            "first_name": "",
            "email": "Abc11@Abc11.com",
            "contact_role": "AdminContact",
            "contact_id": "Abc12"
          }
        ],
        "end_date": { "_snaptype_datetime": "2024-03-14T00:00:00.000 UTC" },
        "configuration": [ { "plan_type": "" } ],
        "channel_platform": "Salesforce CPQ",
        "delivery_number": "",
        "delivery_line": "",
        "serial_number": "",
        "marketplace_product_id": "Abc13"
      }
    ]
  }
]

I want the output as:

[
  {
    "opportunity_Id": "Abc1",
    "sales_account_id": "Abc2",
    "account_name": "Pitney Bowes Inc.",
    "city": "Shelton",
    "address": "27 Waterview Dr",
    "requested_by": "Abc3@Abc3.com",
    "postal_code": 6484,
    "message_source": "Snaplogic",
    "contacts": [
      {
        "last_name": "T",
        "email": "Abc4@Abc4.com",
        "contact_role": "AdminContact",
        "contact_id": "Abc5"
      }
    ],
    "entitlements": [
      {
        "quote_line_id": "Abc6",
        "transaction_type": "Quote",
        "status": "Active",
        "start_date": { "_snaptype_datetime": "2023-03-14T00:00:00.000 UTC" },
        "sku": "Abc7",
        "sfdc_product_id": "Abc8",
        "product_type": "Primary",
        "product_name": "Abc9",
        "product_id": "Abc10",
        "license_features": [ { "unit_of_measure": "HOST", "quantity": 1 } ],
        "is_trial": "false",
        "entitlement_contacts": [
          {
            "last_name": "T",
            "email": "Abc11@Abc11.com",
            "contact_role": "AdminContact",
            "contact_id": "Abc12"
          }
        ],
        "end_date": { "_snaptype_datetime": "2024-03-14T00:00:00.000 UTC" },
        "channel_platform": "Salesforce CPQ",
        "marketplace_product_id": "Abc13"
      }
    ]
  }
]

Basically, I want to remove/ignore the JSON keys when the value is "" or null. This is urgent!
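One generic approach (e.g. in a Script snap, sketched here in Python) is a recursive walk that drops any key whose value is "" or null at every nesting level:

```python
def strip_empty(value):
    """Recursively remove dict keys whose value is "" or None.
    Lists are cleaned element by element; other values pass through."""
    if isinstance(value, dict):
        return {k: strip_empty(v) for k, v in value.items()
                if v is not None and v != ""}
    if isinstance(value, list):
        return [strip_empty(v) for v in value]
    return value

doc = {"fax": "", "city": "Shelton",
       "contacts": [{"first_name": "", "email": "a@b.com"}]}
print(strip_empty(doc))  # {'city': 'Shelton', 'contacts': [{'email': 'a@b.com'}]}
```

Note that the expected output above also drops "configuration" entirely once its only member is emptied; that needs an extra pruning pass for objects and arrays that end up empty after the first walk.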
Fetch data whose date is less than 10hrs from the existing date!

Hi Team,

I'm trying to achieve a filter condition but haven't had any luck so far. My data has a field called Last_Updated, which has dates stored in the format 2022-06-23 03:54:45. I want to consider only those records whose date is less than 10 hours older than the current date. How can I achieve this? Should I use a Mapper or a Filter snap? I would really appreciate it if the logic behind this could be shared. If the format of the date stored in Last_Updated needs to be transformed as well, please let me know.

Thanks!

Regards,
Darsh
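Once Last_Updated is parsed into a real date value, the condition is simply: now minus Last_Updated is under 10 hours. A Python sketch of the logic (the "current date" is pinned so the example is reproducible; the records are illustrative):

```python
from datetime import datetime, timedelta

records = [
    {"id": 1, "Last_Updated": "2022-06-23 03:54:45"},  # ~5 hours old: keep
    {"id": 2, "Last_Updated": "2022-06-20 10:00:00"},  # days old: drop
]
now = datetime(2022, 6, 23, 9, 0, 0)  # stand-in for the current time

def updated_within(rec, hours=10):
    # Parse the stored string, then compare the age against the window.
    last = datetime.strptime(rec["Last_Updated"], "%Y-%m-%d %H:%M:%S")
    return now - last < timedelta(hours=hours)

recent = [r for r in records if updated_within(r)]
print([r["id"] for r in recent])  # [1]
```

A Filter snap is the natural home for this predicate, since it passes or drops whole documents; no reformatting of Last_Updated is needed beyond parsing it for the comparison.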
How to get the file name(s) from a Multi File Reader

I have a Multi File Reader reading a series of S3 files using a wildcard and writing the data to Snowflake, with a Mapper in between. Functionally, everything is working as expected. I'd like to get the name of the file from which the data was read and write it to a Meta_FileName column. How do I retrieve the file name from the Multi File Reader? I'm assuming it's an expression to be added in the Mapper, but I'm not sure. TIA!
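Setting the snap-specific answer aside, the underlying pattern is to stamp each record with its source path at read time, before the streams are merged. A stand-alone Python sketch of that pattern (the directory layout, sample files, and column name are illustrative):

```python
import csv
import glob
import os
import tempfile

# Create two sample files to stand in for the wildcard-matched S3 objects.
tmp = tempfile.mkdtemp()
for name, body in [("a.csv", "id\n1\n"), ("b.csv", "id\n2\n")]:
    with open(os.path.join(tmp, name), "w") as fh:
        fh.write(body)

rows = []
# Read every matching file and tag each row with its source file name,
# the equivalent of populating a Meta_FileName column per record.
for path in sorted(glob.glob(os.path.join(tmp, "*.csv"))):
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            row["Meta_FileName"] = os.path.basename(path)
            rows.append(row)

print(rows)
```

The key point is that the tagging happens inside the per-file loop; once records from multiple files are interleaved downstream, the origin is no longer recoverable.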