Re: ELT Snaps in ELT Pipelines do not process data in Design Mode

I understand your second point, and that is something a customer's development team should be considering anyway to ensure usage costs are managed. As for your first point: other than the obvious pushdown behavior, why would these snaps behave differently than the DML snaps in other snap packs? Isn't "Design Mode" just doing a limited run of the pipeline snaps? The only way to validate your pipeline is to run it, and if there are any exceptions you need to be an admin in the environment to see what SnapLogic has sent to the DB. We have already discussed this with your product team, and they told us this was a large level of effort and that they needed more customers to request this behavior before putting it on the roadmap; they never mentioned the corruption issue. We would never validate or go into Design Mode in a production environment anyway.

ELT Snaps in ELT Pipelines do not process data in Design Mode

We are using the ELT Snaps for pushdown DBaaS transformations. In Design Mode they do not perform DML steps, which makes it difficult to build and test. We reported this to product development, and they are looking for more organizations to request this feature before putting it on their development roadmap. Are others using these ELT snaps? If you are, how are you doing development without always having to execute the full pipeline, instead of using validation in Design Mode the way all other DML snaps outside the ELT snap pack work? If you would like to see this feature prioritized, please submit an enhancement request so we can get it on the roadmap.

Re: Ultra pipeline response time

We also saw this performance hit in the 4.21 upgrade, mostly when using PATH parameters. On the first call you would see an almost 8-second hit, and then any subsequent calls seemed normal. Change the path parameter value and the 8-second hit came back. We reported this to Support, and it was acknowledged as a bug once we proved to them it was environmental. We just tested this with 4.22 after upgrading the Snaplex in our dev environment. The initial problem does seem to be addressed, but we are seeing more latency than before. Since it's dev, it's difficult to tell whether this is real latency or normal variation. We typically have 200 ms to 800 ms response times depending on the pipeline. 800 ms for just a JSON Generator seems excessive. Have you reported that to Support?

Re: Flattening and Mapping Complex JSON

Nice! Both solutions worked very well and gave me some great examples of using the Script snap and the .extend method. Thanks to you both, and happy new year!

Re: Flattening and Mapping Complex JSON

Thanks for the help. When you say time-consuming, do you mean writing the code or execution time? The scripted solution gave us a good use case for writing a Script snap. I am leveraging this to read data from Smartsheet and plan to post the whole solution once complete, so thanks for your contribution. I know I can get the data from Smartsheet as a CSV or other binary format, but I wanted to use the rich JSON format to add additional functionality like detecting changes and data types. Thanks again.

Re: Flattening and Mapping Complex JSON

Thank you, that helped and also validated that it needed to be a script block. Wasn't sure if there was any magic JSONPath or other snap that could do this without scripting. Do you know if there are any limitations on using script blocks with large data streams? Just trying to understand best practices when using script blocks.
Re: Flattening and Mapping Complex JSON

Sorry, I should have provided the target structure too. More like this:

[
  [
    {
      "newRows": [
        { "rowNumber": 1, "ORG_VALUE_SRC": "1400076", "ORG_VALUE_TGT": "1400076GBP", "CREATE_DATE": "2018-02-28", "UPDATE_DATE": "2018-02-28" },
        { "rowNumber": 2, "ORG_VALUE_SRC": "1400076", "ORG_VALUE_TGT": "1400076EUR", "CREATE_DATE": "2018-02-28", "UPDATE_DATE": "2018-02-28" },
        { "rowNumber": 3, "ORG_VALUE_SRC": "1400077", "ORG_VALUE_TGT": "1400077GBP", "CREATE_DATE": "2018-02-28", "UPDATE_DATE": "2018-02-28" }
      ]
    }
  ]
]

Basically a row for every source row, using the "title" as the key and the "value" from "myRows.cells" as the value. My need is grander than this, but I figured if someone could steer me in the right direction, we could take it from there. Thanks

Flattening and Mapping Complex JSON

We have a complex JSON object which has the column information in one array and the row values in another. We would like the output to simply be the merged structure, with the "title" attribute held in "myCols" as the key and the value from "myRows.cells.value" as the value, and a row for each "myRows" element. We would like to do this without using the absolute index numbers of either array, instead leveraging "myCols.id" and "myRows.cells.columnId", which is how they align. The purpose is to have a dynamic source that can have any number of columns, which could eventually create a results table and insert the rows. Any ideas?

[
  {
    "myCols": [
      { "id": 7169064356341636, "version": 0, "index": 0, "title": "ORG_VALUE_SRC", "type": "TEXT_NUMBER", "primary": true, "validation": false, "width": 150 },
      { "id": 1539564822128516, "version": 0, "index": 1, "title": "ORG_VALUE_TGT", "type": "TEXT_NUMBER", "validation": false, "width": 150 },
      { "id": 6043164449499012, "version": 0, "index": 2, "title": "CREATE_DATE", "type": "TEXT_NUMBER", "validation": false, "width": 150 },
      { "id": 3791364635813764, "version": 0, "index": 3, "title": "UPDATE_DATE", "type": "DATE", "validation": false, "width": 150 }
    ],
    "myRows": [
      {
        "id": 7737866304415620, "rowNumber": 1, "expanded": true,
        "createdAt": "2018-12-18T17:48:08Z", "modifiedAt": "2018-12-18T17:48:08Z",
        "cells": [
          { "columnId": 7169064356341636, "value": 1400076, "displayValue": "1400076" },
          { "columnId": 1539564822128516, "value": "1400076GBP", "displayValue": "1400076GBP" },
          { "columnId": 6043164449499012 },
          { "columnId": 3791364635813764, "value": "2018-02-28" }
        ]
      },
      {
        "id": 2108366770202500, "rowNumber": 2, "siblingId": 7737866304415620, "expanded": true,
        "createdAt": "2018-12-18T17:48:08Z", "modifiedAt": "2018-12-18T17:48:08Z",
        "cells": [
          { "columnId": 7169064356341636, "value": 1400076, "displayValue": "1400076" },
          { "columnId": 1539564822128516, "value": "1400076EUR", "displayValue": "1400076EUR" },
          { "columnId": 6043164449499012 },
          { "columnId": 3791364635813764, "value": "2018-02-28" }
        ]
      },
      {
        "id": 6611966397572996, "rowNumber": 3, "siblingId": 2108366770202500, "expanded": true,
        "createdAt": "2018-12-18T17:48:08Z", "modifiedAt": "2018-12-18T17:48:08Z",
        "cells": [
          { "columnId": 7169064356341636, "value": 1400077, "displayValue": "1400077" },
          { "columnId": 1539564822128516, "value": "1400077GBP", "displayValue": "1400077GBP" },
          { "columnId": 6043164449499012 },
          { "columnId": 3791364635813764, "value": "2018-02-28" }
        ]
      }
    ]
  }
]
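For anyone finding this thread later, here is a minimal sketch of the id-based join in plain Python; a SnapLogic Script snap would wrap the same logic in its document-processing callback. Here `doc` is assumed to be one parsed input document, i.e. the object inside the outer array above:

    # Minimal sketch of the column/row join, assuming `doc` is the object
    # inside the outer array above (a Script snap would call this per document).
    def flatten(doc):
        # Map each column id to its title so cells can be matched
        # without relying on positional indexes.
        titles = {col["id"]: col["title"] for col in doc["myCols"]}

        new_rows = []
        for row in doc["myRows"]:
            flat = {"rowNumber": row["rowNumber"]}
            for cell in row.get("cells", []):
                title = titles.get(cell["columnId"])
                if title is not None:
                    # Empty cells (no "value" key, like CREATE_DATE above) map to None.
                    flat[title] = cell.get("value")
            new_rows.append(flat)
        return {"newRows": new_rows}

Calling flatten(parsed[0]) on the sample above yields the "newRows" shape shown in the earlier reply, except that cells with no "value" in the source come through as null.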
Re: Rest API to convert JSON to Excel and return Response to Requester

Posted this a while ago and got a few suggestions, but nothing that really worked. Trying to create a RESTful endpoint that can return data as JSON, CSV, or Excel based on a format input parameter. I am attaching a sample project I created, but it doesn't do what we need. It takes an input stream, in this case a static Excel file, parses it into a JSON document, and then tries to return it as an Excel type to the requester. Any insight out there? Examples_ExcelFormaterAPI.zip (9.2 KB)

Re: Rest API to convert JSON to Excel and return Response to Requester

Trying to take the output of a pipeline initiated as an Ultra Task, convert the stream into an Excel document, and then send it back to the requester. So think of a pipeline that queries a DB and then sends the results back as either a JSON or Excel document from a RESTful service. The Excel Formatter can only connect to a binary input. Not looking to write to a file, but to output a stream of binary data with a MIME type of "application/vnd.ms-excel.xlt" or something else.
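For reference, here is a minimal sketch of the "JSON rows in, Excel bytes out" idea in plain Python using openpyxl (an assumed dependency, not SnapLogic's implementation; inside SnapLogic this is the Excel Formatter snap feeding the Ultra Task's binary response). The point is that the workbook never touches disk; it is serialized to an in-memory buffer and returned as bytes with a spreadsheet MIME type:

    import io
    from openpyxl import Workbook

    # MIME type for modern .xlsx output; legacy .xls/.xlt would use
    # "application/vnd.ms-excel" instead.
    XLSX_MIME = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"

    def rows_to_xlsx(rows):
        """Turn a list of flat dicts into in-memory Excel workbook bytes."""
        wb = Workbook()
        ws = wb.active
        headers = list(rows[0].keys()) if rows else []
        ws.append(headers)
        for row in rows:
            ws.append([row.get(h) for h in headers])
        buf = io.BytesIO()
        wb.save(buf)  # openpyxl accepts a file-like object, so no temp file needed
        return buf.getvalue()  # binary payload to send with Content-Type XLSX_MIME

Whatever framework serves the endpoint would then return those bytes with the Content-Type header set to XLSX_MIME (and optionally a Content-Disposition header so browsers download it as a file).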