How to best combine/map values from separate inputs?
I’m trying to produce output containing a value and an item ID from separate files. I have two input files for this, with different naming for the fields. For example:

Transaction Name = TRANSACTION
First Name = FNAME
LastName = LNAME

In addition, one file contains the ID, the other contains the value. I need to combine these into an output with Instance (text value) and the ID value. I have a gate set up to create a doc with input0 and input1 as shown below.

Input0: {"Instance": "Transaction Name", "ID": "bae77"}
Input1: {"TRANSACTION": "AC00014623"}

What I want this to look like is:

{"TextValue": "AC00014623", "Item": {"id": "bae77"}}

I have about 40 of these pairs, slightly different in each file. Any recommendations or ideas?
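One way to handle the ~40 differently-named pairs is a lookup table that translates the friendly field name from input0 into the technical key used in input1, then merges the two documents into the target shape. This is a plain-JavaScript sketch, not SnapLogic-specific; the `nameMap` entries beyond the three named in the question, and the function name, are assumptions.

```javascript
// Map friendly field names (input0) to technical keys (input1).
// Only the three pairs from the question are shown; the rest of the
// ~40 pairs would be added here.
const nameMap = {
  "Transaction Name": "TRANSACTION",
  "First Name": "FNAME",
  "LastName": "LNAME"
};

// Combine one pair of documents into the desired output shape.
function combine(input0, input1) {
  const technicalKey = nameMap[input0.Instance]; // e.g. "TRANSACTION"
  return {
    TextValue: input1[technicalKey],
    Item: { id: input0.ID }
  };
}

const out = combine(
  { Instance: "Transaction Name", ID: "bae77" },
  { TRANSACTION: "AC00014623" }
);
// out is { TextValue: "AC00014623", Item: { id: "bae77" } }
```

In a pipeline, the same lookup could live in an expression-library file or a Mapper expression, so the naming differences between the two files are handled in one place.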
Replacing Multiple Special Characters

Hello Experts,

I have a source column (Emp_name) where many special characters appear in the name, and I want to remove all of them. I tried using replace and replaceAll, but I didn’t find a function that can take a list of special characters like {!, @, #, $, %, ^, &, *}.

Source data looks like this:

Emp_id  Name
100     Tom!@#$
200     Scott**&
300     Tig*!!@er
400     N!e@@el
500     #$Je%rry
600     James*&^%

I used to do this with the Convert function in IBM DataStage, where I would pass the list of special characters and replace them, but I didn’t find an equivalent function here. Can anyone guide me through how to do it? Appreciate your help.
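Rather than listing every special character, a single regex with a negated character class (a whitelist) drops everything that is not a letter, digit, or space. Plain-JavaScript sketch below; SnapLogic's string `replace` also accepts a regex, so a similar expression such as `$Name.replace(/[^A-Za-z0-9 ]/g, "")` should work in a Mapper, though verify the exact behavior in the expression-language docs.

```javascript
// Whitelist approach: keep letters, digits, and spaces; drop the rest.
// Adjust the character class if other characters should survive.
function cleanName(name) {
  return name.replace(/[^A-Za-z0-9 ]/g, "");
}

cleanName("Tom!@#$");    // "Tom"
cleanName("Tig*!!@er");  // "Tiger"
cleanName("#$Je%rry");   // "Jerry"
```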
Compare the values of two documents

Hi,

I want to compare the values coming from a Union Snap. I have two input views, both returning a count of documents. Using a Filter, I want to check whether the counts from both sources match and then apply the following logic, but I’m unable to compare the values between the two documents. Can anyone help me create the expression, please? In my case there will only ever be two sources.
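A Filter evaluates one document at a time, so the two counts first need to land in the same document (for example via a Join or a Gate ahead of the Filter); after that the comparison is a single expression. Plain-JavaScript sketch; the field names `sourceA_count` and `sourceB_count` are assumptions, not from the question.

```javascript
// Once both counts are merged into one document, the check is trivial.
function countsMatch(doc) {
  return doc.sourceA_count === doc.sourceB_count;
}

countsMatch({ sourceA_count: 125, sourceB_count: 125 }); // true
countsMatch({ sourceA_count: 125, sourceB_count: 120 }); // false
```

In a SnapLogic Filter the equivalent expression on the joined document would look like `$sourceA_count == $sourceB_count`.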
Change OBJECT to ARRAY using Mapper

Using a Mapper, we can change an object into an array. In the pipeline below, data is generated using a CSV Generator in plain text (pipeline and CSV Generator configuration screenshots not shown).

Changing the name from an object to a list:

    $Name instanceof Array ? $Name : [$Name]

Here Name is the attribute referred from the CSV Generator, and it is converted from an object to a list.

Output of the Mapper:

    [ { "Name": [ "Harish" ], "age": "26" } ]

A JSON Splitter is then added to read the list from the Mapper's output. Below is the output of the JSON Splitter:

    [ { "splitValue": "Harish" } ]

Attaching the sample pipeline: Array to Object_2017_03_06.slp (4.0 KB)
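The same ternary runs as plain JavaScript, which makes the behavior easy to check: a scalar is wrapped in a one-element array, and an existing array passes through untouched.

```javascript
// Wrap a scalar in an array; leave an existing array as-is.
function toList(name) {
  return name instanceof Array ? name : [name];
}

toList("Harish");          // ["Harish"]
toList(["Harish", "Ram"]); // ["Harish", "Ram"] (unchanged)
```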
Map expression regex

Hi,

A simple one for the people who are expert in regex; I’m still learning it. I have a key, notes, that contains a value like "#IF:SLG-01-SL + some text here". My goal is to always get the 9 characters after "#IF:".

The expression I tried in a regex tester, which works:

    ([#IF:])(.{0,12})

But how do I put it into a Mapper? First I check whether notes contains "#IF:"; if that's true, the document goes to the path where I need to apply the regex in the Mapper:

    $notes.replace($notes, /([#IF:])(.{0,12})/g)

But this just gives back the string of the regex.

Regards,
Jens
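Two things trip this up: `[#IF:]` is a character class (any one of the characters #, I, F, :), not the literal text `#IF:`, and `replace` returns the modified string rather than the captured group. Using `match` with the literal prefix and a 9-character capture group pulls out just the code. Plain-JavaScript sketch; a SnapLogic Mapper expression along the lines of `$notes.match(/#IF:(.{9})/)[1]` should behave the same way, though test it against your data.

```javascript
// Capture exactly the 9 characters that follow the literal "#IF:".
function extractCode(notes) {
  const m = notes.match(/#IF:(.{9})/);
  return m ? m[1] : null; // null when the tag is absent
}

extractCode("#IF:SLG-01-SL + some text here"); // "SLG-01-SL"
```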
How to find incoming data format (JSON, XML) dynamically in pipeline input view?

How can I detect the incoming data format dynamically? For example: if the incoming data is JSON, I need to perform certain validations and follow the JSON flow; if the incoming data is XML, I need to perform certain other operations. Can anyone suggest how to detect the incoming data format at run time, using a single input view for the pipeline?
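One rough approach, when the payload arrives as raw text, is to branch on the first non-whitespace character: XML starts with `<`, JSON with `{` or `[`. This is a heuristic sketch, not a full parser, and assumes the document carries the raw body as a string.

```javascript
// Heuristic format detection on a raw text payload.
function detectFormat(raw) {
  const first = raw.trimStart()[0];
  if (first === "<") return "xml";
  if (first === "{" || first === "[") return "json";
  return "unknown";
}

detectFormat('{"a": 1}'); // "json"
detectFormat("<root/>");  // "xml"
```

A Router Snap downstream could then send each format to its own validation branch based on this value.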
Exporting text file with CSV formatting, but with different numbers of fields

I’m working on a pipeline that needs to create and send a text file to a third party. That text file contains rows formatted in the same way as a CSV file: a header row, then the data rows, then a footer row.

The header and footer rows have only 6 columns, and I cannot send them with more or fewer columns, because the third party's system will reject the file. The data rows have 33 columns, with the same restriction.

Here is my pipeline: the first section has 3 SQL Server Execute Snaps that get the 3 types of rows, which I then union together. The select contains 2 fields that cannot appear in the resulting text file; I only need them for the Sort Snap, to get the rows in the correct order. The Mapper after the Sort Snap removes the 2 sort columns.

The problem: if I leave "null safe access" unchecked, the Mapper fails because Col6 to Col33 do not exist in 2 of the rows; if I check "null safe access", it creates Col6 to Col33 in those 2 rows as nulls and adds too many fields to them in the text file.

Is there any way to:
A) remove the 2 fields without using a Mapper,
B) remove the resulting null-valued fields after the Mapper, or
C) tell the CSV Formatter to not create a field if it has a null value?

Thanks
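Option B can be sketched as a post-Mapper cleanup: drop every key whose value is null, so the short header/footer rows go to the formatter with only their 6 real columns. Plain-JavaScript illustration; the column names are placeholders, and whether the downstream CSV formatter then emits ragged rows depends on its settings.

```javascript
// Remove every null-valued field from a row, preserving key order.
function dropNullFields(row) {
  return Object.fromEntries(
    Object.entries(row).filter(([, v]) => v !== null)
  );
}

dropNullFields({ Col1: "HDR", Col2: "x", Col7: null, Col8: null });
// { Col1: "HDR", Col2: "x" }
```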
Mapper, create XML with both attribute and node value, or multiple attributes

I’m using a Mapper to create values for an XML Formatter Snap, but I can't figure out how to handle cases where the XML requires both an attribute and a node value, or multiple attributes, like:

    <address_type desc="Work">work</address_type>

How can I get the format I need?
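Several JSON-to-XML serializers use a convention where keys starting with `@` become attributes and a `$` key becomes the node's text content; SnapLogic's XML Formatter accepts a similar structure, but verify the exact key names in the Snap's documentation before relying on this. The tiny serializer below just illustrates the mapping, not the Formatter's actual implementation.

```javascript
// Illustrate the "@attribute / $text" convention:
// { "@desc": "Work", "$": "work" } -> <tag desc="Work">work</tag>
function toXml(tag, node) {
  const attrs = Object.entries(node)
    .filter(([k]) => k.startsWith("@"))          // "@" keys -> attributes
    .map(([k, v]) => ` ${k.slice(1)}="${v}"`)
    .join("");
  return `<${tag}${attrs}>${node["$"] ?? ""}</${tag}>`;
}

toXml("address_type", { "@desc": "Work", "$": "work" });
// '<address_type desc="Work">work</address_type>'
```

Multiple attributes are just additional `@`-prefixed keys on the same object.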
Count number of records fetched/processed from a flat file/upstream systems (Snowflake, Salesforce, Oracle)/File Writer without using a Pipeline Execute

Hi Team,

I’m looking to count records in a couple of scenarios:

(1) Records fetched from a flat file (e.g. Excel, CSV), writing the total count of records into a new column, e.g. File Reader --> Mapper (transformation rules here, with a new column added to hold the total record count) --> Excel/CSV Formatter --> File Writer. I’ve tried using snap.in.totalCount and snap.outputViews inside a Mapper but didn't get the expected results.

(2) Records fetched from a source system like Snowflake, Salesforce, Oracle, etc., without using a count command in the query itself. I'm thinking of using a Group By or an Aggregate Snap to get the counts; would that be the right approach?

(3) Counting the number of records processed after the operation has completed. For instance, I'm writing a flat file (Excel/CSV) but want a new column ingested into that file dynamically that states the total number of docs processed, AND I want to send an email to the team stating the total, e.g. File Reader/Salesforce Read --> Mapper --> Excel/CSV Formatter --> File Writer --> Mapper (anticipating this should have some rules) --> Email Sender (sends count ONLY).

Thanking you in advance for your time and help on this one.

Best Regards,
Darsh
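The core of scenarios (1) and (3) can be sketched outside the pipeline: accumulate all rows, stamp each with the grand total, and derive the email text from the same number. An Aggregate or Group By Snap plays the accumulation role in a real pipeline (which matches the idea in scenario 2); the field names and email wording below are assumptions.

```javascript
// Sample rows standing in for the documents read from a file or source.
const rows = [
  { Emp_id: 100, Name: "Tom" },
  { Emp_id: 200, Name: "Scott" },
  { Emp_id: 300, Name: "Tiger" }
];

// Grand total, stamped onto every row as a new column.
const total = rows.length;
const stamped = rows.map(r => ({ ...r, total_count: total }));

// The email body reuses the same number.
const emailBody = `Total documents processed: ${total}`;
```

Note this requires holding (or at least counting) the full set before writing, since the total is not known until the last document arrives.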
Mapper Snap: Valid Column Names In Input Schema Give "Undefined" Error

Hi,

I am experiencing an issue where a Mapper Snap gives the error "$[FieldName] is undefined" when I enter a field name in the expression box. The fields I am using are present in the Mapper's input schema (I dragged and dropped them), so I know the field names are valid. I have tried refreshing the page, restarting my browser, and clearing the cache to rule out a browser issue. Does anyone know what could be causing this?
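This error typically means the field exists in the suggested schema but is absent from the actual document being evaluated at validation time, so the reference resolves to nothing. A defensive pattern is to fall back to a default when the key is missing; plain-JavaScript equivalent below, with the property name as a placeholder. SnapLogic's expression language also offers an object `get()` with a default (e.g. `$.get('FieldName', 'N/A')`), though confirm the signature against the docs.

```javascript
// Read a field, falling back to a default when the key is absent.
function readField(doc, key, fallback) {
  return key in doc ? doc[key] : fallback;
}

readField({ other: 1 }, "FieldName", "N/A");        // "N/A"
readField({ FieldName: "x" }, "FieldName", "N/A");  // "x"
```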