Array Rename

Hi Team,

I am trying to achieve the transformation below. I would appreciate directions on how to do this in a Mapper or any other way.

Source :
	{
		"UBER_ID": [
			"1",
			"Integer_pattern"
		],
		"First_name": [
			"Majid",
			"TextOnly_pattern"
		],
		"Last_name": [
			"",
			"TextOnly_pattern"
		]
	}

Target :

I would like to rename the above fields based on a value in each array. Below is the output I am looking for.

	{
		"Integer_pattern": [
			"1"
		],
		"TextOnly_pattern": [
			"Majid"
		],
		"TextOnly_pattern": [
			""
		]
	}

Hi @Majid ,

The logic needs some firmer conditions, e.g., that the field name will always be the second element in the array. But even if you define those conditions, this output cannot be achieved, because you have two fields in the object with the same key, and that is not allowed in JSON. Please revise and resend the requirements so we can help you better.

Regards,
Bojan

Hi @Majid

You should be able to use this expression in a mapper to replace the keys with the last value of the array, as well as remove it from the array:

$.mapKeys((val, key) => val.pop())

What @bojanvelevski said still holds true. However, I believe this is the expression you are looking for.
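For anyone following along, here is a plain-JavaScript sketch of what that expression does (the function name `mapKeysPop` is just for illustration). `mapKeys` builds a new object whose keys come from the callback, and `pop()` removes and returns the last array element, so the pattern name becomes the new key and is dropped from the value array. It also shows the duplicate-key problem: when two fields share a pattern name, the last one overwrites the others.

```javascript
// Illustrative stand-in for: $.mapKeys((val, key) => val.pop())
function mapKeysPop(doc) {
  const out = {};
  for (const [key, val] of Object.entries(doc)) {
    // pop() mutates the array: the pattern name is removed and
    // becomes the new key. Duplicate keys overwrite: last one wins.
    out[val.pop()] = val;
  }
  return out;
}

const source = {
  UBER_ID: ["1", "Integer_pattern"],
  First_name: ["Majid", "TextOnly_pattern"],
  Last_name: ["", "TextOnly_pattern"],
};

console.log(mapKeysPop(source));
// { Integer_pattern: ["1"], TextOnly_pattern: [""] }
// First_name's value is lost because Last_name reuses the same key.
```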

1 Like

@cjhoward18 Thank you. This renames the arrays, but because more than one field in the object ends up with the same key name, the output keeps only the last instance.

@bojanvelevski Thank you. My requirement is to validate each field against a different regex. I used a Data Validator snap with all the possible regexes, and I was trying to rename the fields to the corresponding regex names provided in the Data Validator snap, so that each field would be validated against its associated regex. The regex name will always be the second and last element of the array.
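In plain JavaScript, the underlying idea looks roughly like this. The pattern table and both regexes are illustrative assumptions (standing in for whatever the Data Validator snap holds); the point is that each field's value can be tested against the regex named by the last element of its array without renaming anything:

```javascript
// Hypothetical pattern table; the real regexes live in the
// Data Validator snap and may differ.
const patterns = {
  Integer_pattern: /^\d+$/,
  TextOnly_pattern: /^[A-Za-z]+$/,
};

// Validate each field's value against the regex named by the
// second (last) element of its array.
function validateDoc(doc) {
  const result = {};
  for (const [field, [value, patternName]] of Object.entries(doc)) {
    result[field] = patterns[patternName].test(value);
  }
  return result;
}

console.log(validateDoc({
  UBER_ID: ["1", "Integer_pattern"],
  First_name: ["Majid", "TextOnly_pattern"],
  Last_name: ["", "TextOnly_pattern"],
}));
// { UBER_ID: true, First_name: true, Last_name: false }
```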

Attaching sample pipeline.
data-validation-regex_2021_08_05 (1).slp (15.3 KB)

If there is any other recommended approach to achieve this please let me know.

@cjhoward18 @bojanvelevski any directions on how to approach the above use case?

@Majid,

Attached is a sample pipeline with a couple of options using the Pivot Snap to produce results similar to what your sample pipeline appears to be attempting. Both options use the same logic - the second just brings the grouping back together if desired.

It may not be your target solution, and my expressions could need a little work, but I hope it helps some with the direction you want to go.

Community.10510_2021_08_05.slp (30.4 KB)

1 Like

Thank you @del. This works, but in my case the number and names of the fields differ from one file to another, as I am building a generic pipeline. Is there any way to make the Pivot snap's field count and field names dynamic?

I am also not sure about performance, as this method will divide each record into x records based on the number of fields.

I appreciate you looking into the use case and providing directions.

@Majid,

Here is a version 2 of the pipeline that uses a Mapper and Splitter (instead of Pivot snap) for a dynamic pivot of the data. Community.10510_2021_08_05 (v2).slp (32.8 KB)

I’m not sure about performance, either, but I don’t know how you use your validator snap as-is without splitting/pivoting the data - because of the duplicate key issues.
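The dynamic pivot can be sketched in plain JavaScript like this (the record shape `{field, value, pattern}` is my own illustration, not necessarily what the attached pipeline emits). Each field becomes its own record, so duplicate pattern names never collide as keys, and the field names and count are discovered at runtime:

```javascript
// Turn each field of the document into its own record, so duplicate
// pattern names never share a key. Works for any input shape.
function pivot(doc) {
  return Object.entries(doc).map(([field, [value, patternName]]) => ({
    field,
    value,
    pattern: patternName,
  }));
}

console.log(pivot({
  UBER_ID: ["1", "Integer_pattern"],
  First_name: ["Majid", "TextOnly_pattern"],
  Last_name: ["", "TextOnly_pattern"],
}));
// [ { field: "UBER_ID", value: "1", pattern: "Integer_pattern" },
//   { field: "First_name", value: "Majid", pattern: "TextOnly_pattern" },
//   { field: "Last_name", value: "", pattern: "TextOnly_pattern" } ]
```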

As this is to be a generic pipeline, I think you might be better off using an Expression Library in place of (or in conjunction with) your Validator. I think you could avoid the pivot, then.

1 Like

@del Thank you so much. This is exactly what I was looking for. I will verify the performance and update you.

Can you provide an example, if possible, of how this can be done with an expression library? Since the Data Validator snap does not allow expression libraries or parameters, I am not sure how this could be achieved in a Mapper using an expression library.

@Majid I’m glad the above helped.

I think I may be derailing too much by suggesting the expression library. It was a creative thought, but it would require readjusting downstream logic to reach the desired end result.

But, for an exercise, I put this together to show where my thought was leaning.
It basically changes your source:

	{
		"UBER_ID": [
			"1",
			"Integer_pattern"
		],
		"First_name": [
			"Majid",
			"TextOnly_pattern"
		],
		"Last_name": [
			"",
			"TextOnly_pattern"
		]
	}

to this:

	{
		"UBER_ID": [
			"1",
			"Integer_pattern",
			"valid"
		],
		"First_name": [
			"Majid",
			"TextOnly_pattern",
			"valid"
		],
		"Last_name": [
			"",
			"TextOnly_pattern",
			"invalid"
		]
	}

But you’d still have to handle this creatively in downstream snaps to get your desired results.
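As a plain-JavaScript sketch of that transformation (the pattern regexes here are my own guesses, not the ones in the attached `.expr` file): look up the pattern named in each field's array and append `"valid"` or `"invalid"` in place.

```javascript
// Illustrative pattern table; the real regexes belong in the
// expression library and may differ.
const patterns = {
  Integer_pattern: /^\d+$/,
  TextOnly_pattern: /^[A-Za-z]+$/,
};

// Append "valid"/"invalid" to each field's array based on whether
// its value matches the regex named by its pattern element.
function tagValidity(doc) {
  for (const arr of Object.values(doc)) {
    const [value, patternName] = arr;
    arr.push(patterns[patternName].test(value) ? "valid" : "invalid");
  }
  return doc;
}

console.log(tagValidity({
  UBER_ID: ["1", "Integer_pattern"],
  Last_name: ["", "TextOnly_pattern"],
}));
// { UBER_ID: ["1", "Integer_pattern", "valid"],
//   Last_name: ["", "TextOnly_pattern", "invalid"] }
```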

Community.10510.v3_2021_08_06.slp (4.9 KB)
community10510v3.expr.txt (376 Bytes)

@del Thank you so much for all your help. I think this is the better solution: with it I will not need the Pivot snap, or even the Data Validator. It also makes the pipeline more dynamic; any new regex I need can simply be added to the expression library.

I hope the performance is as good as what I have seen with the Data Validator, but this is the best solution for my use case.

I appreciate your help. I will update the post with the final solution and performance metrics once I complete the code and testing.

1 Like