Re: How to handle null or empty element

Null values can be identified in expressions in most places. The following condition checks that BEL is neither blank nor null; split it into two conditions if you need to handle the cases separately in the Router:

$ZSDATA.IDOC.EDL2.EDL4.BEL != "" && $ZSDATA.IDOC.EDL2.EDL4.BEL != null

To do the opposite and catch an element that is null or blank, simply invert the conditions:

$ZSDATA.IDOC.EDL2.EDL4.BEL == "" || $ZSDATA.IDOC.EDL2.EDL4.BEL == null

If the element can be missing completely, you may also need to add a check first that the object exists at all. (If you use null-safe access in a Mapper where you map BEL, this check becomes redundant.)

$ZSDATA.IDOC.EDL2.EDL4.hasPath("BEL")

Re: Year end split week data consumption in two datasets for a single execution

If you want to do this using week numbering, be prepared for a rabbit hole. Otherwise, finding the dates of a split week in December/January can be achieved with date calculations. A sample is attached which could give some inspiration; as always, there are probably other ways to achieve the same goal.
Assuming weeks start on Mondays (ISO), sample output for the years 2023 and 2024:

[
  {
    "year": "2023",
    "split_week": true,
    "previous_year_first_day_last_week": "2022-12-26T00:00:00.000 UTC",
    "previous_year_last_date": "2022-12-31T00:00:00.000 UTC",
    "current_year_first_date": "2023-01-01T00:00:00.000 UTC",
    "current_year_last_day_first_week": "2023-01-01T00:00:00.000 UTC"
  },
  {
    "year": "2024",
    "split_week": false,
    "previous_year_first_day_last_week": "2023-12-25T00:00:00.000 UTC",
    "previous_year_last_date": "2023-12-31T00:00:00.000 UTC",
    "current_year_first_date": "2024-01-01T00:00:00.000 UTC",
    "current_year_last_day_first_week": "2024-01-07T00:00:00.000 UTC"
  }
]

The fields are calculated as follows:

split_week: whether the first week of the current year is split between two years (boolean).

previous_year_first_day_last_week: the date of the last Monday in the previous year.

Date.parse($year, "yyyy").minusDays(1).getDay() == 0 ? Date.parse($year, "yyyy").minusDays(7) : Date.parse($year, "yyyy").minusDays(Date.parse($year, "yyyy").minusDays(1).getDay())

previous_year_last_date: the date of the last day of the previous year (always the 31st of December).

current_year_first_date: the date of the first day of the current year (always the 1st of January).

current_year_last_day_first_week: the date of the first Sunday of the current year, i.e. the last day of the first week.

Date.parse($year, "yyyy").getDay() == 0 ? Date.parse($year, "yyyy") : Date.parse($year, "yyyy").plusDays(7 - Date.parse($year, "yyyy").getDay())

Re: Environment-specific property files in Snaplogic

I think the answer from SnapLogic would be to set these in your task parameters, but we found this quite risky: if something changes globally, we need to identify and adjust each affected task.
In our setup we store environment data in expression files and access accounts, URLs and other settings via an expression of the form:

lib.[expression library name].[parameter group].[parameter]

Example of an expression file for an environment:

{
  "variables": {
    "API_HOST": "https://server.com:12345/rest/execute",
    "SOAP_HOST": "https://server2.com:42321/ws",
    "SMBAccount": "../../shared/SMB Sample TEST",
    "RESTAccount": "../../shared/Rest account test",
    "NumberVariable": 123,
    "TextVariable": "abc123"
  }
}

In your pipeline parameters you point to the expression files you wish to use. You can keep global environment parameter files in /shared, keep them in your project, or mix both if needed. For example, a pipeline in a project can load two expression files, one global and one for the specific project, and you can then use these in your snaps and pipeline executes, wherever you can use expressions. This expression would get the value of variables.NumberVariable from the file imported as config:

lib.config.variables.NumberVariable

Expression files are a bit tedious to work with, since every change requires downloading the file, editing it and uploading it again. You can combine, extend or vary this type of functionality in many different ways; this is one approach we have tried. I wish there were more SnapLogic-native alternatives for handling environment variables.

Re: Snaplogic file split

Can you make use of the solution suggested in a previous post, "How to split files based on size"?

Re: Get order of records within a group

Hi Rajesh, you can probably achieve this in multiple ways. I try to keep it rather clean, using standard snaps, and not nest too much JavaScript code in the Mappers. My sample first sorts by employee id and effective date, then groups all records for an employee id into one document. You can then generate the order and the end date individually using JavaScript map functions and split the employees out again.
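Outside SnapLogic, the sort → group → map approach described above can be sketched in plain JavaScript. The field names EmployeeId and EffectiveDate and the sample records are assumptions for illustration; in the pipeline the sorting and grouping are done by standard snaps.

```javascript
// Hypothetical input records (field names are assumed for illustration).
const records = [
  { EmployeeId: 2, EffectiveDate: "2023-03-01" },
  { EmployeeId: 1, EffectiveDate: "2023-06-01" },
  { EmployeeId: 1, EffectiveDate: "2023-01-01" },
];

// 1) Sort by employee id, then effective date (mimics the Sort snap).
records.sort((a, b) =>
  a.EmployeeId - b.EmployeeId || a.EffectiveDate.localeCompare(b.EffectiveDate));

// 2) Group all records per employee id (mimics Group By Fields).
const groups = {};
for (const r of records) (groups[r.EmployeeId] ??= []).push(r);

// 3) Per group: Order = index + 1, and ValidTo = the next record's
//    effective date minus one day, or null for the last record.
const result = Object.values(groups).flatMap(group =>
  group.map((item, index) => ({
    ...item,
    Order: index + 1,
    ValidTo: index + 1 == group.length
      ? null
      : new Date(Date.parse(group[index + 1].EffectiveDate) - 86400000)
          .toISOString().slice(0, 10),
  })));
```

Splitting `result` back into one document per record then corresponds to the final JSON Splitter step.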
Order = index + 1:

$employee.map((item, index) => ({...item, Order: index + 1}))

ValidTo = if there is a following record, its effective date minus one day, else null:

$employee.map((item, index) => ({...item, ValidTo: index + 1 == $employee.length ? null : Date.parse($employee[index + 1].EffectiveDate).minusDays(1).toLocaleDateString({"format": "M/d/yyyy"})}))

Re: Iterate JsonArray in parent to call child pipeline

If the expected target format is JSON, use a Mapper to shape the content you want to send, then set the HTTP entity in the REST Post snap to the path of the root of the structure you wish to send. Depending on your needs and the requirements on the posted data, you can also use the XML Generator or JSON Generator snap to build the target document with Velocity templates; that is sometimes the (visually) easier way when the structure you need to map and post is more complex. If you are still facing challenges, please post an example of how the JSON is structured from the previous snap and how your POST request is expected to be structured.

Re: how to filter out spaces

Awesome! Since the community does not allow uploading files with the extension .slp, you can download the txt file, rename it from .txt to .slp, and import it as a pipeline in SnapLogic Designer or Manager.

Re: Line Break in Mapper

I am guessing you do not actually want CSV output but some sort of text file, where the content should be separate lines and each block separated by the stars? If you have some other challenge, please provide some sample input and perhaps a mockup of what you expect out. I think you only need to map the line as in your code above, but in order to output the text with line breaks you can join all lines with \n and then push the result as binary out to a file. I have attached a sample for you to test with.
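The join-with-\n idea can be sketched in plain JavaScript. The sample lines and the 20-star separator are invented for illustration; in the pipeline the joined string would be converted to binary and written with a File Writer.

```javascript
// Each block is an array of lines (invented sample data).
const blocks = [
  ["Header line 1", "Detail line A", "Detail line B"],
  ["Header line 2", "Detail line C"],
];

// Join the lines of each block with "\n", then separate blocks with
// a row of stars (separator length is an assumption).
const separator = "*".repeat(20);
const text = blocks.map(lines => lines.join("\n")).join("\n" + separator + "\n");
```

The resulting string contains real line breaks rather than a literal "\n", which is what you want in the output file.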
Re: how to filter out spaces

You have two approaches. One is to continue down the path of a Mapper with JavaScript methods; you can use the filter method before map to only include the low-risk entries:

$suppliersRisksResponseFromSupplierSAPI
  .filter(x => x.aggregateRiskDetails.riskLevel == "Low Risk")
  .map(x => x.riskDetails.supplierLocationStateCode)

The second approach is to use standard snaps to achieve the result. It may look busier in the SnapLogic Designer, but it adds readability and flexibility for the future if you want to work with other data from the input. Attached is a sample; it outputs the name of each low-risk state as a separate document, which you can then work with or group as you wish.

Re: How to loop thru N times

My first go-to would be to create a sub-pipeline for the part that needs to handle 200 records at a time and call it from the main pipeline using Pipeline Execute with batch size = 200. Depending on what you need and your options, you may also be able to solve it by batching the documents into groups of 200 using Group By N and processing each resulting document as one batch.
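The Group By N batching above can be illustrated with a small plain-JavaScript sketch. The groupByN helper is hypothetical (not a SnapLogic API), and a batch size of 5 is used instead of 200 to keep the demo readable.

```javascript
// Split an input array into batches of up to n documents, so each
// batch can be processed as one unit (what Group By N does per stream).
function groupByN(docs, n) {
  const batches = [];
  for (let i = 0; i < docs.length; i += n) {
    batches.push(docs.slice(i, i + n));
  }
  return batches;
}

// 12 sample documents batched in groups of 5 -> 3 batches of 5, 5 and 2.
const docs = Array.from({ length: 12 }, (_, i) => ({ id: i + 1 }));
const batches = groupByN(docs, 5);
```

Pipeline Execute with a batch size achieves the same effect without any custom code, which is why it is my first choice here.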