10-04-2023 02:38 AM
I am trying to write data from Postgres to Parquet. The source has a numeric data type whose values can be integers, decimals, or floats. I have mapped the numeric type to decimal in Parquet, but the issue is that it converts integer values to decimal as well, e.g. 1 becomes 1.00; and when I map the numeric data type to int, I lose the decimal values. This will be a general pipeline for many objects, and I won't have the column schema at runtime. Is there any workaround in the Parquet writer to distinguish between int and decimal for the numeric data type?
10-04-2023 07:14 AM - edited 10-04-2023 11:38 PM
Hi @manichandana_ch, can I get your views on this? I have been stuck on this for a long time.
10-05-2023 01:29 AM
If that is the case, then you should probably check whether each incoming numeric value is an integer or a float. One way is to check for a remainder when dividing by 1:
n % 1 == 0 --> integer
n % 1 != 0 --> float
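The remainder check above can be sketched in Python. This is a minimal illustration, not SnapLogic expression syntax; `coerce` is a hypothetical helper name, and the check assumes the values arrive as `int`, `float`, or `Decimal`:

```python
from decimal import Decimal

def is_integral(n):
    # n % 1 == 0 --> integer; works for int, float, and Decimal
    return n % 1 == 0

def coerce(n):
    # Hypothetical helper: downcast integral values to int,
    # keep fractional values unchanged.
    return int(n) if is_integral(n) else n

values = [Decimal("1.00"), Decimal("2.50"), Decimal("3")]
converted = [coerce(v) for v in values]
```

Here `Decimal("1.00")` becomes the plain integer `1`, while `Decimal("2.50")` is left as a decimal.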
BR,
Spiro Taleski
10-05-2023 01:53 AM
Can you suggest a Snap through which I can achieve this? It is a dynamic pipeline that would iterate over all the tables in a schema, so I won't be able to hardcode column names at runtime.
10-05-2023 02:55 AM
Then the better option is to have one configuration file (expression library) where the column type conversions happen (along with my suggestion from above). Then, from the SnapLogic pipeline, you call the configuration file and pass the source columns that should be converted.
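Since a Parquet column must have a single type, the decision has to be made per column rather than per value. A minimal sketch of that idea in Python (not SnapLogic expression-library syntax; `pick_parquet_type` is a hypothetical helper, and the target type names are assumptions):

```python
from decimal import Decimal

def pick_parquet_type(column):
    """Scan a column's values once: if every non-null value is
    integral, the column can safely be written as int64;
    otherwise fall back to decimal to preserve fractions."""
    non_null = [v for v in column if v is not None]
    if non_null and all(v % 1 == 0 for v in non_null):
        return "int64"
    return "decimal"

print(pick_parquet_type([Decimal("1"), Decimal("2.00"), None]))
print(pick_parquet_type([Decimal("1"), Decimal("2.50")]))
```

Because the decision is driven by the data itself, no column names need to be hardcoded; the pipeline can apply the same check to every numeric column it encounters.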