
Postgres to parquet writer

New Contributor

I am trying to write data from Postgres to Parquet. The source has a numeric data type whose values can be ints, decimals, or floats. I have mapped the numeric type to decimal in Parquet, but the issue is that it converts int values to decimal as well, e.g. 1 becomes 1.00, and when I map the numeric data type to int I lose the decimal values. This will be a general pipeline for many objects, and I won't have the column schema at runtime. Is there any workaround in the Parquet writer to distinguish between int and decimal for the numeric data type?


New Contributor

Hi @manichandana_ch, can I get your views on this? I have been stuck on this for a long time.


If that is the case, then you should probably check whether the incoming numeric value is an integer or a float. One way is to check for a remainder when dividing by 1:

n % 1 == 0 --> integer
n % 1 != 0 --> float
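As a rough illustration of that remainder check (a Python sketch, not SnapLogic expression syntax; the function name `is_integer_valued` is my own):

```python
from decimal import Decimal

def is_integer_valued(n):
    """Return True when a numeric value has no fractional part."""
    # Going through str() avoids binary-float artifacts like 0.1 + 0.2
    return Decimal(str(n)) % 1 == 0

print(is_integer_valued(1))     # True  -> safe to emit as int
print(is_integer_valued(1.00))  # True  -> safe to emit as int
print(is_integer_valued(2.5))   # False -> keep as decimal
```

The same `% 1` test can be written directly in a SnapLogic expression against the incoming field.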


Spiro Taleski

Hi @SpiroTaleski

Can you suggest a Snap through which I can achieve this? It is a dynamic pipeline that iterates over all the tables in a schema, so I won't be able to hardcode column names at runtime.


Then the better option is to have a configuration file (an expression library) where the column type conversions happen (along with my suggestion from above). From the SnapLogic pipeline you would then call the configuration file and pass the source columns that should be converted.
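To make the expression-library idea concrete, here is a Python sketch of the per-field logic such a library would encode (the function `normalize_numeric` and the sample row are my own illustration, not SnapLogic code):

```python
from decimal import Decimal

def normalize_numeric(value):
    """Emit an int when the numeric has no fractional part,
    otherwise keep it as a Decimal."""
    d = Decimal(str(value))
    return int(d) if d % 1 == 0 else d

# Apply the conversion to every field of an incoming document
row = {"id": 1, "price": 2.50, "qty": 3.0}
converted = {k: normalize_numeric(v) for k, v in row.items()}
print(converted)  # {'id': 1, 'price': Decimal('2.5'), 'qty': 3}
```

Note that a single Parquet column still needs one declared type, so in practice this per-value check is most useful for deciding, per column, whether the whole column can safely be written as int or must stay decimal.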