01-24-2018 03:22 PM
Hi - I am looking to capture counts of source and target tables into a .csv file for data verification on a daily basis.
My question is: I was able to get a single table count into a CSV file after designing a pipeline:
SQL Select -> Aggregate -> CSV Formatter -> File Writer -> Email (looks OK). Result - good.
I have 20+ tables. Should I repeat the same process for each table, or is there a simpler way of achieving the result?
Any input is greatly appreciated. Thank you.
01-25-2018 11:51 AM
Is the purpose to get a count of all records in the table or is there individual criteria for each table?
If you only need the count of all records, I would suggest creating a list of the tables you want to check, then have a parent pipeline read the list and, for each entry, execute a child pipeline that does the steps you listed above. The child pipeline would have a parameter to accept the table name and use it in the SQL Select.
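Outside SnapLogic, the parent/child pattern above can be sketched in plain Python: a table list drives one parameterized count query per table, and the results are formatted as CSV. This is only an illustration of the flow, not the SnapLogic implementation; the table names are made up, and an in-memory sqlite3 database stands in for the real source.

```python
import csv
import io
import sqlite3

def count_tables(conn, tables):
    """Run one count per table (the 'child pipeline' step)."""
    rows = []
    for table in tables:
        # Table names cannot be bound as SQL parameters; in SnapLogic the
        # pipeline parameter is substituted into the SQL Select the same way.
        cur = conn.execute(f'SELECT COUNT(*) FROM "{table}"')
        rows.append({"table": table, "count": cur.fetchone()[0]})
    return rows

def to_csv(rows):
    """Format the counts as CSV text (the CSV Formatter step)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["table", "count"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Demo with an in-memory database standing in for the source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.execute("CREATE TABLE customers (id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(3)])
conn.execute("INSERT INTO customers VALUES (1)")
print(to_csv(count_tables(conn, ["orders", "customers"])))
```

The key design point is the same as in the pipeline suggestion: one generic "count this table" unit, driven by a list, rather than 20+ copies of the same flow.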
01-25-2018 02:03 PM
Most tables have a date column, but the issue is there are lots of NULLs in it, and I'm not sure how to capture the counts. I want to compare counts in source and target based on the date column, but things are getting tricky.
02-13-2023 04:05 AM
I have a similar requirement, but instead of writing with a File Writer, I need to write the CSV file to S3.
Can you share the SnapLogic pipeline for retrieving the counts of records in multiple tables in one schema?
02-20-2023 05:11 AM
If the source is MSSQL, you can use the SQL Execute snap and run the query below:
select object_name(object_id) as [table name], sum(row_count) as [row count]
from sys.dm_db_partition_stats
where index_id < 2
-- and object_name(object_id) in ('tbl1','tbl2','tbl3')
group by object_name(object_id)
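For the S3 part of the earlier question: in SnapLogic you would point the File Writer at an S3 path (or use an S3 account on the writer), but outside SnapLogic the same step can be sketched with boto3. The bucket name and key below are placeholders, boto3 is assumed to be installed with AWS credentials configured, and the sample rows imitate the output shape of the sys.dm_db_partition_stats query above:

```python
import csv
import io

def counts_to_csv(rows):
    """Format count rows (shaped like the query output above) as CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["table name", "row count"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def upload_counts(rows, bucket, key):
    """Upload the counts CSV to S3. Assumes boto3 is installed and
    credentials are configured; bucket/key are illustrative placeholders."""
    import boto3
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key,
                  Body=counts_to_csv(rows).encode("utf-8"),
                  ContentType="text/csv")

# Sample rows standing in for the SQL Execute output.
rows = [{"table name": "tbl1", "row count": 42},
        {"table name": "tbl2", "row count": 7}]
print(counts_to_csv(rows))
# upload_counts(rows, "my-verification-bucket", "counts/2023-02-20.csv")
```

The upload call is left commented out since it needs a real bucket; the CSV formatting is the only part shown running.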