How to ignore duplicate rows when inserting records into a table
‎08-04-2023 02:56 PM
I'm developing a pipeline to request data from an API and load it into a MS SQL Server table. The first run of the pipeline loaded 50 records into the table, each with a unique 'user_name' value. The table constraints require that the 'user_name' field be unique. A new user has since been added to the source platform, so I want to run the pipeline again, have it ignore the duplicate records, and update the destination table with the additional, 51st, record. Right now, the pipeline fails on the first record with a duplicate key error.
I've tried using the 'Update' snap with a condition like "user_name != $user_name", but with no success. How can I run this pipeline so it skips duplicate key values and inserts only the new records?
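For context, the SQL-level behavior I'm after is "insert only if the key doesn't already exist". Here is a minimal runnable sketch of that pattern. It uses SQLite purely so the demo is self-contained; on SQL Server the same idea is usually expressed with MERGE, INSERT ... WHERE NOT EXISTS, or the IGNORE_DUP_KEY index option. Table and column names are placeholders.

```python
import sqlite3

# In-memory database standing in for the real SQL Server table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_name TEXT PRIMARY KEY)")

# First pipeline run: rows already loaded.
conn.executemany(
    "INSERT INTO users (user_name) VALUES (?)",
    [("alice",), ("bob",)],
)

# Second run: one duplicate ('alice') and one new user ('carol').
# The WHERE NOT EXISTS guard silently skips the duplicate key
# instead of raising a constraint violation.
for name in ["alice", "carol"]:
    conn.execute(
        "INSERT INTO users (user_name) "
        "SELECT ? WHERE NOT EXISTS "
        "(SELECT 1 FROM users WHERE user_name = ?)",
        (name, name),
    )

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 3: the duplicate was skipped, the new row was inserted
```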
‎08-04-2023 04:27 PM
@maahutch Try using the SQL Server Merge snap and see if that works.
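For reference, a merge-style upsert issues a statement along these lines under the hood. This is only an illustrative sketch of the T-SQL pattern; the table name (dbo.users) and second column (email) are assumptions, not details from the thread.

```python
# Illustrative T-SQL upsert: update on key match, insert otherwise.
# dbo.users and the email column are hypothetical examples.
merge_sql = """
MERGE dbo.users AS target
USING (VALUES (?, ?)) AS source (user_name, email)
ON target.user_name = source.user_name
WHEN MATCHED THEN
    UPDATE SET target.email = source.email
WHEN NOT MATCHED THEN
    INSERT (user_name, email)
    VALUES (source.user_name, source.email);
"""
```

The key part for this question is the WHEN NOT MATCHED branch: rows whose user_name already exists fall into WHEN MATCHED (or could simply be left alone by omitting that branch), so no duplicate key error is raised.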

‎08-05-2023 01:41 PM - edited ‎08-05-2023 01:43 PM
