Building ETL pipelines with dynamic mapping

jkrangel
New Contributor

Hello community,

I’m new to SnapLogic and am trying to understand best practices for building out scalable ETL pipelines. Specifically, I would like to build generalized replication pipelines that are dynamic and can handle schema changes without needing modification.

For example, suppose I want to replicate objects from Salesforce into a DWH. I would like to know if it’s possible to build a pipeline that:

  • Can be parameterized/generalized to support replicating multiple objects. For this to happen, I believe the Mapper snap would need to automatically map the source and destination schemas (let’s assume I make the matching easy by keeping the column names the same on both sides)
  • Would be responsive to simple schema changes (i.e., if I add a new column in Salesforce, it adds a new one in the DWH)

Is any of the above possible? One thing we’ve appreciated about simpler ETL tools (e.g., Fivetran and Etleap) is that they handle these cases more or less automatically. To make the idea concrete, I’ve sketched below roughly what I have in mind.
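For illustration only, here is a rough sketch in plain Python (with SQLite standing in for the DWH, nothing SnapLogic-specific) of the name-based dynamic mapping and schema evolution I’m picturing. The replicate function, table name, and sample records are made-up placeholders, not anything from our actual setup.

```python
import sqlite3

def replicate(records, table, conn):
    """Naive dynamic replication: map columns by name and add any
    new source columns to the target table before loading."""
    cur = conn.cursor()
    # Discover the target table's current columns
    existing = {row[1] for row in cur.execute(f"PRAGMA table_info({table})")}
    for rec in records:
        # Schema drift: add columns present in the source but missing in the target
        for col in rec.keys() - existing:
            cur.execute(f"ALTER TABLE {table} ADD COLUMN {col}")
            existing.add(col)
        cols = list(rec)
        placeholders = ", ".join("?" for _ in cols)
        cur.execute(
            f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})",
            [rec[c] for c in cols],
        )
    conn.commit()

# Example: an Account-like object gains an "industry" field between runs
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id TEXT, name TEXT)")
replicate([{"id": "001", "name": "Acme"}], "account", conn)
replicate([{"id": "002", "name": "Globex", "industry": "Energy"}], "account", conn)
print(conn.execute("SELECT * FROM account").fetchall())
```

Essentially, I’d like one parameterized pipeline that does the equivalent of the above for any object name I pass in, rather than a hand-built Mapper per object.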
