Mongo DB Update Snap - Expression for Query
The following query works in MongoDB Atlas to update a record: db.mytable.update( { _id: ObjectId('1234') }, { $set: { "org_name": "new org name" } } ). How do we configure this for use with the MongoDB - Update snap? Does anyone have an example?
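A minimal sketch of the same operation in pymongo may help, since it makes the split explicit: the first argument of db.mytable.update() is the filter/condition and the second is the update document, and the snap expects those two pieces as separate expressions. The connection details, database name, and the 24-character ObjectId below are placeholders, and the exact snap field names may vary by snap pack version.

```python
# Minimal pymongo sketch of the same update, split into the two pieces the
# snap needs: a filter document and an update document.
# Connection string, database name, and the ObjectId are placeholders.
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["mydb"]["mytable"]

# Filter: matches the document by _id (first argument of db.mytable.update)
query_filter = {"_id": ObjectId("5d1234567890abcdef123456")}

# Update: $set only touches org_name (second argument of db.mytable.update)
update_doc = {"$set": {"org_name": "new org name"}}

result = collection.update_one(query_filter, update_doc)
print(result.matched_count, result.modified_count)
```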
Salesforce Contacts - Database Replication (SQL Server-MongoDB-Oracle)
Created by @asharifian
This pattern provides an example pipeline to replicate data from Salesforce to multiple database types: SQL Server, MongoDB, and Oracle. Data replication is useful for disaster recovery and analytics, and it improves overall system resilience and reliability.
Configuration
Two approaches are covered in this pattern (a standalone sketch of both follows the downloads below):
- Data replication after a successful insert into a dependent database, e.g. data is first inserted into MongoDB and, upon success, inserted into Oracle.
- Data replication in parallel, e.g. data is inserted into MongoDB and SQL Server at the same time.
Sources: Salesforce Contacts
Targets: Database tables for customer contact information in SQL Server, MongoDB, Oracle
Snaps used: Salesforce Read, Mapper, Copy, MongoDB - Insert, SQL Server - Insert, Oracle - Insert
Downloads
Salesforce Contacts-Database Replication (SQL Server-MDB-Oracle).slp (14.5 KB)
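To illustrate the control flow of the two approaches outside SnapLogic, here is a small Python sketch. pymongo stands in for the MongoDB - Insert snap, and a hypothetical insert_contact_sql() stands in for the Oracle / SQL Server - Insert snaps; the real pattern is built entirely from the snaps listed above.

```python
# Illustrative sketch of the two replication approaches: sequential
# (insert into Oracle only after the MongoDB insert succeeds) and parallel
# (MongoDB and SQL Server inserts run independently, like a Copy snap).
from concurrent.futures import ThreadPoolExecutor
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")["crm"]["contacts"]

def insert_contact_sql(target, contact):
    # Placeholder for an Oracle or SQL Server insert (e.g. python-oracledb / pyodbc).
    print(f"inserted {contact['Email']} into {target}")

contact = {"FirstName": "Ada", "LastName": "Lovelace", "Email": "ada@example.com"}

# Approach 1: sequential - the Oracle insert runs only if MongoDB acknowledged the write.
if mongo.insert_one(dict(contact)).acknowledged:
    insert_contact_sql("oracle", contact)

# Approach 2: parallel - MongoDB and SQL Server inserts are submitted independently.
with ThreadPoolExecutor(max_workers=2) as pool:
    pool.submit(mongo.insert_one, dict(contact))
    pool.submit(insert_contact_sql, "sqlserver", contact)
```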
Mongo DB secondary node
Good day. I have a question regarding the MongoDB connection account. Is there a way to configure a secondary node? If I make the connection via a connection string, the parameter is readPreference=secondary and I can also specify the name of that node, but in the account configuration in SnapLogic I couldn't find where to define those kinds of parameters (if that is possible). Maybe if I used a generic JDBC account I could define them, but I'd like to use the standard MongoDB account. Thank you for your help.
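For reference, here is a sketch of what routing reads to a secondary looks like when the read preference is carried in the connection-string options. Whether the standard MongoDB account exposes a URI/options field for this is an assumption; the hosts, replica set name, and the node tag below are placeholders.

```python
# Sketch: read preference set via connection-string options.
# The same option string is what would need to reach the driver from the account.
from pymongo import MongoClient

uri = (
    "mongodb://host1:27017,host2:27017,host3:27017/"
    "?replicaSet=rs0&readPreference=secondary"
    "&readPreferenceTags=nodeType:analytics"   # optional: pin reads to a tagged member
)
client = MongoClient(uri)
doc = client["mydb"]["mytable"].find_one()  # routed to a secondary member
print(doc)
```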
Connecting to DocumentDB in AWS with shared .pem
Hello, I am trying to create a MongoDB account to connect to DocumentDB in AWS. AWS supplies a shared .pem for RDS instances (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html), but I am not sure how to use it with AWS secrets to create the account. There are fields for KeyStore and TrustStore, so I presume I need to convert the .pem to a different format. In my past experience those stores are installed on the node(s) doing the connecting; how would I manage that in a CloudPlex? Is there any guidance on how to proceed? TIA -Liam
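One way to de-risk the account setup is to first confirm that the shared CA bundle actually validates the DocumentDB TLS connection, outside SnapLogic. The sketch below does that with pymongo; the cluster hostname, credentials, and bundle path are placeholders. For the TrustStore field itself, the usual route would be importing the .pem into a Java truststore (e.g. with keytool) and uploading that file, but treat that as an assumption about how the account consumes it.

```python
# Sketch: verify the DocumentDB TLS connection with the shared AWS CA bundle
# before wiring it into a SnapLogic account. All identifiers are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://docdb-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017",
    username="masteruser",
    password="********",
    tls=True,
    tlsCAFile="/path/to/global-bundle.pem",  # the shared .pem from AWS
    retryWrites=False,                       # DocumentDB does not support retryable writes
)
print(client.admin.command("ping"))
```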
Date filter in MongoDB snap
Hi, I am trying to filter records from the MongoDB Select snap using a date filter, but it is not working. I am using the expression below in the condition. Please suggest if I am doing something wrong: {$and:[{"etl_date":{$gte:"2019-05-15T00:00:00.000"}},{"etl_date":{$lte:"2019-05-15T23:59:00.000"}}]} Thanks Aditya
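A likely cause: if etl_date is stored as a BSON date, the quoted bounds are compared as strings and will never match; the bounds need to be actual date values. The pymongo sketch below shows the same $and / $gte / $lte filter with real datetime objects. How the snap's condition expression should express a date value (e.g. a Date/ISODate construct rather than a quoted string) is an assumption about the snap; the collection name is a placeholder.

```python
# Sketch: the same range filter, but with real date values rather than strings.
# String bounds only work if etl_date is itself stored as a string.
from datetime import datetime
from pymongo import MongoClient

collection = MongoClient("mongodb://localhost:27017")["mydb"]["mytable"]

condition = {
    "$and": [
        {"etl_date": {"$gte": datetime(2019, 5, 15, 0, 0, 0)}},
        {"etl_date": {"$lte": datetime(2019, 5, 15, 23, 59, 0)}},
    ]
}
for doc in collection.find(condition):
    print(doc["_id"], doc["etl_date"])
```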
MongoDB update snap converts numbers to strings on update and insert
Good day everyone, I came across this issue when using the MongoDB Update snap: on update and insert with the upsert setting (update alone still shows the issue), all numeric fields are converted to strings in the MongoDB database. Is there a fix for this issue? Thanks, Sachin
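For context, an upsert itself preserves numeric BSON types as long as the incoming values are actually numbers, which points at the values arriving as strings upstream of the snap. A common workaround is to cast the fields before the snap (e.g. parseInt/parseFloat in a Mapper); that is offered as a general suggestion, not a confirmed fix for this snap behavior. The sketch below shows the idea in pymongo, with placeholder collection and field names.

```python
# Sketch: an upsert keeps numeric BSON types when the values passed in are
# numbers, so casting upstream is a common workaround when data arrives as strings.
from pymongo import MongoClient

collection = MongoClient("mongodb://localhost:27017")["mydb"]["orders"]

incoming = {"order_id": "A-100", "qty": "7", "amount": "19.99"}  # all strings

# Cast before the update so the stored fields stay numeric.
update = {"$set": {"qty": int(incoming["qty"]), "amount": float(incoming["amount"])}}
collection.update_one({"order_id": incoming["order_id"]}, update, upsert=True)

# Confirm the stored BSON type: $type "int" rather than "string".
print(collection.count_documents({"order_id": "A-100", "qty": {"$type": "int"}}))
```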
Ingest data from NoSQL Database (MongoDB) into AWS Cloud Storage (S3)
Contributed by @SriramGopal from Agilisium Consulting
The pipeline is designed to fetch records on an incremental basis from a document-oriented NoSQL database system (MongoDB in this case) and load them to cloud storage (Amazon S3) with partitioning logic. This use case is applicable to cloud data lake initiatives. The pipeline also includes date-based data partitioning at the storage layer and a data validation trail between source and target. (A condensed sketch of the load logic follows the downloads below.)
Parent Pipeline
S3 Writer Child Pipeline
Audit Update Child Pipeline
Control Table - Tracking
The control table is designed to hold the source load type (RDBMS, FTP, API, etc.) and the corresponding object name. Each object load records the load start/end times and the number of records/documents processed. The source record fetch count and the target table load count are calculated for every run. Based on the status of the load (S - success or F - failure), automated notifications can be triggered to the technical team.
Control Table Attributes:
UID – Primary key
SOURCE_TYPE – Type of source: RDBMS, API, social media, FTP, etc.
TABLE_NAME – Table or object name
START_DATE – Load start time
ENDDATE – Load end time
SRC_REC_COUNT – Source record count
RGT_REC_COUNT – Target record count
STATUS – 'S' (success) or 'F' (failed), based on the source/target load
Partitioned Load
For every load, the data is partitioned automatically in the storage layer (S3) based on the transaction timestamp.
Configuration
Sources: NoSQL database, MongoDB table
Targets: AWS storage
Snaps used:
Parent Pipeline: MongoDB - Find, Sort, File Writer, Mapper, Router, Copy, JSON Formatter, Redshift Insert, Redshift Select, Redshift - Multi Execute, S3 File Writer, S3 File Reader, Aggregate, Pipeline Execute
S3 Writer Child Pipeline: Mapper, JSON Formatter, S3 File Writer
Audit Update Child Pipeline: File Reader, JSON Parser, Mapper, Router, Aggregate, Redshift - Multi Execute
Downloads
IM_NoSQL_S3_Inc_load.slp (29.9 KB)
IM_Nosql_S3_Inc_load_S3writer.slp (4.8 KB)
IM_Nosql_S3_Inc_load_Audit_update.slp (12.0 KB)
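As referenced above, here is a condensed Python sketch of the core logic: incremental fetch from MongoDB, a date-partitioned write to S3, and a simple audit of source versus target counts. Bucket, database, collection, and field names are placeholders, and the real pattern keeps its control table in Redshift and implements each step with the snaps listed.

```python
# Condensed sketch of the pattern: incremental fetch from MongoDB,
# date-partitioned write to S3, and a record-count audit trail.
import json
from datetime import datetime, timezone

import boto3
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")["sales"]
s3 = boto3.client("s3")
BUCKET = "my-data-lake"

last_load = datetime(2019, 5, 1, tzinfo=timezone.utc)  # normally read from the control table
run_start = datetime.now(timezone.utc)

# Incremental fetch: only documents whose transaction timestamp is newer than the last load.
docs = list(mongo["orders"].find({"txn_ts": {"$gt": last_load}}).sort("txn_ts", 1))

# Date-based partitioning: one object per run under dt=YYYY-MM-DD/ in the bucket.
key = f"orders/dt={run_start:%Y-%m-%d}/orders_{run_start:%H%M%S}.json"
s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(docs, default=str).encode())

# Audit trail: compare the source fetch count with what landed in the target.
written = len(json.loads(s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()))
status = "S" if written == len(docs) else "F"
print({"table": "orders", "src_rec_count": len(docs), "rgt_rec_count": written,
       "status": status, "start_date": run_start.isoformat()})
```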