MongoDB
Monitor MongoDB application logs, profiler logs, and audit logs, and track replica set health, using the File Stream plugin
MongoDB Integration
Monitor and analyze MongoDB database logs in real-time using LogFlux Agent’s File Stream plugin. This configuration-based approach provides comprehensive NoSQL database monitoring, query performance analysis, and replica set health tracking for MongoDB deployments.
Overview
The MongoDB integration leverages LogFlux Agent’s File Stream plugin to provide:
- Real-time monitoring of MongoDB application logs, profiler logs, and audit logs
- Query performance analysis with slow operation detection and optimization insights
- Replica set monitoring with primary/secondary status and replication lag tracking
- Security auditing with authentication events and access control monitoring
- Sharding analysis for MongoDB sharded cluster deployments
- Connection monitoring with client connection and session tracking
Installation
The File Stream plugin is included with LogFlux Agent. Enable it for MongoDB log monitoring:
```bash
# Enable File Stream plugin
sudo systemctl enable --now logflux-filestream

# Verify plugin status
sudo systemctl status logflux-filestream
```
MongoDB Configuration
Configure MongoDB logging in /etc/mongod.conf:
Basic Logging Configuration
```yaml
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
  logRotate: rename
  # Verbosity: 0 = informational (default), 1-5 = increasing debug levels
  verbosity: 0
  # Component-specific verbosity
  component:
    accessControl:
      verbosity: 1
    command:
      verbosity: 1
    control:
      verbosity: 1
    geo:
      verbosity: 1
    index:
      verbosity: 1
    network:
      verbosity: 1
    query:
      verbosity: 1
    replication:
      verbosity: 1
    sharding:
      verbosity: 1
    storage:
      verbosity: 1
    write:
      verbosity: 1
```
Enhanced Logging Configuration
```yaml
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
  logRotate: rename
  verbosity: 1
  quiet: false
  traceAllExceptions: true

# Profiler configuration
operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 100
  slowOpSampleRate: 1.0

# Audit logging (MongoDB Enterprise)
auditLog:
  destination: file
  format: JSON
  path: /var/log/mongodb/audit.log

# Security configuration
security:
  authorization: enabled
  javascriptEnabled: false

# Replica set configuration
replication:
  replSetName: "rs0"

# Sharding configuration (for sharded clusters)
sharding:
  clusterRole: shardsvr

# Network configuration
net:
  bindIp: 127.0.0.1,192.168.1.10
  port: 27017
  maxIncomingConnections: 65536
```
Audit Configuration (Enterprise)
```yaml
auditLog:
  destination: file
  format: JSON
  path: /var/log/mongodb/audit.log
  filter: '{
    atype: {
      $in: [
        "authenticate", "authCheck", "createUser", "dropUser",
        "createRole", "dropRole", "createIndex", "dropIndex",
        "createCollection", "dropCollection", "insert", "update", "delete"
      ]
    }
  }'
```
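To see which events such a filter keeps, you can check a sample audit entry against the filtered action types from the shell. The event document below is illustrative, not captured from a real server:

```shell
# Sample Enterprise audit event (illustrative)
event='{"atype":"createUser","ts":{"$date":"2023-05-01T12:00:00Z"},"result":0}'

# The filter above keeps events whose atype is in the $in list
if echo "$event" | grep -qE '"atype":"(authenticate|authCheck|createUser|dropUser)"'; then
  echo "event would be captured"
fi
```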
Basic Configuration
Configure the File Stream plugin to monitor MongoDB logs by creating /etc/logflux-agent/plugins/filestream-mongodb.toml:
```toml
[filestream.mongodb_main]
paths = ["/var/log/mongodb/mongod.log"]
format = "mongodb_log"
tags = ["mongodb", "database", "nosql"]
# TOML inline tables must stay on a single line
fields = { service = "mongodb", log_type = "main" }

[filestream.mongodb_audit]
paths = ["/var/log/mongodb/audit.log"]
format = "json"
tags = ["mongodb", "audit", "security"]
fields = { service = "mongodb", log_type = "audit" }

[filestream.mongodb_profiler]
paths = ["/var/log/mongodb/profiler.log"]
format = "json"
tags = ["mongodb", "profiler", "performance"]
fields = { service = "mongodb", log_type = "profiler" }
```
For MongoDB 4.2 and earlier, which write plain-text log lines, parse entries with a regex:

```toml
[filestream.mongodb_standard]
paths = ["/var/log/mongodb/mongod.log"]
format = "regex"
regex = '^(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}[+-]\d{4}) (?P<severity>[FEWID]) (?P<component>[A-Z]+)\s+\[(?P<context>[^\]]+)\] (?P<message>.*)$'
parse_timestamp = true
timestamp_field = "timestamp"
timestamp_format = "2006-01-02T15:04:05.000-0700"
```
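A quick way to sanity-check the field layout this pattern assumes is to pull the same fields out of a sample pre-4.4 log line with standard tools. The log line below is illustrative:

```shell
# Illustrative MongoDB 4.2-style log line
line='2023-05-01T12:00:00.123+0000 I NETWORK  [conn42] end connection 10.0.0.5:51234'

# severity is the single letter after the timestamp; component follows it
severity=$(echo "$line" | sed -E 's/^[0-9T:.+-]+ ([FEWID]) .*/\1/')
component=$(echo "$line" | awk '{print $3}')
echo "severity=$severity component=$component"
```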
For MongoDB 4.4 and later, which emit structured JSON logs:

```toml
[filestream.mongodb_structured]
paths = ["/var/log/mongodb/mongod.log"]
format = "json"
parse_timestamp = true
timestamp_field = "t"
timestamp_format = "2006-01-02T15:04:05.000Z"
tags = ["mongodb", "structured"]
fields = { service = "mongodb", log_type = "structured" }
```
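Note that in 4.4+ structured logs the `t` field is a `{"$date": ...}` object rather than a flat string, so mapping `timestamp_field = "t"` may need to reach the nested value. A quick shell check against an illustrative entry shows the shape and pulls out the one-letter severity field:

```shell
# A MongoDB 4.4+ structured log entry (illustrative)
entry='{"t":{"$date":"2023-05-01T12:00:00.123+00:00"},"s":"I","c":"NETWORK","id":22944,"ctx":"conn42","msg":"Connection ended"}'

# "s" holds the severity letter; "c" the component; "t.$date" the timestamp
sev=$(echo "$entry" | grep -o '"s":"[A-Z]*"' | cut -d'"' -f4)
echo "severity=$sev"
```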
Parse Enterprise audit logs as structured JSON:

```toml
[filestream.mongodb_audit_structured]
paths = ["/var/log/mongodb/audit.log"]
format = "json"
parse_timestamp = true
timestamp_field = "ts"
timestamp_format = "2006-01-02T15:04:05.000Z"
tags = ["mongodb", "audit"]
fields = { service = "mongodb", log_type = "audit" }
```
Monitor the output of the profiler export script below:

```toml
[filestream.mongodb_profiler_parsed]
paths = ["/var/log/mongodb/profiler-parsed.log"]
format = "json"
parse_timestamp = true
timestamp_field = "timestamp"
timestamp_format = "2006-01-02T15:04:05.000Z"
```
Create profiler log parsing script:
```bash
#!/bin/bash
# Parse MongoDB profiler data
# Note: on MongoDB 6.0+ use mongosh instead of the legacy mongo shell

MONGO_HOST="localhost:27017"
MONGO_DB="admin"

collect_profiler_data() {
    local timestamp=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)
    mongo --quiet --host "$MONGO_HOST" "$MONGO_DB" --eval "
        db.system.profile.find().limit(100).sort({ts: -1}).forEach(function(doc) {
            doc.timestamp = '$timestamp';
            doc.metric_type = 'profiler';
            // single-line JSON so each entry is one log line
            print(JSON.stringify(doc));
        });
    " 2>/dev/null | grep '^{' >> /var/log/mongodb/profiler-parsed.log
}

while true; do
    collect_profiler_data
    sleep 60
done
```
Advanced Configuration
```toml
[filestream.mongodb_performance]
paths = ["/var/log/mongodb/mongod.log"]
format = "mongodb_log"
tags = ["mongodb", "performance"]
fields = { service = "mongodb", log_type = "performance" }

# Filter for performance-related events
[filestream.mongodb_performance.processors.grep]
patterns = [
    "slow operation",
    "command.*took",
    "getmore.*took",
    "query.*took",
    "update.*took",
    "remove.*took",
    "ms$"
]
```
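You can preview which lines such patterns would pass through by running the same alternation over sample log text with `grep -E` (both lines below are illustrative):

```shell
# Two illustrative log lines: one slow command, one connection event
printf '%s\n' \
  'COMMAND [conn12] command appdb.users command: find { status: 1 } 250ms' \
  'NETWORK [conn13] end connection 10.0.0.5:51234' \
  | grep -E 'slow operation|command.*took|ms$'
```

Only the first line survives the filter (it ends in `250ms`, matching `ms$`).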
Replica Set Monitoring
```toml
[filestream.mongodb_replica_set]
paths = ["/var/log/mongodb/mongod.log"]
format = "mongodb_log"
tags = ["mongodb", "replication", "replica-set"]
fields = { service = "mongodb", log_type = "replica_set" }

# Filter for replication events
[filestream.mongodb_replica_set.processors.grep]
patterns = [
    "replica set",
    "election",
    "primary",
    "secondary",
    "heartbeat",
    "sync",
    "rollback",
    "oplog"
]
```
Connection Monitoring
```toml
[filestream.mongodb_connections]
paths = ["/var/log/mongodb/mongod.log"]
format = "mongodb_log"
tags = ["mongodb", "connections"]
fields = { service = "mongodb", log_type = "connections" }

# Filter for connection events
[filestream.mongodb_connections.processors.grep]
patterns = [
    "connection",
    "client",
    "auth",
    "login",
    "logout",
    "network"
]
```
Sharding Monitoring
```toml
[filestream.mongodb_sharding]
paths = ["/var/log/mongodb/mongod.log"]
format = "mongodb_log"
tags = ["mongodb", "sharding", "cluster"]
fields = { service = "mongodb", log_type = "sharding" }

# Filter for sharding events
[filestream.mongodb_sharding.processors.grep]
patterns = [
    "shard",
    "chunk",
    "migration",
    "balancer",
    "config server",
    "mongos"
]
```
Index and Storage Monitoring
```toml
[filestream.mongodb_storage]
paths = ["/var/log/mongodb/mongod.log"]
format = "mongodb_log"
tags = ["mongodb", "storage", "index"]
fields = { service = "mongodb", log_type = "storage" }

# Filter for storage and index events
[filestream.mongodb_storage.processors.grep]
patterns = [
    "index",
    "storage",
    "wiredTiger",
    "compact",
    "repair",
    "validate"
]
```
Usage Examples
Monitor MongoDB Operations
```bash
# Stream all MongoDB logs
logflux-cli stream --filter 'service:mongodb'

# Monitor performance issues
logflux-cli stream --filter 'service:mongodb AND log_type:performance'

# Track replica set events
logflux-cli stream --filter 'service:mongodb AND log_type:replica_set'
```
Performance Analysis

```bash
# Monitor slow operations
logflux-cli stream --filter 'service:mongodb AND message:slow'

# Track query performance
logflux-cli stream --filter 'service:mongodb AND component:QUERY'

# Monitor connection issues
logflux-cli stream --filter 'service:mongodb AND log_type:connections'
```
Security Monitoring
```bash
# Track authentication events
logflux-cli stream --filter 'service:mongodb AND log_type:audit AND atype:authenticate'

# Monitor user management
logflux-cli stream --filter 'service:mongodb AND (atype:createUser OR atype:dropUser)'

# Track data access
logflux-cli stream --filter 'service:mongodb AND (atype:insert OR atype:update OR atype:delete)'
```
Cluster Health Monitoring
```bash
# Monitor replica set status
logflux-cli stream --filter 'service:mongodb AND message:primary'

# Track sharding operations
logflux-cli stream --filter 'service:mongodb AND log_type:sharding'

# Monitor index operations
logflux-cli stream --filter 'service:mongodb AND message:index'
```
MongoDB Metrics Collection
Database Statistics
```bash
#!/bin/bash
# Collect MongoDB database statistics

MONGO_HOST="localhost:27017"

collect_db_stats() {
    local timestamp=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)
    # Get database list and statistics (--quiet suppresses the shell banner)
    mongo --quiet --host "$MONGO_HOST" --eval "
        db.adminCommand('listDatabases').databases.forEach(function(database) {
            var dbName = database.name;
            if (dbName !== 'admin' && dbName !== 'config' && dbName !== 'local') {
                var stats = db.getSiblingDB(dbName).stats();
                print(JSON.stringify({
                    timestamp: '$timestamp',
                    database: dbName,
                    collections: stats.collections,
                    objects: stats.objects,
                    dataSize: stats.dataSize,
                    storageSize: stats.storageSize,
                    indexes: stats.indexes,
                    indexSize: stats.indexSize,
                    metric_type: 'database_stats'
                }));
            }
        });
    " 2>/dev/null >> /var/log/mongodb/db-stats.log
}

while true; do
    collect_db_stats
    sleep 300  # Every 5 minutes
done
```
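Each emitted line can then be post-processed with standard tools, for example to flag databases whose index size exceeds their data size. The entry below is an illustrative sample of what the script writes:

```shell
# Illustrative database_stats entry as written by the script above
entry='{"database":"app","dataSize":1048576,"indexSize":2097152,"metric_type":"database_stats"}'

data=$(echo "$entry" | grep -o '"dataSize":[0-9]*' | cut -d: -f2)
idx=$(echo "$entry" | grep -o '"indexSize":[0-9]*' | cut -d: -f2)
if [ "$idx" -gt "$data" ]; then
  echo "indexes larger than data"
fi
```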
Replica Set Status
```bash
#!/bin/bash
# Monitor replica set status

MONGO_HOST="localhost:27017"

collect_replica_status() {
    local timestamp=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)
    mongo --quiet --host "$MONGO_HOST" --eval "
        var status = rs.status();
        if (status.ok === 1) {
            status.members.forEach(function(member) {
                print(JSON.stringify({
                    timestamp: '$timestamp',
                    name: member.name,
                    health: member.health,
                    state: member.state,
                    stateStr: member.stateStr,
                    uptime: member.uptime,
                    optime: member.optimeDate,
                    pingMs: member.pingMs,
                    metric_type: 'replica_member'
                }));
            });
        }
    " 2>/dev/null >> /var/log/mongodb/replica-status.log
}

while true; do
    collect_replica_status
    sleep 60
done
```
Connection Statistics
```bash
#!/bin/bash
# Monitor connection statistics

MONGO_HOST="localhost:27017"

collect_connection_stats() {
    local timestamp=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)
    mongo --quiet --host "$MONGO_HOST" --eval "
        var stats = db.serverStatus().connections;
        print(JSON.stringify({
            timestamp: '$timestamp',
            current: stats.current,
            available: stats.available,
            totalCreated: stats.totalCreated,
            active: stats.active,
            metric_type: 'connection_stats'
        }));

        var currentOp = db.currentOp();
        print(JSON.stringify({
            timestamp: '$timestamp',
            activeOperations: currentOp.inprog.length,
            metric_type: 'current_operations'
        }));
    " 2>/dev/null >> /var/log/mongodb/connection-stats.log
}

while true; do
    collect_connection_stats
    sleep 30
done
```
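Connection headroom can be derived from the `current` and `available` counters the script records; `available` is the remaining capacity under the configured connection limit. A minimal sketch with illustrative values:

```shell
# Illustrative serverStatus().connections counters
current=820
available=180

# Utilization relative to the configured connection limit
total=$(( current + available ))
pct=$(( current * 100 / total ))
echo "connection utilization: ${pct}%"
```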
Performance Metrics

```bash
#!/bin/bash
# Collect comprehensive performance metrics

MONGO_HOST="localhost:27017"

collect_performance_metrics() {
    local timestamp=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)
    mongo --quiet --host "$MONGO_HOST" --eval "
        var status = db.serverStatus();

        // Operation counters (cumulative since server start)
        var opcounters = status.opcounters;
        print(JSON.stringify({
            timestamp: '$timestamp',
            insert: opcounters.insert,
            query: opcounters.query,
            update: opcounters.update,
            delete: opcounters.delete,
            getmore: opcounters.getmore,
            command: opcounters.command,
            metric_type: 'operation_counters'
        }));

        // Memory usage (mapped fields are reported only by the legacy MMAPv1 engine)
        var mem = status.mem;
        print(JSON.stringify({
            timestamp: '$timestamp',
            resident: mem.resident,
            virtual: mem.virtual,
            mapped: mem.mapped,
            mappedWithJournal: mem.mappedWithJournal,
            metric_type: 'memory_usage'
        }));

        // WiredTiger cache statistics
        if (status.wiredTiger) {
            var cache = status.wiredTiger.cache;
            print(JSON.stringify({
                timestamp: '$timestamp',
                bytes_currently_in_cache: cache['bytes currently in the cache'],
                pages_evicted: cache['pages evicted by application threads'],
                pages_read_into_cache: cache['pages read into cache'],
                pages_written_from_cache: cache['pages written from cache'],
                metric_type: 'wiredtiger_cache'
            }));
        }
    " 2>/dev/null >> /var/log/mongodb/performance-stats.log
}

while true; do
    collect_performance_metrics
    sleep 60
done
```
Monitoring and Alerting
Key Metrics to Monitor
```toml
# Slow operation alert
[alerts.mongodb_slow_operations]
query = "service:mongodb AND message:slow AND component:COMMAND"
threshold = 5
window = "2m"
message = "MongoDB slow operations detected"

# Replica set failure alert
[alerts.mongodb_replica_failure]
query = "service:mongodb AND (stateStr:DOWN OR stateStr:UNKNOWN)"
threshold = 1
window = "30s"
message = "MongoDB replica set member failure: {{ .name }}"

# High connection usage alert
[alerts.mongodb_connection_limit]
query = "service:mongodb AND metric_type:connection_stats"
threshold = 1
window = "1m"
condition = "{{ .current }} > {{ mul .available 0.8 }}"
message = "MongoDB connection usage high: {{ .current }}/{{ .available }}"

# Authentication failure alert (audit result 18 = AuthenticationFailed)
[alerts.mongodb_auth_failures]
query = "service:mongodb AND log_type:audit AND atype:authenticate AND result:18"
threshold = 10
window = "1m"
message = "High MongoDB authentication failure rate"

# Replication lag alert
[alerts.mongodb_replication_lag]
query = "service:mongodb AND metric_type:replica_member AND stateStr:SECONDARY"
threshold = 1
window = "2m"
condition = "{{ .pingMs }} > 1000"
message = "MongoDB replication lag detected: {{ .name }}"

# Index creation alert
[alerts.mongodb_index_operations]
query = "service:mongodb AND message:index AND (message:create OR message:drop)"
threshold = 1
window = "10s"
message = "MongoDB index operation: {{ .message }}"

# Database lock alert
[alerts.mongodb_database_lock]
query = "service:mongodb AND message:lock AND severity:W"
threshold = 5
window = "1m"
message = "MongoDB database lock contention detected"
```
Dashboard Metrics
Monitor these key MongoDB metrics:
- Operation metrics (insert, query, update, delete rates)
- Performance metrics (operation latency, slow operations)
- Connection metrics (active connections, connection pool usage)
- Replica set health (member status, replication lag, elections)
- Memory usage (resident memory, virtual memory, cache usage)
- Storage metrics (database size, index size, collection counts)
- Security events (authentication attempts, privilege changes)
- Sharding metrics (chunk migrations, balancer activity)
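Note that `opcounters` values are cumulative since server start, so the operation *rates* listed above come from deltas between consecutive samples. A minimal sketch with illustrative values:

```shell
# Two opcounters.insert samples taken 60 seconds apart (illustrative)
insert_t0=120000
insert_t1=123600
interval=60

# Rate = delta of the cumulative counter over the sampling interval
rate=$(( (insert_t1 - insert_t0) / interval ))
echo "insert ops/sec: $rate"
```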
Troubleshooting
Common Issues
MongoDB logs not appearing:
```bash
# Check MongoDB is running
sudo systemctl status mongod

# Verify log configuration
grep -E "(systemLog|path)" /etc/mongod.conf

# Check log file permissions
sudo ls -la /var/log/mongodb/
```
Log parsing errors:
```bash
# Check log format
sudo tail -n 10 /var/log/mongodb/mongod.log

# Verify server version (4.4+ uses structured JSON logging)
mongo --eval "db.version()"

# Test log rotation (logRotate is an admin command)
mongo --eval "db.adminCommand({logRotate: 1})"
```
Performance monitoring issues:
```bash
# Check profiler status
mongo --eval "db.getProfilingStatus()"

# Enable profiling
mongo --eval "db.setProfilingLevel(2, {slowms: 100})"

# Check current operations
mongo --eval "db.currentOp()"
```
Replica set monitoring problems:
```bash
# Check replica set status
mongo --eval "rs.status()"

# Verify replica set configuration
mongo --eval "rs.conf()"

# Check oplog status (the oplog lives in the local database)
mongo --eval "db.getSiblingDB('local').oplog.rs.find().sort({ts: -1}).limit(5)"
```
Best Practices
- Enable profiling with appropriate slow operation thresholds
- Monitor index usage and create missing indexes
- Use connection pooling to manage database connections efficiently
- Regular maintenance with compact and repair operations
Security
- Enable authentication and role-based access control
- Use audit logging for compliance and security monitoring
- Monitor privilege changes and user management operations
- Implement network security with proper firewall rules
High Availability
- Configure replica sets with proper majority voting
- Monitor replication lag and implement alerts
- Use read preferences appropriately for application needs
- Implement proper backup strategies for data protection
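Replication lag in the sense used above is the gap between the primary's and a secondary's last applied optime. Given the two optimes as Unix timestamps (the values below are illustrative), the check reduces to:

```shell
# Last applied optimes as Unix seconds (illustrative values)
primary_optime=1700000100
secondary_optime=1700000094
threshold=10

lag=$(( primary_optime - secondary_optime ))
echo "replication lag: ${lag}s"
if [ "$lag" -gt "$threshold" ]; then
  echo "ALERT: replication lag above ${threshold}s"
fi
```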
Log Management
```conf
# Optimize log rotation for MongoDB
# Note: set systemLog.logRotate: reopen in mongod.conf so SIGUSR1 reopens
# the log file instead of renaming it a second time
/var/log/mongodb/*.log {
    daily
    rotate 30
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        killall -SIGUSR1 mongod
        systemctl reload logflux-filestream.service > /dev/null 2>&1 || true
    endscript
}
```
Monitoring Configuration
Enable comprehensive monitoring:
```yaml
systemLog:
  verbosity: 1
  component:
    command:
      verbosity: 1
    replication:
      verbosity: 1
    sharding:
      verbosity: 1

operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 100
```
Integration Examples
Docker Deployment
```yaml
version: '3.8'

services:
  mongodb:
    image: mongo:6.0
    container_name: mongodb
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - mongodb_data:/data/db
      - mongodb_logs:/var/log/mongodb
      - ./mongod.conf:/etc/mongod.conf
    ports:
      - "27017:27017"
    command: mongod --config /etc/mongod.conf

  logflux-agent:
    image: logflux/agent:latest
    volumes:
      - mongodb_logs:/var/log/mongodb:ro
      - ./logflux-config:/etc/logflux-agent/plugins
    depends_on:
      - mongodb

volumes:
  mongodb_data:
  mongodb_logs:
```
Kubernetes Deployment
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:6.0
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: admin
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: password
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongodb-data
          mountPath: /data/db
        - name: mongodb-logs
          mountPath: /var/log/mongodb
        - name: mongodb-config
          mountPath: /etc/mongod.conf
          subPath: mongod.conf
        command:
        - mongod
        - --config
        - /etc/mongod.conf
      volumes:
      - name: mongodb-config
        configMap:
          name: mongodb-config
      - name: mongodb-logs
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: mongodb-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi
```
Replica Set Configuration
```bash
#!/bin/bash
# Initialize MongoDB replica set

mongo --eval "
    rs.initiate({
        _id: 'rs0',
        members: [
            { _id: 0, host: 'mongodb-0:27017', priority: 2 },
            { _id: 1, host: 'mongodb-1:27017', priority: 1 },
            { _id: 2, host: 'mongodb-2:27017', priority: 1 }
        ]
    });

    // Wait for replica set to initialize (milliseconds)
    sleep(5000);

    // Check replica set status
    rs.status();

    // Configure read concern for monitoring
    db.adminCommand({
        setDefaultRWConcern: 1,
        defaultReadConcern: { level: 'majority' },
        defaultWriteConcern: { w: 'majority', j: true }
    });
"
```
Sharded Cluster Monitoring
```bash
#!/bin/bash
# Monitor MongoDB sharded cluster

collect_sharding_stats() {
    local timestamp=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)
    # Connect to mongos
    mongo --quiet --host mongos:27017 --eval "
        // Get balancer status
        var balancerStatus = sh.getBalancerState();
        print(JSON.stringify({
            timestamp: '$timestamp',
            balancer_active: balancerStatus,
            metric_type: 'balancer_status'
        }));

        // Get chunk distribution from the config database
        // (MongoDB 5.0+ keys chunks by collection UUID rather than namespace)
        var configDB = db.getSiblingDB('config');
        configDB.collections.find().forEach(function(coll) {
            var chunks = configDB.chunks.count(
                coll.uuid ? {uuid: coll.uuid} : {ns: coll._id}
            );
            print(JSON.stringify({
                timestamp: '$timestamp',
                collection: coll._id,
                chunks: chunks,
                metric_type: 'chunk_distribution'
            }));
        });
    " 2>/dev/null >> /var/log/mongodb/sharding-stats.log
}

while true; do
    collect_sharding_stats
    sleep 300
done
```
This comprehensive MongoDB integration provides real-time NoSQL database monitoring, query performance analysis, and replica set health tracking using LogFlux Agent’s File Stream plugin. The configuration-based approach offers detailed insights into database operations, cluster health, and performance characteristics across different MongoDB deployment scenarios.
Disclaimer
The MongoDB logo and trademarks are the property of MongoDB, Inc. LogFlux is not affiliated with, endorsed by, or sponsored by MongoDB, Inc. The MongoDB logo is used solely for identification purposes to indicate compatibility and integration capabilities.