Redis
Monitor Redis server logs, slow logs, client connections, and memory usage using the File Stream plugin
Redis Integration
Monitor and analyze Redis server logs in real-time using LogFlux Agent’s File Stream plugin. This configuration-based approach provides comprehensive Redis monitoring, performance analysis, and operational insights for Redis deployments.
Overview
The Redis integration leverages LogFlux Agent’s File Stream plugin to provide:
- Real-time monitoring of Redis server logs and slow query logs
- Performance analysis with command execution timing and slow operation tracking
- Connection monitoring with client connection and disconnection tracking
- Memory usage analysis with eviction events and memory pressure monitoring
- Cluster monitoring for Redis Cluster and Sentinel deployments
- Security monitoring with authentication events and access control
Installation
The File Stream plugin is included with LogFlux Agent. Enable it for Redis log monitoring:
# Enable File Stream plugin
sudo systemctl enable --now logflux-filestream
# Verify plugin status
sudo systemctl status logflux-filestream
Redis Configuration
Configure Redis logging in redis.conf:
Basic Logging Configuration
# Log level (debug, verbose, notice, warning)
loglevel notice
# Log file path
logfile /var/log/redis/redis-server.log
# Syslog support
syslog-enabled yes
syslog-ident redis
syslog-facility local0
# Slow log configuration
slowlog-log-slower-than 10000 # Microseconds (10ms)
slowlog-max-len 128
# Client output buffer limits
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Memory management
maxmemory-policy allkeys-lru
maxmemory-samples 5
# Persistence logging
save 900 1
save 300 10
save 60 10000
# AOF logging
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
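After editing redis.conf, restart Redis and confirm the logging settings took effect. A quick check, assuming a systemd-managed instance (the unit may be named redis or redis-server depending on the distribution):
# Restart Redis and verify the logging configuration is active
sudo systemctl restart redis-server   # or "redis" on some distributions
redis-cli CONFIG GET loglevel
redis-cli CONFIG GET logfile
redis-cli CONFIG GET slowlog-log-slower-than
sudo tail -n 5 /var/log/redis/redis-server.log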
Enhanced Logging Configuration
# Verbose logging for debugging
loglevel verbose
# Note: Redis has no per-query log file option in redis.conf; use the slow log
# above (or the MONITOR command for ad-hoc inspection) for query-level visibility
# Latency monitoring
latency-monitor-threshold 100 # Milliseconds
# Client connection logging
tcp-keepalive 300
timeout 300
# Security and ACL logging (Redis 6.0+)
acllog-max-len 128
# Replication logging
replica-read-only yes
replica-serve-stale-data yes
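Most of these settings can also be applied to a running instance with CONFIG SET and their effects inspected immediately. A sketch (ACL LOG requires Redis 6.0+):
# Apply the enhanced settings at runtime and inspect what they capture
redis-cli CONFIG SET loglevel verbose
redis-cli CONFIG SET latency-monitor-threshold 100
redis-cli CONFIG SET acllog-max-len 128
redis-cli LATENCY LATEST     # latency events recorded above the threshold
redis-cli ACL LOG 10         # recent authentication/ACL denials (Redis 6.0+)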
Basic Configuration
Configure the File Stream plugin to monitor Redis logs by creating /etc/logflux-agent/plugins/filestream-redis.toml:
[filestream.redis_server]
paths = ["/var/log/redis/redis-server.log"]
format = "redis_log"
tags = ["redis", "server", "nosql"]
fields = { service = "redis", log_type = "server" }
[filestream.redis_slow]
paths = ["/var/log/redis/redis-slow.log"]
format = "redis_slow"
tags = ["redis", "slow", "performance"]
fields = { service = "redis", log_type = "slow_log" }
[filestream.redis_sentinel]
paths = ["/var/log/redis/redis-sentinel.log"]
format = "redis_log"
tags = ["redis", "sentinel", "ha"]
fields = { service = "redis", component = "sentinel", log_type = "sentinel" }
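After creating the file, restart the File Stream plugin so it picks up the new sources and confirm it is running:
# Reload the File Stream plugin after adding the Redis configuration
sudo systemctl restart logflux-filestream
sudo systemctl status logflux-filestream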
Redis Standard Log Format
To parse the standard Redis server log line (pid, role, timestamp, level, message) into structured fields, use a regex-based configuration:
[filestream.redis_standard]
paths = ["/var/log/redis/redis-server.log"]
format = "regex"
regex = '^(?P<pid>\d+):(?P<role>\w+) (?P<timestamp>\d{2} \w{3} \d{4} \d{2}:\d{2}:\d{2}\.\d{3}) (?P<level>.) (?P<message>.*)$'
parse_timestamp = true
timestamp_field = "timestamp"
timestamp_format = "02 Jan 2006 15:04:05.000"
Redis Server Log with Client Info
[filestream.redis_detailed]
paths = ["/var/log/redis/redis-server.log"]
format = "regex"
regex = '^(?P<pid>\d+):(?P<role>\w+) (?P<timestamp>\d{2} \w{3} \d{4} \d{2}:\d{2}:\d{2}\.\d{3}) (?P<level>.) (?P<message>.*?)(?:\s+\[(?P<client_info>[^\]]+)\])?$'
parse_timestamp = true
timestamp_field = "timestamp"
timestamp_format = "02 Jan 2006 15:04:05.000"
Slow Log Monitoring
Ingest slow log entries that have been exported as JSON (an export script follows the configuration):
[filestream.redis_slowlog_parsed]
paths = ["/var/log/redis/slowlog-parsed.log"]
format = "json"
tags = ["redis", "slowlog", "parsed"]
fields = { service = "redis", log_type = "slowlog_parsed" }
Create a slow log export script; this version relies on redis-cli --json (available from Redis 7.0) and jq to emit one JSON object per entry:
#!/bin/bash
# Continuously export the Redis slow log as one JSON object per line
# (requires redis-cli 7.0+ for --json and the jq utility)
while true; do
    # Fetch up to slowlog-max-len entries so nothing is lost before the reset
    redis-cli --json SLOWLOG GET 128 | jq -c '.[] | {
        id: .[0],
        timestamp: .[1],
        duration_us: .[2],
        command: (.[3] | join(" ")),
        client: (.[4] // "")
    }' >> /var/log/redis/slowlog-parsed.log

    # Clear exported entries so the next iteration only sees new ones
    redis-cli SLOWLOG RESET > /dev/null
    sleep 60
done
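To keep the exporter running, one simple option is to launch it in the background; the script path below is only an example location:
# Run the slow log exporter in the background (example path)
chmod +x /usr/local/bin/redis-slowlog-export.sh
nohup /usr/local/bin/redis-slowlog-export.sh >/dev/null 2>&1 &
tail -n 3 /var/log/redis/slowlog-parsed.log   # confirm entries are arriving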
Advanced Configuration
Performance Monitoring
[filestream.redis_performance]
paths = ["/var/log/redis/redis-server.log"]
format = "redis_log"
tags = ["redis", "performance"]
fields = { service = "redis", log_type = "performance" }
# Filter for performance-related messages
[filestream.redis_performance.processors.grep]
patterns = [
"background saving",
"DB saved",
"AOF rewrite",
"memory usage",
"eviction",
"slow log",
"latency"
]
Connection Monitoring
[filestream.redis_connections]
paths = ["/var/log/redis/redis-server.log"]
format = "redis_log"
tags = ["redis", "connections"]
fields = { service = "redis", log_type = "connections" }
# Filter for connection events
[filestream.redis_connections.processors.grep]
patterns = [
"Accepted",
"Connection from",
"Client closed",
"timeout",
"max number of clients",
"AUTH"
]
Memory and Eviction Monitoring
[filestream.redis_memory]
paths = ["/var/log/redis/redis-server.log"]
format = "redis_log"
tags = ["redis", "memory", "eviction"]
fields = { service = "redis", log_type = "memory" }
# Filter for memory-related events
[filestream.redis_memory.processors.grep]
patterns = [
"evict",
"memory",
"maxmemory",
"OOM",
"swap",
"RSS"
]
Cluster and Sentinel Monitoring
[filestream.redis_cluster]
paths = ["/var/log/redis/redis-server.log"]
format = "redis_log"
tags = ["redis", "cluster"]
fields = { service = "redis", log_type = "cluster" }
# Filter for cluster events
[filestream.redis_cluster.processors.grep]
patterns = [
"cluster",
"node",
"slot",
"failover",
"master",
"slave"
]
[filestream.redis_sentinel_monitoring]
paths = ["/var/log/redis/redis-sentinel.log"]
format = "redis_log"
tags = ["redis", "sentinel", "ha"]
fields = { service = "redis", component = "sentinel", log_type = "high_availability" }
# Filter for Sentinel events
[filestream.redis_sentinel_monitoring.processors.grep]
patterns = [
"SDOWN",
"ODOWN",
"failover",
"switch-master",
"reset-master",
"new-epoch"
]
Usage Examples
Monitor Redis Server
# Stream all Redis logs
logflux-cli stream --filter 'service:redis'
# Monitor server logs only
logflux-cli stream --filter 'service:redis AND log_type:server'
# Track performance issues
logflux-cli stream --filter 'service:redis AND log_type:performance'
Performance Analysis
# Monitor slow operations
logflux-cli stream --filter 'service:redis AND log_type:slow_log'
# Track memory usage
logflux-cli stream --filter 'service:redis AND message:memory'
# Monitor eviction events
logflux-cli stream --filter 'service:redis AND message:evict'
Connection Monitoring
# Track client connections
logflux-cli stream --filter 'service:redis AND message:Accepted'
# Monitor authentication events
logflux-cli stream --filter 'service:redis AND message:AUTH'
# Track connection limits
logflux-cli stream --filter 'service:redis AND message:max'
High Availability Monitoring
# Monitor Sentinel events
logflux-cli stream --filter 'service:redis AND component:sentinel'
# Track failover events
logflux-cli stream --filter 'service:redis AND message:failover'
# Monitor cluster operations
logflux-cli stream --filter 'service:redis AND log_type:cluster'
Redis Metrics Collection
INFO Command Monitoring
Create metrics collection script:
#!/bin/bash
# Collect Redis INFO metrics and append one JSON object per sample

REDIS_HOST="localhost"
REDIS_PORT="6379"
REDIS_AUTH=""   # Set if authentication is required

collect_metrics() {
    local timestamp
    timestamp=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)

    # Get INFO output
    local info_output
    if [ -n "$REDIS_AUTH" ]; then
        info_output=$(redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" -a "$REDIS_AUTH" INFO)
    else
        info_output=$(redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" INFO)
    fi

    # INFO lines are "key:value" pairs; build a single JSON object per sample
    echo "$info_output" | tr -d '\r' | awk -v ts="$timestamp" '
        BEGIN { printf "{\"timestamp\": \"%s\"", ts }
        /^[^#]/ && /:/ {
            split($0, kv, ":")
            gsub(/[^a-zA-Z0-9_]/, "_", kv[1])
            if (kv[2] ~ /^[0-9]+(\.[0-9]+)?$/) {
                printf ", \"%s\": %s", kv[1], kv[2]
            } else {
                printf ", \"%s\": \"%s\"", kv[1], kv[2]
            }
        }
        END { print ", \"source\": \"redis-info\"}" }
    ' >> /var/log/redis/metrics.log
}

while true; do
    collect_metrics
    sleep 60
done
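A quick sanity check on the emitted samples, assuming jq is installed (the field names come from the standard INFO output):
# Inspect the most recent metrics sample
tail -n 1 /var/log/redis/metrics.log | \
  jq '{timestamp, connected_clients, used_memory, instantaneous_ops_per_sec}'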
Latency Monitoring
#!/bin/bash
# Monitor Redis latency and timestamp each sample
redis-cli --latency-history -i 5 | while read -r line; do
    echo "$(date -u +%Y-%m-%dT%H:%M:%S.000Z) INFO Redis latency: $line" >> /var/log/redis/latency.log
done
Memory Usage Monitoring
#!/bin/bash
# Monitor per-key memory usage for keys matching a pattern.
# MEMORY USAGE accepts a single key, so iterate over SCAN results;
# the pattern and sample limit below are examples - tune for your keyspace.
KEY_PATTERN="redis:*"

while true; do
    ts=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)
    redis-cli --scan --pattern "$KEY_PATTERN" | head -n 100 | \
    while read -r key; do
        bytes=$(redis-cli MEMORY USAGE "$key")
        echo "{\"timestamp\": \"$ts\", \"key\": \"$key\", \"bytes\": ${bytes:-0}}"
    done >> /var/log/redis/memory-usage.log
    sleep 300  # Every 5 minutes
done
Monitoring and Alerting
Alert Configuration
# High memory usage alert
[alerts.redis_memory_usage]
query = "service:redis AND (message:maxmemory OR message:evict)"
threshold = 5
window = "2m"
message = "Redis high memory usage detected"
# Slow operations alert
[alerts.redis_slow_operations]
query = "service:redis AND log_type:slow_log AND duration_us:>50000"
threshold = 10
window = "1m"
message = "Redis slow operations detected"
# Connection limit alert
[alerts.redis_connection_limit]
query = "service:redis AND message:max AND message:clients"
threshold = 1
window = "30s"
message = "Redis connection limit reached"
# Sentinel failover alert
[alerts.redis_failover]
query = "service:redis AND component:sentinel AND message:failover"
threshold = 1
window = "10s"
message = "Redis Sentinel failover event"
# Persistence failure alert
[alerts.redis_persistence_failure]
query = "service:redis AND (message:saving AND level:WARNING) OR (message:AOF AND level:WARNING)"
threshold = 1
window = "1m"
message = "Redis persistence operation failed"
Dashboard Metrics
Monitor these key Redis metrics; a quick way to pull them with redis-cli is shown after the list:
- Memory usage (used memory, peak memory, evictions)
- Connection metrics (connected clients, connection rate)
- Command statistics (operations per second, slow commands)
- Persistence metrics (RDB saves, AOF rewrites)
- Replication status (master-slave lag, connected slaves)
- Cluster health (node status, slot distribution)
- Sentinel status (master monitoring, failover events)
- Latency metrics (command latency, network latency)
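Most of these values are exposed by the INFO command; a quick way to pull the key fields with redis-cli (field names follow the standard INFO sections):
# Memory usage and evictions
redis-cli INFO memory | grep -E '^(used_memory|used_memory_peak|maxmemory):'
redis-cli INFO stats  | grep -E '^(evicted_keys|expired_keys|keyspace_hits|keyspace_misses):'
# Connections and command throughput
redis-cli INFO clients | grep -E '^(connected_clients|blocked_clients):'
redis-cli INFO stats   | grep -E '^(instantaneous_ops_per_sec|total_connections_received):'
# Persistence and replication
redis-cli INFO persistence | grep -E '^(rdb_last_bgsave_status|aof_last_bgrewrite_status):'
redis-cli INFO replication | grep -E '^(role|connected_slaves|master_repl_offset):'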
Troubleshooting
Common Issues
Redis logs not appearing:
# Check Redis is running
sudo systemctl status redis
# Verify log configuration
redis-cli CONFIG GET "*log*"
# Check log file permissions
sudo ls -la /var/log/redis/
Slow log monitoring issues:
# Check slow log configuration
redis-cli CONFIG GET "slowlog-*"
# View current slow log entries
redis-cli SLOWLOG GET 10
# Check slow log length
redis-cli SLOWLOG LEN
Memory monitoring problems:
# Check memory configuration
redis-cli CONFIG GET "*memory*"
# Get memory info
redis-cli INFO memory
# Check eviction policy
redis-cli CONFIG GET maxmemory-policy
Sentinel monitoring issues:
# Check Sentinel status
redis-cli -p 26379 SENTINEL masters
# Monitor Sentinel logs
tail -f /var/log/redis/redis-sentinel.log
# Check master status
redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
Best Practices
- Configure an appropriate slow log threshold (typically 10-50 ms); a runtime sketch of these practices follows this list
- Monitor memory usage and set appropriate maxmemory limits
- Use pipelining for bulk operations
- Monitor key expiration patterns
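A minimal sketch applying these practices at runtime; the values are examples and should be persisted in redis.conf to survive restarts:
# Slow log threshold and memory limits (example values)
redis-cli CONFIG SET slowlog-log-slower-than 10000    # 10 ms, in microseconds
redis-cli CONFIG SET maxmemory 2gb
redis-cli CONFIG SET maxmemory-policy allkeys-lru
# Pipelining: send several commands in a single round trip
printf 'SET demo:1 a\r\nSET demo:2 b\r\nGET demo:1\r\n' | redis-cli --pipe
# Watch expiration and eviction activity
redis-cli INFO stats | grep -E '^(expired_keys|evicted_keys):'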
Security
- Enable authentication with requirepass
- Use ACLs for fine-grained access control (Redis 6.0+)
- Monitor authentication failures and brute force attempts
- Disable or rename dangerous commands in production (see the hardening sketch below)
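A hardening sketch for the points above; the user name, password, and key pattern are placeholders, and command renaming must be done in redis.conf:
# Redis 6.0+: create a restricted application user with ACLs (illustrative rules)
redis-cli ACL SETUSER appuser on '>app-password' '~app:*' +get +set +del
# Require a password for the default user (later redis-cli calls then need --pass)
redis-cli CONFIG SET requirepass 'use-a-long-random-password'
# Review recent authentication failures and ACL denials
redis-cli --pass 'use-a-long-random-password' ACL LOG 10
grep -i 'auth' /var/log/redis/redis-server.log | tail -n 20
# Dangerous commands are renamed/disabled in redis.conf, e.g.: rename-command FLUSHALL ""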
High Availability
- Configure Redis Sentinel for automatic failover
- Monitor replication lag between the master and its replicas (see the health check below)
- Implement proper backup strategies for RDB and AOF
- Use Redis Cluster for horizontal scaling
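A quick health check covering these points, assuming Sentinel listens on port 26379 and monitors a master named mymaster; adjust for your topology:
# Replication role, attached replicas, and link status
redis-cli INFO replication | grep -E '^(role|connected_slaves|master_link_status):'
# Sentinel view of the monitored master and its quorum
redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
redis-cli -p 26379 SENTINEL ckquorum mymaster
# Confirm the last RDB save and AOF rewrite succeeded
redis-cli INFO persistence | grep -E '^(rdb_last_bgsave_status|aof_last_bgrewrite_status):'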
Log Management
# Optimize log rotation for Redis
/var/log/redis/*.log {
    daily
    rotate 30
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        # Redis reopens its log file on every write, so no Redis reload is needed
        systemctl reload logflux-filestream.service > /dev/null 2>&1 || true
    endscript
}
Monitoring Configuration
# Recommended monitoring settings
loglevel notice
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 100
notify-keyspace-events Ex # Publish key-expiration events on the keyevent channel
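With notify-keyspace-events Ex enabled, key-expiration events can be captured from the keyevent channel and written to a file for the File Stream plugin to tail; the output path is an example:
# Capture key-expiration events (database 0) into a log file
redis-cli --csv psubscribe '__keyevent@0__:expired' | \
while read -r line; do
    echo "$(date -u +%Y-%m-%dT%H:%M:%S.000Z) INFO keyspace event: $line"
done >> /var/log/redis/keyspace-events.log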
Integration Examples
Docker Deployment
version: '3.8'

services:
  redis:
    image: redis:7-alpine
    command: redis-server /usr/local/etc/redis/redis.conf --appendonly yes --loglevel notice
    volumes:
      - redis_data:/data
      - redis_logs:/var/log/redis
      - ./redis.conf:/usr/local/etc/redis/redis.conf
    ports:
      - "6379:6379"

  redis-sentinel:
    image: redis:7-alpine
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel.conf:/usr/local/etc/redis/sentinel.conf
      - redis_logs:/var/log/redis
    ports:
      - "26379:26379"
    depends_on:
      - redis

  logflux-agent:
    image: logflux/agent:latest
    volumes:
      - redis_logs:/var/log/redis:ro
      - ./logflux-config:/etc/logflux-agent/plugins
    depends_on:
      - redis

volumes:
  redis_data:
  redis_logs:
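Bring the stack up and confirm Redis is writing to the shared log volume; this assumes the mounted redis.conf sets logfile /var/log/redis/redis-server.log as shown earlier:
docker compose up -d
docker compose exec redis redis-cli ping
docker compose exec redis tail -n 20 /var/log/redis/redis-server.log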
Kubernetes Deployment
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          command: ["redis-server"]
          args: ["/etc/redis/redis.conf"]
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: redis-data
              mountPath: /data
            - name: redis-logs
              mountPath: /var/log/redis
            - name: redis-config
              mountPath: /etc/redis/redis.conf
              subPath: redis.conf
      volumes:
        - name: redis-config
          configMap:
            name: redis-config
        - name: redis-logs
          emptyDir: {}
  volumeClaimTemplates:
    - metadata:
        name: redis-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
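A sketch of rolling this out, assuming the manifest is saved as redis-statefulset.yaml and the redis-config ConfigMap provides redis.conf. Note that the redis-logs emptyDir is only visible inside the pod, so shipping those files requires a LogFlux Agent sidecar or node-level agent, mirroring the Docker example above:
kubectl create configmap redis-config --from-file=redis.conf
kubectl apply -f redis-statefulset.yaml
kubectl logs statefulset/redis                               # container stdout
kubectl exec redis-0 -- tail -n 20 /var/log/redis/redis-server.log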
Redis Cluster Monitoring
#!/bin/bash
# Monitor Redis Cluster health by logging one JSON line per cluster node

check_cluster_health() {
    local timestamp
    timestamp=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)

    # CLUSTER NODES fields: id, address, flags, master-id, ping-sent,
    # pong-recv, config-epoch, link-state, then the assigned slot ranges
    redis-cli CLUSTER NODES | \
    while read -r node_id ip_port flags master _ping _pong _epoch link_state slots; do
        echo "{\"timestamp\": \"$timestamp\", \"node_id\": \"$node_id\", \"address\": \"$ip_port\", \"flags\": \"$flags\", \"master\": \"$master\", \"link_state\": \"$link_state\", \"slots\": \"$slots\"}" >> /var/log/redis/cluster-health.log
    done
}

while true; do
    check_cluster_health
    sleep 60
done
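For ad-hoc checks alongside the script, CLUSTER INFO summarizes overall cluster state:
# Overall cluster health summary
redis-cli CLUSTER INFO | grep -E '^(cluster_state|cluster_slots_ok|cluster_known_nodes|cluster_size):'
# Count of master nodes currently known to this node
redis-cli CLUSTER NODES | grep -c master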
This comprehensive Redis integration provides real-time server monitoring, performance analysis, and operational insights using LogFlux Agent’s File Stream plugin. The configuration-based approach offers detailed visibility into Redis operations, memory usage, client connections, and high availability scenarios across different Redis deployment patterns.
Disclaimer
The Redis logo and trademarks are the property of Redis Ltd. LogFlux is not affiliated with, endorsed by, or sponsored by Redis Ltd. The Redis logo is used solely for identification purposes to indicate compatibility and integration capabilities.