Nginx Web Server Integration

Monitor and analyze Nginx access and error logs with LogFlux Agent using File Stream integration


The LogFlux Nginx integration monitors and analyzes Nginx web server logs through the File Stream plugin, providing comprehensive web server monitoring, request analysis, and error tracking. This configuration-based approach enables real-time log processing with custom parsing for Nginx access and error logs.

Overview

The Nginx integration provides:

  • Real-Time Log Monitoring: Live monitoring of Nginx access and error logs using File Stream
  • Custom Log Parsing: Structured parsing of Nginx log formats with field extraction
  • Access Log Analysis: Request tracking, response codes, user agents, and performance metrics
  • Error Log Processing: Error classification, severity levels, and issue detection
  • Security Monitoring: Failed authentication attempts, suspicious requests, and attack patterns
  • Performance Insights: Response times, request volumes, and traffic patterns
  • Custom Log Formats: Support for standard and custom Nginx log formats
  • Multi-Server Support: Monitor multiple Nginx instances from a single LogFlux Agent

Prerequisites

  • LogFlux Agent installed with File Stream plugin enabled (see File Stream Integration)
  • Nginx web server with accessible log files
  • Appropriate file permissions for LogFlux Agent to read Nginx logs
  • Optional: Custom Nginx log format configuration for enhanced parsing

Nginx Log Configuration

Configure Nginx Log Formats

First, configure Nginx to use structured log formats that provide detailed information:

# /etc/nginx/nginx.conf
http {
    # Combined-style log format ("combined" itself is predefined by Nginx
    # and cannot be redefined, so a custom name is used here)
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
    
    # Enhanced access log format with timing information
    log_format detailed '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" '
                       'rt=$request_time uct="$upstream_connect_time" '
                       'uht="$upstream_header_time" urt="$upstream_response_time"';
    
    # JSON log format for structured parsing
    log_format json escape=json '{'
        '"time_local":"$time_local",'
        '"remote_addr":"$remote_addr",'
        '"remote_user":"$remote_user",'
        '"request":"$request",'
        '"status": "$status",'
        '"body_bytes_sent":"$body_bytes_sent",'
        '"request_time":"$request_time",'
        '"http_referrer":"$http_referer",'
        '"http_user_agent":"$http_user_agent",'
        '"upstream_addr":"$upstream_addr",'
        '"upstream_response_time":"$upstream_response_time",'
        '"upstream_status":"$upstream_status"'
    '}';
    
    # Security-focused log format
    log_format security '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" '
                       '"$http_x_forwarded_for" "$http_x_real_ip" '
                       'rt=$request_time';
    
    # Configure access log
    access_log /var/log/nginx/access.log detailed;
    access_log /var/log/nginx/access.json.log json;
    
    # Configure error log with appropriate level
    error_log /var/log/nginx/error.log warn;
}

Virtual Host Configuration

# /etc/nginx/sites-available/example.com
server {
    listen 80;
    server_name example.com www.example.com;
    
    # Custom access log for this virtual host
    access_log /var/log/nginx/example.com.access.log detailed;
    error_log /var/log/nginx/example.com.error.log warn;
    
    # Security logging for sensitive endpoints
    location /admin {
        access_log /var/log/nginx/admin.access.log security;
        # ... other configuration
    }
    
    location /api {
        access_log /var/log/nginx/api.access.log json;
        # ... other configuration
    }
}

LogFlux Configuration

Basic Nginx Monitoring

Create or edit the File Stream plugin configuration for Nginx:

sudo nano /etc/logflux-agent/plugins/filestream.yaml

Basic Nginx monitoring configuration:

# Nginx Integration via File Stream
name: filestream-nginx
version: 1.0.0
source: filestream-plugin

# Agent connection
agent:
  socket_path: /tmp/logflux-agent.sock

# Nginx access logs
files:
  # Main access log
  - path: "/var/log/nginx/access.log"
    parser: "nginx_access"
    follow: true
    tail: true
    
    # Metadata for access logs
    metadata:
      labels:
        service: nginx
        log_type: access
        server: web-01
      
      # Parse Nginx access log format
      parsing:
        format: "nginx_combined"
        fields:
          remote_addr: "client_ip"
          remote_user: "user"
          time_local: "timestamp"
          request: "request_line"
          status: "status_code"
          body_bytes_sent: "response_size"
          http_referer: "referer"
          http_user_agent: "user_agent"
          request_time: "response_time"

  # Error logs
  - path: "/var/log/nginx/error.log"
    parser: "nginx_error"
    follow: true
    tail: true
    
    metadata:
      labels:
        service: nginx
        log_type: error
        server: web-01

# Batching for efficiency
batch:
  enabled: true
  size: 100
  flush_interval: 5s

Advanced Multi-Server Configuration

# Advanced Nginx Multi-Server Configuration
name: filestream-nginx-advanced
version: 1.0.0
source: filestream-plugin

# Agent connection
agent:
  socket_path: /tmp/logflux-agent.sock
  connect_timeout: 30s
  max_retries: 5
  retry_delay: 10s

# Multiple Nginx server logs
files:
  # Web server 1 - Access logs
  - path: "/var/log/nginx/web01.access.log"
    parser: "nginx_access_detailed"
    follow: true
    tail: true
    
    metadata:
      labels:
        service: nginx
        log_type: access
        server: web-01
        environment: production
      
      parsing:
        format: "custom"
        regex: '^(?P<client_ip>\S+) - (?P<user>\S+) \[(?P<timestamp>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" (?P<status_code>\d+) (?P<response_size>\d+) "(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)" rt=(?P<response_time>[\d.]+)'
        timestamp_format: "02/Jan/2006:15:04:05 -0700"
        level_mapping:
          2xx: "info"
          3xx: "info" 
          4xx: "warning"
          5xx: "error"

  # Web server 1 - Error logs
  - path: "/var/log/nginx/web01.error.log"
    parser: "nginx_error"
    follow: true
    tail: true
    
    metadata:
      labels:
        service: nginx
        log_type: error
        server: web-01
        environment: production
      
      parsing:
        format: "nginx_error"
        regex: '^(?P<timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) \[(?P<level>\w+)\] (?P<pid>\d+)#(?P<tid>\d+): (?P<message>.*?)(?:, client: (?P<client_ip>\S+))?(?:, server: (?P<server_name>\S+))?(?:, request: "(?P<request>[^"]*)")?'
        timestamp_format: "2006/01/02 15:04:05"

  # JSON format logs
  - path: "/var/log/nginx/api.json.log"
    parser: "json"
    follow: true
    tail: true
    
    metadata:
      labels:
        service: nginx
        log_type: api_access
        server: api-01
        format: json
      
      parsing:
        format: "json"
        timestamp_field: "time_local"
        timestamp_format: "02/Jan/2006:15:04:05 -0700"
        level_field: "status"
        message_field: "request"

  # Virtual host specific logs
  - path: "/var/log/nginx/admin.access.log"
    parser: "nginx_security"
    follow: true
    tail: true
    
    metadata:
      labels:
        service: nginx
        log_type: admin_access
        security: high
        monitoring: critical
      
      parsing:
        format: "security"
        regex: '^(?P<client_ip>\S+) - (?P<user>\S+) \[(?P<timestamp>[^\]]+)\] "(?P<request>[^"]+)" (?P<status_code>\d+) (?P<response_size>\d+) "(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)" "(?P<x_forwarded_for>[^"]*)" "(?P<x_real_ip>[^"]*)" rt=(?P<response_time>[\d.]+)'
        alert_patterns:
          - pattern: 'status_code >= 400'
            level: 'warning'
          - pattern: 'status_code >= 500'
            level: 'error'

# Advanced processing
processing:
  # Content filtering
  filters:
    # Exclude health checks and monitoring
    exclude_patterns:
      - 'GET /health'
      - 'GET /status'
      - 'GET /ping'
      - 'User-Agent.*monitor'
    
    # Include specific patterns for security
    include_patterns:
      - 'status_code:[45]\d{2}'
      - 'POST /admin'
      - 'POST /login'
  
  # Field processing
  field_processing:
    # Extract additional information
    extract_fields:
      - field: "request"
        regex: "(?P<method>\\S+) (?P<path>\\S+) (?P<protocol>\\S+)"
      - field: "user_agent"
        regex: "(?P<browser>[^/]+)/(?P<version>[^\\s]+)"
    
    # Normalize fields
    normalize:
      - field: "client_ip"
        type: "ip_address"
      - field: "status_code"
        type: "integer"
      - field: "response_size"
        type: "integer"
      - field: "response_time"
        type: "float"

# Enhanced metadata
metadata:
  global_labels:
    plugin: filestream
    integration: nginx
    environment: production
    datacenter: us-east-1
  
  # Custom field mapping
  field_mapping:
    client_ip: "nginx_client_ip"
    user_agent: "nginx_user_agent"
    response_time: "nginx_response_time_seconds"
    status_code: "nginx_status_code"
    server_name: "nginx_server"

# Performance optimization
performance:
  # Batch processing
  batch_size: 200
  flush_interval: 10s
  
  # Memory management
  max_memory: "256MB"
  line_buffer_size: 65536
  
  # File processing
  max_lines_per_read: 1000
  read_timeout: "30s"

# Resource limits
limits:
  max_files: 50
  memory_limit: "512MB"
  cpu_limit: "1"
  
  # Rate limiting
  max_lines_per_second: 10000

# Health monitoring
health:
  check_interval: 60s
  max_parse_errors: 100
  alert_on_file_missing: true
  stats_collection: true
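The `level_mapping` rule in the configuration above maps HTTP status-code classes to LogFlux log levels. A minimal Python sketch of that mapping logic (illustrative only; the File Stream plugin implements this internally):

```python
def status_to_level(status_code: int) -> str:
    """Map an HTTP status-code class to a log level, mirroring the
    level_mapping rule: 2xx/3xx -> info, 4xx -> warning, 5xx -> error."""
    if 200 <= status_code < 400:
        return "info"
    if 400 <= status_code < 500:
        return "warning"
    if 500 <= status_code < 600:
        return "error"
    return "info"  # default for 1xx and anything unexpected

print(status_to_level(200))  # info
print(status_to_level(404))  # warning
print(status_to_level(502))  # error
```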

Log Parsing Examples

Access Log Parsing

Standard Combined Format:

192.168.1.100 - - [20/Jan/2024:14:30:50 +0000] "GET /api/users HTTP/1.1" 200 1234 "https://example.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"

Parsed LogFlux Entry:

{
  "timestamp": "2024-01-20T14:30:50Z",
  "level": "info",
  "message": "GET /api/users HTTP/1.1",
  "node": "web-01",
  "metadata": {
    "source_type": "plugin",
    "source_name": "filestream",
    "service": "nginx",
    "log_type": "access",
    "nginx_client_ip": "192.168.1.100",
    "nginx_user": "-",
    "nginx_method": "GET",
    "nginx_path": "/api/users",
    "nginx_protocol": "HTTP/1.1",
    "nginx_status_code": 200,
    "nginx_response_size": 1234,
    "nginx_referer": "https://example.com/",
    "nginx_user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "plugin": "filestream",
    "integration": "nginx"
  }
}
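How the sample line above maps to those fields can be sketched with a regular expression using named capture groups, similar to the regexes in the advanced configuration (a simplified illustration, not the plugin's actual parser):

```python
import re

# Combined-style access log pattern with named groups per extracted field
COMBINED = re.compile(
    r'^(?P<client_ip>\S+) - (?P<user>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status_code>\d+) (?P<response_size>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = ('192.168.1.100 - - [20/Jan/2024:14:30:50 +0000] '
        '"GET /api/users HTTP/1.1" 200 1234 "https://example.com/" '
        '"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"')

fields = COMBINED.match(line).groupdict()
print(fields["method"], fields["path"], fields["status_code"])
# GET /api/users 200
```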

Error Log Parsing

Nginx Error Format:

2024/01/20 14:30:50 [error] 12345#0: *67890 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.100, server: example.com, request: "GET /api/data HTTP/1.1", upstream: "http://127.0.0.1:8080/api/data"

Parsed LogFlux Entry:

{
  "timestamp": "2024-01-20T14:30:50Z",
  "level": "error",
  "message": "connect() failed (111: Connection refused) while connecting to upstream",
  "node": "web-01",
  "metadata": {
    "source_type": "plugin",
    "source_name": "filestream",
    "service": "nginx",
    "log_type": "error",
    "nginx_level": "error",
    "nginx_pid": "12345",
    "nginx_tid": "0",
    "nginx_client_ip": "192.168.1.100",
    "nginx_server": "example.com",
    "nginx_request": "GET /api/data HTTP/1.1",
    "nginx_upstream": "http://127.0.0.1:8080/api/data",
    "plugin": "filestream",
    "integration": "nginx"
  }
}
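The error-log structure above (fixed timestamp/level/pid prefix followed by optional `key: value` context pairs) can be sketched in Python. This is an illustrative two-pass approach, not the plugin's actual parser:

```python
import re

# Fixed prefix of every Nginx error-log line: date, time, level, pid#tid
ERROR_PREFIX = re.compile(
    r'^(?P<timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) '
    r'\[(?P<level>\w+)\] (?P<pid>\d+)#(?P<tid>\d+): (?P<rest>.*)$'
)

line = ('2024/01/20 14:30:50 [error] 12345#0: *67890 connect() failed '
        '(111: Connection refused) while connecting to upstream, '
        'client: 192.168.1.100, server: example.com, '
        'request: "GET /api/data HTTP/1.1", '
        'upstream: "http://127.0.0.1:8080/api/data"')

rest = ERROR_PREFIX.match(line).group("rest")

# The tail carries optional 'key: value' context pairs appended by Nginx
context = dict(re.findall(r'(client|server|request|upstream): "?([^",]+)"?', rest))

# The message is everything before the first context pair,
# minus the leading *<connection-id> marker
message = re.sub(r'^\*\d+ ', '', rest.split(', client:')[0])

print(message)
print(context["server"])  # example.com
```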

JSON Format Parsing

JSON Access Log:

{
  "time_local": "20/Jan/2024:14:30:50 +0000",
  "remote_addr": "192.168.1.100",
  "request": "GET /api/users HTTP/1.1",
  "status": "200",
  "body_bytes_sent": "1234",
  "request_time": "0.123",
  "http_user_agent": "curl/7.64.1",
  "upstream_response_time": "0.098"
}

LogFlux Processing:

  • Automatically parsed as JSON
  • All fields preserved with nginx_ prefix
  • Timestamp converted to ISO format
  • Numeric fields properly typed
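The JSON processing steps listed above can be sketched as follows (an illustrative approximation of the pipeline, with the field-prefixing and type coercion done by hand):

```python
import json
from datetime import datetime

raw = ('{"time_local": "20/Jan/2024:14:30:50 +0000", '
       '"remote_addr": "192.168.1.100", '
       '"request": "GET /api/users HTTP/1.1", '
       '"status": "200", "body_bytes_sent": "1234", '
       '"request_time": "0.123"}')

record = json.loads(raw)  # automatically parsed as JSON

# Convert the Nginx timestamp to ISO 8601
ts = datetime.strptime(record.pop("time_local"), "%d/%b/%Y:%H:%M:%S %z")

# Coerce numeric fields and add the nginx_ prefix to the rest
entry = {"timestamp": ts.isoformat()}
for key, value in record.items():
    if key in ("status", "body_bytes_sent"):
        value = int(value)
    elif key == "request_time":
        value = float(value)
    entry[f"nginx_{key}"] = value

print(entry["timestamp"])     # 2024-01-20T14:30:50+00:00
print(entry["nginx_status"])  # 200
```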

Security Monitoring

Failed Authentication Detection

# Security monitoring configuration
files:
  - path: "/var/log/nginx/auth.log"
    parser: "nginx_auth"
    follow: true
    
    metadata:
      labels:
        service: nginx
        log_type: authentication
        security: critical
      
      # Security alert patterns
      alerts:
        - pattern: 'status_code: 401'
          message: "Unauthorized access attempt"
          level: "warning"
        - pattern: 'status_code: 403'
          message: "Forbidden access attempt"
          level: "warning"
        - pattern: 'user_agent: .*bot.*|.*crawler.*|.*scanner.*'
          message: "Suspicious bot activity"
          level: "info"

DDoS and Attack Detection

# Attack pattern detection
processing:
  security_patterns:
    # Rate limiting violations
    - name: "high_request_rate"
      pattern: 'client_ip repeat > 100 in 60s'
      action: "alert"
      level: "error"
    
    # SQL injection attempts
    - name: "sql_injection"
      pattern: 'path contains "union select|drop table|insert into"'
      action: "alert"
      level: "critical"
    
    # XSS attempts
    - name: "xss_attempt"
      pattern: 'path contains "<script|javascript:|onload="'
      action: "alert"
      level: "warning"
    
    # Directory traversal
    - name: "directory_traversal"
      pattern: 'path contains "../|..\\|/etc/passwd"'
      action: "alert" 
      level: "warning"

Performance Monitoring

Response Time Analysis

# Performance monitoring
files:
  - path: "/var/log/nginx/performance.log"
    parser: "nginx_performance"
    
    metadata:
      labels:
        service: nginx
        monitoring_type: performance
      
      # Performance thresholds
      performance_alerts:
        - metric: "response_time"
          threshold: 1.0
          condition: "greater_than"
          level: "warning"
          message: "Slow response detected"
        
        - metric: "response_time"
          threshold: 5.0
          condition: "greater_than"
          level: "error"
          message: "Very slow response detected"
        
        - metric: "upstream_response_time"
          threshold: 2.0
          condition: "greater_than"
          level: "warning"
          message: "Slow upstream response"

Traffic Analysis

# Traffic monitoring
processing:
  traffic_analysis:
    # Monitor request volumes
    - metric: "requests_per_minute"
      calculation: "count() per minute"
      thresholds:
        warning: 1000
        critical: 5000
    
    # Monitor error rates
    - metric: "error_rate"
      calculation: "count(status_code >= 400) / count() * 100"
      thresholds:
        warning: 5.0   # 5% error rate
        critical: 10.0  # 10% error rate
    
    # Monitor bandwidth usage
    - metric: "bandwidth_mbps"
      calculation: "sum(response_size) * 8 / 1048576 per minute"
      thresholds:
        warning: 100   # 100 Mbps
        critical: 500  # 500 Mbps
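The error-rate metric above is `count(status_code >= 400) / count() * 100`, classified against the warning and critical thresholds. A minimal Python sketch of that arithmetic (illustrative only):

```python
def error_rate(status_codes: list[int]) -> float:
    """Error rate as a percentage: count(status >= 400) / count() * 100."""
    if not status_codes:
        return 0.0
    errors = sum(1 for s in status_codes if s >= 400)
    return errors / len(status_codes) * 100

def classify_rate(rate: float, warning: float = 5.0, critical: float = 10.0) -> str:
    """Classify an error rate against the thresholds from the config."""
    if rate >= critical:
        return "critical"
    if rate >= warning:
        return "warning"
    return "ok"

codes = [200] * 93 + [404] * 4 + [500] * 3  # 7% of requests are errors
rate = error_rate(codes)
print(rate, classify_rate(rate))  # 7.0 warning
```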

Common Use Cases

E-commerce Website

# E-commerce monitoring
files:
  - path: "/var/log/nginx/ecommerce.access.log"
    metadata:
      labels:
        service: nginx
        application: ecommerce
        business_critical: true
      
      # Business-specific parsing
      parsing:
        business_fields:
          - field: "path"
            extract:
              - pattern: '/product/(?P<product_id>\d+)'
              - pattern: '/category/(?P<category>[^/]+)'
              - pattern: '/checkout/(?P<step>\w+)'
        
        # Transaction monitoring
        transaction_patterns:
          - pattern: 'POST /api/orders'
            type: 'order_creation'
          - pattern: 'POST /api/payment'
            type: 'payment_processing'
          - pattern: 'GET /api/inventory'
            type: 'inventory_check'

API Gateway

# API gateway monitoring
files:
  - path: "/var/log/nginx/api-gateway.log"
    metadata:
      labels:
        service: nginx
        component: api_gateway
        tier: critical
      
      parsing:
        api_fields:
          - field: "path"
            extract:
              - pattern: '/api/v(?P<api_version>\d+)/(?P<service>[^/]+)'
              - pattern: '/(?P<endpoint>[^?]+)'
          - field: "user_agent"
            extract:
              - pattern: '(?P<client_name>[^/]+)/(?P<client_version>[\d.]+)'
        
        # Rate limiting detection
        rate_limit_patterns:
          - pattern: 'status_code: 429'
            message: "Rate limit exceeded"
          - pattern: 'status_code: 503'
            message: "Service unavailable"

Load Balancer

# Load balancer monitoring
files:
  - path: "/var/log/nginx/loadbalancer.log"
    metadata:
      labels:
        service: nginx
        component: load_balancer
        infrastructure: core
      
      parsing:
        lb_fields:
          - field: "upstream_addr"
            extract:
              - pattern: '(?P<backend_ip>\d+\.\d+\.\d+\.\d+):(?P<backend_port>\d+)'
          - field: "upstream_status"
            type: "integer"
          - field: "upstream_response_time"
            type: "float"
        
        # Health check patterns
        health_patterns:
          - pattern: 'upstream_status: 502|503|504'
            message: "Backend server error"
            level: "error"
          - pattern: 'upstream_response_time > 5.0'
            message: "Slow backend response"
            level: "warning"

Monitoring and Alerting

LogFlux Queries for Nginx

# High error rate detection
logflux query 'service:nginx AND nginx_status_code:[400 TO 599] AND @timestamp:[now-5m TO now]' --count

# Slow requests
logflux query 'service:nginx AND nginx_response_time:>2.0 AND @timestamp:[now-1h TO now]' --format table

# Security incidents
logflux query 'service:nginx AND (nginx_status_code:401 OR nginx_status_code:403) AND @timestamp:[now-1h TO now]'

# Top client IPs
logflux query 'service:nginx AND log_type:access' --aggregation 'terms:nginx_client_ip' --size 10

# API endpoint analysis
logflux query 'service:nginx AND nginx_path:/api/*' --aggregation 'terms:nginx_path,avg:nginx_response_time'

Grafana Dashboard Queries

# Example Grafana dashboard queries for LogFlux
panels:
  - title: "Nginx Request Rate"
    query: 'service:nginx AND log_type:access'
    aggregation: 'rate'
    
  - title: "Error Rate by Status Code"
    query: 'service:nginx AND nginx_status_code:[400 TO 599]'
    aggregation: 'terms:nginx_status_code'
    
  - title: "Average Response Time"
    query: 'service:nginx AND nginx_response_time:*'
    aggregation: 'avg:nginx_response_time'
    
  - title: "Top URLs by Traffic"
    query: 'service:nginx AND nginx_path:*'
    aggregation: 'terms:nginx_path'

Troubleshooting

Common Issues

Log File Access:

# Check file permissions
ls -la /var/log/nginx/
sudo chmod o+r /var/log/nginx/*.log

# Verify the LogFlux Agent user can read the files
sudo -u logflux-agent head -n 5 /var/log/nginx/access.log

Log Rotation Issues:

# Configure logrotate for Nginx
sudo nano /etc/logrotate.d/nginx

/var/log/nginx/*.log {
    daily
    missingok
    rotate 52
    compress
    delaycompress
    notifempty
    create 644 www-data adm
    postrotate
        if [ -f /var/run/nginx.pid ]; then
            kill -USR1 `cat /var/run/nginx.pid`
        fi
    endscript
}

Parsing Issues:

# Debug parsing with verbose logging
processing:
  debug: true
  show_parsing_errors: true
  sample_failed_lines: 10

Log Format Validation

# Test Nginx configuration
sudo nginx -t

# Reload Nginx configuration
sudo systemctl reload nginx

# Inspect recent access log output
tail -n 5 /var/log/nginx/access.log

# Validate JSON format
tail -n 5 /var/log/nginx/access.json.log | jq '.'

Best Practices

Configuration Management

  1. Use structured log formats (JSON preferred) for easier parsing
  2. Separate logs by function (access, error, security) for better organization
  3. Configure appropriate log levels to balance detail and volume
  4. Implement log rotation to manage disk space

Security

  1. Monitor authentication logs for failed login attempts
  2. Set up alerts for suspicious patterns and attack attempts
  3. Log security-sensitive endpoints separately for enhanced monitoring
  4. Exclude sensitive data from logs (passwords, tokens, personal information)

Performance

  1. Use efficient log formats that balance readability and parsing speed
  2. Configure appropriate batching for high-traffic servers
  3. Monitor log file sizes and implement rotation policies
  4. Filter out noise (health checks, monitoring requests) to focus on important events

Operational

  1. Test log parsing before deploying to production
  2. Monitor plugin health and file accessibility
  3. Set up proper alerting for error rates and performance issues
  4. Document custom log formats and parsing rules

Disclaimer

Nginx and the Nginx logo are trademarks of Nginx, Inc. LogFlux is not affiliated with, endorsed by, or sponsored by Nginx, Inc. The Nginx logo is used solely for identification purposes to indicate compatibility with Nginx web server logs.

Next Steps