LogFlux Agent API

API reference for the LogFlux Agent Local Server

The LogFlux Agent API provides a local server interface for receiving logs from plugins, SDKs, and applications. The agent accepts log entries via Unix socket or TCP connections and handles forwarding to LogFlux servers.

Communication Protocols

The LogFlux Agent supports two communication methods:

Unix Socket (Default)

  • Path: /tmp/logflux-agent.sock
  • Security: File permissions-based access control
  • Authentication: Not required
  • Recommended for: Local applications and SDKs

TCP Socket (Optional)

  • Address: 127.0.0.1:9999
  • Security: Shared secret authentication
  • Authentication: Required
  • Recommended for: Remote or containerized applications

Message Format

All messages are sent as newline-delimited JSON (NDJSON) over the socket connection. Each message must be terminated with a newline character (\n).
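As an illustration of the framing (the helper names here are made up, not part of the protocol), each message is one JSON document on a single line, and the receiver splits the stream on newlines:

```python
import json

def frame(message: dict) -> bytes:
    """Serialize a message as one NDJSON line."""
    return (json.dumps(message) + "\n").encode("utf-8")

def unframe(buffer: bytes) -> list:
    """Split a buffered NDJSON stream back into individual messages."""
    return [json.loads(line) for line in buffer.splitlines() if line]

# Two messages travel on the same stream, separated only by newlines
stream = frame({"version": "1.0", "action": "ping"}) + \
    frame({"version": "1.0", "payload": "hello", "entryType": 1,
           "source": "demo", "logLevel": 6})
messages = unframe(stream)
```

Because `json.dumps` never emits raw newlines, the newline terminator is an unambiguous message boundary.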

Authentication (TCP Only)

TCP connections require authentication using a shared secret:

{
  "version": "1.0",
  "action": "authenticate",
  "shared_secret": "your-shared-secret-here"
}

Response:

{
  "status": "success",
  "message": "Authentication successful"
}

Log Entry Submission

Single Log Entry

Submit a single log entry to the agent:

{
  "version": "1.0",
  "payload": "User login successful",
  "entryType": 1,
  "source": "auth-service",
  "logLevel": 6,
  "timestamp": "2025-08-31T10:30:00Z",
  "payloadType": "generic",
  "metadata": {
    "service": "authentication",
    "environment": "production"
  }
}

Batch Log Entries

Submit multiple log entries in a single request:

{
  "version": "1.0",
  "entries": [
    {
      "payload": "Database connection established",
      "entryType": 1,
      "source": "database",
      "logLevel": 6
    },
    {
      "payload": "Query execution failed",
      "entryType": 1,
      "source": "database",
      "logLevel": 3,
      "metadata": {
        "error": "timeout"
      }
    }
  ]
}

Batch Limits: Maximum 100 entries per batch
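Since batches cap at 100 entries, a client has to split larger sets into multiple requests. A minimal sketch (the helper name is hypothetical):

```python
def chunk_entries(entries, max_batch=100):
    """Yield batch requests that respect the 100-entry-per-batch limit."""
    for i in range(0, len(entries), max_batch):
        yield {"version": "1.0", "entries": entries[i:i + max_batch]}

entries = [{"payload": f"event {n}", "entryType": 1,
            "source": "demo", "logLevel": 6} for n in range(250)]
batches = list(chunk_entries(entries))  # 250 entries -> batches of 100, 100, 50
```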

Schema Reference

LogEntry Properties

Field        Type     Required  Description
version      string   No        Protocol version (default: "1.0")
payload      string   Yes       Log content (max 1MB)
entryType    integer  Yes       Entry type classification (1-5)
source       string   Yes       Source identifier (max 256 chars)
logLevel     integer  Yes       Syslog severity level (1-8)
timestamp    string   No        RFC3339 timestamp in UTC (auto-generated if omitted)
payloadType  string   No        Content type identifier (default: "generic")
metadata     object   No        Additional key-value metadata
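A client-side pre-check against these constraints might look like the following sketch (`validate_entry` is a hypothetical helper; the agent performs its own validation regardless):

```python
def validate_entry(entry: dict) -> list:
    """Return a list of problems; an empty list means the entry passes the checks."""
    problems = []
    # Required fields with their expected JSON types
    for field, typ in (("payload", str), ("entryType", int),
                       ("source", str), ("logLevel", int)):
        if not isinstance(entry.get(field), typ):
            problems.append(f"missing or invalid required field '{field}'")
    # Range and size limits from the schema table
    if isinstance(entry.get("entryType"), int) and not 1 <= entry["entryType"] <= 5:
        problems.append("entryType must be 1-5")
    if isinstance(entry.get("logLevel"), int) and not 1 <= entry["logLevel"] <= 8:
        problems.append("logLevel must be 1-8")
    if isinstance(entry.get("source"), str) and len(entry["source"]) > 256:
        problems.append("source exceeds 256 characters")
    if isinstance(entry.get("payload"), str) and len(entry["payload"]) > 1024 * 1024:
        problems.append("payload exceeds 1MB")
    return problems
```

Validating before sending avoids round-trips that end in error responses.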

Entry Types

Type Value Description
Log 1 Standard log entry
Metric 2 Numerical measurement
Trace 3 Distributed tracing data
Event 4 Discrete application event
Audit 5 Compliance/security audit entry

Log Levels (Syslog Compatible)

Level Name Description
1 EMERGENCY System is unusable
2 ALERT Action must be taken immediately
3 CRITICAL Critical conditions
4 ERROR Error conditions
5 WARNING Warning conditions
6 NOTICE Normal but significant
7 INFO Informational messages
8 DEBUG Debug-level messages
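In client code it helps to name these levels once rather than scatter magic numbers; a sketch of hypothetical constants mirroring the table above:

```python
# Severity constants matching the table above (1 = most severe)
EMERGENCY, ALERT, CRITICAL, ERROR, WARNING, NOTICE, INFO, DEBUG = range(1, 9)

LEVEL_NAMES = {
    EMERGENCY: "EMERGENCY", ALERT: "ALERT", CRITICAL: "CRITICAL",
    ERROR: "ERROR", WARNING: "WARNING", NOTICE: "NOTICE",
    INFO: "INFO", DEBUG: "DEBUG",
}
```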

Payload Types

Common payload type identifiers:

Type Description
generic Plain text (default)
generic_json JSON payloads
generic_xml XML payloads
systemd_journal systemd journal logs
syslog syslog format logs
metrics Metric data
application Application logs
container Container logs

Metadata

Additional key-value pairs for log enrichment:

  • Keys and values must be strings
  • Values limited to 1024 characters each
  • Total metadata size cannot exceed payload size limits

Example metadata:

{
  "service": "authentication",
  "environment": "production", 
  "region": "us-west-2",
  "version": "1.2.3",
  "request_id": "abc-123-def"
}
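Given the string-only keys and values and the 1024-character value limit, a client may want to normalize metadata before sending. A sketch with a hypothetical helper:

```python
def clamp_metadata(metadata: dict, max_value_len: int = 1024) -> dict:
    """Coerce keys and values to strings and truncate values to the 1024-char limit."""
    return {str(k): str(v)[:max_value_len] for k, v in metadata.items()}

clean = clamp_metadata({"request_id": "abc-123-def", "retries": 3, "blob": "x" * 4000})
```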

Health Check

Send a ping request to check connection health:

{
  "version": "1.0",
  "action": "ping"
}

Response:

{
  "status": "pong"
}
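Over an already-established connection, the health check is a one-line request/response exchange; a sketch (`ping` is a hypothetical helper):

```python
import json
import socket

def ping(sock: socket.socket, timeout: float = 5.0) -> bool:
    """Send a ping on an open agent connection and check for the pong reply."""
    sock.settimeout(timeout)
    sock.sendall((json.dumps({"version": "1.0", "action": "ping"}) + "\n").encode())
    reply = json.loads(sock.recv(4096).decode())
    return reply.get("status") == "pong"
```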

Response Handling

Successful Submissions

For valid log entries, the agent typically:

  • Accepts silently - No response is sent for successful log submissions
  • Maintains connection - Socket remains open for additional messages
  • Processes asynchronously - Logs are queued and forwarded to LogFlux servers

Error Responses

For invalid submissions, the agent may send error messages as JSON:

{
  "error": "Invalid log entry format: missing required field 'payload'"
}

Connection States

Condition                     Behavior
Valid log entry               No response, connection remains open
Invalid format                Error message sent, connection may remain open
Authentication failure (TCP)  Connection closed
Queue full                    Error message sent, may close connection
Service unavailable           Connection refused or closed

Connection Examples

Unix Socket (Node.js)

const net = require('net');

const client = net.createConnection('/tmp/logflux-agent.sock');

// Send log entry
const logEntry = {
  version: "1.0",
  payload: "User authentication successful",
  entryType: 1,
  source: "auth-service", 
  logLevel: 6,
  timestamp: new Date().toISOString()
};

client.write(JSON.stringify(logEntry) + '\n');
client.end();

TCP Socket (Python)

import socket
import json
import time

# Connect to the agent's TCP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('127.0.0.1', 9999))

# Authenticate
auth_request = {
    "version": "1.0",
    "action": "authenticate",
    "shared_secret": "your-shared-secret"
}
sock.sendall((json.dumps(auth_request) + '\n').encode())

# Read the authentication response before sending log entries
auth_response = json.loads(sock.recv(4096).decode())
if auth_response.get("status") != "success":
    raise RuntimeError(f"Authentication failed: {auth_response}")

# Send log entry
log_entry = {
    "version": "1.0",
    "payload": "Database query executed",
    "entryType": 1,
    "source": "database-service",
    "logLevel": 7,
    "timestamp": time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime())
}
sock.sendall((json.dumps(log_entry) + '\n').encode())
sock.close()

Batch Example (Go)

package main

import (
    "encoding/json"
    "net"
    "time"
)

type LogEntry struct {
    Version   string            `json:"version,omitempty"`
    Payload   string            `json:"payload"`
    EntryType int               `json:"entryType"`
    Source    string            `json:"source"`
    LogLevel  int               `json:"logLevel"`
    Timestamp string            `json:"timestamp"`
    Metadata  map[string]string `json:"metadata,omitempty"`
}

type BatchRequest struct {
    Version string     `json:"version"`
    Entries []LogEntry `json:"entries"`
}

func main() {
    // Connect to Unix socket
    conn, err := net.Dial("unix", "/tmp/logflux-agent.sock")
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // Create batch request
    batch := BatchRequest{
        Version: "1.0",
        Entries: []LogEntry{
            {
                Payload:   "Service started",
                EntryType: 1,
                Source:    "my-service",
                LogLevel:  6,
                Timestamp: time.Now().UTC().Format(time.RFC3339),
            },
            {
                Payload:   "Processing request",
                EntryType: 1,
                Source:    "my-service",
                LogLevel:  7,
                Timestamp: time.Now().UTC().Format(time.RFC3339),
            },
        },
    }

    // Send batch
    data, err := json.Marshal(batch)
    if err != nil {
        panic(err)
    }
    if _, err := conn.Write(append(data, '\n')); err != nil {
        panic(err)
    }
}

Best Practices

1. Connection Management

  • Unix Socket: Create new connections for each batch or use connection pooling
  • TCP Socket: Authenticate once per connection, reuse connections when possible
  • Always properly close connections to prevent resource leaks

2. Error Handling

  • Handle network errors gracefully with retries
  • Implement exponential backoff for reconnection attempts
  • Log connection failures for debugging
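The retry-with-backoff pattern above can be sketched as follows (the function names are made up; tune `base` and `cap` to taste):

```python
import random
import time

def backoff_delays(base: float = 0.5, cap: float = 30.0, attempts: int = 6):
    """Yield exponentially growing, jittered delays for reconnection attempts."""
    for attempt in range(attempts):
        yield min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)

def connect_with_retry(dial, base: float = 0.5, cap: float = 30.0, attempts: int = 6):
    """Call dial() until it succeeds, sleeping a growing delay between failures."""
    last_error = None
    for delay in backoff_delays(base, cap, attempts):
        try:
            return dial()
        except OSError as exc:
            last_error = exc
            time.sleep(delay)
    raise last_error
```

The jitter spreads out reconnection storms when many clients lose the agent at once.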

3. Performance Optimization

  • Use batch requests for high-volume logging (up to 100 entries)
  • Buffer log entries locally before sending batches
  • Consider async/non-blocking I/O for better throughput
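Local buffering before sending can be as simple as this sketch (`BatchBuffer` is hypothetical; in practice `send` would write the batch request as NDJSON to the socket):

```python
class BatchBuffer:
    """Accumulate log entries locally and hand off batches of up to 100."""

    def __init__(self, send, max_batch: int = 100):
        self.send = send          # callable that ships one batch request dict
        self.max_batch = max_batch
        self.entries = []

    def add(self, entry: dict):
        self.entries.append(entry)
        if len(self.entries) >= self.max_batch:
            self.flush()

    def flush(self):
        if self.entries:
            self.send({"version": "1.0", "entries": self.entries})
            self.entries = []

sent = []
buf = BatchBuffer(sent.append, max_batch=2)
for n in range(5):
    buf.add({"payload": f"event {n}", "entryType": 1, "source": "demo", "logLevel": 6})
buf.flush()  # ships the final partial batch
```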

4. Timestamp Handling

  • Always use UTC timestamps
  • Include timezone information (Z suffix)
  • Let the agent auto-generate timestamps if precision isn’t critical
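A UTC RFC3339 timestamp with the Z suffix can be produced with the standard library (`utc_timestamp` is a hypothetical helper name):

```python
from datetime import datetime, timezone

def utc_timestamp() -> str:
    """RFC3339 timestamp in UTC with a trailing 'Z', second precision."""
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

ts = utc_timestamp()  # e.g. "2025-08-31T10:30:00Z"
```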

5. Metadata Usage

  • Include relevant context in metadata for better log analysis
  • Use consistent key names across your application
  • Keep metadata concise to avoid hitting size limits

Security Considerations

Unix Socket Security

  • File permissions control access to the socket
  • Only processes with appropriate permissions can connect
  • Recommended for single-machine deployments

TCP Socket Security

  • Use strong shared secrets (minimum 32 characters)
  • Rotate shared secrets periodically
  • Bind to localhost only (127.0.0.1) to prevent external access
  • Consider firewall rules for additional protection
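Generating a strong shared secret is straightforward with the standard library; a sketch (rotate the value periodically, as noted above):

```python
import secrets

def generate_shared_secret(nbytes: int = 32) -> str:
    """URL-safe random secret; 32 random bytes encode to roughly 43 characters."""
    return secrets.token_urlsafe(nbytes)

secret = generate_shared_secret()
```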

Data Privacy

  • Avoid logging sensitive information in payloads
  • Use structured logging with metadata to separate sensitive context

Configuration

The LogFlux Agent can be configured to:

  • Change Unix socket path
  • Modify TCP listening address and port
  • Set authentication requirements
  • Configure batch size limits
  • Adjust queue sizes and timeouts

See the Agent Configuration documentation for detailed setup instructions.

Integration with SDKs

Our official SDKs automatically handle the complexity of connecting to the LogFlux Agent. They provide:

  • Automatic agent discovery and connection
  • Connection pooling and retry logic
  • Structured logging interfaces
  • Error handling and fallback mechanisms