Agent API

The Agent API is a local socket interface for applications and SDKs to send logs to the LogFlux Agent running on the same machine. The agent handles encryption, batching, and forwarding to the cloud via the Ingestion API.

Communication Protocols

The LogFlux Agent supports two communication methods:

Unix Socket (Default)

  • Path: /tmp/logflux-agent.sock
  • Security: File permissions-based access control
  • Authentication: Not required
  • Recommended for: Local applications and SDKs

TCP Socket (Optional)

  • Address: 127.0.0.1:9999
  • Security: Shared secret authentication
  • Authentication: Required
  • Recommended for: Remote or containerized applications

Message Format

All messages are sent as newline-delimited JSON (NDJSON) over the socket connection. Each message must be terminated with a newline character (\n).
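The framing rule can be sketched with a small helper (the name `frame_message` is illustrative, not part of any SDK): one JSON object, one trailing newline, per message.

```python
import json

def frame_message(message: dict) -> bytes:
    """Serialize one message as a single NDJSON line: a JSON object
    followed by exactly one newline, ready to write to the socket."""
    return (json.dumps(message) + "\n").encode("utf-8")

# Two messages on the same connection are simply two framed lines
# written back to back.
ping = frame_message({"version": "1.0", "action": "ping"})
```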

Authentication (TCP Only)

TCP connections require authentication using a shared secret:

{
  "version": "1.0",
  "action": "authenticate",
  "shared_secret": "your-shared-secret-here"
}

Response:

{
  "status": "success",
  "message": "Authentication successful"
}

Log Entry Submission

Single Log Entry

Submit a single log entry to the agent:

{
  "version": "1.0",
  "payload": "User login successful",
  "entryType": 1,
  "source": "auth-service",
  "logLevel": 6,
  "timestamp": "2025-08-31T10:30:00Z",
  "payloadType": "generic",
  "metadata": {
    "service": "authentication",
    "environment": "production"
  }
}

Batch Log Entries

Submit multiple log entries in a single request:

{
  "version": "1.0",
  "entries": [
    {
      "payload": "Database connection established",
      "entryType": 1,
      "source": "database",
      "logLevel": 6
    },
    {
      "payload": "Query execution failed",
      "entryType": 1,
      "source": "database",
      "logLevel": 3,
      "metadata": {
        "error": "timeout"
      }
    }
  ]
}

Batch Limits: Maximum 100 entries per batch
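Clients that accumulate more than 100 entries must split them across multiple batch requests. A minimal sketch (the helper name `build_batches` is illustrative):

```python
import json

MAX_BATCH_SIZE = 100  # documented maximum entries per batch request

def build_batches(entries, batch_size=MAX_BATCH_SIZE):
    """Split a list of log entries into NDJSON batch request lines,
    each carrying at most batch_size entries."""
    requests = []
    for i in range(0, len(entries), batch_size):
        batch = {"version": "1.0", "entries": entries[i:i + batch_size]}
        requests.append(json.dumps(batch) + "\n")
    return requests
```

For example, 250 buffered entries yield three requests of 100, 100, and 50 entries.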

Schema Reference

LogEntry Properties

Field        Type     Required  Description
version      string   No        Protocol version (default: "1.0")
payload      string   Yes       Log content (max 1 MB)
entryType    integer  Yes       Entry type classification (1-5)
source       string   Yes       Source identifier (max 256 chars)
logLevel     integer  Yes       Syslog severity level (1-8)
timestamp    string   No        RFC3339 timestamp in UTC (auto-generated if omitted)
payloadType  string   No        Content type identifier (default: "generic")
metadata     object   No        Additional key-value metadata

See API Overview for entry types and payload structure. The Agent API supports types 1-5. Types 6-7 (Telemetry, TelemetryManaged) require the Ingestion API directly.

Log Levels (Syslog Compatible)

Level  Name       Description
1      EMERGENCY  System is unusable
2      ALERT      Action must be taken immediately
3      CRITICAL   Critical conditions
4      ERROR      Error conditions
5      WARNING    Warning conditions
6      NOTICE     Normal but significant
7      INFO       Informational messages
8      DEBUG      Debug-level messages

Payload Types

Common payload type identifiers:

Type             Description
generic          Plain text (default)
generic_json     JSON payloads
generic_xml      XML payloads
systemd_journal  systemd journal logs
syslog           syslog format logs
metrics          Metric data
application      Application logs
container        Container logs

Metadata

Additional key-value pairs for log enrichment:

  • Keys and values must be strings
  • Values limited to 1024 characters each
  • Total metadata size cannot exceed payload size limits

Example metadata:

{
  "service": "authentication",
  "environment": "production", 
  "region": "us-west-2",
  "version": "1.2.3",
  "request_id": "abc-123-def"
}
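The constraints above can be checked client-side before submission. A sketch of such a validator (the function name `validate_metadata` is illustrative, not an SDK API):

```python
def validate_metadata(metadata: dict) -> list:
    """Return a list of violations of the documented metadata rules:
    keys and values must be strings, each value at most 1024 characters."""
    problems = []
    for key, value in metadata.items():
        if not isinstance(key, str):
            problems.append(f"key {key!r} is not a string")
        if not isinstance(value, str):
            problems.append(f"value for {key!r} is not a string")
        elif len(value) > 1024:
            problems.append(f"value for {key!r} exceeds 1024 characters")
    return problems
```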

Health Check

Send a ping request to check connection health:

{
  "version": "1.0",
  "action": "ping"
}

Response:

{
  "status": "pong"
}
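A ping round-trip over the default Unix socket might look like the following sketch (the helper names are illustrative; the socket path is the documented default):

```python
import json
import socket

PING = {"version": "1.0", "action": "ping"}

def is_pong(reply_line: str) -> bool:
    """Parse one NDJSON reply line and check for the pong response."""
    try:
        return json.loads(reply_line).get("status") == "pong"
    except (ValueError, AttributeError):
        return False

def ping_agent(path="/tmp/logflux-agent.sock", timeout=2.0) -> bool:
    """Send a ping to the agent's Unix socket and wait for the reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        sock.connect(path)
        sock.sendall((json.dumps(PING) + "\n").encode())
        return is_pong(sock.makefile("r").readline())
```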

Response Handling

Successful Submissions

For valid log entries, the agent typically:

  • Accepts silently - No response is sent for successful log submissions
  • Maintains connection - Socket remains open for additional messages
  • Processes asynchronously - Logs are queued and forwarded to LogFlux servers

Error Responses

For invalid submissions, the agent may send error messages as JSON:

{
  "error": "Invalid log entry format: missing required field 'payload'"
}
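Because successful entries are accepted silently, one way to surface rejections is a short, bounded read after submitting: if the agent sent nothing within the window, treat the entry as accepted. This polling pattern is a sketch, not a documented client requirement, and the helper name is illustrative:

```python
import json
import socket

def check_for_error(sock: socket.socket, wait: float = 0.1):
    """Poll briefly for an error reply after a submission.

    Returns the agent's error string, or None if nothing arrived
    within the wait window (the entry was presumably accepted)."""
    sock.settimeout(wait)
    try:
        data = sock.recv(4096)
    except socket.timeout:
        return None
    if not data:
        return None
    # Replies are NDJSON; parse the first line.
    return json.loads(data.decode().splitlines()[0]).get("error")
```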

Connection States

Condition                     Behavior
Valid log entry               No response, connection remains open
Invalid format                Error message sent, connection may remain open
Authentication failure (TCP)  Connection closed
Queue full                    Error message sent, may close connection
Service unavailable           Connection refused or closed

Connection Examples

Unix Socket (Node.js)

const net = require('net');

const client = net.createConnection('/tmp/logflux-agent.sock');
client.on('error', (err) => console.error('Agent connection failed:', err.message));

// Send log entry (writes are buffered until the connection is established)
const logEntry = {
  version: "1.0",
  payload: "User authentication successful",
  entryType: 1,
  source: "auth-service",
  logLevel: 6,
  timestamp: new Date().toISOString()
};

client.write(JSON.stringify(logEntry) + '\n');
client.end();

TCP Socket (Python)

import socket
import json
import time

# Connect to the TCP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('127.0.0.1', 9999))

# Authenticate and wait for the agent's response before sending logs
auth_request = {
    "version": "1.0",
    "action": "authenticate",
    "shared_secret": "your-shared-secret"
}
sock.sendall((json.dumps(auth_request) + '\n').encode())

reader = sock.makefile('r')
auth_response = json.loads(reader.readline())
if auth_response.get("status") != "success":
    raise RuntimeError(f"Authentication failed: {auth_response}")

# Send log entry
log_entry = {
    "version": "1.0",
    "payload": "Database query executed",
    "entryType": 1,
    "source": "database-service",
    "logLevel": 7,
    "timestamp": time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime())
}
sock.sendall((json.dumps(log_entry) + '\n').encode())
sock.close()

Batch Example (Go)

package main

import (
    "encoding/json"
    "net"
    "time"
)

type LogEntry struct {
    Version   string            `json:"version,omitempty"`
    Payload   string            `json:"payload"`
    EntryType int               `json:"entryType"`
    Source    string            `json:"source"`
    LogLevel  int               `json:"logLevel"`
    Timestamp string            `json:"timestamp,omitempty"`
    Metadata  map[string]string `json:"metadata,omitempty"`
}

type BatchRequest struct {
    Version string     `json:"version"`
    Entries []LogEntry `json:"entries"`
}

func main() {
    // Connect to Unix socket
    conn, err := net.Dial("unix", "/tmp/logflux-agent.sock")
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // Create batch request
    batch := BatchRequest{
        Version: "1.0",
        Entries: []LogEntry{
            {
                Payload:   "Service started",
                EntryType: 1,
                Source:    "my-service",
                LogLevel:  6,
                Timestamp: time.Now().UTC().Format(time.RFC3339),
            },
            {
                Payload:   "Processing request",
                EntryType: 1,
                Source:    "my-service",
                LogLevel:  7,
                Timestamp: time.Now().UTC().Format(time.RFC3339),
            },
        },
    }

    // Send batch; check both marshal and write errors
    data, err := json.Marshal(batch)
    if err != nil {
        panic(err)
    }
    if _, err := conn.Write(append(data, '\n')); err != nil {
        panic(err)
    }
}

Best Practices

1. Connection Management

  • Unix Socket: Create new connections for each batch or use connection pooling
  • TCP Socket: Authenticate once per connection, reuse connections when possible
  • Always properly close connections to prevent resource leaks

2. Error Handling

  • Handle network errors gracefully with retries
  • Implement exponential backoff for reconnection attempts
  • Log connection failures for debugging
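A retry loop with exponential backoff might be sketched as follows (the helper name and default delays are illustrative choices, not part of the protocol):

```python
import socket
import time

def connect_with_backoff(path="/tmp/logflux-agent.sock",
                         max_attempts=5, base_delay=0.5):
    """Try to connect, doubling the wait after each failure
    (0.5s, 1s, 2s, ...); re-raise the last error when attempts run out."""
    for attempt in range(max_attempts):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.connect(path)
            return sock
        except OSError:
            sock.close()
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```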

3. Performance Optimization

  • Use batch requests for high-volume logging (up to 100 entries)
  • Buffer log entries locally before sending batches
  • Consider async/non-blocking I/O for better throughput
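The local-buffering idea can be sketched as a small accumulator that flushes a batch request whenever the 100-entry limit is reached (the class name `BatchBuffer` is illustrative, not an SDK type):

```python
import json

class BatchBuffer:
    """Accumulate log entries locally and emit them as NDJSON batch
    requests of at most `limit` entries each."""

    def __init__(self, send, limit=100):
        self.send = send      # callable that receives one NDJSON batch line
        self.limit = limit
        self.entries = []

    def add(self, entry):
        self.entries.append(entry)
        if len(self.entries) >= self.limit:
            self.flush()

    def flush(self):
        """Send whatever is buffered, if anything, and reset the buffer."""
        if not self.entries:
            return
        batch = {"version": "1.0", "entries": self.entries}
        self.send(json.dumps(batch) + "\n")
        self.entries = []
```

Calling `flush()` on shutdown ensures a final partial batch is not lost.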

4. Timestamp Handling

  • Always use UTC timestamps
  • Include timezone information (Z suffix)
  • Let the agent auto-generate timestamps if precision isn’t critical

5. Metadata Usage

  • Include relevant context in metadata for better log analysis
  • Use consistent key names across your application
  • Keep metadata concise to avoid hitting size limits

Security Considerations

Unix Socket Security

  • File permissions control access to the socket
  • Only processes with appropriate permissions can connect
  • Recommended for single-machine deployments

TCP Socket Security

  • Use strong shared secrets (minimum 32 characters)
  • Rotate shared secrets periodically
  • Bind to localhost only (127.0.0.1) to prevent external access
  • Consider firewall rules for additional protection

Data Privacy

  • Avoid logging sensitive information in payloads
  • Use structured logging with metadata to separate sensitive context

Configuration

The LogFlux Agent can be configured to:

  • Change Unix socket path
  • Modify TCP listening address and port
  • Set authentication requirements
  • Configure batch size limits
  • Adjust queue sizes and timeouts

See the Agent Configuration documentation for detailed setup instructions.

Integration with SDKs

Our official SDKs automatically handle the complexity of connecting to the LogFlux Agent.

The SDKs provide:

  • Automatic agent discovery and connection
  • Connection pooling and retry logic
  • Structured logging interfaces
  • Error handling and fallback mechanisms