Google Cloud Logging Integration

Retrieve and stream logs from Google Cloud Logging with LogFlux Agent


The LogFlux Google Cloud Logging integration retrieves and streams logs from Google Cloud Logging (formerly Stackdriver), enabling centralized log analysis across your Google Cloud infrastructure. The plugin supports multiple authentication methods, multi-project queries, and advanced filtering.

Overview

The Google Cloud Logging plugin provides:

  • Cloud Logging Integration: Direct connection to Google Cloud Logging service
  • Multiple Authentication Methods: Service accounts, ADC, and default credentials
  • Multi-Project Support: Query logs across multiple GCP projects simultaneously
  • Resource Type Filtering: Filter by specific GCP resource types (GCE, Cloud Functions, etc.)
  • Advanced Filtering: Use Cloud Logging’s powerful filter query syntax
  • Severity Filtering: Filter logs by severity levels
  • Follow Mode: Continuously poll for new log entries in real-time
  • Batch Processing: Efficient batching for high-volume log retrieval
  • Rich Metadata: Extract GCP resource information, HTTP requests, and operations

Installation

The Google Cloud Logging plugin is included with the LogFlux Agent but disabled by default.

Prerequisites

  • LogFlux Agent installed (see Installation Guide)
  • Google Cloud credentials configured (service account, ADC, or default credentials)
  • Appropriate IAM permissions for Cloud Logging access
  • Network connectivity to Google Cloud Logging endpoints
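
You can sanity-check these prerequisites from the agent host before enabling the plugin. A minimal sketch, assuming the gcloud CLI is installed and "my-project" is your project ID:

# Confirm credentials are available
gcloud auth list

# Confirm the Cloud Logging API is reachable and readable
gcloud logging logs list --project=my-project --limit=1

# Confirm network connectivity to the Logging endpoint
curl -sI https://logging.googleapis.com/ >/dev/null && echo "endpoint reachable"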

Required IAM Permissions

{
  "bindings": [
    {
      "role": "roles/logging.viewer",
      "members": [
        "serviceAccount:logflux@PROJECT_ID.iam.gserviceaccount.com"
      ]
    }
  ]
}
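
Rather than editing the policy JSON by hand, the same binding can be added with gcloud (substitute your project ID):

# Grant the Logging viewer role to the LogFlux service account
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:logflux@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/logging.viewer"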

Or create a custom role with minimal permissions:

{
  "title": "LogFlux Logging Reader",
  "description": "Read access to Cloud Logging for LogFlux",
  "stage": "GA",
  "includedPermissions": [
    "logging.entries.list",
    "logging.logEntries.list",
    "logging.logs.list",
    "logging.projects.get"
  ]
}
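
Save the definition above as, say, logflux-role.yaml (JSON is valid YAML, and gcloud accepts these fields via --file), then create the role:

# Create the custom role from the definition above (file name is an example)
gcloud iam roles create logfluxLoggingReader \
  --project=PROJECT_ID \
  --file=logflux-role.yaml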

Enable the Plugin

# Enable and start the GCP plugin
sudo systemctl enable --now logflux-gcplog

# Check status
sudo systemctl status logflux-gcplog

Configuration

Basic Configuration

Create or edit the Google Cloud Logging plugin configuration:

sudo nano /etc/logflux-agent/plugins/gcplog.yaml

Basic configuration:

# Google Cloud Logging Plugin Configuration
name: gcplog
version: 1.0.0
source: gcplog-plugin

# Agent connection
agent:
  socket_path: /tmp/logflux-agent.sock

# Google Cloud Configuration
gcp:
  # Project ID(s) to monitor
  project_id: "my-gcp-project"
  # project_ids: "project1,project2,project3"  # Multiple projects
  
  # Authentication method
  auth_method: "default"  # default, service-account, adc
  credentials_file: ""    # Path to service account JSON (optional)

# Log retrieval settings
logging:
  # Specific log names (optional)
  log_names: []
  
  # Resource types to filter (optional)
  resource_types: []
  
  # Advanced filter query
  filter: ""
  
  # Minimum severity level
  severity: ""  # DEFAULT, DEBUG, INFO, NOTICE, WARNING, ERROR, CRITICAL, ALERT, EMERGENCY
  
  # Follow mode for real-time streaming
  follow: true
  poll_interval: 30s
  
  # Maximum entries per request
  max_entries: 1000
  page_size: 1000

# Metadata and labeling
metadata:
  verbose: false
  labels:
    plugin: gcplog
    source: gcp

# Batching for efficiency
batch:
  enabled: true
  size: 100
  flush_interval: 5s
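
After editing the configuration, restart the plugin and confirm it is polling (service name as used throughout this guide):

# Apply the new configuration
sudo systemctl restart logflux-gcplog

# Watch the plugin logs for successful polling
sudo journalctl -u logflux-gcplog -f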

Advanced Configuration

# Advanced Google Cloud Logging Configuration
name: gcplog
version: 1.0.0
source: gcplog-plugin

# Enhanced agent settings
agent:
  socket_path: /tmp/logflux-agent.sock
  connect_timeout: 30s
  max_retries: 5
  retry_delay: 10s

# Google Cloud Configuration
gcp:
  # Multiple projects
  project_ids: "prod-app-123,staging-app-456,dev-app-789"
  
  # Authentication
  auth_method: "service-account"
  credentials_file: "/etc/gcp/logflux-sa.json"

# Advanced logging settings
logging:
  # Specific log names
  log_names:
    - "projects/prod-app-123/logs/stdout"
    - "projects/prod-app-123/logs/stderr"
    - "projects/prod-app-123/logs/nginx.access"
  
  # Resource type filtering
  resource_types:
    - "gce_instance"
    - "cloud_function"
    - "gke_container"
    - "cloud_run_revision"
  
  # Complex filter query
  filter: 'severity>="WARNING" AND (resource.type="cloud_function" OR resource.type="cloud_run_revision")'
  
  # Time range for historical data
  start_time: "-1h"  # 1 hour ago
  end_time: ""       # Now (empty = current time)
  
  # Real-time following
  follow: true
  poll_interval: 15s
  
  # Request limits
  max_entries: 5000
  page_size: 2000

# Enhanced metadata
metadata:
  verbose: true
  labels:
    plugin: gcplog
    source: gcp
    environment: production
  
  # Custom field mapping
  field_mapping:
    project_id: "gcp_project"
    log_name: "gcp_log_name"
    resource_type: "gcp_resource_type"
    severity: "gcp_severity"
    insert_id: "gcp_insert_id"

# Advanced batching
batch:
  enabled: true
  size: 500
  flush_interval: 10s
  
  # Memory management
  max_memory: 100MB

# Health monitoring
health:
  check_interval: 60s
  max_api_errors: 10
  alert_on_quota_exceeded: true

Usage Examples

Cloud Functions Logs

# Monitor specific Cloud Function
sudo logflux-gcplog \
  -project-id "my-project" \
  -resource-types "cloud_function" \
  -filter 'resource.labels.function_name="my-function"' \
  -follow

# Monitor all Cloud Functions with errors
sudo logflux-gcplog \
  -project-id "my-project" \
  -resource-types "cloud_function" \
  -severity "ERROR" \
  -follow

Google Kubernetes Engine (GKE) Logs

# GKE cluster monitoring
gcp:
  project_id: "my-project"

logging:
  resource_types:
    - "gke_container"
    - "k8s_container"
    - "k8s_pod"
  
  filter: 'resource.labels.cluster_name="my-cluster"'
  follow: true
  poll_interval: 20s

metadata:
  labels:
    service: gke
    cluster: my-cluster

Compute Engine Logs

# GCE instance monitoring
gcp:
  project_id: "my-project"

logging:
  resource_types:
    - "gce_instance"
  
  filter: 'resource.labels.instance_id="my-instance-id"'
  follow: true

metadata:
  labels:
    service: compute_engine
    instance: my-instance

Cloud Run Services

# Cloud Run service monitoring
gcp:
  project_id: "my-project"

logging:
  resource_types:
    - "cloud_run_revision"
  
  filter: 'resource.labels.service_name="my-service" AND severity>="INFO"'
  follow: true
  poll_interval: 30s

metadata:
  labels:
    service: cloud_run
    application: my-service

Command Line Usage

Basic Commands

# Monitor specific project
logflux-gcplog -project-id "my-project"

# Follow mode for real-time logs
logflux-gcplog -project-id "my-project" -follow

# Historical logs from last hour
logflux-gcplog -project-id "my-project" -start-time "-1h"

# Multiple projects
logflux-gcplog -project-ids "project1,project2,project3"

# Filter by resource type
logflux-gcplog -project-id "my-project" -resource-types "cloud_function,cloud_run_revision"

# Filter by severity
logflux-gcplog -project-id "my-project" -severity "ERROR"

Advanced Options

# Complex filter query
logflux-gcplog -project-id "my-project" \
  -filter 'severity>="WARNING" AND resource.type="cloud_function"'

# Specific log names
logflux-gcplog -project-id "my-project" \
  -log-names "projects/my-project/logs/stdout,projects/my-project/logs/stderr"

# Custom time range
logflux-gcplog -project-id "my-project" \
  -start-time "2024-01-20T10:00:00Z" \
  -end-time "2024-01-20T11:00:00Z"

# Service account authentication
logflux-gcplog -project-id "my-project" \
  -auth-method "service-account" \
  -credentials-file "/path/to/sa.json"

# Custom batch settings
logflux-gcplog -project-id "my-project" \
  -batch-size 200 \
  -flush-interval 10s \
  -max-entries 5000

# Verbose output
logflux-gcplog -project-id "my-project" -verbose

# Configuration file
logflux-gcplog -config /etc/logflux-agent/plugins/gcplog.yaml

Authentication Methods

Application Default Credentials (ADC)

# Initialize ADC with gcloud
gcloud auth application-default login

# Or set environment variable
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"

# Use in configuration
gcp:
  auth_method: "adc"
  project_id: "my-project"

Service Account Key File

# Download service account key
gcloud iam service-accounts keys create logflux-sa.json \
  --iam-account logflux@PROJECT_ID.iam.gserviceaccount.com

# Use in configuration
gcp:
  auth_method: "service-account"
  credentials_file: "/etc/gcp/logflux-sa.json"
  project_id: "my-project"

Default Credentials (GCE, GKE)

# For GCE instances or GKE pods with default service accounts
gcp:
  auth_method: "default"
  project_id: "my-project"

Environment Variables

# Set project ID via environment
export GOOGLE_CLOUD_PROJECT="my-project"
export GCLOUD_PROJECT="my-project"  # Alternative

# Credentials file
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/sa.json"
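
A quick way to confirm which project and credentials will actually be picked up (a sketch, assuming the gcloud CLI is installed):

# Show the active gcloud project
gcloud config get-value project

# Show any explicit credentials file
echo "GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS:-<unset>}"

# Verify that Application Default Credentials can mint a token
gcloud auth application-default print-access-token >/dev/null && echo "ADC OK"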

Cloud Logging Filter Syntax

Basic Filters

# Severity filtering
-filter 'severity="ERROR"'
-filter 'severity>="WARNING"'

# Resource type filtering
-filter 'resource.type="cloud_function"'
-filter 'resource.type=("cloud_function" OR "cloud_run_revision")'

# Time-based filtering
-filter 'timestamp>="2024-01-20T10:00:00Z"'
-filter 'timestamp>="2024-01-20T10:00:00Z" AND timestamp<="2024-01-20T11:00:00Z"'

# Text search
-filter 'textPayload:"error"'
-filter 'jsonPayload.message:"database connection failed"'

# Label filtering
-filter 'labels.env="production"'
-filter 'resource.labels.function_name="my-function"'
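
Filters can be validated with gcloud before wiring them into the plugin; gcloud logging read accepts the same query syntax:

# Dry-run a filter against the project before using it in the plugin
gcloud logging read 'severity>="WARNING" AND resource.type="cloud_function"' \
  --project=my-project --limit=5 --format=json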

Advanced Filters

# Combined conditions
-filter 'severity>="ERROR" AND resource.type="cloud_function" AND resource.labels.function_name="api-handler"'

# JSON payload filtering
-filter 'jsonPayload.level="error" AND jsonPayload.service="auth"'

# HTTP request filtering
-filter 'httpRequest.status>=400'
-filter 'httpRequest.requestMethod="POST" AND httpRequest.status>=500'

# Regular expressions
-filter 'textPayload=~".*connection.*timeout.*"'
-filter 'resource.labels.instance_id=~"web-.*"'

# Exclude patterns
-filter 'NOT textPayload:"health check"'
-filter 'severity!="DEBUG"'

Filter Examples by Service

Cloud Functions:

# Function errors
-filter 'resource.type="cloud_function" AND severity>="ERROR"'

# Specific function
-filter 'resource.type="cloud_function" AND resource.labels.function_name="my-function"'

# Function timeouts
-filter 'resource.type="cloud_function" AND textPayload:"timeout"'

Cloud Run:

# Service errors
-filter 'resource.type="cloud_run_revision" AND severity>="ERROR"'

# Specific service
-filter 'resource.type="cloud_run_revision" AND resource.labels.service_name="my-service"'

# HTTP errors
-filter 'resource.type="cloud_run_revision" AND httpRequest.status>=400'

GKE/Kubernetes:

# Pod logs
-filter 'resource.type="k8s_pod" AND resource.labels.namespace_name="default"'

# Container logs
-filter 'resource.type="gke_container" AND resource.labels.cluster_name="my-cluster"'

# Application logs
-filter 'resource.type="k8s_container" AND labels.k8s-pod/app="my-app"'

Metadata and Output Format

Metadata Fields

The plugin adds Google Cloud-specific metadata:

Field               Description                  Example
source_type         Always "plugin"              plugin
source_name         Always "gcplog"              gcplog
gcp_project         GCP project ID               my-project-123
gcp_log_name        Cloud Logging log name       projects/my-project/logs/stdout
gcp_resource_type   GCP resource type            cloud_function
gcp_severity        Cloud Logging severity       ERROR
gcp_insert_id       Unique entry identifier      abc123def456
gcp_receive_time    Log receive timestamp        2024-01-20T14:30:50.123Z

LogFlux Output Format

Input Cloud Logging Entry:

{
  "insertId": "abc123def456",
  "logName": "projects/my-project/logs/cloud-functions",
  "receiveTimestamp": "2024-01-20T14:30:50.123456Z",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "function_name": "my-function",
      "region": "us-central1"
    }
  },
  "severity": "ERROR",
  "textPayload": "Database connection failed",
  "timestamp": "2024-01-20T14:30:50.000Z"
}

Output LogFlux Log:

{
  "timestamp": "2024-01-20T14:30:50.000Z",
  "level": "info",
  "message": "Database connection failed",
  "node": "gcp",
  "metadata": {
    "source_type": "plugin",
    "source_name": "gcplog",
    "gcp_project": "my-project",
    "gcp_log_name": "projects/my-project/logs/cloud-functions",
    "gcp_resource_type": "cloud_function",
    "gcp_severity": "ERROR",
    "gcp_insert_id": "abc123def456",
    "gcp_receive_time": "2024-01-20T14:30:50.123Z",
    "gcp_function_name": "my-function",
    "gcp_region": "us-central1",
    "plugin": "gcplog",
    "environment": "production"
  }
}

Performance Optimization

High-Volume Configuration

# High-throughput settings
logging:
  max_entries: 10000
  page_size: 5000
  poll_interval: 10s

  # Use specific filters to reduce data volume
  filter: 'severity>="WARNING"'

batch:
  size: 1000
  flush_interval: 30s
  max_memory: 500MB

Cost Optimization

# Reduce Cloud Logging API calls
logging:
  poll_interval: 60s  # Less frequent polling
  max_entries: 500    # Smaller batch sizes
  page_size: 500
  
  # Use filters to reduce data transfer
  filter: 'severity>="ERROR"'
  resource_types: ["cloud_function"]  # Specific resources only
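
To estimate how much a filter reduces volume, count matching entries over a recent window. A rough sketch, assuming GNU date and the gcloud CLI:

# Count entries matching the filter in the last hour
SINCE=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
gcloud logging read "severity>=\"ERROR\" AND timestamp>=\"$SINCE\"" \
  --project=my-project --format='value(insertId)' | wc -l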

Multi-Project Optimization

# Run separate instances for different projects
# instead of using project_ids for better performance

# Project 1
logflux-gcplog -project-id "prod-project" -config prod-config.yaml

# Project 2  
logflux-gcplog -project-id "staging-project" -config staging-config.yaml

Monitoring and Alerting

Plugin Health Monitoring

#!/bin/bash
# check-gcp-plugin.sh

if ! systemctl is-active --quiet logflux-gcplog; then
    echo "CRITICAL: LogFlux GCP plugin is not running"
    exit 2
fi

# Check GCP connectivity
if ! gcloud logging logs list --project=my-project --limit=1 &>/dev/null; then
    echo "CRITICAL: Cannot connect to Google Cloud Logging API"
    exit 2
fi

# Check recent log processing
if ! journalctl -u logflux-gcplog --since="10 minutes ago" | grep -q "entries processed"; then
    echo "WARNING: No entries processed in last 10 minutes"
    exit 1
fi

echo "OK: LogFlux GCP plugin is healthy"
exit 0
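
The script can run on a schedule, for example via cron (install path is an example):

# Run the health check every 5 minutes and log failures to syslog
*/5 * * * * /usr/local/bin/check-gcp-plugin.sh || logger -t logflux "GCP plugin health check failed"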

Cloud Logging Metrics

# List logs-based metrics
gcloud logging metrics list --project=my-project

# Check API quota usage (alpha command; quotas are also visible in the
# Cloud Console under IAM & Admin > Quotas for logging.googleapis.com)
gcloud alpha services quota list \
  --service=logging.googleapis.com --consumer=projects/my-project

# Review configured log sinks
gcloud logging sinks list --project=my-project

Common Use Cases

Serverless Application Monitoring

# Cloud Functions + Cloud Run monitoring
gcp:
  project_id: "my-serverless-project"

logging:
  resource_types:
    - "cloud_function"
    - "cloud_run_revision"
  
  filter: 'severity>="WARNING"'
  follow: true
  poll_interval: 30s

metadata:
  labels:
    architecture: serverless
    environment: production

Microservices on GKE

# GKE cluster monitoring
gcp:
  project_id: "my-k8s-project"

logging:
  resource_types:
    - "gke_container"
    - "k8s_container"
    - "k8s_pod"
  
  filter: 'resource.labels.cluster_name="prod-cluster" AND severity>="INFO"'
  follow: true
  poll_interval: 20s

metadata:
  labels:
    platform: gke
    cluster: prod-cluster

Application Performance Monitoring

# APM with HTTP request tracking
gcp:
  project_id: "my-app-project"

logging:
  filter: 'httpRequest.status>=400 OR severity>="WARNING"'
  follow: true
  poll_interval: 15s

metadata:
  labels:
    monitoring_type: apm
    focus: performance

Security and Audit Logging

# Security event monitoring
gcp:
  project_id: "my-secure-project"

logging:
  filter: 'protoPayload.serviceName="cloudaudit.googleapis.com" OR severity>="ERROR"'
  follow: true
  poll_interval: 60s

metadata:
  labels:
    log_type: security
    compliance: required

Security Considerations

IAM Best Practices

{
  "bindings": [
    {
      "role": "roles/logging.viewer",
      "members": ["serviceAccount:logflux@PROJECT.iam.gserviceaccount.com"],
      "condition": {
        "title": "LogFlux Access",
        "description": "Restrict access to specific log types",
        "expression": "resource.name.startsWith(\"projects/PROJECT/logs/\")"
      }
    }
  ]
}
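
The equivalent conditional binding can be applied with gcloud (substitute your project):

# Add the role binding with an IAM condition limiting access to this project's logs
gcloud projects add-iam-policy-binding PROJECT \
  --member="serviceAccount:logflux@PROJECT.iam.gserviceaccount.com" \
  --role="roles/logging.viewer" \
  --condition='title=LogFlux Access,expression=resource.name.startsWith("projects/PROJECT/logs/")'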

Network Security

# Use private Google access for GCE instances
gcloud compute networks subnets update SUBNET \
  --region=REGION \
  --enable-private-ip-google-access

# Configure VPC Service Controls
gcloud access-context-manager perimeters create my-perimeter \
  --policy=POLICY_ID \
  --title="LogFlux Perimeter"

Credential Management

# Use workload identity for GKE
# Instead of service account keys
gcp:
  auth_method: "default"  # Uses workload identity
  project_id: "my-project"

# Rotate service account keys regularly
# Use Google Secret Manager for credential storage
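
For GKE, Workload Identity binds a Kubernetes service account to the GCP service account so no key file is needed. A sketch; the namespace and service account names are examples:

# Allow the Kubernetes SA to impersonate the GCP SA via Workload Identity
gcloud iam service-accounts add-iam-policy-binding \
  logflux@PROJECT_ID.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:PROJECT_ID.svc.id.goog[logflux/logflux-agent]"

# Annotate the Kubernetes SA with the GCP SA it maps to
kubectl annotate serviceaccount logflux-agent --namespace logflux \
  iam.gke.io/gcp-service-account=logflux@PROJECT_ID.iam.gserviceaccount.com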

Troubleshooting

Common Issues

Authentication Failures:

# Check credentials
gcloud auth list
gcloud auth application-default print-access-token

# Test Cloud Logging access
gcloud logging logs list --project=my-project --limit=1

# Check service account permissions
gcloud projects get-iam-policy my-project \
  --filter="bindings.members:serviceAccount:logflux@my-project.iam.gserviceaccount.com"

No Logs Retrieved:

# Verify logs exist
gcloud logging read --project=my-project --limit=5

# Check filter syntax
gcloud logging read 'severity>="INFO"' --project=my-project --limit=5

# Test resource type filter
gcloud logging read 'resource.type="cloud_function"' --project=my-project --limit=5

API Quota Exceeded:

# Check quota usage (alpha command)
gcloud alpha services quota list \
  --service=logging.googleapis.com --consumer=projects/my-project

# Request a quota increase through the Cloud Console:
# IAM & Admin > Quotas > filter on logging.googleapis.com > Edit Quotas

High Costs:

# Optimize for cost
logging:
  # Use specific resource types
  resource_types: ["cloud_function"]
  
  # Apply filters to reduce data volume
  filter: 'severity>="ERROR"'
  
  # Increase poll interval
  poll_interval: 300s  # 5 minutes
  
  # Retrieve fewer entries per request
  max_entries: 100

Debugging

# Enable verbose logging
sudo systemctl edit logflux-gcplog
# Add:
[Service]
Environment="LOGFLUX_LOG_LEVEL=debug"

# Inspect recent Cloud Logging API activity via audit logs
gcloud logging read 'logName:"cloudaudit.googleapis.com"' \
  --project=my-project --limit=10

# Check plugin logs
sudo journalctl -u logflux-gcplog -f

# Test connectivity
gcloud info --run-diagnostics

Best Practices

Configuration Management

  1. Use specific filters to reduce API calls and costs
  2. Target specific resource types rather than broad queries
  3. Set appropriate poll intervals based on log volume
  4. Use service accounts with minimal required permissions

Performance

  1. Optimize batch sizes for your log volume
  2. Use regional deployment when possible
  3. Monitor API quotas and adjust accordingly
  4. Cache authentication tokens to reduce overhead

Security

  1. Follow least privilege principle for IAM permissions
  2. Use workload identity instead of service account keys
  3. Enable audit logging for the plugin service account
  4. Rotate credentials regularly

Cost Management

  1. Use filters effectively to reduce data retrieval
  2. Monitor API usage through Cloud Monitoring
  3. Set up budget alerts for logging costs
  4. Archive old logs to cheaper storage classes

Disclaimer

Google Cloud Platform, GCP, Google Cloud Logging, and the Google Cloud logo are trademarks of Google LLC. LogFlux is not affiliated with, endorsed by, or sponsored by Google LLC. The Google Cloud services and logos are referenced solely for identification purposes to indicate compatibility with Google Cloud Logging.
