# Telemetry
True Life includes a full observability stack with OpenTelemetry integration, providing distributed tracing, structured logging, and metrics.
## Architecture
Flow: Web UI logs/spans → NUI fetch → FiveM Client → RPC → Server → OTLP HTTP → OTEL Collector → Tempo (traces) / Loki (logs) → Grafana
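The Web UI → client → server hop in this flow rides on standard FiveM NUI callbacks and network events. The snippet below is an illustrative sketch of that relay only: the callback name `telemetry`, the event name `telemetry:batch`, and the payload shape are hypothetical stand-ins for True Life's actual RPC layer.

```typescript
// Illustrative client-side relay (names are hypothetical): the Web UI posts a
// batch of logs/spans to an NUI callback, and the client forwards it to the
// server, which exports it over OTLP HTTP.
RegisterNuiCallbackType("telemetry");

on("__cfx_nui:telemetry", (batch: unknown, cb: (result: string) => void) => {
  // Relay the batch to the server-side telemetry pipeline.
  emitNet("telemetry:batch", batch);
  cb("ok");
});
```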
## Server Configuration

Configure telemetry via convars in `server.cfg`:

```
set otel_enabled "true"
set otel_endpoint "http://localhost:4318"
set otel_service_name "true_life"
```
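These convars can be read on the server with the standard FiveM `GetConvar` native. The snippet below is a minimal sketch of that pattern; how `initServerTelemetry` actually consumes the values is not shown on this page and may differ.

```typescript
// Minimal sketch: resolving the telemetry convars with the FiveM GetConvar
// native. The way initServerTelemetry consumes them internally may differ.
const otelEnabled = GetConvar("otel_enabled", "false") === "true";
const otelEndpoint = GetConvar("otel_endpoint", "http://localhost:4318");
const otelServiceName = GetConvar("otel_service_name", "true_life");

if (otelEnabled) {
  console.log(`[telemetry] exporting to ${otelEndpoint} as ${otelServiceName}`);
}
```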
## Pipeline Initialization

### Server

```typescript
// src/runtime/server/main.ts
import { initServerTelemetry, shutdownServerTelemetry } from "@core/telemetry/server";

// During bootstrap
initServerTelemetry({
  serviceName: "true_life",
  flushIntervalMs: 5000,
});

// During shutdown
await shutdownServerTelemetry();
```
### Client

```typescript
// src/runtime/client/main.ts
import { initClientTelemetry, shutdownClientTelemetry } from "@core/telemetry/client";

// During bootstrap
initClientTelemetry({
  flushIntervalMs: 2000,
});

// During shutdown
await shutdownClientTelemetry();
```
### Web UI

```tsx
// ui/src/main.tsx
import { initTelemetryForwarder, shutdownTelemetryForwarder } from "@ui/lib/telemetry";

// During app initialization
initTelemetryForwarder({
  flushIntervalMs: 1000,
});

// On app unmount
await shutdownTelemetryForwarder();
```
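In a React tree, the same init/shutdown pair can be bound to the root component's lifecycle. The provider below is only a sketch; the component name and placement are illustrative, while the two forwarder calls come from `@ui/lib/telemetry` as above.

```tsx
// Illustrative only: tie the forwarder lifecycle to a root React component.
import { useEffect, type ReactNode } from "react";
import { initTelemetryForwarder, shutdownTelemetryForwarder } from "@ui/lib/telemetry";

export function TelemetryProvider({ children }: { children: ReactNode }) {
  useEffect(() => {
    initTelemetryForwarder({ flushIntervalMs: 1000 });
    // Flush and stop forwarding when the UI unmounts.
    return () => {
      void shutdownTelemetryForwarder();
    };
  }, []);

  return <>{children}</>;
}
```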
## Observability Stack

The `infra/` directory contains the configuration for the full stack:

```
infra/
├── otel/            # OpenTelemetry Collector
│   └── otel-collector-config.yaml
├── tempo/           # Tempo (distributed tracing)
├── loki/            # Loki (log aggregation)
├── grafana/         # Grafana (visualization)
│   └── provisioning/
│       └── dashboards/   # Pre-configured dashboards
└── prometheus/      # Prometheus (metrics)
```
### Starting the Stack

```bash
# Start full observability stack
docker compose up -d otel-collector tempo loki grafana prometheus

# Or just the essentials
docker compose up -d otel-collector tempo loki grafana
```
### Accessing Services
| Service | URL | Description |
|---|---|---|
| Grafana | http://localhost:3001 | Dashboards and visualization |
| Prometheus | http://localhost:9090 | Metrics queries |
| Tempo | http://localhost:3200 | Trace queries |
| Loki | http://localhost:3100 | Log queries |
| OTEL Collector | http://localhost:4318 | OTLP HTTP receiver |
## Grafana Dashboards
Pre-configured dashboards are available:
- Service Overview - Request rates, error rates, latencies
- Trace Explorer - Search and visualize distributed traces
- Log Explorer - Search structured logs with trace correlation
- Player Metrics - Online players, resource usage
### Exploring Traces
- Open Grafana at http://localhost:3001
- Go to Explore → select the Tempo data source
- Search by trace ID, service, or operation name
- Click on a trace to see the full span hierarchy
### Log Correlation
Logs include trace context for correlation:
- Find a trace in Tempo
- Click "Logs for this trace" to see related logs in Loki
- Or search Loki with a `{traceId="..."}` filter
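The trace ID on each log record comes from the active span's context. As an illustration of how that correlation is produced, the snippet below uses the public `@opentelemetry/api` package directly; True Life's own structured logger (not shown here) presumably does the equivalent internally.

```typescript
// Illustration: attaching the active trace context to a structured log record.
import { trace } from "@opentelemetry/api";

function logWithTraceContext(message: string, fields: Record<string, unknown> = {}) {
  const spanContext = trace.getActiveSpan()?.spanContext();
  console.log(
    JSON.stringify({
      message,
      ...fields,
      traceId: spanContext?.traceId,
      spanId: spanContext?.spanId,
    }),
  );
}
```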
## OTEL Collector Configuration
The collector receives telemetry and routes it to backends:
```yaml
# infra/otel/otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 1s
    send_batch_size: 1024

exporters:
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true
  loki:
    endpoint: http://loki:3100/loki/api/v1/push

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/tempo]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [loki]
```
## Custom Metrics
Expose custom metrics via Prometheus:
```typescript
// Custom counter example
import { incrementCounter, setGauge } from "@core/metrics";

// Increment a counter
incrementCounter("banking_transfers_total", { status: "success" });

// Set a gauge
setGauge("players_online", playerCount);
```
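For example, a server-side handler might record both metrics when a bank transfer settles. Only `incrementCounter` and `setGauge` come from `@core/metrics`; the handler and its arguments below are illustrative.

```typescript
// Illustrative handler: only incrementCounter/setGauge are @core/metrics calls.
import { incrementCounter, setGauge } from "@core/metrics";

function onBankTransferSettled(success: boolean, onlinePlayers: number) {
  // Label the counter by outcome so Grafana can break down success vs. failure.
  incrementCounter("banking_transfers_total", { status: success ? "success" : "failure" });

  // Gauges report the latest value rather than a running total.
  setGauge("players_online", onlinePlayers);
}
```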
## Best Practices
- Use spans for operations - Wrap significant operations (see the sketch below)
- Include relevant attributes - Player IDs, amounts, etc.
- Set appropriate log levels - Avoid noise
- Monitor error rates - Set up alerts in Grafana
- Sample high-volume traces - Reduce storage costs
- Correlate logs with traces - Use trace context
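As a concrete illustration of the first two practices, wrap the operation in a span and attach searchable attributes. The sketch below uses the public `@opentelemetry/api` tracer directly; True Life's telemetry module may expose its own span helpers, and the function, span, and attribute names here are illustrative.

```typescript
// Sketch using the standard @opentelemetry/api tracer; the project's own
// telemetry helpers may wrap this differently.
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("true_life");

async function transferFunds(playerId: number, targetId: number, amount: number) {
  return tracer.startActiveSpan("banking.transfer", async (span) => {
    // Attach attributes that make the trace searchable in Tempo/Grafana.
    span.setAttribute("player.id", playerId);
    span.setAttribute("transfer.target", targetId);
    span.setAttribute("transfer.amount", amount);
    try {
      // ... perform the transfer ...
      return true;
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```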