Agent
The DAAM agent is a lightweight proxy that sits between developers and your PostgreSQL database. Deploy one agent per database — it enforces access policies, applies column-level data masking, and creates ephemeral database roles for each connection based on the developer's identity.
Overview
Developers connect through the agent as though it were a normal PostgreSQL server, but the agent intercepts every query and applies the policies and masking rules configured in the control plane.
- One agent per database — each agent proxies a single upstream PostgreSQL instance.
- Identity-aware — each developer gets an isolated database session tied to their control plane identity.
- Policy enforcement — table-level grants are evaluated on every query. Queries that reference tables not permitted by the developer's policy are rejected.
- Column-level masking — sensitive columns are masked in query results according to masking rules defined in the policy.
- Encrypted connectivity — developers reach the agent through encrypted tunnels established via the CLI, with automatic relay fallback.
- Offline resilience — agents cache policies to disk and continue enforcing them during control plane outages.
The agent never stores database credentials for developers. Isolated sessions are created on-the-fly when a secure tunnel is established and cleaned up when the connection closes.
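Because the agent speaks the PostgreSQL wire protocol, standard clients work unchanged. A minimal sketch of a developer-side connection, assuming the agent's default listen port of 6432 and a hypothetical user and database name:

```shell
# Hypothetical developer-side connection once a tunnel is up. The host,
# user, and database name are illustrative assumptions; the port matches
# the agent's default listen address (:6432).
AGENT_HOST=127.0.0.1
AGENT_PORT=6432
DSN="postgresql://dev@${AGENT_HOST}:${AGENT_PORT}/appdb"
echo "$DSN"
# psql "$DSN"   # behaves like any PostgreSQL server; run with a live agent
```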
Deployment
The agent ships as a single static binary with no runtime dependencies. Deploy it alongside your database using Docker, a direct binary download, or a systemd service.
Docker
Pull and run the agent container image. Pass your agent token (obtained from the console when registering a database) and point the agent at your upstream PostgreSQL instance:
```shell
docker run -d \
  --name daam-agent \
  --restart unless-stopped \
  -p 6432:6432 \
  -p 8080:8080 \
  -e DAAM_UPSTREAM_HOST=your-db-host \
  -e DAAM_UPSTREAM_PORT=5432 \
  -e DAAM_CONTROL_PLANE_URL=https://app.daam.dev \
  -e DAAM_AUTH_TOKEN=your-agent-token \
  <registry>/daam-agent:latest
```
Binary Download
Download the agent binary for your platform and run it with a configuration file:
```shell
# Download the binary
curl -sLO https://<release-url>/daam-agent-linux-amd64
chmod +x daam-agent-linux-amd64
mv daam-agent-linux-amd64 /usr/local/bin/daam-agent

# Run with a config file
daam-agent --config /etc/daam/agent.yaml
```
Systemd
For long-running deployments on Linux, create a systemd unit file so the agent starts on boot and restarts on failure:
```ini
[Unit]
Description=DAAM Agent
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/daam-agent --config /etc/daam/agent.yaml
Restart=on-failure
RestartSec=5s
User=daam
Group=daam
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
The agent registers itself with the control plane on first startup using the auth token. After registration, it receives a persistent credential and rotates it automatically. Store the auth token securely — it is only needed for the initial connection.
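To keep the token out of config files and shell history, it can be written to a file referenced by auth_token_file. A sketch, using a temporary directory in place of a real secrets path such as /run/secrets:

```shell
# Sketch: store the agent token in a file so the config can point at it
# via auth_token_file instead of an inline value. A temp dir stands in
# for a real secrets location here.
SECRETS_DIR=$(mktemp -d)
printf '%s' "your-agent-token" > "$SECRETS_DIR/agent-token"
chmod 0400 "$SECRETS_DIR/agent-token"   # readable by the agent user only
echo "token stored at $SECRETS_DIR/agent-token"
```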
Configuration Reference
The agent reads its configuration from a YAML file specified with the --config flag. Below is an annotated example showing all available fields:
```yaml
# Address the agent listens on for PostgreSQL connections
listen: ":6432"

# Upstream database connection
upstream:
  host: "localhost"
  port: 5432
  # Use dsn_file in production to avoid secrets in the config
  dsn_file: "/run/secrets/upstream-dsn"

# Control plane connection
control_plane:
  url: "https://app.daam.dev"
  # Use auth_token_file in production
  auth_token_file: "/run/secrets/agent-token"
  # Path where the agent stores its persistent credential
  credential_path: "/var/lib/daam/credential.json"
  # Path for the on-disk policy cache
  policy_cache_path: "/var/lib/daam/policy-cache.json"
  # Path for buffered audit events during outages
  audit_buffer_path: "/var/lib/daam/audit-buffer.json"

# Secure tunnel configuration
wireguard:
  listen_port: 51820
  private_key_path: "/var/lib/daam/wg-private.key"

# Health check endpoint
health:
  enabled: true
  port: 8080

# Logging configuration
logging:
  level: "info"
  queries: false
```
Core Fields
| Field | Default | Description |
|---|---|---|
| listen | :6432 | Address and port for the PostgreSQL proxy listener. |
| upstream.host | localhost | Hostname of the upstream PostgreSQL server. |
| upstream.port | 5432 | Port of the upstream PostgreSQL server. |
| upstream.dsn_file | — | Path to a file containing a PostgreSQL DSN. Recommended over inline credentials in production. |
| upstream.dsn | — | Inline PostgreSQL DSN. Use dsn_file instead in production to keep secrets out of config files. |
Control Plane Fields
| Field | Default | Description |
|---|---|---|
| control_plane.url | — | URL of the DAAM control plane. Required. |
| control_plane.auth_token_file | — | Path to a file containing the agent registration token. Recommended for production. |
| control_plane.auth_token | — | Inline agent registration token. Use auth_token_file instead in production. |
| control_plane.credential_path | credential.json | Path where the agent stores its persistent credential after registration. |
| control_plane.policy_cache_path | policy-cache.json | Path for the on-disk policy cache. Allows the agent to survive control plane restarts. |
| control_plane.audit_buffer_path | audit-buffer.json | Path for buffering audit events when the control plane is temporarily unreachable. |
Health and Logging Fields
| Field | Default | Description |
|---|---|---|
| health.enabled | true | Enable the HTTP health check endpoint. |
| health.port | 8080 | Port for the health check HTTP server. |
| logging.level | info | Log level: debug, info, warn, or error. |
| logging.queries | false | Log individual SQL queries. Enable for debugging, disable in production for performance. |
| logging.redact_queries | false | Redact query parameters from logged queries. Enable when query logging is on and queries may contain sensitive data. |
Health Checks
The agent exposes two HTTP health check endpoints when health.enabled is true. Use these for load balancer probes, Kubernetes liveness/readiness checks, and monitoring.
Basic Health Check
GET /health returns a simple status response. No authentication required.
```json
{
  "status": "healthy"
}
```
Returns HTTP 200 when the agent process is running and able to accept connections. Returns HTTP 503 when the agent is shutting down or has encountered a fatal error.
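A probe can extract the status field with standard tools. The sketch below parses a captured sample payload; against a live agent you would fetch the response with curl instead:

```shell
# Parse the status field from a /health response. The payload here is a
# captured sample; with a running agent you would fetch it:
#   RESPONSE=$(curl -fsS http://localhost:8080/health)
RESPONSE='{"status":"healthy"}'
STATUS=$(printf '%s' "$RESPONSE" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "$STATUS"
```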
Detailed Health Check
GET /health/detailed returns extended status including upstream database connectivity and control plane connection state. This endpoint requires authentication with the agent token.
| Status | HTTP Code | Meaning |
|---|---|---|
| healthy | 200 | All subsystems operational |
| degraded | 200 | Agent is running but the control plane connection is down (using cached policies) |
| unhealthy | 503 | Agent cannot serve traffic (e.g. upstream database unreachable) |
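For monitoring integrations, the three statuses map naturally onto check severities. A small sketch of that mapping — the exit-code convention and the choice to treat degraded as a warning are assumptions, not agent behavior:

```shell
# Map a detailed health status to a Nagios-style exit code. The statuses
# come from the table above; the severity mapping is a policy choice.
check_status() {
  case "$1" in
    healthy)  echo "OK";       return 0 ;;
    degraded) echo "WARNING";  return 1 ;;
    *)        echo "CRITICAL"; return 2 ;;
  esac
}
check_status healthy
```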
Kubernetes Probes
Example Kubernetes liveness and readiness probe configuration for the agent container:
```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```
Use the basic /health endpoint for liveness and readiness probes. The detailed endpoint is intended for operational dashboards and requires authentication, making it unsuitable for automated orchestrator probes.
Schema Introspection
The agent automatically discovers the schema of the upstream database (tables, columns, and data types) and reports it to the control plane. This powers the policy editor in the console — when you create or edit a policy, you can select from actual tables and columns rather than typing names manually.
- Automatic discovery — the agent discovers schemas, tables, and columns from the upstream database.
- On-demand refresh — the control plane can request a schema refresh at any time.
- Reported to control plane — the discovered schema is made available in the console for policy authoring.
- Read-only — the agent never modifies the upstream schema. Introspection uses read-only queries only.
The discovered schema is used only for the policy editor UI and does not affect query enforcement.
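The kind of read-only catalog query this discovery relies on looks roughly like the following. This is purely illustrative — the agent's actual introspection queries are internal:

```shell
# Illustrative information_schema query of the shape read-only schema
# discovery typically uses. Printed as a sketch; with access to the
# upstream you could run it via: psql "$UPSTREAM_DSN" -c "$QUERY"
QUERY="SELECT table_schema, table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_schema, table_name, ordinal_position"
echo "$QUERY"
```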
Policy Caching
Agents cache policies to disk so they can continue enforcing access controls during control plane outages. This ensures that a temporary loss of connectivity to the control plane does not leave your database unprotected or inaccessible.
- Disk persistence — policies are written to the path specified by control_plane.policy_cache_path whenever the agent receives a policy update.
- Integrity verification — the cache file is integrity-verified on startup. Corrupted caches are rejected.
- No expiry — cached policies do not expire. The agent uses them indefinitely until replaced by a fresh update from the control plane.
- Fail-closed on cold start — if the agent cannot verify its policies, it refuses connections. Security is never degraded by an outage.
| Scenario | Behavior |
|---|---|
| Control plane reachable | Agent uses live policies from the control plane and updates the disk cache |
| Control plane down, cache exists | Agent continues with cached policies. Health status reports "degraded" |
| Control plane down, no cache | Agent rejects all connections (fail-closed). Health status reports "unhealthy" |
| Control plane restored | Agent reconnects, receives latest policies, updates cache, drains buffered audit events |
Audit events generated during a control plane outage are buffered locally. When connectivity is restored, buffered events are drained to the control plane automatically.
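The fail-closed behavior in the table above reduces to a simple decision. A shell sketch of that logic (illustrative only; the agent implements this internally):

```shell
# Decision logic from the table above, as a sketch: prefer live policies,
# fall back to a verified cache, otherwise refuse connections.
decide() {  # $1 = control plane reachable (0/1), $2 = valid cache present (0/1)
  if [ "$1" -eq 1 ]; then
    echo "live policies (healthy)"
  elif [ "$2" -eq 1 ]; then
    echo "cached policies (degraded)"
  else
    echo "reject connections (fail-closed, unhealthy)"
  fi
}
decide 0 1
```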
Graceful Shutdown
The agent shuts down gracefully, draining active connections before exiting. The health endpoint reports unhealthy during shutdown so load balancers stop sending traffic.
Set the Kubernetes terminationGracePeriodSeconds to at least 65 seconds to allow the agent to finish draining connections before the pod is killed.
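In a pod spec, that looks like the following excerpt (the container name and image reference are assumptions):

```yaml
# Pod spec excerpt: allow the agent to drain connections before SIGKILL
spec:
  terminationGracePeriodSeconds: 65
  containers:
    - name: daam-agent
      image: <registry>/daam-agent:latest
```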