Docker Deployment

This guide covers deploying Neocache using Docker, the recommended method for most self-hosted deployments.

Prerequisites

  • Docker 20.10+ and Docker Compose v2
  • PostgreSQL 14+ (or use the bundled container)
  • S3-compatible storage (or local volume for testing)
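The Docker version requirement can be sanity-checked from the shell before going further. This is a generic sketch; the version_ge helper is ours, not part of Docker:

```shell
# Preflight: confirm the installed Docker engine meets the 20.10+ requirement.
# version_ge succeeds when $1 >= $2 (dot-separated version numbers).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Falls back to "0" if the daemon is unreachable or docker is not installed.
docker_version="$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0)"
if version_ge "$docker_version" "20.10"; then
  echo "Docker $docker_version: OK"
else
  echo "Docker $docker_version: need 20.10+" >&2
fi
```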

Quick Start with Docker Compose

Create a docker-compose.yml:

services:
  neocache:
    image: ghcr.io/neocache/neocache:latest
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://neocache:neocache@postgres:5432/neocache
      - STORAGE_TYPE=s3
      - S3_BUCKET=neocache
      - S3_REGION=us-east-1
      - S3_ENDPOINT=http://minio:9000
      - AWS_ACCESS_KEY_ID=minioadmin
      - AWS_SECRET_ACCESS_KEY=minioadmin
    depends_on:
      - postgres
      - minio

  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=neocache
      - POSTGRES_PASSWORD=neocache
      - POSTGRES_DB=neocache
    volumes:
      - postgres_data:/var/lib/postgresql/data

  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    environment:
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=minioadmin
    volumes:
      - minio_data:/data
    ports:
      - "9001:9001"  # MinIO Console

volumes:
  postgres_data:
  minio_data:
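Note that MinIO does not create buckets on its own. If Neocache does not create the configured S3_BUCKET at startup, a one-shot setup container using the MinIO client is a common pattern (a sketch; the createbucket service name and minio/mc usage are conventions, not part of Neocache):

```yaml
  createbucket:
    image: minio/mc
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      mc alias set local http://minio:9000 minioadmin minioadmin &&
      mc mb --ignore-existing local/neocache
      "
```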

Start the services:

docker compose up -d

Neocache will be available at http://localhost:8080.

Configuration

Environment Variables

Variable                Description                    Default
DATABASE_URL            PostgreSQL connection string   Required
STORAGE_TYPE            s3 or local                    s3
S3_BUCKET               S3 bucket name                 Required for S3
S3_REGION               AWS region                     us-east-1
S3_ENDPOINT             Custom S3 endpoint             AWS default
AWS_ACCESS_KEY_ID       S3 access key                  Required for S3
AWS_SECRET_ACCESS_KEY   S3 secret key                  Required for S3
LISTEN                  Address to bind                [::]:8080
API_ENDPOINT            Public URL for this server     Auto-detected

Using a Config File

For options not exposed as environment variables, mount a config file:

services:
  neocache:
    volumes:
      - ./config.toml:/etc/neocache/config.toml:ro
    environment:
      - NEOCACHE_CONFIG=/etc/neocache/config.toml

Example config.toml:

listen = "[::]:8080"
api-endpoint = "https://cache.example.com"

[database]
url = "postgres://neocache:neocache@postgres:5432/neocache"

[storage]
type = "s3"
bucket = "neocache"
region = "us-east-1"

[chunking]
nar-size-threshold = 65536
min-size = 16384
avg-size = 65536
max-size = 262144

[compression]
type = "zstd"
level = 3

[garbage-collection]
interval = "12h"
default-retention-period = "30d"

Production Deployment

With External PostgreSQL

For production, use a managed PostgreSQL instance:

services:
  neocache:
    image: ghcr.io/neocache/neocache:latest
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://user:pass@your-postgres-host:5432/neocache
      - STORAGE_TYPE=s3
      - S3_BUCKET=your-bucket
      - S3_REGION=us-east-1
      # AWS credentials from IAM role or env vars
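Managed Postgres endpoints typically require TLS. The standard libpq-style sslmode query parameter can be appended to the connection URL; whether Neocache's Postgres driver honors it is an assumption worth verifying:

```yaml
environment:
  # "?sslmode=require" is the standard libpq connection option;
  # confirm Neocache's driver supports it before relying on it
  - DATABASE_URL=postgres://user:pass@your-postgres-host:5432/neocache?sslmode=require
```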

With AWS S3

environment:
  - STORAGE_TYPE=s3
  - S3_BUCKET=my-neocache-bucket
  - S3_REGION=us-west-2
  # Use IAM roles in EC2/ECS, or set these:
  - AWS_ACCESS_KEY_ID=AKIA...
  - AWS_SECRET_ACCESS_KEY=...

With Cloudflare R2

environment:
  - STORAGE_TYPE=s3
  - S3_BUCKET=my-neocache-bucket
  - S3_REGION=auto
  - S3_ENDPOINT=https://ACCOUNT_ID.r2.cloudflarestorage.com
  - AWS_ACCESS_KEY_ID=your-r2-access-key
  - AWS_SECRET_ACCESS_KEY=your-r2-secret-key

TLS/HTTPS

Use a reverse proxy for TLS termination. Example with Caddy:

services:
  caddy:
    image: caddy:2-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
    depends_on:
      - neocache

  neocache:
    # ... same as above, but don't expose ports directly

Caddyfile:

cache.example.com {
    reverse_proxy neocache:8080
}
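Behind the proxy, Neocache only needs to be reachable on the internal Compose network, and API_ENDPOINT (see the environment-variable table above) should point at the public hostname. A sketch of the relevant service fields:

```yaml
services:
  neocache:
    image: ghcr.io/neocache/neocache:latest
    # no "ports:" mapping — only Caddy is published;
    # Caddy reaches neocache over the default Compose network
    environment:
      - API_ENDPOINT=https://cache.example.com
```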

Health Checks

The server exposes a health endpoint:

curl http://localhost:8080/health
# {"status": "ok"}
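After docker compose up -d, the service can take a few seconds to come up, so health probes are often wrapped in a small retry loop. A generic sketch; the retry helper is ours:

```shell
# retry CMD...: run CMD up to 5 times, 2 seconds apart, until it succeeds.
retry() {
  for _ in 1 2 3 4 5; do
    "$@" && return 0
    sleep 2
  done
  return 1
}

# Usage once the stack is up:
#   retry curl -fsS http://localhost:8080/health
```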

Add to Docker Compose:

services:
  neocache:
    healthcheck:
      # assumes curl is present in the image; if not, substitute
      # an equivalent check such as wget -q --spider
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3

Upgrading

docker compose pull
docker compose up -d

Neocache handles database migrations automatically on startup.
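Pinning an explicit image tag instead of latest keeps upgrades deliberate: pull only happens when you change the tag. The tag below is purely illustrative; check the project's releases for real versions:

```yaml
services:
  neocache:
    # hypothetical tag — pin a real release from the project's registry
    image: ghcr.io/neocache/neocache:1.2.3
```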

Troubleshooting

Check logs

docker compose logs -f neocache

Database connection issues

Ensure PostgreSQL is accepting connections and the credentials are correct:

docker compose exec postgres psql -U neocache -d neocache -c "SELECT 1"

S3 permission issues

Verify your S3 credentials have the necessary permissions:

  • s3:GetObject
  • s3:PutObject
  • s3:DeleteObject
  • s3:ListBucket
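For AWS S3, the permissions above translate into a minimal IAM policy along these lines (the bucket name is illustrative; note that s3:ListBucket attaches to the bucket ARN, while the object actions attach to bucket/*):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-neocache-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-neocache-bucket"
    }
  ]
}
```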