April 24, 2026 by Quartermaster

Docker Compose for Solopreneurs: One File, Ten Services, Zero SaaS Bills


Docker Compose lets you define your entire self-hosted infrastructure in a single YAML file: databases, web servers, monitoring, backups, and AI tools all running on one VPS for less than the cost of a single SaaS subscription. If you're a solopreneur still paying $29/month for analytics, $49/month for a CMS platform, $15/month for uptime monitoring, and another $99/month for file storage, you're essentially renting infrastructure you could own outright for the price of a cheap lunch.

The SaaS trap is real. Every tool promises it's worth the monthly fee, but the fees compound quietly until you're bleeding $300-$500 a month on software that someone else controls, can raise prices on, shut down, or sell to a private equity firm that immediately enshittifies the product. Docker Compose is the antidote. It's not glamorous, it's not trendy, and it doesn't have a slick marketing site; it's just a tool that works, and it puts you back in control of your own digital infrastructure. This guide will show you exactly how to use it to replace an entire SaaS stack with ten self-hosted services, all running on a single affordable VPS.

⚡ Key Takeaways

  • Docker Compose lets you run 10+ production services from a single YAML file on a $5 VPS
  • Replace $200-500/month in SaaS subscriptions with self-hosted alternatives at $5-20/month
  • Modern syntax: docker compose (space) β€” the hyphenated binary is deprecated
  • Stack includes: reverse proxy, database, CMS, analytics, monitoring, AI, automation, storage, backups, git
  • For 99% of solopreneurs, Docker Compose is all the orchestration you will ever need

Why Docker Compose Is the Solopreneur's Best Infrastructure Tool

Before we get into YAML and terminal commands, let's address the obvious question: why docker compose specifically, and not just running Docker containers manually, or going all-in on Kubernetes? The answer comes down to simplicity, portability, and the fact that most solopreneurs are running one server, not a distributed cluster across three availability zones.


One File to Rule Your Entire Stack

With docker compose, your entire infrastructure lives in a single docker-compose.yml file. Every service, every environment variable, every network connection, every volume mount: all of it is declared in one place in plain text. That file can be committed to a private Git repository, versioned, diffed, rolled back, and copied to a new server in minutes. When your VPS provider has an outage, or you want to migrate to a cheaper host, you don't need to remember which ports you opened, which databases you created, or which environment variables you set six months ago. You just copy the file and run docker compose up -d.

This is infrastructure as code in its most accessible form. You don't need to learn Terraform, Ansible, or Helm charts. You need to understand YAML and a handful of docker compose concepts: services, volumes, networks, and environment variables. That's it.
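Those four concepts fit in a file small enough to read in one sitting. A minimal sketch (the service and volume names here are illustrative, not part of the stack we build below):

```
services:                 # each container you want running
  app:
    image: nginx:alpine   # which image to run
    environment:          # configuration passed as variables
      - TZ=UTC
    volumes:              # persistent data that outlives the container
      - app_data:/usr/share/nginx/html
    networks:             # which services can talk to each other
      - web

networks:
  web:

volumes:
  app_data:
```

Every snippet in this guide follows exactly this shape, just with more services.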

πŸ΄β€β˜ οΈ PIRATE TIP: Commit your docker-compose.yml to a private Git repo β€” Gitea, which we set up later in this stack, is perfect for this. When your VPS dies at 3 AM, spin up a fresh server, clone the repo, run docker compose up -d, and you’re back online before your morning coffee gets cold.

No Kubernetes Overhead for Single-Server Deploys

Kubernetes is an extraordinary piece of engineering built for a problem most solopreneurs will never have. It was designed to orchestrate thousands of containers across hundreds of nodes with automatic failover, rolling deployments, and self-healing workloads. It demands a steep learning curve and dedicated tooling, and frankly, for a single person running a content site, a few web apps, and some AI tooling, it's catastrophic overkill.

Docker compose gives you 95% of what you actually need: defined services, automatic container restarts, shared networks, persistent volumes, and environment-based configuration. It runs on a single server. You manage it with four or five commands. The operational burden is minimal. According to NIST's Application Container Security Guide, containerization itself provides meaningful security and isolation benefits regardless of orchestration complexity; you don't need Kubernetes to get the core benefits of container-based deployments.

Modern CLI: docker compose (With a Space)

A quick note that will save you confusion: the modern syntax is docker compose (with a space), not docker-compose (with a hyphen). The hyphenated version was the old standalone binary. The space version is the Docker Compose V2 plugin, installed as part of Docker Engine. Both work for most purposes, but the plugin version is actively maintained and is the standard going forward. When you install docker-compose-plugin via apt, you get the modern version. Use the space syntax in all new projects.

The Math That Makes This Obvious

Let's be direct about cost. A Hetzner CX22 runs about $4.50/month. A DigitalOcean Droplet starts at $5/month. Even if you go with a beefier 4 vCPU / 8GB RAM server to comfortably run ten services, you're looking at $15-20/month. Now add up what you'd pay for the SaaS equivalents of the stack we're about to build: Google Analytics or Fathom ($14/mo), WordPress.com Business ($25/mo), StatusCake or Better Uptime ($20/mo), Dropbox or Google Drive ($12/mo), GitHub ($4/mo minimum), a workflow automation tool like Zapier ($20/mo minimum), and managed PostgreSQL ($15/mo). You're already at $110/month for mediocre, vendor-locked versions of tools you could own entirely. Docker compose is how you stop paying that tax.

$1,320/yr

What solopreneurs save annually by replacing 10 SaaS subscriptions with a self-hosted Docker Compose stack

Source: Pricing comparison based on standard SaaS tier rates
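If you want to sanity-check those numbers yourself, the monthly line items listed above sum up in one line of shell:

```shell
# SaaS line items from above: analytics, CMS, uptime, storage, git, automation, managed DB
saas=$((14 + 25 + 20 + 12 + 4 + 20 + 15))
echo "SaaS total: \$${saas}/mo, or \$$((saas * 12))/yr"
```

That is $110/month, or $1,320 per year, before counting the subscriptions this stack doesn't even touch.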

What You Need Before Starting

This isn't a guide that requires you to be a senior DevOps engineer. But it does require you to have a few things in place before the YAML starts making sense. If you're starting from scratch, check out our guide to first VPS setup; it covers everything from provisioning your server to SSH key configuration.

  • A VPS: Hetzner CX22 ($4.50/mo), DigitalOcean Basic ($5/mo), or Vultr. For the full 10-service stack, aim for at least 4GB RAM. 8GB is comfortable.
  • Docker Engine + Compose Plugin: Run apt install docker.io docker-compose-plugin on Ubuntu/Debian. Verify with docker compose version.
  • A domain name: Point an A record to your VPS IP. Caddy will handle SSL automatically from there.
  • Basic terminal comfort: You need to be able to SSH into a server, create directories, edit files with nano or vim, and run commands. That’s the full prerequisite list.

Once you have those four things, you're ready. Everything else is configuration.


The 10-Service Solopreneur Stack in One docker-compose.yml

Here's what we're building: a complete, production-capable infrastructure stack that replaces the most common SaaS tools solopreneurs pay for every month. Every service runs as a container managed by docker compose, communicates over a shared internal network, stores data in named volumes, and sits behind Caddy's automatic HTTPS reverse proxy. The full file is assembled from the snippets below: drop them all into a single docker-compose.yml and add the shared network and volumes declarations at the bottom.


Create your project directory first:

mkdir ~/stack && cd ~/stack
nano docker-compose.yml

All snippets below belong inside the top-level services: key. We'll note the shared network and volumes at the end.

1. Caddy (Reverse Proxy + Auto HTTPS)

Replaces: Nginx Proxy Manager, Cloudflare paid plans, manual Certbot setup.

Caddy is the gateway to your entire stack. It automatically provisions and renews Let's Encrypt SSL certificates for every domain you configure. No certbot cron jobs. No manual renewal. It just works. Every other service in your docker compose stack routes through Caddy; none of them are exposed directly to the internet.

services:
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - web

Config note: Create a Caddyfile in the same directory. A basic entry looks like: yourdomain.com { reverse_proxy ghost:2368 }. Add a block per subdomain/service. Caddy handles the rest.
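A fuller Caddyfile for this stack might look like the sketch below. The subdomains are examples: match them to whatever DNS records you created, and the target ports are each service's default internal port.

```
yourdomain.com {
    reverse_proxy ghost:2368
}

analytics.yourdomain.com {
    reverse_proxy plausible:8000
}

status.yourdomain.com {
    reverse_proxy uptime-kuma:3001
}

n8n.yourdomain.com {
    reverse_proxy n8n:5678
}

cloud.yourdomain.com {
    reverse_proxy nextcloud:80
}

git.yourdomain.com {
    reverse_proxy gitea:3000
}
```

After editing the Caddyfile, apply it without downtime via docker exec caddy caddy reload --config /etc/caddy/Caddyfile.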

πŸ΄β€β˜ οΈ PIRATE TIP: Notice how only Caddy has ports mapped in the docker compose file. Every other service talks to Caddy over the internal Docker network. Never expose database or admin ports directly to the internet β€” Caddy is your single front door. That is how container security works.

2. PostgreSQL (Database)

Replaces: Supabase Free Tier, PlanetScale, managed RDS ($15-50/mo).

Most of the services in this stack need a database. PostgreSQL is the right choice: battle-tested, feature-complete, and freely available. One container serves multiple applications through separate databases and users. In a docker compose stack, other services connect to it by container name (postgres) over the internal network; no external port exposure needed.

  postgres:
    image: postgres:16-alpine
    container_name: postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: maindb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - web

Config note: Create a .env file in the same directory as your docker compose file to store secrets. Add POSTGRES_PASSWORD=yourStrongPassword there. Never hardcode passwords in YAML.
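For reference, a .env file covering every secret used in this guide might look like this. All values are placeholders; generate real ones with something like openssl rand -base64 32, and restrict the file with chmod 600 .env.

```
POSTGRES_PASSWORD=change-me
PLAUSIBLE_SECRET_KEY=change-me
N8N_PASSWORD=change-me
NEXTCLOUD_PASSWORD=change-me
RESTIC_PASSWORD=change-me
AWS_ACCESS_KEY=change-me
AWS_SECRET_KEY=change-me
```

docker compose reads .env from the project directory automatically and substitutes ${VAR} references in the YAML.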

3. Ghost (CMS)

Replaces: WordPress.com Business ($25/mo), Ghost(Pro) ($25-199/mo), Substack (revenue cut).

Ghost is a fast, modern CMS and newsletter platform. It's what many professional content creators use, and Ghost(Pro) charges significantly for hosting. Running it yourself via docker compose costs you nothing beyond your VPS. One caveat: Ghost does not support PostgreSQL. It officially supports MySQL 8 for production, and SQLite works fine for low-traffic sites, so the snippet below uses SQLite to keep the stack lean.

  ghost:
    image: ghost:latest
    container_name: ghost
    restart: unless-stopped
    environment:
      url: https://yourdomain.com
      NODE_ENV: production
      database__client: sqlite3
      database__connection__filename: /var/lib/ghost/content/data/ghost.db
      mail__transport: SMTP
    volumes:
      - ghost_data:/var/lib/ghost/content
    networks:
      - web

Config note: If you outgrow SQLite, add a dedicated mysql:8 container and point Ghost's database__* environment variables at it; either way, make sure the ghost_data volume is backed up. For your self-hosted email setup, see our guide to building a self-hosted email server.

4. Plausible Analytics (Privacy-First Analytics)

Replaces: Google Analytics, Fathom ($14/mo), Simple Analytics ($19/mo).

Plausible is the privacy-respecting analytics tool that's GDPR compliant out of the box, doesn't use cookies, and gives you clean, actionable data without feeding your visitors' behavior to Google. The self-hosted version via docker compose is functionally identical to the paid cloud version.

  plausible_events_db:
    image: clickhouse/clickhouse-server:24-alpine
    container_name: plausible_events_db
    restart: unless-stopped
    volumes:
      - plausible_events:/var/lib/clickhouse
    networks:
      - web
    ulimits:
      nofile:
        soft: 262144
        hard: 262144

  plausible:
    image: ghcr.io/plausible/community-edition:v2
    container_name: plausible
    restart: unless-stopped
    command: >-
      sh -c "sleep 10
      && /entrypoint.sh db createdb
      && /entrypoint.sh db migrate
      && /entrypoint.sh run"
    environment:
      BASE_URL: https://analytics.yourdomain.com
      SECRET_KEY_BASE: ${PLAUSIBLE_SECRET_KEY}
      DATABASE_URL: postgres://admin:${POSTGRES_PASSWORD}@postgres:5432/plausible
      CLICKHOUSE_DATABASE_URL: http://plausible_events_db:8123/plausible_events
    networks:
      - web
    depends_on:
      - postgres
      - plausible_events_db

Config note: Generate a secret key with openssl rand -base64 64 and add it to your .env file as PLAUSIBLE_SECRET_KEY. The ClickHouse service above is required; Plausible uses it to store all analytics event data. Add plausible_events to your volumes declaration at the bottom of the compose file.

5. Uptime Kuma (Monitoring)

Replaces: Better Uptime ($20/mo), StatusCake, UptimeRobot paid plans.

Uptime Kuma is a beautiful, self-hosted monitoring tool that checks your services, sends alerts via Telegram/Slack/email/webhook, and shows you uptime history. It runs as a single container with zero external dependencies in your docker compose stack.

  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    restart: unless-stopped
    volumes:
      - uptime_kuma_data:/app/data
    networks:
      - web

Config note: Add a Caddy reverse proxy entry for status.yourdomain.com pointing to uptime-kuma:3001. On first launch, create an admin account immediately; the setup page is publicly accessible until you do.

6. Ollama (Local AI / LLM)

Replaces: ChatGPT Plus ($20/mo), Claude Pro ($20/mo), OpenAI API costs.

Ollama lets you run large language models locally: Llama 3, Mistral, Phi-3, Gemma, and dozens more. No API keys, no per-token costs, no data leaving your server. If you want to go deeper on this, we have a full guide on how to run a local LLM with practical use cases.

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    volumes:
      - ollama_data:/root/.ollama
    networks:
      - web
    # Uncomment below if you have a GPU:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]

Config note: After the container starts, pull a model with docker exec -it ollama ollama pull llama3. On a CPU-only VPS, use smaller models like phi3 or gemma:2b for reasonable inference speeds. Pair Ollama with Open WebUI for a browser-based chat interface.

7. n8n (Workflow Automation)

Replaces: Zapier ($20-49/mo), Make (formerly Integromat), Activepieces cloud.

n8n is a powerful workflow automation platform with a visual editor, hundreds of integrations, and, when self-hosted, no per-task pricing. Your automations, your data, your rules. In a docker compose setup, n8n connects to PostgreSQL for persistent workflow storage.

  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    environment:
      N8N_BASIC_AUTH_ACTIVE: "true"
      N8N_BASIC_AUTH_USER: admin
      N8N_BASIC_AUTH_PASSWORD: ${N8N_PASSWORD}
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: admin
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_HOST: n8n.yourdomain.com
      WEBHOOK_URL: https://n8n.yourdomain.com
    volumes:
      - n8n_data:/home/node/.n8n
    networks:
      - web
    depends_on:
      - postgres

Config note: Create a dedicated n8n database in PostgreSQL before first launch: docker exec -it postgres psql -U admin -c "CREATE DATABASE n8n;". Add N8N_PASSWORD to your .env file. Be aware that n8n 1.0+ ships built-in user management and ignores the legacy N8N_BASIC_AUTH_* variables; on recent versions you create the owner account in the web UI on first visit instead.

💡 If this is the kind of overpriced tool you're tired of paying for, we built a pirate version. Check the Arsenal.

8. Nextcloud (File Storage)

Replaces: Dropbox ($11.99/mo), Google Drive ($2.99-9.99/mo), iCloud.

Nextcloud is the self-hosted file sync and collaboration platform. It's the closest thing to owning your own Google Drive: file sync, sharing, contacts, calendars, and a massive app ecosystem. In your docker compose stack, it stores files in a named volume and connects to PostgreSQL.

  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    restart: unless-stopped
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_DB: nextcloud
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: ${NEXTCLOUD_PASSWORD}
      NEXTCLOUD_TRUSTED_DOMAINS: cloud.yourdomain.com
    volumes:
      - nextcloud_data:/var/www/html
    networks:
      - web
    depends_on:
      - postgres

Config note: Create a nextcloud database in PostgreSQL the same way as n8n above. Add NEXTCLOUD_PASSWORD to .env. For a deeper dive into managing your passwords and credentials within this stack, see our guide to a self-hosted password manager.

9. Restic (Automated Backups)

Replaces: Backblaze B2 managed backup ($7/mo), Veeam, BackupBuddy for WordPress.

Backups are not optional. Every service above stores its data in named Docker volumes, and those volumes need to be backed up to an offsite location. Restic is a fast, encrypted, deduplicated backup tool; the lightweight wrapper container below runs it on a cron schedule, pushing volume backups to Backblaze B2, S3, or any S3-compatible storage. For WordPress-specific backup strategies, see our self-hosted WordPress backup guide.

  restic:
    image: lobaro/restic-backup-docker:latest
    container_name: restic
    restart: unless-stopped
    environment:
      RESTIC_REPOSITORY: s3:s3.amazonaws.com/your-bucket
      RESTIC_PASSWORD: ${RESTIC_PASSWORD}
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_KEY}
      BACKUP_CRON: "0 3 * * *"
    volumes:
      - postgres_data:/data/postgres:ro
      - ghost_data:/data/ghost:ro
      - nextcloud_data:/data/nextcloud:ro
      - n8n_data:/data/n8n:ro
    networks:
      - web

Config note: Add all Restic credentials to your .env file. Run docker exec -it restic restic init once to initialize the repository before the first scheduled backup. Test restores periodically; a backup you've never restored is just hope, not a backup.

10. Gitea (Self-Hosted Git Server)

Replaces: GitHub private repos ($4/mo), GitLab.com, Bitbucket.

Gitea is a lightweight, self-hosted Git service with a GitHub-like interface. Host your private repositories, run CI/CD with Gitea Actions, and keep your code on infrastructure you control. This is especially important if you're building proprietary AI tools or storing client work. Your docker compose configuration itself should live in a private Gitea repository on this very server.

  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    restart: unless-stopped
    environment:
      USER_UID: 1000
      USER_GID: 1000
      GITEA__database__DB_TYPE: postgres
      GITEA__database__HOST: postgres:5432
      GITEA__database__NAME: gitea
      GITEA__database__USER: admin
      GITEA__database__PASSWD: ${POSTGRES_PASSWORD}
    volumes:
      - gitea_data:/data
    networks:
      - web
    depends_on:
      - postgres

Config note: Create a gitea database in PostgreSQL. Expose SSH on a non-standard port if needed by adding - "2222:22" to ports. The web interface routes through Caddy like all other services.
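The SSH port mapping mentioned above looks like this inside the gitea service definition (2222 is an arbitrary host port; pick any free one, and remember to ufw allow it):

```
  gitea:
    # ...existing configuration from above...
    ports:
      - "2222:22"   # host port 2222 -> container SSH; clone with ssh://git@yourdomain.com:2222/you/repo.git
```

The web UI still goes through Caddy; only Git-over-SSH uses this extra port.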

Shared Network and Volumes

At the bottom of your docker-compose.yml, add the top-level declarations that tie everything together:

networks:
  web:
    driver: bridge

volumes:
  caddy_data:
  caddy_config:
  postgres_data:
  ghost_data:
  plausible_events:
  uptime_kuma_data:
  ollama_data:
  n8n_data:
  nextcloud_data:
  gitea_data:

Running Your Stack

With your docker-compose.yml assembled and your .env file populated, it's time to bring everything up. The docker compose commands you'll use day-to-day are simple and few.


Starting Everything

# Start all services in detached mode (background)
docker compose up -d

# Watch logs from all services
docker compose logs -f

# Watch logs from a single service
docker compose logs -f ghost

The -d flag runs containers in detached mode: they keep running after you close your terminal. This is what you want for a production stack. On first run, docker compose will pull all images before starting containers. Depending on your server's internet connection, this takes 2-10 minutes.

Updating Your Services

# Pull latest images and recreate containers
docker compose pull && docker compose up -d

# Update a single service only
docker compose pull ghost && docker compose up -d ghost

Run this command monthly at minimum. Container images receive regular security patches, and running outdated images is one of the most common self-hosting mistakes. The docker compose pull command fetches new image versions, and the subsequent up -d recreates containers that have new images while leaving unchanged containers running.

Managing Volumes and Persistence

Docker volumes are where your data lives. Unlike the container filesystem (which is ephemeral and disappears when a container is removed), volumes persist independently. Key commands:

# List all volumes
docker volume ls

# Inspect a volume (find its mount path on the host)
docker volume inspect postgres_data

# Remove unused volumes (be careful!)
docker volume prune

Never run docker compose down -v on a production stack unless you intend to delete all your data. The -v flag removes volumes. Without it, docker compose down stops and removes containers but preserves volumes; your data is safe.

Stopping and Restarting

# Stop all services (data preserved)
docker compose down

# Restart a single service
docker compose restart uptime-kuma

# Stop and remove everything including volumes (DANGER)
# docker compose down -v

Security and Maintenance

Running a self-hosted stack means you're responsible for security. The good news is that with docker compose behind Caddy, your attack surface is already small. Here's the baseline security configuration every solopreneur stack needs.


Firewall Configuration

# Allow only essential ports
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp    # SSH
ufw allow 80/tcp    # HTTP (Caddy redirects to HTTPS)
ufw allow 443/tcp   # HTTPS
ufw enable

# Verify
ufw status

This configuration blocks everything except SSH, HTTP, and HTTPS. All your services communicate internally over the Docker network (PostgreSQL, n8n, Gitea, Nextcloud); none of them need to be reachable from the internet directly. Caddy is the only front door, and it handles SSL termination and request proxying for everything. One caveat worth knowing: Docker writes its own iptables rules for published ports and bypasses ufw, so the real protection is not publishing ports in the compose file in the first place; ufw guards everything else.

If you want an additional layer of network security, pairing your VPS with a self-hosted VPN lets you lock admin interfaces down to VPN-only access.

Automatic Updates with Watchtower

If you prefer automated container updates rather than running manual docker compose pull commands, Watchtower monitors running containers and automatically updates them when new images are available. Add it to your docker compose file:

  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    restart: unless-stopped
    environment:
      WATCHTOWER_SCHEDULE: "0 0 4 * * *"  # 4 AM daily
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_NOTIFICATIONS: email
      WATCHTOWER_NOTIFICATION_EMAIL_TO: you@yourdomain.com
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - web

A word of caution: auto-updates are convenient but can occasionally break things when a service ships a breaking change. For mission-critical services like databases, consider pinning to a specific version tag (e.g., postgres:16.2-alpine instead of postgres:latest) and updating those manually after reviewing release notes.
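In compose terms, pinning a database while still letting Watchtower manage the rest looks like the fragment below. The exclusion label is Watchtower's standard opt-out mechanism; the exact postgres tag shown is only an example, so check current releases before copying it.

```
  postgres:
    image: postgres:16.2-alpine   # pinned: upgrade deliberately, after reading release notes
    labels:
      - "com.centurylinklabs.watchtower.enable=false"   # Watchtower skips containers with this label
```

Everything without that label stays on Watchtower's nightly schedule.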

SSL Is Automatic with Caddy

One of the biggest advantages of using Caddy as your reverse proxy in a docker compose stack is zero-configuration SSL. As long as your domain DNS is pointing to your VPS IP before you start Caddy, it will automatically obtain Let's Encrypt certificates on first request and renew them before they expire. No cron jobs, no certbot, no manual renewal errors at 2 AM.

Volume Backup Strategy

Your Restic container handles scheduled backups, but a solid backup strategy has three components: automated backups run daily, backup integrity is verified weekly (run docker exec -it restic restic check), and a full restore test is performed monthly. Data you haven't successfully restored from is not backed up data; it's just bytes you hope are recoverable.
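One way to put the weekly check and monthly drill on a schedule is a host crontab, sketched here under the assumption that your restic container is named as in the stack above (docker exec inherits the container's repository credentials, so no extra env is needed):

```
# /etc/cron.d/backup-drill -- illustrative schedule, adjust to taste
# Weekly integrity check (Sunday 05:00)
0 5 * * 0  root  docker exec restic restic check >> /var/log/restic-check.log 2>&1
# Monthly restore drill (1st of the month, 06:00): restore latest snapshot to a scratch dir
0 6 1 * *  root  docker exec restic restic restore latest --target /tmp/restore-drill >> /var/log/restic-restore.log 2>&1
```

After each drill, spot-check a few restored files in /tmp/restore-drill, then delete the directory.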

When Docker Compose Is Enough (and When It’s Not)

Docker compose is the right tool for a very wide range of use cases, but intellectual honesty requires acknowledging where it reaches its limits. Here's a clear-eyed breakdown.


Docker Compose Is More Than Enough When:

  • You’re one person (or a very small team) managing your own infrastructure
  • You’re running fewer than 50 services on a single server
  • You don’t need automatic failover or high availability (most solopreneurs don’t)
  • You can tolerate a few minutes of downtime to update or restart services
  • Your traffic is measured in thousands to tens of thousands of visitors, not millions
  • You want to understand your infrastructure and be able to debug it yourself

For the vast majority of independent creators, consultants, developers, and solopreneurs reading this, docker compose is genuinely all you will ever need. The 10-service stack above can handle a respectable business workload on a $20/month VPS without breaking a sweat.

When You Should Start Looking Beyond Docker Compose:

  • You need your services to keep running automatically when a server fails (requires multi-node orchestration)
  • You’re deploying across multiple physical servers or regions
  • You have a team of 10+ engineers all making infrastructure changes simultaneously
  • You need sophisticated deployment pipelines with canary releases and automatic rollbacks
  • You’re running workloads at a scale where the cost of a dedicated DevOps team is justified

If you hit those thresholds, Docker Swarm is a lightweight step up that keeps most of the docker compose syntax while adding multi-node orchestration. Kubernetes is the step after that, and it's warranted at genuine scale, but not before. The software industry has a bad habit of pushing complexity as a sign of seriousness. It isn't. Running a clean, well-maintained docker compose stack is a sign of someone who understands their actual requirements and optimizes for them, not for resume-driven engineering.

“You own your stack the same way you’d own a building instead of renting a desk in someone else’s office.” β€” The solopreneur self-hosting philosophy

The Solopreneur Self-Hosting Philosophy

At its core, docker compose is about sovereignty. Your data lives on hardware you control. Your services run on software you've chosen. Your costs are fixed and predictable. When a SaaS company decides to pivot, raise prices, kill a feature, or get acquired, it doesn't affect you. You own your stack the same way you'd own a building instead of renting a desk in someone else's office.

This is the practical meaning of "own your digital infrastructure." It doesn't require a computer science degree or a DevOps certification. It requires a VPS, a domain name, a YAML file, and the willingness to spend a Saturday afternoon setting it up. Docker compose is the tool that makes that Saturday afternoon productive, and the next few years of your digital life more stable, private, and affordable as a result.

The services we've covered here are a starting point, not a ceiling. Once you're comfortable with docker compose patterns (images, volumes, networks, environment variables, depends_on), adding new services takes fifteen minutes. New self-hosted tools emerge constantly, and the community around self-hosted software grows every month. The patterns you've learned here apply to all of them.

βš”οΈ Pirate Verdict

Docker Compose is the most underrated tool in a solopreneur's toolkit. While the industry pushes Kubernetes certifications and managed cloud services that bill by the millisecond, one YAML file on a $20/month VPS replaces $1,300+ in annual SaaS subscriptions. Ten services. Zero vendor lock-in. Zero surprise price hikes. Your data lives on your hardware, your backups go where you tell them, and when a SaaS company gets acquired by private equity and triples their prices overnight, you won't even notice. Stop renting your infrastructure. Write the YAML. Own the stack. That's the pirate way.

Set Sail With Your Own Stack

Docker Compose is how solopreneurs take back control of their digital infrastructure, one YAML file at a time. The ten services above cover analytics, content management, monitoring, AI, automation, file storage, backups, version control, and the reverse proxy that ties them all together. You own every byte. You control every update. You answer to no one's pricing page.

Start with the stack above, customize it to your needs, and stop paying SaaS companies for software you can own outright. The learning curve is a Saturday afternoon. The savings compound every single month after that.

What SaaS subscriptions are you tired of paying for? Drop them in the comments; we might have a self-hosted alternative you haven't tried yet.

Is Docker Compose good enough for production?

Yes. Docker Compose is production-ready for single-server deployments. It handles service dependencies, automatic restarts, volume persistence, and networking. For solopreneurs running 10-50 services on one VPS, it is the right tool.

Do I need Kubernetes instead of Docker Compose?

No, unless you are running multi-server clusters with auto-scaling requirements. Kubernetes adds massive operational complexity. Docker Compose handles 99% of solopreneur infrastructure needs on a single server.

How much RAM do I need to run 10 services with Docker Compose?

4GB RAM handles a basic 10-service stack. 8GB gives comfortable headroom. If you are running Ollama for local AI, 16GB or more is recommended. Hetzner and DigitalOcean offer 4GB VPS instances starting at $6-8/month.

Is docker-compose with a hyphen still valid?

The standalone docker-compose binary with a hyphen was deprecated in 2023 and removed from default Docker installations. The modern syntax is docker compose with a space, which runs as a Docker CLI plugin.

Can I run Docker Compose on a Raspberry Pi?

Yes. Docker and Docker Compose run on ARM64 architecture. Most popular self-hosted images publish ARM builds. Performance will be limited compared to a VPS, but it works for lightweight services and home lab experimentation.
