Mastodon 4 in Docker with a reverse proxy using a subdomain

After a few hours of running into walls and outdated documentation, I have Mastodon working. I hope the following guide saves you some time.

This guide assumes you have Docker installed on a system with a reverse proxy container already in use. If you don’t know what those things are, this guide probably isn’t for you.

First, clone the repo:
git clone https://github.com/mastodon/mastodon.git
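The rest of the commands in this guide are run from inside the cloned directory:
cd mastodon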

Open the docker-compose.yml file and make some changes:
Change db’s health check to monitor the mastodon user
Remove db’s environment argument and point it at the env file instead
Remove 127.0.0.1 from the port assignments on web and streaming

My docker-compose file:

version: '3'
services:
  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'mastodon']
    volumes:
      - ./postgres14:/var/lib/postgresql/data
    env_file: .env.production

  redis:
    restart: always
    image: redis:7-alpine
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    volumes:
      - ./redis:/data


  web:
    build: .
    image: tootsuite/mastodon
    restart: always
    env_file: .env.production
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    networks:
      - external_network
      - internal_network
    healthcheck:
      # prettier-ignore
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:3000/health || exit 1']
    ports:
      - '3000:3000'
    depends_on:
      - db
      - redis
      # - es
    volumes:
      - ./public/system:/mastodon/public/system

  streaming:
    build: .
    image: tootsuite/mastodon
    restart: always
    env_file: .env.production
    command: node ./streaming
    networks:
      - external_network
      - internal_network
    healthcheck:
      # prettier-ignore
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1']
    ports:
      - '4000:4000'
    depends_on:
      - db
      - redis

  sidekiq:
    build: .
    image: tootsuite/mastodon
    restart: always
    env_file: .env.production
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
    networks:
      - external_network
      - internal_network
    volumes:
      - ./public/system:/mastodon/public/system
    healthcheck:
      test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]


networks:
  external_network:
  internal_network:
    internal: true

Copy .env.production.sample to .env.production and make these changes:
Set LOCAL_DOMAIN to your root domain
Add WEB_DOMAIN and set it to your subdomain
Change the Redis host to redis
Change the db host to db
Set a database password, and duplicate that value in the POSTGRES_ variables for the db container
Turn off ES and S3 (ES_ENABLED=false and S3_ENABLED=false)
Set your SMTP info
Save the file

Generate two secrets by running rake secret twice:
docker-compose run --rm web bundle exec rake secret
Then generate the VAPID keys:
docker-compose run --rm web bundle exec rake mastodon:webpush:generate_vapid_key
Save those values in the proper spots in the env file (SECRET_KEY_BASE, OTP_SECRET, VAPID_PRIVATE_KEY, VAPID_PUBLIC_KEY).

My .env.production file:

# Federation
# ----------
# This identifies your server and cannot be changed safely later
# ----------
LOCAL_DOMAIN=randompherret.com
WEB_DOMAIN=mastodon.randompherret.com

# Redis
# -----
REDIS_HOST=redis
REDIS_PORT=6379

# PostgreSQL
# ----------
DB_HOST=db
DB_USER=mastodon
DB_NAME=mastodon_production
DB_PASS=<setapassword>
DB_PORT=5432
POSTGRES_PASSWORD=<thesameasabove>
POSTGRES_DB=mastodon_production
POSTGRES_USER=mastodon

# Elasticsearch (optional)
# ------------------------
ES_ENABLED=false
ES_HOST=localhost
ES_PORT=9200
# Authentication for ES (optional)
ES_USER=elastic
ES_PASS=password

# Secrets
# -------
# Make sure to use `rake secret` to generate secrets
# -------
SECRET_KEY_BASE=<1st secret here>
OTP_SECRET=<second here>

# Web Push
# --------
# Generate with `rake mastodon:webpush:generate_vapid_key`
# --------
VAPID_PRIVATE_KEY=<stop looking>
VAPID_PUBLIC_KEY=<you can see this one>

# Sending mail
# ------------
SMTP_SERVER=<your ip>
SMTP_PORT=25
SMTP_FROM_ADDRESS=mastodon@randompherret.com
SMTP_SSL=false
SMTP_AUTH_METHOD=none
SMTP_TLS=false
SMTP_OPENSSL_VERIFY_MODE=none

# File storage (optional)
# -----------------------
S3_ENABLED=false
S3_BUCKET=files.example.com
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
S3_ALIAS_HOST=files.example.com

# IP and session retention
# -----------------------
# Make sure to modify the scheduling of ip_cleanup_scheduler in config/sidekiq.yml
# to be less than daily if you lower IP_RETENTION_PERIOD below two days (172800).
# -----------------------
IP_RETENTION_PERIOD=31556952
SESSION_RETENTION_PERIOD=31556952

Initialize the database:
docker-compose run --rm web rails db:migrate
After that, bring up all the services.
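For example:
docker-compose up -d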

The next step is to get the reverse proxy set up.

You want to add the Mastodon external network to the reverse proxy container in its compose file:

    networks:
      - mastodon_external_network
networks:
  mastodon_external_network:
    external: true
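Note that the external network name comes from the Mastodon project’s directory name plus the network name in its compose file, so if your clone lives somewhere other than a directory named mastodon, adjust accordingly. You can confirm the actual name with:
docker network ls | grep external_network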

Then add a bunch of stuff to your nginx.conf. The main thing is that the listener on your root domain needs to forward the webfinger location to the Mastodon subdomain; that is how other servers know where to find your instance. You can also see the proxy hosts set for the web and streaming containers.
Some of my config is cut out, but these are the relevant parts of mine:

server {
    listen 80;
    location /.well-known/webfinger {
      return 301 https://mastodon.randompherret.com$request_uri;
    }
...
  }
...
  server {
    listen 443 ssl;
    server_name randompherret.com;
...
    location /.well-known/webfinger {
      return 301 https://mastodon.randompherret.com$request_uri;
    }
  }
...
  server {
    listen 443 ssl;
    server_name mastodon.randompherret.com;
    ssl_certificate /etc/letsencrypt/live/mastodon.randompherret.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mastodon.randompherret.com/privkey.pem;
    keepalive_timeout    70;
    sendfile             on;
    client_max_body_size 80m;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    add_header Strict-Transport-Security "max-age=31536000";

    location / {
      try_files $uri @proxy;
    }

    location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) {
      add_header Cache-Control "public, max-age=31536000, immutable";
      try_files $uri @proxy;
    }
    
    location /sw.js {
      add_header Cache-Control "public, max-age=0";
      try_files $uri @proxy;
    }

    location @proxy {
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header Proxy "";
      proxy_pass_header Server;

      proxy_pass http://mastodon_web_1:3000;
      proxy_buffering off;
      proxy_redirect off;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection $connection_upgrade;

      tcp_nodelay on;
    }

    location /api/v1/streaming {
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header Proxy "";

      proxy_pass http://mastodon_streaming_1:4000;
      proxy_buffering off;
      proxy_redirect off;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection $connection_upgrade;

      tcp_nodelay on;
    }
  }
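One thing to watch for: the config above references $connection_upgrade, which is not a built-in nginx variable. If your nginx.conf doesn’t already define it (Mastodon’s sample nginx config does), add a map block inside the http context:

map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}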

Reload nginx and you should be able to create your first user.
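A minimal sketch of the reload, assuming your reverse proxy runs as a compose service named proxy (adjust for your own setup):
docker-compose exec proxy nginx -t && docker-compose exec proxy nginx -s reload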
As the last step before you are all done, you need to promote your account to “Owner” (in previous versions this role was Admin, welcome to 4.x). Substitute your own username for Derek:
docker-compose run --rm web bin/tootctl accounts modify Derek --role Owner

I’m sure I missed something between my testing and trying to write down what I was doing. Let me know if I need to update this post, and when you get it all done, follow me at @derek@randompherret.com