New server: Install & configuration of services (Part III)

Welcome back, this is the 3rd part of the new server series. In the previous parts I assembled the server and prepared the machine with Ubuntu Server. I configured the basics, like networking, RAID setup, e-mail and more. In case you missed that: Read Part I and Part II.

Today, we will finish the job by installing and configuring all the services we love so much. Again, I included a Table of Contents for convenience, since it is quite a long article. Hopefully the table will help you navigate around.

The listed services are an important part of my day-to-day programming life and support open-source & free software in general.
Some of the services can be used by everybody, including yourself. In addition, you can also request new services, if you want to.


Expectations

Each service is first explained, hopefully to better convey its benefits. Then I show how to install the service under Ubuntu Server. Finally, I explain how I configured each service to get the most performance out of it.

Service uptime

BONUS: I also added external links to useful documentation and tools for each service.

The services are listed in no particular order.

Nginx

Nginx is a high-performance reverse proxy server and load balancer, which can be used to host web pages or to pass network connections on to some internal service running on a particular port.

Install Nginx & Certbot

Public URL: https://server.melroy.org/ (=landing page, but Nginx is used for all my domains actually)

sudo apt install -y nginx
sudo usermod -a -G www-data melroy

# Also install Let's Encrypt Certbot
sudo apt install -y certbot python3-certbot-nginx

# Generate Secure Diffie–Hellman (DH) key exchange file
cd /etc/nginx
sudo openssl dhparam -dsaparam -out dhparam.pem 4096

Configure Nginx

Assuming you know how to set up a new Nginx server block and generate Let’s Encrypt certificates via Certbot (sudo certbot --nginx), we will now look into the generic Nginx configuration.
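
For example, requesting and installing a certificate for the placeholder domain yourhomesite.com (used throughout this article) would look like this:

sudo certbot --nginx -d yourhomesite.com -d www.yourhomesite.com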

An important collection of changes to /etc/nginx/nginx.conf:

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
thread_pool default threads=16 max_queue=65536;
worker_rlimit_nofile 65535;

events {
    worker_connections 65535;
    multi_accept on;
}

# Generic http block
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    log_not_found off;
    keepalive_timeout 70;
    types_hash_max_size  2048;
    client_max_body_size 16M;
    client_body_buffer_size 50M;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # SSL Intermediate configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;

    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_ecdh_curve secp521r1:secp384r1:secp256k1;

    # DNS
    resolver 8.8.8.8 8.8.4.4 208.67.222.222 208.67.220.220 valid=60s;
    resolver_timeout 2s;

    # Discard 2xx or 3xx responses from logging
    map $status $loggable {
        ~^[23] 0;
        default 1;
    }
    access_log /var/log/nginx/access.log combined if=$loggable;
    # Show warn, error, crit, alert and emerg messages
    error_log /var/log/nginx/error.log warn;

    # Gzip compression
    gzip            on;
    gzip_vary       on;
    gzip_comp_level 6;
    gzip_min_length 256;
    gzip_proxied    expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types
        text/css
        text/plain
        text/javascript
        text/cache-manifest
        text/vcard
        text/vnd.rim.location.xloc
        text/vtt
        text/x-component
        text/x-cross-domain-policy
        application/javascript
        application/json
        application/x-javascript
        application/ld+json
        application/xml
        application/xml+rss
        application/xhtml+xml
        application/x-font-ttf
        application/x-font-opentype
        application/vnd.ms-fontobject
        application/manifest+json
        application/rss+xml
        application/atom+xml
        application/vnd.geo+json
        application/x-web-app-manifest+json
        image/svg+xml
        image/x-icon
        image/bmp
        font/opentype;
}
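
After every configuration change, validate the syntax and reload Nginx without downtime:

sudo nginx -t
sudo systemctl reload nginx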

Next to that, I created some general snippets that can easily be reused and included in the server blocks.

Like /etc/nginx/snippets/fastcgi-php.conf:

# regex to split $uri to $fastcgi_script_name and $fastcgi_path_info
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
set $path_info $fastcgi_path_info;

# Check that the PHP script exists before passing it
try_files $fastcgi_script_name =404;

# fastcgi settings
fastcgi_pass                  unix:/run/php/php7.4-fpm.sock;
fastcgi_index                 index.php;
fastcgi_buffers               8 16k;
fastcgi_buffer_size           32k;
fastcgi_read_timeout          600;
fastcgi_intercept_errors on;

# fastcgi params
fastcgi_param PATH_INFO $path_info;
fastcgi_param DOCUMENT_ROOT   $realpath_root;
fastcgi_param HTTPS on;
fastcgi_param modHeadersAvailable true;         # Avoid sending the security headers twice
fastcgi_param front_controller_active true;     # Enable pretty urls

include fastcgi.conf;

And /etc/nginx/snippets/security.conf:

# Increase security (using the Diffie-Hellman Group file)
ssl_dhparam /etc/nginx/dhparam.pem;

# Don't leak powered-by
fastcgi_hide_header X-Powered-By;

# Security headers
# Don't allow the browser to render the page inside a frame or iframe, to avoid clickjacking
add_header X-Frame-Options "SAMEORIGIN" always;
# Enable the Cross-site scripting (XSS) filter built into most recent web browsers.
add_header X-XSS-Protection "1; mode=block" always;
# When serving user-supplied content, include a X-Content-Type-Options: nosniff header along with the Content-Type: header,
# to disable content-type sniffing on some browsers.
add_header X-Content-Type-Options "nosniff" always;
# Referrer Policy will allow a site to control the value of the referer header in links away from their pages.
add_header Referrer-Policy "no-referrer" always;
# Disable the option to open a file directly on download
add_header X-Download-Options                   "noopen"        always;
# Don't allow cross-domain access for Flash & PDF documents
add_header X-Permitted-Cross-Domain-Policies    "none"          always;
# HSTS strengthens your implementation of TLS by getting the User Agent to enforce the use of HTTPS
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# Set a CSP if you like as well
#add_header Content-Security-Policy ...

real_ip_header X-Real-IP; ## X-Real-IP or X-Forwarded-For or proxy_protocol
real_ip_recursive off;    ## Set to 'on' when traffic passes through multiple trusted proxies

And finally, /etc/nginx/snippets/general.conf:

location = /robots.txt {
    log_not_found off;
    access_log    off;
}
location = /favicon.ico {
    log_not_found off;
    access_log off;
}
# assets, media
location ~* \.(?:css(\.map)?|js(\.map)?|jpe?g|png|gif|ico|cur|heic|webp|tiff?|mp3|m4a|aac|ogg|midi?|wav|mp4|mov|webm|mpe?g|avi|ogv|flv|wmv)$ {
    expires    18d;
    add_header Access-Control-Allow-Origin "*";
    access_log off;
}
# svg, fonts
location ~* \.(?:svgz?|ttf|ttc|otf|eot|woff2?)$ {
    expires    18d;
    add_header Access-Control-Allow-Origin "*";
    add_header Cache-Control "public";
    access_log off;
}
location ~ /\.ht {
    deny all;
    access_log off;
}

Example usage of these snippets in an Nginx server block:

server {
    listen 80;
    server_name yourhomesite.com;
    # Redirect to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name yourhomesite.com;

    root /var/www/html;
    index index.html index.php;

    ssl_certificate /etc/letsencrypt/live/yourhomesite.com/fullchain.pem; 
    ssl_certificate_key /etc/letsencrypt/live/yourhomesite.com/privkey.pem; 
    include snippets/security.conf;

    location / {
        add_header 'Access-Control-Allow-Origin' '*';
        try_files $uri $uri/ =404;
    }
    location ~ \.php(?:$|/) {
        include snippets/fastcgi-php.conf;
    }
    include snippets/general.conf;
}
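
On Ubuntu, server blocks conventionally live in /etc/nginx/sites-available and get activated through a symlink in /etc/nginx/sites-enabled. A minimal sketch, assuming the block above was saved as yourhomesite.com:

sudo ln -s /etc/nginx/sites-available/yourhomesite.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx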

Read more: Nginx Docs, Mozilla SSL Configuration Tool and SSL Labs Server Tester.

PHP FPM

Since we are using Nginx, we will use PHP-FPM (FastCGI Process Manager) together with Nginx to serve PHP scripts.

Install PHP-FPM + Modules

sudo apt install -y \
  php-apcu php-apcu-bc php-cgi php-common php-igbinary php-imagick \
  php-msgpack php-redis php7.4-bcmath php7.4-bz2 php7.4-cgi \
  php7.4-cli php7.4-common php7.4-curl php7.4-fpm php7.4-gd \
  php7.4-gmp php7.4-intl php7.4-json php7.4-mbstring \
  php7.4-mysql php7.4-opcache php7.4-readline \
  php7.4-xml php7.4-zip
PHP7.4 systemctl status output

Configure PHP & PHP-FPM

I will discuss the most important changes I made.

Changes to /etc/php/7.4/fpm/pool.d/www.conf:

pm = dynamic
pm.max_children = 120
pm.start_servers = 12
pm.min_spare_servers = 6
pm.max_spare_servers = 18
clear_env = no
; Also uncomment all the env[...] lines in www.conf
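
As a sanity check for pm.max_children, you can estimate the average memory footprint of a single PHP-FPM worker (the process name php-fpm7.4 matches the Ubuntu package) and divide your available RAM by that number:

ps --no-headers -o rss -C php-fpm7.4 \
  | awk '{sum+=$1; n++} END {printf "average worker size: %.0f MB\n", sum/n/1024}'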

Changes to /etc/php/7.4/fpm/php.ini:

output_buffering = 0
max_execution_time = 600
memory_limit = 512M
post_max_size = 20G
upload_max_filesize = 20G
max_file_uploads = 200

Restart the PHP FPM service to apply the changes:

sudo systemctl restart php7.4-fpm

Read more: PHP-FPM docs, php.ini docs

Monit

Monit will be used to monitor the running services, report issues to me and automatically (re)start services if something goes wrong.

Public URL: https://monit.melroy.org/ (does require a login, too bad)

Monit Dashboard

Install Monit

sudo apt install monit

Configure Monit

Be sure to also configure the set mailserver and set alert <your_email> statements, in order to receive e-mail notifications.

Some other highlights from the /etc/monit/monitrc file:

# Enable the dashboard webpage, seen above
set httpd port 2812 and
    use address localhost 
    allow admin:secret_password

# Check CPU & memory usage
check system $HOST
   if loadavg (1min) per core > 2 for 5 cycles then alert
   if loadavg (5min) per core > 1.5 for 10 cycles then alert
   if cpu usage > 90% for 10 cycles then alert
   if cpu usage (wait) > 20% then alert
   if memory usage > 75% then alert
   if swap usage > 17% then alert

check filesystem rootfs with path /
  if space usage > 80% then alert
  group server

check filesystem Data with path /media/Data
  if space usage > 80% then alert
  group server

check filesystem Data_extra with path /media/Data_extra
  if space usage > 80% then alert
  group server

# Check RAID health
check program Data-raid with path "/sbin/mdadm --misc --detail --test /dev/md/Data"
  if status != 0 then alert

check program Data-extra-raid with path "/sbin/mdadm --misc --detail --test /dev/md/Data_extra"
  if status != 0 then alert

# Some services as an example
check process Nginx with pidfile /run/nginx.pid
   group www-data
   start program = "/bin/systemctl start nginx"
   stop program  = "/bin/systemctl stop nginx"
   if failed host 127.0.0.1 port 443 protocol https for 3 cycles then restart
   if 3 restarts within 5 cycles then unmonitor

check process sshd with pidfile /var/run/sshd.pid
   start program = "/bin/systemctl start ssh"
   stop program  = "/bin/systemctl stop ssh"
   if failed port 22 protocol ssh then restart
   if 3 restarts within 5 cycles then unmonitor

# Ping test
check host google.com with address google.com
  if failed ping then alert

# Check network
check network public with interface enp45s0
   start program  = "/usr/sbin/ip link set dev enp45s0 up"
   stop program  = "/usr/sbin/ip link set dev enp45s0 down"
   if failed link then restart
   if failed link then alert
   if changed link then alert
   if saturation > 90% then alert
   if download > 40 MB/s then alert
   if total upload > 3 GB in last 2 hours then alert
   if total upload > 10 GB in last day then alert
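
After editing monitrc, verify the control file syntax and reload Monit:

sudo monit -t
sudo monit reload
sudo monit status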

Docker & Docker compose

Containerization has become quite popular, especially since Docker.
It is much faster and lighter than running VMs (Virtual Machines), but with similar benefits, like a consistent environment and running in isolation.

Docker allows you to run containers, be it from your own created image or images which are made publicly available for you to use.

Install Docker / Docker-Compose

Installation was actually already explained in Part 2 of this blog series, but this time I will also include Docker Compose:

sudo apt install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io

# Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Configure Docker

sudo groupadd docker
# Add a user to the docker group
sudo usermod -aG docker $USER
newgrp docker
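
To verify that Docker works without sudo and that Docker Compose is on the path:

docker run --rm hello-world
docker-compose --version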

Read more: Docker Docs: Getting Started and Docker-Compose Docs.

Grafana, InfluxDB & Telegraf

Grafana is a dashboard tool for displaying graphs and such. InfluxDB is a time-series database. Telegraf is the collecting tool: it gathers stats from your machine and logs the data into InfluxDB. Within Grafana, I configured InfluxDB as a data source. Then I configured the Grafana graphs to query the data from the database, eventually showing the information on the Grafana dashboard.

Public URL: https://stats.melroy.org/ (check out my public status page!)

Install Grafana

echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
sudo apt update
sudo apt install -y grafana
sudo systemctl daemon-reload
sudo systemctl enable grafana-server
sudo systemctl start grafana-server

# InfluxDB
wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/lsb-release
echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt update
sudo apt install -y influxdb
sudo systemctl unmask influxdb.service
sudo systemctl start influxdb

# Telegraf
sudo apt install telegraf

# Install Additional plugin in Grafana
sudo grafana-cli plugins install yesoreyeram-boomtable-panel
sudo grafana-cli plugins install flant-statusmap-panel
sudo chown grafana.grafana -R /var/lib/grafana/plugins/
sudo systemctl restart grafana-server

Configure Grafana/InfluxDB/Telegraf

In Telegraf I configured quite a few inputs to collect data from; some highlights from the /etc/telegraf/telegraf.conf file:

[agent]
  interval = "20s"
  round_interval = true
  metric_batch_size = 5000
  metric_buffer_limit = 10000
  collection_jitter = "5s"

# Output the data into InfluxDB
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]

# Inputs
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false

[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

[[inputs.diskio]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
[[inputs.hddtemp]]
[[inputs.interrupts]]
[[inputs.kernel_vmstat]]
[[inputs.linux_sysctl_fs]]
[[inputs.net]]
[[inputs.net_response]]
  protocol = "tcp"
  address = "localhost:80"

[[inputs.netstat]]
[[inputs.procstat]]
  pid_file = "/var/run/nginx.pid"

[[inputs.sysstat]]
  sadc_path = "/usr/lib/sysstat/sadc"

[[inputs.systemd_units]]
[[inputs.temp]]

Important: Only log what you really need! The Telegraf configuration above is most likely too much for you.
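
Before enabling the service, you can let Telegraf execute the configured inputs once in test mode, which prints the collected metrics to stdout:

sudo telegraf --config /etc/telegraf/telegraf.conf --test
sudo systemctl restart telegraf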

Telegraf data is stored in InfluxDB, and I use Grafana to create graphs out of that data:

Read more: Grafana: Getting Started, Telegraf: Getting Started

GitLab & GitLab Runner

GitLab is an open-source and very powerful Git hosting tool, with issue tracking, time tracking and Agile/Kanban boards, as well as CI/CD (Continuous Integration / Continuous Deployment) features. It works great together with GitLab Runner to support CI/CD.

Public URL: https://gitlab.melroy.org/

Install GitLab

sudo apt install -y curl openssh-server ca-certificates tzdata
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo EXTERNAL_URL="https://gitlab.melroy.org" apt install gitlab-ce

# Runner
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo -E apt install gitlab-runner

# Register runner
sudo gitlab-runner register

Configure GitLab / GitLab Runner

Most important settings in the /etc/gitlab/gitlab.rb file:

external_url 'https://yourdomain.com'
# Store git data at another location
git_data_dirs({
  "default" => {
    "path" => "/media/Data/gitlab/git-data"
   }
})
sidekiq['max_concurrency'] = 25
sidekiq['min_concurrency'] = 15

# Since I already have Nginx running, I disabled the built-in Nginx
nginx['enable'] = false
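
Apply the changes and check that all GitLab components come back up:

sudo gitlab-ctl reconfigure
sudo gitlab-ctl status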

Read more: GitLab Docs (great documentation!), GitLab Runner Docs

Tor

Tor can be a bit overwhelming to understand. The Tor package can actually be configured to act as just a ‘Client‘, as a ‘Relay‘ or as a so-called ‘Hidden Service‘.
Or as a combination, but that is not advised (particularly running a Relay & Hidden Service together isn’t advised).

Anyway, for the people who are new to Tor: Tor is an anonymous communication network, which routes the traffic through a set of relays, with the goal of keeping the client user anonymous.

Being anonymous on the Internet means you can use the Onion services, but be aware that you may leak information to Tor services, like your usernames, passwords or maybe your actual name. In the end it is often the users who leak data about themselves, causing them to lose their anonymity. Meaning you can’t blame Tor for that.

Relay & Hidden Services

On your PC, you’ll most likely only use the client side of Tor, like the official Tor Browser. On a dedicated server, on the other hand, Tor is often used as either a Relay node or a Hidden Service.

Disclaimer: Of course, technologies like Tor can be used for both ‘good’ and ‘bad’ (depending on who you ask). The same can be said about every decentralized or anonymous technology for that matter.

With a Relay node you help the Tor network to become more decentralized, faster and more secure, helping people in the world who are being censored. You can also host a Bridge node, which allows users in countries where Tor is blocked to still be able to use Tor. There are some public metrics available: the number of Tor nodes, the different relays, and the Relay Search tool.

Hidden Services are the (web) services that run inside the Tor network, which are reachable via an .onion domain and by default not available on the clearnet.

Important: Visiting onion links does require the Tor Browser.

Just to name two onion domains, DuckDuckGo: http://3g2upl4pq6kufc4m.onion/ and The Hidden Wiki: http://zqktlwiuavvvqqt4ybvgvi7tyo4hjl5xgfuvpdf6otjiycgwqbym2qad.onion/wiki/index.php/Main_Page.
And again, you can host your own hidden service in the Tor network.

Install Tor

sudo apt install -y apt-transport-https
sudo nano /etc/apt/sources.list.d/tor.list # See content below
sudo apt update
sudo apt install tor deb.torproject.org-keyring

tor.list with content:

deb https://deb.torproject.org/torproject.org focal main
deb-src https://deb.torproject.org/torproject.org focal main
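
Note: before the apt update step above will trust this repository, the Tor Project archive signing key needs to be imported; the fingerprint below is the one published in the official Tor installation instructions:

curl https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --import
gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -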

Configure Tor

Let’s say you want to run an Onion hidden service. The configuration file /etc/tor/torrc will look like:

# Disable outgoing
SocksPort 0

# Configure Hidden Service
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServiceVersion 3
HiddenServicePort 80 127.0.0.1:3000

This makes a local service running on port 3000 available via the Tor Onion service on port 80. Restart the tor service: sudo systemctl restart tor. Then sudo cat /var/lib/tor/hidden_service/hostname should give you the onion domain.

Read more: Tor Support site, Relay Operators and Onion services.

DuckDNS

My home internet connection has a dynamic IP address assigned, although it doesn’t change often. However, if my external IP does change, that should not impact the availability of my services. Therefore I use DuckDNS to periodically check my IP address, and update it when needed. My DNS records will therefore always point to the correct IP address.

Install DuckDNS

mkdir duckdns
cd duckdns
nano duck.sh # With content see below
chmod +x duck.sh

duck.sh should contain:

#!/bin/bash
echo url="https://www.duckdns.org/update?domains=melroyserver&token=secret_token&ip=" | curl -k -o ~/duckdns/duck.log -K -
# Don't forget to change the secret_token to your token

Add the duck.sh script to crontab:

crontab -e
*/5 * * * * ~/duckdns/duck.sh >/dev/null 2>&1
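
Run the script once by hand to check that the update succeeds; DuckDNS writes OK (or KO on failure) into the log file:

~/duckdns/duck.sh
cat ~/duckdns/duck.log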

Let’s try nslookup:

nslookup melroyserver.duckdns.org
Server:		127.0.0.53
Address:	127.0.0.53#53

Non-authoritative answer:
Name:	melroyserver.duckdns.org
Address: 85.145.27.228

Read more: Duck DNS: Spec

Python3

sudo apt install -y python3 python3-setuptools python-is-python3
sudo apt-mark hold python2 python2-minimal python2.7 python2.7-minimal libpython2-stdlib libpython2.7-minimal libpython2.7-stdlib

Fail2Ban

sudo apt install -y fail2ban
sudo systemctl start fail2ban
sudo systemctl enable fail2ban
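
By default on Ubuntu the sshd jail is already enabled (via /etc/fail2ban/jail.d/defaults-debian.conf). If you want to tweak the limits, do so in /etc/fail2ban/jail.local rather than editing jail.conf; a minimal sketch (the values are just examples):

sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled  = true
port     = ssh
maxretry = 5
bantime  = 3600
EOF
sudo systemctl restart fail2ban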

NodeJS

curl -sL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt install -y nodejs
sudo apt install gcc g++ make
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt update
sudo apt install -y yarn

Databases

MariaDB

MariaDB is an open-source, drop-in replacement for the well-known MySQL server. Installing is easy:

sudo apt install mariadb-server
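
After installing, it is advisable to run the hardening script. And since the TeamSpeak container further down expects a teamspeak database, here is a sketch of creating it (the names and password are examples, matching the compose file below):

sudo mysql_secure_installation

sudo mysql -e "CREATE DATABASE teamspeak;"
sudo mysql -e "CREATE USER 'teamspeak'@'localhost' IDENTIFIED BY 'secret_password';"
sudo mysql -e "GRANT ALL PRIVILEGES ON teamspeak.* TO 'teamspeak'@'localhost'; FLUSH PRIVILEGES;"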

Public URL: https://mysql.melroy.org (actually a web-based frontend; login required, too bad)

Read more: MariaDB Docs

PostgreSQL

Just another database, which is sometimes faster with complex queries than MySQL. Some applications prefer to run on PostgreSQL databases.

psql listing the tables (\dt) of the Synapse database

Installation is just as easy as MariaDB:

sudo apt install postgresql

Configure PostgreSQL

Changes to /etc/postgresql/12/main/postgresql.conf file (optimized for lot of read/write and SSD):

max_connections = 300
shared_buffers = 8GB
work_mem = 6990kB
maintenance_work_mem = 2GB
effective_io_concurrency = 200
max_worker_processes = 16
max_parallel_maintenance_workers = 4
max_parallel_workers_per_gather = 4
max_parallel_workers = 16
wal_buffers = 16MB
max_wal_size = 8GB
min_wal_size = 2GB
checkpoint_completion_target = 0.9
random_page_cost = 1.1
effective_cache_size = 24GB
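
Restart PostgreSQL to apply the changes. And since Synapse (further down) requires its database to use the C locale, here is a sketch of creating it up front (the user and database names match the homeserver.yaml below):

sudo systemctl restart postgresql

sudo -u postgres createuser --pwprompt synapse
sudo -u postgres createdb --encoding=UTF8 --locale=C --template=template0 --owner=synapse synapse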

Read more: PostgreSQL Docs and a very useful PGTune tool.

Redis

Redis is a special database: an in-memory database, used to cache the most frequently used data in order to speed up the application/website.

Installation is easy again:

sudo apt install redis-server

# Add the www-data user to the redis group
sudo usermod -a -G redis www-data

Configure Redis

Changes to the configuration file /etc/redis/redis.conf:

# Only accept connections via socket file 
port 0
unixsocket /var/run/redis/redis-server.sock
unixsocketperm 775
daemonize yes
pidfile /var/run/redis/redis-server.pid
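
Restart Redis and verify that the Unix socket responds (the -s flag points redis-cli at a socket instead of a TCP port; your user needs to be in the redis group):

sudo systemctl restart redis-server
redis-cli -s /var/run/redis/redis-server.sock ping
# Expected reply: PONG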

Docker Containers

In theory, all the services above could be hosted as Docker containers. However, I run the big and heavy services/databases outside of Docker.

For those applications I prefer to run on a true bare-metal server. The services below are currently hosted via Docker in my case:

Synapse

Matrix is a fully decentralized, open-standard, real-time communication protocol. Synapse is one of the server implementations for Matrix. Dendrite will be the next-generation Matrix server.

As a client user, you can use Element for your chats. It’s fully free. Matrix is a better alternative to WhatsApp, Signal and Telegram. In other words, Matrix does not depend on centralized servers; instead, Matrix is federated. I will most likely dedicate a separate blog article to Matrix.

Public URL: https://matrix.melroy.org (can be used as your Matrix homeserver address!)

Synapse Compose

Since I’m using the PostgreSQL database on my bare-metal machine, I’m NOT running another database instance in Docker:

version: '3.3'
services:
  synapse:
    image: matrixdotorg/synapse
    restart: always
    container_name: synapse
    user: 1000:1000
    volumes:
      - /media/Data/synapse:/data
    ports:
      - "8008:8008"
    environment:
      - UID=1000
      - GID=1000
    healthcheck:
      test: ["CMD", "curl", "-fSs", "http://localhost:8008/health"]
      interval: 1m
      timeout: 10s
      retries: 3
    network_mode: "host"

Main configuration file /media/Data/synapse/homeserver.yaml:

server_name: "melroy.org"
public_baseurl: https://matrix.melroy.org/
listeners:
  - port: 8008
    tls: false
    bind_addresses: ['127.0.0.1']
    type: http
    x_forwarded: true

    resources:
      - names: [client, federation]
        compress: false

tls_fingerprints: [{"sha256": "znOrbGUV3jhjIVQw1tMJRWB0MKoR9CX8+HBTiPaM2qM"}, {"sha256": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU"}]

caches:
   global_factor: 1.0

database:
  name: psycopg2
  args:
    user: synapse
    password: secret_pass
    database: synapse
    host: 127.0.0.1
    port: 5432
    cp_min: 5
    cp_max: 10

max_upload_size: 10M
enable_registration: true
auto_join_rooms:
  - "#welcome:melroy.org"
report_stats: false
limit_remote_rooms:
  enabled: true
  complexity: 0.7
  complexity_error: "This room is too complex to join. Ask @melroy:melroy.org if you want to change this behaviour."
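
With the compose file and homeserver.yaml in place, bring the container up and follow the logs to see whether Synapse connects to PostgreSQL correctly:

docker-compose up -d
docker-compose logs -f synapse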

Gitea

Gitea is a lightweight alternative to GitLab.

Gitea Compose

Gitea also uses the PostgreSQL database on the bare-metal server.

version: "3"
services:
  gitea:
    image: gitea/gitea:1.13
    container_name: gitea
    restart: always
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - ROOT_URL=https://yourserver.com
      - SSH_DOMAIN=yourserver.com
      - SSH_PORT=222
      - DB_TYPE=postgres
      - DB_HOST=127.0.0.1:5432
      - DB_NAME=giteadb
      - DB_USER=gitea
      - DB_PASSWD=secret_password
    volumes:
      - /media/Data/gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    network_mode: "host"

Wekan

Wekan is a to-do web application; very powerful for keeping yourself organized.

Public URL: https://todo.melroy.org

Wekan Compose

For Wekan I will use a Docker MongoDB instance as database storage.

version: '2'
services:
  wekan:
    image: quay.io/wekan/wekan:master
    container_name: wekan-app
    user: 1000:1000
    restart: always
    networks:
      - wekan-tier
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
      - ROOT_URL=https://todo.melroy.org
      - MAIL_URL=smtp://mailserver
      - MAIL_FROM=melroy@melroy.org
      - WITH_API=true
      - BROWSER_POLICY_ENABLED=true
    extra_hosts:
      - "mailserver:192.168.2.20"
    ports:
      - 3001:8080
    depends_on:
      - wekandb

  wekandb:
    image: mongo:3.2.21
    user: 1000:1000
    container_name: wekan-db
    restart: always
    command: mongod --smallfiles --oplogSize 128
    networks:
      - wekan-tier
    expose:
      - 27017
    volumes:
      - /media/Data/wekan/db:/data/db
      - /media/Data/wekan/dump:/dump

networks:
  wekan-tier:
    driver: bridge
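
The /dump volume mapping comes in handy for backups; a sketch of dumping the Wekan database to /media/Data/wekan/dump on the host (mongodump ships inside the mongo image):

docker exec wekan-db mongodump --out /dump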

TeamSpeak

TeamSpeak is used for voice over IP, just like Skype, Zoom or Discord for that matter, but it allows you to host your own server.

Public Address: server.melroy.org:9987 (default TS port)

TeamSpeak Compose

For TeamSpeak I use the bare-metal MariaDB database.

version: '3'
services:
  teamspeak:
    image: teamspeak
    container_name: teamspeak
    restart: always
    ports:
      - 9987:9987/udp
      - 10011:10011
      - 30033:30033
    environment:
      TS3SERVER_DB_PLUGIN: ts3db_mariadb
      TS3SERVER_DB_SQLCREATEPATH: create_mariadb
      TS3SERVER_DB_HOST: 127.0.0.1
      TS3SERVER_DB_USER: teamspeak
      TS3SERVER_DB_PASSWORD: secret_password
      TS3SERVER_DB_NAME: teamspeak
      TS3SERVER_DB_WAITUNTILREADY: 60
      TS3SERVER_LICENSE: accept
    network_mode: "host"

Did you like the article? Please share!