Standardize deployment strategy #13

Open · opened 2023-06-28 22:37:27 +00:00 by scott · 2 comments

We should discuss our current web service deployment strategies and work out how we can best deploy software as a group.

There are a bunch of layers and at each layer, we can choose one of many options:

  • Hardware/bandwidth provisioning: VPS providers, physical machines + a place to put them + ISP?
  • Operating system, init system: what triggers the services to start up after booting? How are updates handled?
  • Filesystem + backups
  • Container runtime: Docker, Podman, k8s, or Nix containers
  • Reverse proxy: Traefik, nginx, Caddy, or HAProxy
  • Monitoring: Nagios, Prometheus+Grafana, or...?
  • DNS (with an API)
scott added this to the Internal infrastructure project 2023-06-28 22:37:27 +00:00

scott (Author · Owner):
My current systems:

VPS

Hardware

OVH VPS, 4GB RAM, 2-core, 80GB storage, gigabit?
Backblaze S3 for media storage

OS + init

Ubuntu 22.04
Filesystem: ext4
Backups: rsnapshot to a local machine. Additionally, a cron job runs a docker exec command that dumps the database to a file. Backups happen daily, ~1AM-4AM EST.
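
For illustration, that cron entry could look something like the following sketch (the container name, database user/name, and backup path are all hypothetical):

```sh
# Dump the service's database nightly; pg_dump runs inside the container,
# while the redirect writes the file on the host. In crontab, % must be
# escaped as \%.
0 1 * * * docker exec mastodon_db_1 pg_dump -U mastodon mastodon_production > "$HOME/backups/mastodon-$(date +\%F).sql"
```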

Containers

Docker. Services are organized with one service per folder in $HOME. All services have (or should have):

  • all dependencies and the application itself in a docker-compose.yml
  • secrets stored in *.pw files, which are fed into the docker-compose.yml; the .pw extension makes them simple to add to .gitignore files
  • Any volumes for the containers bind mounted to $HOME/$service/mounts/$mount_name.
  • Labels defining their proxy rules (see the Reverse Proxy section); a sketch of this layout follows the list
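
A minimal sketch of what one such service directory might look like, following the conventions above (the service name "myapp", its image, and the router rule are all hypothetical):

```sh
# One directory per service under $HOME; secrets in *.pw; data bind-mounted
# under mounts/; proxy rules attached as labels.
mkdir -p "$HOME/myapp/mounts/data"
echo 'hunter2' > "$HOME/myapp/db.pw"   # *.pw = secret, easy to .gitignore

cat > "$HOME/myapp/docker-compose.yml" <<'EOF'
services:
  myapp:
    image: example/myapp:latest
    environment:
      - DB_PASSWORD   # passed through from the shell; a wrapper reads db.pw
    volumes:
      - ./mounts/data:/var/lib/myapp
    labels:
      # per-app proxy rule, picked up by Traefik (see Reverse proxy)
      - traefik.http.routers.myapp.rule=Host(`myapp.example.com`)
    networks: [web]
networks:
  web:
    external: true
EOF

# started the same way the batch script later in this comment does it:
DB_PASSWORD=`< "$HOME/myapp/db.pw"` docker-compose \
    --project-directory="$HOME/myapp" -f "$HOME/myapp/docker-compose.yml" up -d
```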

Reverse proxy

Traefik runs in a container defined at $HOME/traefik/docker-compose.yml. It's configured through Docker labels (so it must bind mount the Docker daemon socket, which rules out podman), but this allows proxy rules to be defined per-application, meaning that moving a service from one machine to another is as simple as copying over its directory and changing the DNS records.
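
The Traefik side of that might look roughly like the sketch below (image tag, entrypoint, and ports are assumptions; the key detail is the read-only bind mount of the Docker socket, which is what lets Traefik read other containers' labels):

```sh
cat > "$HOME/traefik/docker-compose.yml" <<'EOF'
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true          # read proxy rules from labels
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks: [web]
networks:
  web:
    external: true
EOF
```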

Monitoring

None

DNS

I use DigitalOcean's DNS service as my authoritative nameserver. In the past I wrote a script that watched docker labels for DNS definitions, like Traefik does, but I've found that I prefer to manage the records manually now that I have a relatively stable IP and doctl exists.
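
With doctl, adding or checking a record is a one-liner; for example (domain and IP are placeholders):

```sh
# create an A record, then list the domain's records to confirm
doctl compute domain records create example.com \
    --record-type A --record-name myapp --record-data 203.0.113.7
doctl compute domain records list example.com
```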

Services

  • Mastodon
  • 2 Nextcloud instances
  • vaultwarden (unused tbh)
  • 2 Hugo sites
  • a static file server
  • a WordPress site

At home

I also have a machine running at home:

Dell tower, Intel Core 2 Quad 2.4GHz, 4GB RAM, 250GB SSD, 6TB striped zpool.
OS: NixOS 22.11

Network is a 100/10 cable line, though.

No backups, 😰

The containers are run basically the same as in the VPS. I'm planning to make it more Nix-y but I haven't gotten that far, just trying to get away from Ubuntu at this point.

Services

  • vaultwarden (the one I use)
  • hedgedoc
  • a static files server
  • a wiki
  • Jellyfin
  • Tandoor cookbook
  • Minio for one of the nextclouds on the VPS
  • ProxiTok
I use this script to bring up or down all the services on either machine:

```sh
#!/usr/bin/env zsh
# A simple script for batch-starting and -stopping my docker-compose configs.
#
# to add a new service, add it to the $all_services array and add a block for
# executing it to start(). If there is nothing that needs done for it other than
# running "docker-compose up", add it to the second case match alongside
# bitwarden, jellyfin, and hedgedoc.
#all_services=(traefik bitwarden nextcloud ghost jellyfin portainer hedgedoc onlyofice recipes static wordpress)
# all_services=(traefik bitwarden hedgedoc static mediawiki jellyfin tandoor pihole nextcloud-s3)
all_services=(
    traefik-2
    bitwarden
    hedgedoc
    static
    mediawiki
    jellyfin
    tandoor
#    pihole
    nextcloud-s3
    gimmeasearx
    ProxiTok
)
app_name="$0" # different inside of functions

## ensure docker-compose is up-to-date
# sudo pip install -U docker-compose

function abort() {
    echo "$@"
    exit 1
}
function composeOpts() {
    dir=$1
    # emit the options as data, not as a printf format string
    printf '%s' "--project-directory=$dir -f $dir/docker-compose.yml"
}
tokenFileLoc=$HOME/.config/digital-ocean/auth.key

function ensureNetwork() {
    # ensure the shared network exists; ignore the error if it already does
    docker network create web 2> /dev/null || true
}

function noOptStart() {
    docker-compose `composeOpts $1` pull
    docker-compose `composeOpts $1` up -d
}

function start() {
    ensureNetwork
    case "$1" in
        traefik)
                # start traefik
                docker-compose `composeOpts traefik` pull
                DO_AUTH_TOKEN=`< $tokenFileLoc` docker-compose `composeOpts traefik` up -d
        ;;
        mediawiki|onlyoffice|bitwarden|portainer|jellyfin|recipes|hedgedoc \
		|static|wordpress|prometheus|tandoor|pihole|mailjet-logger|docker-registry \
		|nextcloud-s3|gimmeasearx|traefik-2|ProxiTok)
                # start service which requires no secrets
                noOptStart $1
        ;;
        ghost)
            # start ghost
            docker-compose `composeOpts ghost` pull
            MYSQL_ROOT_PASSWORD=`< ghost/mysql.pw` \
            database__connection__password=`< ghost/mysql.pw` \
            mail__options__auth__pass=`< ghost/email.pw` \
            	docker-compose `composeOpts ghost` up -d
        ;;
	# etherpad)
	#     docker-compose `composeOpts etherpad` pull
	#     DB_PASS=`< etherpad/db.pw` \
	# 	    POSTGRES_PASSWORD="$DB_PASS" \
	#     docker-compose `composeOpts etherpad` up -d
	# ;;
	ethercalc)
		docker-compose `composeOpts ethercalc` pull
		docker-compose `composeOpts ethercalc` up -d
	;;
        "")
		for service in $all_services; do
			start $service &
		done
            wait
        ;;
	discourse)
		: TODO
	;;
        *) usage error: unknown service $1
        ;;
    esac
}

function stop() {
    service="$1"
    if test -z "$service"; then
	for service in $all_services; do
		stop $service &
	done
        wait
        docker network rm web > /dev/null
    else
        docker-compose `composeOpts $service` down
    fi
}

function restart() {
    service="$1"
    if test -z "$service"; then
	for service in $all_services; do
		restart $service &
	done
        wait
    else
        stop $service
        start $service
    fi
}

function logs() {
    if test -z "$1"; then
	for service in $all_services; do
		logs $service &
	done
        wait
    else
        docker-compose `composeOpts $1` logs -f &
    fi
}

function usage() {
    echo usage
    echo
    echo "    $app_name start|run|up|stop|down|restart|logs [service]"
    echo
    echo '  - service is optional. if no service is specified, the action is applied to all services.'
    echo '  - start/run/up are aliases, they all update and bring up the service(s)'
    echo '  - stop and down are aliases, they run `docker-compose down` on the service(s)'
    echo '  - restart is the same as running `docker-compose restart` in the folder for the service(s)'
    echo '  - logs is the same as running `docker-compose logs -f` in the folder of the service(s).'
    echo '        a nice side effect of this is that if no service is specified, the logs of all services '
    echo '        are shown together inline.'
    if [ -z "$NO_ABORT" ]; then
      abort $@
    fi
}

action="$1"
service="$2"

case $action in
start)   start $service;;
run)     start $service;;
up)      start $service;;
stop)    stop $service;;
down)    stop $service;;
restart) restart $service;;
logs)    logs $service;;
"")      usage error: please specify an action ;;
*)       usage error: "no action known for $1";;
esac
```

I also have one service (DNS) which is configured through NixOS (hence the commented-out pihole in the script above); here's its config as an example:

```nix
{ config, pkgs, ... }:

{
  environment.systemPackages = [
    pkgs.blocky
  ];

  services.blocky = {
    enable = true;
    settings = {
      upstream.default = [
        "1.1.1.1"                 # cloudflare
        "https://one.one.one.one" # cloudflare; DoH
        "one.one.one.one:853"     # cloudflare; DoT
        "1.1"                     # cloudflare
        "9.9.9.9"                 # quad9
        "149.112.112.112"         # quad9
        "dns9.quad9.net:853"      # quad9; DoT
        "https://dns9.quad9.net"  # quad9; DoH
        "8.8.8.8"                 # google
        "4.4.4.4"                 # google
        "dns.google:853"          # google; DoT
        "https://dns.google"      # google; DoH
        "doh.mullvad.net:853"     # Mullvad; DoT
        "https://doh.mullvad.net" # Mullvad; DoH
        "https://dns.opendns.com" # OpenDNS; DoH
        "dns.opendns.com:853"     # OpenDNS; DoT
        "208.67.222.222"          # OpenDNS
        "208.67.220.220"          # OpenDNS
      ];
      upstreamTimeout = "1500ms";
      startVerifyUpstream = true;
      blocking = {
        blackLists = {
          ads = [
            "https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt"
            "https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts"
            "http://sysctl.org/cameleon/hosts"
            "https://s3.amazonaws.com/lists.disconnect.me/simple_tracking.txt"
            ''
            # inline definition with string literal in hosts format
            ''
          ];
        };
        clientGroupsBlock.default = [ "ads" ];
        startStrategy = "failOnError";
      };
      caching.cacheTimeNegative = -1;
      # Uncomment to enable metrics:
      # prometheus.enable = true;
      # bootstrapDns = [
      #     { upstream = "doh.mullvad.net"; ips = [ "194.242.2.2" "2a07:e340::2" ]; }
      # ];
    };
  };
}
```

Places I'd like to improve

  • Monitoring
  • Backups (I'd rather be using ZFS send than rsnapshot; see the sketch below)
  • I'd like each service to be started by the init system (ZFS?) and ideally configured with more robust config management like Nix or Ansible
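
On the ZFS send point, a minimal sketch of what an incremental send-based backup could look like (pool/dataset names and the remote host are hypothetical):

```sh
# Take today's snapshot, then ship only the delta since yesterday's.
zfs snapshot tank/services@2023-06-28
zfs send -i tank/services@2023-06-27 tank/services@2023-06-28 \
    | ssh backuphost zfs receive -F backup/services
```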

I also have an HP ProLiant G7 (or 8?) with 2x Xeon 2.something GHz + 24GB RAM. The problem is that it's stuck behind my <10Mbps connection and only has 2.5" bays. This means storage is expensive for that machine, unless we can get hold of a different backplane (something I've been looking into but haven't taken care of yet) or load it up with SSDs (expensive!)

Member

I have 3 self-hosted systems and 1 VPS.

All 3 systems are set up fairly the same:

  • Everything is behind Tailscale, with the exception of the VPS.
  • All systems are Fedora Server 36 with BTRFS.
  • All data is stored on the second drive, mounted at /data.
  • BTRFS snapshots are taken when upgrading services; however, I have no external backup (I know, shame on me).

Hardware

System 1 (Homelab)

  • HP EliteDesk 800
  • Intel Core i5-6600T @ 2.70GHz 4x
  • 16 GB RAM
  • 256 GB SSD boot drive
  • 2TB HDD for storage

System 2 (“Shoplab”)

  • LENOVO ThinkCentre ??? (It’s a mini PC like the EliteDesk)
  • Intel Core i5-6500T @ 2.50GHz 4x
  • 32 GB RAM
  • 256 GB SSD boot drive
  • 2TB HDD for storage

System 3 (“OuterHeaven”, a project server set up for family and friends)

  • Cobbled-together ASUS desktop
  • Intel Core i5-4590 @ 3.30GHz 4x
  • 16 GB RAM
  • 256 GB SSD boot drive
  • 16TB HDD for storage

VPS
A Linode Nanode whose sole purpose is to run RustDesk in Docker. The docker-compose.yml is in git and there is no need for saved data. It runs on Alma 9.

Containers

Docker. The file structure looks like this, with @service being a subvolume so that it's snapshot-compatible:
/data/containers/domain/@service/{docker-compose.yml, data/, config/, db/, ...}
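
A sketch of that layout in practice ("domain" and "service" stand in for real names):

```sh
# one subvolume per service, so it can be snapshotted independently
btrfs subvolume create /data/containers/domain/@service
# read-only snapshot before upgrading the service
btrfs subvolume snapshot -r /data/containers/domain/@service \
    /data/containers/domain/@service-pre-upgrade
```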

Reverse proxy

The reverse proxy is Caddy, running on a Docker network that all containers are connected to.
I use a custom Caddy container with both the Cloudflare & Linode DNS plugins enabled, which lets Caddy do its auto-SSL magic with domains behind Tailscale. Each service also has its own Caddyfile, com.domain.service.caddyfile.
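
For context, that kind of plugin-enabled Caddy binary is typically produced with xcaddy; a sketch (the module paths, especially the Linode one, are assumptions):

```sh
# build a caddy binary with DNS-challenge modules compiled in
xcaddy build \
    --with github.com/caddy-dns/cloudflare \
    --with github.com/caddy-dns/linode
```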

DNS

Public DNS is a combination of Linode & Cloudflare. I was using Linode happily, but the Caddy plugin kept having renewal issues, so I am migrating to Cloudflare.

Local DNS is handled by a bind9 Docker container that points to the Docker host. This only gets used if the internet goes down and the Tailscale IP is unreachable; it means I can still use the services on the LAN.

Services

Between the 4 systems the services I run are:

  • Caddy x3 (Reverse proxy for each docker host)
  • GitLab
  • Gitea
  • Seafile x3
  • Vaultwarden x2
  • bind9 x2 (Local DNS for each physical location)
  • piHole x2 (For each physical location)
  • Jellyfin
  • LLDAP
  • EteSync/Base
  • Peertube x3
  • osTicket
  • Joplin Server
  • A static website served via Cloudflared
  • RustDesk
  • Cockpit (The only service that is not a container.) x4

Services are controlled through a custom bash script and config file that I call dawker (GitHub repo coming soon):
dawker seafile.domain {start,stop,restart,backup}

Places I'd like to improve

  • I'd like to switch from Docker to Podman.
  • Proper backups!!
  • More services
    • Grafana
    • Federation stuff
      • Mastodon
      • Pixelfed (Kinda in the works but put on hold due to time)
      • Matrix
      • Lemmy
      • etc...
    • Ansible
    • Headscale (I would really like this going)
    • Immich (Next on my list)
  • A web builder similar to GrapeDrop (grapejs) or Seilex. I'm still searching...
  • Using env files. (It's easier for me to visualize everything in one compose file, and I have a "set and forget" / "if it ain't broke, don't fix it" mentality sometimes.)