Standardize deployment strategy #13
We should discuss our current web service deployment strategies and work out how we can best deploy software as a group.
There are a bunch of layers, and at each layer we can choose one of many options:
My current systems:
VPS
Hardware
OVH VPS, 4GB RAM, 2-core, 80GB storage, gigabit?
Backblaze S3 for media storage
OS + init
Ubuntu 22.04
Filesystem: ext4
Backups: rsnapshot to a local machine. Additionally, a cron job runs a `docker exec` command to dump the database to a file. Backups happen daily, ~1AM-4AM EST.
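A minimal sketch of what that cron entry could look like, assuming a Postgres container (the container name, user, database, and paths are all hypothetical):

```bash
# Hypothetical /etc/cron.d entry: dump the database to a file nightly at 1:30 AM
30 1 * * * user docker exec db pg_dump -U postgres appdb > /home/user/backups/appdb.sql
```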
Containers
Docker. Services are organized with one service per folder in `$HOME`. All services have (or should have) their mounts at `$HOME/$service/mounts/$mount_name`.
Reverse proxy
Traefik runs in a container defined at `$HOME/traefik/docker-compose.yml`. It offers configuration through Docker labels (so it must bind-mount the Docker daemon socket, which rules out Podman), but this allows proxy rules to be defined per application, meaning that moving a service from one machine to another is as simple as copying over its directory and changing the DNS records.
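For illustration, a hypothetical service's compose file might carry labels along these lines (the service name, hostname, and certificate resolver are made up):

```yaml
# Hypothetical $HOME/whoami/docker-compose.yml. Traefik discovers these labels
# through the Docker socket, so the proxy rule travels with the service folder.
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
    networks:
      - traefik

networks:
  traefik:
    external: true
```

Monitoring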
None
DNS
I use DigitalOcean's DNS service as my authoritative nameserver. In the past I wrote a script to watch Docker labels for DNS definitions, like Traefik does, but I've found that I prefer to manage the records manually now that I have a relatively stable IP and `doctl` exists.
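For example, creating an A record from the CLI (the domain and address are hypothetical):

```bash
doctl compute domain records create example.com \
  --record-type A --record-name blog --record-data 203.0.113.10
```

Services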
At home
I also have a machine running at home:
Dell tower, Intel Core 2 Quad 2.4GHz, 4GB RAM, 250GB SSD, 6TB striped zpool.
OS: NixOS 22.11
Network is a 100/10 cable line, though.
No backups, 😰
The containers are run basically the same way as on the VPS. I'm planning to make it more Nix-y, but I haven't gotten that far; just trying to get away from Ubuntu at this point.
Services
I use a script to bring up or down all the services on either machine.
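A hypothetical sketch of its shape, given the one-folder-per-service layout above (service names other than pihole are placeholders):

```bash
#!/usr/bin/env bash
# Hypothetical sketch: bring every per-folder compose service up or down.
set -euo pipefail

services=(
  traefik
  # pihole    # commented out: now managed through NixOS (see below)
  nextcloud   # placeholder name
)

case "${1:-}" in
  up)   args=(up -d) ;;
  down) args=(down) ;;
  *)    echo "usage: $0 up|down" >&2; exit 1 ;;
esac

for svc in "${services[@]}"; do
  (cd "$HOME/$svc" && docker compose "${args[@]}")
done
```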
I also have one service (DNS) which is configured through NixOS (hence the commented-out pihole in the script above).
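A NixOS-defined container along those lines might look roughly like this (a hypothetical sketch, not the actual config; the image, ports, and paths are all assumptions):

```nix
# Hypothetical sketch of a NixOS-managed DNS container, not the original config.
{ config, ... }:
{
  virtualisation.oci-containers.containers.pihole = {
    image = "pihole/pihole:latest";
    ports = [ "53:53/udp" "53:53/tcp" "8053:80/tcp" ];
    volumes = [ "/var/lib/pihole:/etc/pihole" ];
    environment.TZ = "America/New_York";
  };
}
```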
Places I'd like to improve
I also have an HP ProLiant G7 (or G8?) with 2x Xeon 2.something GHz + 24GB RAM. The problem is that it's stuck behind my <10 Mbps connection, and it only has 2.5" bays. This means storage is expensive for that machine unless we can get ahold of a different backplane (something I've been looking into but haven't taken care of yet) or load it up with SSDs (expensive!).
I have 3 self-hosted systems plus 1 VPS.
All 3 systems are set up fairly the same, with everything stored under `/data`.
Hardware
System 1 (“Homelab”)
System 2 (“Shoplab”)
System 3 (“OuterHeaven”, a project server set up for family and friends)
VPS
A Linode Nanode whose sole purpose is to run RustDesk in Docker. The `docker-compose.yml` is in Git, and there is no need for saved data. It runs on Alma 9.
Containers
Docker. The file structure looks like this, with `@service` being a subvolume so it stays snapshot-compatible:
`/data/containers/domain/@service/{docker-compose.yml, data/, config/, db/, etc.}`
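Presumably (btrfs is my assumption, based on the `@service` naming) a point-in-time backup is then just a read-only snapshot of the service's subvolume:

```bash
# Hypothetical: snapshot one service's subvolume before a backup or upgrade
btrfs subvolume snapshot -r \
  /data/containers/example.com/@seafile \
  /data/snapshots/example.com/@seafile-$(date +%F)
```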
Reverse proxy
The reverse proxy is Caddy, which runs on a Docker network that all containers are connected to.
I use a custom Caddy container with both the Cloudflare & Linode DNS plugins enabled, to allow Caddy to do its auto-SSL magic with domains behind Tailscale. Each service also has its own Caddyfile, `com.domain.service.caddyfile`.
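A per-service file might look roughly like this (the hostname and upstream are hypothetical; the DNS challenge is what lets Caddy issue certificates for hosts that only resolve over Tailscale):

```
# Hypothetical com.domain.service.caddyfile
service.domain.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy service:8080
}
```

DNS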
Public DNS is a combination of Linode & Cloudflare. I was using Linode happily, but the Caddy plugin kept having renewal issues, so I am migrating to Cloudflare.
Local DNS is handled by a bind9 Docker container with records pointing at the Docker host. This only gets used if the internet goes down and the Tailscale IP is unreachable; it means I can still use the services on the LAN.
Services
Between the 4 systems, the services I run are:
Services are controlled through a custom bash script and config file that I call dawker; GitHub repo coming soon.
`dawker seafile.domain {start,stop,restart,backup}`
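Since the repo isn't up yet, here's a rough guess at what such a dispatcher could look like, using the directory layout above (entirely hypothetical, not the real script):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of a dawker-style dispatcher, not the real script.
set -euo pipefail

usage="usage: dawker <service.domain> <start|stop|restart|backup>"
name="${1:?$usage}"
action="${2:?$usage}"

# Map "seafile.domain" onto the on-disk layout /data/containers/domain/@seafile
svc="${name%%.*}"
domain="${name#*.}"
dir="/data/containers/$domain/@$svc"

cd "$dir"
case "$action" in
  start)   docker compose up -d ;;
  stop)    docker compose down ;;
  restart) docker compose restart ;;
  backup)  btrfs subvolume snapshot -r "$dir" \
             "/data/snapshots/$domain/@$svc-$(date +%F)" ;;
  *)       echo "$usage" >&2; exit 1 ;;
esac
```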
Places I'd like to improve