How do you set up a server? Do you do any automation or do you just open up an SSH session and YOLO? Any containers? Is docker-compose enough for you or are you one of those unicorns who had no issues whatsoever with rootless Podman? Do you use any premade scripts or do you hand craft it all? What distro are you building on top of?
I’m currently in the process of “building” my own server, and I’m kinda wondering how “far” most people go, where y’all take shortcuts, and what you spend effort getting just right.
I’m a lazy piece of shit and containers give me cancer, so I just keep iptables aggressive and spin up whatever on an Ubuntu box that gets upgrades when I feel like wasting a weekend in my underwear.
An honest soul
I get paid to do shit with rigor; I don’t have the time, energy, or help to make something classy for funsies. I’m also kind of a grumpy old man: while I’ll praise and embrace Python’s addition of f-strings, which make life better in myriad ways, I eschew the worse laziness of the “containers for everything” attitude that we see in deployment.
Maybe a day shall come when containers are truly less of a headache than just thinking shit through the first time, and I’ll begrudgingly adapt and grow, but that day ain’t today.
Debian + nginx + docker (compose).
That’s usually enough for me. I keep each app’s docker compose file in its own directory under home, like
~/red-discordbot/docker-compose.yml
The only headache I’ve dealt with is permissions: running docker as root leaves a mess of root-owned files in the home directories. I’ve been trying rootless docker lately and it’s been great so far.
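If you want to go the rootless route, recent Docker ships a setup tool for it. A rough sketch of the switch, assuming the rootless extras and `uidmap` packages are installed and your user has subuid/subgid ranges:

```shell
# Run as the unprivileged user, NOT as root.
# Sets up a per-user docker daemon under systemd --user.
dockerd-rootless-setuptool.sh install

# Point the docker CLI at the rootless socket.
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock

# Keep the daemon alive after you log out.
systemctl --user enable --now docker
sudo loginctl enable-linger "$USER"
```

After that, `docker compose up -d` in each app directory works as before, but files it creates in your home are owned by you instead of root.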
edit: I also use rclone for backups.

raspberry pi, arch linux, docker-compose. I really need to look up ansible
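For the rclone backups mentioned above, the core is a one-way sync to a configured remote. A minimal sketch, assuming a remote named `backup` was already set up with `rclone config` (the remote name and paths here are made up):

```shell
# Preview what would change before trusting it.
rclone sync ~/red-discordbot backup:server/red-discordbot --dry-run

# Then the real one-way sync; deletions propagate, so the
# --dry-run habit is worth keeping before cron gets involved.
rclone sync ~/red-discordbot backup:server/red-discordbot
```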
I use debian VMs and create rootless podman containers for everything. Here’s my collection so far.
I’m currently in the process of learning how to combine this with ansible… that would save me some time when migrating servers/instances.
About two years ago my setup had gotten out of control, as it will. A closet full of crap, all running VMs, all poorly managed by chef. Different linux flavors everywhere.
Now it’s one big physical ubuntu box. Everything gets its own ubuntu VM. These days if I can’t do it in shell scripts and xml I’m annoyed. Anything fancier than that I’d better be getting paid. I document in markdown as I go and rsync the important stuff from each VM to an external every night. Something goes wrong, I just burn the VM, copy-paste it back together in a new one from the mkdocs site, then get on with my day.
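The nightly rsync in that setup can be a single command fired from cron on each VM. A sketch, where the target box, user, and paths are all placeholders:

```shell
# Run from cron on each VM, e.g. with a crontab entry of "0 3 * * *".
# -a preserves permissions/timestamps, --delete mirrors removals,
# so the external copy stays an exact snapshot of the source.
rsync -a --delete /srv/important/ backup@nas:/backups/myvm/
```

Key-based SSH auth is assumed, since cron can’t type a password.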
Generally, it’s Proxmox, debian, then whatever is needed for what I’m spinning up. Usually Docker Compose.
Lately I’ve been playing some with Ansible, but its use is far from common for me right now.
I use NixOS on almost all my servers, with declarative configuration. I can also install my config in one command with NixOS-Anywhere.
It lets me improve my setup bit by bit without having to keep track of what I did on specific machines.
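For reference, the nixos-anywhere one-command install looks roughly like this; the flake attribute and target address are placeholders:

```shell
# Installs the flake's "myhost" NixOS configuration onto a machine
# reachable as root over SSH. It partitions and formats the disk
# per the flake's disko config, so point it at the right box.
nix run github:nix-community/nixos-anywhere -- \
  --flake .#myhost root@192.0.2.10
```

Once the machine is up, later tweaks are just edits to the flake followed by a rebuild, which is what makes the “no tracking what I did where” part work.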
Proxmox, then create an LXC for everything (mostly debian and a bit of alpine), no automation, full yolo. If it breaks I have backups (problems are for future me, eh)
This.
Proxmox and then LXCs for anything I need. And yes - I cheat a bit, I use the excellent Proxmox scripts - https://tteck.github.io/Proxmox/ - because I’m lazy like that haha
Mostly the same. Proxmox with several LXC, two of which are running docker. One for my multimedia, the other for my game servers.
I used to do the same, but nowadays I just run everything in docker, within a single lxc container on proxmox. Having to set up mono or similar every time I wanted to set up a game server or even jellyfin was annoying.
I have a stupid overcomplicated networking script that never works, so every time I set up a new server I have to fix a myriad of weird issues I’ve never seen before. Usually I set up a server with a keyboard and mouse because SSH needs networking; if it’s a cloud machine, it’s the QEMU console or hundreds of reboots.
Debian netinst via PXE, SSH/YOLO, docker + compose (formerly swarm), scripts are from my own library, Debian.
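A PXE setup for the Debian netinst can be as small as a dnsmasq proxy-DHCP config, so the router keeps handing out addresses and dnsmasq only supplies the boot file. A sketch; the interface, subnet, and paths are assumptions, and it expects the Debian netboot tarball extracted under the TFTP root:

```shell
# Write a proxy-DHCP config and restart dnsmasq.
cat <<'EOF' | sudo tee /etc/dnsmasq.d/pxe.conf
interface=eth0
# "proxy" mode: don't hand out leases, just PXE boot info.
dhcp-range=192.168.1.0,proxy
enable-tftp
tftp-root=/srv/tftp
# BIOS clients chain-load pxelinux from the Debian netboot archive.
dhcp-boot=pxelinux.0
EOF
sudo systemctl restart dnsmasq
```

UEFI clients need a different boot file, so real setups usually add architecture-matching rules on top of this.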
I do the same except I boot a usb installer instead of PXE.
I can never find a USB drive when I need one, thus my PXE server was born. lol
I use Proxmox, then stare at the dashboard realizing I have no practical use for a home lab
So I’m not alone. I am trying to better myself.
I use SSH to manage docker compose. I’m just using a raspberry pi right now so I don’t have room for much more than Syncthing and Dokuwiki.
Don’t underestimate a pi! If you have a 3 or up, it can easily handle a few more things.
I forgot to mention I also have a samba share running on it and it’s sooooooo sloooooow. I might need to reflash the thing just to cover my bases but it’s unusable for large or many files.
NixOS instances running Nomad/Vault/Consul. Each service behind Traefik with LE certs. Containers can mount NFS shares from a separate NAS which optionally gets backed up to cloud blob storage.
I use SSH and some CLI commands for deployment, but only because that’s faster than CI/CD. I’m only running `nomad run …` for the most part.
The goal was to be resilient to single node failures and align with a stack I might use for production ops work. It’s also nice to be able to remove/add nodes fairly easily without worrying about breaking any home automation or hosting.
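Day-to-day on a cluster like that is mostly a handful of nomad CLI calls over SSH. A sketch, where the job file name is a placeholder:

```shell
# Sanity-check cluster membership before deploying anything.
nomad server members

# Diff what a job file would change, then submit it.
nomad job plan whoami.nomad.hcl
nomad job run whoami.nomad.hcl

# See which node each allocation landed on.
nomad job status whoami
```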
I’ve set up some godforsaken combination of docker, podman, nerdctl and bare metal at work for stuff I needed since they hired me. Every day I’m in constant dread something I made will go down, because I don’t have enough time to figure out how I was supposed to do it right T.T
I use the following procedure with ansible.
- Set up the server with the things I need for k3s to run
- Set up k3s
- Bootstrap and create all my services on k3s via ArgoCD
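Steps 2 and 3 boil down to something like the following (step 1 would be the ansible play; the git repo that Argo CD is pointed at afterwards is up to you):

```shell
# 2. Install k3s (single node); the official installer sets up
#    systemd units, kubectl config, etc.
curl -sfL https://get.k3s.io | sh -

# 3. Install Argo CD into the cluster from the upstream manifest...
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# ...then declare one Argo CD Application pointing at a git repo
# that in turn declares every other service ("app of apps" pattern),
# so the whole cluster can be rebuilt from git.
```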
People like to diss running kubernetes on your personal servers, but once you have enough services running, managing them with docker compose no longer cuts it, and kubernetes is the next logical step. Tools such as k9s make navigating a kubernetes cluster a breeze.