Marginally Better Shortlinks

I’ve found myself tinkering with WordPress stuff a lot lately. Here’s another quick change I made for nicer shortlinks using a shorter domain that I own.

This was inspired by trawling through jwz’ archives and hacks. This post in particular.

The first piece of this puzzle is adding this location block to the nginx configs for bhh.sh. There’s probably a better way to do this to handle more types or remove the extra redirect.

location ~* ^/p/(.*) {
  return 302 https://benharri.org/?p=$1;
}

The other half is filtering the shortlink on the WordPress side. I’ve added this to my theme’s functions.php. The basic logic is based on jwz’ base64 shortlinks, just without the base64, dropping the post ID right in there.

// returning a non-false value here short-circuits wp_get_shortlink()
add_filter( 'pre_get_shortlink', 'bhhsh_shortlink', 10, 4 );

function bhhsh_shortlink( $shortlink, $id, $context, $allow_slugs ) {
  // mirror core's behavior: only handle the 'query' context on singular views
  if ( $context == 'query' && ! is_singular() )
    return false;

  $post = get_post( $id );
  if ( empty( $post ) )
    return false;

  $id = $post->ID;
  if ( empty( $id ) )
    return false;

  return 'https://bhh.sh/p/' . $id;
}

DNSSEC wasn’t worth it

In reply to Calling Time on DNSSEC? by Geoff Huston.

[… we] estimate that DNSSEC validation is performed around 1% of the time, given the DNS query profile of today’s data

I run my own authoritative nameservers and have had a slight nagging feeling that I should’ve enabled DNSSEC years ago. It’s been on my perpetual to-do list but I’ve never gotten around to it. I’ve definitely caused some outages trying to get DNSSEC to work.

I came across this article and it confirms that my procrastination was pretty OK in this specific case.

Bluesky PDS Without Docker

Here’s how I got a self-hosted PDS (personal data server) running without Docker.

This can be useful if you want to run the PDS on an existing machine or just don’t like Docker. I came up with these steps by emulating what the installer script does.

My setup uses nginx and a wildcard TLS cert for my PDS domain.

Get the code

Clone the PDS repo

$ git clone https://github.com/bluesky-social/pds

Set up nginx

I use certbot to issue wildcard certs for my domains. See my wildcard cert script here. Note that you will need to set up credentials for your nameservers. I’m not aware of a way to issue certs on-demand in nginx like the example Caddy config does.
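
For reference, here’s roughly what a wildcard issuance looks like with a DNS plugin. This is only a sketch: it assumes the RFC 2136 plugin with a TSIG credentials file, since I run my own nameservers; swap in whichever DNS plugin matches your setup.

$ certbot certonly \
    --dns-rfc2136 \
    --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
    -d hellthread.pro -d '*.hellthread.pro'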

Here’s my nginx config for the PDS.

# map for websocket upgrades; $connection_upgrade below needs this in the http context
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
  listen 80;
  server_name hellthread.pro *.hellthread.pro;
  return 302 https://$host$request_uri;
}

server {
  listen 443 ssl http2;
  server_name hellthread.pro *.hellthread.pro;
  ssl_certificate /etc/letsencrypt/live/hellthread.pro/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/hellthread.pro/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

  location / {
    include proxy_params;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_pass http://127.0.0.1:3002;
  }
}

Configure your .env file

Adjust the top 4 options, filling in your domain and generating keys with the following commands:

Use this for the JWT_SECRET:

$ openssl rand --hex 16

Use this twice to generate the admin password and rotation key:

$ openssl ecparam --name secp256k1 --genkey --noout --outform DER | tail --bytes=+8 | head --bytes=32 | xxd --plain --cols 32

PDS_HOSTNAME="your domain here"
PDS_JWT_SECRET="generated secret"
PDS_ADMIN_PASSWORD="generated key"
PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX="another generated key"

PDS_DATA_DIRECTORY=./data
PDS_BLOBSTORE_DISK_LOCATION=./data/blocks
PDS_DID_PLC_URL=https://plc.directory
PDS_BSKY_APP_VIEW_URL=https://api.bsky.app
PDS_BSKY_APP_VIEW_DID=did:web:api.bsky.app
PDS_REPORT_SERVICE_URL=https://mod.bsky.app
PDS_REPORT_SERVICE_DID=did:plc:ar7c4by46qjdydhdevvrndac
PDS_CRAWLERS=https://bsky.network
LOG_ENABLED=true
NODE_ENV=production
PDS_PORT=3002

Run the PDS

Be sure to install the dependencies:

$ cd service
$ pnpm install --production --frozen-lockfile
$ mkdir -p data/blocks

This is the systemd setup I use to run the PDS. Add your own user unit with the following steps (you may need loginctl enable-linger so user units start at boot without an active login session):

$ mkdir -p ~/.config/systemd/user
$ $EDITOR ~/.config/systemd/user/pds.service
# copy in the example below and adjust as needed
$ systemctl --user daemon-reload
$ systemctl --user enable --now pds

pds.service:

[Unit]
Description=atproto personal data server

[Service]
WorkingDirectory=/home/ben/workspace/pds/service
ExecStart=/usr/bin/node --enable-source-maps index.js
Restart=on-failure
EnvironmentFile=/home/ben/workspace/pds/service/.env

[Install]
WantedBy=default.target

View the logs from journalctl like this:

$ journalctl --user --output=cat --follow --unit pds | jq
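
To sanity-check that the PDS is up and reachable through nginx, hitting the health endpoint should return the server version (the hostname here is mine; substitute your own):

$ curl https://hellthread.pro/xrpc/_health
# should return a small JSON blob with the server version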

You can run the pdsadmin commands by setting the PDS_ENV_FILE variable like this:

ben@odin ~/w/p/pdsadmin (main)> PDS_ENV_FILE=../service/.env bash account.sh list
Handle              Email               DID
ben.hellthread.pro  ben@hellthread.pro  did:plc:g5isluhi3wkw557ucarjgtgy

Update

To update your PDS, use git pull in the directory you cloned it in. Then update dependencies in the service subdirectory and restart the unit:

$ cd pds
$ git pull
$ cd service
$ pnpm install --production --frozen-lockfile
$ systemctl --user restart pds

Proxmox on Hetzner Setup Notes

Installation

Basic installation on top of plain Debian: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm

Network config

Network configs derived from: https://community.hetzner.com/tutorials/install-and-configure-proxmox_ve

sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
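
These sysctls won’t survive a reboot on their own; I’d also drop them into a file under /etc/sysctl.d (the filename is arbitrary):

cat <<EOF > /etc/sysctl.d/99-forwarding.conf
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
EOF
sysctl --system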

Proxmox host /etc/network/interfaces

This example is for the main IPv4 address 157.90.92.151 with two routed subnets, 157.90.196.48/28 and 162.55.142.192/28. The IPv4 gateway comes from the existing Hetzner config provided at install time.

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp8s0
iface enp8s0 inet static
    address 157.90.92.151
    netmask 255.255.255.255
    pointopoint 157.90.92.129
    gateway 157.90.92.129

iface enp8s0 inet6 static
    address 2a01:4f8:252:3e22::2
    netmask 128
    gateway fe80::1

auto vmbr0
iface vmbr0 inet static
    address 157.90.92.151
    netmask 255.255.255.255
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    pre-up brctl addbr vmbr0
    up ip route add 157.90.196.48/28 dev vmbr0
    up ip route add 162.55.142.192/28 dev vmbr0
    down ip route del 157.90.196.48/28 dev vmbr0
    down ip route del 162.55.142.192/28 dev vmbr0
    post-down brctl delbr vmbr0

iface vmbr0 inet6 static
    address 2a01:4f8:252:3e22::2
    netmask 64

The important bits here are sysctl forwarding and routing our guest subnet to vmbr0.
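
A quick sanity check that the routing took effect:

ip route show dev vmbr0
# should list 157.90.196.48/28 and 162.55.142.192/28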

You also need to systemctl disable --now rpcbind.socket per Hetzner’s rules.

Debian guest config

Subnet: 157.90.196.48/28

auto ens18
iface ens18 inet static
    address 157.90.196.48/32
    # or address 157.90.196.X/32
    gateway 157.90.92.151

iface ens18 inet6 static
    # in this case i'm using the same ending as ipv4
    address 2a01:4f8:252:3e22::48/64
    gateway 2a01:4f8:252:3e22::2

/etc/apt/sources.list

deb http://mirror.hetzner.de/debian/packages bookworm main
deb http://mirror.hetzner.de/debian/packages bookworm-updates main
deb http://mirror.hetzner.de/debian/packages bookworm-backports main
deb http://mirror.hetzner.de/debian/security bookworm-security main

deb http://security.debian.org bookworm-security main

/etc/resolv.conf

These are specifically Hetzner’s internal resolvers.

nameserver 213.133.100.100
nameserver 213.133.98.98
nameserver 213.133.99.99
nameserver 2a01:4f8:0:1::add:1010
nameserver 2a01:4f8:0:1::add:9999
nameserver 2a01:4f8:0:1::add:9898

Mastodon Admin Notes

This is the cron job that runs daily to do media and federation data cleanup:

#!/bin/sh

printf "%s: running cleanup tasks\n" "$(date)"
RAILS_ENV=production /home/mastodon/.rbenv/shims/ruby /home/mastodon/live/bin/tootctl media remove --days=7
RAILS_ENV=production /home/mastodon/.rbenv/shims/ruby /home/mastodon/live/bin/tootctl media remove --days=7 --remove-headers
RAILS_ENV=production /home/mastodon/.rbenv/shims/ruby /home/mastodon/live/bin/tootctl media remove --days=7 --prune-profiles
RAILS_ENV=production /home/mastodon/.rbenv/shims/ruby /home/mastodon/live/bin/tootctl media remove-orphans
RAILS_ENV=production /home/mastodon/.rbenv/shims/ruby /home/mastodon/live/bin/tootctl preview_cards remove --days=30

printf "%s: dumping db\n" "$(date)"
pg_dump -Fc mastodon_production | ssh rsync "dd of=mastodon/db.dump"

date
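
For completeness, that script gets wired up with a plain crontab entry along these lines (the schedule, script path, and log location here are just examples; adjust to taste):

30 4 * * * /home/mastodon/cleanup.sh >> /home/mastodon/cleanup.log 2>&1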

This is the wrapper script I use to make sure I restart all necessary processes after updates.

root@mastodon:~# cat /usr/local/bin/masto
#!/bin/sh

case $1 in
        start|restart|stop|status)
                printf "%sing mastodon services\n" "$1"
                ;;

        logs)
                exec journalctl -fu mastodon-\*
                ;;
        *)
                printf "%s: invalid action. try logs, status, start, stop, restart.\n" "$1"
                exit 1
                ;;
esac

exec systemctl $1 mastodon-web \
        mastodon-streaming \
        mastodon-sidekiq@default \
        mastodon-sidekiq@pull \
        mastodon-sidekiq@push \
        mastodon-sidekiq@mailers \
        mastodon-sidekiq@scheduler \
        mastodon-sidekiq@ingress

The mastodon-sidekiq@.service units were added to address queues backing up; they’re essentially the same as the default unit file but with the -q queue name parameter added.
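
For reference, the templated unit looks roughly like this. It’s a sketch based on the stock mastodon-sidekiq.service; the paths, concurrency, and DB_POOL values are assumptions for my setup.

[Unit]
Description=mastodon-sidekiq %i queue
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="DB_POOL=25"
# %i expands to the instance name, e.g. mastodon-sidekiq@pull handles the pull queue
ExecStart=/home/mastodon/.rbenv/shims/bundle exec sidekiq -c 25 -q %i
Restart=always

[Install]
WantedBy=multi-user.target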

Bluesky Post Permalinks

I set up a handy way to permalink my bluesky posts with a quick nginx config change.

I use a custom domain as my handle on Bluesky (https://bsky.app/profile/benharr.is), but I’ve changed it before, which broke some links that I’d shared elsewhere. See the DID tracker for my handle change history.

As an example, here are some pics of Shaq Attaq that I posted the other day: https://benharr.is/post/3k655thdbzv2p. Note that bsky.app is not in the URL.

I added this location block to the nginx config for benharr.is:

location ~ ^/post/ {
  return 302 https://bsky.app/profile/did:plc:v7tbr6qxk6xanxzn6hjmbk7o$request_uri;
}
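
If you need to dig up the DID for your own handle, either the _atproto TXT record (if that’s how the handle is verified) or the public resolveHandle endpoint should do it:

$ dig +short TXT _atproto.benharr.is
$ curl 'https://bsky.social/xrpc/com.atproto.identity.resolveHandle?handle=benharr.is'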

WordPress with sqlite3

Update: there’s a proposal in WordPress core to merge sqlite support: https://make.wordpress.org/core/2023/04/19/status-update-on-the-sqlite-project/. The currently recommended way to use sqlite is the official plugin: https://wordpress.org/plugins/sqlite-database-integration/.


Running WordPress with sqlite is quick and easy, and it can mean a lot less system administration since it eliminates the need for a separate database process.

Here’s how to run WordPress with sqlite using aaemnnosttv’s drop-in.

Set it up

  1. download https://wordpress.org/latest.tar.gz
  2. extract it into your webroot (something like /var/www)
  3. download db.php and add it to /var/www/yoursite/wp-content/
  4. follow the normal setup instructions but skip the database fields
  5. profit????
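
Roughly, steps 1 through 3 in shell form. The webroot path is an example, and the db.php URL is the raw file path from the wp-sqlite-db repo as I recall it, so double-check it against the drop-in’s README:

cd /var/www
curl -LO https://wordpress.org/latest.tar.gz
tar xzf latest.tar.gz
mv wordpress yoursite
curl -Lo yoursite/wp-content/db.php \
  https://raw.githubusercontent.com/aaemnnosttv/wp-sqlite-db/master/src/db.php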

nginx config

Adjust configs as needed. Here’s an example.

snippets/ssl/benharri.org includes the block from certbot that points to the right cert and key.

server {
  listen 80;
  server_name benharri.org;
  return 307 https://$server_name$request_uri;
}

server {
  listen 443 ssl;
  server_name benharri.org;
  include snippets/ssl/benharri.org;
  index index.php index.html;
  root /var/www/benharri.org;
  client_max_body_size 100M;
  include /var/www/benharri.org/nginx.conf; #w3tc caching

  location / {
    try_files $uri $uri/ /index.php?$args;
  }

  location = /favicon.ico {
    log_not_found off;
    access_log off;
  }

  location ~* wp-config.php {
    deny all;
  }

  location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_intercept_errors on;
    fastcgi_pass unix:/run/php/php8.2-fpm.sock;
  }

  location ~ /\.ht {
    deny all;
  }
}

Update Adventures

tl;dr I got bit by a bridge MAC address change (bug?): https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Linux_Bridge_MAC-Address_Change. The network didn’t come back up after the reboot and I spent a long time figuring it out.


Here’s the longer version about the outage on August 24, 2021:

After finishing the package upgrades on my Proxmox hosts for the new release (Proxmox 7.0, corresponding to Debian 11/bullseye), I typed reboot and pressed enter, crossing my fingers that it would come back up as expected.

It didn’t.

Luckily I had done one last round of VM-level backups before starting the upgrade! I started restoring the backups to one of my other servers, but my authoritative DNS is hosted on the same server as tilde.team, so that needed to happen first.

I got ns1 set up on my Proxmox node at Hetzner, but my ns2 secondary zones had been hosted at OVH. Time to move those to he.net to get them going again (and move away from a provider-dependent solution).

While shuffling VMs around, I ended up starting a restore of the tilde.team VM on my infra-2 server at OVH. It’s a large VM with two 300GB disks, so it would take a while.

I started working to update the DNS records for tilde.team to live on OVH instead of my soyoustart box, but shortly after, I received a mail (in my non-tilde inbox, luckily) from the OVH monitoring team saying that my server had been rebooted into rescue mode after being unpingable for so long.

I was able to log in with the temporary ssh password and update /etc/network/interfaces to use the currently working MAC address that the rescue system was using.

Once I figured out how to disable the netboot rescue mode in the control panel, I hit reboot once more. We’re back up and running on the same server it was on at the start of the day!

ejabberd wasn’t happy with MySQL for some reason, but everything else seems to have come back up now.

Like usual, holler if you see anything amiss!

Cheers, ~ben

Mastodon PostgreSQL upgrade fun

Howdy friends!

If you’re a Mastodon user on tilde.zone (the tildeverse Mastodon instance), you might’ve noticed some downtime recently.

Here’s a quick recap of what went down during the upgrade process.

We run the current stable version of PostgreSQL from the upstream PostgreSQL apt repos. PostgreSQL 13 was released recently, and the apt upgrade automatically created a new cluster running 13.

The Mastodon database has gotten quite large (about 16GB), which complicates this upgrade a bit. This was my initial plan:

  1. drop the 13 cluster created by the apt package upgrades
  2. upgrade the 12-main cluster to 13
  3. drop the 12 cluster
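
In Debian’s postgresql-common terms, that initial plan boils down to something like:

    pg_dropcluster 13 main --stop
    pg_upgradecluster 12 main
    pg_dropcluster 12 main --stop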

These steps appeared to work fine, but closer inspection afterwards led me to discover that the new cluster had ended up with SQL_ASCII encoding somehow. This is not a situation we want to be in. Time to fix it.

Here’s the new plan:

  1. stop mastodon:
    for i in streaming sidekiq web; do systemctl stop mastodon-$i; done
  2. dump current database state:

    pg_dump mastodon_production > db.dump

  3. drop and recreate cluster with utf8 encoding:
    pg_dropcluster 13 main --stop
    pg_createcluster --locale=en_US.UTF8 13 main --start
  4. restore backup:
    sudo -u postgres psql -c "create user mastodon createdb;"
    sudo -u mastodon createdb -E utf8 mastodon_production
    sudo -u mastodon psql mastodon_production < db.dump
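
Afterwards, it’s worth confirming the encoding actually came out as UTF8 this time:

    sudo -u postgres psql -l | grep mastodon_production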

I’m still not 100% sure how the encoding reverted to ASCII but it seems that the locale was not correctly set while running the apt upgrades…

If this happens to you, hopefully this helps you wade out while keeping all your data 🙂

Default git branch name

Update:

As of git 2.28, there’s a new configuration option and you don’t need to use the templateDir option:

git config --global init.defaultBranch main

Changing git’s default branch name has come up recently as an easy action we can take to update our language and remove harmful ideas from our daily usage.

I’m concerned that this effort to change the language we use is ultimately a symbolic gesture that deflects scrutiny from actual change (notably GitHub’s push for this change alongside its continued contracts with ICE).

However, it’s an easy change to make.

Let’s have a look at how to change it for new repos:

mkdir -p ~/.config/git/template
echo "ref: refs/head/main" > ~/.config/git/template/HEAD
git config --global init.templateDir ~/.config/git/template

Note that you can put this template dir anywhere you like.

You can also set this system-wide (not just for your user) in /usr/share, but note that this might get overridden by package updates.

echo "ref: refs/head/main" | sudo tee /usr/share/git-core/templates/HEAD

The next time you git init, you’ll be on a branch named main.

To change an existing repo, you can use the -m switch of git-branch:

git checkout master
git branch -m master main

Push with -u to your remote if needed and update the default branch in the repo settings on your hosting platform of choice.
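
For example, assuming a remote named origin:

git push -u origin main
# once the default branch is switched on the remote, the old one can be deleted
git push origin --delete master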

It’s a relatively easy change, but don’t kid yourself that it makes any real impact. Go protest, donate, sign petitions, and get out there to fix the actual problems.