The blog

Setting up guix, and locales

I wanted to try out Spritely Goblins to see what it is like, or to really understand what it is, since I’d been seeing it pop up here and there.

The guide suggests using the guix package, so I decided to install the guix package manager on top of my arch linux system.

The installer asks you to please run nscd, which is not even included in arch nowadays, so I did not; we’ll see what problems that causes. Then, once it was set up, running any guix command asked me to fix the locales:

hint: Consider installing the `glibc-locales' package and defining `GUIX_LOCPATH',
along these lines:

     guix install glibc-locales
     export GUIX_LOCPATH="$HOME/.guix-profile/lib/locale"

See the "Application Setup" section in the manual, for more info.

The Application Setup section in the manual says the full glibc-locales includes everything and thus is heavy, and suggests this invocation for a slimmer setup:

(use-modules (gnu packages base))

(define my-glibc-locales
  (make-glibc-utf8-locales
   glibc
   #:locales (list "en_CA" "fr_CA" "ik_CA" "iu_CA" "shs_CA")
   #:name "glibc-canadian-utf8-locales"))

Apparently, you can drop that in a file (I removed the define, since we want to return the value), such as locales.scm:

(use-modules (gnu packages base))

(make-glibc-utf8-locales
 glibc
 #:locales (list "en_US" "ca_ES" "es_ES")
 #:name "glibc-my-utf8-locales")

And tell guix to install it as a package:

guix package --install-from-file=locales.scm

The env variable $GUIX_LOCPATH should be set automatically by /etc/profile.d/ on login, with a gotcha: It will try to set it up first for $HOME/.guix-profile (correct) but then overwrite it for $HOME/.guix-home/profile (which does not exist on my system because I’m not using guix home, for now). So I had to comment out that final section.

And then I ran guix pull, and the glibc version had changed, so I had to re-install the package to upgrade it and fix the locales again. It also turns out that after this, there are two different per-user guix profiles:

> guix package --list-profiles                              

Which is a bit confusing but seems to work.

Also, the guix pull took forever, and the way to fix that seems to be to write the following to .config/guix/channels.scm:

(use-modules (guix ci))

(list (channel-with-substitutes-available
       %default-guix-channel
       "https://ci.guix.gnu.org"))
And then it will try to update only to versions which have binaries. Maybe probably.


Annoyance on low battery

Laptops should be a bit annoying when they are low on battery. Otherwise, I will not realise it until it’s too late, and my computer will turn off in the middle of whatever I was doing.

Proper linux Desktop Environments already do this, but since I am running the Awesome window manager with a custom config, I only get what I add myself.

My solution is as follows. First, the annoyance script:


#!/usr/bin/env python3
import subprocess
import math
import sys

with open('/sys/class/power_supply/BAT0/status') as f:
    status = f.read()

if status in ["Charging\n", "Full\n"]:
    sys.exit()

with open('/sys/class/power_supply/BAT0/energy_full') as f:
    total = int(f.read())

with open('/sys/class/power_supply/BAT0/energy_now') as f:
    now = int(f.read())

perc = math.floor(now / total * 100)
if perc < 10:
    subprocess.run(["notify-send",
                    "-t", "0",
                    "-i", "/usr/share/icons/Numix/24/status/gpm-battery-020.svg",
                    "-u", "normal" if perc > 5 else "critical",
                    f"Low battery! {perc}%"])

Depending on the laptop, energy_now and energy_full are replaced by charge_now and charge_full. And notify-send must be installed.
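To support both variants, the reads could go through a small helper that uses whichever pair of files exists; a sketch (the function name is mine):

```python
from pathlib import Path

def read_battery_level(base='/sys/class/power_supply/BAT0'):
    """Return the battery percentage, trying energy_* then charge_* files."""
    base = Path(base)
    for prefix in ('energy', 'charge'):
        full, now = base / f'{prefix}_full', base / f'{prefix}_now'
        if full.exists() and now.exists():
            # both files hold plain integers; percentage = now / full * 100
            return int(now.read_text()) * 100 // int(full.read_text())
    raise FileNotFoundError(f'no energy_* or charge_* files under {base}')
```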

Then we could run it on login in a loop with a sleep, but I just have systemd call it every 2 minutes:


battery.service (ExecStart should point wherever you saved the script; the path here is a placeholder):

[Unit]
Description=Notify of low battery

[Service]
Type=oneshot
ExecStart=%h/bin/battery-check.py

battery.timer:

[Unit]
Description=Check battery status every 2 minutes

[Timer]
OnCalendar=*:0/2

[Install]
WantedBy=timers.target

Both files go in ~/.config/systemd/user/. Then:

$ systemctl --user enable --now battery.timer

Building dosemu2 to run WordPerfect for DOS on debian

We are going to run WordPerfect for DOS on dosemu2 on debian. An overcomplicated adventure, building stuff as debs for no reason.

Main source of inspiration is and the linux page there. Apparently there is a new way with dosbox-x which is better, but I don’t seriously intend to use WP, I’m just toying around, so I won’t bother.

I wrote down what I did as I went, but reordered sections to be in logical order instead of chronological.

Let’s go with dosemu2. I’m running bedrock linux on this laptop, with debian and void, but dosemu2 is in neither’s repos. We could use the ubuntu debs, but that feels wrong and there is no fun in it. So, time to build from source.

We’ll need some normal dependencies we can get from apt, but we will also need fdpp.

git clone

The readme claims make is all you need; it’s never that easy:

sudo apt install libelf-dev bison flex clang texinfo asciidoc-base xmlto

(note this pulls a ton of tex shit, maybe there’s a way to make it more minimal)

As of writing, fdpp depends on a custom nasm which is hosted on launchpad.

git clone
git checkout elf16
debuild -i -us -uc -b

We could add --lintian-opts --suppress-tags bad-distribution-in-changes-file to tell lintian to ignore that (we are building an ubuntu package after all), but it turns out lintian doesn’t matter: it prints some errors but the build is fine.

sudo apt install ../nasm-segelf_2.16.01-2_amd64.deb

Back on fdpp:

make deb
sudo apt install ../fdpp_1.7-1_amd64.deb ../fdpp-dev_1.7-1_amd64.deb

Now it turns out that dosemu2 will also want comcom32, so let’s do that:

git clone

New subquest: we need the djgpp toolchain. We are going to need normal nasm this time.

sudo apt install dos2unix nasm

This one will take forever:

git clone
debuild -i -us -uc -b
sudo apt install ../binutils-djgpp_2.41+11_amd64.deb \
../djgpp-dev_2.05.cvs.20230827.1621+11_amd64.deb \
../djgpp-utils_2.05.cvs.20230827.1621+11_amd64.deb \
../gcc-djgpp_12.2.0+11_amd64.deb ../gdb-djgpp_8.2.1+11_amd64.deb \

Back on comcom32:

debuild -i -us -uc -b
sudo apt install ../comcom32_0.1\~alpha3-1_all.deb


git clone
sudo apt install linuxdoc-tools libslang2-dev libgpm-dev \
libsdl2-ttf-dev libfontconfig1-dev ladspa-sdk libfluidsynth-dev \
libao-dev libieee1284-3-dev libslirp-dev libbsd-dev \
libreadline-dev libjson-c-dev libb64-dev binutils-dev binutils-i686-linux-gnu
make deb
sudo apt install ../dosemu2_2.0~pre9-1_amd64.deb

I’m not going to bother with uninstalling all the stuff we installed just for building honestly.

Once I got dosemu2 booting, I fetched a copy of WordPerfect 6.2 and extracted the contents of the images:

sudo mkdir -p /mnt/a
mkdir -p out
for i in *.img; do
    echo "$i"
    sudo mount -o loop "$i" /mnt/a
    cp -rv /mnt/a/* out
    sudo umount /mnt/a
done

Then I went through the install, selecting the Apple LaserWriter IINTX as the printer, forgetting to select the VESA driver (I did that part later to get high resolution), and basically following the WPDOS guide as well as I could. I also replaced the binary with the patched version from the archive, which gives me shift+arrow key selection. The mouse is a bit weird and too fast in my dosemu2, so I have an emumouse line in my USERHOOK.BAT file:

emumouse x 2 y 4

A screenshot of the result:
Screenshot from WP6 running on dosemu2 on debian

Printing just works, dosemu2 passes down the file to the CUPS default printer. CUPS-PDF can be used to print to PDF.


Making stupid bridged networking work on wifi (because usermode networking is not good enough)

This is how I set up the network when I run a VM on a laptop.

Create a bridge on networkmanager:

$ nmcli connection add type bridge ifname br0 stp no \
  ipv4.addresses ipv4.method manual br0

(it will be DOWN but already visible in ip link)

Set up a DHCP server for our VMs using dnsmasq:

# systemctl enable --now dnsmasq
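Out of the box dnsmasq will answer on every interface, wifi included; here is a minimal /etc/dnsmasq.conf sketch to keep it on the bridge only (the subnet below is made up, use whatever address you gave br0):

```
# serve DHCP only on the VM bridge, never the real network
interface=br0
bind-interfaces
# lease range inside the bridge's subnet (example values)
dhcp-range=192.168.100.10,192.168.100.200,12h
```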

Enable forwarding

In /etc/sysctl.conf or /etc/sysctl.d/99-sysctl.conf:

net.ipv4.ip_forward = 1
# sysctl -p /etc/sysctl.conf

Tell iptables to please forward

iptables -A FORWARD -i br0 -o wlp2s0 -j ACCEPT
iptables -t nat -A POSTROUTING -o wlp2s0 -j MASQUERADE
iptables -A FORWARD -i wlp2s0 -o br0 -m state --state RELATED,ESTABLISHED -j ACCEPT

(This does not persist across reboots)

Configure the qemu bridge helper

write allow br0 in /etc/qemu/bridge.conf

let qemu-bridge-helper do its thing:

$ sudo chmod u+s /usr/lib/qemu/qemu-bridge-helper

And finally

$ qemu-system-x86_64 -cpu host -enable-kvm -m 2048 \
    -nic bridge,br=br0,mac=52:54:28:86:30:65,model=virtio \
    -device virtio-scsi-pci,id=scsi \
    -drive if=none,id=vd0,file=vm.qcow2.img \
    -device scsi-hd,drive=vd0

(or whatever else, set the MAC to something unique - the only important part here is the -nic)


Nix on ubuntu and webGL

So I switched jobs and got a laptop from the company, and it had ubuntu installed with FDE. Normally I would have done a clean install of arch on it, but I said hey, let’s just give ubuntu a chance.

When it came to setting up the elm language server though, you need a specific version of node and the fight with npm got tiring enough that I decided to go the Nix On Non Nixos route. So I installed nix on ubuntu, ran nix-env -iA a bunch of times, and everything was good in the world.

Then it turned out that most of the time, when something is outdated in the ubuntu repos, it is just easier to install it from nix than to add PPAs. That’s how I ended up with neovim and shellcheck installed from nix too.

Enter snap annoyance. Every single day, a big notification saying hey, please close firefox within 13 days, snap wants to update it but not while it’s running. As I couldn’t find a way to disable the annoying popups, I removed the snap firefox and installed it from nix. But ah, then you have the issue of OpenGL not working in nix-installed programs. Thankfully nixGL exists, which solves the issue. This laptop has a stupid power-hungry nvidia card, which I disable when I remember to, so I ended up with Exec=nixGLIntel firefox %U in my .local/share/applications/firefox.desktop.

But how, why was webGL still not working? It turns out that, having launched once without GL support, firefox had blacklisted everything that makes it possible, for the future too. This comment on a github issue gave me the fix: search for failureId in about:config and delete all those entries.

Brittle setup much? Maybe. Fun stuff.


Microblogging on the fediverse

So today I read this post by icyphox titled Stop joining (archived) and I thought I’d put my thoughts in writing. And out came the following pointless mess.

TL;DR: The fediverse doesn’t seem to work for me.

I’ll take a step back first, and that sets me on twitter. So I don’t really do twitter. As in, I almost never tweet. I have followers but no mutual follows. I don’t feel like I know anybody I follow; or rather, nobody I follow knows me. I never get direct messages. I just open the website, and scroll through the Latest Tweets of people I follow like @paniq and @foone and a bunch of artists like @moshimoshibe.

There’s a few things I don’t like about twitter. The UI is annoying. The APIs purposely hinder 3rd-party clients. It’s a proprietary silo.

Then there’s the bad people saying bad things, of which I am reminded when I talk to people outside of twitter and when I take a glimpse of the trending topic list, but that is something that does not affect me, because I don’t follow those kinds of accounts. My feed is quite well curated.

Enter mastodon/pleroma/whatever. The fediverse. Take twitter and make it work like email. Federated. That’s nice! I self-host my email for fun. This should be able to fix the things I don’t like about twitter: I can make my own UI, my own client, we all can, it’s all free software.

Now, small instances. They promise community. This is the kind of interaction I don’t have, or maybe don’t care to have, on twitter. I’ve looked around the fediverse and sadly haven’t found any I like. With that off the table, what do I want? A reliable, no-drama, high-uptime instance. What’s most likely to give me that? The biggest instance. And it so happens that it links accounts with pixiv. Guess who has a pixiv account already? We are set.

The thing is, I don’t have any people to follow on the fediverse. Or rather, I follow a handful of people, but they don’t really post. The only one that does is crossposting from twitter. So what do I do? I read twitter.

Some days I try again. Browse lists of instances. Read their local timelines. Try to find people. But no luck.

icy writes:

Are you into, say, the BSDs? Join Free software? Or host your own for yourself and your friends.

Don’t get me wrong, I like the BSDs and free software as much as the next nerd, but “hey I upgraded OpenBSD, went fine” (me too!) is not the kind of content I’m looking for. And friends? Please, if I were to set up an instance I’d be alone in it.

Philosophically, it’s not like that works if we are aiming for small instances either. I mean, if you took all the FOSS users on twitter and put them all in the same instance, you’d get something not unlike Big stuff, not a community.

On practical terms, what’s the difference? I can follow people on any instance, people can follow me from any instance (but shouldn’t, because I don’t post). Unless they are on some instance that breaks federation, I guess.

My local timeline is mostly japanese, which I don’t speak. Yeah, so? I can browse the local timeline of any other instance if I feel like it, no problem. And yet, those times I’ve tried to I haven’t found much of value.

And finally, there’s the matter of trust. Why should I trust some random small pleroma instance to be well managed?

PS 2020-10-23: (archived) has some good points against the fediverse and mastodon in particular.

PS 2023-02-07: Twitter is dying and most of the people I followed there have moved over. Federation is broken with japanese instances, so I migrated my account to . My mastodon timeline is now better than my twitter timeline, overall.


Fenix is not a rebirth but a regression

This will be a rant. Couple days ago, I upgraded the apps on my phone. Firefox for Android, my phone browser of choice, got a major upgrade. It went from the old codebase Fennec, to the new codebase Fenix, built on GeckoView, which had been on testing for a while in Firefox Preview, Firefox Nightly and Firefox Beta.

The first obvious change is that the URL bar is now at the bottom. But fear not, this is configurable. When starting the new browser for the first time, it asks a bunch of things, one of which is “where do you want the bar, top or bottom?”. The issue is, having the bar on top is buggy. It has not received enough testing, neither from a technical point of view nor from a usability one.

The first problem is serious: websites break with the URL bar on top. There’s an issue open for twitter already, but other websites break in similar ways due to the viewport height changes. When the URL bar is on top and you scroll down, it hides to leave more room to read. This happens too with the bar at the bottom though, so I don’t know. Hope it can get fixed soon.

Second problem: tabs

screenshot of the tab display

First, how many tabs do I have open? 6? No, 7, look at that tiny piece of a tab showing its foot at the very top. Second, how come only one of the thumbnails works? No idea. Fallback to favicons? Or are those out of fashion? What’s the point of the foldable thing that comes from below? I can just click back at my tab if I want to go back to it. Fennec’s square tabs made much better use of my space.

Third problem: navigating to a new website in a new tab. Which is, like, the #1 thing you want to optimize UI for. On fennec, you would tap the “tabs” button, and then tap again on the same spot, because that button had just become the new tab “+” button. Two taps on the same spot to get to the empty tab screen, then either:

  • click on one of your (automatic or pinned) top sites;
  • easily and obviously reach for bookmarks or history; or
  • tap just left of where you just tapped to pop out the keyboard and start typing.

Now, if you have your URL bar at the top (which I would very much prefer) you have to tap up at the tabs button, then on the round blue button at the other end of the screen to open the tab. If you have the bar at the bottom, it’s more or less two taps at the same spot still.

The new tab screen is an empty landscape. Bookmarks? History? No, just manually pinned top sites or “collections”. But hey, at least we don’t have to go hunting in settings for the option to disable Pocket. So we just tap on the URL bar… or is it a google bar? Anyway, no suggestions under it yet: let’s type.

I tend to open twitter often, on my phone. I type a t. I get autocompleted, with the rest selected so that I may keep typing. I don’t usually go to translate on my phone (I use the app), but hey it’s all on firefox sync so whatever. Below the bar are suggestions for the Transmission Web Interface (starts with a t), and then… Presura? How in hell? Ok, “Revista” has a t, but seriously?

Ok, let’s keep typing: w for tw. Autocompletion for . Suggestions for , and The Pirate Bay, which has software somewhere in the <title>. Then, actually, twitter. Seriously, how is this sorted? Only after typing twitt does it give up on twitch. It’s not sorted by time since last visit, and it doesn’t prioritize matches at the start of words.

Oh, and the preselected text means that if you mistype a letter you will have to press backspace twice, once for the selection of something you don’t want, and once for the letter you mistyped. Same as on desktop firefox really, but typos are a lot more common on mobile.

Fourth problem: downloads. Can’t find them. I’m starting to worry that this version of firefox actually doesn’t have a downloads screen. Am I supposed to go look for an android file manager that isn’t shit and find the downloads folder?

And this is it for now. I guess I will edit this post with links to bug reports or workarounds I find. Now I’m just sad and frustrated. Maybe I should have cried and filed reports about all of this back when I tried Firefox Preview and found many of these same issues, but back at the time I just went back to stable and went on with my life.


Adventures with an R9 290 and temperature

So there’s this AMD R9 290 I had lying around from back when two friends and I set up an ethereum mining rig (which is a story for another day). Only one fan, stock cooler.

I was thinking of selling it, but after a couple google searches, the internet told me that this card was in fact better than the Nvidia GTX 960 I was running.

Photo of the card

So I pop it in place, reconfigure everything to use amdgpu instead of nvidia, and go for my standard test: Counter Strike: Global Offensive.

Aaaand as soon as the game started, my screen went black, the GPU fan went to max, and I had to force a reboot. Ugly. Looking at the temperatures told a simple story: the GPU was reaching 100°C (94 is the maximum allowed), and it was aborting all operations. This command shows the temps:

sudo watch -n 0.5 cat /sys/kernel/debug/dri/0/amdgpu_pm_info

Take it out, watch a video on how to disassemble the thing, remove all the little screws, curse bad screws and bad screwdrivers. Remove all the dry fossilized ancient thermal paste from year 200BC. I’ve never had isopropyl alcohol at home, ethanol 96° had to do. Apply new thermal paste. Put the thing back together. Try again. Same story.

Maybe I put too little thermal paste! Disassemble it again, clean it well, drop big fucking line of paste, reassemble.

Card back in the PC, this time we are going to be careful. And measure things.

Well turns out, first of all the power draw of this thing is ridiculous, at least on this linux/amdgpu combination. With a single monitor at 60Hz it draws 20W. Which is a lot, but acceptable. Now when you have 2 monitors, or a single one at 144Hz you jump to 65 fucking Watt. Idle. Only xorg running, “GPU Load” at 0%. 65W.

Now second thing, the fan speed is not ramping up properly at all. We were going from “pretty quiet” to “fuckfuckfuck max throttle” in one go, when it’s already too late. Poking around shows the fan speed at /sys/class/drm/card0/device/hwmon/hwmon0/pwm1. pwm1_min and pwm1_max show that the range is 0–255. By itself it was sitting at around 90, which is fairly quiet and keeps the GPU under 60°C with one monitor at 60Hz.

If I want to keep things under control with 2 monitors though, I have to force the speed to 140.

Then at speed 170 I could open CSGO, although the temperatures were slowly rising. At speed 210 I got it to stabilize at 92°C. FPS unlocked, GPU more or less drawing as much as it could. So speed 210 is safe, it seems. We won’t die if we keep it. Thing is, 210 is LOUD. Vacuum cleaner loud, almost. Not acceptable by any means.
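Forcing a fixed speed trades noise for safety, so the obvious next step is a crude fan curve between the two known-good points (90 quiet, 210 safe). A sketch only: the thresholds are my guesses, the sysfs file names are the standard hwmon ones used above, and it assumes writing 1 to pwm1_enable switches the card to manual fan control:

```python
#!/usr/bin/env python3
# Crude manual fan curve sketch; needs root to write the pwm files.
import time

HWMON = '/sys/class/drm/card0/device/hwmon/hwmon0'

def temp_to_pwm(temp_c):
    """Linearly map GPU temperature (°C) to a pwm value in [90, 210]."""
    lo_t, hi_t = 50, 90   # guesses: quiet below 50°C, max safe speed at 90°C
    lo_p, hi_p = 90, 210
    if temp_c <= lo_t:
        return lo_p
    if temp_c >= hi_t:
        return hi_p
    return round(lo_p + (temp_c - lo_t) * (hi_p - lo_p) / (hi_t - lo_t))

def main():
    with open(f'{HWMON}/pwm1_enable', 'w') as f:
        f.write('1')  # assumed: 1 = manual fan control
    while True:
        with open(f'{HWMON}/temp1_input') as f:
            temp_c = int(f.read()) / 1000  # sysfs reports millidegrees
        with open(f'{HWMON}/pwm1', 'w') as f:
            f.write(str(temp_to_pwm(temp_c)))
        time.sleep(2)
```

(main() is left uncalled on purpose; wire it up to a service if it proves itself.)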

Maybe I should just give up on this card. Although, my brother has one that’s the exact same model, and I think he doesn’t have these problems? He’s on radeon though, not amdgpu. More investigation required. Tomorrow.

Tomorrow arrives

From the overclocking section in the arch wiki (even though I am not trying to overclock) I got a useful bit of info. I can limit the power draw with:

echo 150000000 > /sys/class/drm/card0/device/hwmon/hwmon0/power1_cap

where 150000000 means 150W (the file takes microwatts).

I think the main problem I have is that the card is not thermally throttling properly. I tested CSGO on windows, and there the fans spun up a bit (but not super high), but most importantly: the card lowered its power to never exceed 94°C. The game was playable, and the noise was bearable.

On linux on the other hand, if I don’t limit wattage and don’t force the fans up, what happens is that it tries to put the fans on max really late, and then shuts down (due to emergency temp). There’s a bunch of people with the same problem at, although they don’t seem to be able to manually set the pwm speed (which I can).

Just like them, I get a buggy reading for crit and hyst from sensors:

edge:         +77.0°C  (crit = +104000.0°C, hyst = -273.1°C)

So in short, I have a hardware issue and a software issue:

  1. The cooling on this card is pretty shit.
  2. The amdgpu driver doesn’t throttle properly, and its automatic fan control is pretty bad.


  • Linux version: 5.6.11
  • Mesa: 20.0.6
  • Distro: Arch Linux
  • xf86-video-amdgpu: 19.1.0
  • Kernel parameters:
    • radeon.cik_support=0 amdgpu.cik_support=1

Installing OpenBSD on a Scaleway VPS

Yesterday I wanted to set up an OpenBSD vps. And the cheapest vps provider these days seems to be Scaleway, which doesn’t allow custom ISO uploads.

So what I did was pick debian, and from grub, with the serial console attached, run:

set root=(hd0,gpt1)
kopenbsd /bsd.rd

And I got nothing, because this thing is serial, not a proper terminal (or so I thought at first; actually, the problem is that the combination of grub+uefi+openbsd ramdisk fails to output anything other than serial):

screenshot of things not working

Then I learned about -h com0:

set root=(hd0,gpt1)
kopenbsd -h com0 /bsd.rd

This works fine in qemu. Output goes to the serial console, installation can start, etc.

On the VPS, the shell never starts:

I tried different versions of openbsd and found out that versions prior to 5.3 worked. Something changed in 5.3; 5.2 “works” (network card not recognized, gpt/uefi don’t work). Of course it doesn’t work well enough to do a proper install, but it gets to a shell.

Now I asked myself, is the process hanging here (1), or is the output changing from serial to something that doesn’t work through the scaleway tty (2)? And I went to IRC for help. Specifically, freenode #openbsd. And then the couple guys that tried to help me decided it was certainly the second thing. Why? Because I was not using the blessed boot.conf to set tty com0. Because grub was doing it wrong. Because I wasn’t using a custom ramdisk with a nice boot.conf.

This was all a big misunderstanding, for various reasons:

  1. I’m getting all the kernel messages but the last in the serial console. The moment the messages stop, the kernel has been chatting serial for a while, and it’s just missing one message to send.
  2. Local qemu tests proved that grub with -h com0 could work fine.
  3. I am not using /boot at all, because all it does I’m already doing with grub, successfully.

But still, the people on IRC helped me mess with ramdisks, and messing with ramdisks proved to me that the kernel was really hanging. The first test was modifying /.profile to make it reboot when it got to the shell immediately. This worked correctly on qemu, but did nothing on Scaleway. If it were a problem of display, it would still reboot.

The second test was removing /sbin/init. This makes the kernel panic and reboot… but just after finding the root partition. Ours is hanging just before finding the root, so obviously it didn’t reboot with this either.

In short, the situation is that the kernel is hanging somewhere between printing scsibus2 at softraid0: 256 targets and root on rd0a swap on rd0b dump on rd0b. The next thing was seeing if the kernel verbose mode would help.

I booted an openbsd vm, ran boot -c (yes, this time from the famous boot> prompt, since it was a full installation) enabled verbose mode and looked at the output. There were no added lines between scsibus… and root on…, so verbosity wasn’t going to help.

The only thing left to do then, is build an openbsd kernel (and ramdisk image) with added printfs everywhere, to see where in the code the kernel hangs and finally find the bug. Or is it?

No, first let’s compare good and bad boots to see that the devices are different. Then try to boot a local uefi qemu with virtio devices, just as scaleway is doing. See that it fails just the same. Ah, locally reproducing a bug, that sure is nice. Especially so when the vps takes over a minute to reboot every time.

But when booting an actual install66.fs it doesn’t fail; it only fails when booting from grub /with virtio/ and /uefi/. (Can’t boot from grub on uefi without serial: can’t see anything.)

Some tests:

  • UEFI+GRUB ⇒ no image
  • UEFI+GRUB+virtio ⇒ no image
  • UEFI+GRUB+com0 ⇒ works
  • UEFI+GRUB+com0+virtio ⇒ kernel freezes after “scsibusN at softraidM: 256 targets”
  • BIOS+GRUB+virtio ⇒ works
  • UEFI+install66.fs+virtio ⇒ works

Anyway, there’s a bug here somewhere.

How to actually boot the install media

Reboot to the rescue ubuntu thing that boots off the network on the scaleway control panel.

cat miniroot66.fs > /dev/vda

Set it to boot from disk again, reboot, attach the console. The boot> prompt will appear:

set tty com0

I told it to use the whole disk as GPT. The installer warned that “An EFI/GPT disk may not boot. Proceed?”; I disregarded the warning, and the system booted fine after finishing the installation.


KRunner and D-Bus

Today I pressed my awesome shortcut for layout switching and found that it didn’t work. Instead, something called KRunner popped up. BUT WHY?

What followed was an investigation. Typing krunner on google already explained the first thing: KRunner binds to Alt-F2 and Alt-SPACE.

krunner google query

Now why was krunner running?

> ps ax | grep krunner
2343941 ?        Sl     0:00 /usr/bin/krunner
2382396 pts/10   S+     0:00 grep --color=auto krunner

Pretty high PID, it sure didn’t start with the system. Who is its parent?

> cat /proc/2343941/status | rg -i ppid
PPid:   1477
> ps ax | grep 1477
1477 ?        Ss     0:00 /usr/lib/systemd/systemd --user

Oh boy, it’s our friend systemd --user. Now how did this end up happening?

> systemctl --user status 2343941
● dbus.service - D-Bus User Message Bus
     Loaded: loaded (/usr/lib/systemd/user/dbus.service; static; vendor preset: enabled)
     Active: active (running) since Thu 2020-01-16 17:02:17 CET; 1 weeks 0 days ago
TriggeredBy: ● dbus.socket
       Docs: man:dbus-daemon(1)
   Main PID: 1512 (dbus-daemon)
     CGroup: /user.slice/user-1000.slice/user@1000.service/dbus.service
             ├─   1512 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nop…
             ├─   1578 /usr/lib/ibus/ibus-portal
             ├─   2255 /usr/lib/dconf-service
             ├─2343766 /usr/lib/kactivitymanagerd
             ├─2343773 /usr/bin/kglobalaccel5
             └─2343941 /usr/bin/krunner

D-Bus, that little guy I’ve never known what exactly it does but is always there. tells us:

In addition to interprocess communication, D-Bus helps coordinate process lifecycle; it makes it simple and reliable to code a “single instance” application or daemon, and to launch applications and daemons on demand when their services are needed.

So, someone asked the magic D-Bus to run krunner for them.

> journalctl --user --unit=dbus
[...] Activating service name='org.kde.ActivityManager' requested by ':1.335' \
(uid=1000 pid=2343761 comm="kate -b ")
[...] Successfully activated service 'org.kde.ActivityManager'
[...] Activating service name='org.kde.kglobalaccel' requested by ':1.336' \
(uid=1000 pid=2343766 comm="/usr/lib/kactivitymanagerd ")
[...] Successfully activated service 'org.kde.kglobalaccel'

Oh, Kate! I remember running Kate the other day to open some file!

So the process goes like this:

  1. Kate starts the KDE Activity Manager
  2. The Activity Manager starts the kglobalaccel thing
  3. The kglobalaccel thing binds Alt-F2 and Alt-SPACE to start krunner.

And basically I hate everything and sometimes the alt+space bind overrides my WM’s and sometimes not.


Ludum Dare 40

This last weekend, after partying hard on Friday night, I made a game for the Ludum Dare 40 Compo! (and messed up my sleep schedule even more)


I used the Godot Engine (v2.1.4, not the v3 beta), the Aseprite pixel art editor (the GPL fork), the sfxr sound effect generator and the MilkyTracker music tracker.

I did everything myself: all the graphics, music, sfx and code (except for Godot), and all the project files (source and all) are public (which reminds me I need to add a proper license).

I exported to Linux, Windows, and the Web. I didn’t do macOS because I don’t have a mac to test, and macOS users can always use the web version. I didn’t do phones because the game only runs at a fixed resolution and doesn’t support touchscreen controls. All the download links are in the ldjam post linked above.

My takeaways from the whole thing:

  • Godot and Aseprite are great
  • I suck at composing music
  • Making a small game in 48h is feasible and fun

The only major problem I had over the whole process was that the web godot export wouldn’t play my sound effects. Reading godot’s source code, I found it was due to the compression godot’s importer applies by default; disabling it fixed the bug.


Disassembling bytecode

Today I rewrote the decompiler in Red Alien. Here is a quick comparison:

From script source:

#dyn 0x800000

#org @start
loadpointer @text
jump :label

#org @text
= \c\h01\h05\v\h01I don't even know what\n
= I'm doing! Ádudududu\p
= \c\h01\03Does this text even make\n
= sense?


The old decompiler gave:

'file name = /home/jaume/RH/ruby.gba
'address = 0x800000

#org 0x800000
loadpointer 0x880001a
jump 0x8800001

#org 0x80001a
= \cÀÈ\vÀI don't even know what\nI'm doing! Ádudududu\p\cÀ03Does this text even ma
= ke\nsense?

#org 0x800001
loadpointer 0x880001a
jump 0x8800001


And the new one gives:

'file name = /home/jaume/RH/ruby.gba
'address = 0x800000

#org 0x800000
' joined
#org 0x800001
loadpointer 0x880001a
jump 0x8800001

#org 0x80001a
= \c\h01\h05\v\h01I don't even know what\nI'm doing! Ádudududu\p\c\h01\ha13Does 
= this text even make\nsense?$$

Notice how the code at 0x800001 isn’t duplicated any more. Also, the splitting code for strings is now much better, and characters outside the ascii range are detected as control codes depending on the preceding characters (\c, \v). That $$ is the 0xFF string terminator, which I made explicit in August.
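The detection idea can be pictured with a toy decoder; the byte values and parameter counts below are made up for illustration (Red Alien's real tables differ, and the GBA charset is not ASCII):

```python
# Toy sketch: the byte after a control byte is a raw parameter, shown as \hXX.
CONTROL = {0xFC: ('\\c', 2),  # assumed: \c takes two parameter bytes
           0xFD: ('\\v', 1)}  # assumed: \v takes one parameter byte

def render(data):
    out, i = [], 0
    while i < len(data):
        b = data[i]
        if b in CONTROL:
            name, nparams = CONTROL[b]
            out.append(name)
            out += ['\\h{:02x}'.format(p) for p in data[i + 1:i + 1 + nparams]]
            i += 1 + nparams
        elif 0x20 <= b < 0x7f:  # printable: emit as text
            out.append(chr(b))
            i += 1
        else:                   # anything else: raw hex escape
            out.append('\\h{:02x}'.format(b))
            i += 1
    return ''.join(out)
```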

…and I spent just as much time getting backtick code blocks with PKS highlighting working on this blogpost as on the work itself.


Blog Technology

The other day, I was asked on IRC what this website runs on.

Let’s go from the bottom to the top. First, the hardware:

An Intel Atom (D945GCLF2) with just a power cord and an ethernet cable coming out of it, sitting on my home desk. A single 300GB disk, no RAID, no backups, no UPS: the best setup ever. 2GB RAM. Since the fan is noisy, I have it set to never spin below 75°C.

Now, for the software:

  • Arch Linux using the LTS kernel. I update it when I feel like it.
  • Nginx, serving plain text files, and running a reverse proxy for:
      • a Common Lisp server
      • a CherryPy server

Everything is started using systemd service files. The lisp server runs on sbcl, wrapped by rlwrap, inside a tmux session, and is responsible for (please forgive me for letting an implementation detail into a URL).

The cherrypy server runs as is (service Type=simple), and used to serve this blog directly. Right now I use it to preview things before generating the actual static files.
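For reference, the reverse-proxy part is only a few lines of nginx configuration. A minimal sketch, with placeholder names, ports and paths (none of these are my actual values):

```nginx
server {
    listen 80;
    server_name example.org;

    # Plain static files, served directly by nginx
    root /srv/http/static;

    # Hand one sub-path off to a backend app (e.g. CherryPy)
    location /preview/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
    }
}
```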

At the time of writing, the script I use to generate the static content looks like this:

#!/usr/bin/env python3
import os
import web
out_dir = "static"

root = web.Root()
blog = root.blog
redalien = root.redalien
tutorial = root.tutorial
pages = [
    ('index.html', root.index()),
    ('blog/index.html', blog.index()),
    ('blog/atom.xml', blog.atom()),
    ('redalien/index.html', redalien.index()),
    ('redalien/manual/index.html', redalien.manual()),
    #('tutorial/index.html', tutorial.index()),
    ('tutorial/fixing/index.html', tutorial.fixing()),
    ('bluespider/index.html', root.bluespider()),
    ('random/index.html', root.random()),
    ('dtops/index.html', root.dtops()),
    ] + [('blog/entry/{}'.format(i), blog.entry(i)) for i in range(1, 10)]

for path, page in pages:
    fullpath = os.path.join(out_dir, path)
    os.makedirs(os.path.dirname(fullpath), exist_ok=True)
    with open(fullpath, 'w') as f:
        f.write(page)

(gotta automate that range(1, 10))
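One way to automate it, assuming each entry lives as a numbered plain text file in some entries/ directory (the directory name and file naming are guesses, not how my setup actually stores them):

```python
import os
import re

def entry_numbers(entries_dir="entries"):
    """Collect the numeric IDs of all entry files in a directory.

    Assumes each blog entry is stored as a file whose name starts
    with its number (e.g. "7.md" or "7.html"); anything else is
    silently skipped.
    """
    numbers = []
    for name in os.listdir(entries_dir):
        m = re.match(r"(\d+)", name)
        if m:
            numbers.append(int(m.group(1)))
    return sorted(numbers)
```

The hardcoded list comprehension would then become `[('blog/entry/{}'.format(i), blog.entry(i)) for i in entry_numbers()]`.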

As for the pages and entries themselves, I write them in either markdown or HTML, and they are rendered automatically inside a mako template. Mako! Mako! Everything runs on mako and these AVALANCHE terrorists won’t stop me. All the code is in a nice <200 SLOC file.

I guess I could use something like pelican instead, but this system is already built, it works, and it’s flexible.


Today I Solved/Broke ROM-Hacking

GBA Pokémon game ROM-Hacking is messy. It's done using tools like map editors, hex editors or image inserters directly on the ROM file, and the only safety measure is plain file backups.

Normal game or software development, on the other hand, usually keeps all the source data in separate, easily editable files, then provides some means of building the final product automatically.

Wouldn't it be great if ROM Hacking could be done like normal software development? We'd only need some kind of build system, with a linker that could set all the pointers where they belong... Then we'd press a button and all our data would be written on top of a stripped down ROM file, producing the final game.

But we have that! The script compiler does exactly what we need. You give it your code with a bunch of @-prefixed labels, it looks for free space and it links it all together. What we don't have, though, is non-script things in script source form. To start with, we'd need:

  • Map headers and data
  • Graphics
  • Trainer data
  • Pokemon data

Graphics can probably be compressed by grit. I can make Blue Spider output source files. Trainer data, pokémon data, etc. are similar structures: some work, but possible. The only thing left is some way to nicely edit tileset block data, which my map editor doesn't do. Music and ASM modifications are done by means of binaries already. So, let's give it a go...
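The "looks for free space" part of the script compiler is easy to picture: scan for a long enough run of 0xFF bytes, which is what untouched space in these ROMs looks like. A toy sketch of that idea (not Red Alien's actual code; the alignment default is my assumption):

```python
def find_free_space(rom, size, start=0, free_byte=0xFF, align=4):
    """Find `size` consecutive free bytes in `rom` at or after `start`.

    Free space is assumed to be a run of `free_byte` (0xFF in a
    stripped-down GBA ROM).  Returns an aligned file offset, or None
    if no large enough run exists.
    """
    offset = (start + align - 1) // align * align
    while offset + size <= len(rom):
        if all(b == free_byte for b in rom[offset:offset + size]):
            return offset
        offset += align
    return None
```

A real linker would then write the data there and patch every @label that points at it.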

I added a script to Blue Spider. It takes the map bank list, the list of maps for every bank, and most of the data for every map (including events, but not the tile map itself yet), and outputs it all as pks scripts in a directory named map_dump when called as:

$ ./ FR.gba

It also writes a file include_list.pks which has an #include line for every other file, so that running:

$ asc-cli c FR.gba map_dump/include_list.pks

Builds the whole thing (and, if nothing has been changed in the source files, changes nothing in the ROM). But wait! I hear you say, that's full of raw addresses, nobody could work with that! And that's true, which is why the script takes a --label option, which makes it spit out nice @labels instead of hex addresses:

$ ./ --label FR.gba

And so, map_dump/3/map_0.pks AKA Pallet Town will look more or less like this:

'map header
#org @map_3_0_map_header
#word @map_3_0_map_data_header 'map_data_ptr
#word @map_3_0_events_header 'event_data_ptr
#word @map_3_0_level_scripts 'level_script_ptr
#word 0x835276c 'connections_ptr
#hword 0x12c 'song_index
#hword 0x4e 'map_ptr_index
#byte 0x58 'label_index
#byte 0x0 'is_a_cave
#byte 0x2 'weather
#byte 0x1 'map_type
#hword 0x601 'null
#byte 0x0 'show_label
#byte 0x0 'battle_type

'map data header
#org @map_3_0_map_data_header
#word 0x18 'w
#word 0x14 'h
#word 0x82dd0f8 'border_ptr
#word 0x82dd100 'tilemap_ptr
#word @map_3_0_t1_header 'global_tileset_ptr
#word @map_3_0_t2_header 'local_tileset_ptr
#byte 0x2 'border_w
#byte 0x2 'border_h

't1 header
#org @map_3_0_t1_header
#byte 0x1 'is_compressed
#byte 0x0 'tileset_type
#hword 0x0 'null
#word 0x8ea1d68 'tileset_image_ptr
#word 0x8ea1b68 'palettes_ptr
#word 0x829f6c8 'block_data_ptr
[rest omitted]
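Since the map header above is a fixed-size struct, reading it back from a ROM is a single struct.unpack. A sketch using the field names from the dump (assuming the usual little-endian GBA layout; this is illustration, not Blue Spider's code):

```python
import struct
from collections import namedtuple

# Field names and sizes taken from the map header dump:
# 4 words, 2 halfwords, 4 bytes, 1 halfword, 2 bytes = 28 bytes.
MapHeader = namedtuple("MapHeader", [
    "map_data_ptr", "event_data_ptr", "level_script_ptr",
    "connections_ptr", "song_index", "map_ptr_index", "label_index",
    "is_a_cave", "weather", "map_type", "null", "show_label",
    "battle_type",
])
MAP_HEADER_FMT = "<IIIIHHBBBBHBB"

def read_map_header(rom, offset):
    """Unpack the 28-byte map header struct at `offset` in `rom`."""
    return MapHeader(*struct.unpack_from(MAP_HEADER_FMT, rom, offset))
```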

Now, there are a couple of things that must be done before this can be compiled. First, a suitable #dyn line must be added at the top of include_list.pks. And second, since Blue Spider doesn't know when to stop finding maps at the end of the last bank, all the bad maps must be removed from the definition of said bank. In the case of Fire Red, that means removing all maps but number 0 from map_dump/bank_42.pks.

And so, this is it for today. There is quite a bit of work ahead, but the future doesn't look bad.


Random Thoughts, part I

Go listen to 'go' by Delilah, remixed by Paralloyd & Too greezey

Fixing broken ROM Hacks the dirty way

People make ROM Hacks using buggy tools, and then one day they find their game freezing on a pokemon evolution, or battle, or whatever. Since the ROM file is just a big mess and they have no idea how to fix it, they either start over or give up. I sometimes fix other people's ROM hacks.

My usual method was going close to the point of failure (freeze, reboot, whatever), making vba-sdl-h start generating a trace, triggering that failure and then looking at the 200+ MB trace file using split(1) and less(1) (basic gnu tools).

Yesterday, this method proved more inefficient than usual, so I came up with another one. I created two directories, a and b. In a I put a clean Pokémon Ruby ROM and the broken hack, named r.gba and d.gba. In b I created the following script:

#!/bin/sh
cp ../a/* .
dd if=r.gba of=d.gba skip=$OFFSET seek=$OFFSET \
    count=$SIZE bs=1 conv=notrunc

What this does is basically copy SIZE bytes from r.gba into d.gba starting at byte OFFSET, using dd(1). I would run it as (for example):

$ OFFSET=$((0x416000)) SIZE=$((0x1000)) ./

I had a savestate to run just before the crash, so I just had to change OFFSET and SIZE, run vba-sdl-h, press F1, and see if it worked.
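That dd invocation boils down to an in-place byte-range copy. As a sketch, the same operation in Python (a hypothetical helper, not part of the original script):

```python
def copy_range(src_path, dst_path, offset, size):
    """Copy `size` bytes at `offset` from src into dst, in place.

    Equivalent to: dd if=src of=dst skip=OFFSET seek=OFFSET
                      count=SIZE bs=1 conv=notrunc
    """
    with open(src_path, "rb") as src:
        src.seek(offset)
        chunk = src.read(size)
    with open(dst_path, "r+b") as dst:  # r+b: write without truncating
        dst.seek(offset)
        dst.write(chunk)
```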

Starting with 0x600000 bytes from the ROM start fixed the bug, as expected, but that would replace too many things from the hack with stuff from the original game. So I narrowed it down by halving and tweaking those values until I ended up with the 0x1000-byte copy shown above, which fixes evolution in the hack and (hopefully) doesn't break anything important in doing so.

The result of all this is here: What would have been, after some more work, Pokemon Dark Blue's beta 2.


VNC over 2 SSH servers

Today I *needed* a graphical connection to my home desktop, from ~100 km away. That means VNC. At home I have 2 computers: a server and said desktop. The server has a port redirected from the router for ssh, and from the server I can access the desktop (ssh as well). Therefore, I need some kind of double tunnel to get from the laptop to the home machine. The final solution looked like this:

home $ x0vncserver -display :0 -passwordfile ~/.vnc/passwd
laptop $ ssh -L 5900:(Desktop local IP):5900 -N (Home IP)
laptop $ vncviewer DotWhenNoCursor=1 :0

All the ssh connections I did here used RSA keys. VNC uses a password, but it's not facing the public internet and the connection is encrypted (it isn't from the home server to the desktop, but I assume there are no rogues inside my house), so it's not actually needed.

Neither of those IP addresses has to be an actual address here. In my case, the first one was the hostname of my desktop as written in the server's /etc/hosts. The second one was the name of a .ssh/config entry which uses a CNAME domain name, which points to a no-ip domain name (yay!).

That DotWhenNoCursor=1 thing was added because I had no mouse cursor without it (I'm not sure what to blame here). You can put that in ~/.vnc/default.tigervnc too.

Once the connection is open, you can press F8 to open the viewer's menu and use the fullscreen option or close the program, among other options.


Bye bye, database

This blog doesn't use a database any more. Instead, entries are saved in plain text files, and it's all put together using Mako templates.

Now I can have the blog contents under version control, I edit the entries the same way I edit the rest of the website (emacs and ssh), and the code has decreased in size quite a bit. It's also one less service running on the server.

This was my first application using a database, and I learnt a lot, but there is no point in keeping it any more.



Here it is, lads, a manual for Red Alien. I might or might not write a tutorial as well.

While doing it I realized that writing HTML using emacs with web-mode and evil is not bad at all. Also, I'll see if I can integrate that syntax highlighting I used in the document into the GUI's QScintilla editor.


Beep! Someone said my name

As an IRC client, I run weechat on my server, inside a tmux session, and connect to that using SSH. I had disabled the beep plugin, since I was beginning to go crazy every time someone mentioned me. Still, I needed some way to get notified when someone mentioned or /query'ed me. Enter notify-send and twmnd. notify-send is a standard tool to send notifications to the notification server in a linux desktop. twmnd is a nice notification server that blends well with i3 and is not annoying. The first obvious step is to start running twmnd, then.

Now, how do you tell weechat to send your desktop a notify-send over ssh every time someone says your name? You write a plugin. Here's mine: pastebin. Actually, as you'll see, I didn't write all of that: I took a script from the weechat website and modified it to my needs. If you want to use it, you'll have to replace “altair” on line 104 with whatever hostname or IP address your desktop has.
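The core of the plugin is just shelling out to ssh. A minimal sketch of that idea, independent of the weechat API (the hostname is a placeholder, and these helpers are illustrative, not the plugin's actual code):

```python
import subprocess

def build_notify_command(host, summary, body):
    """Build the ssh command line that runs notify-send on the desktop."""
    return ["ssh", host, "notify-send", summary, body]

def notify_over_ssh(host, summary, body):
    """Fire the notification. Relies on key-based ssh auth and on a
    notification server (like twmnd) running on the desktop."""
    subprocess.run(build_notify_command(host, summary, body))
```

The real plugin hooks this into weechat's highlight and private-message events.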



I'm so upset. jk.

I think the original UPS patcher had a command line interface already... Anyway, this was both easy and fun to do.


Here we go again

The server stopped working, like a week or two ago. The hard drives were too hot and died. Some info was lost, like everything in the database, but I saved the website code (yay!).