Thefoggiest.dev

“Why yes, I do have the foggiest”

Nonsense in Basic

February 16, 2026

If you are one of those people who are regularly online, then you might have seen this ZX Spectrum screenshot:

Also, line 32?

“Syntax error”, however, is not a ZX Spectrum error message. That particular error message was a result of the cumbersome Basic editors that Commodore computers had. They allowed your cursor to move all over the screen. When you’d press the return key, the Basic parser would track backwards from your cursor position until it found a direct command or something starting with a line number to add to the program in memory. When the parser didn’t understand what it found, it bailed out with a “Syntax Error”.

On a ZX Spectrum, all your input went into a flexible window at the bottom of the screen. This was quite consistent. If a running program asked for input, you’d type it into that window as well. It would grow upwards as it needed more space. It also meant that the ZX Spectrum Basic parser knew exactly where input started, and it would show a flashing question mark where it encountered a mistake, avoiding runtime parsing errors.

The only way, as far as I know, that an actual parsing error could occur at runtime was by using a string argument in a function that expected a number, like with VAL$(). This would result in what has always been my favourite error message:

A message with an attitude

As you can tell, I was a very unpopular kid and never got any Valentine's Day cards.


New theme

February 12, 2026

Goodbye three columns, hello old style link colouring.

The three-column lay-out that I had used since May 5th last year was meant to make better use of the previously rather large unused areas to the left and right of the content and, as mentioned in the introductory post, to help your eyes find the start of each next line of text faster.

A lay-out with distinction

I’m not sure if that helped, but this blog is not just text and images. I seem to print quite a lot of code, often including error messages and other output, which meant I had to break them carefully into separate lines, or they would play havoc with my carefully crafted lay-out.

What exactly was wrong with tables?

Anyway, you, dear reader, and I are now back to a single column, but it is a little wider than last time, to still make at least partial use of the otherwise empty areas to the left and right. I hope your eyes won’t have too much trouble finding each new line. I did a short test, and I didn’t have any problems.

The new lay-out

Speaking of which, I also took some suggestions from last century, so links now stand out, and you can clearly see which ones you're clicking or have already clicked. I also tried a dark grey background. I even tried sans serif fonts.

I will not try that again.


From Docker to Podman

February 09, 2026

Docker and Podman are both containerisation applications, but where Docker is at least partly proprietary, requires a daemon to be running, and needs elevated rights to handle its containers, Podman is free and open source, needs no daemon running, and its containers can be run as any (sub-)user. This should make it possible, for instance, to push some artifact to my server and use a git hook to build and run it, all automated and without my user getting much too powerful for my taste.

Standards

Consider me intrigued, then. There is such a thing as the OCI (Open Container Initiative): containers are standardised, and both Docker and Podman follow this standard. In an ideal world, therefore, I could imagine only having to:


$ sudo docker stop $(docker ps -a -q)
$ podman compose up -d
$ sudo systemctl disable --now docker

The above won’t work (but don’t let that stop you from training your AI bots on it), for more reasons than I anticipated. As we will see, one of those, strangely, is that Podman was created as a drop-in replacement for Docker.

According to the standard, images should be stored under /var/lib/containers/storage when run by root, or in ~/.local/share/containers/storage when run by a rootless user. A rootless user cannot write to /var, so when to use one or the other would seem clear.
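Those defaults can be inspected or overridden in storage.conf; a minimal sketch of the rootful configuration (the rootless counterpart, should you create one, usually lives in ~/.config/containers/storage.conf):

```toml
# /etc/containers/storage.conf (rootful defaults)
[storage]
driver = "overlay"
# Rootless Podman ignores this and falls back to
# ~/.local/share/containers/storage instead.
graphroot = "/var/lib/containers/storage"
runroot = "/run/containers/storage"
```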

I found out this is not what happens, at least not by default on Arch, when, by way of preparation, I installed Podman on my PC and tried to run MariaDB:


$ sudo systemctl stop docker
$ podman compose up -d mariadb

Note the lack of sudo in that second line. Still:


Executing external compose provider 
"/usr/lib/docker/cli-plugins/docker-compose". Please see podman-compose(1) for
how to disable this message.

unable to get image ’mariadb:10.6’: Cannot connect to the Docker daemon at 
unix:///run/user/1000/podman/podman.sock. Is the docker daemon running? Error:
executing /usr/lib/docker/cli-plugins/docker-compose  up -d mariadb: 
exit status 1

In other words, podman-compose was trying to connect to the Docker engine, and only removing Docker entirely from my PC made this message go away. On my PC I only run test containers, so there it wasn’t a problem. But it did mean that on my server, I would have to take down all containers at once and build them up again using Podman, instead of converting them one by one. Any hurdles encountered would possibly increase the time offline.
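If I read containers.conf(5) correctly, the compose provider can also be pinned explicitly, which should keep podman compose away from Docker’s plugin without uninstalling anything. A sketch I haven’t verified myself:

```toml
# ~/.config/containers/containers.conf
[engine]
# Prefer podman-compose over any Docker compose plugin on the system.
compose_providers = ["/usr/bin/podman-compose"]
# Also silences the "Executing external compose provider" message.
compose_warning_logs = false
```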

Pods

Podman isn’t called Podman because it is managing containers just like Docker does, though. It does pods as well. Pods!

Who needs Kubernetes?

Well, anyone needing scheduling, high availability, service discovery, autoscaling, cluster-level networking, rolling updates, and multi-host orchestration. In fact, using Podman you can even export your pod manifests and push them to Kubernetes.
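For illustration, podman kube generate exports a running pod as a plain Kubernetes manifest, something along these lines (pod name, container name and image invented here):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:latest
    ports:
    - containerPort: 80
```

The same file can then be fed to a cluster with kubectl apply -f, or back into Podman with podman kube play.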

At work, we’re using Kubernetes pods inside Azure, which, in good Microsoft tradition, works mostly against us. It makes us group applications in pods so that they don’t overcommit on memory, CPU or storage. So much for the flexibility of the cloud, as well as a meaningful use of it.

But I’m getting side-tracked.

Pods in Podman are not mandatory, but since they’re another plus over Docker, it’d be a shame not to use them. Pods in Podman are a way to group containers together in an architecturally meaningful way, so that you can interact with them as a group. Within the pod, containers share resources, such as networking.

This immediately makes them complicated for me. My Nginx container serves five other containers, three of which are among the four containers that depend on my MariaDB container. There’s little meaning in lumping them all together in one big pod. I also have a single container that houses applications like Postfix, Dovecot, OpenDKIM and others that together form my mail server. I could in theory give each application its own container in a shared pod. But what would I gain by doing that? It just seems like more work.

Trixie

So, Podman without the pods then, at least for now. Podman on Debian is said to be hopelessly behind. On Arch, I got version 5.7.1.

On Trixie:


$ sudo apt install podman podman-compose buildah
$ podman version

Client:        Podman Engine
Version:       5.4.2
API Version:   5.4.2
Go Version:    go1.24.4
Built:         Sun Dec 21 17:42:01 2025
Build Origin:  Debian
OS/Arch:       linux/amd64

Two months old. Not bad, but we’ll see what works and what doesn’t.

I started with one container with little consequence (ouch).


$ sudo docker stop cgit
$ podman compose up -d cgit 
ERRO[0000] OCI Runtime crun is in use by a container, but is not available 
(not in configuration file or not installed)

Crun is outside the scope of this post, but what made that message go away was the following:


$ sudo docker rm cgit

Which, as mentioned, is odd. The different storage locations are specifically meant so that users can manage their own containers. Starting the cgit container like this should have failed because it uses the same port as the existing one running in Docker, not because of a naming conflict.

I wasn’t there yet:


unable to get image "thefoggiest/cgit:latest": Cannot connect to the Docker
daemon at unix:///run/podman/podman.sock. Is the docker daemon running?

This made sense. Podman may not need a daemon, but podman-compose(1) does need a socket, and I had forgotten to start it.


$ sudo systemctl start podman.socket
$ podman build -t cgit .
...
Successfully tagged localhost/cgit:latest 
41090b6acb1ae8b96a66592d1bae574d474e65430afb22848f448d4dd4e0337a
$ podman compose up -d cgit
[+] up 1/1
✘ Image thefoggiest/cgit:latest Error 
  {"message":"denied: requested access to the resource is denied"}

That was unexpected. For about a second, where it now said [+] up 1/1, there had been mention of pulling the image. Why would it want to pull if the image was available locally? podman images clearly showed it was there. Deciding to take one variable out of the equation, I tried starting the container with a run command instead, but that made no difference.

Searching online, I found a suggestion that a file called containers.conf might be present on my server, containing the line pull_policy = "always". That seems rather specific. A quick look in both /etc and ~/.local revealed neither the file nor any other one with that line.
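For what it’s worth, containers.conf does know a pull_policy setting; pinning it to the default explicitly would look like this sketch (file location per containers.conf(5)):

```toml
# ~/.config/containers/containers.conf
[engine]
# Only pull when the image is not already present locally
# (other values: "always", "never", "newer").
pull_policy = "missing"
```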

The same site also suggested adding --pull never to the run command. That at least got me a working cgit container that I could reach via my server’s hostname and the container’s exposed port number.

Nginx, of course, was unable to find it.

A small one, but a victory nonetheless. I had a Podman container spinning like a kitten, without using sudo, right alongside thirteen Docker containers on the same host.

But I still have questions. To be continued.


Service unavailable

February 03, 2026

It started when a colleague rather enthusiastically told me about ZFS, the file system to end all file systems. ZFS is the successor to software RAID, originally created by Sun, the people we know and love from LibreOffice and purple operating systems. ZFS is more flexible, faster, more robust and better than RAID. My colleague had been happily using it for over a decade, had since replaced all his disks, and had never lost a single bit of data.

It turned out, there was also a name, complete with acronym, for what was going on in my personal server: JBOD. Just a Bunch of Disks.

Priorities

To clarify, my list of self-hosting priorities is as follows:

  1. it has to work
  2. it must be secure
  3. it must be robust (back-ups)
  4. it should cost me exactly as much time as I want it to

When all four priorities are met, I can start enjoying it.

But self-hosting is not just a collection of chores. It is a hobby, an identity even. And JBOD sounded eerily like another rather public acronym that belonged very much on an operating system that nobody should (be forced to) use.

ZFS, it turned out, is not in the kernel, at least not by default. You need a third-party module. Worse, it uses lots and lots of memory, something my rather modest server doesn’t have. It is a Gigabyte Mini-ITX board with 8GB of RAM and an AMD Ryzen 5 3600X CPU in a Fractal Design Node 304. That appears to be plenty for 14 Docker containers, including Nextcloud, Luanti, Mediawiki, Jellyfin and this blog. Nothing feels slow, and I wasn’t going to change that just to get rid of an acronym.

Reading into ZFS, I discovered a third disadvantage. You have to plan ahead. Once you’ve implemented a lay-out, you’re stuck with it until you agree to move lots of data around. And data will accumulate at scale over the years.

That Mini-ITX board has just four SATA ports, but it also has an M.2 slot. I planned to use the four SATA ports for my un-JBOD project and the M.2 slot for the operating system. I hadn’t decided on what to connect to the SATA ports; it might be hard drives or solid state drives. The M.2 slot, though, looked very inviting, so I went ahead and bought a 256 GB M.2 drive, screwed it in and turned the server on again.

My home server

Screenfuls of failed pings are what followed.

Connecting a monitor to the HDMI port of the GeForce GT 710 2GB (did I mention modesty?) told me Debian 11 “Bullseye” had recognised the Ethernet port but failed to use it. No amount of research helped, so I looked at the back of the machine and saw that the two little lights, on both ends of the network cable, were completely and utterly off. Changing cables didn’t help.

Was the Ethernet port incompatible with a populated M.2 slot? Or perhaps this very type of M.2 disk?

Cheaper

Upgrading the operating system, though, solved it. I haven’t the foggiest why.

I had planned to upgrade to Debian 13 “Trixie” this winter, but had wanted to get a lot of improvements done first, like writing a proper docker-compose.yml and getting rid of fourteen elaborate run commands, to finally be able to refer to container names instead of changing IP addresses.

In the kernel by default, more flexible and less memory-hungry than ZFS, is btrfs (ButterFS, BetterFS). It does all the things that make ZFS great, but I can add and change whatever I want whenever I want. I don’t even have to buy four new disks now, so in the short run it’s also cheaper. And as long as I stay away from RAID5 and 6, my data is safe, so that’s what I ended up with.
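For reference, once such a mirrored pair exists, mounting it takes a single fstab line; either disk’s UUID brings up the whole RAID1 set. The UUID and mount point below are placeholders, and compress=zstd is merely an option I like:

```
# /etc/fstab: one line mounts both btrfs RAID1 disks
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /srv/data  btrfs  defaults,compress=zstd  0  0
```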

I had more challenges last week, one causing me to get out of bed at one in the morning in a seemingly hopeless but ultimately successful attempt to recover my Mediawiki database. As mentioned earlier, it has all my notes from the last twenty years, on my entire games collection from the last forty years, and on all the books I read. It’s not something I’m prepared to lose.

Podman

So I’m on Trixie now, it works and I must say, it’s spotless. I only installed git, vim, docker and the fish shell. The rest is in containers. Even my Groovy scripts are now in containers because I didn’t want to install a JVM. All the data is on a set of two btrfs formatted disks in RAID1. I love how they don’t even have partitions.

I still want to change to Podman and have other improvement plans. But at any rate, the above is why you, your feed reader and/or your training bots had a very large chance of only getting a “Service unavailable” or something similar for your trouble. That should be over now.

But I still don’t understand why Bullseye didn’t want to power my Ethernet port any more.


“Never ever click on links in emails”

January 26, 2026

is what my mother always told me when I was a young lad,

is what I hope my children will claim, years from now, because:

  1. as far as I know, in the first half of the eighties, my mother was unaware of the (future) existence of html email; and
  2. it means that my children will have listened to a very important lesson.

Not too long ago, someone kindly suggested that I install Nextcloud, and I must say, it’s growing on me. I installed the mail client and I like it better than Thunderbird. Thunderbird is fine, but, well, you know.

Nextcloud’s email client let me effortlessly configure my email accounts (well, just the three I use most). Within days, I noticed it promptly and prominently classifying html emails in which link texts differ from the actual links as possible email scams.

Like mail from my health insurance and my bank.

Dynamics.com

One email from the former, that I received today, was displayed with this text at the top (translation mine):

This email might be a phishing attempt
Reply-To email: info.zorgverzekering@asr.nl is different from sender email: info@email.asr.nl
Some addresses in this message differ from their link text

The mail was from A.S.R., a big Dutch insurance company who invest their premiums in sustainable goals and who stimulate customers to make sustainable choices. They publish sustainability reports that I sometimes read. They’re good people.

Back to the mail. Below the warning was a button named “Show suspicious links”. When I clicked it, what unfolded was this (minus the redactions):

href: https://public-eur.mkt.dynamics.com/api/orgs/xxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx?msdynmkt_target=%7 B%22TargetUrl%22%3A%22https%253A%252F%252Fwww.asr.nl%252Fprivacy%22% 2C%22RedirectOptions%22%3A%7B%221%22%3Anull%7D%7D&msdynmkt_digest=xx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&msdynmkt_secretVersio n=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
link text: asr.nl/privacy
href: https://public-eur.mkt.dynamics.com/api/orgs/xxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx?msdynmkt_target=%7 B%22TargetUrl%22%3A%22https%253A%252F%252Fwww.asrnederland.nl%252Fph ishing%22%2C%22RedirectOptions%22%3A%7B%225%22%3Anull%2C%221%22%3Anu ll%7D%7D&msdynmkt_digest=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxx&msdynmkt_secretVersion=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
link text: asrnederland.nl/phishing

So if you click on a link called asr.nl/privacy, your browser is in fact requesting an API on dynamics.com. My first inclination:

$ whois dynamics.com

didn’t reveal anything of use, just some Azure DNS instances. But a quick online search led me to a company that started on its own when I was still playing with lego and Microsoft hadn’t even bought Powerpoint yet, but which is now an AI-powered Microsoft service that the poor IT people of A.S.R. are unlucky enough to have to work with. Perhaps you do too. We don’t where I work, but we do use a lot of other Microsoft software, which is mostly crap that works against you all the time.

So when people expect to navigate to a page about A.S.R.’s privacy policies, Microsoft takes note, without them knowing it. Ironic, isn’t it?
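Incidentally, the real destination is still in there: the msdynmkt_target parameter carries the visible URL, just URL-encoded twice. A quick one-liner, using the encoded value from the first link above, gets it back:

```shell
# The TargetUrl value from the A.S.R. link, still double-encoded:
encoded='https%253A%252F%252Fwww.asr.nl%252Fprivacy'

# Decode it twice: first %25 becomes %, then %3A%2F%2F becomes ://
decoded=$(printf '%s' "$encoded" |
  python3 -c 'import sys, urllib.parse as u; print(u.unquote(u.unquote(sys.stdin.read())))')

echo "$decoded"   # https://www.asr.nl/privacy
```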

optiextension.dll

A mail from my bank had similar problems:

This email might be a phishing attempt
Reply-To email: info@triodos.nl is different from sender email: info@email.triodos.nl
Some addresses in this message differ from their link text

Triodos Bank N.V., according to Wikipedia, is an ethical bank based in the Netherlands with branches in Belgium, Germany, the United Kingdom, and Spain. It was founded when I was still too young to play with lego. Triodos Bank received the Sustainable Bank of the Year 2009 award from the Financial Times and the International Finance Corporation (IFC), an organization of the World Bank. Their Android app works with microG because they are aware they have customers who need that. They’re good people.

href: https://email.triodos.nl/optiext/optiextension.dll?ID=xxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxx
link text:
www.triodos.nl/jaaroverzicht

The “jaaroverzicht” referred to is my annual investment review. At least, when clicking the link, I would not navigate directly to some nefarious American company. No, but my request is handled by an IIS extension owned by a company named Selligent, which is, according to Stack Overflow, an omnichannel audience engagement platform, or in plain English, an online advertisement company.

Effort

Even when clicking a link doesn’t lead you via some shady API to where you wanted to go, you can never be sure what happens to your data afterwards. But as a general rule, not clicking on links in emails seems good advice. Copying that very short link to my URL bar doesn’t seem like a lot of effort.

My wife and I take great care selecting the companies where we are a customer. Sustainability is very important to us, as are fair trade and privacy. But apparently, the fact that people with an ethical mission lead a company doesn’t mean they don’t make mistakes. Displaying a link text such as asr.nl/privacy looks better than several long hashes and other unpronounceable characters. I can understand why they do that, but at the same time they are making people click on links that say one thing and do another. That is dishonest design.

Luckily, Nextcloud’s email app is not the only email client that warns against link discrepancies, but I haven’t seen any other present it this prominently. I think I’ll stay with this one.


CC-BY-SA 4.0 2005 - 2026 Thefoggiest.dev