During my holiday break, I built a Stratum 1 NTP server in my home lab. Ordinarily, we get our time from internet-based NTP servers, which are usually Stratum 2 or 3. With Stratum 1, I can get significantly more accurate time directly from the Stratum 0 GPS satellites orbiting above me. Those satellites carry atomic clocks, the most accurate timekeeping devices humanity has ever created.
First off, time tracking this precise and reliable is overkill for my needs, but it's a fun project and a great learning experience. For this project I used a Raspberry Pi 5 (which I already had), a GPS module, GPIO wires, and a better antenna. I followed some guidance from a colleague, but much of this is based on a post on NetworkProfile.org. However, what I've put together below has some differences that I felt were worth documenting. For example, I'm using a newer Raspberry Pi 5 and AlmaLinux 10.1 as opposed to a Raspberry Pi 3 and RaspberryPiOS (Debian). As such, some of the instructions, commands, and insights are different, but the original is well worth the read.
In terms of hardware, I bought this GPS module receiver, this set of ribbon cables, and this GPS antenna. However, there are a ton of different modules and antennas available that should work. For everything, I spent about $25.
During this project, I learned that although USB is an option on the GPS module, it will not be anywhere near as accurate as going through the GPIO pins on a Raspberry Pi, because there is a pin on the module for PPS. PPS stands for Pulse-Per-Second and is a hardware timing signal output by many GPS receivers that provides an extremely accurate "tick" every second - accurate down to the microsecond and even nanosecond level. Even though the GPS module can be connected over USB and can deliver NMEA sentences and GPS time, USB cannot carry the PPS timing signal, which requires a dedicated electrical pin. So if we want a true Stratum 1 NTP server, we must use something like a Raspberry Pi that has GPIO pins.
The first step is to solder the header to the GPS module. I hadn't soldered anything in quite a while, so I picked up a soldering gun for about $10. I'll be up front: I shorted the PPS and TX pins on the GPS module with a little too much solder, which cost me about two hours of troubleshooting before I realized that was why I wasn't picking up any satellites.
As for connections, RPI5 GPIO pins 4, 6, 8, 10, and 12 need to be connected to VCC, GND, RXD, TXD, and PPS on the module, respectively. Below is the pin mapping, based on the RPI5 GPIO pinout diagram and the module's product page.
| RPI5 Pin | GPS Pin |
|---|---|
| 4 - 5V | VCC |
| 6 - GND | GND |
| 8 - GPIO 14 TXD | RXD |
| 10 - GPIO 15 RXD | TXD |
| 12 - GPIO 18 (PCM_CLK) | PPS |
Below is what my RPI5 looks like with the GPS module attached. You'll notice that I mounted the GPS module to a Deskpi KL-P24 board. I attached this to my RPI5 because 1) I already had two of them that were included when I bought my minirack, 2) it's cleaner in that all cables run in the same direction behind the RPI5, and 3) I was able to easily mount the GPS module to it. As for the excessive wire length, I'll shorten the cables another day.
On AlmaLinux 10.1, install the following packages: sudo dnf install gpsd gpsd-clients tcpdump. The original guide also suggested installing chrony and jq, but they were already installed, and pps-tools is not currently in the AlmaLinux 10.1 repos or EPEL (it isn't needed, though).
To enable the serial port on AlmaLinux, run sudo vim /boot/cmdline.txt and remove the console=serial0,115200 parameter so it will not interfere with GPS data and PPS timing. Below is what mine looks like. Note that I have pcie_aspm=off, which I only added to correct some NVMe issues; it's not needed for this project.
console=tty1 root=PARTUUID=f773372a-b398-4536-abc4-b86776f6e1c9 rootfstype=ext4 rootwait pcie_aspm=off
Now we configure the GPIO pins for PPS with sudo vim /boot/config.txt and add the following lines:
# The following 3 lines are for GPS PPS signals
dtoverlay=pps-gpio,gpiopin=18
enable_uart=1
init_uart_baud=57600
I already had enable_uart=1 present in mine; just ensure it exists in the config.
Now we can add the PPS module: sudo bash -c "echo 'pps-gpio' >> /etc/modules".
Create the file /etc/default/gpsd and add these lines:
START_DAEMON="true"
USBAUTO="true"
DEVICES="/dev/ttyAMA0 /dev/pps0"
GPSD_OPTIONS="-n"
In order to make gpsd start at boot, run this command: sudo ln -s /lib/systemd/system/gpsd.service /etc/systemd/system/multi-user.target.wants/. Now reboot the machine.
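As an aside, instead of creating the symlink manually, the systemd-native command should accomplish the same thing (this is my assumption; the original guide uses the symlink approach):
# enable gpsd so it starts automatically at boot
sudo systemctl enable gpsd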
If everything went smoothly, we should start seeing the LED on the GPS module blink once per second.
From here we can see if things are working correctly by running cgps -s and gpsmon. If satellites are being picked up, they will be displayed in this output. If we don't see any after around 15 minutes, the configuration is wrong or there's an issue with the hardware (such as the antenna not being in a good spot, or shorted connections, as in my case). You can also run gpspipe -w | jq ".uSat| select( . != null )" which prints the number of satellites being picked up, updating every second. It's important to note that at least 4 satellites are needed to get accurate time.
For reference, below is my output from cgps -s. Note that my GPS coordinates, Grid Square, and ECEF coordinates are obfuscated with #'s.
┌───────────────────────────────────────────┐┌────────────────Seen 19/Used 11──┐
│ Time 2026-01-01T03:00:48.000Z (18)││GNSS S PRN Elev Azim SNR Use│
│ Latitude 45.6####### N ││GP 5 5 47.0 136.0 43.0 Y │
│ Longitude 122.4####### W ││GP 11 11 43.0 54.0 39.0 Y │
│ Alt (HAE, MSL) 60.985, 83.138 m ││GP 12 12 40.0 161.0 29.0 Y │
│ Speed 0.09 km/h ││GP 18 18 14.0 235.0 38.0 Y │
│ Track (true, var) 84.9, 15.0 deg ││GP 20 20 51.0 80.0 41.0 Y │
│ Climb -4.32 m/min ││GP 21 21 45.0 88.0 36.0 Y │
│ Status 3D FIX (12 secs) ││GP 25 25 66.0 217.0 36.0 Y │
│ Long Err (XDOP, EPX) 0.66, +/- 9.9 m ││GP 28 28 20.0 284.0 28.0 Y │
│ Lat Err (YDOP, EPY) 0.72, +/- 10.8 m ││GP 29 29 60.0 302.0 36.0 Y │
│ Alt Err (VDOP, EPV) 1.60, +/- 36.8 m ││GP 31 31 14.0 314.0 31.0 Y │
│ 2D Err (HDOP, CEP) 0.98, +/- 18.6 m ││QZ 2 194 7.0 295.0 26.0 Y │
│ 3D Err (PDOP, SEP) 1.87, +/- 35.5 m ││GP 6 6 2.0 54.0 0.0 N │
│ Time Err (TDOP) 1.01 ││GP 9 9 1.0 28.0 0.0 N │
│ Geo Err (GDOP) 2.13 ││GP 26 26 4.0 321.0 0.0 N │
│ Speed Err (EPS) +/- 77.8 km/h ││SB133 46 37.0 189.0 0.0 N │
│ Track Err (EPD) n/a ││SB135 48 37.0 183.0 0.0 N │
│ Time offset 0.145824081 s ││SB138 51 35.0 159.0 0.0 uN │
│ Grid Square ########## ││QZ 1 193 n/a 0.0 0.0 uN │
│ ECEF X, VX -#######.### m 0.050 m/s││QZ 5 197 n/a 0.0 0.0 uN │
│ ECEF Y, VY -#######.### m 0.030 m/s││ │
│ ECEF Z, VZ #######.### m -0.050 m/s││ │
│ ││ │
│ ││
Now that we've confirmed that the GPS module is working correctly, we need to configure Chrony. Run sudo vim /etc/chrony.conf and add these two lines:
refclock SHM 0 refid NMEA offset 0.000 precision 1e-3 poll 3 noselect
refclock PPS /dev/pps0 refid PPS lock NMEA poll 3
Then uncomment log measurements statistics tracking to enable logging. Restart chrony with sudo systemctl restart chronyd.service.
In order to true up the clock, we need the "Est offset" values so we can revise our chrony configuration to make the clock more accurate. Run sudo head -2 /var/log/chrony/statistics.log; sudo grep NMEA /var/log/chrony/statistics.log and you'll see some data. After around 15 minutes, run it again. Copy and paste this data into a text file, save it, then import it into a spreadsheet with spaces as the delimiter. The only column we're interested in is Column R (Est). I had to change the values in this column from "Scientific" to "Number" and increase the decimal places so that four digits show after the decimal point. Take the average of this column - in my case, it's 0.1412. For more detailed instructions on working with the spreadsheet, I suggest the NetworkProfile.org blog post.
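If you'd rather skip the spreadsheet entirely, a small awk one-liner can compute the same average. This is a sketch under the assumption that the estimated offset for NMEA samples lands in the fifth whitespace-separated field of statistics.log - check the header printed by head -2 and adjust the field number if your log differs:
# average the estimated offset of all NMEA samples (field 5 is an assumption - verify against the header)
sudo awk '/NMEA/ { sum += $5; n++ } END { if (n) printf "%.4f\n", sum / n }' /var/log/chrony/statistics.log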
Open the chrony config with sudo vim /etc/chrony.conf and update the line refclock SHM 0 refid NMEA offset 0.000 precision 1e-3 poll 3 noselect with the offset value we just obtained. In my case, the line is now refclock SHM 0 refid NMEA offset 0.1412 precision 1e-3 poll 3 noselect. Also, comment out the line we uncommented earlier (log measurements statistics tracking), then save and close. We can also remove the old log file with sudo rm /var/log/chrony/statistics.log. Finally, restart chrony: sudo systemctl restart chronyd.
If the tuning we did earlier worked, then PPS will be deemed the most accurate source and will take over. We can take a look at chrony's sources with the command watch -n 1 chronyc sources (you can also add the -v option for verbose output if you want more details on what each column means). The output looks something like this:
Every 1.0s: chronyc sources mercury: Wed Dec 31 19:31:58 2025
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
#? NMEA 0 3 377 7 +6670us[+6670us] +/- 1000us
#* PPS 0 3 377 6 +704ns[ +834ns] +/- 161ns
^- dns-e.ns4v.icu 2 6 177 25 +468us[ +468us] +/- 48ms
^- 159.203.82.102 3 6 177 24 -858us[ -857us] +/- 37ms
^- 45.12.52.138 3 6 177 24 -3299us[-3299us] +/- 42ms
^- 173.208.172.164 3 6 177 26 -2429us[-2429us] +/- 151ms
As you can see in the far right column, PPS is showing an error bound of +/- 161ns, whereas the best network-based NTP source is +/- 37ms (or 37,000,000ns!). This value is chrony's estimate of the uncertainty (error bound) of that time source. Obviously, the difference in accuracy is substantial.
The symbols at the front are chrony's selection and status markers, which tell you which time sources are usable, trusted, and actually disciplining the clock. A # marks a local reference clock, whereas a ^ marks a time source over the network (most likely the internet). Of the second characters, * means the source is currently controlling the system clock, ? means unusable/not selected, and - indicates the source is a candidate but not selected. Here's a summary:
| Symbol | Meaning |
| ------ | ------------------------------------------- |
| `#` | Local reference clock (GPS/PPS) |
| `^` | Network time source |
| `*` | Selected (currently disciplining the clock) |
| `?` | Seen but unusable / not selectable |
| `-` | Valid candidate, but not chosen |
If satisfied with the accuracy, run sudo vim /etc/chrony.conf again and add "prefer" to the end of the PPS line (refclock PPS /dev/pps0 refid PPS lock NMEA poll 3 prefer).
To allow clients to connect to chrony, add the line allow 0.0.0.0/0, or whatever your home network is (e.g., the common 192.168.1.0/24). We can also add the line manual to chrony.conf, which enables runtime support for chronyc's settime command so the clock can be set by hand if ever needed. (Automatic stepping is governed separately by the makestep directive, so manual alone doesn't change how chrony normally disciplines the clock.)
We'll also add the line local stratum 1, which lets the server keep serving time to clients as a Stratum 1 source even if it temporarily loses its reference sources or internet connection.
While editing the chrony config, it's also not a bad idea to review and even add NTP servers to the file, either to verify your clock is accurate or as a fallback option. By default, the only one in AlmaLinux's chrony file is pool 2.almalinux.pool.ntp.org iburst. I also added server time.nist.gov and server tick.usno.navy.mil. Once you're satisfied, save and close, then restart chrony: sudo systemctl restart chronyd.
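Putting the pieces together, the additions to my /etc/chrony.conf end up looking roughly like this (your offset, network range, and fallback servers will differ - treat this as a sketch, not a drop-in file):
# GPS time over serial (NMEA), offset tuned from statistics.log, never selected directly
refclock SHM 0 refid NMEA offset 0.1412 precision 1e-3 poll 3 noselect
# PPS signal on GPIO 18, locked to the NMEA source and preferred
refclock PPS /dev/pps0 refid PPS lock NMEA poll 3 prefer
# allow clients on the home network and keep advertising Stratum 1 even without internet
allow 192.168.1.0/24
local stratum 1
# enable chronyc's settime command
manual
# internet fallbacks / sanity checks
pool 2.almalinux.pool.ntp.org iburst
server time.nist.gov
server tick.usno.navy.mil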
One final step on AlmaLinux is to open the firewall. Running sudo firewall-cmd --add-service=ntp --permanent and sudo firewall-cmd --reload will allow NTP connections to be made.
On client machines, it's fairly simple to add the new NTP server as a source. On Linux, we first need to determine which time service is running, which can be done with systemctl status systemd-timesyncd and systemctl status chronyd. In my case, chronyd is active, so I'll run sudo vim /etc/chrony.conf and add the line server 10.0.0.40 iburst, which is the IP address of the NTP server I just built. Run sudo systemctl restart chronyd and verify it's using the server with chronyc sources. As you can see below, 10.0.0.40 has been selected and identified as a Stratum 1 server.
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 10.0.0.40 1 6 377 42 +25us[ +33us] +/- 138us
^- 172-232-15-202.ip.linode> 3 10 377 374 +3078us[+3085us] +/- 79ms
^- fairy0.mattnordhoffdns.n> 3 10 377 378 +72us[ +79us] +/- 72ms
^- 198-12-95-197-host.coloc> 3 10 377 17 -1027us[-1027us] +/- 56ms
^- 23.186.168.132 2 10 367 193 +3436us[+3451us] +/- 25ms
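As an extra check on the client (not part of the original write-up), chronyc tracking shows which source the clock is currently locked to, its stratum, and the estimated system clock error:
# show the currently selected reference, stratum, and estimated clock error
chronyc tracking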
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
https://chuck.is/ntp
For example, I really could not care less about football, celebrity news, daily political controversies, social media influencers, and so on. I have zero interest in these things and I certainly don't have my identity anchored to them. What may seem obviously important to you could just be background noise to me. Yes, ignorance of the things I don't care about is bliss. My attention is finite, life is finite, so I choose to spend my time focusing on what I see as important or interesting - and it's fair that others may see what interests me as background noise.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
https://chuck.is/ignorant
There are many reasons to have your own domain that don't involve email or a website, but for me, those are the two primary ones. I have two websites: one for professional purposes and one for personal use (this one). When buying and beginning to use a domain, there are a few considerations I wish I had thought about first.
For example, will the TLD (such as .xyz) be blocked by ISPs or automatically flagged as spam by email providers because security teams are lazy? Or whether non-technical people will arbitrarily add an "m" at the end of an email address that ends in ".co" because they don't know any better? My suggestion is to stick with well established TLDs like .com, .net, or .org unless you understand what the drawbacks and limitations are (or don't plan on using it with email).
Cost is another factor. I use Iceland's TLD ".is", which is around ~$50/yr and can only be renewed annually rather than up to 10 years out as with other domains. The benefit is that it's easy and fun to use, in that I can configure sweet-looking URLs like chuck.is/dumb. That being said, established TLDs like .com, .net, and .org are usually the most cost effective, running about ~$15/yr, and can be paid for up to 10 years in advance. They've also been around a lot longer, so you're unlikely to find a short domain like chuck.net because some guy cooler than you already snagged it.
Let's talk about email. Once upon a time I wrote a post about why you should use your own domain for all your email. To summarize, having your own email domain is like owning the doorstep to your digital life.
For example, I own chuckcarroll.net and my email address is chuck@chuckcarroll.net. This decouples me from providers like Gmail, Yahoo, Microsoft, and even Protonmail. While many of these providers allow you to use a custom domain (for a fee), if they do something you don't like or suspend your account for any reason, you're basically screwed unless you have your own domain.
With a custom domain, you can typically move to a different provider with ease. This is why I consider switching providers to be not unlike porting your cellphone number to a different carrier.
Another benefit of custom domains is email aliases. I can create any alias I want, such as "school@chuckcarroll.net" or "[business]@chuckcarroll.net", on the fly. Each service I register with online gets a unique email address. When my credentials were leaked last year, I was quickly able to determine who was responsible because the exposed email address was "meineke@chuckcarroll.net" - a company I mistakenly gave my email address to for an oil change (and nobody should ever do business with them).
That being said, I should note that I've been slowly moving away from using [business]@chuckcarroll.net aliases. When real people see the custom email address, they often assume either I work for that company or that I gave false information. Either way, it makes for awkward social interactions.
Owning your own domain also means you can have your own personal or professional website that you fully control. Years ago, I wrote about why I think it's important for everyone to have a personal website. The same arguments apply here, but I'll add that if you're trying to get noticed (especially if you're looking for work) having a website helps you stand out.
I also strongly believe that if you're going to write or publish anything, you should own your publishing tools and your platform. Own your words. Own your distribution. Avoid falling into the trap of publishing on someone else's platform like Medium or Substack.
A domain also enables you to run and share online services. I use custom domains with services like Wallabag, Jellyfin, and Audiobookshelf. If a service is running on my server, I can easily and securely share that service with friends or family since it's publicly accessible. For this I use Cloudflare's Zero Trust tunnels.
Beyond being a ton of fun to experiment and play around with, owning a domain grants you freedoms you otherwise wouldn't have. It gives you control, flexibility, and long term independence of your digital life.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
https://chuck.is/self-reliant
Initially I thought it was the NVMe base, ribbon, or the drive, but I swapped each and still had the issue. Then I made the connection that I usually started receiving I/O errors around the same time after bootup. Consoling in, I noticed among all the messages that the NVMe went into power-saving mode not long before the I/O errors started to occur.
I think I solved the issue by disabling PCIe power management in the kernel, preventing the NVMe from going into power-saving mode. This can be done by adding a single parameter to the Raspberry Pi's boot configuration, and I haven't experienced any I/O errors since.
On AlmaLinux, edit /boot/efi/cmdline.txt and add the parameter pcie_aspm=off, then restart. On RaspberryPiOS, it's the same parameter, but the file is located at /boot/firmware/cmdline.txt.
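Two checks I find useful here (these are general Linux commands, not something from a Pi-specific guide): confirm after the reboot that ASPM really is disabled, and keep an eye on the kernel log for recurring NVMe errors.
# confirm the kernel was booted with pcie_aspm=off
cat /proc/cmdline
# inspect the ASPM state reported for each PCIe device
sudo lspci -vvv | grep -i aspm
# follow the kernel log and watch for NVMe I/O errors
sudo dmesg --follow | grep -i nvme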
I think the power saving kicks on at different times depending on the OS. On RockyLinux I started having I/O errors almost immediately on startup and didn't get the opportunity to edit the file.
What's odd is that when I was running EndeavourOS I wasn't experiencing this issue until I upgraded the system, so it sounds like a software issue. It's taken me a while to track down this issue, so hopefully I'll never see this pop up again.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
Some background: I prefer to buy DRM-free music so that I can play it on any device I choose and with any music program I wish, which is why I absolutely adore Bandcamp. They not only provide legitimate DRM-free music, but also make your purchases available in a variety of formats including FLAC, MP3, OGG, etc. From the artist side, Bandcamp only takes a 15% cut of the sale price while the rest goes directly to the artist. From my side, I get high quality DRM-free music without having to feel guilty about pirating. Everyone is a winner. Unfortunately, Bandcamp (and the artist) can renege on the deal whenever they wish.
On October 31st, 2025, my favorite metal band Despised Icon released their latest album Shadow Work. I did a typical search via a search engine and saw that it had been distributed through Bandcamp. I immediately ordered it, downloaded the lossless FLAC version, then converted it to 350Kbps OGG (the equivalent of a high quality MP3). A couple of days later I noticed that my conversion had some stuttering, so I went to download a fresh copy, only to discover that the album had not only disappeared from my Bandcamp collection, but had also been removed from my order history, despite my having a record of the purchase via email and the funds removed from my bank account.
Now, one of a few things occurred. The record company (Nuclear Blast Records) changed its mind about distributing on Bandcamp, in which case I believe I should still be able to download my purchase. This is something I believe Steam does with video games.
Another thing I've considered is that someone opened up a fraudulent account and uploaded Despised Icon's music library along with their latest album to make money off of people (like me) trying to legally purchase music. If this is the case, I should receive a full refund on my purchase, since there is no guarantee this is actually a legitimate copy, much less a legitimate LOSSLESS copy.
Unfortunately, this has soured some of the appeal of buying music via Bandcamp. What I find especially sketchy about this situation is that my purchase has been removed from my order history as well, and there's no trace in my Bandcamp account that I ever ordered this album. The only proof I have is an email confirmation and a transaction in my bank account. This is reminiscent of a couple of situations I've experienced in the past. A decade ago I bought a game via the Google Play Store; around a year later the developer pulled it and re-uploaded it as a separate item. I don't know what their reasoning was, but I was no longer able to download and install it on my device without paying another $6.99. I reached out to the developer and all they did was email me the APK (thanks, I guess?). Also, back in the days of Google Daydream, I purchased the VR game Blade Runner Revelations, and within a year I was no longer able to download/install the game and the transaction disappeared from my purchase history. All links on the internet pointing to the Google Play Store page returned the error "We're sorry, the requested URL was not found on this server."
Regardless of why this situation occurred with Bandcamp, this is yet another reason why I (and you!) should not only buy DRM-free content, but also store that content on a data storage device that you physically own. And in my case, keep the LOSSLESS version permanently backed up after it's converted to a different audio format. I am not an advocate of piracy, but situations like this are exactly why people do indeed pirate content.
In the last week, I have reached out to Bandcamp support directly via email twice and their support contact form twice, but have yet to receive a response. I'll update this post once I hear back.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
This post isn't going to be the holy grail of SSH, just some of the fundamentals that I find interesting and have incorporated into my home network environment.
SSH was designed as a replacement for Telnet, which initiated remote sessions in plain text. SSH allows remote sessions over an encrypted connection. Within the OSI model, SSH is generally considered to sit at Layer 7, the application layer. By default it uses port 22, but from a security standpoint it's considered best practice to change it to a non-standard port number to deter automated brute-force attacks.
Your OpenSSH client config file is stored in your home directory at ~/.ssh/config. A basic configuration includes the host, hostname or IP address, the user, and a port number. We could also include options like ServerAliveInterval 60, which sends a keep-alive every 60 seconds to prevent timeouts, or ServerAliveCountMax 3, which will drop the connection if 3 keep-alives go unanswered.
An example SSH config looks something like this:
Host jupiter
Hostname 192.168.1.10
User chuck
Port 21098
Host mercury
Hostname 192.168.1.30
User chuck
Port 22212
Host web-server
Hostname server166.webhosting.net
User chuck12345
Port 22
With this configuration, instead of running ssh chuck@192.168.1.10 -p 21098 to initiate a remote session to my server, I can simply run ssh jupiter. I went from typing 31 characters to just 11 (or 5 with bash auto-completion), which expends less energy from my fingers and saves billions of hours of time.
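The keep-alive options mentioned earlier can be applied to every host at once with a wildcard block at the top of the same config file; something like this should work:
Host *
    # send a keep-alive every 60 seconds to prevent timeouts
    ServerAliveInterval 60
    # drop the connection after 3 unanswered keep-alives
    ServerAliveCountMax 3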
Note that SFTP also uses this config file. I originally assumed SFTP was just FTP running over SSH, but as Lars helpfully pointed out to me, SFTP and FTP are entirely different protocols. SFTP, despite the name, stands for SSH File Transfer Protocol and was built from the ground up as part of the SSH protocol suite, as opposed to FTP (File Transfer Protocol), which transfers files in plain text and has no meaningful security. SFTP runs inside an encrypted SSH connection via port 22 (by default), providing not just file transfer but also file system operations like ls, mv, etc. There's also a distinction to be made for FTPS, which is literally FTP with SSL/TLS encryption added to it.
Running ssh [host] is great, but you still need to authenticate your session. We can, however, configure a passwordless SSH login.
Assuming you don't have an existing ssh key pair, generate a new key pair with ssh-keygen -t rsa -b 4096 -C "name@example.com". This will generate an RSA key with a key length of 4096 bits. The -C option creates a comment, and conventionally that's the user's email address which is simply an identifier. Hit enter to accept the default file location, then enter again if you want an empty passphrase (less secure, but helps with automation). This generates your private and public keys to ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub respectively.
Now to login to our server without a password, copy the public key to the server we're managing. Run ssh-copy-id [username]@[IP address of server] and you'll be prompted to enter the password. If all went well, you should receive a message Number of key(s) added: 1.
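If ssh-copy-id isn't available, the same thing can be done by hand; roughly, it just appends your public key to the server's ~/.ssh/authorized_keys file:
# manually install the public key on the remote host (what ssh-copy-id does for you)
cat ~/.ssh/id_rsa.pub | ssh [username]@[IP address of server] \
  "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"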
Now, all I need to do is run ssh jupiter and I'm immediately connected to my server.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
https://chuck.is/ssh
They regurgitate quotes they heard from some YouTube "influencer" or obscure podcaster. The type of content being consumed is usually under the guise of "life advice" from an "alpha male" pushing shitty supplements. In reality it's just some dude who hasn't done much with his life saying things like "women ain't shit" to drive engagement. These particular sentiments especially resonate with males who can't form connections with women because they have unresolved emotional issues but blame women instead of looking inward. Communities that form around these online personalities may promote themselves as "support" communities but are really a misogynistic circle jerk of incels. I don't mean to be picking on incels in particular, but they're currently the most undeniable example of being terminally online.
I've also noticed that they do not seem to be outraged about what they are actually seeing going on in the world. They are outraged by what their favorite online personalities see, and that is usually their primary source of new information. They get their information on current events through the lens of some random podcast host rather than a professional journalist. Their world is filtered through this online personality. Because of YouTube and social media algorithms, they're fed more and more of the same type of content, which feeds them similar messaging that reinforces their beliefs. What's interesting is how their opinions are the "correct" opinions and not actually biased, but anyone who disagrees with them is the one being manipulated by censorship and social media algorithms. The information they receive is gospel, straight from God's ass, and boy is it important to stay up-to-date on whatever the latest nonsense is. I've had people in my life assume I live under a rock and don't know what's going on in the world or current events since I'm not on social media.
Life offline is lacking. No sense of community, no fulfillment, little to no meaningful relationships, and loneliness. Parasocial relationships are formed with the online personality, providing an artificial one-way emotional connection. The personality is perceived by the viewer as a friend, and it feels like a real friendship because you see them in a personal setting and you're regularly engaging with them. Parasocial relationships are intensified due to trust and self-disclosure provided by the online persona. Once this relationship is established, it's easy to adopt the personality's opinions and biases without evaluating them critically.
Since their personal lives may be lacking in many ways, their sense of identity can be strongly affected by, or straight up derived from, online communities and digital content. Social identity theory explains how individuals define themselves based on their group memberships. There's nothing inherently wrong with this since it's a human characteristic, but it becomes problematic when your race, religion, or political party becomes the integral and most important part of your identity. Once you associate your identity with a group or an online personality, you're not only narrowing your worldview, you tend to over-personalize any sort of disagreement. A critique of a political figure or media personality might feel like a personal attack, leading you to become defensive or even hostile.
Being terminally online leads to a lot of toxic behavior. And it's not just the "life advice" bros, it's also other political entertainment masquerading as "news" and "independent journalism". Personally, I really don't care where someone stands politically, but if you're so wrapped up in left/right ideology that you can't simply exist in the same physical space as someone you don't agree with politically, you need to reassess what's important in your life. Also, if you can't just sit and have a conversation with someone without immediately bringing up whatever the latest political drama is, it's time to get a hobby or read a book.
I think there are varying degrees, but ultimately, don't be a terminally online person. We're all guilty of spending far too much time online to some degree or another, but it's important to self-monitor and recognize that behavior so you can change it. I get that it's great to make connections with people online, but letting it become a major factor in your life is not just bad for your mental health - quite honestly, it also makes you a boring person. If digital content is the only thing you consume, takes up a lot of mental real estate, and is one of the few things you can or want to talk about with your friends, you need to get a fucking hobby that gets you away from a screen.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
https://chuck.is/terminally-online
Having a quiet place to work for hours at a time isn't always easy. We've all seen the YouTube videos and Instagram photos taken by a "digital nomad" of their laptop at the beach or some other place with an amazing view. Sure it looks pretty, but the idea of doing remote work from a beach is ridiculous. Following up on emails, reading reports, or heaven forbid calling a colleague is the last thing I want to do while at a beach or some other beautiful spot. It's too distracting.
Coffee shops tend to be more ideal, but in my experience it's incredibly challenging to sit for long periods in a coffee shop chair. A reasonably calm environment isn't always guaranteed, you're spending money on food and beverages, and wireless access can be hit or miss. Also, looking down at a screen while sitting at a table in a non-ergonomic chair really sucks. The optimal way to do work with minimal distractions is inside an office (either a home or traditional work office). Granted, I absolutely love to leave the office to go work at a coffee shop for an hour or two, but I couldn't do that for hours on end.
My final thought is that constantly moving around means you don't establish roots, community, or a sense of belonging. Traveling to places where English isn't commonly spoken, if at all, can feel isolating (and I speak from experience). Even learning the local language as best you can, you're not likely going to have deep conversations with anyone soon, unless you come across others that speak your language.
That said, I'm not trying to discourage anyone from trying it, but if you're a remote worker, definitely try something short term first. Regardless, I think traveling abroad and experiencing different cultures is good for you.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
https://chuck.is/nomad
Mid-2024, before my son was born, I had been experimenting with servers and created a media server that I could access both locally and externally from my home network. This was primarily the outcome of applying some of the stuff I've been learning in a network technologies program, but I've also always enjoyed tinkering with Linux and computers in general. Without getting too technical, my small media server runs Jellyfin, connected to two 14TB drives configured in a RAID1 array, which can contain a ton of content.
Previously I had been using a small, painfully slow 4TB external drive to store my media. After transferring my files over to the new drives, I had a lot of free space so naturally I had to fill it with more content, which is itself a drawback in just how much time it takes to find, download, and manage all that content. Jellyfin does a decent job properly identifying movies and shows, but not always, so I've had to identify a lot of media files manually.
Part of me thinks that in order to justify the media server's existence and the costs associated with it, it needs to be used.
Back in October, we flew my wife's parents up here from Vietnam to help out before, during, and after our son's birth. Since our apartment was furnished for only two people and two cats, we needed to make the place comfortable for four people, two cats, and an incoming baby. We ended up buying a large sectional couch and a 65" TV so that her parents would be more comfortable and entertained during the winter months (and her dad absolutely loves watching YouTube).
Previously we only had a simple futon in our living room, but we didn't have a TV. When we wanted to watch a movie or a show, I'd have to pull out a projector, my laptop, and external hard drive, then hook it all up. We'd also have to wait until it got dark enough because we get a lot of natural light in our apartment and it's not ideal conditions for the projector. I intentionally went with this setup to create some friction so I can't just mindlessly plop onto the couch and consume media.
I've also had more free time on my hands. I've been on an extended paternity leave while caring for an 8 month old who demands constant attention, which makes it challenging to just sit and read a book, meditate, write, or do other things that require focus. It's far easier to let a movie, documentary, or show play in the background that I'm only partially paying attention to.
All of these factors together have played a part in my new habits. I've fallen into a pattern of turning on the TV and playing something off the media server every morning and often every evening. There is some legitimate quality content like documentaries (PBS Nova is one of my favorites), lectures, good movies, and shows I know I'd watch again and that I'd recommend to friends and family that have access.
I'm not anti-media, but I can't help but consider Neil Postman's concept of amusing ourselves to death. It makes me feel like I'm learning something or doing something, but the reality is that I'm hardly learning anything from that Netflix documentary series I downloaded. I could learn more by reading about a topic for 15 minutes than by watching an hour-long documentary.
So what's the solution? I know most will say something like "learn moderation", but I am the type of person who can struggle with self control. If I have a box of cookies in my home, it's very possible that I'll eat the entire box in a day, so that's why I don't keep cookies in my home. The solution might be as simple as just selling the TV. Or maybe create new constraints like "no TV within the first three hours of waking up", but that's far too easy to dismiss. The answer is to create more friction, but I'm not sure what that friction is. I'm still figuring it out.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
https://chuck.is/watching-tv
The dark forest hypothesis suggests that many alien civilizations exist in the universe, but are silent and thus undetectable out of fear of being destroyed by other hostile civilizations. These silent civilizations may also be "hunters" themselves in that when they detect another civilization, they attack and destroy it.
The framework for cosmic sociology is introduced in the book and explains how a dark forest universe would come about. Cosmic sociology simply refers to the study of how intelligent civilizations might interact on a cosmic scale. The dark forest hypothesis is born out of cosmic sociology.
Suppose that survival is the primary need of every civilization. Civilizations are likely to expand. Resources in the universe are finite. Destroying any potential rival civilization ensures access to resources and the continued survival of the civilization.
A Hobbesian trap explains another reason why preemptive strikes occur between two civilizations. In the chain of suspicion, as Cixin Liu describes it, both sides fear imminent attack from the other and therefore seek to destroy one another. A civilization cannot be certain that another civilization is not hostile, or may not one day become hostile. If guaranteed survival of your civilization is the goal, a preemptive strike is a rational course of action.
Here's an excerpt from the book:
Because they don’t know what we’re thinking.
Suppose they know that we’re not devils?
The problem still exists.
They don’t know what we’re thinking about them.
They don’t know what we’re thinking about what they’re thinking about us.
That carries on in an endless chain of suspicion: They don’t know what we’re thinking about what they’re thinking about what we’re thinking about what they’re thinking about what we’re...
The explanation for why we don't see other civilizations in the universe is because they're intentionally hiding themselves. This hiding is because of the perception that any other civilization would be a potential threat to their own, likely destroying any other civilizations they come across.
The book also touches on the Berserker hypothesis, another solution to the Fermi Paradox, similar to the dark forest hypothesis with some overlap. It suggests that the reason we don't detect other civilizations is that they've all been wiped out by a highly aggressive force via destructive von Neumann probes. This force could be an expansionist civilization that wants to wipe out any and all competition, or anything that could be a threat to its continued existence. Or it may not even be an expansionist civilization, but an artificial intelligence.
Although I tend to be agnostic toward these ideas, I love to think about them. Personally, I don't believe that the universe is teeming with life, at least not in a way where we would be able to meaningfully contact each other. Life itself is an exceptionally rare occurrence, and when life does emerge, it's extremely unlikely that it is intelligent enough to be capable of science, technology, interstellar travel, etc. Civilizations with these capabilities are separated by an unimaginably vast amount of space and time.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
https://chuck.is/dark-forest
To set SELinux to permissive edit the SELinux configuration file with sudo vim /etc/selinux/config. Then modify the SELinux parameter by changing the line SELINUX=enforcing to SELINUX=permissive. Save the changes to the file, then reboot the system for the changes to take effect.
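If you want to check the current mode or switch to permissive immediately without rebooting, the standard SELinux tools should do it (setenforce only lasts until the next boot, which is why the config file edit above is still needed):
# show the current SELinux mode
getenforce
# switch the running system to permissive until the next reboot
sudo setenforce 0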
Fedora 42 isn't officially supported yet, so I had to force it with the command sudo curl -sSL https://install.pi-hole.net | sudo PIHOLE_SKIP_OS_CHECK=true bash. This will run through the install script. Once it completed, I had to open up some ports to allow http, https, dns, etc.:
firewall-cmd --permanent --add-service=http --add-service=https --add-service=dns --add-service=dhcp --add-service=dhcpv6 --add-service=ntp
firewall-cmd --permanent --new-zone=ftl
firewall-cmd --permanent --zone=ftl --add-interface=lo
firewall-cmd --reload
Once the installation is complete, I like to change the randomly generated password to something I'll remember. To do this, run sudo pihole setpassword, enter the new password, then reboot the system.
In most cases, this should be the end, but in both a virtual environment and bare metal, I've had pihole fail to start with the error /run/log/pihole/pihole.log: No such file or directory, at least on Fedora 42. To solve this, I had to create the folder and give the proper permissions to pihole.
sudo mkdir -p /run/log/pihole
sudo chown pihole:pihole /run/log/pihole
sudo systemctl restart pihole-FTL
This will fix the problem, but I've noticed that after the server reboots, I've had to run these commands again. To solve this, I automated it with the systemd mechanism called tmpfiles.d. Create/edit the file with sudo vim /etc/tmpfiles.d/pihole.conf and add the line d /run/log/pihole 0755 pihole pihole -. The "d" tells systemd to create a directory at every boot, owned by the pihole user and group with 0755 permissions, and the trailing "-" means no age limit (i.e., never clean it up).
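Rather than rebooting to test it, the new rule can be applied immediately by pointing systemd-tmpfiles at the file:
# apply the new tmpfiles rule right away instead of waiting for a reboot
sudo systemd-tmpfiles --create /etc/tmpfiles.d/pihole.conf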
Hooray! Pihole is now making my network a more sacred place.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
After experimenting with running some of my server services in a VM, I decided to move my entire homelab to Proxmox. My homelab consisted of four separate servers (specifically three mini-PCs and a Raspberry Pi 5) and one dual-bay DAS. One of the biggest benefits of virtualization is that services I traditionally separated onto individual bare metal servers can run on a single machine. Proxmox allowed me to combine the four physical servers into a single server, and the VMs can share a USB port to access my dual-bay DAS via USB passthrough. In this sense, virtualization is simpler.
My primary physical server (called "jupiter") was configured to run five virtual machines, each running one to three services. Below is the documentation and notes for how I had the VMs separated (not included are the two other nodes, "mars" and "neptune", which didn't have many, if any, VMs before switching back). What's neat is that each VM gets its own IP address (though I believe this can be configured so that Proxmox handles NAT instead of the router), and I designed a basic IP scheme for the Proxmox server and VMs, assigning them statically on my router's DHCP server. The physical server gets an IP ending in .10, while the VMs get .11, .12, etc. Experimenting with a cluster, the other physical servers would get .20 and .30, and their VMs would get .21, .31, etc.
Granted, I'm far from a Proxmox pro, but I was regularly losing networking on the host server at random, and therefore all of the guest VMs. Sometimes it would be every couple of hours, sometimes within 48 hours. I had to install the app Keep It Up on my phone to send a ping every 5 minutes to make sure the server was still connected. I eventually solved this with a firmware update for the NIC, so this may not have been a Proxmox/Ubuntu issue, but a firmware issue. That said, I never had that issue when running it as a bare metal server with Fedora 42.
I've also been having issues with unstable VMs, and the reasons seemed to differ. I know that at least one was due to not providing enough system resources for the VM, specifically storage space. However, I also had issues with my media server VM (tachyon), which ran samba, jellyfin, and cloudflare, where it would crash and restart, only to load into rescue mode. By default, Fedora 42 disables the root account, which makes things extremely difficult when in rescue mode. I never tracked down the issue, but I rebuilt the VM with the root account enabled and no longer had the crash-into-rescue-mode problem. However, it did still reboot at random; after the reboot it would behave normally, and I would only notice if I was interacting with the server when the crash happened.
I had also experimented with setting up a cluster and added two more nodes (the other two servers in my homelab). I had migrated my Pihole VM from one node to another and it seemed to work fine, but the next day I noticed that it was no longer functioning. The VM would not power on and I was not able to migrate it back to the original node. I never tracked down the cause, but it was then that I decided to go back to a "simple" bare metal server.
As I mentioned at the beginning, Proxmox allowed me to combine multiple physical servers into one server, which does make things simpler in a way. On the other hand, instead of managing one server, you're managing many virtual servers - each needing its own updates, maintenance, and monitoring to remain stable. Admittedly, a lot of this is due to my own ignorance of Proxmox and virtualization, and I should also point out that some of these problems may have had something to do with running Proxmox on a mini PC with limited resources. At the end of the day, I just don't have time to troubleshoot when Proxmox/Ubuntu has issues, or when a VM is unstable.
Over the course of a few weeks, I spent chunks of every single day managing and fixing issues, and over time, I just stopped having fun. I needed a working server that I don't need to think much about so I can focus on other things. Even though I'm back on bare metal, this project did exactly what I wanted it to do: I've learned something. I've certainly learned a lot by playing around with Proxmox, breaking both guests and the host, and learning more about Linux. And that was the point of this experiment. I have a better understanding of not just Proxmox, but virtualization in general.
That said, I've decided to return to bare metal for core services like jellyfin, samba, and syncthing, because if these services go down, family members are affected. Switching back to bare metal, I've also simplified my earlier physical setup down to two servers - the ThinkCentre M920q and the Raspberry Pi 5. I've removed a switch, two mini-PCs, and countless cables.
This post is by no means a knock on Proxmox or on virtualization - if anything it's a knock on myself for jumping to a virtualized environment too soon, before adequate testing. It's obvious to me that virtualization is essential to be familiar with. And as with any technical project, we should expect problems to arise. I may one day return to Proxmox and virtualization. If you have the inclination, it's definitely worth the time.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
https://chuck.is/proxmox
Below are the steps I took to automatically decrypt and mount a LUKS-encrypted drive at boot.
sudo dd if=/dev/urandom of=/root/luks-keyfile bs=4096 count=1
sudo chmod 600 /root/luks-keyfile
sudo cryptsetup luksAddKey /dev/sdXn /root/luks-keyfile
Replace /dev/sdXn with your encrypted partition device (for me, it's /dev/sdb1). After a moment, you'll be prompted to enter the existing LUKS passphrase.
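To confirm the keyfile was actually added, you can inspect the LUKS header; an additional occupied key slot should show up in the output:
# list the LUKS header, including occupied key slots
sudo cryptsetup luksDump /dev/sdXn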
Open crypttab via sudo vim /etc/crypttab, then add the line:
luksdrive /dev/sdXn /root/luks-keyfile luks
"luksdrive" is the name of the mapped device (whatever name you choose, I called my drive "ganymede"). /dev/sdXn is your encrypted partition, and /root/luks-keyfile is the path to the keyfile.
Find the decrypted device, usually at /dev/mapper/luksdrive (matching the name in crypttab), and add an entry to /etc/fstab to mount it automatically (sudo vim /etc/fstab). Add this line to the bottom of the file:
/dev/mapper/luksdrive /mnt/your_mount_point ext4 defaults 0 2
Replace /mnt/your_mount_point with your desired mount point. Replace ext4 with your filesystem type.
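Before rebooting, you can sanity-check the new fstab entry, assuming the mapped device is already unlocked and the mount point folder exists:
# mount everything listed in fstab and confirm the drive lands where expected
sudo mount -a
findmnt /mnt/your_mount_point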
After following these steps, I realized that the folder you're mounting the drive to must exist, and in my experience the mount point folders under /media, /run/media, and /mnt don't reliably survive a restart. Two of my Fedora VMs failed to start, booting into "rescue mode" as a result. Creating a persistent folder is easy enough.
Run sudo vim /etc/tmpfiles.d/luksdrive.conf (tmpfiles.d only reads files ending in .conf), where "luksdrive" is the name of the mapped device. Add the line d /mnt/your_mount_point 0755 user user - where "user" is your username. Then run sudo systemd-tmpfiles --create. This will automatically create the mount point folder within /mnt at startup.
The next time the machine starts, the encrypted LUKS drive will automatically be decrypted and mounted to /mnt/luksdrive.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
https://chuck.is/auto-mounting-luks
In order for any of this to work, you first need a domain and an existing home server with some resource you want to access. Each service will have its own subdomain, so you'll have a URL like nextcloud.example.com or homeassistant.example.com to access that specific service.
If you haven't done so already, set up an account with Cloudflare. Then click "Add a domain" and follow the steps to get your domain added. I went with Cloudflare's free tier, which is adequate for my needs. You also need to use Cloudflare as your nameservers, so it's important to note that you can't have any other nameservers associated with the domain. Registrars can take up to 24 hours to process nameserver changes, but mine only took a few minutes. Once completed, you'll receive an email.
After the domain is configured, click on "Zero Trust" in the sidebar. In the "Zero Trust Overview" sidebar, expand "Networks" and select "Tunnels".
In the Tunnels screen, select "Create a Tunnel".
Cloudflare offers two tunnel types: Cloudflared and WARP Connector. Cloudflared is primarily a command-line tool that acts as a connector, creating secure tunnels to Cloudflare's services. WARP is a mobile app that provides a VPN-like experience, essentially encrypting all network traffic from your device to Cloudflare's servers. I chose Cloudflared.
Next, give your tunnel a name and select "Save tunnel".
Now choose an environment and start the configuration process. I went with Red Hat because I prefer running it as a systemd service and because I'm running Fedora on my home server, and the RHEL instructions only need to be slightly modified for Fedora. The only modification I had to make to the RHEL install script was to change "sudo yum localinstall..." to "sudo dnf localinstall...".
Assuming the install went smoothly, hit "Save" and go click on "Public Hostname" at the top of the screen. To add a local network service, click "Add a public hostname".
On this screen you'll set up your service by adding your subdomain name, selecting your domain, and entering the local IP address of the server and the port your service is running on. In my example below, I'm (fictitiously) running Jellyfin on my server at 10.10.1.40:9999, so I'm pointing "jellyfin.example.com" to it.
If everything went smoothly, you should now be able to access that resource from anywhere. I verified this by accessing it from my phone over my cellular connection.
What's really cool about this service is that I can access resources on my home network from anywhere. I have a few services set up, the most notable (and most fun) being Jellyfin, so I can access my media from anywhere. Also, every subdomain that's set up gets its own SSL certificate for HTTPS.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.
https://chuck.is/tunneling