My MNT Reform – almost a year on

MNT’s Open Computing Autonomy sticker, CC BY-SA 2.0

I assembled my Reform almost a year ago. This post is about how things have gone with the Reform since then.

When I assembled the laptop, I refrained from making any changes and built the system exactly as instructed. The one change I had in mind was toggling the switch on the CPU module to alter the boot sequence, making the storage onboard the CPU module (eMMC) the primary boot device. I wanted to put the boot loader there and set up the root filesystem on the NVMe drive with ZFS. But since the switch is tucked under the CPU’s heat sink, I was worried that one slip of the hand, before I’d even started using the machine, would damage something. So the stock setup it was, and I’ve been with it since.

I bought my MNT Reform kit with the ath9k wireless card, sleeve, a printed handbook, and the trackball & trackpad modules. I started off with the trackball installed, intending to switch over to the trackpad after some time. I’ve yet to do so: the buttons on the trackball module give a nice click and it feels good to use, so I’ve stuck with it.

The wifi connectivity at my place isn’t great, and even with other hardware and operating systems, including mobile phones, throughput is lacking. On the Reform, the antenna struggles to maintain connectivity, so I worked around it with ethernet, which is fine at home. Away from home it’s less of an issue: when tethering, the device acting as the access point is usually in very close proximity, or the access point is in the same room as the machine rather than on the opposite side of a property.

The laptop came with the v3 system image, based on Debian sid (the unstable train), on a Transcend SD card which I’m still using. Performance of the SD card has been good, and this was my first time running Debian sid. Despite being the unstable train it’s been fairly painless. I vaguely recall a package bug which caused issues with updates in the early days, but since then it’s been fine (colour me impressed). The only catch is the sheer volume of package updates: give it a week or so and there are easily a couple of hundred packages to update. I installed ZFS via DKMS and used it to create a zpool on my NVMe drive. The NVMe drive holds a swap partition and a zpool with my data; the OS currently lives on the supplied SD card, along with the root of my home directory.
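The layout described above can be sketched roughly as below. This is a hedged sketch: the device and pool names are hypothetical placeholders, not the ones actually used, and the destructive commands are shown as comments.

```shell
# Hypothetical device and pool names; the real layout will differ.
NVME=/dev/nvme0n1
# Privileged/destructive steps, shown as comments:
#   mkswap ${NVME}p1 && swapon ${NVME}p1
#   zpool create -O compression=lz4 tank ${NVME}p2
echo "swap on ${NVME}p1, zpool on ${NVME}p2"
```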

The system image is Debian sid with a specific kernel and drivers for the Reform, alongside some tools and customisations to make the system more welcoming, such as a login banner listing useful commands (defined as shell functions) like chat, which invokes an IRC client and joins the mnt-reform channel, or reform-handbook, which displays the system handbook in a browser. The image comes with KDE, GNOME 3, Sway, and Window Maker all pre-installed. I tried Sway briefly since it is a performant window manager, but soon rolled back to GNOME 3 out of familiarity and comfort.

Screenshot of a terminal showing the output of the reform-help command

The system image comes with a bunch of apps preinstalled, such as KiCad, ScummVM, FreeCAD, and Inkscape. It sets a nice tone for the machine as a fun creative space, and that’s pretty much how I’ve used the machine in this time. I have kept it separate from my daily environment and used it as a getaway from the usual when I want to focus on something.

Of the Reform-specific packages installed, there is the reform-check utility, which performs a sanity check on configuration and suggests changes that have been integrated into newer builds, such as missing packages shipped with the system image or an outdated u-boot. This makes an installed OS image easier to maintain and reduces inconsistencies when it comes to debugging.

If the pace of change of bleeding-edge Debian sid is too much, there is now a stable image available, currently based on bookworm. There is a path to switch an existing unstable system image to the stable one, but since you’re going back in package versions it gets a bit messy with old and new versions mixed, so it’s probably safer to do a fresh install of the stable image, especially if you’ve been keeping your unstable system up to date.

It’s been nice to be able to contribute minor fixes and changes to the components which compose the system image. When I first got the laptop, u-boot lacked support for initialising the display; Linux would initialise it once the system booted. As development progressed, u-boot grew support to init the display, and it even displays the MNT logo! I believe these improvements were a community effort.

Reform sitting at the u-boot boot prompt.


With Debian’s support for running different ABIs, it has been possible to run Steam on the Reform. Since you end up with a duplicate userspace, the number of updates balloons, but apart from that it just works. The hardware is sufficient to play Thimbleweed Park, but games needing more advanced OpenGL support will run without displaying; Monkey Island 4, for example, runs but shows only a black window. Since the system uses the open source driver for the Vivante GC7000 GPU, I wondered whether such games could be made to work with the vendor’s closed binary blob driver, which supposedly supports newer OpenGL, but I’ve not tried. The swap on NVMe was necessary here since Steam will use up all 4GB of RAM and work its way through the swap too (it really wants 8GB), but that’s fine; there’s no noticeable frame rate drop in the high-paced action of a point & click adventure. Outside of Steam, the GPU is capable of handling Monkey Island 1 & 2 in ScummVM, and the Minecraft clone Minetest which ships with the system image works just fine. (I’m not really a gamer, so don’t take that list as all the machine is capable of; I haven’t investigated extensively what works and what doesn’t.)

I still enjoy looking at the machine just as much as using it. The Japanese keyboard layout with the backlight looks beautiful. The ability to take the machine apart easily has made a world of difference, since it’s not a chore to investigate issues or perform maintenance. Need to reflash the keyboard? Six screws, and lift the bezel. Need to flash new firmware on the motherboard? Ten screws, and lift the base panel.

Overhead shot of the MNT Reform showing the keyboard and trackball.

Since I built the laptop, I’ve made two hardware changes.

Originally, the keycaps did not have a notch on the home row index keys, and since the layout is a little different to usual, switching between machines with different keyboards was disorienting while muscle memory was lacking. That’s no longer an issue, as notched keycaps are now available, which I purchased and installed.

The battery board which came with the laptop originally had a couple of issues, which were addressed in the updated battery board. With the original board, I needed a full battery charge if I was going to compile the ZFS kernel modules, otherwise the batteries could not sustain the prolonged surge in use. With the upgraded battery board this is no longer an issue.

I’m still using the battery cells which came with the laptop, and the laptop recharges them quickly. There is a short delay of around 30 seconds before the system detects the charger is connected and switches over, which has caught me out when I’ve realised at the very last moment that I’m about to run out of power and hurried to connect the charger to the power outlet.

Unfortunately I don’t have numbers on battery runtime, as the system gets switched off between uses and I’ve not bothered fighting sleep/resume: the original battery board caused issues with the system sleeping, and later kernel bugs prevented the system from resuming correctly.

Of the hardware features, I’ve yet to use the HDMI port to connect an external display. The built-in LCD panel is nice to look at. Since I’m not using full disk encryption on the SD card, I use a YubiKey for SSH keys. The orientation of the USB ports means the YubiKey’s touch pad faces down, which is a little annoying; since it flashes when a touch is needed it won’t go forgotten, but it is somewhat clumsy to have to lift the laptop up to touch it. I tried connecting the YubiKey to a small USB hub so I wouldn’t have to lift the laptop, but that didn’t really help: the YubiKey still faced down. What I really need is a YubiKey Nano rather than the standard one.

The headphone socket is fine for headphones with a cylindrical connector like the Apple earphones, but if your headphones use another shape of jack, like an L-shaped 3-pole one, it won’t go fully into the socket. This is due to the socket being positioned ever so slightly back; with the side panel installed, the panel blocks an L-shaped jack from seating fully. That was a trivial fix: a 15cm 3.5mm 4-pole male-to-female stereo aux extension cable.

3 cords with 3.5mm jacks on top of the Reform case.

One completely cosmetic mistake I made when assembling the machine on the first day was sticking the label with the unit’s details on the perspex cover directly above where the CPU heatsink is positioned; probably not the best place for it in the long term.

The way the LPC driver for Linux currently interacts with the kernel means there is insufficient time for the NVMe drive to complete its internal shutdown procedure, so the unsafe-shutdown S.M.A.R.T. counter is incremented every time the system is powered down from Linux. The workaround is to issue a reboot instead and switch the machine off at u-boot.
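One way to watch the counter in question is via the drive’s SMART log; a sketch assuming smartmontools is installed and the drive appears as /dev/nvme0 (guarded so it degrades gracefully where either is absent):

```shell
# Read the unsafe-shutdown counter from the NVMe SMART log; falls back to
# a message when smartctl or the device is unavailable.
if command -v smartctl >/dev/null 2>&1 && [ -e /dev/nvme0 ]; then
    MSG=$(smartctl -A /dev/nvme0 | grep -i 'unsafe shutdowns')
else
    MSG="smartctl or /dev/nvme0 not available here"
fi
echo "$MSG"
```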

Some new components have been added to the MNT Store for the Reform since last year. The most recent addition is the CM4 adapter, which allows the compute modules by Raspberry Pi & Banana Pi to be installed in the Reform as a CPU module replacement, and lets games like Monkey Island 4 be played without issue. When I can afford it, I really want the higher-end LS1028A CPU module, which has 16GB RAM and faster CPU cores. I don’t really strain the machine CPU-wise currently, but more RAM is always good. There are now FPGA modules too, so you can replace the ARM CPU with an FPGA running a softcore, which is exciting and unique; however, the modules cost more than the laptop itself. I would definitely go for one if I had the cash spare. For now, I’ll have to live with the idea of a laptop form factor in which I could experiment with different CPUs just by synthesizing them.

There’s a new Laird wifi antenna, and anti-flexing bars for the keyboard, which are going to be my next purchases; I’m curious what difference the bars will make.

On the software side I’m actually happy running Debian on this machine, as there are binary packages of anything I think of wanting to try. But I really want to switch to a root-on-ZFS setup, and I don’t feel the DKMS path of rebuilding a module as a separate component works, since a kernel upgrade plus a failed ZFS module build will render the system unbootable (not catastrophic, but a hassle to recover). I have hit this several times over the last year; for example, when the licence on some functions in the kernel was changed to GPL-only, it broke the ZFS build. It wasn’t an immediate failure but something that flagged up at the linking stage, if I recall correctly, resulting in the module not building during the apt upgrade. The workaround was to make a local modification to the ZFS licence ID so it would build, then force a rebuild of the package. So I’m torn: pick up the ball again, integrate ZFS support into Viewpoint Linux and add an aarch64 build, or just continue with the convenience of a maintained Debian system. I’d need to refresh everything in Viewpoint Linux, and before that, get the framework into place so that it’s easy to repeat the process and maintain going forward. Ahh!
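When a DKMS build fails after a kernel upgrade, the usual recovery is to rebuild the modules against the running kernel. A hedged sketch (the dkms commands need root, so they are shown as comments):

```shell
# Rebuild out-of-tree ZFS modules for the running kernel after a failed
# DKMS build; privileged steps shown as comments.
KVER=$(uname -r)
#   dkms status zfs
#   dkms autoinstall -k "$KVER"        # or: apt install --reinstall zfs-dkms
echo "would rebuild ZFS modules for kernel $KVER"
```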

Close-up black and white photo of the MNT Reform keyboard LCD display showing the MNT Reform logo; above, on screen, is the Debian logo.

Not having to focus on the OS / packaging side of things has definitely been appreciated over the last year. Starting off with an idea, trying it without a lengthy detour into compilation, and actually getting on with what I had in mind is great, and something I would otherwise have to maintain myself. Hmm, perhaps this could be a good excuse to buy more hardware for the build infrastructure. There is a new Reform motherboard on the way; if I upgraded the CPU module along with the motherboard, I would have spare hardware for a build system.
You have been reading the fantasy of someone with high end CPU taste, and microcontroller money. 🙂

Reform v0.4

Ubuntu 22.10 on Sipeed Lichee RV board

Sipeed Lichee RV Dock with 32GB microSD card installed and powered on via USB-C port

Things seem to have been ramping up on the RISC-V front over the last 12 months. Various open source projects have started offering official support, and new hardware is being released or announced regularly. As I write this, Sipeed’s Lichee RV has been around for quite a while (since 2021?), and new hardware announced before the end of 2022, with a quad-core SoC and up to 16GB RAM, will be released soon. Following the Kinetic Kudu release, Canonical announced support for Sipeed’s Lichee RV RISC-V board on their blog, and the wiki article made it look fairly painless, so I gave it a try. The Lichee RV with dock is advertised as a Linux starter kit on Sipeed’s AliExpress store; it’s a single-core board based around the Allwinner D1 SoC, which comes with 512MB or 1GB RAM. The 1GB RAM version with the standard dock+wifi is the one I have been playing with. There is now a pro version of the dock available, but it had not yet been announced when I went shopping. Similar-spec hardware featuring a mini-HDMI port and dual USB-C ports instead is available from another company as the Mango Pi MQ Pro; it’s a true single-board design, with no dock required.

The setup process to get started with the new Ubuntu release was straightforward: download the image, dd it to an SD card, and boot the device from it. It’s all headless since I have no LCD panel connected to the board, and the HDMI port on the dock didn’t work out of the box when I tried it, though it is listed as supported on third-party and vendor-supplied distributions. The board was connected to a network using a USB ethernet adapter so that I could continue setup, but it is also possible to get serial console access via the GPIO pins and USB-C.
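The download-decompress-dd flow looks roughly like this. It is demonstrated against a scratch file so nothing is overwritten; for the real thing, OUT would be the SD card device (e.g. /dev/mmcblk0), and the image filename here is a stand-in, not the actual Ubuntu image name.

```shell
# Decompress-and-write flow; OUT is a scratch file here, but would be the
# SD card device in reality.
OUT=$(mktemp)
printf 'fake-image-data' | xz -c > /tmp/board-image.img.xz
xz -dc /tmp/board-image.img.xz | dd of="$OUT" bs=4M conv=fsync status=none
cat "$OUT"
```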

The dock which the Lichee RV board connects to provides a Realtek RTL8723DS wifi (2.4GHz only) & Bluetooth controller. The wifi drivers for Ubuntu are provided via DKMS, which means they are compiled during the install process. An hour and a half after invoking apt install licheerv-rtl8723ds-dkms, the wifi driver was built, installed, and working.

Unlike the x86_64 builds of Ubuntu, the RISC-V build does not come with pre-compiled ZFS support; it needs to be installed via DKMS. The single-core board took around 4.5 hours to complete apt install zfs-dkms. With ZFS support in place I could move on to attaching some disks.

I wasted quite a lot of time fishing out an old powered USB 2.0 hub and attaching a set of disks to it: despite the power supply being a big heavy brick, it provided insufficient power to run anything more than a single hard disk safely. With 3 disks and the board itself on the hub, the disks suffered (insufficient power to spin them up); with 2 disks and the SBC, the power draw was still too much, and the board became unstable as disk I/O increased, which was amusing to watch (imagine a scenario where you are trying to throttle the system in order to survive). After some experimental online shopping I found a powered USB 3.1 hub which provided sufficient power for the disks; the board still draws too much power to run the disks and the SBC from the same hub, so the SBC ended up on a separate power supply.

I created a new mirrored pool on the pair of disks attached to the USB hub, attached a third disk which contained another ZFS pool, then began rsyncing the data from the single-disk pool to the new mirrored pool as a torture test to see how things would run, if at all. The new mirrored pool had compression=on set, though in hindsight that was a waste of CPU cycles, as the resulting compressratio was 1.03x. During the copy the system suffered several module crashes related to ZFS functions which stopped the copy process, though the system was still reachable over SSH to issue a reboot. Despite only having 1GB RAM, it took over a week to copy 2TB of data across the two pools, though many hours were lost between module crashes and me noticing so I could restart the process. Once the data was all copied across, it took several days to scrub the new mirrored pool. The load average sat around 5 throughout the entire period, from copying to scrubbing, waiting on I/O.
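The pool juggling described above can be sketched as follows. Disk paths and pool names are hypothetical placeholders, and the privileged commands are shown as comments since they are destructive and need root:

```shell
# Hypothetical disk paths and pool names.
D1=/dev/sda D2=/dev/sdb
# Privileged steps, shown as comments:
#   zpool create -O compression=on tank mirror "$D1" "$D2"
#   zpool import oldpool                 # the existing single-disk pool
#   rsync -a /oldpool/ /tank/
#   zpool scrub tank && zpool status tank
echo "mirror vdev: $D1 $D2"
```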

Since it’s a single-core board, the system is unable to maintain a wifi connection under sustained, prolonged CPU load; the connection drops, and I am not yet sure whether it can be taught to retry. It has no such issue with USB ethernet: the CPU can be maxed out for some time and the system remains reachable, so perhaps the wifi driver is the culprit, or the lack of an antenna on this small piece of hardware.

There’s a lot of software available, either packaged by Canonical or as official binaries provided by the projects themselves, so OpenJDK, Zig, LLVM, and Rust were readily available to install. I mention these because they would have a hefty build time starting from scratch anywhere, but especially on a platform with such limited resources.

With LLVM installed I was able to compile bpftrace. Something is not quite right, though: when running execsnoop, invoking the same command several times produced different results in the traced output; sometimes the exact command with arguments was traced, sometimes just the fact that the command executed, and sometimes only a blank entry with a timestamp and process ID. Separately, disabling systemd-resolved and switching to Unbound as the local resolver made a big difference to the responsiveness of the system, noticeable when SSHing in: the password prompt returns more quickly.
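For cross-checking the flaky execsnoop output, a minimal bpftrace one-liner covering the same ground (tracing execve calls) can be useful. It needs root and bpftrace, so the live trace is guarded and skipped otherwise:

```shell
# One-liner equivalent of execsnoop's core: print pid, comm, and the
# argv of each execve. Only runs live when bpftrace and root are available.
PROG='tracepoint:syscalls:sys_enter_execve { printf("%d %s ", pid, comm); join(args->argv); }'
if command -v bpftrace >/dev/null 2>&1 && [ "$(id -u)" = 0 ]; then
    timeout 5 bpftrace -e "$PROG" || true
else
    echo "skipping live trace; program: $PROG"
fi
```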

It’s nice to see that the software support is there: it is an entry-level system, but being so rich in readily available software makes it handy hardware for experimenting which can be left on, lying around. My hope was to replace an ALIX 2c3 board with the Lichee RV, providing services like shell, routing (for the odd occasion when I try to use computers without wifi connectivity), and a caching resolver via Unbound, but I need to make progress on stable wifi connectivity before I can swap the systems around. Having ZFS support on such a platform is really cool; bugs have been reported on Ubuntu Launchpad for the module crashes: #2002663, #2002665, #2002666, #2002667. Trying stock upstream versions is next on the list, if I can get the build time down by building the modules on an emulated guest running elsewhere. There’s also a port of xv6 with D1 SoC support which is on my list to try out. For now my Lichee RV board sits running behind the ALIX board; it has been running 24×7 for the last couple of months, the first month stress testing the board by copying the data across and building bpftrace, the second mostly idle to see if it’s stable, while I focused on something else (organising media).

shortcoming
Performance garbage, single core, very cardy to use
Some software porting risc-v has wonderful bugs
No GPU, very laggy

Description in an advert on AliExpress by 3rd party for a D1 based RISC-V SBC.
Not sure what they mean by “very cardy” 🙂

LFS, round #6

It’s been a while since I wrote a technical post in this series; since the last post I’ve made a build of what I call the Viewpoint Linux Distribution available. This post covers the time between the last post (round #5) and the launch of the distro.

By the time I’d written the previous post, things had roughly taken shape, and I was thinking about what would sit on top via packaged software. Being interested in Guix from afar, I thought about using that, as there had been some interesting talks about it in FOSDEM‘s Declarative and Minimalistic Computing devroom a month prior. I didn’t end up going down that route because Guix requires GNU Guile, GnuTLS, and various extensions for Guile. The requirements themselves aren’t the problem, but I would have had to ship and maintain copies of them in the base OS, and I didn’t want to do that, so I stuck with what I knew. I’ve spent a lot of time with pkgsrc and am comfortable working with it. pkgsrc gives you control over where it satisfies dependencies, and as long as you have a shell & compiler installed, it can get itself to a working state. Unless told otherwise, the bootstrap process on Linux satisfies all dependencies from itself and ignores anything already installed on the system. This behaviour can be overridden by specifying --prefer-native yes when bootstrapping, which was preferable here since the OS was using recent if not the latest available versions of things. Despite preferring native components, when it came to building packages, things that were present in the OS were being built again anyway, specifically readline.
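The bootstrap invocation in question looks roughly like this; it needs a pkgsrc checkout and takes a while, so the real commands are shown as comments (the /usr/pkgsrc path matches the snippet later in this post):

```shell
# Bootstrapping pkgsrc on Linux while preferring native OS components:
#   cd /usr/pkgsrc/bootstrap
#   ./bootstrap --prefer-native yes
BOOTSTRAP_ARGS="--prefer-native yes"
echo "bootstrap args: $BOOTSTRAP_ARGS"
```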

$ cd /usr/pkgsrc/shells/bash ; bmake show-var VARNAME=IS_BUILTIN.readline
no

After some investigation, it turned out the builtin detection mechanism was not working, so dependencies would always get built. This was due to a difference between where libraries are installed when following the LFS guide and where pkgsrc expects to find them: the LFS guide specifies /usr/lib as libdir, while pkgsrc expects /usr/lib${LIBABISUFFIX}, which in this case expands to /usr/lib64. Just to move things along, I patched pkgsrc/mk/platform/Linux.mk to include /usr/lib in _OPSYS_SYSTEM_RPATH / _OPSYS_LIB_DIRS, and builtin detection started working. With a working packaging system, I began packaging BCC and bpftrace, though in the end I opted to use the bpftrace binary which the project produces with every release. This makes things easier, as there is a working environment out of the box to start with; if BCC is needed it can be installed, but since the BPF Performance Tools book is largely about using bpftrace, you get to start off without dealing with packaging. Keeping the packaging system a separate component also saves shipping a bootstrap kit for the packaging system with every release, along with likely-stale packages depending on how quickly things evolve. I dislike the idea of having to run a package update on first boot to shed stale packages shipped with the OS.

After testing various things out, I set out to make a new build of the distro to publish, this time opting to use lib64 as the libdir to reduce the need for changes to pkgsrc. I have not attempted any large bulk-build runs, but the Emacs 21 package was definitely not happy, as it expected to find some things in /usr/lib.

Various packages ship with DTrace USDT probes which bpftrace can also make use of. This involves building those packages with DTrace support enabled, using SystemTap, which on Linux provides a Python script called dtrace to do the relevant work. I created a package, but since it requires Python, it created a circular dependency when using Python 3, as Python 3 itself has USDT probes. As a workaround to sidestep the issue, my SystemTap package uses Python 2, which is still supported by SystemTap. To enable building with DTrace support, I introduced a “dtrace tool” which pulls in SystemTap as a dependency on Linux when USE_TOOLS+=dtrace is specified, and nothing on other platforms. I then added USE_TOOLS+=dtrace across the tree wherever dtrace was a supported option.
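A hypothetical pkgsrc package Makefile fragment of the sort described; the configure flag name is illustrative, as the exact option varies per package:

```make
# Hypothetical fragment: enable USDT/DTrace probes when building.
# On Linux the "dtrace" tool resolves to SystemTap's dtrace script;
# on other platforms it adds nothing.
USE_TOOLS+=		dtrace
CONFIGURE_ARGS+=	--enable-dtrace
```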

bpftrace listing the USDT probes found in libpython built from the Python 3.8 package in pkgsrc

With the OS rebuild, I dropped nscd(8) from the system; the thought of having up to three caching resolvers (nscd/systemd-resolved/unbound) seemed a bit excessive. This post highlights why you might not want nscd support on your system. As part of the rebuild I began populating the repository with sources for everything that would ship with the distro. It was a tedious process that slowed down as I progressed through the build and imported more and more components, because on each initial import I would roll the tree back to the start to import into a branch, update to the tip of the tree, merge the branch, and repeat. I used the hg-git Mercurial plugin to convert and push the tree to a Git mirror.

The kernel config started life as the default config created by running make defconfig, built up from there to cover what the LFS guide suggests and the options required by BCC / bpftrace. Testing that X11 worked revealed I was missing various options, from mouse support to emulated graphics. The safe bet was the VMware virtual card (VMSVGA, which is the default on VirtualBox), which also works on QEMU; other options resulted in offset problems where the cursor would appear in one place on the screen but clicks and drags registered at a different location. Everything works out of the box with the VMware option.
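For reference, the kinds of options involved look like the fragment below. The symbols are mainline Kconfig names, but the exact set needed is an assumption on my part and depends on kernel version, so treat this as a pointer rather than a definitive list:

```
# Virtual display and input support for running as a guest:
CONFIG_DRM_VMWGFX=y      # VMware SVGA II / VMSVGA virtual GPU
CONFIG_DRM_QXL=y         # QEMU QXL display
CONFIG_INPUT_MOUSEDEV=y  # legacy /dev/input/mice interface for X
```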

I’ve been really impressed by how quickly the system boots and shuts down (no initrd image to load, and minimal drivers to probe, account for that), and I hope I don’t end up losing that. I used the work leading up to the release as an excuse to start using org-mode in Emacs; following the beginners guide, I now have a long list of todo items which I work through. The next big item is build infrastructure so I can turn around releases more quickly.

Introducing the Viewpoint Linux Distribution

Person observing what's going on through a tiny window whilst huge, wild, painted horses pass by

Viewpoint Linux is a distribution providing a minimal environment for me to build on and play with. I hope that for others it can be a distro which provides a working environment to use alongside various texts I have in mind, allowing the reader to focus on study of the material at hand rather than on getting their environment set up right.
The idea came about through having to sidestep from study to investigate broken stack traces, and wondering about the level of pain involved in making system-wide build changes on a distribution which doesn’t provide infrastructure to rebuild en masse with ease. When I first started writing about my experiments with LFS, it was suggested that I look at several different established distributions as the answer to what I was looking for. I was aware of those distributions already and had even used some in the distant past; however, I decided not to go down that path, as there was either new tooling to learn to drive system management, or components were adapted (local changes and features). I was not interested in detouring to learn another set of tools that are non-transferable between operating systems, nor in making use of derivatives, before setting the system up how I needed so that I could practise what I was studying; hence Viewpoint Linux strives to be innovationless in this regard.

Viewpoint currently lacks a framework to ease building the system hence everything has been built slowly by hand with a specific idea of how the system should be.

Some of those ideas are

  • It should work out of the box for the texts in mind, e.g. full working stack traces for instrumenting with bpftrace and debugging using GDB
  • Its concept of a base system is a subset of the utilities installed by the LFS guide, containing general utilities for users and tools for administration. Components which are purely build dependencies are installed to a separate prefix (/osbt, for “OS build tools”) so that they can be removed if desired. Everything else is satisfied by user-installed packages, currently provided by pkgsrc. Dependencies can grow out of hand: for example, dwarves has a build dependency on CMake; dwarves provides the pahole utility, which is used as a kernel build dependency to generate BTF but is also useful by itself for inspecting system data structures. This was a grey area where I chose to include dwarves in base but satisfy its build dependency (CMake) from external sources, in this case the prebuilt binaries the CMake project provides.
  • A repository (monorepo) of all components shipped. Not such a good idea because of fighting autotooled builds and timestamps (see Living with CVS in Autoconfiscated Projects), but it makes tracking changes in the distro easy, which matters more to me.
  • It is safe to assume that I’ve run configure, make, make install a bunch of times with CFLAGS set to ‘-fno-omit-frame-pointer -g‘ or some variation thereof (for glibc you also have to enable optimisation, otherwise the build fails).
  • Viewpoint is an innovationless distro, see previous point (there are no new methods or tooling on offer, just stock components from upstream built a certain way with differing flags)
  • Viewpoint uses systemd (I wondered what my own shit sandwich would taste like)
  • Mercurial for source repo (because one piece of Linusware at a time). There is a git mirror.
  • Primarily intended for use as a guest VM, though it is possible to install on hardware; the distinction is that nothing has been done to cater for differing hardware in the kernel config, so manual intervention may be required to prep and get everything working, e.g. it booted fine on my ThinkPad X230 but I had no wifi. There is also no UEFI support currently, nor any additional firmware included.
  • Development of features for 3rd-party components happens outside of the tree (because it’s innovationless)
  • Patches from LFS have not been applied, again because innovationless, e.g. their provided i18n patches to coreutils, which are marked as rejected by upstream. The LFS guide states: “In the past, many bugs were found in this patch. When reporting new bugs to Coreutils maintainers, please check first if they are reproducible without this patch.”
  • Versioning is going to be a sequence number, meaning nothing beyond an indication of a new release
  • Viewpoint doesn’t follow the FHS spec strictly, and the LSB not at all. Perl & Python are not part of base (because I did not want to maintain them there).
  • Currently intended to be used alongside Brendan Gregg‘s BPF Performance Tools book and Diomidis Spinellis’ Effective Debugging book, for learning two different debugging workflows. Other texts are in mind for accommodation in the future. I would have liked to include DTrace, but that currently requires running a fork of the kernel. While the fork is kept up to date with upstream, as part of being innovationless it is easier to swap in components fresh from upstream, and it saves having to eliminate another avenue where an issue could have been introduced when debugging problems.
Beware! Vendor Gratuitous Changes Ahead!

The source repository is currently 5.1GB (1.8GB .hg directory, 3.3GB of source), plus a 1.8GB .hg/git conversion directory, so as you can tell, that’s a lot of value add 🙂. On deciding whether to strip components down to the essential minimum, I opted not to, as running test suites is part of the LFS workflow when building things up, and keeping them would make CI integration easier. AMD firmware included in Linux aside, the test suites from GCC and Binutils, for example, take up the most space in the repo.

Lots to do to smooth things over, but some key features that I intend to work on for inclusion in a future release:

  • Build framework to automate the configure, make, make install routine and allow easy customisation, à la the BSDs.
    There is a framework in the LFS project called ALFS, but I didn’t want to go down the literate programming route and maintain my own fork of the LFS guide (you feed ALFS the XML source of the guide and it builds the distro from that).
  • Add ZFS support

Why the name?

  1. It is focused on observability
  2. It is opinionated
  3. I listened to a lot of Alan Kay lectures (a nod to Viewpoints Research and the ViewPoint OS from Xerox, though this distro is in no way a great feat of achievement)

Viewpoint is a variant of LFS distribution, registered on the Linux From Scratch Counter on 03/05/2021, ID: 28859, Name: Viewpoint Linux Distribution, First LFS Version: 10.0-systemd.

Post continued here

The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis.

LFS, round #5

Up to this point I’ve been working in a chroot to build OS images from a loopback-mounted flat file, which is then converted to the vmdk format for testing with VirtualBox. I created packages for bpftrace and BCC. BCC was fairly trivial: the availability of a single archive which includes submodules, bcc-src-with-submodule.tar.gz, helped avoid the need to package libbpf. bpftrace doesn’t offer such an archive and tries to clone the googletest repo, which I sidestepped addressing just to obtain the package. Both packages worked ok, though I only tested the Python side of BCC and not LuaJIT.

Execsnoop via BCC

With that I wanted to see if what I had would boot on actual hardware, so I dd’d the flat file to a USB flash drive and booted it on a Dell Optiplex. Things worked as far as making it to GRUB, but then I hit a couple of glitches. The first issue was that, because of the delay probing the USB bus, the kernel needs to be passed the rootwait parameter, which I was missing; without it, the kernel would just panic as no root file system could be found. After that I hit the issue that I’d nailed things to a specific device node (sda), and with the other disks in the system the flash drive was now a different device node (sdb). Addressing that got me to the login prompt, and I was able to repartition the SSD installed in the system with cfdisk, make a new file system, copy the contents of the flash drive to the SSD, install GRUB, and reboot to boot the system off the new Linux install.
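Both nits can be addressed on the kernel command line in grub.cfg; this is a sketch of what such a line could look like (the PARTUUID value is a placeholder, not a real ID):

```sh
# rootwait makes the kernel wait for slow-probing USB storage instead of
# panicking; root=PARTUUID avoids pinning to a device node like sda, and
# unlike root=UUID it works without an initramfs
linux /boot/vmlinuz root=PARTUUID=00000000-01 rootwait ro
```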

As grub-mkconfig had included references to the UUID of the file system on the flash drive, the system landed in GRUB’s rescue mode. Since it wasn’t able to load the config, nothing is loaded and, most importantly, its prefix variable is set incorrectly. This results in a strange behaviour where nothing that would normally work at the GRUB prompt works. Setting the prefix variable to the correct path allows you to load the normal module and switch from rescue mode to normal mode.

grub rescue> set prefix=(hd0,msdos1)/boot/grub
grub rescue> insmod normal
grub rescue> normal

Once in normal mode it was possible to boot the system by loading the ext2 module and pointing the linux command at the path of the kernel to boot. Re-running grub-mkconfig once the system was up generated a working config.
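The manual boot from the normal-mode prompt looked roughly like this (the kernel path and device node are illustrative):

```sh
grub> insmod ext2
grub> set root=(hd0,msdos1)
grub> linux /boot/vmlinuz root=/dev/sda1
grub> boot
```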

With a faster build machine, the next step is to produce a fresh image, address these nits, and start putting things together to share.

Execsnoop via bpftrace

LFS, round #4

I haven’t made any progress for a couple of weeks, but things came together and instrumenting libc works as expected. One example, demonstrated in section 12.2.2 of chapter 12 of the BPF Performance Tools book, is attempting to instrument bash compiled without frame pointers, where you only see a call to the read function.

Instrumenting bash which has been built with -fomit-frame-pointer.

Compiling with -fno-omit-frame-pointer produces a stack trace within bash, but the first frame is unresolved if libc isn’t also built with it, resulting in a hexadecimal address rather than a call to _start.

stack trace from bash on system with bash and libc built with -fno-omit-frame-pointer.

I’m going to look at integrating ZFS support next but I’m thinking of sharing a build as-is for the benefit of anyone wanting to work through the BPF Performance book.

LFS, round #3

With the OS image that I wrote about in the previous post I was able to build a new distro with the various substitutions I’d made. There were three things that I wanted to mention in this post. First, it turns out Linux has a bc(1) build dependency, which I found when I omitted building it and came to compile the kernel. Second, you really need to run make defconfig before visiting the kernel menuconfig, otherwise you end up with a “default kernel config” lacking any on-disk file system support (memory and network file systems are still supported). This was the reason why the kernel was unable to find init in the previous post.
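In other words, the safe ordering is (a sketch, run from the kernel source tree):

```sh
make defconfig    # seed .config with sane defaults for the architecture
make menuconfig   # then customise interactively on top of those defaults
```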

My workflow is that I have a flat file which I mount via loopback, and I use that as the file system in which to build the OS in a chroot. When it comes to installing a boot loader using grub-install, if sysfs is not mounted at a location within the chroot, grub-install will fail, complaining Unknown device "/dev/loop10p1": No such device, p1 being the child node for the first partition of the file backed by /dev/loop10. This is odd, as grub-install was pointed at /dev/loop10, the parent node. The reason is that it is trying to enumerate the start of the partitions so it can work out the start sector via sysfs (see grub-core/osdep/linux/hostdisk.c, starting at sysfs_partition_path()). Elsewhere this might have been achieved via ioctls; either way it is asking the kernel, but the dependency on a pseudo file system being mounted in place threw me. Looking around /sys/devices/virtual/block/loop10, I see files which expose various characteristics, including offsets and start points for each partition.
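For illustration (loop10 and the paths are from my setup; adjust to taste), the workaround and the information grub-install is after look like this:

```sh
# inside the chroot: give grub-install the sysfs it expects
mount -t sysfs sysfs /sys

# the per-partition data grub-install reads, e.g. the start sector
cat /sys/devices/virtual/block/loop10/loop10p1/start
```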

My focus is now on building the dependencies for bpftrace which means getting BCC built and installed first.

LFS, round #2, 3rd try

In my previous post I ended with the binutils test suite not being happy after veering off the guide and making some changes to which components were installed. I decided to start again, but cut back on the changes, to see just how much I could omit from installing and still get to the point of completing chapter 8. Ideally, I would like to shed everything that’s only a build dependency. It was doable, but the sticking point was Python, which is needed by Meson/ninja in order to build systemd. Though you build Python earlier, in chapter 7, at that stage it is built without the ctypes module, as ctypes requires libffi, and the ctypes module is needed by Meson.

I thought I’d cheat by using pkgsrc to satisfy the build dependencies, but its infrastructure for detecting termcap support is unable to detect support via ncursesw, which LFS uses. Opting to prefer satisfying all dependencies from pkgsrc, which is now the default setting for pkgsrc on Linux, created a new problem: components compiled manually outside of pkgsrc which call pkg-config would link against the pkgsrc versions of dependencies. To sidestep this issue I moved the pkg-config binary from pkgsrc out of the way, whereupon I hit an issue with linking systemd. After a night’s sleep I found that ninja in pkgsrc is patched not to adjust the rpath in binaries, and this is needed for systemd’s binaries because the libraries they depend on are tucked away in a subdirectory.

Upon completing chapter 8, I went back and started afresh once more, this time with the intent to make changes and substitutions once again. I installed bash as /bin/bash but did not create a link to /bin/sh, and was surprised to find most things were happy with that (the autoconfed infrastructure could cope), until I reached binutils in chapter 8, where tests called /bin/sh explicitly. At this point I installed mksh and pointed /bin/sh at it. This revealed various failures from scripts and tests in other packages built after binutils in chapter 8, most significantly from GCC’s build infrastructure. Setting the CONFIG_SHELL variable to /bin/bash when invoking configure ensured that bash was called instead of sh during the build and when invoking the test stage, as the SHELL variable inherits this setting down the line, and things move on smoothly. I still need to look at getting binutils to handle the override as well, rather than hardcoding /bin/sh.
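The override is just an environment variable at configure time; a sketch of the invocation:

```sh
# force bash for configure, the build and the test suite;
# SHELL picks up CONFIG_SHELL and carries it down the line
CONFIG_SHELL=/bin/bash ./configure --prefix=/usr
make
make check
```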

All build dependencies were installed in a separate prefix so that they could be removed after the build. m4, make, bison, Perl, Python, gawk, pkg-config (built with pc location set to /usr/lib/pkgconfig), autoconf, automake, libffi, check, expect, flex, TCL were installed in this location.

Python’s build infrastructure assumes the system provides libffi, and if the system doesn’t, it struggles with linking. There’s a bug report to teach the build to make use of the information from pkg-config for libffi, but the proposed patch did not work in my case, as the location under the new prefix where libraries are installed was not in the search path of the dynamic linker. Since I was installing Python in the same prefix as libffi was already installed in, I adjusted the rpath by setting LDFLAGS.
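Roughly as follows (/opt/tools is a placeholder for my separate build-dependency prefix):

```sh
# bake an rpath into the Python binaries so the dynamic linker finds
# the libffi living under the same out-of-the-way prefix
LDFLAGS="-Wl,-rpath,/opt/tools/lib" ./configure --prefix=/opt/tools
make && make install
```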

Besides the bash-to-mksh swap for /bin/sh, I replaced gawk with nawk once more; there was no fallout, though I did also install gawk under the new prefix as glibc requires it. tar was swapped for bsdtar from libarchive, and Man-DB for mandoc.

I skipped texinfo as it has a Perl runtime dependency and I don’t want to include Perl in the base OS. Groff was out as it has a texinfo dependency. I omitted libelf as I thought it was only used by tc from iproute2, which is for setting QoS policies; it turns out it’s a build dependency for the kernel, so that went back in.

With everything in place, I managed to build a kernel which for some reason couldn’t go multiuser because it couldn’t find init or sh! Comparing the config with my image from round #1 showed lots of differences, which was baffling, as I thought I’d only made the changes the LFS guide suggests. Starting with a new config resolved the issue. I can only suspect that I must’ve pressed the space bar by accident when navigating the kernel config menus, which switched a bunch of stuff off. 🙁

I now have an OS image which appears to work. For the next round, to put it to the test, I am going to try to use it to build a new distro. I will need to address the binutils build infrastructure issue so that I can point it at bash, otherwise I suspect I will run into the same issues again when running the test suite (see previous post). I would also like to try to swap binutils out for elftoolchain. I have also been thinking about subsequent OS upgrades and using mtree for that.

LFS, round #1

Following on from the previous blog post, I started down the path of building a Linux From Scratch distribution. The project offers two paths, one using traditional SysV init and the other systemd. I opted for the systemd route and followed the guide; it was all very straightforward. Essentially you fetch a bunch of source archives off the internet, then run tar, configure, make, make test, make install a bunch of times, with some system setup in between, before compiling your kernel, putting it into place in /boot, and getting GRUB installed and configured.

The book assumes that you have a system running Linux already and that you have spare space which you use for a new partition to install the distro that you built. I opted instead to create a 10GB file which I mounted via loopback, with the intention of using it as the boot disk of a virtual machine.
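The loopback setup amounts to a few commands (sizes, paths, and the loop device name are illustrative; losetup and mount need root):

```sh
dd if=/dev/zero of=lfs.img bs=1M count=10240  # 10GB flat file
losetup -fP --show lfs.img                    # attach and scan partitions; prints e.g. /dev/loop0
# partition with fdisk /dev/loop0, then:
mkfs.ext4 /dev/loop0p1
mount /dev/loop0p1 /mnt/lfs
```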

The guide has 11 chapters which take you through building software on the host, then as a new user (for a clean environment), then in a chroot, with three iterations of GCC and binutils builds. For each software component that you build, the guide instructs you on how to run its test suite, and what failures should be expected and why, before performing an install.

For each component that you build, the guide documents why a patch is applied and what the specified configure options mean. At the end of each section all the installed components are documented, followed by a short description of each item. Unfortunately the dependencies are not documented, but they are sort of implied by the order in which things appear in the guide.

Chapter 8 is the most laborious, with a hefty 71 packages to install/reinstall. The end result is a fully fledged environment with Autotools, Perl, Python, TCL, GCC, Vim, various libraries and compression tools. If you follow the guide every step of the way, it should work a-ok in the end, provided the test suites passed as expected at each stage.

After I finished all 11 steps, I had to convert the flat file which I’d created with dd(1) to a format which VirtualBox would recognise. I wasn’t sure if any of the supported formats was simply a flat file with a new file extension, and it was quicker to convert it to a vmdk file than to work through the list to find out. On the first try it made it to the GRUB menu, which was nice.
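The conversion itself is a single VBoxManage invocation (file names are illustrative):

```sh
VBoxManage convertfromraw lfs.img lfs.vmdk --format VMDK
```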

Grub menu

Followed by a panic as I guessed the wrong device node to specify as root in my grub.cfg.

Panic on first boot

A re-edit of the config to specify the device node hinted at in the kernel panic, and I made it to the login prompt.

First successful boot

At this point I began thinking how much trouble I could get into by substituting or omitting components and started a fresh new build.

Inverse vandalism: the making of things because you can

Alan Kay, “The Early History of Smalltalk,” ACM SIGPLAN Notices Volume 28, March 1, 1993

LFS round #2 started life with nawk instead of gawk, no bash installed but mksh as /bin/sh, and BSD make instead of GNU make. No Perl, Python, TCL, gettext, bzip2, xz. BSD make got swapped back for GNU make at the first step, as it wasn’t happy about the sys.mk that was installed, but that will be revisited. I made it back to chapter 8 quickly (look ma, fewer dependencies) and things began to fall apart when rebuilding glibc: it turns out it really wants bison, Python, and gawk, not nawk.

Glibc also really wants bash. Its configure test is happy with mksh and passes, but it became apparent it wanted bash when running the test suite, as some tests call /bin/bash specifically and stop when it is not found. At this point my environment began behaving strangely, so I exited the chroot and then couldn’t get back in. Running strace on chroot showed that calls to execve() were returning ENOENT. Rebuilding glibc from the host environment allowed me to get back into the chroot once again, at which point I installed bash.

For glibc’s Python dependency, I decided to treat it as part of the bootstrap kit, as it seems to be only a build dependency. Python got built without shared components (--disable-shared) and installed in a separate prefix, with the plan to remove it after the system is built. From glibc I jumped to building binutils in my chroot, and again things came tumbling down during the test suite run. It was not happy about finding libgcc_s, despite the system being aware of it in its ld cache, but I haven’t had a chance to investigate further. I feel very much lost in the bazaar, but I’m having fun. 🙂

How to open source: going from NetBSD to Linux

TL;DR: a BSD user tries something else and wonders why things are different.

This post has sat in draft form for quite some time. At first it was written with highlighting the NetBSD project in mind, and I started thinking about revisiting it recently due to frustration with running a mainstream Linux distribution when investigating:

  • how some critical libraries I was running were built
  • what, if any changes were made to them
  • wondering why the source repositories for components were buried away if at all available.

The recent article on LWN titled Toward a “modern” Emacs mentioned the frustration with distributions, which provided sufficient confirmation bias to get this together and posted. Note: this is not intended as a bragging contest about NetBSD or pkgsrc, or a put-down of Linux; perhaps there are things I’m not grasping, and I’m expecting one to be like the other.

Each technology community has a set of norms around how they interact with their technology. With regard to obtaining software, for example, mobile users obtain theirs from an “app store”. macOS/Windows users traditionally would install packaged software, but now mostly obtain their software from a store as well. It would be odd on these platforms to be given a source archive and asked to compile the software yourself as a user (if the source code was even available to users). Unix was the opposite: it was common to receive software in source form and have to compile it yourself. By association and nature (Open Source software) so do GNU/Linux distributions, however binary packages are provided and their use encouraged. The packages save a great deal of compilation time and lower the barrier for users, which again is a good thing.

I get the impression the details regarding source code and changes do not get the same spotlight, especially in a security context. For example, as I edit this post, among the most recent advisories on the Debian security page is an advisory for ModSecurity; it is fairly short, lists the CVEs, and states “We recommend that you upgrade your modsecurity packages.”. If I’m interested in the actual changes to the package, they’re buried five pages away from the advisory. The GUI update manager on my distro goes as far as collapsing the description panel for the updates, which I find amusing.

Software updater in Ubuntu

I agree hiding technical detail from a user is a valid case. Actually, while trying to take this screenshot, I visited the bug report of the GCC update and, with a bit of clicking around, found a link to a diff of the changes. Why can’t the advisories document both paths (build your own or obtain the packages) and allow the user to choose?
I was hoping for something a bit more flexible which would allow me to use what’s in place and also allow me to rebuild the system or parts with ease should I wish/need to.
Relying on a distribution as a means of obtaining gratis binaries to use, at best, isn’t very appealing.
Use of Open Source software in such a way while completely acceptable overlooks the opportunity to mould software to your requirements should you be inclined.
Given a piece of software, to consume provided binaries, avoiding any customisation is akin to bending around an implementation and is actually heading in the opposite direction of what Open Source software is able to allow you to do.
Let me clarify, I’m not saying that just because a piece of software is Open Source it must be compiled by every user themselves for maximum benefit (a talk I gave in 2019 was torpedoed by the objection that one should build their own version of Chrome or Firefox 🙂 ).
I’m suggesting that if you are relying on tools of an Open Source nature, you are best off owning your stack.
That is, you take active participation in projects, for you are able to help shape the evolution of your tools through participating and get insight into upcoming changes.
This makes upgrades and maintenance smoother because you are not reliant on a 3rd party and their release cycle for updates, potentially resulting in long gaps between upgrades which could also mean big jumps between major versions when you do upgrade, bringing about many changes since the previous version you were running.
You become familiarised with the process to assemble your tools which helps when you are reasoning about your stack during debugging.
Questions like “are there local changes from your distribution?” are off the table, e.g. with Linux From Scratch.
Bad tools harbour bad habits.
The shortcomings of a bad tool are pushed on to the user/operator who is then forced to tolerate them and work around accordingly, leading to a clumsy workflow. See Poka-yoke
With a system composed of many such components, it becomes harder and harder to think about new ideas or existing problems in a new way because of the mental burden of coping with what is currently in place and adapting, leading to paralysis and surviving in maintenance mode where the system remains static and is kept running because no one dares make a change.

The enjoyment of one’s tools is an essential ingredient of successful work.

Donald Knuth, The Art of Computer Programming, Vol. II, Seminumerical Algorithms, Section 4.2.2 part A, final paragraph

Enter NetBSD and pkgsrc which is where I was coming from as a user.
NetBSD is an open source operating system with a focus on portability.
It has been around since the early 1990s and is the oldest Open Source operating system which is still actively developed as well as one of the oldest active source code repositories on the internet today. The lineage of the code base is easily traceable back to the early days of UNIX thanks to the CSRG archive repository. This is not so important as a first port of call for a new user or for day to day operation but provides useful insight during debugging/troubleshooting.
Having the source code alone is not as useful as having access to the source repo and the history of the code base with commit messages (not that all commit messages are useful).
As with the other BSDs, the current source repository plays a prominent role on the front page of the website and is very easy to find.
pkgsrc is NetBSD’s sibling packaging system project with a similar focus on portability.
pkgsrc provides a framework to package tens of thousands of pieces of software consistently across many different operating systems.
In combination, the two provide a complete stack to compose a system with, from operating system to a suite of 3rd party software (including Chrome- and Mozilla-based browsers, FYI! 🙂 ), or to take selected components from and extend other systems with.
As an example, a feature of NetBSD is the Rump Kernel. Rump allows you to instantiate an instance of the NetBSD kernel in the user space of another operating system instance. A common use of this in NetBSD is for testing: it is possible to perform tests on vital components of a system safely, and on failure the result at worst is a failed system process, rather than a system crash. This saves valuable time between iterations when debugging, especially on larger systems where boot processes run into minutes (think of a server with a large number of disks attached: easily ~10 minutes or more to POST and probe a full shelf of disks before even getting to booting the operating system). Rump can also be used to supplement functionality on other operating systems, saving the development time of device drivers or subsystems. An example of this is the use of Rump in the GNU/Hurd operating system to provide a sound system and drivers for sound cards.
pkgsrc, with its support for a range of operating systems, makes it possible to unify your workflow for deploying software across a range of systems. This makes it possible to run the same variety of software, with identical changes, regardless of operating system. pkgsrc also provides the flexibility to select where dependencies are satisfied from, where possible. That is, if the host operating system provides a component as standard, pkgsrc can make use of it rather than building yet another copy; or, as time goes on with legacy systems, it may be preferred not to use any such components provided by the host operating system but to only make use of components from pkgsrc, and this is also possible. Like pkgsrc, NetBSD has its own build framework which makes it easy to build a release, or to cross build from another operating system which has a compiler installed. It feels very much like NetBSD comes to you and you work on it from your environment of choice, rather than you having to change your environment in order to work on it, and the tools you become comfortable with you get to take with you to other platforms. You end up with a toolbox for solving problems.
The GNU ecosystem itself is also a vast toolbox to pick from, but I’m missing the integration, and I’m struggling with the fragmentation and the differences in project management, if any. Source code up on a project hosting site alone is no good; neither is a project site without access to the source code repository. You need both to engage with a project, to be able to track changes and to participate in the community. One doesn’t replace the other.

How I ended up here is that I installed Ubuntu because it provided ZFS support out of the box, so I didn’t need to worry about things like pinning kernel versions to prevent kernel updates from rendering my machine unbootable until I somehow built new modules, and I thought it would be the easiest way to work through the BPF performance book. My experience with Linux has been with traditional distros: I started on Slackware, then moved on to RedHat 5.x, SuSE 6.x, Debian (Woody) and now Ubuntu 20.04. I tried Gentoo once, about 15 years ago, but as I recall I never got past building an unbootable system from their live CD environment. I have not tried more recent distributions like Arch, Void and such. I’m currently playing with Linux From Scratch.

May the source be with you

Marshall Kirk McKusick

Book review: BPF Performance Tools: Linux System and Application Observability

It’s more than 11 years since the shouting in the data centre video landed and I still manage to surprise folks in 2020 who have never seen it with what is possible.
The idea that such transparency is a reality in some circles comes as a shock.

Without the facility to dynamically instrument a system, the operator is severely limited in the insight they can gain into what is happening using conventional tools alone. Having to resort to debugging tools to gain insight is usually a non-option, for several reasons:
1) it is disruptive (the application may need to be re-invoked via tooling);
2) it has a considerable performance impact;
3) it is unable to provide a holistic view (it may provide insight into one component, leaving the operator to correlate information from other sources).
If you do have the luxury, the problem is how do you instrument the system?
The mechanism offers the ability to ask questions about the system, but can you formulate the right question? This book hopefully helps with that.

To observe an application, you need both resource analysis and application-level analysis. BPF tracing allows you to study the flow from the application and its code and context, through libraries and syscalls, kernel services, and device drivers. Imagine taking the various ways disk I/O was instrumented and adding the query string as another dimension for breakdowns.

The BPF Performance Tools book centres around bpftrace but covers BCC as well. bpftrace is a DTrace-like tool for one-liners and for writing scripts in a language similar to D, so if you are comfortable with DTrace the syntax should be familiar, though it is slightly different.
BCC provides a more powerful but more complex interface for writing scripts, leveraging other languages to compose a desired tool. I believe the majority of the BCC tools use Python, though LuaJIT is supported too.
Either way, in the background everything ends up as LLVM IR and goes through libLLVM to compile to BPF.
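For a flavour of the bpftrace side, the classic execsnoop-style one-liner traces new processes via the execve tracepoint (needs root):

```sh
# print the argv of every process executed, system-wide
bpftrace -e 'tracepoint:syscalls:sys_enter_execve { join(args->argv); }'
```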

The first part of the book covers the technology, starting by introducing eBPF and moving on to cover the history, interfaces, how things work, and the tooling which complements eBPF, such as PMCs, flame graphs, perf_events and more.
A quick introduction to performance analysis followed by a BCC and bpftrace introduction rounds off the first part of the book in preparation for applying them to different parts of a system, broken down by chapter, starting with CPU.

The methodology is clear cut: use the traditional tools commonly available to gauge the state of the system, and then use bpftrace or BCC to home in on the problem, iterating through the layers of the system to find the root cause, as opposed to trying to solve things purely with eBPF.

I did not read the third and fourth sections of the book which covered additional topics and appendixes but I suspect I will be returning to read the “tips, tricks and common problems” chapter.
Of the first sixteen chapters, which I did read, the CPU chapter really helped me understand the way CPU usage is measured on Linux. I enjoyed the chapter dedicated to languages, especially the Bash Shell section.
Given a binary (in this case bash), it covers how you go about extracting information from it, whether it has been compiled with or without frame pointers preserved, and how you could extend the shell to add USDT probes.
I did not finish the Java section; it was too painful to read about what needs to be done due to the nature of Java being a C++ code base with a JIT runtime (the book states it is a complex target to trace), and I couldn’t bring myself to read the containers *yawn* chapter.
All the scripts covered in the book have their history covered in the footnotes of the page, which was nice to see (I like history).

I created the first execsnoop using DTrace on 24-Mar-2004, to solve a common performance problem I was seeing with short-lived processes in Solaris environments. My prior analysis technique was to enable process accounting or BSM auditing and pick the exec events out of the logs, but both of these came with caveats: Process accounting truncated the process name and arguments to only eight characters. By comparison, my execsnoop tool could be run on a system immediately, without needing special audit modes, and could show much more of the command string. execsnoop is installed by default on OS X, and some Solaris and BSD versions. I also developed the BCC version on 7-Feb-2016, and the bpftrace version on 15-Nov-2017, and for that I added the join() built-in to bpftrace.

and a heads-up is given on the impact running a script is likely to have, as some will have a noticeable impact.

The performance overhead of offcputime(8) can be significant, exceeding 5%, depending on the rate of context switches. This is at least manageable: it could be run for short periods in production as needed. Prior to BPF, performing off-CPU analysis involved dumping all stacks to user-space for post processing, and the overhead was usually prohibitive for production use.

I followed the book with a copy of Ubuntu 20.04 installed on my ThinkPad X230, and it mostly went smoothly; the only annoying thing was that user-space stack traces were usually broken due to things such as libc not being built with frame pointers preserved (i.e. built without -fno-omit-frame-pointer).
Section 13.2.9 discusses the issue with libc and libpthread rebuild requirement as well as pointing to the Debian bug tracking the issue.
I’m comfortable compiling and installing software, but I didn’t want to go down the rabbit hole of trying to rebuild my OS as I worked through the book just yet; the thought of maintaining such a system alongside binary updates from the vendor seemed like a hassle. My next step is to address that so I have working stack traces. 🙂

Besides that, I enjoyed reading the book especially the background/history parts and look forward to Systems Performance: Enterprise and the Cloud, 2nd Edition, which is out in a couple of months.

Lessons learnt from adding OpenBSD/x86_64 support to pkgsrc

Before even getting into the internals of operating systems to learn about the differences among a group of them, it’s fairly evident that something as simple as naming differs between operating systems.

For example, the generations of the trusty 32-bit x86 PC are commonly named i386 in most operating systems; FreeBSD may also refer to it as just pc, Solaris & derivatives refer to it as i86pc, and Mac OS X refers to it as i486 (NeXTSTEP never ran on a 386, it needed a minimum of a 486, and up until Sierra, machine(1) would report i486 despite running on a Core i7 system). This is one of the many architectures which needed to be handled within pkgsrc. To simplify things and reduce lengthy statements, all variants of an arch are translated to a common name, which is then used for reference in pkgsrc. This means that all the examples above are grouped together under the MACHINE_ARCH i386. In the case of 64-bit x86, commonly referred to as amd64, we group under x86_64, or at least tried to. The exception to this grouping was OpenBSD/amd64, and this resulted in the breakage of many packages, because any special attention required was generally handled under the context of MACHINE_ARCH=x86_64. In some packages, developers had added a new exception for MACHINE_ARCH=amd64 when OPSYS=OPENBSD, but this was not a sustainable strategy because, to be effective, the entire tree would need to be handled. I covered the issue at the time in A week of pkgsrc #11, but to summarise: $machine_arch may be set at the start in the bootstrap script, but as the process works through its list of tasks, the value of this variable is overridden, despite being passed down the chain at the beginning of each step. After some experimentation and the help of Jonathan Perkin, the hurdles were removed and thus OpenBSD/x86_64 was born in pkgsrc 😉

The value of this exercise for me was learning just how many places within the internals of pkgsrc I could set something (a consequence of coupled components sharing the same conventions: pkgtools, bsd make), and that really the only place I should be seeking to set something is at the start of the process, having that carry through, rather than trying to short-circuit the process and repeat myself.

Thanks to John Klos, I was given control of an IBM Power 8+ S822LC running Ubuntu, which I started setting up for pkgsrc bulk builds.
The first issue I hit was pkgsrc not being able to find libc.so; this turned out to be down to the lack of handling for the multilib paths found on Debian & derivatives for PowerPC-based systems.
This system is a little endian 64-bit PowerPC machine, which is a new speciality in itself, and so I set out to make my first mistake: adding a new check for the wrong MACHINE_ARCH. Having long forgotten the previous battle with OpenBSD/x86_64, I added a new statement to resolve the relevant paths for ppc64le systems. Bootstrap was happy with that & things moved forward. At this point I was pointed by Maya Rashish to lang/python27 most likely being broken; John had previously reported the issue and we started to poke at things. As we rummaged through the internals of pkgsrc (pkgsrc/mk) I started to realise we were heading down the wrong path of marking things up in multiple places again, rather than setting things once & propagating through.

It turned out that I only needed to make 3 changes to add support for Linux running on little endian 64-bit PowerPC to pkgsrc (2 additions & 1 correction 😉 ).
First, add a case to the pkgsrc/bootstrap/bootstrap script to set $machine_arch to the name we want to group under when the relevant machine type is detected. In this case, when Linux is detected running on a ppc64le host, set $machine_arch to powerpc64le. As this is a new machine arch, also ensure it’s listed in the correct endianness category in pkgsrc/mk/bsd.prefs.mk; in this case, add powerpc64le to _LITTLEENDIANCPUS.
Then correct the first change by replacing the reference to ppc64le for handling the multilib paths in pkgsrc/mk/platform/Linux.mk.
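Condensed into a sketch, the three changes look something like this (the file paths, variable names and arch names are from the post; the helper function name and exact case labels are illustrative):

```shell
# Sketch of change 1: pkgsrc/bootstrap/bootstrap sets $machine_arch once,
# up front, from what uname -m reports (function name is made up).
map_machine_arch() {
  case "$1" in
    ppc64le) echo powerpc64le ;;   # group Linux/ppc64le under powerpc64le
    *)       echo "$1" ;;
  esac
}
machine_arch=$(map_machine_arch "$(uname -m)")

# Change 2 (make syntax, in pkgsrc/mk/bsd.prefs.mk): declare endianness:
#   _LITTLEENDIANCPUS+=  powerpc64le
# Change 3: pkgsrc/mk/platform/Linux.mk keys the multilib library paths
# off the normalised name powerpc64le instead of the raw ppc64le.
```

Setting the canonical name once at the start is exactly the lesson from the OpenBSD/x86_64 episode: everything downstream then only ever sees powerpc64le.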

The bulk build is still in progress as I write this post, but 5708/18148 packages in, the only fallout so far appears to be the Ruby interpreters.

Goodbye Alphastation

The Alphastation was my second cool legacy UNIX workstation and the machine which got me started on FreeBSD & OpenBSD. I obtained it back in the summer of 2002. I first tried Red Hat Linux 7.2, which was available as a free download as a promotion to demonstrate the optimisation ability of the Compaq compiler suite for the Alpha. It was a terrible experience, consistent with my previous attempts at running Linux up to that point (I’d started off on Slackware in ’96, moved on to Red Hat 5.2, followed by SuSE 6.2). I soon dropped it & moved on to Debian 3.0 (Woody), which was OK, but the 7-CD set was a bit too much hassle for doing package installs, and the performance wasn’t much better compared to the “optimised” Red Hat, so I moved on to NT 4.0 Workstation & FX!32 & ran that for a bit before getting bored. In the new year the FreeBSD 5.0 release was announced & Alpha was a supported platform, so I gave it a try on this machine. Armed with a copy of the handbook & the help of IRC I made a lot of progress, first by dropping 5.0 & going back to version 4.7 after being told either x was broken in 5 or y was a bug in 5 one too many times. I was blown away by how much faster it was compared to the so-called “optimised” edition of Red Hat.
Towards the end of 2003 I started thinking about trying OpenBSD as a firewall after hearing about PF & deployed it when 3.4 was released. The Alphastation served as my gateway, connected to a 512k/128k cable modem connection, but I ended up dropping it & moving to i386 when 3.5 was released because the PHP MySQL extension was broken on alpha & I wanted to launch this blog.
After that the Alphastation was used less & less over the years, so I passed it on to a fellow techie who would appreciate it.

iPodLinux on iPod Classic

I’ve kept an eye on the iPodLinux project since I got my 120GB iPod Classic back in 2007. I was never able to try out the fruits of the project, as the last supported model was the one prior to the Classic &, from the description on the site, the reason was that the Classic & newer models used an encrypted firmware.
I was bored tonight & decided to revisit the project to see if any progress had been made, and found the site no longer loaded. Reading up on the Wikipedia page revealed freemyipod, which lists the device as supported, so I gave it a go.

Why would you want to do this?

  • Support for file formats not offered by Apple, e.g. FLAC & Ogg
  • Not being tied to an instance of iTunes on a specific computer
Installation is only supported via Linux or Windows & is fairly straightforward. I went with the “no iTunes installed” path on Windows and was done in a few minutes. The only slightly annoying thing is that the device needs to be formatted as part of the install process.

    Flashing iPod Classic

Why would you not want to do this?

  • Rockbox interface is clunkier than the Apple one
  • Losing the ability to use iTunes to sync music (the device presents itself as just another drive to the computer; you need to manage getting music onto the device yourself)

I think it was worth the effort to have gained some flexibility, & if the interface is really an issue, it’s an open source project, so just roll up your sleeves and get involved!
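Since Rockbox exposes the iPod as a regular drive, syncing music is just a file copy. A minimal sketch, assuming the device is already mounted (the helper name and mount point are made up for illustration):

```shell
# Copy a music library onto a Rockbox device mounted as an ordinary drive.
# sync_music SRC DST -- plain recursive copy; no iTunes involved.
sync_music() {
  src="$1"; dst="$2"
  mkdir -p "$dst"
  cp -R "$src/." "$dst/"
}

# e.g. sync_music "$HOME/Music" /media/ipod/Music
```

Any tool that can write files works equally well; rsync is handy for incremental updates.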

My 1st Patch!

Woohoo!
I’ve just created my 1st patch, to add support for Slackware to the iSCSI Enterprise Target software.

Read this guide if you’re interested in rolling out your own patches.

    --- Makefile.orig 2004-11-22 10:30:57.000000000 +0000
    +++ Makefile 2004-11-22 10:35:16.000000000 +0000
    @@ -28,6 +28,8 @@
    install -v -m 755 scripts/initd.debian /etc/init.d/iscsi-target;
    elif [ -f /etc/redhat-release ]; then
    install -v -m 755 scripts/initd.redhat /etc/init.d/iscsi-target;
    + elif [ -f /etc/slackware-version ]; then
    + install -v -m 755 scripts/initd /etc/rc.d/iscsi-target;
    else
    install -v -m 755 scripts/initd /etc/init.d/iscsi-target;

iSCSI On a budget!

Following the Quick Guide to iSCSI on Linux, I managed to set up an iSCSI target host on Slackware 10 running in a virtual machine on VMware, then connected to it from the Windows 2000 box which was the VMware host! 🙂

I used the iSCSI Enterprise Target rather than the Ardis Target which the guide covers, but as the Enterprise Target is a fork of the Ardis Target there is no variation in the steps carried out.
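For reference, exporting a disk with the Enterprise Target comes down to a short ietd.conf entry; a minimal sketch (the target IQN and backing device are made-up examples):

```
# /etc/ietd.conf -- export one LUN backed by a block device
Target iqn.2004-11.local.example:storage.disk1
        Lun 0 Path=/dev/hdb,Type=fileio
```

Restart the target daemon after editing and the LUN should be visible to any initiator that logs in.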

    The Windows initiator can be downloaded from here.