When I wrote up the previous post, there were a couple of points which I forgot to cover, and since then a few more things have come up on the same subject that I want to go over here as a follow-up.
I’ve been carrying on with the package building since the previous article, keeping to the same theme of old computer, old OS, old compiler, what can I build? (PowerPC Mac, running OS X 10.4, with GCC 4.0.1). I resort to GCC 5 if I need C11 support and can go up to GCC 7 if needed. I have not yet attempted to package a newer version of GCC, but at a certain point beyond GCC 7, the currently supported versions of GCC (11 and above?) require interim toolchain builds to go from a system with only GCC 4.x to 10+. Since GCC 5 alone takes around 24 hours to build on the hardware I have at hand, I’m putting off toolchain-related changes as much as possible.
One thing I seem to spend a lot of time on is digging up the requirements for software, since the current trend is at most to list the names of components, without any version info. There are a couple of scenarios where the requirements are skewed indirectly. The software I want to build has modest requirements, but it requires a critical 3rd party component which depends on newer tooling, thus setting a higher requirement; sometimes it’s just the build infrastructure which has a far higher system requirement. Of course, if I were working on a modern system with current hardware it wouldn’t be a problem, since either binaries would be readily available or building would not take much time. But if you’re going to strive for minimalism, perhaps avoid requiring build tools which need the latest and greatest language support and beyond. No matter how much time the shiny new build infra will save, it would take me many days to get the tooling in place just to attempt building your project.
I find myself in the situation where I now appreciate autoconf in these scenarios and think that projects should carry the infrastructure for both autotools and the fancier tooling. Autoconf may be old, odd, and slow, but it has been dragged through many scenarios and systems and has built up knowledge of edge cases, and in the world of old machines and uniprocessors, something that reduces down to a shell script and makefiles is just fine for such systems. A PowerBook would appreciate it; an iBook definitely would. Hence, I’ve been holding back versions in some cases where projects have moved on to different infrastructure, pinning things to the last version to support autotools, for now. Through doing this, I discovered that sometimes no mirrors are kept for the source files: the source archives are published on the project’s repo, but as archives generated from the source tag rather than the bootstrapped release archive which would have been published originally. That means a detour into bootstrapping before attempting to build, and additional build dependencies to pull in autotools. I guess I could patch that infrastructure in, having bootstrapped separately, but that would be a hefty diff.
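For reference, the bootstrap detour is roughly the following, assuming a tarball generated straight from a source tag and autoconf, automake, and libtool already installed (the prefix here is just a placeholder):

    cd project-1.2.3
    autoreconf -fvi            # regenerate configure, aclocal.m4, Makefile.in from configure.ac
    ./configure --prefix=/opt/pkg
    make
    make install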
Whilst making progress with packages, I made a discovery as to the source of my troubles with pthread support detection, which I wrote about in the previous post: the common failure is to check only for the presence of the header file and not for the implementation details. It turns out this is largely down to gnulib, the GNU portability library that is used widely across the GNU ecosystem. The issue there is that detection of pthread support is split across several modules covering different aspects, such as “is there a pthread.h?” and “is there support for rwlock?”, and in this fragmented way it is possible to misdetect functionality, since projects are selective in what they import from the library. One issue, now fixed, was the misdetection of support for rwlock, which OS X Tiger and prior lack. If projects import all of the pthread components from gnulib, detection of pthread support works correctly, the gnulib build process makes the correct substitutions, and its test suite passes. If only some parts have been imported, things fail when running the test suite because, for example, the test for pthread support tries to make use of rwlock support. There are many other issues with gnulib and its test suite broadly, completely separate from pthread support, so for now I’ve skipped running the gnulib tests. I am unaware of the scope of breakage in the library, since generating a library comprised of all gnulib components takes considerable time and disk space (the documentation states 512MB of disk space, but I crossed a couple of gigabytes before aborting). I have been working my way through the GNU software projects to see what’s broken in the current versions in relation to gnulib. I’m sad to say GNU Hello is a casualty and currently unbuildable on OS X Tiger. There is a lot more work related to gnulib that I haven’t even started on.
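To illustrate the difference, here is a minimal sketch of the kind of probe that avoids the trap: compile and link a program which actually uses rwlocks, rather than only checking that pthread.h exists (the compiler invocation is an assumption; -lpthread may be unnecessary on OS X, where pthreads live in libSystem):

    cat > conftest.c <<'EOF'
    #include <pthread.h>
    int main(void)
    {
        pthread_rwlock_t lock;                 /* the piece the post says Tiger and prior lack */
        return pthread_rwlock_init(&lock, 0);
    }
    EOF
    if cc -o conftest conftest.c -lpthread 2>/dev/null; then
        echo "pthread rwlock: yes"
    else
        echo "pthread rwlock: no"              # a header-only check would still have said yes
    fi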
Technical aspects of building software aside, one thing that keeps recurring is people’s response to legacy systems. On the plus side, folks work on restoring functionality on old systems; others question the age of the system and whether it is time to rip out support for it. I can’t help but wonder if this is conditioning from the Apple side, since support for other long-dead operating systems from extinct companies and products often remains in a project’s codebase. I was once asked, for a vacancy, whether I try new software and technology as soon as it becomes available. As a user of an iPod Classic, a PowerBook G4 running OS X Tiger, and a 2007 17″ MacBook Pro, with a resume mentioning packaging for legacy OS X, I answered yes and waited to hear back from them.
When I assembled the laptop at the time, I refrained from making any changes and just assembled the system as instructed. The change I had in mind was to toggle the switch on the CPU module to change the boot sequence, so that the storage onboard the CPU module (eMMC) could act as the primary boot device. I wanted to change it so that I could put the boot loader there and set up the root filesystem on the NVMe drive with ZFS. Since the switch is tucked under the CPU’s heat sink, I was worried that with a slip of the hand, before I’d even started using the machine, I would have screwed it up and damaged something. So the stock setup it was, and I’ve been with it since.
I bought the MNT Reform kit to assemble, with the ath9k wireless card, sleeve, a printed handbook, and the trackball and trackpad modules. I started off with the trackball installed, intending to switch over to the trackpad after some time. I’ve yet to switch to the trackpad. The buttons on the trackball module give a nice click and it feels nice to use, so I’ve stuck with it.
The wifi connectivity at my place isn’t that great, and even with other hardware and operating systems, including mobile phones, throughput is lacking. For the Reform, the antenna struggles to maintain connectivity, so I worked around it by using ethernet, which is fine at home. Outside it’s not such an issue, since when tethering the device acting as the access point is usually in very close proximity, or the access point is in the same room as the machine rather than on the opposite side of a property.
The laptop came with the v3 system image based on Debian sid (the unstable train) on a Transcend SD card, which I’m still using. Performance of the SD card has been good, and this was my first time running Debian sid. Despite being the unstable train it’s been fairly painless. I vaguely recall a package bug which caused issues with updates in the early days, but since then it’s been fine (colour me impressed). The only thing is the sheer volume of package updates: give it a week or so and there are easily a couple of hundred packages to update. I installed ZFS via DKMS, which I then used to create a zpool on my NVMe drive. The NVMe drive has a swap partition on it and a zpool with my data; the OS currently lives on the supplied SD card, along with the root of my home directory.
The system image is Debian sid with a specific kernel and drivers for the Reform, alongside some tools and customisations to make the system more welcoming, such as a login banner which lists useful commands at hand (defined as shell functions) like chat, which invokes an IRC client and joins the mnt-reform channel, or reform-handbook, which displays the system handbook in a browser. The system image comes with KDE, Gnome 3, Sway, and Window Maker all pre-installed. I tried Sway briefly since it is a performant window manager, but soon rolled back to Gnome 3 due to familiarity and comfort.
The system image comes with a bunch of apps preinstalled, such as KiCad, ScummVM, FreeCAD, and Inkscape. It sets a nice tone for the machine as a fun, creative space, and that’s pretty much how I’ve used it in this time. I have kept it separate from my daily environment and used it as a getaway from the usual, for when I want to focus on something.
Of the Reform-specific packages which are installed, there is the reform-check utility, which performs a sanity check on the configuration and makes suggestions for changes which have been integrated into newer builds, such as missing packages shipped with the system image or an outdated u-boot. This makes it easier to maintain an installed OS image and reduces inconsistencies when it comes to debugging.
If the pace of change of bleeding-edge Debian sid is too much, there is now a stable image available, currently based on bookworm. There is a path to switch an existing unstable system image to the stable one, but since you’re going back in package versions it gets a bit messy with old and new versions mixed, so it’s probably safer to go for a fresh install of the stable image, especially if you’ve been keeping your unstable system image up to date with package updates.
It’s been nice to be able to provide minor fixes and changes to the components which compose the system image. When I first got the laptop, u-boot lacked support for initialising the display; when the system booted, Linux would then initialise it. As development progressed, u-boot grew support to initialise the display, and it even displays the MNT logo! I believe these improvements were a community effort.
With Debian’s support for running different ABIs it has been possible to run Steam on the Reform. Since you’re running a duplicate userspace, the number of updates balloons, but apart from that it just works. The hardware is sufficient to play Thimbleweed Park, but games needing more advanced OpenGL support will run without displaying anything; for example, Monkey Island 4 runs but there’s just a black window. Since the system is using the open source driver for the Vivante GC7000 GPU, I wondered if it could be made to work using the vendor’s closed binary blob driver, which supposedly supports newer OpenGL, but I’ve not tried. The swap on NVMe was necessary here, since Steam will use up all of the 4GB RAM and work its way through the swap too (it really needs 8GB RAM), but that’s fine, there’s no noticeable frame rate drop in the high-paced action of a point & click adventure. Outside of Steam, the GPU is capable of handling Monkey Island 1 & 2 in ScummVM, and the Minecraft clone Minetest which ships with the system image works just fine (I’m not really a gamer, so don’t take that list as all the machine is capable of; I haven’t investigated extensively what works and what doesn’t).
I still enjoy looking at the machine just as much as using it. The Japanese keyboard layout with the backlight looks beautiful. The ability to take the machine apart easily has made a world of difference, since it’s not a chore to investigate issues or perform maintenance. Need to reflash the keyboard? Unscrew six screws and lift the bezel. Need to flash new firmware on the motherboard? Ten screws, and lift the base panel.
Since building the laptop, I have made two hardware changes.
Originally, the keycaps did not have a notch on the home row index keys, and since the layout is a little different from usual, it was a bit disorienting switching between machines with different keyboards while muscle memory was lacking. That’s no longer an issue, as notched keycaps became available, which I purchased and installed.
The battery board which came with the laptop originally had a couple of issues which were addressed in the updated battery board. With the original battery board, I needed a full battery charge if I was going to compile the ZFS kernel modules otherwise the batteries would not sustain the prolonged surge in use. With the upgraded battery board this is no longer an issue.
I’m still using the battery cells which came with the laptop, the laptop is fast to recharge them. There’s a short delay of around 30 seconds before the system detects the charger is connected and switches over, which has caught me out when I’ve realised at the very last moment that I’m about to run out of power and hurried to connect the charger to the power outlet.
Unfortunately I don’t have numbers on the runtime off batteries, as the system gets switched off in between uses and I’ve not bothered fighting sleep/resume: the original battery board caused issues with the system sleeping, and later, kernel bugs prevented the system from resuming correctly.
Of the hardware features, I’ve yet to use the HDMI port to connect an external display. The builtin LCD panel is nice to look at. Since I’m not using full disk encryption on the SD card, I use a Yubikey for SSH keys. The orientation of the USB ports means that the Yubikey touchpad faces down, which is a little annoying, but since it flashes when you need to touch it, it’s not something that’s going to go forgotten, though it is somewhat clumsy to have to lift the laptop up to touch it. I have a small USB hub which I tried connecting the Yubikey to so that I wouldn’t have to lift the laptop up; that didn’t really work, the Yubikey remained facing down. I really need a Yubikey Nano rather than the standard one.
The headphone socket is fine for headphones with a cylindrical connector like the Apple earphones, but if your headphones use a differently shaped jack, like an L-shaped, 3-pole one, it won’t go fully into the socket. This is due to the socket being positioned ever so slightly back, and with the side panel installed, the panel blocks the L-shaped jack from going fully in. That was a trivial fix: use a 15cm 3.5mm 4-pole male-to-female stereo aux audio extension cable.
One completely cosmetic mistake I made when assembling the machine on the first day was sticking the label with the unit’s details on the perspex cover directly above where the CPU heatsink is positioned; that was probably not the best place for it in the long term.
The way the LPC driver for Linux currently interacts with the kernel means there’s insufficient time for the NVMe drive to complete its own internal shutdown procedure, resulting in the unsafe shutdown S.M.A.R.T. counter being incremented every time the system is powered down via Linux. The workaround is to issue a reboot instead and switch the machine off at u-boot.
There have been some new components added to the MNT Store for the Reform since last year. The most recent addition is the CM4 adapter, which allows the compute modules by Raspberry Pi and Banana Pi to be installed in the Reform as a CPU module replacement, and games like Monkey Island 4 to be played without issue. When I can afford it, I really want to get the higher-end LS1028A CPU module, which has 16GB RAM and faster CPU cores. I don’t really strain the machine CPU-wise currently, but more RAM is always good. There are now FPGA modules, so you can replace the ARM CPU with an FPGA running a softcore, which is exciting and unique; however, the modules cost more than the laptop itself. I would definitely go for it if I had the cash spare. For now, I’ll have to live with the idea of a laptop form factor in which I could experiment with different CPUs just by synthesising them.
There’s a new Laird wifi antenna and anti-flexing bars for the keyboard, which are going to be my next purchases; I’m curious what difference the bars will make.
From the software side I’m actually happy running Debian on this machine, as there are binary packages for anything that I think of wanting to try, but I really want to switch to a root-on-ZFS setup and don’t feel that the DKMS path of rebuilding a module as a separate component works, since a kernel upgrade plus a failed ZFS module build will render the system unbootable (not catastrophic, but a hassle to recover from). I have hit this several times over the last year, for example when the licence on some functions in the kernel was changed to GPL-only, which broke the ZFS build. It’s not an immediate failure but something that flags up at the linking stage, if I recall correctly, which resulted in the module not building during the apt upgrade process. The workaround was to make a local modification to the ZFS licence ID so it would build successfully and to force a rebuild of the package. So I’m dragging my feet: pick up the ball again, integrate ZFS support into Viewpoint Linux and add an aarch64 build, or just continue with the convenience of a maintained Debian system? I’d need to refresh everything in Viewpoint Linux, and before that I need to get the framework into place so that it’s easy to repeat the process and maintain it going forward. Ahh!
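For what it’s worth, a very rough sketch of that workaround as I remember it; the file and field names are from memory rather than checked against the current OpenZFS source, so treat them as assumptions:

    # pretend to be GPL so the module links against the now GPL-only kernel symbols
    sudo sed -i 's/^License:.*/License: GPL/' /usr/src/zfs-*/META
    # force DKMS to rebuild the module for the running kernel
    sudo dpkg-reconfigure zfs-dkms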
Not having to focus on the OS and packaging side of things has definitely been appreciated over the last year. Being able to start off with an idea, skip the lengthy detour into compilation, and actually get on with what I had in mind is great, and it’s all stuff I would otherwise have to maintain myself. Hmm, perhaps this could be a good excuse to buy more hardware for the build infrastructure. There is a new Reform motherboard on the way; if I upgraded the CPU module along with the motherboard, I would have spare hardware for the build system. You have been reading the fantasy of someone with high-end CPU taste and microcontroller money. 🙂
# M I C R O S O F T H A C K !
# It turns out that the Mac Office (2008) rolled their own solution to roots.
# The X509Anchors file used to hold the roots in old versions of OSX. This was
# an implementation detail and was NOT part of the API set. Unfortunately,
# Microsoft used the keychain directly instead of using the supplied APIs. When
# the X509Anchors file was removed, it broke Mac Office. So this file is now
# supplied to keep Office from breaking. It is NEVER updated and there is no
# code to update this file. We REALLY should see if this is still necessary
Wasn’t Office 2008 a 32-bit app, in which case it would’ve stopped working long ago when they dropped 32-bit support from the operating system?
Something I’ve been thinking over on and off for a while is the meaning of portability as claimed by folks regarding software. Fish out an old computer with a matching vintage closed-source operating system: how does the portable software fare in such a situation? Supporting an ancient operating system when building recent software is enlightening; there’s so much room for dependencies to creep in, only noticed when the dependencies are not there. Ideally, at the least, documentation would cover when said functionality was introduced so you have a vague idea in advance whether something will work, but an exact idea of dependencies is needed to catalogue what is required.
I’ve been wading through building things on OS X Tiger again and looking for low-hanging fruit to tick off. Armed with the stock version of GCC 4.0.1 from Xcode 2.5, I’ve been avoiding going down the newer toolchain route and attempting to build anything which lists needing nothing more than C99/C++98, as I want to get as much done with the stock toolchain as possible, since it reduces the time needed when starting from a fresh setup.
GCC 4.0.1 supports C99 and C++98, though according to the GCC wiki, C99 support was not completed in GCC until v4.5. GCC 4.0.1 defaults to C89 unless a standard is specified via -std. I can define __DARWIN_UNIX03 if I want to kick it up a notch for modernity, but that’s at least one version of SUS behind, implementing changes to existing functionality from IEEE Std 1003.1-2001 (the defaults being older behaviour in some cases, to accommodate migration to the then-new behaviour, which has now become the expected behaviour). Still, I have a compiler which recognises -std=gnu99 and a system which was on the way to UNIX03 certification but not there yet (first reached in Leopard). See compat(5) from Tiger, Leopard, Catalina. On my quest for low-hanging fruit, I keep getting caught out by software which claims to require just language support but really needs functionality beyond the language and 3rd party libraries from the system. Some examples of things which have tripped me up:
Stating -std=c89 while requiring a compiler with C11 support.
Claiming just C99 support but requiring POSIX functionality from IEEE Std 1003.1-2008.
With a C11 compiler installed (GCC 5), missing POSIX functionality from IEEE Std 1003.1-2008.
The assumption that PowerPC hardware means host is running Linux.
Python modules with a Rust dependency (luckily that is a quick, hard failure on PowerPC OS X 🙂 )
Requirements beyond language support aside, another common issue is the misdetection of functionality through the test being too broad. I usually see this with pthread support, where more recent pthread functionality is required but all the build does is check for the presence of the pthread.h header file, rather than checking for required functions like pthread_setname_np(3) / pthread_getname_np(3). Luckily, on the user-space side there is help at hand, either via libraries which provide an implementation of missing functionality, or by re-implementing the functionality as a wrapper around what already exists. Sometimes there’s advice from the Windows community on how to use existing functionality for things which compatibility libraries haven’t already provided, since they still face portability-related issues on current OS versions. I’m just stuck with things which need hardware or kernel support, such as atomic memory operations. Going lower level, since the tooling is now so ancient, functionality which allows things to be treated generically across platforms hadn’t yet cross-pollinated: non-platform-specific mnemonics for the assembler are lacking, and flags for the linker to generate shared objects are missing. Luckily the linker issue is easy to patch and is forward compatible with newer OS versions, since they have retained backwards compatibility for the old way: specifying -dynamiclib instead of -shared, which is what is now commonly used.
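The patch amounts to something like this in a build script, a sketch with a hypothetical libfoo; newer Apple toolchains still accept -dynamiclib, so the old spelling keeps working going forward:

    # Tiger's toolchain wants -dynamiclib for shared objects; -shared is the usual flag elsewhere
    case "$(uname -s)" in
        Darwin) cc -dynamiclib -o libfoo.dylib foo.o ;;
        *)      cc -shared     -o libfoo.so    foo.o ;;
    esac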
The Practice of Programming, written in the late 1990s, before C99 was standardised, has a chapter on portability.
It’s hard to write software that runs correctly and efficiently. So once a program works in one environment, you don’t want to repeat much of the effort if you move it to a different compiler or processor or operating system. Ideally, it should need no changes whatsoever. This ideal is called portability. In practice, “portability” more often stands for the weaker concept that it will be easier to modify the program as it moves than to rewrite it from scratch. The less revision it needs, the more portable it is.
Of course the degree of portability must be tempered by reality. There is no such thing as an absolutely portable program, only a program that hasn’t yet been tried in enough environments. But we can keep portability as our goal by aiming towards software that runs without change almost everywhere. Even if the goal is not met completely, time spent on portability as the program is created will pay off when the software must be updated. Our message is this: try to write software that works within the intersection of the various standards, interfaces and environments it must accommodate. Don’t fix every portability problem by adding special code; instead, adapt the software to work within the new constraints. Use abstraction and encapsulation to restrict and control unavoidable non-portable code. By staying within the intersection of constraints and by localizing system dependencies, your code will become cleaner and more general as it is ported.
The Practice of Programming, Introduction to Chapter 8, Portability
Portable code is an ideal that is well worth striving for, since so much time is wasted making changes to move a program from one system to another or to keep it running as it evolves and the systems it runs on changes. Portability doesn’t come for free, however. It requires care in implementation and knowledge of portability issues in all the potential target systems. We have dubbed the two approaches to portability union and intersection. The union approach amounts to writing versions that work on each target, merging the code as much as possible with mechanisms like conditional compilation. The drawbacks are many: it takes more code and often more complicated code, it’s hard to keep up to date, and it’s hard to test. The intersection approach is to write as much of the code as possible in a form that will work without change on each system. Inescapable system dependencies are encapsulated in single source files that act as an interface between the program and the underlying system. The intersection approach has drawbacks too, including potential loss of efficiency and even of features, but in the long run, the benefits outweigh the costs.
The Practice of Programming, Summary of Chapter 8, Portability
I guess the union approach is what you would call the autoconf workflow: test for the required functionality and generate definitions based on the test results. Those definitions can then be checked for in the codebase to steer the build process.
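As a small sketch of what that looks like in practice, reusing the pthread functions from earlier as hypothetical examples:

    # configure.ac fragment: probe for the functions, not just the header
    AC_CHECK_HEADERS([pthread.h])
    AC_CHECK_FUNCS([pthread_rwlock_init pthread_setname_np])

    # ./configure then records the results in config.h, e.g. on an old system:
    #   #define HAVE_PTHREAD_H 1
    #   /* #undef HAVE_PTHREAD_SETNAME_NP */
    # and the codebase guards the optional paths with #ifdef HAVE_PTHREAD_SETNAME_NP.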
It seems that today, portable software mostly means that, given the listed requirements, you have a chance of building it on an up-to-date, current operating system, rather than on the earliest version where the requirements are available. For portability in terms of future-proofing, being explicit about expectations helps prevent breakage caused by riding on defaults which change over time. As proof of portability: test, test, test, beyond the current versions of mainstream systems. If not to extend support, then to bring clarity to expectations of where the software will be able to run.
The road to Uxn started off with getting SDL2 built on OS X Tiger. Not being familiar with SDL at all, I went in at the deep end with the latest and greatest release. As the bludgeoning progressed and the patches stacked up, I took a step back and wondered what had already been done to get SDL2 installed elsewhere.
SDL2 has moved forward with targeting features found in newer OS releases, so there’s a world of difference between the functionality found lacking in OS X Tiger and SDL2’s expectations; sometimes it’s just been a gratuitous bump imposed by external factors. Consulting MacPorts and Tigerbrew to see how they handled packaging SDL2, I noticed that Tigerbrew had a patched version of SDL2 2.0.3 specifically for Tiger. Great, that’s a start. I knew that Uxn required SDL2, but wasn’t sure whether it needed a specific version or if 2.0.3 would qualify. After some time compiling things (because a PowerPC G4 is mighty and slow) I had a version of SDL2. To be honest, I veered off into having the latest version of the dependencies to compile against. Building the latest release of FFTW on Tiger took some time due to a buggy m4 macro for autoconf which broke the configure stage on legacy OS X with GCC. At some point FFTW’s build process needed to differentiate between compilers on OS X and adjust flags for the linker and assembler, but the two offending tests didn’t work with Tiger’s mighty compiler: the test for the assembler flag makes the compiler print an error saying the flags are not supported, yet it returns 0 and the test passes; for the linker flags, the compiler errors out but the test passes anyway, resulting in subsequent tests failing because unsupported flags are then used. I initially tried to get autoconf to restrict the tests to newer versions of OS X, but failed. Searching around, I discovered the autoconf-archive project, a community-contributed archive of macros which extend autoconf’s tests. Replacing the macro included in FFTW for the compiler flag checks with copies from the autoconf-archive resolved the build issue. There isn’t yet a macro for invoking the compiler with flags for the assembler (-Wa), but there is a separate one for invoking the compiler with linker flags (-Wl), and that’s sufficient to move forward, since the issue with assembler flags is specific to GCC 4.0.1 and did not occur on newer versions of OS X with a newer compiler when I tested.
With SDL 2.0.3 built from the patches Tigerbrew used, it was time to try Uxn. Invoking the build.sh script soon started spitting out errors; again, Tiger’s compiler was too mighty. Uxn relies on type redefinition and so requires a compiler with C11 support, so that -Wno-typedef-redefinition will work. I used GCC 5, but according to the GCC wiki anything since 4.6 should suffice. With a new compiler the build progressed, then errored out at the linking stage with one missing symbol: it was looking for SDL_CreateRGBSurfaceWithFormat, and SDL 2.0.3 was too old. The function was introduced in SDL 2.0.5, and looking into the source of 2.0.5, they raised the OS version requirement: 2.0.4 and prior had a Leopard minimum requirement, while 2.0.5 targets Snow Leopard. Wanting to avoid a fight with a new release of SDL, I looked into what alternative functionality was there in the version of SDL that I could use. SDL_CreateRGBSurfaceWithFormat was created to make it easier for the programmer to create surfaces, by specifying a single parameter value for the pixel format; it is a successor to SDL_CreateRGBSurface, which instead requires each component of the pixel format to be specified individually. Though SDL_CreateRGBSurfaceWithFormat was introduced in SDL 2.0.5, SDL_CreateRGBSurface shipped with SDL 2.0, so I switched Uxn to use SDL_CreateRGBSurface; the build succeeded and then promptly failed to run when it finished. It turns out Uxn requires joystick support, but the SDL patch for 2.0.3 does not cover joystick or haptic feedback support, due to reliance on newer functionality which is lacking in Tiger’s mighty, ageing USB stack. This wasn’t much of an issue since Uxn supports the keyboard. Removing joystick support from SDL’s initialisation list and rebuilding Uxn resulted in a successful build and the emulator starting up with the piano rom (the build script does this by default unless you tell it not to run the emulator with --no-run). Cool! Let’s play keyboard! *cue harsh noises emitting from the laptop speakers* (following video link warning: turn sound volume down, don’t use headphones, link).
At this point I wasn’t sure if the issue was with the patched SDL build or Uxn. I chose first to rule out SDL and see if it was possible to generate sound correctly with a simple test case. I found the Simple SDL2 Audio repo, which I used as my test case. The source is just a couple of .c files and a header, leaving you to assemble things however you choose. This was my first time hand-assembling a library and then linking to it.
The test tool executed fine on my PowerBook and the wave files it played sounded as they should, so the issue was not with the patched version of SDL. I started to compare how the SDL API was used in Uxn and in Simple SDL2 Audio: the endianness of the audio format is a parameter, and while the example code explicitly specifies a little-endian format, the Uxn codebase uses an alias which defaults to the same little-endian format. So SDL has the concept of endianness, but the two codebases are playing different kinds of sound data: Uxn is playing raw PCM, whereas the example project is playing WAV files, which are little-endian. Switching Uxn to use a big-endian audio format resulted in the piano rom sounding correct on my PowerBook, since it’s booted in big-endian mode. According to the SDL_AudioSpec documentation there are formats which default to the byte order of the system being built on; using one of those instead resulted in correct playback regardless of the endianness of the host system.
With Uxn working on my PowerBook running OS X Tiger, it was time to upstream the changes. As the project is hosted on sr.ht, I needed to become familiar with git’s email workflow by following the tutorial on git-send-email.io. Given an up-to-date version of OpenSSL, Perl, and git, I was able to upstream the changes to Uxn using my 12″ PowerBook G4 running OS X Tiger (look ma! no web browser!).
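The one-off setup boils down to something like the following; the SMTP details and list address here are placeholders rather than the real ones:

    # tell git how to send mail
    git config --global sendemail.smtpserver     mail.example.org
    git config --global sendemail.smtpuser       me@example.org
    git config --global sendemail.smtpencryption ssl
    # send the last commit as a patch to the project's mailing list
    git send-email --to="~user/project-devel@lists.sr.ht" HEAD^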
With the exception of skipping joystick support in SDL, everything is upstream, and that’s a trivial one-word deletion. So as it stands, to run Uxn on OS X Tiger, one needs to install SDL 2.0.3 and GCC 4.6 or newer (I used 5), edit uxn/src/uxnemu.c to remove the SDL_INIT_JOYSTICK flag from the SDL_Init() statement, then run Uxn’s build.sh. See the Awesome Uxn list for links to get started once everything is built and ready to go.
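Put together, and assuming SDL2 2.0.3 and a newer GCC are already installed, the steps look roughly like this (the sed expression is a guess at the shape of the SDL_Init() call, so check the file by hand):

    cd uxn
    # drop joystick init; the exact flag list in SDL_Init() may differ
    sed -i.bak 's/|[[:space:]]*SDL_INIT_JOYSTICK//' src/uxnemu.c
    ./build.sh --no-run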
Mac OS X has always come bundled with a version of Emacs; I see a copy of a binary named emacs-20.7 on the OS X 10.0 installation CD. On a fully patched Mac OS X 10.4 Tiger the version of Emacs is 21.2.1, it’s 22.1.1 on Leopard, and on more recent versions of macOS, since Catalina, it has been replaced by mg.
I wondered what the most recent version of Emacs to run on Tiger would be. The Emacs for Mac OS X project maintained builds at some point and has copies of the most recent builds and, impressively, all their builds going back to 2009. Great. I fetched both the nightly and the 24.5-1 build for Tiger and tried them, only to be presented with errors about the _getrlimit$UNIX2003 symbol not being found when running the Emacs-powerpc-10_4 binary inside the Emacs.app. Double clicking on the Emacs icon attempts to launch the application, and an icon appears on the dock before disappearing again. I left it there and started looking at building the most recent version I could myself.
The issue was apparently resolved in Emacs 25, but more than just that change is required for 24.5, as cherry-picking it didn’t resolve the crashes and I didn’t investigate any further.
Emacs 23 was good. I had a working application on Tiger with 23.4.1.
I headed back to Emacs 24 and built 24.2, and that was good too. Currently I’m stuck on that version and haven’t yet investigated where things broke on the road to 24.3. Emacs 24 introduced support for installing packages from a remote repository. Due to its age, its approach to handling encrypted connections via TLS won’t work with newer versions of OpenSSL, since it tries to invoke s_client with SSLv2 turned off, which of course is no longer a supported feature. Another option is to use GnuTLS, but I haven’t yet managed to build a new version of GnuTLS on Tiger to use with it, since GnuTLS has grown another dependency which requires a compiler with C11 support. This is not so much of an issue for connecting to ELPA, but MELPA requires HTTPS, and given a recent version of OpenSSL and Emacs 24.2 built with a modified lisp/net/tls.el to drop -no_ssl2, it’s still not happy about something and will sit there for some time and then fail. I sidestepped the issue for now by using a MELPA mirror which works via HTTP, and needed a couple of changes from Emacs 25 to package.el for version handling, otherwise it would fetch the repository metadata and fail due to an invalid version.
If you’re looking for a newer version of Emacs with GUI support to run on your PowerPC Mac, I’ve posted the binaries here. The binaries were built on OS X Tiger 10.4.11 PowerPC with ./configure --enable-ns-self-contained --with-ns and run-tested on OS X Tiger & Leopard PowerPC. No idea if or when another build will happen, but they’ll end up in /files/macosx-powerpc-emacs/ if I do one.
Things seem to have been ramping up on the RISC-V front over the last 12 months. Various open source projects have started offering official support, and new hardware is being released or announced regularly. As I write this, Sipeed’s Lichee RV has been around for quite a while (since 2021?) and soon there will be new hardware, announced before the end of 2022, with a quad-core SoC and up to 16GB RAM. Following the Kinetic Kudu release, Canonical announced support for Sipeed’s Lichee RV RISC-V board on their blog, and the wiki article made it seem fairly painless, so I gave it a try. The Lichee RV with dock is advertised as a Linux starter kit on Sipeed’s AliExpress store; it’s a single-core, uniprocessor board based around the Allwinner D1 SoC, which comes with 512MB or 1GB RAM. The 1GB RAM version with the standard dock+wifi is the version I have been playing with. There is now a pro version of the dock available, but that had not yet been announced when I went shopping. Similar-spec hardware which instead features a mini-HDMI and dual USB-C ports is available from another company, called the Mango Pi MQ Pro; it’s an actual single circuit board, with no dock required.
The setup process to get started with the new Ubuntu release was straightforward: download the image, dd it to an SD card, and boot the device from it. It’s all headless, since I have no LCD panel connected to the board and the HDMI port on the dock didn’t seem to work out of the box when I tried it, though it is listed as supported on 3rd party and vendor-supplied distributions. The board was connected to a network using a USB Ethernet adapter so that I could continue setup, but it is possible to obtain serial console access via the GPIO pins and USB-C.
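The flashing step is the usual dd dance; the image name and target device below are assumptions, so double-check the device node before writing:

    # write the Ubuntu preinstalled image for the Lichee RV to the SD card
    xz -dc ubuntu-22.10-preinstalled-server-riscv64+licheerv.img.xz \
        | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync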
The dock which the Lichee RV board connects to provides a Realtek RTL8723DS wifi (2.4GHz only) & Bluetooth controller. The wifi drivers for Ubuntu are provided via dkms which means they will be compiled during the install process. An hour and a half after invoking apt install licheerv-rtl8723ds-dkms the wifi driver was built, installed, and working.
Unlike the x86_64 builds of Ubuntu, the RISC-V build does not come with pre-compiled ZFS support; it needs to be installed via DKMS. The single-core, uniprocessor board took around 4.5 hours to complete apt install zfs-dkms. With ZFS support I could move on to attaching some disks. I wasted quite a lot of time by fishing out an old USB 2.0 powered hub and attaching a set of disks to it, but despite the power supply being a big heavy brick, it provided insufficient power to run anything more than a single hard disk safely. I tried attaching 3 disks and the board itself to the hub and the disks suffered (insufficient power to spin up); I tried 2 disks and the SBC, but the power draw was too much, which made the board unstable as disk I/O increased. It was amusing to see (imagine a scenario where you are trying to throttle the system in order to survive). After some experiments with online shopping I found a powered USB 3.1 hub which provided sufficient power to run the disks; running the disks and the SBC from the hub was still too much, so the SBC ended up on a separate power supply. I created a new mirrored pool on the pair of disks attached to the USB hub, attached a third disk which contained another ZFS pool, then began rsyncing the data from the single-disk pool to the new mirrored pool as a torture test to see how things would run, if at all. The new mirrored pool had ZFS compression=on set, though in hindsight that was a waste of CPU cycles, as the resulting compressratio was 1.03x. During the copy the system suffered several module crashes related to ZFS functions, which stopped the copy process, though the system was still available to SSH into and issue a reboot. Despite only having 1GB RAM, it took over a week to copy 2TB of data across the two pools, but many hours were lost between the module crashes and me noticing the issue to restart the process. Once the data was all copied across, it took several days to scrub the new mirrored pool. The load average sat around 5 throughout the entire period, from copying to scrubbing, waiting on I/O.
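Roughly, the torture test boiled down to this; the pool and device names are placeholders:

    # new mirrored pool on the two disks behind the powered hub
    sudo zpool create tank mirror /dev/sda /dev/sdb
    sudo zfs set compression=on tank      # compressratio only reached 1.03x, so not worth it here
    # import the existing single-disk pool and copy everything across
    sudo zpool import olddata
    sudo rsync -a /olddata/ /tank/
    # finally, verify the new pool
    sudo zpool scrub tank
    zpool status tank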
Since it’s a uniprocessor board, the system is unable to maintain a wifi connection while sustaining prolonged CPU load: the connection drops off, and I’m not sure yet if it’s possible to teach it to retry. It has no such issue with USB Ethernet; the CPU can be maxed out for some time and it’s still possible to reach the system, so perhaps the wifi driver is the culprit, or the lack of an antenna on this small piece of hardware.
There’s a lot of software available, either packaged by Canonical or as official binaries provided by the projects themselves, so openjdk, zig, llvm, and rust were readily available to install. I mention these languages as examples because they would have a hefty build time if you were starting from scratch generally, but especially on a platform with limited resources.
With llvm installed I was able to compile bpftrace. Something is not quite right though: when running execsnoop, invoking the same command several times produced different results in the traced output; sometimes the exact command with arguments is traced, sometimes just the fact that a command was executed, and sometimes the entry is blank with only a timestamp and process ID. Disabling systemd-resolved and switching to Unbound as the local resolver made a big difference to the responsiveness of the system, noticeable when SSHing in: the password prompt returns quicker.
It’s nice to see that the software support is there. It is an entry-level system, but being feature-rich in terms of readily available software makes it a handy piece of hardware for experimenting, one which can be left on, lying around. My hope was to replace an ALIX 2c3 board with the Lichee RV for providing services like shell, routing (for the odd occasion when I try to use computers without wifi connectivity), and a caching resolver via Unbound, but I need to make progress with stable wifi connectivity before I can swap the systems around. Having ZFS support on such a platform is really cool; bugs have been reported in the Ubuntu Launchpad for the module crashes: #2002663, #2002665, #2002666, #2002667. Trying stock upstream versions is next on the list, if I can get the build time down by building the modules on an emulated guest running elsewhere. There’s also a port of xv6 with D1 SoC support which is on my list to try out. For now my Lichee RV board sits running behind the ALIX board. It has been running 24×7 for the last couple of months: the first month stress testing the board, copying the data across and building bpftrace; the second month mostly running idle to see if it’s stable, while I focused on something else (organising media).
shortcoming Performance garbage, single core, very cardy to use Some software porting risc-v has wonderful bugs No GPU, very laggy
Description in an advert on AliExpress by 3rd party for a D1 based RISC-V SBC. Not sure what they mean by “very cardy” 🙂
The Finder info pane tells me that I created my iPhoto Library back in 2007. There wasn’t a lot there, but I had gone to the effort of creating albums and organising photos some years ago and left it at that. Time moved on and iPhoto was eventually discontinued; the migration path was meant to be the Photos app which comes bundled with macOS as standard. I never made the switch to Photos, just like I didn’t switch to the Music app. Enter Retroactive. Retroactive lets you install a bunch of unsupported applications on more recent versions of macOS, including iTunes and iPhoto. The process is fairly straightforward, and in little time I had both applications running and could see my old photos in iPhoto, and I left it there.
As things began winding down at the end of last year, I thought I should start organising my music collection and photos, so I fired up iPhoto and dragged a bunch of photos in, over 9000 to be exact. iPhoto started working through them; normally it would cycle through the images as it imports them, but it didn’t do that this time.
Then the duplicate detection stepped in to ask what to do about a detected duplicate photo; again, normally a thumbnail would be shown to help, but it was just grey. I assumed it was just a graphics glitch because the photos show up fine in Finder, but when the import was finished, the copies of the photos in the iPhoto Library were sometimes plain black or white, so it looks like iPhoto had mangled the imported copies. The original source files were fine and viewable in Finder and Preview.
The images in the iPhoto Library are accessible via Finder: just right click / control click on the iPhoto Library and select Show Package Contents, and there is a folder called Masters with a folder structure reflecting what you see in iPhoto. I guess I could have just dragged that out and moved on to the new application I was going to use, but what if there was information that would be lost because iPhoto stored it alongside those files? It certainly has the notion of revisions. Searching around, I found a now-discontinued tool made by Google called Phoshare, which allows you to export an Aperture or iPhoto library as a folder structure. The application is written in Python and uses Tk via Tkinter for the GUI; the Mac application is a universal binary covering *both* PowerPC and 32-bit x86, which means it won’t run on Catalina or newer because support for 32-bit x86 binaries was dropped. Phoshare is open source and the Subversion repo is still available, but an odyssey into Python 2.x and getting all the relevant modules installed was going to be a faff. It would’ve been a good excuse to dust off a PowerBook, but the library was on a zpool and it was quicker just to import the zpool on an older MacBook Pro. 🙂
Phoshare requires ExifTool to be installed and looks for it in /usr/bin, which can’t be modified on more recent versions of macOS, so you will need to disable SIP and symlink /usr/local/bin/exiftool into /usr/bin if you want to preserve metadata from your library.
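With SIP disabled, the workaround is a single symlink; the source path assumes ExifTool ended up under /usr/local/bin:

    sudo ln -s /usr/local/bin/exiftool /usr/bin/exiftool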
Phoshare running with default settings will recreate the folder structure as you see it within iPhoto, in the location you chose to export the photos to, including a copy of the original photos in a subfolder if there are revisions of a photo, e.g. if you used iPhoto to enhance them.
With my photos exported, I initially reached for darktable but soon realised that it does more than I need; a simple photo manager to organise photos, like Shotwell but cross-platform, would be ideal, whereas darktable is focused more on processing capabilities and less on the file management side. Searching around, it turned out that digiKam is cross-platform, so I gave it a try and stuck with it. Over the past few weeks I have been trawling through disks, finding and adding photos to my library, which started out as the folder exported by Phoshare. digiKam does duplicate detection, which has made adding things a lot easier: just blindly add stuff, analyse, detect duplicates, clean up. There’s one interface behaviour which is slightly different in digiKam if you’re coming from macOS / Gnome: when drag and dropping things, on drop it will show a menu asking whether you want to copy or move the items you’ve just dragged over. In a month of use it has only crashed on me once; otherwise it’s been great. I can now manage my photo library regardless of whether I’m on macOS or Linux, using the same application. The folder structure on the file system is represented in digiKam, which means you can organise things in your file manager of choice and that will be reflected in digiKam (you have to scan for changes, or it will pick them up the next time the application is started). digiKam’s metadata is stored in SQLite files (the default option), or you have the choice of pointing it at an instance of MySQL.
The Photos app does have an export feature for getting photos out, but your photos are then stored in its own internal folder structure, and information about your albums is stored and tracked in its own data files. Converting the iPhoto Library by opening it with Photos, organising things within Photos, and then trying to export that exact structure seemed to involve more fiddling than I was willing to commit to; I couldn’t get it working. Sticking with Photos wasn’t an option, as it would tie me to macOS for managing photos. I never made the switch from iTunes to Music because it had a tendency to peg the CPU, even when open and not playing anything, and they dropped support for the iPod Classic, which I continue to use to this day as my music player. The workflow now is based on using Finder to sync a library with the device, whereas before I could add music to my device from any machine with iTunes.
MNT Reform keyboard, with logo displayed on OLED display
I’ve been following the development of the Reform laptop over the years, and while I missed out on the crowdfunding round via Crowd Supply, I promised myself last year that I would buy a DIY laptop in 2022. The order was placed in April on the MNT shop, and here I am typing this post having had the Reform for around a week now. A box showed up last Tuesday from Germany; I’d been tracking the parcel keenly on the UPS site as it sat in Germany for several days before being flown over Monday night, and it was in my hands the following morning.
Table with Reform parts spread out for the build to commence
I made myself a cup of coffee and began the unboxing of a big grey box filled with Styrofoam and a smaller black box the size of a shoebox, which contained yet more smaller boxes and wrapped parts. Everything was stylishly wrapped in black and stickered with the MNT logo, which turned out to be a faff to unwrap because I didn’t want to tear everything up. 🙂
There was a supplied poster with numbered steps and diagrams showing what the numbers refer to. Over several hours I took joy in building my first laptop, taking photos along the way. I hadn’t built a PC in some years, nor attempted to service a laptop in some time, so it was nice to be hands-on with a machine again.
The build went fairly smoothly, with only one step on the poster that I struggled with (use of language and assumed perspective on step 3). The parts included an SD card containing an image of Debian unstable, preinstalled with lots of applications to get started with. It worked first time and I’ve been working with that just to get a feel for how the system performs, before switching to the 500GB WD Black NVMe drive which I also installed. MNT offers a choice of two different pointing devices for the Reform, a five-button trackball and a trackpad. I obtained both and started off with the trackball installed, which has been fine. The trackball buttons are nice and clicky; with five buttons it should be pretty good for chording in Acme (see Chords of mouse buttons).
MNT Reform 5 button trackball
The NVMe drive is currently running with a zpool on it and a 16GB swap partition, which I created with gnome-disks before pointing zpool(8) at the new device node. The Debian install instructions from the OpenZFS site worked without any problems; I just had to wait for it to compile the kernel modules and then it was ready for use.
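The pool creation itself was nothing special; the device node and dataset layout here are placeholders for whatever partitioning gnome-disks left behind:

    # pool on the non-swap partition of the NVMe drive
    sudo zpool create reformpool /dev/nvme0n1pX
    sudo zfs create reformpool/data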
The build was really fun and the complete opposite of the fiddly experience of maintenance on conventional laptops. It’s nice to be able to assemble and access a laptop without the dread of dealing with extremely fragile parts, usually in tight and narrow spaces. The DIY kit comes with the LCD panel installed in the lid, which is attached to an empty base frame by hinges, so during the build you’re just focusing on populating the base with the motherboard and keyboard and getting everything connected together with the peripherals installed. My favourite part on the Reform has to be the wifi antenna, so delightfully simple and easy to install. For those who have had to install an antenna kit on early systems or dealt with cabling around a display assembly, this is not like that! 🙂
12″ PowerBook G4 sitting on top of an MNT Reform
As is traditional in hardware reviews to compare with the major brands, I’ll say that the Reform is a little bigger than a 12″ PowerBook G4 and builds on the thickness of the PowerBook G4, though it lacks the ability to choose whether to have the trackball on the left or right hand of the keyboard as with the Macintosh Portable.
Joking aside, the bottom of the laptop is covered by a transparent perspex sheet which allows you to see all the components, and on the first night I spent some time with the laptop running upside down, base upright, just looking at it. <3 For background, an article in Interface Critique from 2018 (PDF) covers the design decisions and how the Reform came to be.
The keyboard feels nice to type on, and the LCD panel is very good (sharp, with good colour reproduction). It’s a little on the heavier side, but I’m OK with that. It’s nice to use and I’m growing very fond of it. Also, if you get bored whilst waiting for a lengthy process to complete, you can just turn it over and look at the circuitry. 🙂
A short click test video of keys and buttons. (this was an embedded video here, but the video would be prefetched for every visitor to the site, regardless of whether they pressed play).
There’s a lot to take in if you want to explore; being open hardware, full source and schematics are available, and there’s a handbook for general administration too. The plan is to switch to booting from the NVMe drive, which involves flipping a switch on the CPU module, but I have not yet investigated what else needs to be done for root on ZFS on ARM hardware and u-boot. The CPU module that comes with the Reform contains 4GB RAM, and new CPU modules with 8 & 16GB RAM have just been announced. It’s a little early to shop for upgrades, but the module with 16GB RAM would be nice to have alongside a zpool on NVMe. Exciting times!
Sipeed makes a range of FPGA boards called Tang Nano, based around Gowin’s GW1N range of FPGA chips. The current top model in the range is the Tang Nano 9k, which provides 8640 LUTs and can fit a RISC-V softcore such as PicoRV32, for around £12 + £5 shipping from China via AliExpress. You can spend a little more and get different sizes of LCD panel which connect to the board via a ribbon cable; I went for the 4.3″ panel, which supports a resolution of 480×272. The board itself has an HDMI connector, so that could be used instead.
The official vendor-supplied toolchain is the Gowin IDE, a closed-source tool with an educational version that can be used for working with the Tang Nano series without having to apply for a licence. Both Windows and Linux builds are offered for download, but I was curious whether I could avoid the IDE and just use a completely open source toolchain for working with the board, and I was pleasantly surprised to find that I could.
The toolchain consists of yosys (for synthesis), apicula (for generating the bitstream of Gowin FPGAs), nextpnr (for place and route), openFPGALoader (to program the board).
With yosys, apicula, nextpnr-gowin, and openFPGALoader installed, the workflow for building and programming my board with the examples from the apicula source repository, like the blinky example which flashes the 6 LEDs on the Tang Nano 9k board, goes something like this:
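From memory it is the following four commands; the device and constraint file arguments come from the apicula examples, so treat the exact strings as assumptions for your board:

    # synthesise the Verilog for a Gowin target
    yosys -p "read_verilog blinky.v; synth_gowin -json blinky.json"
    # place and route for the GW1NR-9 on the Tang Nano 9k
    nextpnr-gowin --json blinky.json --write pnrblinky.json \
        --device GW1NR-LV9QN88PC6/I5 --cst tangnano9k.cst
    # pack the routed design into a bitstream with apicula
    gowin_pack -d GW1N-9C -o blinky.fs pnrblinky.json
    # load it onto the board over USB
    openFPGALoader -b tangnano9k blinky.fs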
I recorded a demo video of a ThinkPad X230 with a terminal open showing the first three commands listed above executed, to prepare the blinky example, waiting at the prompt to run openFPGALoader. There is a Sipeed Tang Nano 9k connected to an LCD panel sitting on the ThinkPad keyboard. Power is applied and the board loads its default program, which cycles through its onboard LEDs whilst displaying a rainbow pattern. Enter is pressed on the keyboard to invoke openFPGALoader to flash another “blinky” program from the apicula source examples. The LCD turns white and the LEDs change the pattern they cycle through, having loaded the new program. (This was an embedded video here, but the video would be prefetched for every visitor to the site, regardless of whether they pressed play.)
I’ve been dragging the PDF version of Dominic Giampaolo‘s book around for some time but never bothered to read it until recently, when I went fishing for PDFs in my archive (~/Downloads) to load onto my new toy, a reMarkable 2 tablet. It has been a while since I read a book on the technical details of computers, so a book on a file system from the past ticked all the right boxes and was a good test of how the tablet fares as a reading device.
I’ve not read many books specifically on file systems; the only one that comes to mind is the Solaris 10 ZFS Essentials book from Prentice Hall, which is more of an administrator’s guide than a dive into the implementation and the thinking behind it. The Practical File System Design book starts by introducing how the BFS project came about and works up from the concept of what a file system is, establishing terminology and building up a picture from blocks on a disk, up to mounting a file system, reading and writing a file, features which enhance a file system, and the hurdles when developing and testing a file system, across twelve chapters. The book dedicates a chapter to covering other file systems in use at the time like FFS (described as the grandfather of modern file systems), XFS (the burly nephew), HFS (the odd-ball cousin), NTFS (the blue-suited distant relative), and ext2 (the fast and unsafe grandchild).
Memory usage was also a big concern. We did not have the luxury of assuming large amounts of memory for buffers because the primary development system for BFS was a BeBox with 8 MB of memory.
Dominic Giampaolo
The initial target for the file system project was six months, to fit with the operating system’s release cycle, but it took nine months to reach the first beta release and the final version shipped a month after. The book was written around sixteen months after the initial development of the file system.
After the first three months of development it became necessary to enable others to use the BFS, so BFS graduated to become a full-time member of kernel space. At this stage, although it was not feature complete (by far!), BFS had enough functionality for use as a traditional-style file system. As expected, the file system went from a level of apparent stability in my own testing to a devastating number of bugs the minute other people were allowed to use it. With immediate feedback from the testers, the file system often saw three or four fixes per day. After several weeks of continual refinements and close work with the testing group, the file system reached a milestone: it was now possible for other engineers to use it to work on their own part of the operating system without immediate fear of corruption.
The book was written at a time when HFS+ was a recent revision, the block size of most hard disks was 512 bytes, a disk greater than 5 GB was considered very large, and companies like AltaVista were trying to bring search to the desktop (and Yahoo! many years later). The search part (attributes, indexing, and queries) is, as the book states, “the essence of why BFS is interesting”; Dominic Giampaolo would later join Apple and bring desktop search to OS X in the form of Spotlight.
A file system designer must make many choices when implementing a file system. Not all features are appropriate or even necessary for all systems. System constraints may dictate some choices, while available time and resources may dictate others.
I really liked the writing style of the book; it is very self-contained in that it clearly explains everything it introduces, covering the minute details which would cause problems, the options for solving a particular problem, and the routes taken. For example, in the data structures chapter, the impact of disk block size on the virtual memory subsystem and the avenues it would close when they came to unify the buffer cache and the VM system, or accommodating the user’s expectations instead of using elegant data-structure search algorithms (read The Inmates are Running the Asylum by Alan Cooper).
The short amount of time to complete the project and the lack of engineering resources meant that there was little time to explore different designs and to experiment with completely untested ideas.
The journaling and disk block cache chapters were my favourites to read. The journaling chapter made me realise my lack of understanding of journaling and what I thought I knew about how it worked, having assumed that just because the term journaling was used, the feature behaved the same across different implementations (metadata journaling and the meaning of consistency vs storage leaks). Regarding caching, I still struggle with the concept of write-back vs write-through in the abstract, so I’m always interested to read more about the subject.
The chapter on the vnode layer explained how the file system hooks into the kernel: what it means, in terms of process, to mount a file system, going from an i-node to a vnode and back down to how the kernel interacts with the file system via the vnode layer using the functions provided by the file system, plus support for live queries. It is followed in the next chapter by the API the operating system offers for manipulating files.
A vnode layer connects the user-level abstraction of a file descriptor with specific file system implementations. In general, a vnode layer allows many different file systems to hook into the file system name space and appear as one seamless unit.
The API chapter was amusing to read because of the human aspect of the problem and trying to come to an agreement on approach, here being fought out between those in favour of Macintosh style file handling and POSIX style.
The BeOS C++ API for manipulating files and performing I/O suffered a traumatic birthing process. Many forces drove the design back and forth between the extremes of POSIX-dom and Macintosh-like file handling. The API changed many times, the class hierarchy mutated just as many times, and with only two weeks to go before shipping, the API went through one more spasmodic change. This tumultuous process resulted from trying to appeal to too many different desires. In the end it seemed that no one was particularly pleased. Although the API is functional and not overly burdensome to use, each of the people involved in the design would have done it slightly differently, and some parts of the API still seem quirky at times. The difficulties that arose were never in the implementation but rather in the design: how to structure the classes and what features to provide in each.
The book wraps up with a chapter on testing and various approaches to shake out bugs. One suggestion for stressing the file system, in early 1998, was to support a full USENET feed, resulting in at least 2GB of data per day being written to disk. When collecting more PDFs after reading the journaling chapter, I found a paper from USENIX in 2000 which states “anecdotal evidence suggests that a full news feed today is 15-20 GB per day”. ISC‘s InterNet News (INN) and netnews were useful tools for testing the robustness of a file system.
Of these tests, the most stressful by far is handling an Internet news feed. The volume of traffic of a full Internet news feed is on the order of 2 GB per day spread over several hundred thousand messages (in early 1998). The INN software package stores each message in a separate file and uses the file system hierarchy to manage the news hierarchy. In addition to the large number of files, the news system also uses several large databases stored in files that contain overview and history information about all the active articles in the news system. The amount of activity, the sizes of the files, and the sheer number of files involved make running INN perhaps the most brutal test any file system can endure. Running the INN software and accepting a full news feed is a significant task. Unfortunately the INN software does not yet run on BeOS, and so this test was not possible (hence the reason for creating the synthetic news test program). A file system able to support the real INN software and to do so without corrupting the disk is a truly mature file system.
The book was a great read, and provides lots of historical context and grounding of concepts for an autodidact (just don’t come away thinking a 5GB HDD is a large disk). From a nostalgia perspective it was interesting because of the desktop search thing that was happening around that time and, more recently, the Systems We Love talk regarding the search capabilities of BFS.
At the time I never had the full BeOS experience since I didn’t have a system with enough RAM. I recall a disappointing experience trying to boot the demo copy of BeOS v4.5? Personal Edition from a PC Plus cover disk: I could boot BeOS, but the system degraded to no sound nor colour! It would’ve been nice to use the colour display capabilities of my CRT at the very least. 🙂
Adam Sweeney, Doug Doucette, Wei Hu, Curtis Anderson, Michael Nishimoto, and Geoff Peck are all members of the Server Technology group at Silicon Graphics. Adam went to Stanford, Doug to NYU and Berkeley, Wei to MIT, Curtis to Cal Poly, Michael to Berkeley and Stanford, and Geoff to Harvard and Berkeley. None of them holds a Ph.D. All together they have worked at somewhere around 27 companies, on projects including secure operating systems, distributed operating systems, fault tolerant systems, and plain old Unix systems. None of them intends to make a career out of building file systems, but they all enjoyed building one.
Scalability in the XFS File System
There’s an article on BFS at Ars Technica if you want to read more about the file system. The article features an interview with BFS developers at Be & Haiku, and a comment by Jean-Louis Gassée.
As an aside, the reMarkable 2 is physically a really nice device to hold and the display is great, but the ability to extract my highlighted items from a PDF could be a lot better. I could export a copy of this book as a PDF, but there’s no way to get a view of just the highlighted items, and it’s not possible to copy text from a PDF on the device, which meant I had to manually scan through the exported PDF for highlights and save them in my notes.
Having grown up in Brighton & Hove, I started to get frustrated with being there towards the end of the 2000s. At the time I was in a bubble of fairly high level web folks and had missed the heyday of BSD in Brighton (the first ever EuroBSDcon was held there in 2001, and before that there was Pavilion Internet, one of the early UK ISPs, a FreeBSD shop where the PPP stack evolved). Musically things had also changed: with the emergence of electro, dubstep, and minimal techno, things had moved on from the housier side which was more my thing.
So, I started looking elsewhere for folks interested in the lower levels of the software stack, and hardware. This got me visiting London monthly for the London OpenSolaris User Group (LOSUG) and later the Open Source Hardware User Group (OSHUG), where twice a month I got to hear interesting topics and could talk to like-minded folks who understood what I was talking about. I was able to stay content with being in Brighton for a few years with this arrangement. I did consider moving to London but couldn’t see how to make it work financially at the time.
Ironically, some of the people and types I was trying to get away from in Brighton were on Paul Downey‘s The Web is Agreement poster, who was a co-founder of OSHUG 🙂 .
I also gave various talks over the years at BCS OSSG. The photo below was taken at the 2019 AGM where I gave a talk entitled “It’s Open Source, not gratis binaries”. The photo captures the moment where the talk was torpedoed by the objection that one should compile their own Firefox or Chrome, which have hefty dependencies, taking lots of time & resources to build. It wasn’t the argument I was trying to make, but I hadn’t been clear in conveying that. The slide on display is a reference to avoiding having to understand the tools you rely on and going from fad to fad instead (the fad at the time, and currently, being containers).
"Compiling anything meaningful from source is difficult." The current mainstream is using containers as a silver-bullet to make it easier to consume binaries – but maybe there are issues with this apporach. pic.twitter.com/XL9DWvn1FO
Me -“It’s Open Source, not gratis binaries”, BCS OSSG AGM 2019
The last event I organised was an evening on the theme of POWER & PowerPC. I will be stepping down from the event organisation team in September and continue participation as a member of the community.
Regarding music, in 2011, Ralph Lawson‘s 2020Vision put out a promo video for a day of parties they held in London and it made me want to be in London even more because musically it was really what I was into. I didn’t realise until some years later, looking back, that I had indirectly and unconsciously fulfilled my wish, though since that video I have only attended the Village Underground on two occasions. Sometimes the end goal is reached, but not necessarily by the route imagined. 🙂
Gates at EveryCity’s former office, near London Bridge – 31/07/2010
At July’s London OpenSolaris User Group (LOSUG) meeting, Alasdair Lumsden invited various folks to the EveryCity office for a hackathon to start building a new distro on Saturday 31st of July, 2010. The hackathon was attended by folks from the LOSUG community in person and remotely over IRC by the OpenSolaris community at large (transcripts for day 1, 1&2). On the day, work began on attempting to build a distro from the published sources. I spent some time setting up the mailman instance (on Solaris 10 with Exim!) and creating various lists for use by the project that day.
Illumos webinar presented by Garrett D’Amore on 3/8/2010 18:00(BST)
Over the course of the next few weeks, with the help of Alan Coopersmith and Rich Lowe, we attempted to build and document the various consolidations. Chris Ridd suffered the XNV (X11) consolidation, which was the most painful to build due to missing components which were never pushed out publicly, if I recall correctly. I initially started on building the Sun FreeWare (SFW) consolidation and, when that was done and documented, moved on to OSNET? (terminology might be wrong, I’m referring to the core-os consolidation). The Confluence wiki wasn’t archive friendly it seems, so trying to fish the initial documentation out of archive.org proved to be a bit of a challenge since the navigation menu won’t load.
At some point Garrett D’Amore showed up on IRC, hanging out and lording it over us, and I just wondered: who is this person? as I saw OpenIndiana as Alasdair’s thing.
This was really my first actual involvement as a member of an open source project and I was very green; I recall that my patches to the SFW repo were committed by someone else. But it wasn’t just being green on a technical level, it was also in my thinking: thinking that another project that had been around a little longer had a process down, so why would anyone need to do anything different? That sort of rigid thinking would go on to receive heckles on more than one occasion 🙂 , many years later, from Phil Harmon when a couple of us robots would attend what followed LOSUG as the Solaris SIG (Specialist Interest Group?) in London.
According to my mail archive, I handed over the mailman details to Alasdair a few months later, and left the project on 23 Nov 2010. During the few months on the project I made friends, many of whom I’m still in contact with today. Chris Ridd, James O’Gorman, Andrew Watkins, Alan Coopersmith, Jeppe Fihl-Pearson, Peter Tribble, Andrzej Szeszo to name a few, but there were many more folks involved.
Apologies if I upset or annoyed anyone with my “obligatory F*ee*SD reference” comments in those days, I must admit it must’ve been quite nauseating. Over the years, since leaving OpenIndiana, I have often wondered if things would’ve worked out differently if Garrett had been told to fuck off.
I assume most people know but only a few heard it from me, so here it is: I resigned from the NetBSD project (and by association, pkgsrc) around about a year ago. It’s not that I’ve been inundated with questions about it, but more that on a rare occasion over the past year there would be some reference made which I would fall silent to, and I sort of felt bad about that; hence, this post.
It’s been a while since I wrote a technical post in this series, since the last post I made a build of what I called Viewpoint Linux Distribution available. This post will cover the time between the last post (round #5) and the launch of the distro.
By the time I’d written the previous post, things had roughly taken shape and I was thinking about what would sit on top via packaged software. Being interested in Guix from afar, I thought about using that, as there had been some interesting talks about it at FOSDEM‘s Declarative and Minimalistic Computing devroom a month prior. I didn’t end up going down that route as Guix requires GNU Guile, GnuTLS, and various extensions for Guile. It’s not so much a problem what its requirements are, but that I would have to ship and maintain copies of these in the base OS, and I didn’t want to do that, so I stuck with what I knew. I’ve spent a lot of time with pkgsrc and am comfortable working with it. pkgsrc gives you control over where it satisfies dependencies, and as long as you have a shell & compiler installed it can get itself to a working state. Unless specified, the bootstrap process on Linux opts to satisfy all dependencies from itself and ignore anything already installed on the system. This behaviour can be overridden by specifying --prefer-native yes when bootstrapping, and in this scenario that was preferable since the OS was using recent if not the latest available versions of things. Despite preferring native components, when it came to building packages, things that were present on the OS were being built again anyway, specifically readline.
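For reference, the bootstrap invocation looks roughly like this (prefix and source location are just the defaults I used, adjust to taste):

cd /usr/pkgsrc/bootstrap
# prefer libraries and tools already present on the OS over pkgsrc's own copies
./bootstrap --prefix /usr/pkg --prefer-native yes

Even with native preference, checking whether readline was picked up as a builtin showed it wasn’t: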
$ cd /usr/pkgsrc/shells/bash ; bmake show-var VARNAME=IS_BUILTIN.readline
no
After some investigation it turned out the builtin detection mechanism was not working and so dependencies would always get built; this was due to a difference between where libraries are installed when following the LFS guide and where pkgsrc expects to find them. The instructions in the LFS guide specify /usr/lib for libdir, whereas pkgsrc expects to find libraries in /usr/lib${LIBABISUFFIX}, which in this case would expand to /usr/lib64. Just to move things along, I patched pkgsrc/mk/platform/Linux.mk to include /usr/lib in _OPSYS_SYSTEM_RPATH / _OPSYS_LIB_DIRS and builtin detection then started working. With a working packaging system, I began packaging BCC and bpftrace, though in the end opted to use the bpftrace binary which the project produces with every release. This made things easier as there is a working environment out of the box to start with, and if BCC is needed, it can be installed; but since the BPF Performance Tools book is largely about using bpftrace, you get to start off without dealing with packaging. By keeping the packaging system a separate component, it also saves on shipping a bootstrap kit for the packaging system with every release and likely stale packages, depending on how quickly things evolve. I dislike the idea of having to run a package update on first boot to shed the stale packages shipped with the OS.
After testing various things out I set out to make a new build of the distro to publish, this time opting to use lib64 as the libdir to reduce the need for changes to pkgsrc. I have not attempted any large bulkbuild runs, but the Emacs 21 package was definitely not happy as it expected to find some things in /usr/lib.
There are various packages which ship with DTrace USDT probes which bpftrace can also make use of. This involves building those packages with DTrace support enabled and, on Linux, using SystemTap, which provides a Python script called dtrace to do the relevant work. I created a package but, since it requires Python, it created a circular dependency when using Python 3, as Python 3 itself has USDT probes. As a workaround to sidestep the issue, my SystemTap package uses Python 2, which is still supported by SystemTap. To enable building with DTrace support I introduced a “dtrace tool” which pulls in SystemTap as a dependency on Linux when USE_TOOLS+=dtrace is specified, and nothing on other platforms. I then added USE_TOOLS+=dtrace across the tree where dtrace was a supported option.
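SystemTap’s dtrace script mimics the interface of dtrace(1) for USDT generation, so package builds can call it much as they would on Solaris or the BSDs; roughly like this, with file names purely illustrative:

# generate the header with the probe macros from the provider description
dtrace -h -s probes.d -o probes.h
# generate the ELF object carrying the probe metadata, linked into the final binary
dtrace -G -s probes.d -o probes.o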
With the OS rebuild, I dropped nscd(8) from the system; the thought of having up to three caching resolvers seemed a bit excessive (nscd/systemd-resolved/unbound). This post highlights why you might not want nscd support on your system. As part of the rebuild I began populating the repository with sources for everything that would ship with the distro. It was a tedious process that slowed down as I progressed through the build and imported more and more components, because on each initial import I would roll the tree back to the start to import into a branch, update to the tip of the tree, merge the branch, and repeat. I used the hg-git Mercurial plugin to convert and push the tree to a Git mirror.
The kernel config used started life as the default config created by running make defconfig, built up from there to cover what the LFS guide suggests and what is required by BCC / bpftrace. Testing that X11 worked OK revealed that I was missing various options, from lack of mouse support to support for emulated graphics; the safe bet being the VMware virtual card on both VirtualBox (VMSVGA, which is the default) and QEMU. Other options resulted in offset problems with the cursor, where it would appear in one place on the screen but clicks and drags would register at a different location. Everything works out of the box with the VMware option.
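A sketch of that workflow; the option names here are examples of the sort of thing that had to be switched on, not an exhaustive list (scripts/config is the helper shipped in the kernel source tree):

make defconfig
# enable the VMware virtual GPU and PS/2 mouse support missed by defconfig (illustrative options)
scripts/config --enable DRM_VMWGFX --enable MOUSE_PS2
# fill in any newly exposed dependencies with their defaults
make olddefconfig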
I’ve been really impressed by how quickly the system boots and shuts down (not having an initrd image to load, and only a minimal set of drivers to probe, accounts for that); I hope I don’t end up losing that. I used the work leading up to the release as an excuse to start using org-mode in Emacs. Following the beginners guide, I now have a long list of todo items which I work through. The next big item is build infrastructure so I can turn around releases quicker.
This should’ve been part of the original post, but I feared the attention it would end up drawing and the direction the “conversation” could end up taking when links are posted on various sites, so I deferred. I was pleasantly surprised that, despite the announcement being shared around, there was no drama, and I even received encouraging comments on the orange site; thank you to those who submitted the links to the various sites and for the comments.
My intention for this post is not so much promotion but a formality, so I can refer back to it should the need arise in the future. I really have no grand vision for this project and intend it to be a personal one. I hope the distro becomes something useful which people carry in their metaphorical tool belt to call on, should they need such an environment to experiment on or adapt for themselves, but I’m not looking to actively recruit developers or to solicit contributions of functionality to integrate into the project; build upon it for yourself. By all means, if there is something amiss, please let me know.
As I was getting things ready to make the announcement, I looked into putting together a code of conduct for the project. I believe open source projects should have one, but since it is currently a one man show it would really have been an empty gesture, as there would be no person or team other than myself for handling incidents; if someone took exception to my behaviour they would ultimately be contacting me about myself.
Besides the project’s Twitter account, I have opted not to utilise any public forum, whether it be IRC, mailing lists, forums, or variants thereof. Direct email is very welcome if you have any questions or comments; you can reach me via sevan@ the project’s domain. I just don’t have the mental strength to deal with public group discussions or to leave things open to trolls and bullies.
Now that the dust has settled from the launch, it is time for the work to resume. 🙂
Viewpoint Linux is a distribution providing a minimal environment for me to build on and play with. I hope that for others it can be a distro which provides a working environment to use alongside various texts I have in mind, allowing the reader to focus on studying the material at hand rather than trying to get their environment set up right. The idea came about through having to sidestep from study to investigate broken stack traces, and wondering about the level of pain when having to make system-wide build changes on a distribution which doesn’t provide infrastructure to rebuild en masse with ease. When I first started writing about my experiments with LFS it was suggested that I look at several different established distributions which were the answer to what I was looking for. I was aware of these distributions already and had even used some in the distant past; however, I decided not to go down this path as there was either new tooling to learn which would drive system management, or components were adapted (local changes and features). I was not interested in having to detour to learn another set of tools which are non-transferable between operating systems, nor in making use of derivatives, before setting up the system how I needed it so that I could practice what I was studying, hence Viewpoint Linux strives to be innovationless in this regard.
Viewpoint currently lacks a framework to ease building the system hence everything has been built slowly by hand with a specific idea of how the system should be.
Some of those ideas are:
It should work out of the box for the texts in mind, e.g. full working stack traces for instrumenting with bpftrace and debugging with GDB
Its concept of a base system is a subset of the utilities installed by the LFS guide, containing general utilities for users and tools for administration. Components which are purely build dependencies are installed to a separate prefix (/osbt (OS build tools)) so that they can be removed if desired. Everything else is satisfied from user installed packages, currently provided by pkgsrc. Dependencies can grow out of hand; for example, dwarves has a CMake build dependency. dwarves provides the pahole utility, which is used as a kernel build dependency to generate BTF, but is also a useful utility for inspecting system data structures by itself. This was a grey area where I chose to include dwarves in base but to satisfy its build dependency (CMake) from external sources, in this case the prebuilt binaries the CMake project provides.
A repository (monorepo) of all components shipped. Not such a good idea because of fighting autotooled builds and timestamps (see Living with CVS in Autoconfiscated Projects), but it makes tracking changes in the distro easy, which is more important to me.
It is safe to assume that I’ve run configure, make, make install a bunch of times with CFLAGS set to ‘-fno-omit-frame-pointer -g‘ or some variation thereof (such as also enabling optimisation for building glibc, otherwise it fails); see the sketch after this list.
Viewpoint is an innovationless distro, see previous point (there are no new methods or tooling on offer, just stock components from upstream built a certain way with differing flags)
Viewpoint uses systemd (I wondered what my own shit sandwich would taste like)
Primarily intended for use as a guest VM, though it is possible to install on hardware; the distinction is because nothing has been done to cater for differing hardware in the kernel config, so manual intervention may be required to prep and get everything working, e.g. it booted fine on my ThinkPad X230 but I had no wifi. There is also no UEFI support currently, nor any additional firmware included.
Development of features for 3rd party components happens outside of the tree (because it’s innovationless)
Patches from LFS have not been applied, again because innovationless, e.g. their provided i18n patches to coreutils which are marked as rejected by upstream. The LFS guide states “In the past, many bugs were found in this patch. When reporting new bugs to Coreutils maintainers, please check first if they are reproducible without this patch.”
Versioning is going to be a sequential number, meaning nothing beyond an indication of a new release
Viewpoint doesn’t follow the FHS spec strictly, nor the LSB at all. Perl & Python are not part of base (because I did not want to maintain them in base).
Currently intended to be used alongside Brendan Gregg‘s BPF Performance Tools book and Diomidis Spinellis‘ Effective Debugging book for learning two different debugging workflows. Other texts are in mind for accommodation in the future. I would have liked to include DTrace, but that currently requires running a fork of the kernel; while the fork is kept up to date with upstream, as part of being innovationless it is easier to swap in components fresh from upstream, and it saves on having to eliminate another avenue where an issue could have been introduced when debugging problems.
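The sketch referred to in the list above: more or less the routine each component goes through, with frame pointers kept and debug info generated so stack traces work (flags and prefix are illustrative, not an exact recipe):

# keep frame pointers and debug info so bpftrace/GDB get usable stack traces
CFLAGS="-O2 -g -fno-omit-frame-pointer" ./configure --prefix=/usr
make
make install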
Beware! Vendor Gratuitous Changes Ahead!
The source repository is currently 5.1GB (1.8GB .hg directory, 3.3GB of source), plus a 1.8GB .hg/git conversion directory, so as you can tell, that’s a lot of value add 🙂 . When deciding whether to strip components down to the essential minimum I opted not to, as running test suites is part of the LFS workflow when building things up and keeping them would make CI integration easier. AMD firmware included in Linux aside, the test suites from GCC and Binutils, for example, take up the most space in the repo.
Lots to do to smooth things over, but some key features that I intend to work on for inclusion in a future release:
A build framework to automate the configure, make, make install routine and allow customisation with ease, à la the BSDs. There is a framework in the LFS project called ALFS (you feed it the XML source of the guide and it builds the distro from that), but I didn’t want to go down the literate programming route and maintain my own fork of the LFS guide.
Add ZFS support
Why the name?
It is focused on observability
It is opinionated
I listened to a lot of Alan Kay lectures (a nod to Viewpoints Research, ViewPoint OS from Xerox, though this distro is in no way a great feat in achievement)
Viewpoint is a variant of LFS distribution, registered on the Linux From Scratch Counter on 03/05/2021, ID: 28859, Name: Viewpoint Linux Distribution, First LFS Version: 10.0-systemd.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis.
I’ve been looking at Gemini recently; having tried out various clients such as Lagrange on macOS and Bullox in the shell, I thought I’d try serving. There are many server projects written in different languages; I picked The Unsinkable Molly Brown from the list for the technical reason of having a great name. 🙂
Serving a simple Gemini page from this domain for the first time.
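Gemini mandates TLS and self-signed certificates are the norm, so generating one was the only real setup beyond pointing the server’s config at the certificate, key, and content directory. A minimal sketch, with the hostname and file names as placeholders:

# generate a self-signed certificate for the capsule (hostname is a placeholder)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=example.org" -keyout key.pem -out cert.pem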
Note: this post doesn’t provide a solution to the “Incorrect number of thread records” file system issue, but documents what I went through to see if I could fix it when I faced it. I think your best bet would probably be to try ALSOFT’s Disk Warrior if you need to fix this issue.
I was sitting in front of a Mac Pro which had previously suffered from boot issues due to the expansion boards holding the memory not making proper contact with the logic board. The boot issue manifested itself as the system not POSTing and the fans spinning loudly. Luckily the issue was resolved by reseating things. The system also had a failing Cinema Display which would work fine after a “cold” power cycle (removing the power connector from the PSU and reattaching), but would fail to wake from sleep if it had been left on. The user was not aware of this issue and assumed whenever the system wouldn’t display anything on boot it was the POST issue, and would power cycle the system, hoping to strike it lucky. Through doing this, the HFS+ journaled file system ended up with problems which fsck_hfs(8) was unable to fix.
Since the file system had problems, the system would try to boot: the grey screen with the Apple logo would appear along with a progress bar which would move from left to right, and then after some time the system would switch off.
Booting the system in verbose mode by holding Command + V showed that the system was running fsck on boot, which is what the progress bar normally indicates. It would attempt to repair the file system 3 times, fail, and switch off.
Connecting to another Mac via Target Disk Mode (TDM), it was possible to mount the file system read only and take a closer look. It would also have been possible to boot the system in rescue mode, but the Mac I connected to via TDM was running a newer version of macOS and I was hoping its tooling would be better able to deal with such an issue, though the BUGS section of fsck_hfs(8) states “fsck_hfs is not able to fix some inconsistencies that it detects“, regardless.
% sudo fsck_hfs -ypd /dev/disk2s2
/dev/rdisk2s2: starting
journal_replay(/dev/disk2s2) returned 0
Using cacheBlockSize=32K cacheTotalBlock=32768 cacheSize=1048576K.
Executing fsck_hfs (version hfs-522.100.5).
** Checking Journaled HFS Plus volume.
The volume name is Macintosh HD
** Checking extents overflow file.
** Checking catalog file.
Incorrect number of thread records
(4, 21962)
CheckCatalogBTree: fileCount = 421327, fileThread = 421326
** Checking multi-linked files.
Orphaned open unlinked file temp2639983
Orphaned open unlinked file temp2651454
Whole bunch of these temp files trimmed from listing
** Checking catalog hierarchy.
** Checking extended attributes file.
** Checking volume bitmap.
** Checking volume information.
Verify Status: VIStat = 0x0000, ABTStat = 0x0000 EBTStat = 0x0000
CBTStat = 0x0800 CatStat = 0x00000000
** Repairing volume.
FixOrphanedFiles: nodeName for id=2671681 do not match
FixOrphanedFiles: Created thread record for id=2671681 (err=0)
FixOrphanedFiles: nodeName for id=2671681 do not match
FixOrphanedFiles: Created thread record for id=2671681 (err=0)
FixOrphanedFiles: nodeName for id=2671681 do not match
FixOrphanedFiles: Created thread record for id=2671681 (err=0)
FixOrphanedFiles: nodeName for id=2671681 do not match
FixOrphanedFiles: Created thread record for id=2671681 (err=0)
** Rechecking volume.
Repeat again a second time
** Checking Journaled HFS Plus volume.
The volume name is Macintosh HD
** Checking extents overflow file.
** Checking catalog file.
Incorrect number of thread records
(4, 21962)
CheckCatalogBTree: fileCount = 421327, fileThread = 421326
** Checking multi-linked files.
Orphaned open unlinked file temp2639983
Orphaned open unlinked file temp2651454
Whole bunch of these temp files trimmed from listing
** Checking catalog hierarchy.
** Checking extended attributes file.
** Checking volume bitmap.
** Checking volume information.
Verify Status: VIStat = 0x0000, ABTStat = 0x0000 EBTStat = 0x0000
CBTStat = 0x0800 CatStat = 0x00000000
** The volume Macintosh HD could not be repaired after 3 attempts.
volume type is pure HFS+
primary MDB is at block 0 0x00
alternate MDB is at block 0 0x00
primary VHB is at block 2 0x02
alternate VHB is at block 975093950 0x3a1ec0be
sector size = 512 0x200
VolumeObject flags = 0x07
total sectors for volume = 975093952 0x3a1ec0c0
total sectors for embedded volume = 0 0x00
CheckHFS returned 8, fsmodified = 1
It was possible to find the offending files by looking up the inode ids listed in the fsck_hfs output. Note the id reported in “FixOrphanedFiles: nodeName for id=2671681 do not match” and use find(1) to look it up.
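With the volume mounted read only over Target Disk Mode, something along these lines does the lookup (the mount point is how the volume appeared on the host Mac, so treat the path as illustrative):

% sudo find -x /Volumes/Macintosh\ HD -inum 2671681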
Ironically, the files causing the issue were auditd(8) logs from crash reports?
I thought perhaps turning off journaling would help sidestep the issue by causing fsck to remove the offending files rather than trying to replay the journal data. hfs.util(8), which is tucked away in /System/Library/Filesystems/hfs.fs/Contents/Resources, lets you do that.
% sudo /System/Library/Filesystems/hfs.fs/Contents/Resources/hfs.util -N /dev/disk2s2
Turned off the journaling bit for /dev/disk2s2
It didn’t help.
hfs.util supposedly supports a -M option (force mount) but I was unable to get it to work. I was hoping to force mount the file system read/write and delete the 2 files.
I ended up wiping the disk and reinstalling macOS.
As an aside, the HISTORY section of hfs.util(8) claims it was “Derived from the Openstep Workspace Manager file system utility programs“. The sources for the hfs v106 package on Apple’s site shed some more light. The oldest entry in the “change history” section of hfsutil_main.c states
13-Jan-1998 jwc first cut (derived from old NextStep macfs.util code and cdrom.util code).
Note the description of what main() does in a comment block inside hfsutil_main.c.
Purpose -
This our main entry point to this utility. We get called by the WorkSpace. See ParseArgs for detail info on input arguments.
Input -
argc - the number of arguments in argv.
argv - array of arguments.
Output -
returns FSUR_IO_SUCCESS if OK else one of the other FSUR_xyz errors in loadable_fs.h.
Up to this point I’ve been working with a chroot to build OS images from a loop back mounted flat file, which is then converted to the vmdk format for testing with VirtualBox. I created packages for bpftrace and BCC; BCC was fairly trivial, and the availability of a single archive which includes submodules, as bcc-src-with-submodule.tar.gz, helped avoid the need to package libbpf. bpftrace doesn’t offer such an archive and tries to clone the googletest repo, which I sidestepped addressing just to obtain the package. Both packages worked OK, though I only tested the Python side of BCC and not LuaJIT.
Execsnoop from BCC
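The image workflow mentioned above is nothing more exotic than a loop back mount for the chroot work and a format conversion for VirtualBox; a sketch, with the file names as assumptions (VBoxManage convertfromraw would do equally well for the conversion step):

# loop back mount the flat file to work on it in the chroot
sudo mount -o loop viewpoint.img /mnt/viewpoint
# convert the raw image to VMDK for testing under VirtualBox
qemu-img convert -f raw -O vmdk viewpoint.img viewpoint.vmdk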
With that I wanted to see if what I had would boot on actual hardware, so I dd’d the flat file to a USB flash drive and booted it on a Dell Optiplex. Things worked as far as making it to GRUB but then hit a couple of glitches. The first issue was that, because of the delay probing the USB bus, the kernel needs to be passed the rootwait keyword, which I was missing, so it would just panic as no root file system could be found. After that I hit the issue that I’d nailed things to a specific device node (sda) and, with the other disks in the system, the flash drive was now another device node (sdb). Addressing that got me to the login prompt and I was able to repartition the SSD installed in the system with cfdisk, make a new file system, copy the contents of the flash drive to the SSD, install GRUB, and reboot to boot the system off the new Linux install.
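For reference, the kernel line for booting off the flash drive ended up looking something like this (kernel path and device node are illustrative):

linux /boot/vmlinuz root=/dev/sdb1 rootwait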
As grub-mkconfig had included references to the GUID of the file system on the flash drive, the system landed in GRUB rescue mode. Since it wasn’t able to load the config, nothing is loaded; most importantly, its prefix variable is set incorrectly. This results in strange behaviour where nothing which would normally work at the GRUB prompt works. Setting the prefix variable to the correct path allows you to load the “normal” module and switch from rescue mode to normal mode.
grub rescue> set prefix=(hd0,msdos1)/boot/grub
grub rescue> insmod normal
grub rescue> normal
Once in normal mode it was possible to boot the system by loading the ext2 module and pointing the linux command at the path of the kernel to boot. Re-running grub-mkconfig once the system was up generated a working config.
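A sketch of that sequence from the normal-mode prompt (kernel path and partition are illustrative):

grub> insmod ext2
grub> set root=(hd0,msdos1)
grub> linux /boot/vmlinuz root=/dev/sda1
grub> boot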
With a faster build machine, the next step is to produce a fresh image, address these nits, and start putting things together to share.