Portable Legacy – Part 2

When I wrote up the previous post, there were a couple of points I forgot to cover, and since then a few more things have come up on the same subject, so I want to go over them here as a follow-up.

I’ve been carrying on with the package building since the previous article, keeping to the same theme of old computer, old OS, old compiler: what can I build? (A PowerPC Mac running OS X 10.4, with GCC 4.0.1.) I resort to GCC 5 if I need C11 support and can go up to GCC 7 if needed. I have not yet attempted to package a newer version of GCC, but at a certain point beyond GCC 7, the currently supported versions of GCC (11 and above?) require interim toolchain builds to get from a system with only GCC 4.x to 10+. Since GCC 5 alone takes around 24 hours to build on the hardware I have at hand, I’m putting off toolchain-related changes as much as possible.

One thing I seem to spend a lot of time on is digging around for software requirements, since the current trend is to list, at most, the names of components without any version info. There are a couple of scenarios where the requirements get skewed indirectly.
The software I want to build has modest requirements, but it depends on a critical third-party component which needs newer tooling, thus raising the bar; sometimes it’s just the build infrastructure which has a far higher system requirement.
Of course, if I were working on a modern system with current hardware it wouldn’t be a problem, since either binaries would be readily available or building wouldn’t take much time. But if you’re going to strive for minimalism, avoid requiring build tools which need the latest and greatest language support and beyond.
No matter how much time the shiny new build infra will save, it would take me many days just to get the tooling in place before I could attempt building your project.

I find myself in the situation where I now appreciate autoconf in these scenarios, and I think projects should carry the infrastructure for both autotools and fancier tooling.
Autoconf may be old, odd, and slow, but it has been dragged through many scenarios and systems and has built up knowledge of edge cases; in the world of old machines and uniprocessors, something that reduces down to a shell script and makefiles is just fine for such systems. A PowerBook would appreciate it; an iBook definitely would.
Hence, where projects have moved on to different infrastructure, I’ve in some cases been holding back versions, pinning things to the last release to support autotools, for now. Through doing this, I discovered that sometimes no mirrors are kept for the source files: the source archives published on the project’s repo are generated from the source tag, not the bootstrapped release archive which would have been published originally. That means a detour into bootstrapping before attempting a build, plus additional build dependencies to pull in autotools. I guess I could patch the generated infrastructure in, having bootstrapped separately, but that would be a hefty diff.

Whilst making progress with packages, I discovered the source of my troubles with pthread support detection that I wrote about in the previous post: the common failure is to check only for the presence of the header file, without checking for the implementation details. It turns out this is largely down to gnulib, the GNU portability library that is vastly used in the GNU ecosystem. The issue there is that detection of pthread support is split across several modules covering different aspects, such as “is there a pthread.h?” and “is there support for rwlock?”, and because projects are selective in what they import from the library, it is possible to misdetect functionality in this fragmented way. One issue, now fixed, was the misdetection of support for rwlock, which OS X Tiger and prior lack. If projects import all of the pthread components from gnulib, detection of pthread support works correctly, the gnulib build process makes the correct substitutions, and its test suite passes OK. If only some parts have been imported, things fail when running the test suite, since, for example, the test for pthread support tries to make use of rwlock support in its testcase.

There are many other issues with gnulib and its test suite broadly, completely separate from pthread support, so for now I’ve skipped running the gnulib tests. I am unaware of the scope of the breakage in the library, since generating a library comprised of all gnulib components takes considerable time and disk space (the documentation states 512MB of disk space, but I crossed a couple of gigabytes before aborting). I have been working my way through the GNU software projects to see what’s broken in the current versions in relation to gnulib. I’m sad to say GNU Hello is a casualty and currently unbuildable on OS X Tiger. There is a lot more gnulib-related work that I haven’t even started on.

Technical aspects of building software aside, one thing that keeps recurring is people’s responses to legacy systems. On the plus side, some folks work on restoring functionality on old systems; others question the age of the system and whether it is time to rip out support for it. I can’t help but wonder if this is conditioning from the Apple side, since support for other long-dead operating systems from extinct companies and products often remains in a project’s codebase. I was once asked, for a vacancy, whether I try new software and technology as soon as it becomes available. As a user of an iPod Classic, a PowerBook G4 running OS X Tiger, and a 2007 17″ MacBook Pro, with a resume mentioning packaging for legacy OS X, I answered yes and waited to hear back from them.

Lingering Mac Office 2008 workaround

Nosing around the macOS 13.0 sources I stumbled across a comment regarding Mac Office 2008 in the security_certificates source drop.

# M I C R O S O F T  H A C K !
# It turns out that the Mac Office (2008) rolled their own solution to roots.
# The X509Anchors file used to hold the roots in old versions of OSX.  This was
# an implementation detail and was NOT part of the API set.  Unfortunately, 
# Microsoft used the keychain directly instead of using the supplied APIs.  When
# the X509Anchors file was removed, it broke Mac Office.  So this file is now
# supplied to keep Office from breaking.  It is NEVER updated and there is no
# code to update this file.  We REALLY should see if this is still necessary

Wasn’t Office 2008 a 32-bit app? In which case it would’ve stopped working long ago, when 32-bit support was dropped from the operating system.

Portable Legacy

Something I’ve been thinking over on and off for a while is what folks mean when they claim software is portable.
Fish out an old computer with a matching vintage closed-source operating system: how does the portable software fare in such a situation?
Supporting an ancient operating system when building recent software is enlightening; there’s so much room for dependencies to creep in, and you only realise it when the dependencies are not there. Ideally, at the least, documentation would cover when said functionality was introduced, so you have a vague idea in advance of whether something will work, but an exact idea of dependencies is needed to catalogue what is required.

Easy to deploy. Just add monocypher.c and monocypher.h to your project. They compile as C99 or C++ and are dedicated to the public domain (CC0-1.0, alternatively 2-clause BSD).
Portable. There are no dependencies, not even on libc.

I’ve been wading through building things on OS X Tiger again, looking for low-hanging fruit to tick off. Armed with the stock GCC 4.0.1 from Xcode 2.5, I’ve been avoiding going down the newer-toolchain route, attempting to build anything which lists needing nothing more than C99/C++98. I want to get as much done with the stock toolchain as possible, since it reduces the time spent starting with a fresh setup.

GCC 4.0.1 supports C99 and C++98, though according to the GCC wiki, C99 support was not completed in GCC until 4.5. GCC 4.0.1 defaults to C89 unless a standard is specified via -std. I can define __DARWIN_UNIX03 if I want to kick things up a notch for modernity, but that’s at least one version of SUS behind, implementing the changes to existing functionality from IEEE Std 1003.1-2001 (the defaults being the older behaviour in some cases, to accommodate migration to the then-new behaviour, which has now become the expected behaviour).
Still, I have a compiler which recognises -std=gnu99 and a system which was on the way to UNIX03 certification but not there yet (certification was first reached in Leopard). See compat(5) from Tiger, Leopard, Catalina.
On my quest for low-hanging fruit, I keep getting caught out by software which claims to require just language support but really needs functionality beyond that, including third-party libraries from the system.
Some examples of things which have tripped me up:

  • Stating -std=c89 but requiring a compiler with C11 support.
  • Claiming to need just C99 but requiring POSIX functionality from IEEE Std 1003.1-2008.
  • With a C11 compiler installed (GCC 5), missing POSIX functionality from IEEE Std 1003.1-2008.
  • The assumption that PowerPC hardware means the host is running Linux.
  • Python modules with a Rust dependency (luckily that is a quick, hard failure on PowerPC OS X 🙂 )

Language support aside, another common issue is the misdetection of functionality because the test is too broad. I usually see this with pthread support, where more recent pthread functionality is required but all the build does is check for the presence of the pthread.h header file, rather than checking for the required functions like pthread_setname_np(3) / pthread_getname_np(3). Luckily, on the lacking user-space side there is help at hand, either via libraries which provide an implementation of the missing functionality, or by re-implementing the functionality as a wrapper around existing functionality.
Sometimes there’s advice from the Windows community on how to use existing functionality for things which haven’t already been covered by compatibility libraries, since they still have portability-related issues on current OS versions.
That just leaves me stuck with things which need hardware/kernel support, like memory atomic operations?
Going lower level, since the tooling is now so ancient, functionality which allows things to be treated generically across platforms hadn’t yet cross-pollinated. Non-platform-specific mnemonics for the assembler are lacking, and flags for the linker to generate shared objects are missing.
Luckily the linker fix is easy to patch and forward-compatible with newer OS versions, since they have retained backwards compatibility for the old way: specifying -dynamiclib instead of -shared, which is what is now commonly used.
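The sort of makefile tweak involved looks something like this illustrative GNU make fragment (variable names invented for the example; note that Mach-O uses -install_name rather than -Wl,-soname):

```make
# Illustrative fragment: choose the shared-library flag per platform.
# OS X's compiler driver wants -dynamiclib; elsewhere the usual
# -shared / -Wl,-soname pair applies.
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Darwin)
  SHLIB_FLAGS = -dynamiclib
else
  SHLIB_FLAGS = -shared -Wl,-soname,$(SONAME)
endif
```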

The Practice of Programming, written in the late 1990s, before C99 was standardised, has a chapter on portability.

It’s hard to write software that runs correctly and efficiently. So once a program works in one environment, you don’t want to repeat much of the effort if you move it to a different compiler or processor or operating system. Ideally, it should need no changes whatsoever. This ideal is called portability. In practice, “portability” more often stands for the weaker concept that it will be easier to modify the program as it moves than to rewrite it from scratch. The less revision it needs, the more portable it is.

Of course the degree of portability must be tempered by reality. There is no such thing as an absolutely portable program, only a program that hasn’t yet been tried in enough environments. But we can keep portability as our goal by aiming towards software that runs without change almost everywhere. Even if the goal is not met completely, time spent on portability as the program is created will pay off when the software must be updated. Our message is this: try to write software that works within the intersection of the various standards, interfaces and environments it must accommodate. Don’t fix every portability problem by adding special code; instead, adapt the software to work within the new constraints. Use abstraction and encapsulation to restrict and control unavoidable non-portable code. By staying within the intersection of constraints and by localizing system dependencies, your code will become cleaner and more general as it is ported.

The Practice of Programming, Introduction to Chapter 8, Portability

When building Monocypher, I needed to drop -march=native from CFLAGS, swap -shared for -dynamiclib, and remove -Wl,-soname,$(SONAME) from the makefile in order for it to build on OS X Tiger. As stated on their homepage, there were no external dependencies.

Portable code is an ideal that is well worth striving for, since so much time is wasted making changes to move a program from one system to another or to keep it running as it evolves and the systems it runs on changes.
Portability doesn’t come for free, however. It requires care in implementation and knowledge of portability issues in all the potential target systems. We have dubbed the two approaches to portability union and intersection. The union approach amounts to writing versions that work on each target, merging the code as much as possible with mechanisms like conditional compilation. The drawbacks are many: it takes more code and often more complicated code, it’s hard to keep up to date, and it’s hard to test.
The intersection approach is to write as much of the code as possible in a form that will work without change on each system. Inescapable system dependencies are encapsulated in single source files that act as an interface between the program and the underlying system. The intersection approach has drawbacks too, including potential loss of efficiency and even of features, but in the long run, the benefits outweigh the costs.

The Practice of Programming, Summary of Chapter 8, Portability

I guess the union approach is what you would call the autoconf workflow: test for the required functionality and generate definitions based on the test results. Those definitions can then be checked for in the codebase to steer the build process.

It seems that today, portable software mostly means that, with the listed requirements, you have a chance of building on an up-to-date, current operating system, rather than on the earliest version where the requirements are available.
For portability, in terms of future-proofing, being explicit about expectations helps prevent the breakage caused by riding on defaults, since in time defaults change.
As proof of portability: test, test, test, beyond the current versions of mainstream systems.
If not to extend support, then to bring clarity to the expectations of where the software will be able to run.

Running Uxn on Mac OS X Tiger

uxnemu running piano.rom on OS X Tiger

The road to Uxn started off with getting SDL2 built on OS X Tiger. Not being familiar with SDL at all, I went in at the deep end with the latest and greatest release. As the bludgeoning progressed and the patches stacked up, I took a step back and wondered what had already been done to get SDL2 installed elsewhere.

SDL2 has moved forward with targeting features found in newer OS releases, so there’s a world of difference between the functionality lacking in OS X Tiger and SDL2’s expectations; sometimes it’s just been a gratuitous bump imposed by external factors. Consulting MacPorts and Tigerbrew to see how they handled packaging SDL2, I noticed that Tigerbrew had a patched version of SDL2 2.0.3 specifically for Tiger. Great, that’s a start. I knew that Uxn required SDL2, but wasn’t sure if it needed a specific version or whether 2.0.3 would qualify.

After some time compiling things (because a PowerPC G4 is mighty and slow) I had a version of SDL2. To be honest, I veered off into having the latest versions of dependencies to compile against. Building the latest release of FFTW on Tiger took some time due to a buggy m4 macro for autoconf which broke the configure stage on legacy OS X with GCC. At some point FFTW’s build process needed to differentiate between compilers on OS X and adjust flags for the linker and assembler, but the two offending tests didn’t work with Tiger’s mighty compiler: the test for the assembler flag caused the compiler to print an error saying the flags were not supported but return 0, passing the test, and for the linker flags the compiler would error out but the test passed anyway, resulting in subsequent tests failing due to unsupported flags being used. I initially tried to get autoconf to restrict the tests to newer versions of OS X but failed to get it to do that. Searching around, I discovered the autoconf-archive project, a community-contributed archive of macros extending autoconf’s tests. Replacing the macros included in FFTW for the compiler flag checks with copies from the autoconf-archive resolved the build issue.
There isn’t yet a macro for invoking the compiler with flags for the assembler (-Wa) to test with, but there is a separate macro for invoking the compiler with linker flags (-Wl), and that’s sufficient to move forward, since the issue with assembler flags is specific to GCC 4.0.1 and did not occur on newer versions of OS X with a newer compiler when I tested.

With SDL 2.0.3 built from the patches Tigerbrew used, it was time to try Uxn. Invoking the build.sh script soon started spitting out errors; again, Tiger’s compiler was too mighty. Uxn relies on type redefinition and so requires a compiler with C11 support so that -Wno-typedef-redefinition will work. I used GCC 5, but according to the GCC wiki, anything since 4.6 should suffice. With a new compiler the build progressed, then errored out at the linking stage with one missing symbol: it was looking for SDL_CreateRGBSurfaceWithFormat, and SDL 2.0.3 was too old. The function was introduced in SDL 2.0.5, and looking into the source of 2.0.5, they raised the OS version requirement: 2.0.4 and prior had a Leopard minimum requirement, while 2.0.5 targets Snow Leopard.

Wanting to avoid a fight with a new release of SDL, I looked into what alternative functionality there was in the version of SDL I had. SDL_CreateRGBSurfaceWithFormat was created to make it easier for the programmer to create surfaces by specifying a single parameter value for the pixel format; its predecessor, SDL_CreateRGBSurface, instead requires each component of the pixel format to be specified individually. Though SDL_CreateRGBSurfaceWithFormat was introduced in SDL 2.0.5, SDL_CreateRGBSurface shipped with SDL 2.0, so I switched Uxn to use SDL_CreateRGBSurface, the build succeeded, and then it promptly failed to run when the build finished. It turns out Uxn requires joystick support, but the SDL patches for 2.0.3 cover neither joystick nor haptic feedback support, due to their reliance on newer functionality which is lacking in Tiger’s mighty ageing USB stack. This wasn’t much of an issue, since Uxn supports the keyboard. Removing joystick support from SDL’s initialisation list and rebuilding Uxn resulted in a successful build and the emulator starting up with the piano rom (the build script does this by default unless you tell it not to run the emulator with --no-run). Cool!
Let’s play keyboard! *cue harsh noises emitting from the laptop speakers* (warning before following the video link: turn the sound volume down and don’t use headphones: link).

Problem Report window for OS X following a uxnemu crash

At this point I wasn’t sure if the issue was with the patched SDL build or with Uxn. I chose first to rule out SDL and see if it was possible to generate sounds correctly with a simple test case. I found the Simple SDL2 Audio repo, which I used as my test case. The source is just a couple of .c files and a header, leaving you to assemble things however you choose. This was my first time hand-assembling a library and then linking to it.

gcc audio.c -I/path/to/sdl2/headers/include -I. -c    # compile audio.c into audio.o
ar -r audio.a audio.o                                 # archive the object into a static library
ranlib audio.a                                        # index the archive
gcc test.c -I/path/to/sdl2/headers/include -Xlinker /path/to/sdl2/lib/libSDL2.dylib audio.a    # link the test program against SDL2 and the library

The test tool executed fine on my PowerBook and the wave files it played sounded as they should, so the issue was not with the patched version of SDL. I started to compare how the SDL API was used in Uxn and in the Simple SDL2 Audio example; it turned out that the endianness of the audio format is a parameter, and while the example code explicitly specifies a little-endian format, the Uxn codebase uses an alias which defaults to the same little-endian format. So SDL has the concept of endianness, but the two codebases are playing different sound data: Uxn is playing PCM data, whereas the example project is playing wav files, which are little-endian. Switching Uxn to use a big-endian audio format resulted in the piano rom sounding correct on my PowerBook, since it’s booted in big-endian mode. According to SDL_AudioSpec, there are formats which default to the byte order of the system being built on; using one of those instead resulted in correct playback regardless of the endianness of the host system.

With Uxn working on my PowerBook running OS X Tiger, it was time to upstream the changes. As the project is hosted on sr.ht, I needed to become familiar with git’s email workflow by following the tutorial on git-send-email.io. Given an up-to-date version of OpenSSL, Perl, and git, I was able to upstream the changes to Uxn using my 12″ PowerBook G4 running OS X Tiger (look ma, no web browser!).

Playing Tet on OS X Tiger

With the exception of skipping joystick support in SDL, everything is upstream, but that’s a trivial one-word deletion. So as it stands, to run Uxn on OS X Tiger, one needs to install SDL 2.0.3 and GCC 4.6 or newer (I used 5), edit uxn/src/uxnemu.c to remove the SDL_INIT_JOYSTICK flag from the SDL_Init() statement, then run Uxn’s build.sh.
See the Awesome Uxn list for links to get started once everything is built and ready to go.

Tet's startup banner

Emacs builds for Mac OS X on PowerPC

Mac OS X has always come bundled with a version of Emacs; I can see a binary named emacs-20.7 on the OS X 10.0 installation CD. On a fully patched Mac OS X 10.4 Tiger, the bundled version of Emacs is 21.2.1; it’s 22.1.1 on Leopard; and on more recent versions of macOS, since Catalina, it has been replaced by mg.

Screenshot of Terminal on OS X Tiger, running Emacs 21.2.1 as bundled with OS

I wondered what the most recent version of Emacs to run on Tiger would be. The Emacs for Mac OS X project maintained builds at some point and has copies of the most recent builds and, impressively, all their builds going back to 2009. Great. I fetched both the nightly and the 24.5-1 build for Tiger and tried them, only to be presented with errors about the _getrlimit$UNIX2003 symbol not being found when running the Emacs-powerpc-10_4 binary inside Emacs.app. Double-clicking the Emacs icon attempts to launch the application, and an icon appears in the Dock before disappearing again. I left it there and started looking at building the most recent version that I could myself.

Screenshot of Emacs 22.1.1 running on OS X Leopard

Support for OS X prior to 10.6 Snow Leopard was dropped in Emacs 25, which is why the most recent builds were of version 24.5. I managed to build 24.5 on Tiger without issue; running the result, not so much. It turns out that 24.x to 25 was a turbulent time for OS X support in Emacs. My build of 24.5 would launch, a blank window would show, CPU usage would spike, and eventually it would crash and disappear.
Same result for 24.4.
On 24.3 things were a little more compact, but with the same crashing behaviour after a short period.

Screenshot of what's meant to be Emacs 24.3 running on OS X Tiger. There's just a thin strip of several sliders on top of one another.

The issue was apparently resolved in Emacs 25, but more than just this change is required for 24.5: cherry-picking it didn’t resolve the crashes, and I didn’t investigate any further.

Emacs 23 was good. I had a working application on Tiger with 23.4.1.

Screenshot of Emacs 23.4.1 running on OS X Tiger

I headed back to Emacs 24 and built 24.2, and that was good too. Currently I’m stuck on that version and haven’t yet investigated where things broke on the road to 24.3. Emacs 24 introduced support for installing packages from a remote repository. Due to its age, its way of handling encrypted connections via TLS won’t work with newer versions of OpenSSL, since it tries to invoke s_client with SSLv2 turned off, which is of course no longer a supported feature. Another option is to use GnuTLS, but I haven’t yet managed to build a new version of GnuTLS on Tiger to use with it, since GnuTLS has grown another requirement which needs a compiler with C11 support. This is not so much of an issue for connecting to ELPA, but MELPA requires HTTPS, and given a recent version of OpenSSL and Emacs 24.2 built with a modified lisp/net/tls.el to drop -no_ssl2, it’s still not happy about something and will sit there for some time and then fail. I sidestepped the issue for now by using a MELPA mirror which works via HTTP; this needed a couple of changes to package.el from Emacs 25 for version handling, otherwise it would fetch the repository metadata and fail due to an invalid version.

Screenshot of OS X Tiger desktop, running Emacs 24.2 displaying the message buffer. There are messages of failed connection attempts with gnutls and openssl s_client

If you’re looking for a newer version of Emacs with GUI support to run on your PowerPC Mac, I’ve posted the binaries here. The binaries were built on OS X Tiger 10.4.11 PowerPC with ./configure --enable-ns-self-contained --with-ns and run-tested on OS X Tiger & Leopard PowerPC. No idea if or when there will be another build, but they’ll end up in /files/macosx-powerpc-emacs/ if I make one.

Extracting photos from iPhoto Library and migrating away

Finder info window showing details of my iPhoto Library

The Finder info pane tells me that I created my iPhoto Library back in 2007. There wasn’t a lot there, but I had gone to the effort of creating albums and organising photos some years ago and left it at that. Time moved on and iPhoto was eventually discontinued; the migration path was meant to be the Photos app, which comes bundled with macOS as standard. I never made the switch to Photos, just as I didn’t switch to the Music app. Enter Retroactive. Retroactive lets you install a bunch of unsupported applications on more recent versions of macOS, including iTunes and iPhoto. The process is fairly straightforward, and in little time I had both applications running and could see my old photos in iPhoto, and I left it there.

Retroactive "Get Started" window which is presented when the application is launched

As things began winding down at the end of last year, I thought I should start organising my music collection and photos, so I fired up iPhoto and dragged a bunch of photos in, over 9000 to be exact. iPhoto started working through them; normally it cycles through the images as it imports them, but it didn’t do that this time.

iPhoto importing photos with no preview

Then the duplicate detection stepped in to ask what to do about a detected duplicate photo. Again, normally a thumbnail would be shown to help, but there was just grey. I assumed it was just a graphics glitch, because the photos showed up fine in Finder, but when the import finished, the copies of the photos in the iPhoto Library were sometimes plain black or white, so it looks like iPhoto had mangled the imported copies. The original source files were fine and viewable in Finder and Preview.

iPhoto duplicate photo detection asking "would you like to import the following duplicate photo"

The images in the iPhoto Library are accessible via Finder: just right-click / control-click on the iPhoto Library and select Show Package Contents. There is a folder inside called Masters, with a folder structure reflecting what you see in iPhoto. I guess I could have just dragged that out and moved on to the new application I was going to use, but what if there was information that would be lost because iPhoto stored it alongside those files? It certainly has the notion of revisions. Searching around, I found a now-discontinued tool made by Google called Phoshare, which allows you to export an Aperture or iPhoto library as a folder structure. The application is written in Python and makes use of Tk via Tkinter for the GUI; the Mac application is a universal binary covering *both* PowerPC and 32-bit x86, which means it won’t run on Catalina or newer, because support for 32-bit x86 binaries was dropped. Phoshare is open source and the Subversion repo is still available, but an odyssey into Python 2.x and getting all the relevant modules installed was going to be a faff. It would’ve been a good excuse to dust off a PowerBook, but the library was on a zpool and it was quicker to just import the zpool on an older MacBook Pro. 🙂

Phoshare requires Exiftool and looks for it in /usr/bin, which is off-limits on more recent versions of macOS, so you will need to disable SIP and symlink /usr/local/bin/exiftool into /usr/bin if you want to preserve metadata from your library.

Running Phoshare with default settings will recreate the folder structure as you see it within iPhoto in the location you chose to export the photos to, including a copy of the original photos in a subfolder if there are revisions of a photo, e.g. if you used iPhoto to enhance them.

With my photos exported, I initially reached for darktable, but soon realised that it does more than I need: a simple photo manager to organise photos, like Shotwell but cross-platform, would be ideal, whereas darktable leans towards processing capabilities and away from the file management side. Searching around, it turned out that digiKam is cross-platform, so I gave it a try and stuck with it. Over the past few weeks I have been trawling through disks, finding and adding photos to my library, which started out as the folder exported by Phoshare. digiKam does duplicate detection, which has made adding things a lot easier: just blindly add stuff, analyse, detect duplicates, clean up. There’s one interface behaviour which is slightly different in digiKam if you’re coming from macOS / GNOME: when drag-and-dropping things, on drop it will show a menu asking if you want to copy or move the items you’ve just dragged over. In a month of use it has only crashed on me once; otherwise it’s been great. I can now manage my photo library regardless of whether I’m on macOS or Linux, using the same application. The folder structure on the file system is represented in digiKam, which means you can organise things in your file manager of choice and that will be reflected in digiKam (you have to scan for changes, or it will pick up the changes the next time the application is started). digiKam’s metadata is stored in SQLite files (the default option), or you have the choice of pointing it at an instance of MySQL.

The Photos app does have an export feature for getting photos out, but your photos are then stored in its own internal folder structure, and information about your albums is stored and tracked in its own data files. Converting the iPhoto Library by opening it with Photos, or organising things within Photos and trying to export that exact structure, seemed to involve more fiddling than I was willing to commit to; I couldn’t get it working. Sticking with Photos wasn’t an option, as it would tie me to macOS for managing photos. I never made the switch from iTunes to Music because it had a tendency to peg the CPU, even when open and not playing anything, and they dropped support for the iPod Classic, which I continue to use to this day as my music player. The workflow now is based on using Finder to sync a library with the device, whereas before I could add music to my device from any machine with iTunes.

Screenshot of Finder with an iPod connected: enabling music sync requires wiping the existing content on the device first, so a dialogue box asks whether it's ok to remove & sync.

I have yet to look for an iTunes replacement.

HFS “Incorrect number of thread records” error

Note: this post doesn't provide a solution to the “Incorrect number of thread records” file system issue, but documents what I went through when I faced it to see if I could fix it. I think your best bet would probably be Alsoft's DiskWarrior if you need to repair this issue.

I was sitting in front of a Mac Pro which had previously suffered from boot issues due to the expansion boards holding the memory not making proper contact with the logic board. The boot issue manifested itself as the system not POSTing, with the fans spinning loudly. Luckily that issue was resolved by reseating things. The system also had a failing Cinema Display which would work fine after a “cold” power cycle (removing the power connector from the PSU and reattaching it), but would fail to wake from sleep if it had been left on. The user was not aware of this issue and assumed that whenever the system wouldn't display anything on boot it was the POST issue, and would power cycle the system, hoping to strike it lucky. Through this, the HFS+ journaled file system ended up with problems that fsck_hfs(8) was unable to fix.

Since the file system had problems, the system would try to boot: the grey screen with the Apple logo would appear along with a progress bar moving from left to right, and then after some time the system would switch off.

Booting the system in verbose mode by holding command + V showed that the system was running fsck on boot, which is what the progress bar normally indicates. It would attempt to repair the file system three times, fail, and switch off.

Connecting to another Mac via Target Disk Mode (TDM), it was possible to mount the file system read only and take a closer look. It would also have been possible to boot the system in rescue mode, but the Mac I connected to via TDM was running a newer version of macOS and I was hoping that its tooling would be better able to deal with such an issue, though the BUGS section of fsck_hfs(8) states “fsck_hfs is not able to fix some inconsistencies that it detects“, regardless.

% sudo fsck_hfs -ypd /dev/disk2s2 
/dev/rdisk2s2: starting
journal_replay(/dev/disk2s2) returned 0
	Using cacheBlockSize=32K cacheTotalBlock=32768 cacheSize=1048576K.
   Executing fsck_hfs (version hfs-522.100.5).
** Checking Journaled HFS Plus volume.
   The volume name is Macintosh HD
** Checking extents overflow file.
** Checking catalog file.
   Incorrect number of thread records
(4, 21962)
	CheckCatalogBTree: fileCount = 421327, fileThread = 421326
** Checking multi-linked files.
   Orphaned open unlinked file temp2639983
   Orphaned open unlinked file temp2651454
   Whole bunch of these temp files trimmed from listing
** Checking catalog hierarchy.
** Checking extended attributes file.
** Checking volume bitmap.
** Checking volume information.
   Verify Status: VIStat = 0x0000, ABTStat = 0x0000 EBTStat = 0x0000
                  CBTStat = 0x0800 CatStat = 0x00000000
** Repairing volume.
 	FixOrphanedFiles: nodeName for id=2671681 do not match
	FixOrphanedFiles: Created thread record for id=2671681 (err=0)
	FixOrphanedFiles: nodeName for id=2671681 do not match
	FixOrphanedFiles: Created thread record for id=2671681 (err=0)
	FixOrphanedFiles: nodeName for id=2671681 do not match
	FixOrphanedFiles: Created thread record for id=2671681 (err=0)
	FixOrphanedFiles: nodeName for id=2671681 do not match
	FixOrphanedFiles: Created thread record for id=2671681 (err=0)
** Rechecking volume.

The same repair pass then repeated a second time:

** Checking Journaled HFS Plus volume.
   The volume name is Macintosh HD
** Checking extents overflow file.
** Checking catalog file.
   Incorrect number of thread records
(4, 21962)
	CheckCatalogBTree: fileCount = 421327, fileThread = 421326
** Checking multi-linked files.
   Orphaned open unlinked file temp2639983
   Orphaned open unlinked file temp2651454
   Whole bunch of these temp files trimmed from listing
** Checking catalog hierarchy.
** Checking extended attributes file.
** Checking volume bitmap.
** Checking volume information.
   Verify Status: VIStat = 0x0000, ABTStat = 0x0000 EBTStat = 0x0000
                  CBTStat = 0x0800 CatStat = 0x00000000
** The volume Macintosh HD could not be repaired after 3 attempts.
	volume type is pure HFS+ 
	primary MDB is at block 0 0x00 
	alternate MDB is at block 0 0x00 
	primary VHB is at block 2 0x02 
	alternate VHB is at block 975093950 0x3a1ec0be 
	sector size = 512 0x200 
	VolumeObject flags = 0x07 
	total sectors for volume = 975093952 0x3a1ec0c0 
	total sectors for embedded volume = 0 0x00 
	CheckHFS returned 8, fsmodified = 1

It was possible to find the offending files by looking up the inode numbers listed in the fsck_hfs output. Note the id reported in “FixOrphanedFiles: nodeName for id=2671681 do not match” and use find(1) to look it up:

% find /tmp/p -inum 2671681
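The same lookup technique can be exercised on any scratch directory. A hedged sketch (the directory and file names below are invented for illustration, not the failing volume): ls -i prints a file's inode number and find -inum locates the path that owns it.

```shell
# Create a scratch file, read its inode number, then find it by inode.
tmpdir=$(mktemp -d)
touch "$tmpdir/example"
inum=$(ls -i "$tmpdir/example" | awk '{print $1}')
find "$tmpdir" -inum "$inum"   # prints the path to "example"
```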

Ironically, the files causing the issue were auditd(8) logs from crash reports?

I thought perhaps turning off journaling would help sidestep the issue by causing fsck to remove the offending files rather than trying to replay the journal data. hfs.util(8), tucked away in /System/Library/Filesystems/hfs.fs/Contents/Resources, lets you do that.

% sudo /System/Library/Filesystems/hfs.fs/Contents/Resources/hfs.util -N /dev/disk2s2 
Turned off the journaling bit for /dev/disk2s2

It didn’t help.

hfs.util supposedly supports a -M (force mount) option, but I was unable to get it to work. I was hoping to force mount the file system read/write and delete the two files.

I ended up wiping the disk and reinstalling macOS.

As an aside, the HISTORY section of the hfs.util(8) manual claims it was “Derived from the Openstep Workspace Manager file system utility programs“. The sources for the hfs v106 package on Apple's site shed some more light; the oldest entry in the “change history” section of hfsutil_main.c states:

13-Jan-1998 jwc 		first cut (derived from old NextStep macfs.util code and cdrom.util code).

Note the description of what main() does in a comment block inside hfsutil_main.c.

Purpose -
This our main entry point to this utility.  We get called by the WorkSpace.  See ParseArgs for detail info on input arguments.
Input -
argc - the number of arguments in argv.
argv - array of arguments.
Output -
returns FSUR_IO_SUCCESS if OK else one of the other FSUR_xyz errors in loadable_fs.h.

There are icons for HFS formatted disks from Rhapsody in the same directory: hfs_RHD.fs.tiff and hfs_RHD.openfs.tiff. These live on in hfs v556.60.1, which is the most recent version available on the site as I write this.

Forcing an Xcode command line tools reinstall in order to update

I lost some time debugging a build issue which I was unable to reproduce. It turned out I was on an older version of clang, despite both of us running the same version of macOS Catalina. Though you install the command line tools using xcode-select --install, there's no way to force a reinstall with the tool, as rerunning the command will tell you xcode-select: error: command line tools are already installed, use "Software Update" to install updates.
So updates are managed via the Software Update section in System Preferences and macOS reckons I’m up to date.

You can remove /Library/Developer/CommandLineTools and rerun xcode-select --install at which point you’ll obtain the latest version of command line tools. As a bonus while the install is in progress, macOS will serve a notice that an update is available and pop up the Software Update section in System Preferences.

When the initial install process invoked by running xcode-select completes, the update offered via Software Update disappears, and it goes back to reporting that everything is up to date.

While these two events were happening, I wondered why the initial download was clocking up hours for a 451MB file, so I fired up tcpdump to see if there was any traffic coming through; it turned out my machine was busy downloading from an IP address belonging to my ISP, via plain HTTP. I initially wrote this post thinking that command line tools are not updated across major OS version upgrades, but I'm now wondering if the cache at the ISP is stale, which is why I was not offered the update. I also was not served the iOS 14.3 update notice until sometime last week, though it was released over a month ago!

Trying to operate macOS in single user mode

Wednesday lunch time, I opened up my laptop, and in the middle of writing an email my machine froze and, after a few seconds, rebooted. Uh oh. The system sat at the grey screen for a few seconds and then the dreaded folder with a question mark began flashing, which means no bootable disk was found.

I turned the machine off and back on again: ah, a message that the machine had crashed, and on restarting once more it booted as usual, making it back into macOS before it did the same freeze and reset again. I ended up spending the rest of the day trying various things to get my data off the disk before the SSD stopped responding altogether on Wednesday night.

I thought to rule out file system issues first.
Booting into single user mode and running fsck_apfs(8) didn't get very far when I first tried: SIGINFO reported that bash was waiting, meaning it never got as far as executing fsck_apfs.
Restarting and trying to boot into recovery mode to run the file system check using Disk Utility didn't work out too well. Upon reaching the GUI, recovery mode began to spin; if things had worked, I would have been greeted with a FileVault encrypted disk to unlock, but the spinning spiral just went around endlessly, so it was back to single user mode.

In single user mode all data is accessible, but the file systems are mounted read only, with the exception of /private/var/vm which is writable. At the end of booting into single user mode, the system reports:
To mount the root device as read-write:
$ /sbin/mount -X /

But the mount command on Catalina 10.15.7 has no such option; the previously advised method of using -uw instead still worked, however.

While experimenting I noticed that I had spent a considerable amount of time in single user mode and the system never hard reset, unlike when booting normally.
I intended to copy the data to another machine; however, ifconfig reported no interfaces.
I mistakenly thought that I could load the relevant kernel extensions and slowly bring things up bit by bit that way, except you can't do any of that, because SIP prevents you:
localhost:/ root# kextload /System/Library/Extensions/some.kext
/System/Library/Extensions/some.kext has invalid signature: Trust code is disabled.
Untrusted kexts are not allowed
Kext rejected due to invalid signature: <OSKext 0xSOMEHEX [0xSOMEHEX]> { URL = "file:///System/Library/Extensions/some.kext", ID = "com.apple.foo.bar" }
/System/Library/Extensions/some.kext failed security checks; failing.

I tried several times to get back into the recovery mode environment in order to disable SIP with csrutil disable from the terminal there; out of sheer luck, the disk behaved long enough one time that I managed to get it unlocked and make it in. I disabled SIP, and while I was there I checked the disk with Disk Utility. Things started off ok, but while it was spending some time checking the Data volume, the machine hard reset. This definitely wasn't a file system issue, and it was an indicator that the hardware was misbehaving, which meant it probably wouldn't be long before I lost access to the data on there.

Back in single user mode, I confirmed SIP was disabled:
localhost:/ root# csrutil status
System Integrity Protection status: disabled.

After working my way through loading what I thought were the relevant extensions, I gave up and started looking up how to bring the system up; I was trying to get either an Apple USB Ethernet adapter, a Thunderbolt Gigabit adapter, or the built-in AirPort card to work.

To get the baseline system going you need to start kextd(8), notifyd(8) and configd(8). Once diskarbitrationd(8) is loaded, it pulls in the relevant dependencies to get networking running.

launchctl load /System/Library/LaunchDaemons/com.apple.kextd.plist
launchctl load /System/Library/LaunchDaemons/com.apple.notifyd.plist
launchctl load /System/Library/LaunchDaemons/com.apple.configd.plist
launchctl load /System/Library/LaunchDaemons/com.apple.diskarbitrationd.plist

To configure your wireless card, use airport(8), which can be found at /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport.

macOS airport(8) manual

I opted for the Thunderbolt Gigabit adapter as that was my fastest option; the interface would autoconfigure with DHCP/RS and could be configured with just ifconfig(8), but I couldn't get an NFS share mounted, which I now suspect was because I did not specify the use of a reserved port when mounting the share on the Mac:
localhost:/ root# mount_nfs -o resvport my-nfs-server:/share /net
As I was racing against time, I ended up cobbling together a USB disk formatted as HFS+ and used rsync to clone my home directory. Since the system was in single user mode, new disks would not be auto-mounted (that's what diskarbitrationd normally does) and issuing diskutil list would not work. Without diskarbitrationd loaded, it complains:
Unable to run because unable to use the DiskManagement framework.
Common reasons include, but are not limited to, the DiskArbitration framework being unavailable due to being booted in single-user mode.

and with diskarbitrationd loaded it complains
Could not start up a DiskManagement session
You can instead use fstyp(8), pointing it at device nodes to find out the file system type on the other side of each node.
Before connecting a disk, run ls /dev/disk* to see what's there already; attach the disk, then repeat ls /dev/disk* to see which new nodes have been created. Point fstyp at those device nodes to find the one carrying the file system, in this case HFS.
localhost:/ root# fstyp /dev/disk2s2

localhost:/ root# mount_hfs /dev/disk2s2 /net
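The before/after device node comparison can also be scripted. A small sketch using comm(1) to print only the nodes that appeared after attaching the disk (the temp file names are arbitrary):

```shell
ls /dev/disk* 2>/dev/null | sort > /tmp/disks.before
# ... attach the disk here ...
ls /dev/disk* 2>/dev/null | sort > /tmp/disks.after
# comm -13 suppresses lines unique to the first file and lines common
# to both, leaving only the newly created device nodes.
comm -13 /tmp/disks.before /tmp/disks.after
```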
I then began to rsync my data to the external disk with rsync -av /Users/sme /net/, and after a while the disk I/O stopped and the kernel reported:
IOAHCIBlockStorageDrive: could not recover SATA HDD after 5 attempts. terminating
completionRead: 1598: Failed read request b88146000, 4096: e00002c0
disk1s1: no such device.

Well, there’s the hardware misbehaving.

apfs_vnop_read: 7261: ### obj-id longnumber/anotherlongnumber retval 6 filesize 16388 offset 0 resid 16388 ###
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/rsync/rsync54.120.1/rsync/rsync/rsync.c(244) [sender=2.6.9]
rsync: writefd_unbuffered failed to write 185 bytes [generator]: Broken pipe (32)
disk1s5: media not present.
nx_buf_bread:592: buf_biowait() failed, error = 6, b_error = 6, buf_flags_after_io = 0x101, crypto = [encrypted composite]
_vnode_dev_read:811: *** got err 6 reading blknum 54480 (num read errs: 1)
localhost:/ root# apfs_vfsop_sync:3357: /dev/disk1: failed to finish all transactions to sync() - Device not configured(6

At this point there was nothing else possible to do but power cycle.
Over several iterations I managed to get most of my home directory copied across to the external disk with rsync before the SSD stopped responding altogether.

A week of pkgsrc #13

With the lead-up to the release of pkgsrc-2019Q2, I picked up the ball with testing on OS X Tiger again. It takes about a month for a G4 Mac Mini to attempt a bulk build of the entire pkgsrc tree, with the compilers usually taking up most days, without success. To reduce the turnaround time, I switched to attempting a small subset of packages using meta-pkgs/bulk-large. After a couple of days of compiling, I received a report in my mailbox showing breakages in key packages such as OpenSSL and libxml2.

The security/openssl issue was an upstream regression which was resolved by bringing the package up to date.

The textproc/libxml2 breakage was due to -Wno-array-bounds being passed to the compiler: the ancient version of GCC in Tiger doesn't support it, resulting in a hard cc1: error: unrecognized command line option "-Wno-array-bounds". The use of BUILDLINK_TRANSFORM.Darwin here allowed the option to be removed on the fly, confined to builds on Darwin.
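For reference, such a transform is a one-line rule. This is a sketch of the form it takes in a package Makefile (the exact placement within the libxml2 package may differ):

```make
# Strip the warning flag that GCC 4.0 on Tiger doesn't understand,
# applied only when building on Darwin.
BUILDLINK_TRANSFORM.Darwin+=	rm:-Wno-array-bounds
```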

security/libgpg-error needed __DARWIN_UNIX03 defined in order to use the variant of unsetenv(3) which returns an integer: unsetenv is called inside an if statement, and Tiger still defaults to the old implementation, which returns void, so the result cannot be tested. This results in the error sysutils.c:178: error: void value not ignored as it ought to be when building. The fix for this came from MacPorts.

There were many breakages due to Tiger lacking strnlen(3); Apple introduced support for this function in Lion. As a workaround, pkgtools/libnbcompat now includes an implementation which is used in its place by packages which specify strnlen as a requirement via USE_FEATURES, with the OS marked as missing the function using _OPSYS_MISSING_FEATURES.

databases/sqlite3 has issues with the readline included with Tiger; as a local workaround, I switched to using GNU readline.

As find(1) in Tiger lacks support for the + terminator to -exec, the ability to install Python egg modules is currently broken; I worked around this locally to progress with the bulk build. A possible next step is to make pkgsrc aware of find as a tool and substitute legacy versions, perhaps with a version from sysutils/coreutils.
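The difference the missing + terminator makes can be demonstrated on any modern find. In this sketch (scratch paths invented for illustration), + batches all matches into a single command invocation, while \; runs the command once per match:

```shell
tmpdir=$(mktemp -d)
touch "$tmpdir/a" "$tmpdir/b"
# One invocation of echo receives both paths:
find "$tmpdir" -type f -exec echo {} +
# echo is run once per matched file:
find "$tmpdir" -type f -exec echo {} \;
```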

devel/re2c was broken due to the ageing version of GNU Bison being called, which again lacked support for a feature:

  GEN      src/ast/parser.cc
/usr/bin/bison: option `--defines' doesn't allow an argument
Usage: /usr/bin/bison [-dhklntvyV] [-b file-prefix] [-o outfile] [-p name-prefix]
       [--debug] [--defines] [--fixed-output-files] [--no-lines]
       [--verbose] [--version] [--help] [--yacc]
       [--no-parser] [--token-table]
       [--file-prefix=prefix] [--name-prefix=prefix]
       [--output=outfile] grammar-file

Making use of the version in pkgsrc instead, by specifying bison in USE_TOOLS, resolved the issue.

With the advancement of language development and new standards being defined, pkgsrc grew support for specifying which versions of the C & C++ language standards a package may require, e.g. USE_LANGUAGES=c++03. This in turn passes the relevant standard to the compiler using the -std= option. If the compiler being used doesn't support the specified standard, all the tests GNU configure runs to determine compiler and language support will fail, resulting in the cryptic configure: error: C++ preprocessor "/lib/cpp" fails sanity check (cpp is in /usr/bin on Tiger). As a local workaround I have commented out the block in pkgsrc/mk/compiler.mk so that the standard is not set. I'm not sure whether a knob to skip setting the standard is worthwhile, or whether to move forward by enforcing the use of a new (external) toolchain.

With these changes, the bulk build result went from 673 packages out of 1878 to 1067 out of 1878. The resulting packages and bootstrap kit are now up on sevan.mit.edu.

The next steps are to address support for find in the pkgsrc tools infrastructure, and to stop setting 64-bit mode on G5 Macs during bootstrap, as Tiger doesn't really support it.

Thanks to viva PowerPC for the plug.

A glimpse of sevan.mit.edu as it runs currently:

A blast from the past – Amit Singh

I spent most of today tied up with getting an upspin instance operational again. Instead of running things as a standalone daemon, I proxy through a web server so that upspinserver can run as an unprivileged user on an unprivileged port, and everything should be good and happy. Except it's not: on the client side is macOS with osxfuse, and the upspin space is exposed to the OS via upspinfs. Browsing works great; manipulation through Finder, not so much. You can copy a file into place, and when the transfer is finished the file disappears from Finder, though the upspin tool shows things correctly as they should appear. I spent some time looking into the osxfuse source repositories and installer package, as there is a separate repo for the kext and I wondered where it fits in, in a world with SIP.

While going through the osxfuse source, Amit Singh's name popped up in the licences. I was a big fan of his blog, kernelthread.com, when it was active and would look forward to new posts. I fondly remember trying out the test app he wrote to demonstrate the sudden motion sensor working on Mac OS X with my 17″ PowerBook G4, and the posts complementing the Mac OS X Internals book.

Turns out he gave a talk at Google back in the day on MacFuse which is the predecessor to osxfuse.

It's a nice talk which provides some history on how the project came about and why he decided to work on it; there were moments which made me smile, especially the discussion of conditioning and deeply held (mis?)beliefs.

I also found that Apple still runs their Mailman instance, with a small number of lists being served from it, most importantly the Darwin related ones. 🙂

Back to more wrestling tomorrow, gnite!

Unable to mount or open disk images generated with Nero (.nrg file)

It appears that VirtualBox & OS X are unable to open .nrg files, despite them essentially being ISO 9660 format files.

VirtualBox reports:
Result Code:
IMedium {4afe423b-43e0-e9d0-82e8-ceb307940dda}
IVirtualBox {0169423f-46b4-cde9-91af-1e9d5b6cd945}
Callee RC:

Finder reports:
image not recognised

This turns out to be due to a footer added by Nero, which can leave the file size no longer a multiple of 2K.

Editing the file in a hex editor and removing the 72-byte footer should result in the file being usable:

28633000 45 54 4e 32 00 00 00 20 00 00 00 00 00 00 00 00 |ETN2... ........|
28633010 00 00 00 00 28 63 30 00 00 00 00 00 00 00 00 00 |....(c0.........|
28633020 00 00 00 00 00 00 00 00 4d 54 59 50 00 00 00 04 |........MTYP....|
28633030 00 00 00 01 45 4e 44 21 00 00 00 00 4e 45 52 35 |....END!....NER5|
28633040 00 00 00 00 28 63 30 00 |....(c0.|
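If a hex editor isn't to hand, the same trim can be done from the shell by copying everything except the last 72 bytes. A hedged sketch, where image.nrg and image.iso are placeholder names:

```shell
# Determine the file size portably, then copy all but the 72-byte footer.
size=$(wc -c < image.nrg)
dd if=image.nrg of=image.iso bs=1 count=$((size - 72))
```

dd with bs=1 is slow on large images; where GNU coreutils is available, truncating a copy of the file with truncate -s -72 achieves the same result much faster.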

A week of pkgsrc #9

The past few weeks have been pretty hectic; as the time until BSDCan gets shorter and shorter, I'm thinking about my talk and testing more and more in pkgsrc. Rodent@ added support for Bitrig to pkgsrc-current last month; his patches highlighted an issue with the autoconf scripts (which should be shared across core components) not being pulled in automatically. Joerg Sonnenberger resolved this issue and I regenerated the patch set. With the system bootstrapped, the next thing broken was Perl; applying the changes needed for OpenBSD resolved the remaining issues and the bulk build environment was ready. After three days, the first bulk build attempt on Bitrig was complete and a report was published. There is now a bulk build in progress with devel/gettext-tools and archivers/unzip fixed, which should free up over 8400 packages to be attempted.
For Solaris, my first bulk build on Solaris 10 completed after 22 days. In mid-April I also started bulk builds on Solaris 11 (x86 and SPARC) using the Sun Studio compilers (it's not possible to use GCC at the moment, due to removed functionality that was previously deprecated). The Solaris 11 SPARC bulk build is still in progress and the x86 bulk build is running. Unfortunately the build cluster had some connectivity issues and needed rebooting during the bulk build, but not before lots of packages had failed to fetch distfiles, hence the figures look a lot worse than they could be. Solaris 10 SPARC report, Solaris 11 x86 report.

Through bulk building on multiple operating systems, another issue that has surfaced is problematic packages holding the build up: mail/fml4 on Bitrig, www/wml on OpenBSD, FTP mirror issues for Ruby extensions on Solaris, and Xorg FTP mirror issues on OmniOS. Things need regular kicking; a brief glance into pkgsrc/mk didn't reveal any knobs which would allow preferring HTTP for fetching distfiles. On Bitrig & OpenBSD I've excluded these packages from being attempted via a NOT_FOR_PLATFORM statement in their Makefiles until I have a chance to look into the issue.

sevan.mit.edu completed another bulk build; pkgsrc-current now ships with MesaLib 10.5.3 as graphics/MesaLib, and version 7 has been re-imported as graphics/MesaLib7 by tnn@. The new MesaLib needed a patch for FreeBSD, similar to NetBSD's, to build successfully, due to ERESTART not being defined. At present it's still broken on Tiger, as I've not looked into that yet.

I revisited AIX to test out pkgsrc once again; this has turned into a massive yak shaving session. I've yet to run a bulk build successfully, as the scan stage ends with a coredump.
I originally started off using the stock system shell: bootstrap completed successfully, but the scan stage of a bulk build would just stop without anything being logged. Manually changing the shell to shells/pdksh in pkg/etc/mk.conf and pbulk/etc/mk.conf resulted in the following error message:
bmake: don't know how to make pbulk-index. Stop
pbulk-scan: realloc failed:

This turned out to be a lack of RAM. My shell account was on an AIX 7.1 LPAR running on a POWER8 host with 2 CPUs and 2GB of RAM committed; unfortunately the OS image IBM provided came with Tivoli support enabled, and a bug in the resource management controller meant RMC was consuming far more resources than it needed to. I was running with less than 128MB of RAM.
Stopping Tivoli & RMC freed up about 500MB of RAM; attempting to bulk build again caused the process to fail once more at the same stage. With a heads-up from David Brownlee & Joerg Sonnenberger, I bumped the memory and data area resource limits to 256MB.
This allowed the scan to finish, though with a segfault:
/usr/pkgsrc/pbulk/libexec/pbulk/scan[54]: 11272416 Segmentation fault(coredump).
pscan.stderr logged multiple instances of
bmake: don't know how to make pbulk-index. Stop.
The segfault generated a coredump, but it turned out that dbx, the debugger on AIX, was not installed. IBMPDP on Twitter helped by pointing to the path where some components are available for installation; unfortunately, while the dbx package was available there, some of its dependencies were not. While waiting for IBMPDP to get back to me, I fetched a new pkgsrc-current snapshot (I couldn't update via CVS because it wouldn't build) and re-set up my pbulk environment via mk/pbulk/pbulk.sh.
I should mention that when I set up initially, I'd explicitly set CC=/usr/bin/gcc, and then, while trying to get various things to build subsequently, I'd symlinked /usr/bin/cc to /usr/bin/gcc. When I came to set things up with the new snapshot, I did not pass CC=/usr/bin/gcc this time round and found that I was unable to link Perl. I'm not sure if the Perl build files assume that if /usr/bin/cc exists on AIX it's XLC, or if ld(1) takes on different behaviour, but I had to remove this symlink.
Once everything was set up, the bulk build failed again at the same place, except this time a different message was logged:
/bin/sh: There is no process to read data written to a pipe..
I edited the bootstrap/bootstrap script & devel/bmake/Makefile to set shells/pdksh as a dependency & reran the bulk build.
The scan stage again completed with a coredump, this time with pscan.stderr containing just Memory fault (core dumped).
I've committed these changes, so pkgsrc-current on AIX now defaults to using shells/pdksh as its shell, but I have not been able to try anything else, as this weekend the system is inaccessible due to maintenance.

At present, I'm attempting to bulk build pkgsrc-current on 8 operating systems:
OpenBSD (5.6-RELEASE & -current), FreeBSD, Bitrig (current), Mac OS X (Tiger), Solaris (10 & 11), and OmniOS, across 4 architectures (i386, AMD64, SPARC, PowerPC).
If I could get AIX going, that would bump the OS & architecture count up by 1. Maybe by the next post, perhaps. 🙂

Thanks to Patrick Wildt for access to a host running Bitrig and Rodent@ for adding support to pkgsrc.

Virtualising retail Mac OS X images on OS X with virtualbox

For testing changes related to OS X in pkgsrc, I revisited getting virtual machines of the various releases of OS X running, to improve test coverage. At present I'm confined to testing on Tiger and Mavericks; I also have machines running Leopard and Lion, but they need setting up.
By default, it’s not possible to boot an instance of Mac OS X from a genuine install image, on a Mac host, running OS X using virtualbox.
Searching around reveals that most people's solution is to use modified images intended for building a Hackintosh. VirtualBox supports OS X guests, but when following the usual steps in the wizard to create a new VM and pointing it at your unmodified OS image, nothing much happens.
Depending on the version of OS X you're trying to boot, you'll either end up with an XNU hang/panic or be dropped straight to an EFI prompt.
Again, the issue differs depending on the version of OS X being attempted. I've managed to install 10.7 to 10.10 successfully on VirtualBox so far; 10.5 & 10.6 remain to be done.

10.7 – Lion

With the release of 10.7, Apple changed the way the OS was packaged: the digital distribution came with a disk image named InstallESD.dmg nested inside an application named Install Mac OS X Lion.app. It's possible to use this disk image with VirtualBox as-is, without change, however the system will not boot from the image because it fails a boot loader check intended to ensure the image is being booted on a genuine Mac. In my case it is, but unfortunately the CPUID VirtualBox presents to the operating system is not one that the OS recognises, so the check fails.
The solution is to tell VirtualBox to mask the CPUID of the guest; unfortunately, depending on the version of the hardware or of VirtualBox you're using, you may have to experiment with which ID works. I first tried the ID 00000001 000306a9 00020800 80000201 178bfbff listed in the post by BitTorrent engineering, but it did not work on a Mid-2012 MacBookAir5,1 with VirtualBox 4.3.22 r98236.
Searching around, I found the ID 1 000206a7 02100800 1fbae3bf bfebfbff in a comment on another guide, and that did work.

To create a working VM of Lion in VirtualBox:
1) Create a VM in virtualbox named something, type Mac OS X, version Mac OS X 10.7 Lion (64 bit).
2) Before booting the VM, switch to Terminal and change the cpuid of the guest by running
VBoxManage modifyvm something --cpuidset 1 000206a7 02100800 1fbae3bf bfebfbff
3) Right click on Install Mac OS X Lion.app, select “Show Package Contents” and navigate to Contents/SharedSupport. Copy InstallESD.dmg to a location on your disk which is navigable.
4) Start the VM, and when asked for an install disk, point it at the InstallESD.dmg which you copied out in the previous step. The system should boot without any need for further modification (most guides recommend other changes, such as switching to a PIIX chipset).

10.8 – Mountain Lion and newer

With Mountain Lion, InstallESD.dmg changed once again, this time to contain multiple partitions (EFI, Boot/Rescue, Install). Unfortunately it's not possible to boot these images successfully, as the notion of multiple partitions is not applicable to media such as optical discs; what happens is that the system is able to boot from the disk image and load the kernel, but is unable to continue loading the install environment.
What needs to happen is that a new “flattened” image is generated, which sits on a single partition and contains everything from the boot partition.
There is no need to modify any settings for the VM, such as the cpuid as previously, or the chipset as recommended by other guides like Engadget's.

To flatten the image a tool called iESD is used.
iESD can either be installed via gem(1) or if you’re a pkgsrc user, I’ve created a WiP package.

The instructions in the Engadget guide pretty much cover everything needed. Just make sure that the disk images are fully detached before issuing the hdiutil commands; the quickest way is to open Disk Utility.app, select the mounted disk images & press eject, or to check the output of hdiutil info & use hdiutil detach $devicename to detach all device names associated with the disk images.
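The detach loop can be scripted; a sketch, where the parsing assumes device nodes appear in hdiutil info output as /dev/diskN (hdiutil itself only appears in the commented usage line, since it exists only on OS X):

```shell
# Extract the unique base device nodes (/dev/diskN) from hdiutil-style
# output fed on stdin; slice suffixes such as s1 are stripped by the match.
image_devices() {
    grep -o '/dev/disk[0-9][0-9]*' | sort -u
}

# Usage on OS X (not run here):
#   for dev in $(hdiutil info | image_devices); do hdiutil detach "$dev"; done
```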

A week of pkgsrc #7

Time again to write up what’s been happening on the pkgsrc front since last time. This time the focus hasn’t been so much around pkgsrc on Darwin/PowerPC, but more about pkgsrc in general & not necessarily code related.
At the end of the last post I mentioned a gentleman who’d been working on pkgsrc/Haiku and posting videos of his progress. I managed to make contact with him (James) & discussed the work he’d been doing on pkgsrc. He sent me a copy of the repo he’d been working from so I could assist, with the aim of getting everything upstreamed; in the current state everything would need to be reintegrated per quarterly release, rather than only having to pay attention if a new issue arises.
After getting the correct version of Haiku installed in VirtualBox, I discovered a nasty bug in the Haiku network kit: it was unable to detect when the end of a file had been reached & would continue (restart?). This was revealed when I tried to download the pkgsrc tar ball via WebPositive; ftp from the terminal was not affected, however. pkgsrc bootstrapped unprivileged without issue. Hint: use the nightly snapshots until there is a newer release than Alpha 1 available.
The integration of pkgsrc into the user-land on Haiku is not currently possible due to the way the user-land is constructed: from what I understood, each Haiku package contains a piece of the filesystem, and all the packages are union mounted to construct the user-land dynamically when the system comes up. That aside, with my system bootstrapped, I attempted to build Perl and ran into another bug: it seems that the library path for libperl is not populated on Haiku, hence Perl is able to “build” but unable to run. The workaround for this in the tree I was given was to symlink libperl into ~/pkg/lib & move on. I tried various things but was unsuccessful; I believe the problem is pkgsrc specific, as the version of Perl available in haikuports does not need any special treatment and the rpath is passed in correctly.
The difficulty was isolating the required change: whereas in pkgsrc a policy file is passed to the build to set how Perl should be built, haikuports clobbers the source & patches in a replacement. I stopped at that point.

Haiku nightly running in virtualbox

At around about this time I received the good news about the NetBSD Foundation membership and commit bit so my focus moved to reading the various developer documentation & getting familiar with processes.

sevan.mit.edu finished a bulkbuild attempt of the entire tree, which took the longest time so far to complete. Through all the build attempts I uncovered a new bug: the range to use for numerical IDs of UID/GID is not sufficient to cover all the packages in the tree that need to create an account. On further discussion with asau@, it was suggested the IDs are allocated randomly and should be fixed for consistency across builds. I started doing bulkbuilds of the entire tree on FreeBSD 10.1-RELEASE and stumbled across a very nasty bug. There is a version of the tcsh package in the pkgsrc tree called shells/standalone-tcsh; this is tcsh built as a static binary & set to install to /bin (the only package in the pkgsrc tree which violates the rules and places files outside of $PREFIX by default?). This ended up overwriting the system bundled version of tcsh in FreeBSD & then deleting /bin/tcsh when the package was removed; this was fixed promptly by dholland@. It was also discovered that Python’s configure script had not been changed when FreeBSD switched to ELF binaries and so still trimmed the name of libraries to account for the old linker, which could not handle a minor version in a library’s filename (libpython.so.1, not libpython.so.1.0). All versions of Python had been patched in the pkgsrc tree to remove this change so that a consistent naming convention was used across all platforms. After discussing this with bapt@ (FreeBSD) at FOSDEM, it turned out to be a bug and should be fixed in future Python versions once the fixes are upstreamed.

Koobs@ (FreeBSD) dug out the commit which introduced the change & the bug report.

The opportunity to join the pkg-security team came up, & for the past few weeks I’ve been getting familiar with the processes of dealing with security advisories & listing them so that users who fetch the pkg-vulnerabilities database are notified if they have any vulnerable packages installed. The general advisory process is a little infuriating; based on my recent experience I’d say at the top of my list are the Oracle security advisories, as they do not divulge any details other than “unknown” in version(s) X, PHP for the frequency, and OpenSSL for the impact. On the one hand I was quite impressed that CVE IDs were becoming so familiar that I could spot, on the fly, an advisory that had been accounted for, but on the other hand quite upset that I was using brain capacity on this. The availability of information is quite frustrating too: issues which are assigned an ID but cannot be checked on Mitre’s site take extra effort to find the necessary information to include (Mitre are responsible for allocating the CVEs!). I should note that this is from public advisories, say from a distribution. For example, CVE-2015-0240 was announced today; the Red Hat security team published a blog post covering the issue, yet the Mitre site at present says:
“** RESERVED ** This candidate has been reserved by an organisation or individual that will use it when announcing a new security problem. When the candidate has been publicised, the details for this candidate will be provided.”
The wording & the lack of information can also be frustrating because it’s not clear what is affected. Looking at it positively, the requirement for clarification on these discrepancies means I get lots of opportunities to approach new people in different communities to ask questions.

I created a new wiki article on the NetBSD wiki to start documenting the bootstrap process of pkgsrc on Solarish (illumos based) distributions. At present the article covers what’s required to bootstrap successfully on OmniOS, Tribblix, OpenIndiana and OpenSXCE.

One thing that’s clearly evident is that my workflow needs attention; at the moment things are very clumsy, involving lots of switching around, but hopefully that will be addressed in the coming month. The first thing I’ve done is set up templates for emails with the correct preferences specified, so that I just need to fill in the necessary information & hit send, and the necessary settings are automatically applied. I’m still thinking about how to deal with the scenario where the system the work is being carried out on is different to the system the patch is going to be committed from, which in turn is a different system to the one the developer is using: how to get from reading, say, a bug report to generating a patch, testing it & committing a fix, in as few steps as possible.

For testing patches on Mac OS X, I revisited running OS X as a guest on a Mac running OS X with VirtualBox. Attempts in the past had not been successful, & it seemed from search results that the only approach taken was to use modified OS images intended for hackintoshes, a route I did not want to take. I have a genuine machine & a genuine license; I shouldn’t have to resort to 3rd party images to run this. After whinging on Twitter & referencing some older links, I was able to successfully virtualise Mac OS X 10.7 to 10.10 in VirtualBox. I will follow up with the details in a separate post.

OS X Lion as a virtualbox guest

A week of pkgsrc #6

Since the last post I’ve made some further progress with pkgsrc on Darwin/PowerPC. The biggest achievement was fixing lang/ruby19-base, lang/ruby200-base and lang/ruby21-base, which accounted for the breakage of some 1500 packages (a variation of 500 or so modules for each version of Ruby). This was caused by the failure to build the DBM module, which on OS X requires the inclusion of dbm.h as well as ndbm.h, otherwise all tests fail and the module is not built. The frustrating thing is that there appears to be no documentation for the build process of Ruby; luckily, by Ruby 2.0 a comment had been added to ext/dbm/extconf.rb to shed some light on the issue:
# Berkeley DB's ndbm.h (since 1.85 at least) defines DBM_SUFFIX.
# Note that _DB_H_ is not defined on Mac OS X because
# it uses Berkeley DB 1 but ndbm.h doesn't include db.h.

pkg/49508 was committed prior to the 2014Q4 pkgsrc release but pkg/49511 and pkg/49512 were not.

cross/h8300-hms-gcc, databases/java-tokyocabinet, devel/py-argh, lang/smalltalk & net/ser include some additional files which weren’t accounted for previously; pkg/49473, pkg/49474, pkg/49476, pkg/49478, pkg/49496 & pkg/49498 fixed that.

devel/commit-patch used the -a flag for cp(1), which isn’t available on older operating systems; pkg/49475 switched to the use of -pPR instead (which -a is an alias of).
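A quick way to see the portable spelling in action (the scratch directory name is made up for the example):

```shell
# Set up a small tree to copy.
rm -rf /tmp/cp-demo
mkdir -p /tmp/cp-demo/src/dir
echo hello > /tmp/cp-demo/src/dir/file

# -pPR preserves attributes, does not follow symlinks & recurses --
# the portable equivalent of GNU/BSD cp -a.
cp -pPR /tmp/cp-demo/src /tmp/cp-demo/copy

cat /tmp/cp-demo/copy/dir/file   # prints: hello
```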

graphics/ivtools failed to build successfully due to a packaging issue: the operating system was explicitly specified in the name of one of the generated files. pkg/49497 switched to the use of the LOWER_OPSYS variable & added a missing item, which addressed the issue.

security/CSP failed at the installation stage due to the target directory not existing, pkg/49499 fixed that.

mail/nullmailer referenced uid_t & gid_t but did not include sys/types.h; pkg/49523 fixed that.

net/dnsmasq referenced SIOCGIFAFLAG_IN6, IN6_IFF_TENTATIVE, IN6_IFF_DEPRECATED & SIOCGIFALIFETIME_IN6 but did not include netinet6/in6_var.h when building on OS X, which broke the build. pkg/49524 fixed that.

lang/lua52 failed to build on Tiger due to a missing sys/types.h include; pkg/49526 fixed that.

lang/php55 bundles its own version of sqlite and requires the necessary flags to disable features which are not available; pkg/49527 fixed that, but the correct fix is to not build an entire new version solely for PHP’s use. I began to look, but had flashbacks of dealing with the same issue in TCL.

For graphics/MesaLib I looked to build it using a newer version of binutils, but it appears that support for Darwin/OS X/iOS and Mach-O is rudimentary, and hence missing in most of the tools. Support began being added upstream to binutils back in 2011 but is still not complete.

devel/cmake supports the ability to specify the location of library & header files; this can be done by creating a file which includes the necessary declarations and is passed to the configuration process using the --init flag. Indeed, the configuration process then displayed the correct versions of OpenSSL, CURL, Zlib & BZip2, among others, from pkgsrc rather than the older system bundled versions. Unfortunately the build still failed when it came to the linking stage, as the paths to the libraries were prefixed with /Developer/SDKs/MacOSX10.4u.sdk. As a kludge, just to progress with the builds, I symlinked /usr/pkg to /Developer/SDKs/MacOSX10.4u.sdk/usr, and the build then succeeded without issue. The next task is to work out how to drop the /Developer/SDKs/MacOSX10.4u.sdk prefix correctly.
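A sketch of such an init file, using cache entries in the style CMake’s bundled Find modules look for; the variable names here are assumptions for illustration, not a verbatim copy of the file I used:

```cmake
# Hypothetical init file: pre-seed the cache so the Find modules pick up
# the pkgsrc copies under /usr/pkg instead of the system bundled versions.
set(OPENSSL_ROOT_DIR /usr/pkg CACHE PATH "pkgsrc OpenSSL")
set(ZLIB_ROOT /usr/pkg CACHE PATH "pkgsrc zlib")
set(CURL_LIBRARY /usr/pkg/lib/libcurl.dylib CACHE FILEPATH "pkgsrc curl library")
set(CURL_INCLUDE_DIR /usr/pkg/include CACHE PATH "pkgsrc curl headers")
```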

There are currently 11132 packages available on sevan.mit.edu for Mac OS X/PowerPC with a new bulkbuild of the entire tree in progress. There are also Intel builds of the entire tree being attempted by Save OS X (64-bit packages) and Jonathan Perkin (32-bit packages) which should further improve support for OS X in pkgsrc.

Whilst browsing I discovered a series of videos on YouTube by DisneyDumbazz, who has also been covering his work on improving support for Haiku in pkgsrc at length.
He was also struggling with issues in Ruby, QT4 & Mesa, it seems.

A week of pkgsrc #5

Definitely more than a week; I’ve not had a chance to devote much time to this over the past few months, but have made sufficient progress to qualify another post.
The most important thing is that, apart from one PR, all previously submitted patches have now been committed to pkgsrc-current; pkg/49082 still remains.

With the introduction of GCC 4.9, the same changes needed to be applied to lang/gcc49 as with previous versions; pkg/49178 took care of that, however it highlighted another problem: 32bit & 64bit hosts running Darwin both identify themselves as powerpc in the uname(1) output, which means that GCC is always built with multilib support disabled, even when building on a 64bit host.

The pkgsrc guide pdf now has the correct date since pkg/49216, previously it reported 18/09/2007.

Some of the cross compilation tools for microcontrollers were hardcoded to use ksh to build with, when in fact it is only required for NetBSD >= 5. This caused the build to break on Tiger (presumably because of the old version of the bundled ksh); pkg/49311 fixed that for cross/binutils, and the changes were also applied to cross/freemint-binutils and devel/binutils by the maintainer.

cross/avr-libc was previously broken because it was using the system compiler & headers instead of avr-gcc and the headers installed in pkgsrc during builds. pkg/49316 fixed this issue and upgraded the version to v1.81 of avr-libc.

It’s no longer a requirement to declare the MACOSX_DEPLOYMENT_TARGET environment variable to build lang/perl5. By default Perl declares this to be 10.3, which is no longer applicable on modern systems, and when building with clang, mmacosx-version-min is specified, making it redundant. This had been removed in pkgsrc via a patch, which broke the build for GCC users: without this variable the target defaults to 10.1, and Perl needs specific attention for versions prior to Panther. pkg/49349 added the variable back in for Darwin 9 and prior, which were GCC only releases. Bug #117433 in the Perl RT was the source of the patch proposed to resolve the issue.

lang/ocaml now builds on Tiger; the workaround for the lack of support for -no_compact_unwind in the shipped linker was applicable to prior releases, not just Leopard specifically. pkg/49417 fixed that.

devel/py-py2app previously failed to build on PowerPC OS X due to an error in the PLIST: the use of the MACHINE_ARCH variable would expand to powerpc, which raised a packaging error. pkg/49418 fixed that.

graphics/MesaLib and devel/cmake still remain broken in the pkgsrc tree for Darwin PowerPC. I was able to generate a MesaLib package successfully by forcing static binaries, which allowed the previously unattempted packages to be tried in a bulk build of the entire tree. Unfortunately I hadn’t caught a merge conflict from when pkg/49077 was committed, so devel/icu was not built, which caused another large subset of packages to not be built.

Thanks to the pointer from Jonathan Perkin, after I’d resolved the merge conflict I removed the entry in /mnt/bulklog/meta/error and ran bulkbuild-restart to re-attempt building devel/icu & those which depended on it.

With these changes, there were over 10000 packages available on sevan.mit.edu, but unfortunately that included lots of duplicate packages from previous bulk builds. pkgtools/pkglint has the ability to scan packages against a pkgsrc tree & remove duplicate/stale packages; running lintpkgsrc -K /mnt/packages -pr took the number of packages down to 9200. There is an AWK based solution but I’ve not had a chance to try it.

I was able to get devel/cmake to build successfully by removing the references to /Developer/SDKs in Modules/Platform/Darwin.cmake, and subsequently to build packages such as databases/mysql56-client, but I’ve not added the changes to the tree yet. I will look to add this in a future bulk build; I want to get MesaLib linking correctly first before adding more kludges into the mix. The next thing I want to try is using a newer version of the linker from devel/binutils instead of the one bundled with Xcode.

Restrictions on Apple hardware

I was recently looking for a link I thought I’d bookmarked on how to install recent versions of Mac OS X on EoL Apple hardware, specifically the Mac Pro. I was unsuccessful in finding the link I was looking for, but I did find that you can re-flash a MacPro1,1 with MacPro2,1 EFI firmware, the main benefit being microcode updates. It turns out the hardware in the first & second generation Mac Pro is identical bar the model of CPU available. There are also modified images to bring the MacPro4,1 to 5,1, which seems to provide much more benefit than the previously mentioned modification.

This got me thinking about some of the issues I’d experienced with older Apple hardware and the workarounds; it has been a while since I’ve posted something here, so I wrote this post.

On the old world SCSI Macs (pre beige G3?), the drive vendor in the disk firmware would be identified as Apple, which the Drive Setup utility (predecessor of Disk Utility) would look for; if it was not found, you would not be able to format your drive as HFS and hence be unable to install Mac OS. The workaround was either finding another platform to format the disk, or modifying a copy of the Drive Setup utility with ResEdit & adding the drive to the necessary table.

The logic board in the first blue & white PowerMac G3 systems shipped with a buggy CMD IDE controller which would corrupt data when doing DMA transfers. Apple shipped the disks in these systems with the firmware tied to PIO mode, which was lots of fun when you came to replace the disk with a newer/bigger/faster one. To complete the replacement successfully, the new disk would need to be connected to a PC first and, using the firmware utility provided by the vendor, have the same change made to restrict the disk’s operation mode to PIO; otherwise it would not be possible to rely on the disk, as data would be corrupted as you began writing to it. There was a recall for the motherboard, if you were aware of the issue at the time.

The Mid/Late 2007 MacBook Pro (per advisory?) has the SATA port on the ICH8-M south bridge locked to SATA I even though it is capable of SATA II.

Most systems with user replaceable RAM are capable of taking more than the official specification documents list. MacTracker – an application which lists specs & information about Apple hardware – provides both the advertised & actual maximum memory capabilities of a system. Not so much a software based restriction but a documentation one.

A week of pkgsrc #4

AnyConnect login banner

Shortly after the last blog post I had access to a couple of AIX LPARs. This would be my first time on an IBM PowerPC system and AIX. I’d applied for two AIX 7.1 instances, one defined as “AIX 7.1 Porting Image” and the other as plain “AIX 7.1”. The difference at a glance seemed to be that the porting image had more GNU / common open source tools, e.g. GNU Tar, though both images had a version of GCC installed.

Using built-in specs.
Target: powerpc-ibm-aix7.1.0.0
Configured with: ./configure --disable-multilib --with-cpu=powerpc --enable-debug=no --with-debug=no --without-gnu-ld --with-ld=/usr/bin/ld --with-pic --enable-threads=aix --with-mpfr=/opt/freeware/lib --with-gmp=/opt/freeware/lib --with-system-zlib=/opt/freeware/lib --with-mpc=/opt/freeware/lib --with-mpicc=mpcc_r --with-libiconv-prefix=/usr --disable-nls --prefix=/software/gnu_c/bin --enable-languages=c,c++
Thread model: aix
gcc version 4.6.0 (GCC)

The stock image came with GCC 4.2 built on AIX 6.1, whereas the porting image came with GCC 4.6.
Alongside the open source tools, each instance also had proprietary tools installed, including IBM’s XLC compiler; cc without any options invokes a man page which describes the different commands that represent a language at a given level.

c99 – Invokes the compiler for C source files, with a default language level of STDC99 and specifies compiler option -qansialias (to allow type-based aliasing). Use this invocation for strict conformance to the ISO/IEC 9899:1999 standard.

The pkgsrc bootstrap process didn’t work too well when left to work things out for itself via cc, so I opted to use GCC specifically:

export CC=gcc

pkgsrc happily bootstrapped without privilege and I proceeded to install misc/tmux and shells/pdksh on AIX.

pkgsrc pkg_info on AIX

security/openssl comes with 4 different configuration settings for AIX: a pair of settings each for the XLC & GCC compilers, with a 32bit or 64bit target. It turned out that pkgsrc just defaulted to aix-cc (XLC with a 32bit target); pkg/49131 is now committed so the correct configuration is used. XLC successfully builds OpenSSL with a 32bit or 64bit ABI, but GCC is only able to manage a 32bit target.

To switch the compiler to XLC, declare it as the value of PKGSRC_COMPILER in your mk.conf.
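A minimal mk.conf fragment for that (PKGSRC_COMPILER is the standard pkgsrc variable; xlc is the value for IBM’s compiler):

```
PKGSRC_COMPILER=	xlc
```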

Over the week I attempted to compile components of GCC 4.8 without much success, starting off with lang/gcc48-cc++ & falling back to lang/gcc48-libs.
The build process was very unstable; again, as with Tiger/PowerPC, the build would spin off & hang, pegging the CPU until killed. Attempting to restrict the processor time via ulimit didn’t have much effect.

Alongside trying to get GCC built on AIX, I kicked off building meta-pkgs/bulk-medium on sevan.mit.edu; the previously reported unfixed components prevented some of the packages from building again (Ruby, MesaLib, cmake).

I began looking into fixing devel/cmake so that it would link against the correct version of the curl libraries & use the matching header files. Modules/FindCURL.cmake in the cmake source references 4 variables which provide some control, but I was unsuccessful in passing these through to the pkgsrc make process. While trying to resolve this issue, I also discovered that on more recent versions of Mac OS the dependencies from pkgsrc were ignored, opting for the Apple supplied versions even though the pkgsrc versions were installed:

-- Found ZLIB: /usr/lib/libz.dylib (found version "1.2.5")
-- Found CURL: /usr/lib/libcurl.dylib (found version "7.30.0")
-- Found BZip2: /usr/lib/libbz2.dylib (found version "1.0.6")
-- Looking for BZ2_bzCompressInit in /usr/lib/libbz2.dylib
-- Looking for BZ2_bzCompressInit in /usr/lib/libbz2.dylib - found
-- Found LibArchive: /usr/lib/libarchive.dylib (found version "2.8.4")
-- Found EXPAT: /usr/lib/libexpat.dylib (found version "2.0.1")
-- Looking for wsyncup in /usr/lib/libcurses.dylib
-- Looking for wsyncup in /usr/lib/libcurses.dylib - found
-- Looking for cbreak in /usr/lib/libncurses.dylib
-- Looking for cbreak in /usr/lib/libncurses.dylib - found

mail/mailman had a missing README in the PLIST, which was handled differently between Tiger & newer releases. pkg/49143 was committed to fix that.

A week of pkgsrc #3

I didn’t uncover anything new in pkgsrc last week as my attention was more on coreboot. I had previously been building different parts of the tree on a couple of Macs which were disconnected from each other & copying packages to sevan.mit.edu manually for serving; as a first pass this was a good idea, but it’s a bad ongoing arrangement. What ends up happening is that stale packages are left behind as they are unaccounted for; luckily there aren’t too many duplicates currently, but it’s something which needs to be addressed in the set of packages currently available.

There is now a page on the NetBSD wiki to keep note of issues & ideas.

To test the status of AIX support in pkgsrc I joined the IBM Power Developer Platform, which provides access to Power7/7+/8 systems running AIX 6.1 & 7.1 to build software on. This’ll be my first time on a Power system & AIX; I’m looking forward to seeing what the OS is like.

System reservation on IBM PDP

With the addition of a G5 iMac to the effort, kindly donated again by Thomas Brand, I started testing builds of lang/gcc48 on sevang5.mit.edu. The next step will be to get the two systems at MIT working together to build packages, once I’ve been able to get GCC 4.8 to build successfully.