LFS, round #4

I hadn’t made any progress for a couple of weeks, but things came together and instrumenting libc now works as expected. One example, demonstrated in section 12.2.2 of chapter 12 of the BPF Performance book, is attempting to instrument bash compiled without frame pointers, where you only see a call to the read function.

Instrumenting bash which has been built with -fomit-frame-pointer.

Compiling with -fno-omit-frame-pointer produces a stack trace within bash, but the first frame remains unresolved if libc isn’t also built with it, resulting in a hexadecimal address rather than a call to _start.

Stack trace from bash on a system with bash and libc built with -fno-omit-frame-pointer.
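For anyone following along, the kind of bpftrace one-liner behind these screenshots looks roughly like this; a minimal sketch, with the probe target (bash’s readline function) chosen for illustration rather than being the book’s exact invocation:

    bpftrace -e 'uprobe:/bin/bash:readline { @[ustack] = count(); }'
    # without frame pointers in bash/libc, ustack degrades to a single
    # frame or bare hexadecimal addresses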

I’m going to look at integrating ZFS support next but I’m thinking of sharing a build as-is for the benefit of anyone wanting to work through the BPF Performance book.

LFS, round #3

With the OS image that I wrote about in the previous post I was able to build a new distro with the various substitutions I’d made. There were three things that I wanted to mention in this post. First, it turns out Linux has a bc(1) build dependency, which I found when I omitted building it and came to compile the kernel. Second, you really need to run make defconfig before visiting the kernel menuconfig, otherwise you end up with a “default kernel config” lacking any on-disk file system support (memory and network file systems are still supported). This was the reason why the kernel was unable to find init in the previous post.
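For reference, the order of operations that avoids the broken config; a minimal sketch, assuming you are at the top of the kernel source tree:

    make defconfig     # start from sane defaults, including on-disk file systems
    make menuconfig    # then adjust options interactively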

Third, my workflow is that I have a flat file which I mount via loopback and use as the file system to build the OS in a chroot. When it comes to installing a boot loader using grub-install, if sysfs is not mounted at the expected location within the chroot, grub-install fails, complaining Unknown device "/dev/loop10p1": No such device, p1 being the child node for the first partition of the file backed by /dev/loop10. This is odd at first glance, as grub-install was pointed at /dev/loop10, the parent node. The reason is that it tries to enumerate the partitions so it can work out their start sectors via sysfs (see grub-core/osdep/linux/hostdisk.c, starting at sysfs_partition_path()). Elsewhere this might have been achieved via ioctls; either way it is asking the kernel, but the dependency on a pseudo file system being mounted in place threw me. Looking around /sys/devices/virtual/block/loop10, I see files which expose various characteristics, including the offset and start point of each partition.
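A minimal sketch of the fix, assuming the image is already attached as /dev/loop10 with its partitions scanned, and a hypothetical chroot at /mnt/lfs:

    mount --bind /dev /mnt/lfs/dev            # device nodes visible in the chroot
    mount -t sysfs sysfs /mnt/lfs/sys         # grub-install reads partition offsets here
    chroot /mnt/lfs grub-install /dev/loop10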

My focus is now on building the dependencies for bpftrace, which means getting BCC built and installed first.

LFS, round #2, 3rd try

In my previous post I ended with the binutils test suite not being happy after veering off the guide and making some changes to which components were installed. I decided to start again, cutting back on the changes to see just how much I could omit from installing and still complete chapter 8. Ideally, I would like to shed everything that’s only a build dependency. It was doable, but the sticking point was Python, which is needed by Meson/ninja in order to build systemd. Though you build Python earlier in chapter 7, at that stage it is built without the ctypes module (which requires libffi), and the ctypes module is needed by Meson.

I thought I’d cheat by using pkgsrc to satisfy the build dependencies, but its infrastructure for detecting termcap support is unable to detect support via ncursesw, which LFS uses. Opting to prefer satisfying all dependencies from pkgsrc (now the default setting for pkgsrc on Linux) created a new problem: the components you compile manually outside of pkgsrc which call pkg-config would link to the pkgsrc versions of dependencies. To side-step this issue I moved the pkg-config binary from pkgsrc out of the way, whereupon I hit an issue with linking systemd. After a night’s sleep I found that ninja in pkgsrc is patched not to adjust the rpath in binaries, and this is needed for systemd’s binaries because the libraries they depend on are tucked away in a subdirectory.

Upon completing chapter 8, I went back and started afresh once more, this time with the intent to make changes and substitutions once again. I installed bash as /bin/bash but did not create a link to /bin/sh, and was surprised to find most things were happy with that (the autoconf-generated infrastructure could cope), until I reached binutils in chapter 8, where tests call /bin/sh explicitly. At this point I installed mksh and pointed /bin/sh at it. This revealed various failures from scripts and tests in other packages built after binutils in chapter 8, most significantly in GCC’s build infrastructure. Setting the CONFIG_SHELL variable to /bin/bash when invoking configure ensured that bash was called instead of sh during the build and the test stage, as the SHELL variable inherits this setting down the line, and things moved on smoothly. I need to look at getting binutils to handle the override as well, rather than hardcoding /bin/sh.
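A minimal sketch of the workaround, assuming a configure-based package such as GCC; paths and options are illustrative:

    CONFIG_SHELL=/bin/bash ./configure --prefix=/usr
    make && make check    # SHELL inherits the bash setting down the line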

All build dependencies were installed in a separate prefix so that they could be removed after the build: m4, make, bison, Perl, Python, gawk, pkg-config (built with the pc file location set to /usr/lib/pkgconfig), autoconf, automake, libffi, check, expect, flex and TCL were installed in this location.

Python’s build infrastructure assumes the system provides libffi, and if it doesn’t, it struggles with linking. There’s a bug report to teach the build to make use of the information from pkg-config for libffi, but the proposed patch did not work in my case, as the library directory under the new prefix was not in the search path of the dynamic linker. Since I was installing Python in the same prefix libffi was already installed in, I adjusted the rpath by setting LDFLAGS.
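A sketch of the idea, assuming a hypothetical /tools prefix where libffi already lives:

    # point the linker at libffi and bake the path into the binaries
    LDFLAGS="-L/tools/lib -Wl,-rpath,/tools/lib" ./configure --prefix=/tools
    make && make install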

Besides the bash-to-mksh swap for /bin/sh, I replaced gawk with nawk once more; there was no fallout, though I did also install gawk under the new prefix as glibc requires it. tar was swapped for bsdtar from libarchive, and Man-DB for mandoc.

I skipped texinfo as it has a Perl runtime dependency and I don’t want to include Perl in the base OS. Groff was out as it has a texinfo dependency. I omitted libelf as I thought it was only used by tc from IPRoute2, which is for setting QoS policies; it turns out it’s a build dependency for the kernel, so it went back in.

With everything in place, I managed to build a kernel which for some reason couldn’t go multiuser because it couldn’t find init or sh! Comparing the config with my image from round #1 showed lots of differences, which was baffling as I thought I’d only made the changes the LFS guide suggests. Starting with a new config resolved the issue. I can only suspect that I must’ve pressed the space bar by accident when navigating the kernel config menus, which switched a bunch of stuff off. 🙁

I now have an OS image which appears to work. For the next round, to put it to the test, I am going to try to use it to build a new distro. I will need to address the binutils build infrastructure issue so that I can point it at bash, otherwise I suspect I will run into the same issues again when running the test suite (see previous post). I would also like to try to swap binutils out for elftoolchain. I have also been thinking about subsequent OS upgrades and using mtree for that.

LFS, round #1

Following on from the previous blog post, I started on the path of building a Linux From Scratch distribution. The project offers two paths, one using traditional SysV init and the other systemd. I opted for the systemd route and followed the guide; it was all very straightforward. Essentially you fetch a bunch of source archives off the internet, then run tar, configure, make, make test, make install a bunch of times with some system setup in between, before compiling your kernel, putting it into place in /boot and getting grub installed and configured.

The book assumes that you have a system running Linux already, with spare space which you use for a new partition to install the Linux distro that you build. I opted instead to create a 10GB file which I mounted via loopback, with the intention of using it as the boot disk of a virtual machine.
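The setup amounts to something like the following; a sketch, with the file name illustrative and the size from the text:

    dd if=/dev/zero of=lfs.img bs=1M count=10240   # 10GB flat file
    losetup -f --show lfs.img                      # attach; prints e.g. /dev/loop0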

The guide has 11 chapters which take you through building software on the host, then as a new user (for a clean environment), then in a chroot, with three iterations of GCC and binutils builds. For each software component that you build, the guide instructs you on how to run its test suite, what failures should be expected and why, before performing an install.

For each component that you build, the guide documents why a patch is applied and what the specified configure options mean. At the end of each section all the installed components are listed, followed by a short description of each item. Unfortunately the dependencies are not documented, but they are sort of implied by the order in which things appear in the guide.

Chapter 8 is the most laborious, with a hefty 71 packages to install/reinstall. The end result is a fully fledged environment with Autotools, Perl, Python, TCL, GCC, Vim, various libraries and compression tools. If you follow the guide every step of the way, it should work a-ok in the end, provided the test suites pass as expected at each stage.

After I finished all 11 chapters, I had to convert the flat file which I created with dd(1) to a format which VirtualBox would recognise. I wasn’t sure if any of the supported formats was simply a flat file with a new file extension, and it was quicker to convert it to a vmdk file than to work through the list to find out. On the first try it made it to the GRUB menu, which was nice.
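The conversion itself is a one-liner, assuming VirtualBox’s command-line tooling is installed; file names are illustrative:

    VBoxManage convertfromraw lfs.img lfs.vmdk --format VMDK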

Grub menu

Followed by a panic as I guessed the wrong device node to specify as root in my grub.cfg.

Panic on first boot

After re-editing the config to specify the device node hinted at in the kernel panic, I made it to the login prompt.

First successful boot

At this point I began thinking how much trouble I could get into by substituting or omitting components and started a fresh new build.

Inverse vandalism: the making of things because you can

Alan Kay, “The Early History of Smalltalk,” ACM SIGPLAN Notices Volume 28, March 1, 1993

LFS round #2 started life with nawk instead of gawk, no bash installed but mksh as /bin/sh, and BSD make instead of GNU make; no Perl, Python, TCL, gettext, bzip2 or xz. BSD make got swapped back for GNU make at the first step, as the build wasn’t happy about the sys.mk that was installed, but that will be revisited.

I made it back to chapter 8 quickly (look ma, fewer dependencies) and things began to fall apart when rebuilding glibc: it turns out it really wants bison, Python and gawk, not nawk. Glibc really wants bash as well, but its configure test is happy with mksh and passes; it became apparent it wanted bash when running the test suite, as some tests call /bin/bash specifically and stop when it is not found.

At this point my environment began behaving strangely, so I exited the chroot and then couldn’t get back in. Running strace on chroot showed that calls to execve() were returning ENOENT. Rebuilding glibc from the host environment allowed me to get back into the chroot once again, at which point I installed bash. For glibc’s Python dependency, I decided to treat Python as part of the bootstrap kit, as it seems to be only a build dependency: it got built without shared components (--disable-shared) and installed in a separate prefix, with the plan to remove it after the system is built.

From glibc I jumped to building binutils in my chroot, and again things came tumbling down during the test suite run: it could not find libgcc_s, despite the system being aware of it in its ld cache, but I haven’t had a chance to investigate any further. I feel very much lost in the bazaar, but I’m having fun. 🙂
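As an aside, the chroot failure above surfaced with something as simple as the following; a sketch, with the chroot path illustrative:

    strace -f chroot /mnt/lfs /bin/sh 2>&1 | tail   # look for execve() returning ENOENT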

How to open source: going from NetBSD to Linux

TL;DR: a BSD user tries something else and wonders why things are different.

This post has sat in draft form for quite some time. At first it was written with highlighting the NetBSD project in mind; I started thinking about revisiting it recently due to frustration with running a mainstream Linux distribution when investigating:

  • how some critical libraries I was running were built
  • what changes, if any, were made to them
  • why the source repositories for components were buried away, if available at all.

The recent article on LWN titled Toward a “modern” Emacs, which mentioned this frustration with distributions, provided sufficient confirmation bias to get this together and posted. Note: this is not intended as a bragging contest about NetBSD or pkgsrc, or a put-down of Linux; perhaps there are things I’m not grasping, and I’m expecting one to be like the other.

Each technology community has a set of norms around how they interact with their technology. With regard to obtaining software, for example, mobile users obtain theirs from an “app store”; Mac OS/Windows users traditionally would install packaged software but now also mostly obtain their software from a store. It would be odd on these platforms to be given a source archive and asked to compile the software yourself as a user (if the source code were even available to users). Unix was the opposite: it was common to receive software in source form and have to compile it yourself. By association and nature (Open Source software), so it is with GNU/Linux distributions; however, binary packages are provided and their use encouraged. The packages save a great deal of compilation time and lower the barrier for users, which again is a good thing.

I get the impression, though, that the details regarding source code and changes do not get the same spotlight, especially in a security context. For example, as I edit this post, among the most recent advisories on the Debian security page is an advisory for ModSecurity; it is fairly short, lists the CVEs and states “We recommend that you upgrade your modsecurity packages.”. If I’m interested in the actual changes to the package, they are buried five pages away from the advisory. The GUI update manager on my distro goes as far as collapsing the description panel for the updates, which I find amusing.

Software updater in Ubuntu

I agree that hiding technical detail from a user can be valid. Actually, while trying to take this screenshot I visited the bug report of the GCC update and, with a bit of clicking around, found a link to a diff of the changes. Why can’t the advisories document both paths (build your own or obtain the packages) and allow the user to choose?
I was hoping for something a bit more flexible which would allow me to use what’s in place and also allow me to rebuild the system, or parts of it, with ease should I wish or need to.
Relying on a distribution merely as a means of obtaining gratis binaries isn’t very appealing.
Using Open Source software in such a way, while completely acceptable, overlooks the opportunity to mould the software to your requirements should you be so inclined.
Consuming provided binaries and avoiding any customisation is akin to bending around an implementation, and heads in the opposite direction of what Open Source software allows you to do.
Let me clarify: I’m not saying that just because a piece of software is Open Source it must be compiled by every user themselves for maximum benefit (a talk I gave in 2019 was torpedoed by the objection that one should build their own version of Chrome or Firefox 🙂).
I’m suggesting that if you are relying on tools of an Open Source nature, you are best off owning your stack.
That is, you take an active part in projects, since through participating you are able to help shape the evolution of your tools and get insight into upcoming changes.
This makes upgrades and maintenance smoother because you are not reliant on a third party and their release cycle for updates; relying on one can mean long gaps between upgrades, and therefore big jumps between major versions bringing many changes at once when you do upgrade.
You become familiarised with the process to assemble your tools which helps when you are reasoning about your stack during debugging.
Questions like “are there local changes from your distribution?” are off the table (e.g. with Linux From Scratch).
Bad tools harbour bad habits.
The shortcomings of a bad tool are pushed onto the user/operator, who is then forced to tolerate them and work around them accordingly, leading to a clumsy workflow. See Poka-yoke.
With a system composed of many such components, it becomes harder and harder to think about new ideas, or about existing problems in a new way, because of the mental burden of coping with and adapting to what is currently in place. This leads to paralysis and survival in maintenance mode, where the system remains static and is kept running because no one dares make a change.

The enjoyment of one’s tools is an essential ingredient of successful work.

Donald E. Knuth, The Art of Computer Programming, Vol. II: Seminumerical Algorithms, Section 4.2.2 part A, final paragraph

Enter NetBSD and pkgsrc which is where I was coming from as a user.
NetBSD is an open source operating system with a focus on portability.
It has been around since the early 1990s and is the oldest actively developed Open Source operating system, as well as one of the oldest active source code repositories on the internet today. The lineage of the code base is easily traceable back to the early days of UNIX thanks to the CSRG archive repository. This is not so important as a first port of call for a new user, or for day-to-day operation, but it provides useful insight during debugging/troubleshooting.
Having the source code alone is not as useful as having access to the source repo and the history of the code base with commit messages (not that all commit messages are useful).
As with the other BSDs, the current source repository plays a prominent role on the front page of the website and is very easy to find.
pkgsrc is NetBSD’s sibling packaging system project with a similar focus on portability.
pkgsrc provides a framework to package tens of thousands of pieces of software consistently across many different operating systems.
In combination, the two provide a complete stack to compose a system with, from operating system to a suite of 3rd party software (including Chrome and Mozilla based browsers, FYI! 🙂), or to take selected components from and extend other systems with.
As an example, a feature of NetBSD is a tool called the Rump Kernel. Rump allows you to instantiate an instance of the NetBSD kernel in the user space of another operating system instance. A common use of this in NetBSD is testing: it is possible to perform tests on vital components of a system safely, and on failure the result at worst is a failed system process rather than a system crash. This saves valuable time between iterations when debugging, especially on larger systems where boot processes run into minutes (think of a server with a large number of disks attached: easily ~10 minutes or more to POST and probe a full shelf of disks before even getting to booting the operating system). Rump can also be used to supplement functionality on other operating systems, saving development time for device drivers or subsystems. An example of this is the use of Rump in the GNU/Hurd operating system to provide a sound system and drivers for sound cards.
pkgsrc, with its support for a range of operating systems, makes it possible to unify your workflow across systems when it comes to deploying software: you can run the same variety of software, with identical changes, regardless of operating system. pkgsrc also provides the flexibility to select where dependencies are satisfied from, where possible. That is, if the host operating system provides a component as standard, pkgsrc can make use of it rather than building yet another copy; or, as time goes on with legacy systems, it may be preferable not to use components provided by the host operating system at all and to rely only on pkgsrc, which is also possible. Like pkgsrc, NetBSD has its own build framework which makes it easy to build a release, or to cross-build from another operating system which has a compiler installed. It feels very much like NetBSD comes to you and you work on it from your environment of choice, rather than you having to change your environment in order to work on it, and the tools you become comfortable with you get to take with you to other platforms. You end up with a toolbox for solving problems.
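As a sketch of that flexibility, the knobs live in pkgsrc’s mk.conf; the variable names come from pkgsrc’s mk infrastructure, the values here are illustrative:

    PREFER_PKGSRC=  yes        # satisfy everything from pkgsrc
    #PREFER_NATIVE= ncurses    # or take selected components from the host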
The GNU ecosystem itself is also a vast toolbox to pick from, but I’m missing the integration and struggling with the fragmentation and the differences in project management, where any exists. Source code up on a project hosting site alone is no good, and neither is a project site without access to the source code repository; you need both to engage with a project, to be able to track changes and to participate in the community. One doesn’t replace the other.

How I ended up here: I installed Ubuntu because it provided ZFS support out of the box, so I didn’t need to worry about things like pinning kernel versions to prevent kernel updates from rendering my machine unbootable until I somehow built new modules, and I thought it would be the easiest way to work through the BPF Performance book. My experience with Linux has been with traditional distros: I started on Slackware, then moved on to RedHat 5.x, Suse 6.x, Debian (Woody) and now Ubuntu 20.04. I tried Gentoo once, about 15 years ago, but as I recall never got past building an unbootable system from their live CD environment. I have not tried more recent distributions like Arch, Void and such. I’m currently playing with Linux From Scratch.

May the source be with you

Marshall Kirk McKusick

Book review: BPF Performance Tools: Linux System and Application Observability

It’s more than 11 years since the shouting in the data centre video landed, and in 2020 I still manage to surprise folks who have never seen it with what is possible.
The idea that such transparency is a reality in some circles comes as a shock.

Without the facility to dynamically instrument a system, the operator is severely limited in the insight they can gain into what is happening using conventional tools alone. Having to resort to debugging tools to gain insight is usually a non-option for several reasons:
1) it is disruptive (the application may need to be re-invoked via tooling);
2) it has a considerable performance impact;
3) it is unable to provide a holistic view (it may provide insight into one component, leaving the operator to correlate information from other sources).
If you do have the luxury, the problem becomes: how do you instrument the system?
The mechanism offers the ability to ask questions about the system, but can you formulate the right question? This book hopefully helps with that.

To observe an application, you need both resource analysis and application-level analysis. BPF tracing allows you to study the flow from the application and its code and context, through libraries and syscalls, kernel services, and device drivers. Imagine taking the various ways disk I/O is instrumented and adding the query string as another dimension for breakdowns.

The BPF Performance Tools book centres around bpftrace but covers BCC as well. bpftrace is a DTrace-like tool for one-liners and for writing scripts in a language similar to D, so if you are comfortable with DTrace, the syntax should feel familiar, though it is slightly different.
BCC provides a more powerful and complex interface for writing scripts, leveraging other languages to compose a desired tool; I believe the majority of the BCC tools use Python, though LuaJIT is supported too.
Either way, in the background everything ends up as LLVM IR and goes through libLLVM to compile to BPF.
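To give a feel for the D-like syntax, an illustrative bpftrace one-liner (not from the book) counting syscalls by process name:

    bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'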

The first part of the book covers the technology, starting with an introduction to eBPF and moving down to cover the history, interfaces, how things work, and the tooling which complements eBPF, such as PMCs, flame graphs, perf_events and more.
A quick introduction to performance analysis, followed by BCC and bpftrace introductions, rounds off the first part of the book in preparation for applying them to different parts of a system, broken down by chapter, starting with CPU.

The methodology is clear cut: use the traditional tools commonly available to gauge the state of the system, then use bpftrace or BCC to home in on the problem, iterating through the layers of the system to find the root cause, as opposed to trying to solve things purely with eBPF.

I did not read the third and fourth parts of the book, which cover additional topics and the appendices, but I suspect I will be returning to read the “tips, tricks and common problems” chapter.
Of the first sixteen chapters, which I did read, the CPU chapter really helped me understand the way CPU usage is measured on Linux. I enjoyed the chapter dedicated to languages, especially the Bash Shell section.
Given a binary (in this case bash), it covers how you go about extracting information from it, whether or not it has been compiled with frame pointers preserved, and how you could extend the shell to add USDT probes.
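A sketch of one such check: on x86_64, functions which preserve the frame pointer save %rbp in their prologue, so a quick (if crude) test is to look for it in the disassembly. The grep pattern is illustrative:

    objdump -d /bin/bash | grep -c 'push.*%rbp'   # a near-zero count suggests -fomit-frame-pointer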
I did not finish the Java section; it was too painful to read about what needs to be done due to Java being a C++ code base with a JIT runtime (the book states it is a complex target to trace), and I couldn’t contain myself enough to read the containers *yawn* chapter.
All the scripts covered in the book have their history documented in the footnotes, which was nice to see (I like history).

I created the first execsnoop using DTrace on 24-Mar-2004, to solve a common performance problem I was seeing with short-lived processes in Solaris environments. My prior analysis technique was to enable process accounting or BSM auditing and pick the exec events out of the logs, but both of these came with caveats: Process accounting truncated the process name and arguments to only eight characters. By comparison, my execsnoop tool could be run on a system immediately, without needing special audit modes, and could show much more of the command string. execsnoop is installed by default on OS X, and some Solaris and BSD versions. I also developed the BCC version on 7-Feb-2016, and the bpftrace version on 15-Nov-2017, and for that I added the join() built-in to bpftrace.

and a heads-up is given on the impact running a script is likely to have, because some will have a noticeable impact.

The performance overhead of offcputime(8) can be significant, exceeding 5%, depending on the rate of context switches. This is at least manageable: it could be run for short periods in production as needed. Prior to BPF, performing off-CPU analysis involved dumping all stacks to user-space for post processing, and the overhead was usually prohibitive for production use.

I followed the book with a copy of Ubuntu 20.04 installed on my ThinkPad X230 and it mostly went smoothly; the only annoying thing was that user-space stack traces were usually broken due to things such as libc not being built with frame pointers preserved (i.e. not built with -fno-omit-frame-pointer).
Section 13.2.9 discusses the issue and the libc and libpthread rebuild requirement, as well as pointing to the Debian bug tracking the issue.
I’m comfortable compiling and installing software but didn’t want to go down the rabbit hole of trying to rebuild my OS as I worked through the book just yet; the thought of maintaining such a system alongside binary updates from the vendor seemed like a hassle in this space. My next step is to address that so I have working stack traces. 🙂

Besides that, I enjoyed reading the book, especially the background/history parts, and look forward to Systems Performance: Enterprise and the Cloud, 2nd Edition, which is out in a couple of months.

Lessons learnt from adding OpenBSD/x86_64 support to pkgsrc

Before even getting into the internals of operating systems to learn about differences among a group of them, it’s fairly evident that something as simple as naming differs between operating systems.

For example, the trusty 32-bit x86 PC is commonly named i386 in most operating systems; FreeBSD may also refer to it as just pc, Solaris & derivatives refer to it as i86pc, and Mac OS X refers to it as i486 (NeXTSTEP never ran on a 386, it needed a minimum of a 486, and up until Sierra, machine(1) would report i486 despite being on a Core i7 system). This is one of the many architecture names which needed to be handled within pkgsrc. To simplify things and reduce lengthy statements, all variants of an arch are translated to a common name which is then used for reference in pkgsrc. This means that all the examples above are grouped together under the MACHINE_ARCH i386. In the case of 64-bit x86, commonly referred to as amd64, we group under x86_64, or at least tried to. The exception to this grouping was OpenBSD/amd64, and this resulted in the breakage of many packages, because any special attention required was generally handled under the context of MACHINE_ARCH=x86_64. In some packages, developers had added a new exception for MACHINE_ARCH=amd64 when OPSYS=OPENBSD, but that was not a sustainable strategy because, to be effective, the entire tree would need to be handled. I covered the issue at the time in A week of pkgsrc #11, but to summarise: $machine_arch may be set at the start in the bootstrap script, but as the process works through the list of tasks, the value of this variable is overridden despite being passed down the chain at the beginning of a step. After some experimentation and the help of Jonathan Perkin, the hurdles were removed and thus OpenBSD/x86_64 was born in pkgsrc 😉

The value of this exercise for me was that I learnt the number of places within the internals of pkgsrc where something could be set (a consequence of coupling components which share the same conventions: pkgtools, BSD make), and that really the only place I should be seeking to set something is at the start of the process, having that carry through, rather than trying to short-circuit the process and repeat myself.

Thanks to John Klos, I was given control of an IBM Power 8+ S822LC running Ubuntu, which I started setting up for pkgsrc bulk builds.
The first issue I hit was pkgsrc not being able to find libc.so; this turned out to be the lack of handling for the multilib paths found on Debian & derivatives for PowerPC based systems.
This system is a little endian 64-bit PowerPC machine, which is a new speciality in itself, and so I set out to make my first mistake: adding a new check for the wrong MACHINE_ARCH. Having long forgotten the previous battle with OpenBSD/x86_64, I added a new statement to resolve the relevant paths for ppc64le systems. Bootstrap was happy with that & things moved forward. At this point I was pointed by Maya Rashish to lang/python27 most likely being broken; John had previously reported the issue and we started to poke at things. As we rummaged through the internals of pkgsrc (pkgsrc/mk) I began to realise we were heading down the wrong path of marking things up in multiple places again, rather than setting things once & propagating that through.

It turned out that I only needed to make 3 changes to add support for Linux running on little endian 64-bit PowerPC to pkgsrc (2 additions & 1 correction 😉 )
First, add a case to the pkgsrc/bootstrap/bootstrap script to set $machine_arch to the name we want to group under when the relevant machine type is detected; in this case, when Linux is running on a ppc64le host, set $machine_arch to powerpc64le (see the sketch below). As this is a new machine arch, also ensure it’s listed in the correct endianness category in pkgsrc/mk/bsd.prefs.mk; in this case, add powerpc64le to _LITTLEENDIANCPUS.
Then correct the first change, replacing the reference to ppc64le for handling the multilib paths in pkgsrc/mk/platform/Linux.mk.
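The first change amounts to something like this; a sketch of the shape of it, not the committed diff:

    # in pkgsrc/bootstrap/bootstrap: group Linux/ppc64le under powerpc64le
    case "$opsys" in
    Linux)
        case "$(uname -m)" in
        ppc64le) machine_arch=powerpc64le ;;
        esac
        ;;
    esac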

The bulk build is still in progress as I write this post, but 5708/18148 packages in, the only fallout so far appears to be the Ruby interpreters.

Goodbye Alphastation

The Alphastation was my second cool legacy UNIX workstation and the one which got me started on FreeBSD & OpenBSD. I obtained it back in the summer of 2002. I first tried Redhat Linux 7.2, which was available as a free download as a promotion to demonstrate the optimisation ability of the Compaq compiler suite for the Alpha. It was a terrible experience, consistent with my previous attempts at running Linux up to that point (I’d started off on Slackware in ’96, moved on to Redhat 5.2, followed by Suse 6.2). I soon dropped it & moved on to Debian 3.0 (Woody), which was ok, but the 7-CD set was a bit too much hassle for doing package installs, and the performance wasn’t much better compared to the “optimised” Redhat, so I moved on to NT 4.0 Workstation & FX!32 & ran that for a bit before getting bored. In the new year the FreeBSD 5.0 release was announced & Alpha was a supported platform, so I gave it a try on this machine. Armed with a copy of the handbook & the help of IRC I made a lot of progress, first by dropping 5.0 & going back to version 4.7 after being told either “x was broken in 5” or “y was a bug in 5” too many times. I was blown away by how much faster it was compared to the so-called “optimised” edition of Redhat.
Towards the end of 2003 I started thinking about trying OpenBSD as a firewall after hearing about PF, & deployed it when 3.4 was released; the Alphastation served as my gateway, connected to a 512k/128k cable modem connection. I ended up dropping it & moving to i386 when 3.5 was released, because the PHP MySQL extension was broken on alpha & I wanted to launch this blog.
After that the Alphastation was used less & less over the years, so I passed it on to a fellow techie who would appreciate it.

iPodLinux on iPod Classic

I’ve kept an eye on the iPodLinux project since I got my 120GB iPod Classic back in 2007. I was never able to try out the fruits of the project, as the last supported model was the one prior to the Classic &, according to the site, the reason was that the Classic & newer models used encrypted firmware.
I was bored tonight & decided to revisit the project to see if any progress had been made, & found the site no longer loaded. Reading the Wikipedia page revealed freemyipod, which lists the device as supported, so I gave it a go.

Why would you want to do this?

  • Support for file formats not offered by Apple, e.g. FLAC & OGG
  • Not being tied to an instance of iTunes on a specific computer

Installation is only supported via Linux or Windows & is fairly straightforward; I went with the “no iTunes installed” path on Windows and was done in a few minutes. The only slightly annoying thing is that the device needs to be formatted as part of the install process.

Flashing iPod Classic

Why would you not want to do this?

  • The Rockbox interface is clunkier than the Apple one
  • Losing the ability to use iTunes to sync music (the device presents itself as just another drive to the computer, so you need to manage getting the music onto the device yourself)

I think it was worth the effort to have gained some flexibility, & if the interface is really an issue, it is an open source project, so just roll up the sleeves and get involved!

My 1st Patch!

Woohoo! I’ve just created my 1st patch, to add support for Slackware to the iSCSI Enterprise Target software.

Read this guide if you’re interested in rolling out your own patches.

    --- Makefile.orig 2004-11-22 10:30:57.000000000 +0000
    +++ Makefile 2004-11-22 10:35:16.000000000 +0000
    @@ -28,6 +28,8 @@
                     install -v -m 755 scripts/initd.debian /etc/init.d/iscsi-target;
             elif [ -f /etc/redhat-release ]; then
                     install -v -m 755 scripts/initd.redhat /etc/init.d/iscsi-target;
    +        elif [ -f /etc/slackware-version ]; then
    +                install -v -m 755 scripts/initd /etc/rc.d/iscsi-target;
             else
                     install -v -m 755 scripts/initd /etc/init.d/iscsi-target;

iSCSI On a budget!

Following the Quick Guide to iSCSI on Linux, I managed to set up an iSCSI target host on Slackware 10, running in a virtual machine on VMware, then connected to it from the Windows 2000 box which was the VMware host! 🙂

I used the iSCSI Enterprise Target rather than the Ardis Target which the guide covers, but as the Enterprise Target is a fork of the Ardis Target there is no variation in the steps carried out.

The Windows initiator can be downloaded from here.
    The Windows Initiator can be dowloaded from here