A slow / low-end system capable of running most modern BSDs

I was looking to test a change related to buffering in cat(1) and wondered: what is the slowest system I could use that is capable of running the current versions of NetBSD, FreeBSD & OpenBSD? An old PC and the ARM-based BeagleBone Black sprang to mind immediately, then a PowerPC Mac? SPARC64?

Apart from a Sun Fire T1000, I do not have any SPARC hardware. sun4v is only supported by NetBSD & OpenBSD at present; FreeBSD/sun4v was only a pre-alpha rough cut from before the days of version 7, and sparc64 support may be going away in FreeBSD.

I considered the BeagleBone Black, but currently NetBSD-HEAD does not boot on it (port-arm/51380) and FreeBSD has issues with running DTrace (bug 211389), so that was off the list.

A G4-based PowerPC Mac is supported across my choice of BSDs; unfortunately I couldn’t get a working disk burnt from the FreeBSD ISO files to try it out on a 12″ PowerBook (bug 211488).

I settled on running i386 builds on an Alix 2c3 I have; it has 256MB RAM and a 500MHz Geode CPU, is currently running FreeBSD/i386 11-BETA3 without issue, and has no problems with any of the other BSDs. It’s a little too “modern” and high-spec for my test, though.

Running FreeBSD / OpenBSD / NetBSD as a virtualised guest on Online.net

I’ve been running a mixture of FreeBSD / OpenBSD & NetBSD as guests on a dedicated server at Online.net. While getting the operating systems installed was fairly seamless, getting networking going was not.

  1. Clients are not isolated in a layer 2 domain
  2. DHCPv6 config is broken

Clients not being isolated is not so much a problem in itself, and is what you’d typically expect if you plugged a bunch of computers into a switch with a single VLAN, or into an unmanaged switch, for example; but in a shared environment with untrusted tenants it can cause problems. Broadcast & IPv6 multicast floods aside, one is open to most of the attacks in something like THC-IPv6, due to the lack of MLD snooping which would prevent a rogue IPv6 router.

Attacks via IPv6 are not so much of a problem, as the non-RFC-compliant timer settings in their DHCPv6 setup make it unfeasible to use the offered native IPv6 connectivity: clients will fail to renew leases. Depending on the DHCPv6 client used, the amount of time it takes to fail to renew a lease will vary. dhcpcd, for example, now warns if it detects a lease that is not compliant with RFC 3315 section 22.4, “Identity Association for Non-temporary Addresses Option”.

Despite the vast address range in IPv6, and a /48 subnet being allotted free of charge, at Online.net you’ll need as many v4 addresses as the v6 addresses you intend to use. There is a way of using a /48 and allocating addresses yourself, but it’s only possible using a version of Proxmox which they provide.

You can save yourself a lot of hassle, both with configuration & with trying to deal with their support regarding IPv6, by using a Hurricane Electric tunnel. I actually found connectivity was also faster from Hurricane Electric than over the native connectivity.

For IPv4 connectivity on a guest (assuming you’re renting individual IP addresses & not a /27 prefix), you’ll need to use the default gateway IP address assigned to your host alongside the allotted IP address and a /32 prefix.

Assuming the network details are as follows
Default gateway on host:
Failover IP #1:, assigned to MAC address 00:50:56:00:01:AA
Failover IP #2:, assigned to MAC address 00:50:56:00:02:BB
Failover IP #3:, assigned to MAC address 00:50:56:00:03:CC

The MAC addresses need to be assigned to the tap(4) interface on the host.
If you’re using bhyve and your guest is using the interface tap0, this would be performed using the -s flag to configure the virtual PCI ethernet card, e.g. -s 1:0,virtio-net,tap0,mac=00:50:56:00:01:AA
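Putting it together, a full bhyve invocation might look like the following sketch. Only the virtio-net slot with the MAC override is the point here; the CPU count, memory size, disk image path and VM name are illustrative assumptions.

```shell
# Sketch of a bhyve guest with the failover MAC set on tap0
# (memory size, disk path and VM name are hypothetical).
bhyve -c 1 -m 512M -A -H -P \
  -s 0:0,hostbridge \
  -s 1:0,virtio-net,tap0,mac=00:50:56:00:01:AA \
  -s 2:0,virtio-blk,/vm/guest.img \
  -s 31:0,lpc -l com1,stdio \
  guestvm
```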

It’s then on to configuring each OS to handle a gateway which is in another subnet for IPv4 connectivity.


In FreeBSD you need to construct a route to reach the gateway address first, before you specify the default route, otherwise things will not work. So, assuming we’re going to use Failover IP #1, your configuration in /etc/rc.conf would be as follows

static_routes="gateway default"
route_gateway="-host $gateway_ip -interface $gateway_if"
route_default="default $gateway_ip"
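Before baking this into /etc/rc.conf, the same routes can be tried by hand with route(8) to confirm connectivity first; vtnet0 here is an assumed interface name, substitute your own.

```shell
# Pin a host route to the off-subnet gateway via the interface,
# then point the default route at it (interface name is an assumption).
route add -host $gateway_ip -interface vtnet0
route add default $gateway_ip
netstat -rn -f inet   # verify the resulting routing table
```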

Note, the installer at present prevents network installs; you should use an ISO image containing the distfiles. Bug 206355 has more details.


On NetBSD, configure networking using /etc/netstart.local, entering the commands you’d type at the console inside the file. Assuming failover IP #2 is going to be used for the NetBSD VM, the following would configure the guest to reach the outside world, as discussed in the NetBSD Network FAQ

ifconfig vioif0
route add -net -link -cloning -iface vioif0
route add default -ifa


On OpenBSD, configure the networking from the ethernet interface’s configuration file, hostname.if(5).

Assuming failover IP #3 is going to be used for the OpenBSD VM, the following will set up networking.


inet NONE
!/sbin/route add -net -netmask -link -cloning -iface vio0
!/sbin/route add default -ifa

It’s also possible to omit the -cloning flag, but a patch is required if you’re running the 5.9 release.

The BSD family of operating systems

At OSHUG #46 I was given the opportunity to present the BSDs to a group of open source hardware enthusiasts & speak about why this family of operating systems would benefit someone running a flavour on their hardware. A recording was made of the talk, but it may be some time before it is made available online, so I thought I’d take the time to write something up to share in the meantime.

My slides.

This line of operating systems started out life as a series of patches to AT&T UNIX, which was introduced to the University of California, Berkeley by Ken Thompson whilst on sabbatical in 1977.
From the 1BSD TAPE file included in the CSRG archive CD set

Berkeley UNIX Software Tape
Jan 16, 1978 TP 800BPI

The first release came with things such as the ex editor, a shell and a Pascal compiler, as an add-on for UNIX running on a PDP-11. Over the lifetime of the CSRG they produced releases which included vi, csh, the IPv4 TCP/IP network stack, the virtual memory subsystem (the kernel being named vmunix, parodied by Linux as vmlinuz) and UFS.
The distribution tapes were only available to AT&T licensees; over time the code base of the distribution grew increasingly independent of AT&T UNIX, while the cost of the AT&T license continued to increase, starting out at $10,000 and reaching north of $250,000 in the late ’80s. According to Kirk McKusick, there was pressure to release the independently developed components of the CSRG so the community could benefit from things such as the network stack without purchasing a costly license. This resulted in several releases comprised mostly of code developed outside of AT&T: 4.3BSD-Net/1, Net/2, 4.4BSD-Lite & Lite2. “Mostly” in that, with the release of Net/2, AT&T filed a lawsuit against the University of California for alleged code copying and theft of trade secrets.
During its lifetime, BSD saw itself run on several CPU architectures, from the DEC PDP-11 and VAX to the MIPS, HP 9000 and Motorola 68000, to name a few. These ports, along with the Power 6/32, helped improve the portability of the code base. The code base was deemed to be 90% platform independent, the remaining 10% being mostly the VM subsystem, which was platform specific. As with AT&T UNIX, portability & migration between different systems was part of the nature of the code base from the beginning.

The 4.3BSD-Net/2 code base was used as the basis for a port to the Intel 386, resulting in the 386BSD (free) & BSD/386 (commercial) releases.

The Modern BSD variants
At the time of writing there are many BSD variants in existence, each with its own area of focus. Everything still leads back to two major variants.

NetBSD was the first of the modern variants that is still actively developed; it started out life as a fork of 386BSD. The focus of NetBSD is portability, which makes porting to new hardware easier (it currently supports over 60 different ports across many CPU architectures).
Everything from a VAX, ARM- & MIPS-based Windows CE PDAs, to a Sega Dreamcast and many other systems is supported and able to run the latest version of NetBSD. There’s even a toaster which runs NetBSD.
The focus on portability also makes reusing components on other operating systems easy. For example, the packaging system, pkgsrc (forked from FreeBSD’s ports, which we’ll talk about next), supports over 20 operating systems.
This enables a consistent toolset to be used regardless of operating system.

Some of the highlights of NetBSD include ATF, unprivileged builds and portable build infrastructure using build.sh.

ATF (the Automated Testing Framework) is, as the name suggests, used to test the source code, discovering regressions in the code base in an automated manner. Results can be found on the NetBSD release engineering page.
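On an installed NetBSD system of this era, the test suite lives under /usr/tests and can be run with the ATF tools:

```shell
# Run the installed test suite and format the results
cd /usr/tests
atf-run | atf-report
```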

Unprivileged builds allow a user not only to build a copy of the operating system without elevated privileges, but also to build and install software from pkgsrc in a location they have write access to (by default, a prefix under their home directory).
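As a sketch, an unprivileged pkgsrc bootstrap looks like the following; the archive name is illustrative, and the --unprivileged flag defaults the installation prefix to ~/pkg.

```shell
# Extract pkgsrc somewhere writable (archive name is illustrative)
tar -xzf pkgsrc.tar.gz -C "$HOME"
cd "$HOME/pkgsrc/bootstrap"
# Bootstrap without root; everything lands under ~/pkg by default
./bootstrap --unprivileged
```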

build.sh, the build framework, allows NetBSD to be built on any modern POSIX-compliant operating system, freeing the person building releases to use an operating system of their choice.
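For example, an unprivileged release build for amd64 might be kicked off as follows; the source and object directory paths are arbitrary choices.

```shell
cd /usr/src            # NetBSD source checkout (location is an assumption)
# -U: unprivileged build, -O: object directory, -m: target machine
./build.sh -U -O "$HOME/obj" -m amd64 release
```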

04/05/2016 – Note Ollivier’s comment: I made a mistake when I was gathering info; I looked at the source for HEAD and checked the history of the COPYRIGHT file there, not noticing the repository started at v2.0.

Forked from the 4.4BSD-Lite code base, six months after NetBSD was started, FreeBSD’s focus was performance on i386 systems. Over time, support was added for the DEC Alpha, as this meant porting the code base to a 64-bit system and addressing any bugs which would prevent the code base from running on 64-bit systems. Many years later the project branched out and introduced support for additional platforms. Today the project boasts support for CPUs such as ARMv8, RISC-V and BERI.

Forked from NetBSD, the focus of OpenBSD is security. The project is home to many components which see wider use outside of OpenBSD, such as OpenSSH, PF (firewall), LibreSSL and others.

Forked from FreeBSD, the focus of DragonFly BSD is scalability & performance, taking the operating system in a new direction with regards to how SMP is implemented and, from there, developing a new file system called HAMMER.

No matter the flavour, documentation is a key part of the development process for the BSDs, whether it is the Design & Implementation series, which started by covering 4.3BSD in 1989 and most recently FreeBSD 10 in the fourth instalment, or each project’s own set of documentation. Documentation is important as it distinguishes intent & implementation, as well as saving a lot of question-and-answer emails.
FreeBSD has handbooks, NetBSD has guides, OpenBSD has FAQs and all projects make their man pages available online as web pages. There is even a teaching course based around The Design and Implementation of the FreeBSD Operating System, 2nd edition.

Frameworks for building embedded images
Each operating system release is a complete, self-contained bundle, containing the documentation and the toolchain necessary for building a copy of the operating system from source: release(7) on FreeBSD & NetBSD, release(8) on OpenBSD, nrelease(7) on DragonFly BSD.

For the purpose of embedding the operating system it may not be desirable to build a full blown release. Depending on the choice of variant, either the functionality is built in as standard or a project exists to assist with generating customised images with ease.

FreeBSD had PicoBSD, which has now been superseded by NanoBSD.
OpenBSD has flashrd and resflash.
NetBSD has a target for generating an image in build.sh, with customisations controlled by variables set in mk.conf.
DragonFly BSD has nrelease.

RetroBSD / LiteBSD
RetroBSD is a port of 2.11BSD (originally targeted at the PDP-11) to the MIPS M4K core found in the PIC32 microcontrollers. LiteBSD is a port of 4.4BSD to the PIC32MZ microcontrollers, which have a MIPS32 core. Due to the limited resources available, RetroBSD does not offer a network stack; of the 128KB of RAM, 96KB are available for user space applications. A compiler, editor & various utilities come bundled with the OS, so software can be developed on the PIC itself. Variants of common software titles are available to extend the system, such as an Emacs-like editor.
LiteBSD is based on a more recent version of BSD, taking advantage of the larger amount of RAM (512KB) and the MMU on the targeted microcontroller. It features a network stack.

Projects such as these take advantage of prior effort and offer the user a consistent environment from microcontroller to desktop to server. With the extensive documentation and the availability of source history, it is possible to work out at which stage in the evolution of the code base the currently running system sits, and whether a desired feature is implemented.

The development of BSD is closely tied to that of the internet. The modern BSD variants are some of the oldest communities to have collaborated over the internet on a software project. The workflow of the projects has become the standard way of developing open source software on the internet, whether it’s adhering to a style guide, developing in a publicly accessible source repository or holding a hackathon.

For a newcomer interested in an operating system to run on their hardware, it is a great opportunity to be part of a tech-savvy community working to evolve an idea started almost 40 years ago.

For a business, each project produces a mature and robust operating system that has seen many applications, running on devices from game consoles and mobile phones to cars, satellites and the International Space Station. Nearly all the projects are backed by a non-profit foundation which can act as a liaison for businesses and assist with enquiries regarding development.

A NeXT workstation in Brighton

For many years I’ve wondered if there were any NeXT systems in my home town. Search results brought up mirrors at the University of Brighton many years ago and promotional announcements of classic UNIX workstation vendors such as Solbourne for the University of Sussex. At the end of last summer I received a pleasant message from Luke on Twitter to say one had been found sitting around in a disused room that had been locked for many years.
It turns out such systems are still around, and finally one is in my hands. Internet friends are the best! 🙂
grazed NeXT logo
Despite the grazed logo, the system is in great condition. Its status is unknown due to the lack of peripherals; it’s just the slab. By the logo it appears to be a stock mono NeXTstation. I’ll leave it connected to the power before I try to power it up NeXT week!

NeXTstation label

Building an l2tp/IPsec VPN based around an OpenBSD head-end – Part 1

This is the first in a series of posts covering the building of an l2tp/IPsec VPN service which remote users (road warriors) connect to.
In this post I will begin with getting OpenBSD set up as the head-end & follow up with subsequent posts covering the configuration of the various client platforms which make up the road warriors.
Undeadly featured an article on configuring OpenBSD in 2012; things have improved since that article was posted and some of the steps are no longer required, hence I will go over the process again here.

It’s assumed you have an install of OpenBSD running that’s setup as a gateway and communicating on the network, we will continue from there.

The following snippet of config needs to be added to your PF config (/etc/pf.conf by default). It unconditionally permits the IPsec ESP & AH protocols intended for the OpenBSD host, as well as any UDP traffic for ISAKMP and to support NAT traversal.
pass quick proto { esp, ah } from any to self
pass quick proto udp from any to self port {isakmp, ipsec-nat-t} keep state
pass on enc0 from any to self keep state (if-bound)

A minimal PF config which just permits the establishment of a VPN tunnel might look like the following

set skip lo
block return
pass quick proto { esp, ah } from any to self
pass quick proto udp from any to self port {isakmp, ipsec-nat-t} keep state
pass on enc0 from any to self keep state (if-bound)

By only permitting isakmp, it enforces having a working IPsec config before anything else happens, whereas permitting UDP port 1701 would allow the establishment of an l2tp tunnel without IPsec, which in this scenario would likely be undesired.

A basic IPsec config to use a pre-shared key. The default ciphers used for main & quick mode are documented in ipsec.conf(5). The IP address configured on the OpenBSD host is the one on which connections will be accepted.

ike passive esp transport proto udp from to any port 1701 psk "password"

Note, the OpenBSD defaults are too strong for establishing a connection using the networking preferences on Apple devices, and so would need to be restricted down to auth "hmac-sha1" enc "3des" group modp1024, which is not recommended; configuring Apple systems will be covered in a separate article.

The default npppd config (/etc/npppd/npppd.conf) works as-is, with no further changes required, unless you prefer to use RADIUS for accounting instead of local user accounts.


npppd is set to use pppx(4) interfaces for established sessions; in order for these interfaces to work correctly, pipex(4) needs to be enabled:

sysctl net.pipex.enable=1

and adding net.pipex.enable=1 to /etc/sysctl.conf so it’s set on boot.

Note, hosts missing this commit (5.8-RELEASE and snapshots from today & prior) will suffer a kernel panic on the OpenBSD host upon establishment of a session by a client, if pipex(4) is not enabled.

Start isakmpd & npppd with

isakmpd -K
npppd

Load your ipsec.conf with
ipsecctl -f /etc/ipsec.conf

Your host should now be ready to accept VPN connections. Set these services to be started on boot by adding the following to /etc/rc.conf.local
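A minimal /etc/rc.conf.local fragment for OpenBSD releases of this vintage (before rcctl(8)) might look like the following sketch, assuming the -K flag used above:

```shell
# /etc/rc.conf.local: start the IKE daemon and npppd on boot
isakmpd_flags="-K"
npppd_flags=""
```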

Adventures in Open Source Software: Dealing with Security

I had the opportunity to give this talk at the London chapter of DefCon, DC4420, and consecutively at the London Perl Mongers technical meeting last week.
The subject of the talk was all the factors outside of doing security research which can make the process of dealing with advisories a daunting or a seamless one, as observed during the last year while working as part of a security team.


pkgsrc is a cross-platform packaging system from the NetBSD project, forked from FreeBSD’s ports in the late ’90s. Initially the primary target was NetBSD, but with the portability focus of the project, the list of supported platforms has grown to 23 operating systems (16 of which are currently actively worked on). Within the pkgsrc project, there is a dedicated security team whose responsibility is to audit published vulnerabilities and ensure that those which apply to software we offer packages for are listed in a file. Users download this file & use it to check their installed packages.
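On NetBSD, this check is performed with pkg_admin(1):

```shell
# Download the latest pkg-vulnerabilities file
pkg_admin fetch-pkg-vulnerabilities
# Compare installed packages against the known vulnerabilities
pkg_admin audit
```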
Other open source projects have teams who are more involved, participating in the security research process and publishing their own advisories, such as Debian or Red Hat, but that is not the main focus of our team.
There may be several reasons for this; during my talk I referred to the acquisition of a security company by Red Hat, but looking up Red Hat on Wikipedia, there doesn’t appear to be anything to suggest that.
I can say that for the pkgsrc-security team, the role is focused on filtering information: ensuring that items are listed in the pkg-vulnerabilities file, that maintainers are notified (if there is one), and co-ordinating with the release engineering team so that the necessary commits are pulled into the relevant branches. This is because we try to avoid carrying development within our tree and opt to co-ordinate with upstream to submit fixes. The majority of our changes focus on removing assumptions, to ensure things are built in a consistent manner and to allow the software to be packaged how we like.

Dealing with advisories:
The advisories which we receive range in quality and detail.
A personal favourite are the Drupal advisories. We offer the Drupal core as a package but not any of the 3rd party modules. Their advisories clearly indicate whether they apply to core or to any published 3rd party modules, and which scenario is needed for the vulnerability to be exploitable, e.g. the user must be able to upload content.

The opposite of that is independently published advisories without any co-ordination with affected parties, or independently published advisories which are disputed or not acknowledged by upstream. In these situations the role becomes more involved, in order to work out whether there is clearly an issue or not.

Then there are the Oracle advisories: “we can confirm there’s one or more problems in the following versions of software, no more details than that. Upgrade to this version at a minimum to fix said issue(s). Here’s a chart so you can evaluate the risk.”

It can be that upstream has made an announcement with the details of an issue in public but the MITRE website still lists the CVE as reserved. Ideally you’d like to list the MITRE site in pkg-vulnerabilities, as it’s where IDs are assigned and it’s self-referencing (the URL contains the CVE ID). But it’s a terrible thing to do to a user: “You have a package installed which is vulnerable to the following type of issue; follow this link to not find out any more information about it. Go fish.” Or maybe you have no choice.

Project Websites:
If the published advisory comes via a Linux distribution, it is common for the fix to reference a binary package for users to install, with perhaps further information required. In the 2000s, SourceForge was a popular host for open source projects, usually complemented by a separate web page of some kind; it’s now common for projects to exist solely as an authoritative repo on GitHub. In either scenario, a dedicated section for publishing security information is usually not found. This trend is also prevalent in large commercially backed projects which play an extremely critical role, projects such as ICU (International Components for Unicode), a project by IBM which deals with Unicode: an issue in ICU can mean an issue in Chrome/Chromium or Java.

There are also projects like QEMU which have a security page for submitting vulnerability information but never publish advisories themselves; it is common for advisories to reference a git commit email. KVM completely lacks any links related to security. QEMU has a strong link with Xen & KVM, which rely on QEMU in one way or another.
While we do not offer KVM as a package, we do at present have four different versions of Xen in our tree, as well as QEMU. This becomes a bit of a timesink when there are multiple advisories to address.

Commercial Repositories:
There are open source projects with no publicly accessible source code repo. This makes evaluating the range of affected versions difficult if the project only chooses to cover their supported versions.
ISC, up until recently (the past two years?), required paid membership to access BIND’s repo.
In the talk I referred to the ICU project here; this was incorrect. ICU advisories are either reserved or the bug report access is blocked from public view.

Relationships with projects are important, and they play a critical role in sharing not only information but code as well. Changes for 3rd party software really need to be passed to the 3rd party to take care of. If relationships have turned sour, it makes sharing somewhat difficult and has further implications when developing a project with the support of other projects in mind.
The LibreSSL project published a patches page which covered the changes needed to get affected software built, but also co-ordinated with upstream projects to get the fixes integrated. Some projects needed more pressure^Wpersuasion than others to accept the patches.

Key components & deadware:
As mentioned previously, you want changes to go upstream, not to carry changes in your own tree. But there are scenarios where this is not possible, for example when the project is no longer developed. This is a huge problem if the project is widely used, because you end up carrying local patches, which hinders auditing for vulnerabilities by consumers of the software downstream. It’s no longer a case of ensuring you have a specific version number, but of knowing which patches are applied on top of that baseline version; that then opens further questions about the patches: have you created a new issue that didn’t exist previously?
The widely used unzip utility is such an example: are you patched for CVE-2015-7696?

An example of the fragmentation of fixes carried locally is libwmf. With the announcement of some CVEs earlier this year, Jason Unovitch from the FreeBSD project discovered that there were unpatched vulnerabilities in this library going back to 2004, with patches spread across different Linux distributions, none carrying fixes for all advisories; in one case a hunk of the patch didn’t even apply. Development of libwmf stopped in the early 2000s, but it still exists as a project on SourceForge.

JasPer is another commonly used graphics library, this time for JPEG 2000; again, development ceased long ago. In this case Slackware put out an advisory for their package covering vulnerabilities from the past, going back to 2008, at which point we realised that we didn’t have the fixes either. The version in OpenBSD’s ports was vulnerable to the issues listed from 2014, but the earlier (2008) vulnerabilities had been fixed because they’d been flagged up by the compiler in OpenBSD.

Widely Deployed:
Popular projects with a large install base can greatly increase the impact of a mistake, hence local changes should be kept to a minimum to ease maintenance and auditability.
Projects can see a very fast release cycle, especially ones which have advisories published about them regularly.
Keeping local changes to a minimum reduces the effort necessary to update. With projects which rely on downstream consumers to publish information, the process is more difficult. Neither the KVM nor the QEMU project publishes any advisories themselves; at best you may have a git commit email, which may be the patch you carry locally. Thankfully the Xen project publishes advisories on QEMU, as it can be a dependency; they are able to flesh out the details of an issue a little better than a vague commit message.
I’m unsure what happens if you’re not a Linux distro and utilise KVM.

Co-ordinating with upstream:
As I mentioned, relationships are important. An understanding of, and tolerance for, difference is absolutely essential in the world of software, just as it is in day-to-day life. A common topic of disagreement is licensing: the terms expressed by said licenses and the strong opinions held by the participants in the disagreement. Whatever one’s beliefs, the need to co-ordinate with people from different groups is absolutely necessary.
Of the fixes upstreamed from LibreSSL, the author of stunnel rejected one initially but eventually changed his mind. The change in question was a two-line addition of an ifdef so that the RAND_egd function was only used if the SSL library being linked against offered such a function (already detected by autoconf). The author rejected the change, when it was submitted, on the grounds of his project’s licensing terms.

Taking bigger leaps in a software project, by trying to clean up a popular target, can amass a large collection of local patches which need to make their way upstream, as observed with the Alpine Linux project, a Linux distribution with a new libc called musl. While it’s possible to build over 13000 packages with pkgsrc on Debian 8, the package count is less than 9000 on Alpine, despite both being Linux distributions.

The submission process can be quite daunting, depending on the project. To filter out submissions which may not be sound, and to reduce the workload of the developers working on a project, some opt to require certain things, such as results from a test suite or the like. It doesn’t help matters if a project has multiple branches developed in parallel without changes being kept in sync.
Dealing with a GNU toolchain component such as GCC can be very much like this. Again, local changes amass, slowly transforming the local version of the toolchain into an extended version of upstream. While the toolchain may offer security features such as SSP (stack smashing protection), it’s not simply a case of being able to switch it on; in some cases it either doesn’t work or, worse, it results in broken binaries. Work to enable some of these features in pkgsrc began in the summer.

While not specifically security related, I was reminded of an incident on the OpenBSD mailing lists involving the author of the once popular Ion window manager. The Ion developer was frustrated with operating system projects packaging older versions of his software with local changes, as users were following up with him for support (in this case he was referring to Red Hat). So he came onto the OpenBSD mailing list to ask about updating the software to the latest version available, but that was not possible due to incompatible license changes. As usual, the combination of making demands + licensing drama didn’t work out too well.


  • Organise the information on your site; for an open source software project, your source control repo should be a single click away from your website’s main page.
  • Provide a dedicated security page or section where you post advisories.
  • Write descriptive commit messages.
  • Participate in the community and help your neighbours.
  • Even if you stop developing software, the code may live on longer than envisioned; think about what happens if/when you decide to stop (who becomes the authoritative repo?).
  • Don’t make changes locally which do not go upstream by default, for it’ll surely bite you or a member of the project later down the line.
  • Publish actual advisories for your project, don’t pass the buck.
  • Technical problems are best solved with technical solutions, e.g. a bug can still continue to exist despite adhering to a license.
  • Make the submission process to your project effortless for both parties, not just one or the other.

Unable to mount or open disk images generated with Nero (.nrg file)

It appears that VirtualBox & OS X are unable to open .nrg files, despite them essentially being ISO 9660 format files.

VirtualBox reports:
Result Code:
IMedium {4afe423b-43e0-e9d0-82e8-ceb307940dda}
IVirtualBox {0169423f-46b4-cde9-91af-1e9d5b6cd945}
Callee RC:

Finder reports:
image not recognised

This turns out to be due to a footer added by Nero, which can make the file size something that is not a multiple of 2K.

Editing the file in a hex editor and removing the footer (72 bytes) should result in the file being usable:

28633000 45 54 4e 32 00 00 00 20 00 00 00 00 00 00 00 00 |ETN2... ........|
28633010 00 00 00 00 28 63 30 00 00 00 00 00 00 00 00 00 |....(c0.........|
28633020 00 00 00 00 00 00 00 00 4d 54 59 50 00 00 00 04 |........MTYP....|
28633030 00 00 00 01 45 4e 44 21 00 00 00 00 4e 45 52 35 |....END!....NER5|
28633040 00 00 00 00 28 63 30 00 |....(c0.|
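If you’d rather not reach for a hex editor, the trailing footer can also be chopped off with truncate(1). The sketch below fabricates a dummy file with a fake 72-byte footer and strips it; the file name and payload size are illustrative:

```shell
# Build a dummy image: 8 KiB of 2K-aligned payload plus a fake 72-byte footer
dd if=/dev/zero of=dummy.nrg bs=2048 count=4 2>/dev/null
head -c 72 /dev/zero >> dummy.nrg
# Strip the trailing 72 bytes, restoring a size that is a multiple of 2K
truncate -s -72 dummy.nrg
wc -c < dummy.nrg   # 8192
```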

Running BSDi BSD/OS on VirtualBox

By default, the BSD/OS kernel recognises the CPU of a VirtualBox guest VM as a Pentium II. The kernel is able to boot correctly but performing any I/O results in failure due to memory errors. Adjusting the CPU mask of the VM from the host resolves this issue.

Note the name or GUID of the VM to be modified
% VBoxManage list vms
"BSDi BSD/OS" {36772f8c-ec06-4f37-a995-662fc38ad103}

Adjust the CPU of the VM
VBoxManage modifyvm "BSDi BSD/OS" --cpuidset 1 0x4a7 0x7100800 0x17bae3ff 0xbfebfbff

Obtained from OS2 Museum.

A week of pkgsrc #12

To fill in the gap since the last post, I thought I’d post the notes which had been collecting up. pkgsrc got a mention in the Quarterly FreeBSD status report. My bulkbuild effort started on FreeBSD/amd64 10.1-RELEASE but, thanks to my friend James O’Gorman, I was able to expand to FreeBSD 11-CURRENT and recently switched over from 10.1-RELEASE to 10.2-RELEASE.
I got the idea to try pkgsrc on Android after someone posted a screenshot of their Nexus 7 tablet with the bootstrap process completed.

There are several projects on the Google Play store for running a user land built from a Linux/arm distro in a chroot on Android.
The first project I tried was Debian noroot (based on the tweet that inspired me); it spawned a full X11 desktop, so the process was painfully slow.

Switching to GNUroot Debian, which just ran a shell in the chroot, was much faster at extracting the pkgsrc archive, though bootstrap still took a long time. The best result was with Linux Deploy using an Arch Linux user land; everything was very snappy.

On Mac OS X Tiger PowerPC, GCC 5 appears to no longer require switching off multilib support when building on a 32-bit PowerPC CPU; my hardware has changed but the CPU is still a G4. The same changes as with previous versions of GCC were otherwise required: forcing DWARF 2 and removing the space between flags and the paths fed to the linker.

I spent a little time with OmniOS and “addressed” the outstanding issues which prevented it from working out of the box. shells/standalone-tcsh was excluded on OmniOS, which prevented the version of tcsh shipped with the OS from being clobbered during bulkbuilds. The other issue appeared at first to be a problem with gettext but turned out to be an issue with the compiler shipped with OmniOS, and it became a topic of discussion on what the correct solution to the problem is.

The GCC provided with OmniOS is built with Fortran support and includes the OpenMP libraries (I’m guessing Fortran is the reason for their presence) in its private lib directory inside /opt/gcc-4.8.1/lib. It turns out that gettext will make use of the OpenMP libraries if it detects them during the configure stage; I’ve not been able to find a concrete answer for why, and the GCC documentation doesn’t say more than a paragraph about the OpenMP library itself (libgomp) either. The problem was that GCC was exposing its private library directory in the link path but not in the run path, which meant you could produce binaries which compiled fine but would not run without playing around with the runtime linker.

In my case I’d previously added the private library location to the runtime linker’s search path as a workaround; this time I disabled the OpenMP support in devel/gettext-tools, and that’s where the discussion began. Basically, it’s not possible to expose the private library location to the linker because that would cause issues with upgrades, and the location should not have been exposed by the compiler in the first place (I guess this was for the convenience of building the actual release of the OS?). Richard Palo pursued the issue further and I’m informed that future releases of OmniOS will move libgomp out from this private location to /usr/lib so that it’s in the default library search path.

With the introduction of the GPLv3 license, GNU projects have been switching to the new license. This causes problems for projects outside the GNU ecosystem which utilise them, if the terms of the new license are unacceptable to them. Each project has dealt with it differently; OpenBSD maintains the last version which was available under GPLv2 & extends the functionality it provides, and Bitrig has inherited some of this through the fork. The bulkbuilds revealed that the upstream version of binutils has no support for OpenBSD/amd64 or Bitrig at all. Adding rudimentary support was easily achieved by lifting some of the changes from the OpenBSD CVS repo. While at present I’m running bulkbuilds against a patched devel/binutils which I’ve not upstreamed or committed for either OpenBSD or Bitrig, I am thinking that for OpenBSD we should actually just use the native version and not attempt to build the package. For Bitrig, there is already a separate package in their ports tree for a newer version of binutils; it’s pulled in alongside other modern versions of tools under the meta/bitrig-syscomp package, so it makes sense to mimic that behaviour.

Coming to the realisation that stock freedesktop components were not going to build on OpenBSD, I switched to using X11_TYPE=native to utilise what’s provided by Xenocara. Despite the switch, pkgsrc still attempted to ignore the native version of MesaLib and try to build its own, the build would fail and prevent a couple of thousand packages from building.
This turned out to be because of a test to detect the presence of X11 in mk/defaults/mk.conf: it was testing for the presence of an old path which no longer exists. As this test would fail, the native components would be ignored & the pkgsrc components preferred. The tests for OpenBSD & Bitrig were removed & those platforms now default to an empty PREFER_PKGSRC variable. The remaining platforms need to be switched over after testing.

As Mac OS X on PowerPC gets older and older with time, the need to define MACOSX_DEPLOYMENT_TARGET comes up ever more often; Ruby now makes use of it & unless it’s defined, you will find that it’s not possible to build the Ruby interpreter any more. I am considering setting MACOSX_DEPLOYMENT_TARGET="10.4" for PowerPC systems running Tiger or Leopard so that packages could be shared between the two, but have not had a chance to test on Leopard yet to commit it. I somehow ended up on a reply list for a ticket in the Perl RT dealing with this exact issue there; they opted to cater for both legacy & modern versions of OS X by setting the necessary variables where needed.
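A sketch of what that setting might look like in mk.conf (illustrative only; whether this is better handled in mk.conf or in pkgsrc’s platform files is exactly the open question):

```make
# Hypothetical mk.conf fragment: pin the deployment target so packages
# built on Tiger and Leopard PowerPC systems remain interchangeable.
MACOSX_DEPLOYMENT_TARGET=	10.4
```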

Getting through the backlogged notes, to be continued.

Hipster keyboard layout on Windows

Windows supports the Dvorak keyboard layout natively, out of the box, so there is no tinkering required beyond visiting the Control Panel & selecting the desired layout.

To switch the location of the control & caps lock keys however, you need to modify the registry & reboot. I’ve uploaded a registry snippet which can be applied (taken from Windows 7). It implements the changes covered in a post on kodiva.com.
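For reference, a Scancode Map of the kind such a snippet contains looks like this. It swaps Caps Lock (scancode 0x3A) and left Ctrl (0x1D); the value layout (8 zero bytes of header, a count of entries, the {new, original} scancode pairs, a null terminator) follows Windows’ documented scancode-remapping scheme, though I’d suggest comparing against the uploaded snippet before applying:

```reg
Windows Registry Editor Version 5.00

; Swap Caps Lock (0x3A) and left Control (0x1D).
; 8-byte header, count of 3 DWORDs (2 mappings + terminator),
; then {new,original} scancode pairs, then a null terminator.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,03,00,00,00,1d,00,3a,00,3a,00,1d,00,00,00,00,00
```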

EuroBSDcon 2015

I unfortunately will not be presenting my talk at EuroBSDcon 2015 later this week. A family emergency that developed while I was in Ottawa earlier this year came to a head in early August. Things had been pretty hectic up until this point and I didn’t feel up to buttoning down for the next two months to work so I decided to cancel my talk as I just wanted to switch off. Life is now back in motion again as of earlier this month and I intend to pick up from where I left off with this project next month to resubmit next year. I’m sorry I will not be there in Sweden to enjoy the conference with some of you but hopefully see you in 2016 for the next round!

Book review: The Design and Implementation of the 4.3BSD UNIX Operating System

The Design and Implementation of 4.3BSD UNIX Operating System
According to my photographs, I picked up this book in February of this year. With 105 sections spread over 13 chapters, I’ve been working through the book slowly at a section a day. Despite being a technical subject, the book does a very good job of explaining the operating system at a high level without becoming a study of the source code. There are snippets of source code & pseudo code to complement the text, and an extensive list of papers for reference at the end of each chapter for those who wish to dig deeper.

I had previously attempted to complete the Minix book, Operating Systems: Design and Implementation, but struggled with the extensive source reference; switching back and forth between chapters, or needing a computer to view the source code, was not viable for me. I took a chance on this book as used copies are available on Amazon for the cost of postage, which is less than a couple of pounds. The book is well written and enjoyable to read; while the implementation details may not be completely applicable to modern BSD variants, the fundamental details may still hold true in most cases, and if not they provide historical background on the technical challenges faced at the time. What I liked about the Minix book was that it provided lots of background to accommodate a beginner and get a reader up to speed, though I much preferred being able to read this book by itself without requiring access to the source code.

I found some of the details in the interprocess communication part a little unclear at times, but enjoyed the filesystem and memory management chapters the most and the terminal handling chapter the least, though I did learn of Berknet there, as well as many other historical artefacts throughout the book, some of which I tweeted under the hashtag di43bsd.

Berknet is an obsolete batch-oriented network that was used to connect PDP-11 and VAX UNIX systems using 9600-baud serial lines. Due to the overhead of input processing in the standard line discipline, a special reduced-function network discipline was devised.

The 4.3BSD kernel is not partitioned into multiple processes. This was a basic design decision in the earliest versions of UNIX. The first two implementations by Ken Thompson had no memory mapping at all, and thus made no hardware-enforced distinction between user and kernel space. A message-passing system could have been implemented as readily as the actually implemented model of kernel and user processes. The latter was chosen for simplicity. And the early kernels were small. It has been largely the introduction of more and larger facilities (such as networking) into the kernel that has made their separation into user processes an attractive prospect — one that is being pursued in, for example, Mach.

The book breaks down the percentage of components in each category (such as headers) which are platform independent and platform specific. With a total of 48270 lines of platform independent code versus 68200 lines of platform specific code, the 4.3BSD kernel was largely targeted at the VAX.

From the details on the implementation of mmap() in the BSD memory management design decisions section, it was interesting to read about the virtual memory subsystems of old:

The original virtual memory design was based on the assumption that computer memories were small and expensive, whereas disk were locally connected, fast, large, and inexpensive. Thus, the virtual-memory system was designed to be frugal with its use of memory at the expense of generating extra disk traffic.

It made me think of Mac OS X 10.4 (Tiger) as that still struggled with the same issue many years on which I have to suffer when building from pkgsrc. Despite having a system with 2GB of RAM, memory utilisation rarely goes above 512MB.

The idea of having to compile the system timezone into the kernel amused me, though it was stated that with 4.3BSD Tahoe, support for the Olson timezone database that we are now familiar with was first added, allowing individual processes to select a set of rules.

I enjoyed the filesystem chapter as I learnt about the old Berkeley filesystem and the “new” one which evolved into what we use today: the performance issues with the old filesystem due to the free list becoming scrambled with the age of the filesystem (in weeks), resulting in longer seek times, and the amount of space wasted as a function of block size.

Although the old filesystem provided transfer rates of up to 175 Kbyte per second when it was first created, the scrambling of the free list caused this rate to deteriorate to an average of 30 Kbyte per second after a few weeks of moderate use.

The idea of being rotationally optimal to reduce seek times and implementing mechanisms to account for that was very interesting to read about.

To simplify the task of locating rotationally optimal blocks, the summary information for each cylinder group includes a count of the available blocks at different rotational positions. Eight rotational positions are distinguished, so the resolution of the summary information is 2 milliseconds for a 3600 revolution-per-minute-drive.

Though this is not so valid today with traditional spindle disks, as there is no longer a 1:1 mapping between the physical location & the logical representation of the blocks on disk.
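The 2-millisecond figure in that quote can be checked with a little arithmetic:

```python
rpm = 3600
positions = 8  # rotational positions distinguished per cylinder group

ms_per_revolution = 60_000 / rpm            # 3600 rpm -> 16.67 ms per turn
resolution_ms = ms_per_revolution / positions

print(round(resolution_ms, 2))  # ~2.08 ms per rotational position
```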

The book is a bargain second hand and worth it for the BSD archaeology.

Two months after the beginning of the first implementation of the UNIX operating system, there were two processes, one for each of the terminals of the PDP-7. At age 10 months, and still on the PDP-7, UNIX had many processes, the fork operation, and something like the wait system call. A process executed a new program by reading a new program in on top of itself. The PDP-11 system (first edition UNIX) saw the introduction of exec. All these systems allowed only one process in memory at a time. When PDP-11 with memory management (a KS-11) was obtained, the system was modified to permit several processes to remain in memory simultaneously, in order to reduce swapping. But this modification did not apply to multiprogramming, because disk I/O was synchronous. This state of affairs persisted into 1972 and the first PDP-11/45 system. True multiprogramming was finally introduced when the system was rewritten in C. Disk I/O for one process could then proceed while another process ran. The basic structure of process management in UNIX has not changed since that time.

A week of pkgsrc #11

It’s been a while since the last post in the series; the details covered in these posts were the partial basis of my talk at BSDCan, and I got to repeat the talk again in Berlin. I was much less nervous the second time; not having a fire alarm go off during the talk may have helped. I will briefly cover some things mentioned in the talks which I hadn’t written up here, for the sake of completeness.
Thanks to the DragonFlyBSD folks, I have access to a build server for doing regular bulkbuilds on. As I’m running these as an unprivileged user, there’s not much parallelism in the package builds; it’s one package at a time. The system, aptly named Monster, is a 48-CPU Opteron server with 128GB of RAM, so I can at least run with MAKE_JOBS set to 96. At the start of the bulkbuilds, pkgsrc revealed some deadlock issues in DragonFlyBSD which Matt Dillon addressed promptly.

On the Bitrig front, I managed to add support for the OS to lang/python27, which was the package causing the biggest breakage, and am now in the process of trying to get the support added upstream. There appears to be a bug report from 2013 in the Python bug tracker to add support, but it was marked as won’t fix; I’m hoping the decision will be changed but will have to wait and see.
With Python 2.7 built successfully it was onto the next set of breakages, gettext!
I had taken a patch from OpenBSD ports to get devel/gettext-tools building, but was asked to back it out as it was not the correct solution to the problem. I decided to reapply the fix in my build just to progress to the next hurdle. The next major breakage was with devel/p5-gettext, which needed to be told to include libiconv; I’m now stuck on getting converters/help2man building.
During this process I found that we were missing some necessary flags for creating shared libraries which were highlighted by clang:
relocation R_X86_64_32S can not be used when making a shared object; recompile with -fPIC

This turned out to be a bug in the platform support: the necessary -fPIC flags were defined, but under an if statement for versions of the OS still running with a.out binaries. mk/platform/Bitrig.mk was stripped of anything related to a.out and everything was rebuilt again from scratch.

OpenBSD and Bitrig probably have many more breakages due to the fact that their architecture is detected as amd64 and not under the x86_64 banner by the build system. One example is x11/libdrm which is set to add sysutils/libpciaccess as a dependency if the host is a i386 or x86_64.
At present libdrm fails at the configure stage with
checking for PCIACCESS... no
configure: error: Package requirements (pciaccess >= 0.10) were not met:

No package 'pciaccess' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables PCIACCESS_CFLAGS
and PCIACCESS_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

Trying to add OpenBSD to the x86_64 arch list revealed a problem in pkgsrc, the culprit being devel/bmake.
The problem is that there are three separate points where the architecture is defined: in the bootstrap script, in BSDMake’s own source, and in the settings it passes on to pkgtools/pkg_install. Unfortunately the settings defined at the start in bootstrap are ignored at the bootstrap stage & are not necessarily what pkg_install is built with. To add to this, it’s possible that BSDMake may need to work out what the system is for itself rather than be expected to have settings passed to it. That is, they should build with settings passed down in succession or independently.
With severe bludgeoning of code between devel/bmake and pkgtools/pkg_install, I managed to get it to
pkg_add: OpenBSD/x86_64 5.7 (pkg) vs. OpenBSD/amd64 5.7 (this host)
pkg_install performs a check of the OS it’s running on against the settings it was built with (the settings bmake passed it during bootstrap); removing the check revealed there was nothing else preventing things from working, but the check needs to be there.

For OmniOS, a major component in the OS which caused many packages to break was the bundled gettext, failing during builds as it could not find libgomp from the (also bundled) GCC. As a temporary workaround to see how the build would progress if libgomp could be found, I added the lib directory to the search path of ld using crle(1).

Configuration file [version 4]: /var/ld/ld.config
Platform: 32-bit LSB 80386
Default Library Path (ELF): /lib:/usr/lib:/opt/gcc-4.8.1/lib
Trusted Directories (ELF): /lib/secure:/usr/lib/secure (system default)

Command line:
crle -c /var/ld/ld.config -l /lib:/usr/lib:/opt/gcc-4.8.1/lib

It was possible to build 13398 packages out of a possible 16536 with this workaround in place.

With the help of Joerg Sonnenberger at pkgsrcCon, I added support for fetching the OS version info on OmniOS & SmartOS for use in build reports; this should mean that these operating systems will be reported correctly rather than as SunOS 5.11.

sevan.mit.edu is back online as a G4 Mac Mini with 128GB SSD. It’s yet to complete its first bulkbuild since the rebuild but it’s nearly finished as I type this.

It’s now possible to build more than 14100 packages on FreeBSD 10.1-RELEASE with pkgsrc.

A week of pkgsrc #10

Following on from last week’s post, I forgot to mention building on OpenBSD/sparc64 via an LDOM running on a Sun T5210; this was even more painful than the Solaris counterpart and took the best part of a month. Some of this delay was initially caused by problematic packages which held up the build, some by not parallelising the builds, and again by issues with FTP mirrors.
devel/electric-fence was another of the packages responsible for holding up the build that I didn’t mention in the previous post. During the build it runs a binary called eftest and that’s it; it’s stuck there until killed.
The LDOM I was running in was allocated 4 vCPUs but the build was running single-threaded. Defining MAKE_JOBS=4 in pkg/etc/mk.conf and recompressing the bootstrap kit (bootstrap.tar.gz) helped this situation. To work around the FTP issues, bulkbuilds were switched to HTTP only, thanks to a pointer from Joerg Sonnenberger. As defined in pkgsrc/mk/defaults/mk.conf:
# Same as MASTER_SORT, but takes a regular expression for more
# flexibility in matching. Regexps defined here have higher priority
# than MASTER_SORT. This example would prefer ftp transfers over
# anything else.
# Possible: Regexps as in awk(1)
# Default: none

Setting MASTER_SORT_REGEX= http://.*/ in pkg/etc/mk.conf and recompressing the bootstrap kit ensured builds used HTTP from there on.

The bulkbuild report showed lots of fallout from packages which hadn’t been updated yet to support LibreSSL e.g. net/wget expects DES support.
Rodent@ fixed lang/python27 with the changes due for subsequent releases of Python.
Bernard Spil fixed Heimdal, which failed due to the lack of RAND_EGD in LibreSSL; these fixes will be in the next release of Heimdal (1.6.0?). Backporting the changes to 1.5.3, the current release, resolved the issue with the lack of RAND_EGD, but the build then failed on a kerberised telnet due to changes in the OpenBSD IPv6 stack which removed functionality telnet was expecting to be there. There is no fix for the issue in OpenBSD ports, as Heimdal there is set to build without legacy and insecure protocols such as telnet and rsh.

Due to the connectivity issues on the OpenCSW build cluster, I erased the error report and restarted the bulkbuilds on Solaris 10 SPARC and 11 x86 to re-attempt everything that had failed during the previous run for whatever reason. The Solaris 10 SPARC bulkbuild has now finished with a total of 7389 packages built, up from 5701 previously. I discovered a particularly nasty bug with lang/gcc3-c++ which cost 3 days, as the configure stage ran over and over again before being killed manually.

My access to the AIX LPAR expired, taking with it what I had previously tried. I requested access again, but this time also requested access to an LPAR running SUSE 12 on POWER8 as well.
Still no further with the AIX LPAR, but I managed to get a bulkbuild going on the SUSE 12 one with a little bit of assistance.
The first thing which needed to be done was to specify the ABI and the suffix applied to the library search path. This is because the system is 64-bit without any 32-bit libraries installed, and by default pkgsrc opts for 32-bit unless set otherwise. When attempting to bootstrap initially, it failed with ERROR: bin/digest: missing library: libc.so.6. I initially set off down the wrong path of trying to locate the glibc-32bit RPM for SUSE on POWER before realising what was actually required; this may have been a knee-jerk reaction from the past, before the days of yum and such on Linux. With the necessary change to pkgsrc/mk/platform/Linux.mk, the bulkbuild environment setup continued before hanging on the installation of pkgtools/pkg_install: pkg_add would hang and CPU utilisation would spike to 100%.

A backtrace of the running process in gdb revealed it was stuck on mpool_get().
(gdb) bt
#0 0x0000000010096650 in mpool_get ()
#1 0x0000000010093658 in __bt_search ()
#2 0x000000001009318c in __bt_put ()
#3 0x000000001000b614 in pkgdb_store ()
#4 0x000000001000430c in extract_files ()
#5 0x0000000010006fd0 in pkg_do ()
#6 0x00000000100075a4 in pkg_perform ()
#7 0x0000000010005650 in main ()

Turns out the issue also affects pkgsrc on Linux/ARM and was previously reported in a bug report from 2013, with a workaround. Setting the GCC optimisation level to 0 for pkgtools/libnbcompat and pkgtools/pkg_install allowed mk/pbulk/pbulk.sh to set up a bulkbuild environment, and a bulkbuild is currently in progress. The bulkbuild was initially aborted to add some critical missing components whose absence caused major breakage.

zypper install libxshmfence-devel gettext-tools gcc-c++.
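The shape of the optimisation workaround mentioned above is roughly the following, added to each affected package’s Makefile (a sketch; whether plain CFLAGS is the right knob for these particular packages is an assumption on my part):

```make
# Hypothetical fragment: drop optimisation to work around the
# miscompilation that hangs pkg_add in mpool_get() on this platform.
CFLAGS+=	-O0
```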

With SUSE Linux on POWER8, that bumps my operating system count to 9 across 5 architectures. Just need to get AIX going to round off the OS count. 🙂

A week of pkgsrc #9

The past few weeks have been pretty hectic; as the time to BSDCan gets shorter and shorter, I’m thinking about my talk and testing more and more in pkgsrc. Rodent@ added support for Bitrig to pkgsrc-current last month; his patches highlighted an issue with the autoconf scripts (which should be shared across core components) not being pulled in automatically. Joerg Sonnenberger resolved this issue and I regenerated the patch set. With the system bootstrapped, the next thing which was broken was Perl; applying the changes needed for OpenBSD resolved any remaining issues and the bulkbuild environment was ready. After three days, the first bulkbuild attempt on Bitrig was complete and a report was published. There is now a bulkbuild in progress with devel/gettext-tools and archivers/unzip fixed; that should free over 8400 packages to be attempted.
For Solaris, my first bulkbuild on Solaris 10 completed after 22 days. Mid-April I also started bulkbuilds on Solaris 11 (x86 and SPARC) using the Sun Studio compilers (it’s not possible to use GCC at the moment due to removed functionality that was previously deprecated). The Solaris 11 SPARC bulkbuild is still in progress and the x86 bulkbuild is running. Unfortunately the build cluster had some connectivity issues and needed rebooting during the bulkbuild, but not before lots of packages had failed to fetch distfiles, hence the figures look a lot worse than they could be. Solaris 10 SPARC report, Solaris 11 x86 report.

Through bulkbuilding on multiple operating systems, another issue that’s surfaced is problematic packages that hold the build up: on Bitrig mail/fml4 is an issue, on OpenBSD www/wml, FTP mirror issues for Ruby extensions on Solaris, Xorg FTP mirror issues on OmniOS. Things need regular kicking; a brief glance into pkgsrc/mk didn’t reveal any knobs which would allow preferring HTTP for fetching distfiles. On Bitrig & OpenBSD I’ve excluded these packages from being attempted via a NOT_FOR_PLATFORM statement in their Makefile until I can look into the issue.

sevan.mit.edu completed another bulkbuild. pkgsrc-current now ships with MesaLib 10.5.3 as graphics/MesaLib; version 7 has been re-imported as graphics/MesaLib7 by tnn@. The new MesaLib needed a patch for FreeBSD, similar to NetBSD’s, to build successfully, due to ERESTART not being defined. At present it’s still broken on Tiger, as I’ve not looked into it yet.

I revisited AIX again to test out pkgsrc; this has turned into a massive yak-shaving session. I’ve yet to run a bulkbuild successfully, as the scan stage ends with a coredump.
I originally started off using the stock system shell; bootstrap completed successfully but the scan stage of a bulkbuild would just stop without anything being logged. Manually changing the shell to shells/pdksh in pkg/etc/mk.conf and pbulk/etc/mk.conf resulted in the following error message:
bmake: don't know how to make pbulk-index. Stop
pbulk-scan: realloc failed:

This turned out to be a lack of RAM. My shell account was on an AIX 7.1 LPAR running on a POWER8 host with 2 CPUs and 2GB of RAM committed; unfortunately the OS image IBM provided came with Tivoli support enabled, and a bug in the resource management controller meant RMC was consuming way more resources than it needed to. I was running with less than 128MB of RAM.
Stopping Tivoli & RMC freed up about 500MB of RAM; attempting to bulkbuild again caused the process to fail once again at the same stage. With a heads-up from David Brownlee & Joerg Sonnenberger, I bumped the memory and data area resource limits to 256MB.
This allowed the scan to finish with a segfault.
/usr/pkgsrc/pbulk/libexec/pbulk/scan[54]: 11272416 Segmentation fault(coredump).
pscan.stderr logged multiple instances of
bmake: don't know how to make pbulk-index. Stop.
The segfault generated a coredump, but it turned out that dbx, the debugger on AIX, was not installed. IBMPDP on Twitter helped by pointing to the path where some components are available for installation; unfortunately, while the dbx package was available there, some of its dependencies were not. While waiting on IBMPDP to get back to me, I fetched a new pkgsrc-current snapshot (I couldn’t update via CVS because it wouldn’t build) and re-set up my pbulk environment via mk/pbulk/pbulk.sh.
I should mention that when I first set up, I’d explicitly set CC=/usr/bin/gcc; then, while trying to get various things to build subsequently, I’d symlinked /usr/bin/cc to /usr/bin/gcc. When I came to set things up with the new snapshot, I did not pass CC=/usr/bin/gcc this time round and found that I was unable to link Perl. I’m not sure if this was the Perl build files assuming that if it’s on AIX & /usr/bin/cc exists, it must be XL C, or if ld(1) takes on different behaviour, but I had to remove this symlink.
Once everything was set up, the bulkbuild failed again at the same place, except this time I had a different message logged:
/bin/sh: There is no process to read data written to a pipe..
I edited the bootstrap/bootstrap script & devel/bmake/Makefile to set shells/pdksh as a dependency & reran the bulkbuild.
The scan stage again completed with a coredump; this time pscan.stderr just contained Memory fault (core dumped).
I’ve committed these changes, so pkgsrc-current now defaults to using shells/pdksh as its shell, but I have not been able to try anything else as the system is inaccessible this weekend due to maintenance.

At present, I’m attempting to bulkbuild pkgsrc-current on 8 operating systems:
OpenBSD (5.6-RELEASE & -current), FreeBSD, Bitrig (current), Mac OS X (Tiger), Solaris (10 & 11) and OmniOS, across 4 architectures (i386, AMD64, SPARC, PowerPC).
If I could get AIX going, that would bump the OS & arch count up by 1 each. Maybe by the next post. 🙂

Thanks to Patrick Wildt for access to a host running Bitrig, and to Rodent@ for adding support to pkgsrc.

Captive Portals & Brighton

Yesterday I gave a talk at the SANE user group on my history with wireless networks as part of the PierToPier.net project in Brighton (now defunct), and on my experimentation with captive portal software, which I began revisiting this time last year. I thought it would be a good opportunity to develop my programming skills by tidying up and modernising parts of the codebase which caused problems, such as things preventing builds on a modern system with clang, which is now the default compiler for FreeBSD on the i386/AMD64 architectures and for OS X. The slides from my talk can be found here.


Back in the early to mid 2000s there were two initiatives to provide public access wifi in Brighton, Loose Connection and PierToPier.net; each had a different focus & approach.
Loose Connection was a commercial venture which could be deemed a VAR; they resold an ADSL connection along with a DrayTek router & that was it: individual wireless networks with the Loose Connection SSID dotted around drinking holes in Brighton. The founding(?) company Metranet lives on as a WISP today.
PierToPier.net was a community-driven effort with a technical team of volunteers, predominantly from a service provider / telecoms / networking background. Each node on the network was sponsored by a host who’d buy and run the equipment while the project members managed it.

The project started off based around fanless VIA mini-ITX boards with Prism2-chipset wireless cards, booting Linux with hostapd and NoCatAuth/NoCatSplash off a CF card in an IDE-to-CF adapter.
This was a very flexible platform; if there was no package for something, you could build it with ease. The problem was the high cost of entry, £250 to £300?, so from the start we were looking to reduce costs.
The other issue was that although we’d eliminated moving parts, the casing was not suitable for outdoor use.

With the availability of 3rd party firmware and promotional sales of the WRT54G, PierToPier switched hardware platforms as the low cost solution for new nodes.
Tom Grifiths discovered Chillispot around the same time and we adopted it for the enhanced functionality it provided, such as RADIUS accounting and a working captive portal. We’d previously run into browser support issues with NoCatAuth, which by that point was no longer maintained, and stability issues with NoCatSplash.

Glastonbury 2005
VIA motherboard again, this time with CF adapter & mini-PCI slot onboard
2x Atheros A/B/G mini-PCI cards, one on a mini-PCI-to-PCI bridge and the second onboard
Stuck to a Pelican case with epoxy
Two holes drilled in the side of the case for external antennas

These were early times for support of the cards and the wireless standards. In this era OpenBSD was leading the way in terms of hardware support and development of its ieee80211 wifi stack; they were the first to reverse engineer the Atheros binary-blob HAL (years before anyone else?) but late to the game for 802.11g. 11a was enabled from the start but didn’t appear to work: bringing the interface up in 11a hostap mode wouldn’t necessarily succeed.
Looking at alternatives, there was a short-lived live environment named WifiBSD which was initially based on FreeBSD but later moved to NetBSD before development ceased. Its support for the Atheros cards was not as good as OpenBSD’s, hence it wasn’t much use.

The hardware for the Glastonbury nodes was truly terrible: all functionality had been wired to a single bus, which caused the system to lock hard in most configurations, e.g. when you booted from a CF card and tried to bring up a wireless interface. The only way to use the system was to disable everything that wasn’t needed in the BIOS, including VGA; if there was an issue, you’d have to factory-reset the BIOS before diagnosing it.

For the wireless network at Glastonbury, the 11a 5GHz network was used as the backhaul while the 11b/g interface was used for connecting wireless clients. There was no captive portal: connect to the AP & off you go.

We arrived onsite on Tuesday and started getting things running. Thursday morning the rain and lightning started, and things went downhill from there: loss of connectivity between the backhaul links meant things fell apart.

By the time we’d discovered Chillispot the project had a 1.0 release out which had preliminary support for FreeBSD. The website claimed only FreeBSD 5 and up were supported; I created a port and submitted it for inclusion in the tree, and net-mgmt/chillispot was born. Edwin@ from the ports team fixed the code so that it’d work on previous releases. I then moved on to creating an OpenBSD port. This was slightly harder, and the final piece was actually resolved by Steve Davies: I got the code to build on OpenBSD but networking wouldn’t work, which turned out to be because an additional 4 bytes needed to be allocated, which Steve fixed. It never made it into the OpenBSD tree (it was only tested on i386 and SPARC, used strcpy() everywhere and didn’t run on SPARC) but it can be found in ports-wip.
I then moved on to creating a live CD environment based on FreeBSD 6 using FreeSBIE, named BrightonChilli, for advocacy purposes. The idea was to remove the hurdle of going through the installation process and provide an environment that just needed the configuration of network interfaces and Chillispot. A person with previous experience of running Chillispot would be familiar with it, and a new user would not be too out of place.
This was in the days of X configuration being part of sysinstall, which could hang if Xconfigure was run, and (for a newcomer) you’d have to start the install process again as the install was incomplete. I was interviewed on BSDtalk #73 regarding BrightonChilli.

PierToPier also produced its own Linux image named Muddy Linux targeted for x86 hardware that ran the necessary stack to serve as a node on the network.
After Chillispot 1.1.0 the project went quiet; there was no answer from the founding developer for quite a while, and eventually the web hosting stopped and the domain expired.
The community rehomed to coova.org and development continued in CoovaChilli, which was founded by David Bird, a contributor to Chillispot.
CoovaChilli initially lacked support for FreeBSD, but it was eventually added by David and net-mgmt/coovachilli was born in ports. Not much else was done after that until a year ago. With FreeBSD 10 and the switch to clang, the codebase needed attention; the first step was to get it to build correctly with GCC. The use of error_t from glibc caused the build to fail, as it’s not available in FreeBSD; ensuring this was declared allowed the build to complete successfully. To resolve build issues with clang, nested functions were separated out. Functions with missing prototypes & parameter lists were addressed next.
The use of struct ifreq for configuring addresses had been marked as deprecated since 2000 and was finally removed in FreeBSD 10. The *BSD-specific sections of Coova were switched over to the new struct ifaliasreq, and Linux was left to use the pre-existing method. There was extensive use of macros for the logging functionality; these were dropped in favour of the existing standard syslog(3) with the correct log level defined. This had the benefit of revealing issues which had gone undetected previously, such as incorrect format specifiers.
There are still many things that need to be cleaned up; the 3rd-party functions which were added in are particularly problematic, and replacing them with standard components will probably be my next task.
CoovaChilli & Chillispot have seen large-scale deployments thanks to use by Fon, O2 and Google, which now owns Coova.


Hipster keyboard layout on NetBSD

Each of the major BSDs has a different way of handling keyboard layouts on the console & in X11. On OpenBSD, X11 inherits the setting from wscons by default. On FreeBSD, the console keyboard config is separate from the X11 config and, depending on whether you go down the hald route or not, you may find yourself writing XML to configure your keyboard. On NetBSD, which I’ll cover here, wscons configuration is again separate from the X11 configuration, but everything is configured as usual via the keyboard layout settings in xorg.conf.

The snippet below is from xorg.conf; it sets the keyboard model to a ThinkPad T60 (it should apply to the X60 series too, apart from issues with the media buttons), US Dvorak layout, with caps lock remapped to act as ctrl.
Section "InputDevice"
Identifier "Keyboard0"
Driver "kbd"
Option "XkbRules" "xorg"
Option "XkbModel" "thinkpad60"
Option "XkbLayout" "us"
Option "XkbVariant" "dvorak"
Option "XkbOptions" "ctrl:nocaps"
EndSection

I didn’t know about the ctrl:nocaps option; I happened to stumble across it in the X section of the NetBSD guide.

To apply the same behaviour to the console, edit /etc/wscons.conf, set the encoding to us.dvorak.swapctrlcaps, and follow up with /etc/rc.d/wscons restart.
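For reference, the relevant part of /etc/wscons.conf ends up looking something like this (an excerpt only; the rest of the file is left as shipped):

```
# /etc/wscons.conf (excerpt)
# Dvorak layout with ctrl & caps lock swapped on the console
encoding us.dvorak.swapctrlcaps
```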

Not sure how hipster this all is; I managed to get sidetracked into NetBSD desktop config as I was working on updating a package in pkgsrc and remembered the tweet above. It seems like a common thing in the emacs world.

Cross-platform software packaging with pkgsrc

I recently gave a brief talk titled “cross-platform packaging, from Oracle Solaris to Oracle Linux & many more Operating systems using pkgsrc” at the Solaris SIG (formerly the London OpenSolaris User Group), where I tried to describe the benefits of pkgsrc for a system administrator. This was the second talk on the subject of software packaging at these meetings; the first was by Mandy Waite back in 2009 on the OpenSolaris SourceJuicer, which used the spec files usually offered for generating RPM packages. There were no slides and I was quite nervous, so I thought I’d write this post to clarify things.

pkgsrc is a portable framework for building software, with more than 12,000 packages available (more than 15,000 counting variants of packages) on 22 different platforms (see the platform notes for the level of support).

pkgsrc is inspired by FreeBSD ports. As with ports, each unique package is assigned a sub-directory within a parent directory named after the relevant category. This directory contains, at a minimum, a Makefile, a file containing cryptographic checksums of the source files, a file containing a description, and a packing list naming the files which will be installed. Sometimes there may be additional files which need to be installed, or patches which need to be included, as well. Though patches exist in the tree, we strive to upstream changes as much as possible to a) be good netizens and b) make maintenance easier going forward.
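As an illustration, the Makefile for a simple package follows a shape roughly like this (a hypothetical entry, not copied from the tree; the variable values are illustrative):

```
# $NetBSD$

DISTNAME=	hello-2.10
CATEGORIES=	devel
MASTER_SITES=	${MASTER_SITE_GNU:=hello/}

MAINTAINER=	pkgsrc-users@NetBSD.org
HOMEPAGE=	https://www.gnu.org/software/hello/
COMMENT=	Example package for the GNU build system
LICENSE=	gnu-gpl-v3

.include "../../mk/bsd.pkg.mk"
```

The shared infrastructure pulled in by bsd.pkg.mk provides the fetch, checksum, build, install and package targets, which is why a minimal package needs so little boilerplate of its own.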

Packages which are available to be built from pkgsrc generally do not have changes outside what’s needed to integrate with the framework or build correctly on various systems where additional flags need to be specified. There is a mechanism in place for packages to offer build time options to enable or disable functionality.

We try to use components offered by the host operating system where viable, but offer the facility to rebuild against the versions available from pkgsrc.

If you’re mandated to use specific packages system-wide but require tools for personal use, there is an unprivileged mode where packages are installed within ~/pkg (or another prefix of your choice) and the OS is left untouched.

There is a dedicated security team responsible for keeping a vulnerabilities file up to date so that packages installed from the tree that are affected will be flagged. There is also support for signed packages which utilises gpg for signing.

To start using pkgsrc you either need to run the bootstrap script in pkgsrc/bootstrap (after fetching & uncompressing pkgsrc.tar.gz) or obtain a prebuilt bootstrap kit for your OS from pkgsrc.org. If you bootstrap manually using the script, you’ll end up with your own copy of the bootstrap kit, which you can distribute amongst other systems running the same operating system. With a system bootstrapped, you have the necessary tools to build software from the pkgsrc tree or to fetch and install prebuilt binaries.

Assuming that your bootstrap kit is installed in /usr/pkg, you uncompressed your pkgsrc archive to /usr/pkgsrc, and you want to install nginx:
cd /usr/pkgsrc/www/nginx
/usr/pkg/bin/bmake install

You can set up a build environment using the pkgsrc/mk/pbulk/pbulk.sh script, which will allow you to build packages in bulk or individually for the purpose of distribution across multiple systems.

Solaris SIG name badge