Something I’ve been mulling over on and off for a while is what people actually mean when they claim software is portable.
Fish out an old computer with a matching vintage closed-source operating system: how does the portable software fare in such a situation?
Supporting an ancient operating system when building recent software is enlightening: there’s so much room for dependencies to creep in, and you only realise it when the dependencies are not there. Ideally, at the least, documentation would cover when a given piece of functionality was introduced, so you have a vague idea in advance whether something will work; but an exact picture of dependencies is needed to catalogue what is actually required.
I’ve been wading through building things on OS X Tiger again, looking for low hanging fruit to tick off. Armed with the stock GCC 4.0.1 from Xcode 2.5, I’ve been avoiding the newer-toolchain route and attempting to build anything which lists needing nothing more than C99/C++98. I want to get as much done with the stock toolchain as possible, since that reduces the time spent when starting from a fresh setup.
GCC 4.0.1 supports C99 and C++98, though according to the GCC wiki, C99 support was not completed in GCC until v4.5. GCC 4.0.1 defaults to C89 unless a standard is specified via -std. I can define __DARWIN_UNIX03 if I want to kick it up a notch for modernity, but that’s at least one version of SUS behind, implementing changes to existing functionality from IEEE Std 1003.1-2001 (the defaults being the older behaviour in some cases, to accommodate migration to the then-new behaviour, which has since become the expected behaviour).
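A quick way to see which mode the compiler is actually in is to probe __STDC_VERSION__; a minimal sketch (the file name is made up):

    /* std_probe.c: report the C standard the compiler is targeting.
     * Try the default mode, then again with -std=gnu99:
     *     gcc std_probe.c && ./a.out
     *     gcc -std=gnu99 std_probe.c && ./a.out
     */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
        printf("C99 or later: %ld\n", (long)__STDC_VERSION__);
    #else
        /* GCC 4.0.1 lands here by default: C89 leaves
         * __STDC_VERSION__ undefined (C94 sets it to 199409L). */
        printf("C89/C94 mode\n");
    #endif
        return 0;
    }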
Still, I have a compiler which recognises -std=gnu99 and a system which was on the way to UNIX03 certification but not there yet (certification was first reached in Leopard). See compat(5) from Tiger, Leopard, Catalina.
On my quest for low hanging fruit, I keep getting caught out by software which claims to require just language support, but really needs functionality beyond the language and 3rd party libraries from the system. Some examples of things which have tripped me up (the C99 case is sketched just after the list):
- Claiming -std=c89 while actually requiring a C11-capable compiler.
- Claiming just C99 support but requiring POSIX functionality from IEEE Std 1003.1-2008.
- Even with a C11 compiler installed (GCC 5), POSIX functionality from IEEE Std 1003.1-2008 still missing.
- The assumption that PowerPC hardware means the host is running Linux.
- Python modules with a Rust dependency (luckily, that is a quick, hard failure on PowerPC OS X 🙂).
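The C99-claiming case is the one I trip over most. As a hedged illustration, not lifted from any particular project: the following is valid C99, but getline() comes from IEEE Std 1003.1-2008 rather than the language, so it builds with -std=gnu99 on a current system yet fails to link on Tiger, whose libc predates that standard.

    /* cat_lines.c: "requires C99" undersells this; getline() is
     * IEEE Std 1003.1-2008, not part of the C language. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *line = NULL;
        size_t cap = 0;

        while (getline(&line, &cap, stdin) != -1) /* POSIX.1-2008 */
            fputs(line, stdout);

        free(line);
        return 0;
    }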
Requirements beyond language support aside, another common issue is the misdetection of functionality through the test being too broad. I usually see this with pthread support, where more recent pthread functionality is required but all the build does is check for the presence of the pthread.h header file, rather than checking for the required functions, such as pthread_getname_np(3). Luckily, on the lacking user-space side there is help at hand, either via libraries which provide an implementation of the missing functionality, or by re-implementing the functionality as a wrapper around existing functionality.
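A narrower probe tries to compile and link the function itself, which is roughly what an autoconf-style function check boils down to. A minimal sketch, named in the usual conftest style:

    /* conftest.c: probe for pthread_getname_np() itself, not merely
     * for the presence of pthread.h. If this fails to compile or
     * link, the function is missing (as on Tiger) and the build can
     * disable the feature or use a fallback. */
    #include <pthread.h>

    int main(void)
    {
        char name[64];
        return pthread_getname_np(pthread_self(), name, sizeof name);
    }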
Sometimes there’s advice from the Windows community on how to achieve things with existing functionality where compatibility libraries don’t already cover them, since they still have portability-related issues on current OS versions.
Does that just leave me stuck with the things which need hardware or kernel support, such as memory atomic operations?
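Even then, a thin wrapper can occasionally bridge the gap when the OS exposes a primitive the compiler predates. A hedged sketch, relying on two things I believe hold: the __sync_* builtins arrived in GCC 4.1, and Tiger ships <libkern/OSAtomic.h>:

    /* atomic_add32.c: choose an atomic-add implementation based on
     * what the toolchain and platform provide. */
    #include <stdint.h>

    #if defined(__GNUC__) && \
        (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
    /* GCC 4.1 and later provide the __sync_* builtins. */
    static int32_t atomic_add32(volatile int32_t *p, int32_t n)
    {
        return __sync_add_and_fetch(p, n);
    }
    #elif defined(__APPLE__)
    /* Fallback for Tiger's GCC 4.0.1: OSAtomicAdd32 has been in
     * libSystem since Mac OS X 10.4. Note the argument order. */
    #include <libkern/OSAtomic.h>
    static int32_t atomic_add32(volatile int32_t *p, int32_t n)
    {
        return OSAtomicAdd32(n, p);
    }
    #else
    #error "no atomic add available for this toolchain/platform"
    #endif

    int main(void)
    {
        volatile int32_t counter = 0;
        return atomic_add32(&counter, 1) == 1 ? 0 : 1;
    }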
Going lower level, since the tooling is now so ancient, functionality which allows things to be treated generically across platforms hadn’t yet cross-pollinated: non-platform-specific mnemonics for the assembler are lacking, and the linker is missing the flags now commonly used to generate shared objects.
Luckily, the linker issue is easy to patch, and the fix is forward-compatible with newer OS versions, since they have retained backwards compatibility with the old way: specifying -dynamiclib instead of -shared, which is what is now commonly used.
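The library source itself needs no change; only the flag handed to the compiler driver differs. For illustration (file and symbol names made up):

    /* hello.c: trivial shared-library source for exercising the
     * linker flags; the code is identical either way.
     *
     * OS X, old and new (the fix that keeps working):
     *     gcc -dynamiclib -o libhello.dylib hello.c
     * What build systems commonly emit today (ELF platforms):
     *     gcc -shared -o libhello.so hello.c
     */
    int hello(void)
    {
        return 42;
    }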
The Practice of Programming, written in the late 1990s, before C99 was standardised, has a chapter on portability.
It’s hard to write software that runs correctly and efficiently. So once a program works in one environment, you don’t want to repeat much of the effort if you move it to a different compiler or processor or operating system. Ideally, it should need no changes whatsoever. This ideal is called portability. In practice, “portability” more often stands for the weaker concept that it will be easier to modify the program as it moves than to rewrite it from scratch. The less revision it needs, the more portable it is.
Of course the degree of portability must be tempered by reality. There is no such thing as an absolutely portable program, only a program that hasn’t yet been tried in enough environments. But we can keep portability as our goal by aiming towards software that runs without change almost everywhere. Even if the goal is not met completely, time spent on portability as the program is created will pay off when the software must be updated. Our message is this: try to write software that works within the intersection of the various standards, interfaces and environments it must accommodate. Don’t fix every portability problem by adding special code; instead, adapt the software to work within the new constraints. Use abstraction and encapsulation to restrict and control unavoidable non-portable code. By staying within the intersection of constraints and by localizing system dependencies, your code will become cleaner and more general as it is ported.

The Practice of Programming, Introduction to Chapter 8, Portability
When building Monocypher, I needed to swap -shared for -dynamiclib (s/-shared/-dynamiclib/) and remove -Wl,-soname,$(SONAME) from the makefile in order for it to build on OS X Tiger. As stated on their homepage, there were no external dependencies.
Portable code is an ideal that is well worth striving for, since so much time is wasted making changes to move a program from one system to another or to keep it running as it evolves and the systems it runs on change.

Portability doesn’t come for free, however. It requires care in implementation and knowledge of portability issues in all the potential target systems. We have dubbed the two approaches to portability union and intersection. The union approach amounts to writing versions that work on each target, merging the code as much as possible with mechanisms like conditional compilation. The drawbacks are many: it takes more code and often more complicated code, it’s hard to keep up to date, and it’s hard to test.

The intersection approach is to write as much of the code as possible in a form that will work without change on each system. Inescapable system dependencies are encapsulated in single source files that act as an interface between the program and the underlying system. The intersection approach has drawbacks too, including potential loss of efficiency and even of features, but in the long run, the benefits outweigh the costs.

The Practice of Programming, Summary of Chapter 8, Portability
I guess the union approach is what you would call the autoconf workflow: test for the required functionality and generate definitions based on the test results. Those definitions can then be checked for in the codebase to steer the build process.
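In source form, the union approach ends up as conditional compilation keyed off those generated definitions. A sketch with hypothetical macro and helper names, in the usual autoconf HAVE_* style:

    /* thread_name.c: steer between a platform function and a
     * fallback using a definition a configure step would put in
     * config.h. */
    #include "config.h"    /* may define HAVE_PTHREAD_GETNAME_NP */
    #include <pthread.h>
    #include <string.h>

    /* Fetch the calling thread's name, or a placeholder when the
     * platform (e.g. Tiger) cannot tell us. */
    void current_thread_name(char *buf, size_t len)
    {
        if (len == 0)
            return;
    #ifdef HAVE_PTHREAD_GETNAME_NP
        if (pthread_getname_np(pthread_self(), buf, len) == 0)
            return;
    #endif
        strncpy(buf, "unknown", len - 1);
        buf[len - 1] = '\0';
    }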
It seems that today, “portable” software in most cases means: given the listed requirements, you have a chance of building on an up-to-date, current operating system, rather than on the earliest version where those requirements are available.
For portability in the sense of future-proofing, being explicit about expectations helps prevent breakage from riding on defaults, because in time defaults change.
As proof of portability: test, test, test, beyond the current versions of mainstream systems. If not to extend support, then to bring clarity to where the software can be expected to run.