LFS, round #6

It’s been a while since I wrote a technical post in this series. Since the last post I have made a build of what I called the Viewpoint Linux Distribution available; this post covers the time between that post (round #5) and the launch of the distro.

By the time I’d written the previous post, things had roughly taken shape and I was thinking about what would sit on top via packaged software. Having been interested in Guix from afar, I thought about using that, as there had been some interesting talks about it at FOSDEM’s Declarative and Minimalistic Computing devroom a month prior. I didn’t end up going down that route because Guix requires GNU Guile, GnuTLS, and various extensions for Guile. The requirements themselves are not the problem; it’s that I would have to ship and maintain copies of them in the base OS, and I didn’t want to do that, so I stuck with what I knew. I’ve spent a lot of time with pkgsrc and am comfortable working with it. pkgsrc gives you control over where it satisfies dependencies from, and as long as you have a shell & compiler installed it can get itself to a working state. Unless told otherwise, the bootstrap process on Linux opts to satisfy all dependencies from itself and ignore anything already installed on the system. This behaviour can be overridden by specifying --prefer-native yes when bootstrapping, which in this scenario was preferable since the OS was using recent if not the latest available versions of things. Despite preferring native components, when it came to building packages, things that were present on the OS were being built again anyway, specifically readline.

$ cd /usr/pkgsrc/shells/bash ; bmake show-var VARNAME=IS_BUILTIN.readline
no

After some investigation it turned out the builtin detection mechanism was not working, so dependencies would always get built. This was due to a difference between where libraries are installed when following the LFS guide and where pkgsrc expects to find them: the LFS guide specifies /usr/lib for libdir, while pkgsrc expects to find libraries in /usr/lib${LIBABISUFFIX}, which in this case expands to /usr/lib64. Just to move things along I patched pkgsrc/mk/platform/Linux.mk to include /usr/lib in _OPSYS_SYSTEM_RPATH / _OPSYS_LIB_DIRS, and builtin detection then started working. With a working packaging system, I began packaging BCC and bpftrace, though in the end I opted to use the bpftrace binary which the project produces with every release. This made things easier as there is a working environment out of the box to start with; if BCC is needed, it can be installed, but since the BPF Performance Tools book is largely about using bpftrace, you get to start off without dealing with packaging. Keeping the packaging system a separate component also saves on shipping a bootstrap kit for it with every release, along with likely stale packages, depending on how quickly things evolve. I dislike the idea of having to run a package update on first boot to shed the stale packages which are shipped with the OS.

After testing various things out I set out to make a new build of the distro to publish, this time opting to use lib64 as the libdir to reduce the need for changes to pkgsrc. I have not attempted any large bulkbuild runs, but the Emacs 21 package was definitely not happy, as it expected to find some things in /usr/lib.

There are various packages which ship with DTrace USDT probes which bpftrace can also make use of. This involves building those packages with DTrace support enabled and, on Linux, using SystemTap, which provides a Python script called dtrace to do the relevant work. I created a package, but since it requires Python, it created a circular dependency when using Python 3, as Python 3 has USDT probes. As a workaround to sidestep the issue, my SystemTap package uses Python 2, which is still supported by SystemTap. To enable building with DTrace support I introduced a “dtrace tool” which pulled in SystemTap as a dependency on Linux when USE_TOOLS+=dtrace was specified, and nothing on other platforms. I then added USE_TOOLS+=dtrace across the tree where dtrace was a supported option.

bpftrace listing the USDT probes found in libpython built from the Python 3.8 package in pkgsrc
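
A command along these lines produces such a listing (the library path here is an assumption; adjust it to wherever pkgsrc installed libpython):

$ bpftrace -l 'usdt:/usr/pkg/lib/libpython3.8.so:*'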

With the OS rebuild, I dropped nscd(8) from the system; the thought of having up to three caching resolvers (nscd/systemd-resolved/unbound) seemed a bit excessive. This post highlights why you might not want nscd support on your system. As part of the rebuild I began populating the repository with sources for everything that would ship with the distro. It was a tedious process which slowed down as I progressed through the build and imported more and more components, because for each initial import I would roll the tree back to the start to import into a branch, update to the tip of the tree, merge the branch, and repeat. I used the hg-git Mercurial plugin to convert and push the tree to a Git mirror.

The kernel config started life as the default config which gets created when you run make defconfig, built up from there to cover what the LFS guide suggests and what BCC / bpftrace require. Testing that X11 worked ok revealed that I was missing various options, from mouse support to support for emulated graphics. The safe bet is the VMware virtual card, usable on both VirtualBox (where VMSVGA is the default) and QEMU; other options resulted in offset problems with the cursor, where it would appear in one place on the screen but clicks and drags would register at a different location. Everything works out of the box with the VMware option.
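
For completeness, the config workflow amounts to the usual sequence (a sketch; the menuconfig step is where the LFS and BCC/bpftrace options get added on top of the defaults):

$ make defconfig
$ make menuconfig
$ make -j$(nproc)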

I’ve been really impressed by how quickly the system boots and shuts down (not having an initrd image to load and having minimal drivers to probe accounts for that), and I hope I don’t end up losing that. I used the work leading up to the release as an excuse to start using org-mode in Emacs. Following the beginners guide I now have a long list of todo items which I work through. The next big item is build infrastructure so I can turn around releases quicker.

Introducing the Viewpoint Linux Distribution (continued)

This should’ve been part of the original post, but I feared the attention it would end up drawing and the direction the “conversation” could end up taking when links are posted on various sites, so I deferred. I was pleasantly surprised that despite the announcement being shared around there was no drama, and it even received encouraging comments on the orange site. Thank you to those who submitted the links to the various sites and for the comments.

My intention for this post is not so much promotion as a formality, so I can refer back to it should the need arise in the future. I really have no grand vision for this project and intend it to be a personal one. I hope the distro becomes something useful for others, something they carry in their metaphorical tool belt to call on should they need such an environment to experiment on or adapt for themselves, but I’m not looking to actively recruit developers or to solicit contributions of functionality to integrate into the project; build upon it for yourself. By all means, if there is something amiss, please let me know.

As I was getting things ready to make the announcement I looked into putting together a code of conduct for the project. I believe open source projects should have one, but since this is currently a one man show it would really have been an empty gesture, as there would be no person or team other than myself for handling incidents; if someone took exception to my behaviour, they would ultimately be contacting myself about myself.

Besides the project’s Twitter account I have opted not to utilise any public forum, whether it be IRC, mailing lists, forums or variants thereof. Direct email is very welcome if you have any questions or comments; you can reach me via sevan@ the project’s domain. I just don’t have the mental strength to deal with public group discussions or to leave things open to trolls and bullies.

Now that the dust has settled from the launch, it is time for the work to resume. 🙂

Introducing the Viewpoint Linux Distribution

Person observing what's going on through a tiny window whilst huge, wild, painted horses pass by

Viewpoint Linux is a distribution providing a minimal environment for me to build on and play with. I hope that for others it can be a distro which provides a working environment to use alongside various texts I have in mind, allowing the reader to focus on studying the material at hand rather than trying to get their environment set up right.
The idea came about through having to sidestep from study to investigate broken stack traces, and wondering about the level of pain involved in making build changes system-wide on a distribution which doesn’t provide infrastructure to rebuild en masse with ease. When I first started writing about my experiments with LFS it was suggested that I look at several different established distributions which were supposedly the answer to what I was looking for. I was aware of these distributions already and had even used some in the distant past; however, I decided not to go down this path, as there was either new tooling to learn which would drive system management, or components were adapted (local changes and features). I was not interested in having to detour to learn another set of tools which are non-transferable between operating systems, nor in making use of derivatives, before setting up the system how I needed it so that I could practice what I was studying; hence Viewpoint Linux strives to be innovationless in this regard.

Viewpoint currently lacks a framework to ease building the system, hence everything has been built slowly by hand, with a specific idea of how the system should be.

Some of those ideas are:

  • It should work out of the box for the texts in mind, e.g. full working stack traces for instrumenting with bpftrace and debugging using GDB
  • Its concept of a base system is a subset of the utilities installed by the LFS guide, containing general utilities for users and tools for administration. Components which are purely build dependencies are installed to a separate prefix (/osbt, short for “os build tools”) so that they can be removed if desired. Everything else is satisfied from user-installed packages, currently provided by pkgsrc. Dependencies can grow out of hand; for example, dwarves has a cmake build dependency. dwarves provides the pahole utility, which is used as a kernel build dependency to generate BTF, but it’s also a useful utility for inspecting system data structures by itself. This was a grey area where I chose to include dwarves in base but to satisfy its build dependency (cmake) from external sources; in this case, the cmake project provides prebuilt binaries.
  • A repository (monorepo) of all components shipped. Not such a good idea because of fighting autotooled builds and timestamps (see Living with CVS in Autoconfiscated Projects), but it makes tracking changes in the distro easy, which is more important to me.
  • It is safe to assume that I’ve run configure, make, make install a bunch of times with CFLAGS set to ‘-fno-omit-frame-pointer -g’ or some variation thereof (for instance, you also have to enable optimisation when building glibc, otherwise it fails); the routine is sketched just after this list.
  • Viewpoint is an innovationless distro, see previous point (there are no new methods or tooling on offer, just stock components from upstream built a certain way with differing flags)
  • Viewpoint uses systemd (I wondered what my own shit sandwich would taste like)
  • Mercurial for the source repo (because one piece of Linusware at a time). There is a Git mirror.
  • Primarily intended for use as a guest VM, though it is possible to install on hardware. The distinction is because nothing has been done to cater for differing hardware in the kernel config, so manual intervention may be required to prep things and get everything working, e.g. it booted fine on my ThinkPad X230 but I had no wifi. There is also no UEFI support currently, nor any additional firmware included.
  • Development of features for 3rd party components happens outside of the tree (because it’s innovationless)
  • Patches from LFS have not been applied, again because innovationless, e.g. their provided i18n patches to coreutils which are marked as rejected by upstream. The LFS guide states “In the past, many bugs were found in this patch. When reporting new bugs to Coreutils maintainers, please check first if they are reproducible without this patch.”
  • Versioning is going to be a sequence number, meaning nothing beyond an indication of a new release
  • Viewpoint doesn’t follow the FHS spec strictly & LSB at all. Perl & Python are not part of base (because I did not want to maintain them in base).
  • Currently intended to be used alongside Brendan Gregg’s BPF Performance Tools book and Diomidis Spinellis’ Effective Debugging book for learning two different debugging workflows. Other texts are in mind for accommodation in the future. I would have liked to include DTrace, but that currently requires running a fork of the kernel. While the fork is kept up to date with upstream, as part of being innovationless it is easier to swap in components fresh from upstream, and it saves on having to eliminate another avenue where an issue could have been introduced when debugging problems.
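
The build routine mentioned a few points above looks roughly like this (a minimal sketch; the prefix and exact flags vary per component, and glibc additionally needs optimisation enabled, e.g. -O2):

$ CFLAGS='-fno-omit-frame-pointer -g' ./configure --prefix=/usr
$ make
$ make install
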
Beware! Vendor Gratuitous Changes Ahead!

The source repository is currently 5.1GB (1.8GB .hg directory, 3.3GB of source), plus a 1.8GB .hg/git conversion directory, so as you can tell, that’s a lot of value add 🙂 . When deciding whether to strip components down to the essential minimum I opted not to, as running test suites is part of the LFS workflow when building things up, and keeping them would make CI integration easier. AMD firmware included in Linux aside, the test suites from GCC and Binutils, for example, take up the most space in the repo.

Lots to do to smooth things over, but some key features that I intend to work on for a future release:

  • Build framework to automate the configure, make, make install routine and allow customisation with ease, à la the BSDs.
    There is a framework in the LFS project called ALFS (you feed it the XML source of the guide and it builds the distro from that), but I didn’t want to go down the literate programming route and maintain my own fork of the LFS guide.
  • Add ZFS support

Why the name?

  1. It is focused on observability
  2. It is opinionated
  3. I listened to a lot of Alan Kay lectures (a nod to Viewpoints Research and the ViewPoint OS from Xerox, though this distro is in no way a great feat of achievement)

Viewpoint is a variant of LFS distribution, registered on the Linux From Scratch Counter on 03/05/2021, ID: 28859, Name: Viewpoint Linux Distribution, First LFS Version: 10.0-systemd.

Post continued here

The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive licensee of Linus Torvalds, owner of the mark on a world­wide basis.

LFS, round #5

Up to this point I’ve been working with a chroot to build OS images from a loopback-mounted flat file, which is then converted to the vmdk format for testing with VirtualBox. I created packages for bpftrace and BCC. BCC was fairly trivial, and the availability of a single archive which includes the submodules, bcc-src-with-submodule.tar.gz, helped avoid the need to package libbpf. bpftrace doesn’t offer such an archive and tries to clone the googletest repo, which I sidestepped addressing just to obtain the package. Both packages worked ok, though I only tested the Python side of BCC and not LuaJIT.

Execsnoop from BCC

With that I wanted to see if what I had would boot on actual hardware, so I dd’d the flat file to a USB flash drive and booted it on a Dell OptiPlex. Things worked as far as making it to GRUB, but then I hit a couple of glitches. The first issue was that, because of the delay probing the USB bus, the kernel needs to be passed the rootwait parameter, which I was missing, so it would just panic as no root file system could be found. After that I hit the issue that I’d nailed things to a specific device node (sda), and with the other disks in the system the flash drive was now another device node (sdb). Addressing that got me to the login prompt, and I was able to repartition the SSD installed in the system with cfdisk, make a new file system, copy the contents of the flash drive to the SSD, install GRUB and reboot to boot the system off the new Linux install.
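
For the flash drive boot, the kernel line in grub.cfg needed to look something along these lines (the kernel filename and device node here are placeholders):

linux /boot/vmlinuz root=/dev/sdb1 rootwait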

As grub-mkconfig had included references to the GUID of the file system on the flash drive, the system landed in GRUB rescue mode. Since it isn’t able to load the config, nothing is loaded; most importantly, its prefix variable is set incorrectly. This results in the strange behaviour where nothing which would normally work at the GRUB prompt works. Setting the prefix variable to the correct path allows you to load the “normal” module and switch from rescue mode to normal mode.

grub rescue> set prefix=(hd0,msdos1)/boot/grub
grub rescue> insmod normal
grub rescue> normal

Once in normal mode it was possible to boot the system by loading the ext2 module and pointing the linux command at the path of the kernel to boot. Re-running grub-mkconfig once the system was up generated a working config.
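
Roughly, that amounts to something like the following (the kernel path and root device are assumptions):

grub> insmod ext2
grub> linux /boot/vmlinuz root=/dev/sda1
grub> boot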

With a faster build machine, the next step is to produce a fresh image, address these nits, and start putting things together to share.

Execsnoop from bpftrace

LFS, round #4

I haven’t made any progress for a couple of weeks, but things came together and instrumenting libc works as expected. One example demonstrated in section 12.2.2 of chapter 12 of the BPF Performance Tools book is attempting to instrument bash compiled without frame pointers, where you only see a call to the read function.

Instrumenting bash which has been built with -fomit-frame-pointer.

Compiling with -fno-omit-frame-pointer produces a stack trace within bash, but the first frame is unresolved if libc isn’t also built with it, resulting in a hexadecimal address rather than a call to _start.

Stack trace from bash on a system with bash and libc built with -fno-omit-frame-pointer.
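
The stacks in those screenshots come from a one-liner along these lines (the book’s exact example differs; the probe and binary path here are assumptions):

$ bpftrace -e 'uprobe:/bin/bash:readline { @[ustack] = count(); }'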

I’m going to look at integrating ZFS support next but I’m thinking of sharing a build as-is for the benefit of anyone wanting to work through the BPF Performance book.

LFS, round #3

With the OS image that I wrote about in the previous post I was able to build a new distro with the various substitutions I’d made. There were three things that I wanted to mention in this post. First, it turns out Linux has a bc(1) build dependency, which I found when I omitted building it and came to compile the kernel. Second, you really need to run make defconfig before visiting the kernel menuconfig, otherwise you end up with a “default kernel config” lacking any on-disk file system support (memory and network file systems are still supported). This was the reason why the kernel was unable to find init in the previous post.

Third, my workflow is that I have a flat file which I mount via loopback and use as the file system to build the OS in a chroot. When it comes to installing a boot loader using grub-install, if sysfs is not mounted at a location visible from within the chroot, grub-install will fail, complaining Unknown device "/dev/loop10p1": No such device, p1 being the child node for the first partition on the file backed by /dev/loop10, which is odd as grub-install was pointed at /dev/loop10, the parent node. This is because it is trying to enumerate the start of the partitions so it can work out the start sector via sysfs (see grub-core/osdep/linux/hostdisk.c, starting at sysfs_partition_path()). Elsewhere it might have been achieved via ioctls; either way it is asking the kernel, but the dependency on a pseudo filesystem being mounted in place threw me. Looking around /sys/devices/virtual/block/loop10 I see files which expose various characteristics, including offset and start points for each partition.
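
A sketch of the workaround, assuming the image is mounted at /mnt/lfs (the mount point is a placeholder) and the other virtual file systems are already bind-mounted per the guide:

$ mount -t sysfs sysfs /mnt/lfs/sys
$ chroot /mnt/lfs grub-install /dev/loop10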

My focus is now on building the dependencies for bpftrace which means getting BCC built and installed first.

LFS, round #2, 3rd try

In my previous post I ended with the binutils test suite not being happy after steering off the guide and making some changes to which components were installed. I decided to start again, but cut back on the changes and see just how much I could omit installing and still get to the point of completing chapter 8. Ideally, I would like to shed everything that’s only a build dependency. It was doable, but the sticking point was Python, which is needed by Meson / ninja in order to build systemd; though you build Python earlier in chapter 7, at that stage it is built without the ctypes module, as that requires libffi, and the ctypes module is needed by Meson.

I thought I’d cheat by using pkgsrc to satisfy the build dependencies, but its infrastructure for detecting termcap support is unable to detect support via ncursesw, which LFS uses. Opting to prefer satisfying all dependencies from pkgsrc, which is now the default setting for pkgsrc on Linux, created a new problem: the components you compile manually outside of pkgsrc which call pkg-config would link to the pkgsrc versions of dependencies. To sidestep this issue I moved the pkg-config binary from pkgsrc out of the way, whereupon I hit an issue with linking systemd. After a night’s sleep I found that ninja in pkgsrc is patched not to adjust the rpath in binaries, and this is needed for systemd’s binaries because the libraries they depend on are tucked away in a subdirectory.

Upon completing chapter 8, I went back and started afresh once more, this time with the intent to make changes and substitutions once again. I installed bash as /bin/bash but did not create a link to /bin/sh, and was surprised to find most things were happy with that; the autoconfed infrastructure could cope, until I reached binutils in chapter 8, where it called /bin/sh explicitly in tests. At this point I installed mksh and pointed /bin/sh to it. This revealed various failures from scripts and tests in other packages which were built after binutils in chapter 8, most significantly in GCC’s build infrastructure. Setting the CONFIG_SHELL variable to /bin/bash when invoking configure ensured that bash was called instead of sh during the build and when invoking the test stage, as the SHELL variable inherits this setting down the line and things move on smoothly. I need to look at getting binutils to handle the override as well, rather than hardcoding /bin/sh.
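
The invocation amounts to something like this (the prefix and other options vary by package):

$ CONFIG_SHELL=/bin/bash ./configure --prefix=/usr
$ make && make check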

All build dependencies were installed in a separate prefix so that they could be removed after the build: m4, make, bison, Perl, Python, gawk, pkg-config (built with the .pc file location set to /usr/lib/pkgconfig), autoconf, automake, libffi, check, expect, flex and TCL were installed in this location.

Python’s build infrastructure assumes the system provides libffi, and if the system doesn’t, it struggles with linking. There’s a bug report to teach the build to make use of the information from pkg-config for libffi, but the proposed patch in my case did not work, as the location under the new prefix where libraries are installed was not in the search path for the dynamic linker. Since I was installing Python in the same prefix as libffi was already installed in, I adjusted the rpath by setting LDFLAGS.
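
Roughly like so, with the build-tools prefix here being a placeholder:

$ LDFLAGS="-L/opt/buildtools/lib -Wl,-rpath,/opt/buildtools/lib" ./configure --prefix=/opt/buildtools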

Besides the bash to mksh swap for /bin/sh, I replaced gawk with nawk once more; there was no fallout, though I did also install gawk under the new prefix as glibc requires it. tar was swapped for bsdtar from libarchive, and Man-DB for mandoc.

I skipped texinfo as it has a Perl runtime dependency and I don’t want to include Perl in the base OS. Groff was out as it has a texinfo dependency. I omitted libelf as I thought it was only used by tc from IPRoute2, which is for setting QoS policies; it turns out it’s a build dependency for the kernel, so that went back in.

With everything in place, I managed to build a kernel which for some reason couldn’t go multiuser because it couldn’t find init or sh! Comparing the config with my image from round #1 showed lots of differences, which was baffling as I thought I’d only made the changes which the LFS guide suggests. Starting with a new config resolved the issue. I can only suspect that I must’ve pressed the space bar by accident when navigating the kernel config menus, which switched a bunch of stuff off. 🙁

I now have an OS image which appears to work. For the next round, to put it to the test, I am going to try to use it to build a new distro. I will need to address the binutils build infrastructure issue so that I can point it to bash, otherwise I suspect I will run into the same issues again when running the test suite (see previous post). I would also like to try to swap binutils out for elftoolchain. I have also been thinking about subsequent OS upgrades and using mtree for that.

LFS, round #1

Following on from the previous blog post, I started on the path of building a Linux From Scratch distribution. The project offers two paths, one using traditional SysV init and the other using systemd. I opted for the systemd route and followed the guide; it was all very straightforward. Essentially you fetch a bunch of source archives off the internet, then run tar, configure, make, make test, make install a bunch of times with some system setup in between, before compiling your kernel, putting it into place in /boot and getting GRUB installed and configured.

The book assumes that you have a system running Linux already and that you have spare space which you use for a new partition to install the Linux distro that you built. I opted instead to create a 10GB file which I mounted via loopback, with the intention of using it as the boot disk of a virtual machine.

The guide has 11 chapters which take you through building software on the host, then as a new user (for a clean environment), then in a chroot, with three iterations of GCC and binutils builds. For each software component that you build, the guide instructs you on how to run its test suite, what failures should be expected and why, before performing an install.

For each component that you build, the guide also documents why a patch is applied and what the specified configure options mean. At the end of each section all the installed components are listed, followed by a short description of each item. Unfortunately the dependencies are not documented, but they are sort of implied by the order in which things appear in the guide.

Chapter 8 is the most laborious, with a hefty 71 packages to install/reinstall. The end result is a fully fledged environment with Autotools, Perl, Python, TCL, GCC, Vim, various libraries and compression tools. If you follow the guide every step of the way, it should work a-ok in the end, providing the test suites passed as expected at each stage.

After I finished all 11 chapters, I had to convert the flat file which I created with dd(1) to a format which VirtualBox would recognise. I wasn’t sure if any of the supported formats was just a flat file with a new file extension, and it was quicker to convert it to a vmdk file than to work through the list to find out. On the first try it made it to the GRUB menu, which was nice.
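
For reference, the conversion itself is a VBoxManage one-liner (the filenames here are placeholders):

$ VBoxManage convertfromraw lfs.img lfs.vmdk --format VMDK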

Grub menu

Followed by a panic as I guessed the wrong device node to specify as root in my grub.cfg.

Panic on first boot

After re-editing the config to specify the device node hinted at in the kernel panic, I made it to the login prompt.

First successful boot

At this point I began thinking how much trouble I could get into by substituting or omitting components and started a fresh new build.

Inverse vandalism: the making of things because you can

Alan Kay, “The Early History of Smalltalk,” ACM SIGPLAN Notices Volume 28, March 1, 1993

LFS round #2 started life with nawk instead of gawk, no bash installed but mksh as /bin/sh, and BSD make instead of GNU make. No Perl, Python, TCL, gettext, bzip2 or xz. BSD make got swapped back to GNU make at the first step, as it wasn’t happy about the sys.mk that was installed, but that will be revisited. I made it back to chapter 8 quickly (look ma, fewer dependencies) and things began to fall apart with rebuilding glibc: it turns out it really wants bison, Python, and gawk, not nawk. Glibc really wants bash as well, but its configure test is happy with mksh and passes; it became apparent it wanted bash when running the test suite, as some tests call /bin/bash specifically and stop when it is not found. At this point my environment began behaving strangely, so I exited the chroot and then couldn’t get back in. Running strace on chroot showed that calls to execve() were returning ENOENT. Rebuilding glibc from the host environment allowed me to get back into the chroot once again, at which point I installed bash. For glibc’s Python dependency, I decided to treat it as part of the bootstrap kit, as it seems to be only a build dependency. Python got built without shared components (--disable-shared) and installed in a separate prefix, with the plan to remove it after the system is built.

From glibc I jumped to building binutils in my chroot and again things came tumbling down during the test suite run. It was not happy about finding libgcc_s, despite the system being aware of it in its ld cache, but I haven’t had a chance to investigate any further. I feel very much lost in the bazaar, but I’m having fun. 🙂

Book review: BPF Performance Tools: Linux System and Application Observability

It’s more than 11 years since the shouting in the data centre video landed, and I still manage to surprise folks in 2020 who have never seen it with what is possible.
The idea that such transparency is a reality in some circles comes as a shock.

Without the facility to dynamically instrument a system, the operator is severely limited in the insight they can gain into what is happening on a system using conventional tools alone. Having to resort to debugging tools to gain insight is usually a non-option for several reasons:
1) it is disruptive (the application may need to be re-invoked via tooling);
2) it has a considerable performance impact;
3) it is unable to provide a holistic view (it may provide insight into one component, leaving the operator to correlate information from other sources).
If you do have the luxury, the problem is how you instrument the system.
The mechanism offers the ability to ask questions about the system, but can you formulate the right question? This book hopefully helps with that.

To observe an application, you need both resource analysis and application-level analysis. BPF tracing allows you to study the flow from the application and its code and context, through libraries and syscalls, to kernel services and device drivers. Imagine taking the various ways disk I/O was instrumented and adding the query string as another dimension for breakdowns.

The BPF Performance Tools book centres around bpftrace but covers BCC as well. bpftrace gives you a DTrace-like tool for one-liners and for writing scripts in a language similar to D, so if you are comfortable with DTrace the syntax should be familiar, though it is slightly different.
BCC provides a more powerful and more complex interface for writing scripts, leveraging other languages to compose a desired tool. I believe the majority of the BCC tools use Python, though LuaJIT is supported too.
Either way, in the background everything ends up as LLVM IR and goes through libLLVM to compile to BPF.

The first part of the book covers the technology, starting with an introduction to eBPF and moving on to cover the history, interfaces, how things work, and the tooling which complements eBPF, such as PMCs, flame graphs, perf_events and more.
A quick introduction to performance analysis, followed by a BCC and bpftrace introduction, rounds off the first part of the book in preparation for applying them to different parts of a system, broken down by chapter, starting with CPUs.

The methodology is clear cut: use the traditional tools commonly available to gauge the state of the system, then use bpftrace or BCC to home in on the problem, iterating through the layers of the system to find the root cause, as opposed to trying to solve things purely with eBPF.

I did not read the third and fourth parts of the book, which cover additional topics and the appendixes, but I suspect I will be returning to read the “tips, tricks and common problems” chapter.
Of the first sixteen chapters which I did read, the CPU chapter really helped me understand the way CPU usage is measured on Linux. I enjoyed the chapter dedicated to languages, especially the Bash Shell section: given a binary (in this case bash), how you go about extracting information from it, whether it has been compiled with or without frame pointers preserved, and how you could extend the shell to add USDT probes.
I did not finish the Java section; it was too painful to read about what needs to be done due to the nature of Java being a C++ code base with a JIT runtime (the book states it is a complex target to trace), and I couldn’t bring myself to read the containers *yawn* chapter.
All the scripts covered in the book have their history covered in the footnotes of the page, which was nice to see (I like history):

I created the first execsnoop using DTrace on 24-Mar-2004, to solve a common performance problem I was seeing with short-lived processes in Solaris environments. My prior analysis technique was to enable process accounting or BSM auditing and pick the exec events out of the logs, but both of these came with caveats: Process accounting truncated the process name and arguments to only eight characters. By comparison, my execsnoop tool could be run on a system immediately, without needing special audit modes, and could show much more of the command string. execsnoop is installed by default on OS X, and some Solaris and BSD versions. I also developed the BCC version on 7-Feb-2016, and the bpftrace version on 15-Nov-2017, and for that I added the join() built-in to bpftrace.

and a heads-up is given on the impact that running the scripts is likely to have, because some will have a noticeable impact:

The performance overhead of offcputime(8) can be significant, exceeding 5%, depending on the rate of context switches. This is at least manageable: it could be run for short periods in production as needed. Prior to BPF, performing off-CPU analysis involved dumping all stacks to user-space for post processing, and the overhead was usually prohibitive for production use.

I followed the book with a copy of Ubuntu 20.04 installed on my ThinkPad X230 and it mostly went smoothly; the only annoying thing was that user space stack traces were usually broken due to things such as libc not being built with frame pointers preserved (-fno-omit-frame-pointer).
Section 13.2.9 discusses the issue, along with the libc and libpthread rebuild requirement, and points to the Debian bug tracking the issue.
I’m comfortable compiling and installing software, but I didn’t want to go down the rabbit hole of trying to rebuild my OS as I worked through the book just yet; the thought of maintaining such a system alongside binary updates from the vendor seemed like a hassle in this space. My next step is to address that so I have working stack traces. 🙂

Besides that, I enjoyed reading the book, especially the background/history parts, and I look forward to Systems Performance: Enterprise and the Cloud, 2nd Edition, which is out in a couple of months.