Book review: Practical File System Design with the Be File System

I’ve been dragging the PDF version of Dominic Giampaolo’s book around for some time but never bothered to read it until recently, when I went fishing for PDFs in my archive (~/Downloads) to load up on my new toy, a reMarkable 2 tablet. It had been a while since I read a book on the technical details of computers, so a book on a file system from the past ticked all the right boxes and was a good test to see how the tablet fares as a reading device.

I’ve not read many books specifically on file systems; the only one that comes to mind is the Solaris 10 ZFS Essentials book from Prentice Hall, which is more of an administrator’s guide than a dive into the implementation and the thinking behind it. Practical File System Design starts by introducing how the BFS project came about, then works up from the concept of what a file system is, establishing terminology and building a picture from blocks on a disk up to mounting a file system, reading and writing a file, features which enhance a file system, and the hurdles of developing and testing the file system, across twelve chapters. The book dedicates a chapter to other file systems in use at the time: FFS (described as the grandfather of modern file systems), XFS (the burly nephew), HFS (the odd-ball cousin), NTFS (the blue-suited distant relative), and ext2 (the fast and unsafe grandchild).

Memory usage was also a big concern. We did not have the luxury of assuming large amounts of memory for buffers because the primary development system for BFS was a BeBox with 8 MB of memory.

Dominic Giampaolo

The initial target for the file system project was six months, to fit with the operating system’s release cycle, but the first beta took nine months and the final version shipped a month after that. The book was written around sixteen months after the initial development of the file system.

After the first three months of development it became necessary to enable others to use the BFS, so BFS graduated to become a full-time member of kernel space. At this stage, although it was not feature complete (by far!), BFS had enough functionality for use as a traditional-style file system. As expected, the file system went from a level of apparent stability in my own testing to a devastating number of bugs the minute other people were allowed to use it. With immediate feedback from the testers, the file system often saw three or four fixes per day. After several weeks of continual refinements and close work with the testing group, the file system reached a milestone: it was now possible for other engineers to use it to work on their own part of the operating system without immediate fear of corruption.

The book was written at a time when HFS+ was a recent revision, the block size of most modern hard disks was 512 bytes, a disk greater than 5 GB was considered very large, and companies like AltaVista were trying to bring search to the desktop (and Yahoo! many years later). The search part (attributes, indexing, and queries) is, as the book states, “the essence of why BFS is interesting”; Dominic Giampaolo would later join Apple and bring desktop search to OS X in the form of Spotlight.

A file system designer must make many choices when implementing a file system.
Not all features are appropriate or even necessary for all systems.
System constraints may dictate some choices, while available time and resources may dictate others.

I really liked the writing style of the book; it is very self-contained in that it clearly explains everything it introduces, covering the minute details which would cause problems, the options for solving a particular problem, and the routes taken. For example, the data structures chapter covers the impact of disk block size on the virtual memory subsystem and the avenues a poor choice would close off when they came to unify the buffer cache and the VM system, or accommodating users’ expectations instead of using elegant data-structure search algorithms (read The Inmates Are Running the Asylum by Alan Cooper).

The short amount of time to complete the project and the lack of engineering resources meant that there was little time to explore different designs and to experiment with completely untested ideas.

The journaling and disk block cache chapters were my favourites to read. The journaling chapter made me realise how little I actually understood about journaling compared with what I thought I knew, having assumed that just because the term journaling was used, the feature behaved the same across different implementations (metadata journaling, and what consistency means versus storage leaks). Regarding caching, I still struggle with the concept of write-back versus write-through in the abstract, so I’m always interested to read more about the subject.
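The distinction is easier to hold onto in code than in the abstract; here is a minimal sketch of my own (not from the book) of the two policies for a cached disk block:

    #include <stdbool.h>
    #include <string.h>

    #define BLOCK_SIZE 1024

    struct cache_block {
        long disk_addr;        /* which disk block this caches */
        bool dirty;            /* modified but not yet written out? */
        char data[BLOCK_SIZE];
    };

    void disk_write(long addr, const void *buf); /* the driver, elsewhere */

    /* Write-through: the write completes only once it is on disk.
     * Slower, but the disk always matches the cache. */
    void write_through(struct cache_block *b, const void *buf)
    {
        memcpy(b->data, buf, BLOCK_SIZE);
        disk_write(b->disk_addr, b->data);  /* synchronous */
    }

    /* Write-back: only the cached copy is updated; the block is marked
     * dirty and flushed later (on eviction, sync, and so on). Faster,
     * but a crash can lose or reorder updates, which is exactly the
     * window journaling exists to make safe. */
    void write_back(struct cache_block *b, const void *buf)
    {
        memcpy(b->data, buf, BLOCK_SIZE);
        b->dirty = true;                    /* disk_write() deferred */
    }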

The chapter on the vnode layer explained how the file system hooks into the kernel: what mounting a file system involves in terms of process, how you get from an i-node to a vnode, how the kernel interacts with the file system via the vnode layer using the functions the file system provides, and the support for live queries. It is followed in the next chapter by the API the operating system offers for manipulating files.

A vnode layer connects the user-level abstraction of a file descriptor with specific file system implementations. In general, a vnode layer allows many different file systems to hook into the file system name space and appear as one seamless unit.
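To make the idea concrete: a vnode layer boils down to a table of function pointers which each file system fills in, so the kernel can dispatch file operations without knowing which file system it is talking to. A hypothetical sketch in C (the names are illustrative, not BeOS's actual interface):

    #include <stddef.h>
    #include <sys/types.h>

    struct vnode;

    /* Operations a file system registers for its vnodes. */
    struct vnode_ops {
        int (*read)(struct vnode *vn, off_t pos, void *buf, size_t *len);
        int (*write)(struct vnode *vn, off_t pos, const void *buf, size_t *len);
        int (*lookup)(struct vnode *dir, const char *name, struct vnode **out);
        /* ... create, unlink, rename, read_attr, open_query, ... */
    };

    /* The kernel's handle on an active file. */
    struct vnode {
        const struct vnode_ops *ops;     /* supplied at mount time */
        void                   *private; /* e.g. the file system's i-node */
    };

    /* A read on a file descriptor funnels down to this, whatever the FS. */
    int vfs_read(struct vnode *vn, off_t pos, void *buf, size_t *len)
    {
        return vn->ops->read(vn, pos, buf, len);
    }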

The API chapter was amusing to read because of the human aspect of the problem and the struggle to come to an agreement on an approach, here fought out between those in favour of Macintosh-style file handling and those favouring POSIX style.

The BeOS C++ API for manipulating files and performing I/O suffered a traumatic birthing process. Many forces drove the design back and forth between the extremes of POSIX-dom and Macintosh-like file handling. The API changed many times, the class hierarchy mutated just as many times, and with only two weeks to go before shipping, the API went through one more spasmodic change. This tumultuous process resulted from trying to appeal to too many different desires. In the end it seemed that no one was particularly pleased. Although the API is functional and not overly burdensome to use, each of the people involved in the design would have done it slightly differently, and some parts of the API still seem quirky at times. The difficulties that arose were never in the implementation but rather in the design: how to structure the classes and what features to provide in each.

The book wraps up with a chapter on testing and various approaches to shaking out bugs. One suggestion for stressing a file system in early 1998 was to support a full USENET feed, resulting in at least 2 GB of data per day being written to disk. When collecting more PDFs after reading the journaling chapter, I found a USENIX paper from 2000 which states “anecdotal evidence suggests that a full news feed today is 15-20 GB per day”.
ISC’s InterNet News (INN) and netnews were useful tools for testing the robustness of a file system.

Of these tests, the most stressful by far is handling an Internet news feed. The volume of traffic of a full Internet news feed is on the order of 2 GB per day spread over several hundred thousand messages (in early 1998). The INN software package stores each message in a separate file and uses the file system hierarchy to manage the news hierarchy. In addition to the large number of files, the news system also uses several large databases stored in files that contain overview and history information about all the active articles in the news system. The amount of activity, the sizes of the files, and the sheer number of files involved make running INN perhaps the most brutal test any file system can endure. Running the INN software and accepting a full news feed is a significant task. Unfortunately the INN software does not yet run on BeOS, and so this test was not possible (hence the reason for creating the synthetic news test program). A file system able to support the real INN software and to do so without corrupting the disk is a truly mature file system.

The book was a great read, providing lots of historical context and grounding of concepts for an autodidact (just don’t come away thinking a 5 GB HDD is a large disk). From a nostalgia perspective it was interesting because of the desktop search push that was happening around that time, and more recently the Systems We Love talk regarding the search capabilities of BFS.

At the time I never had the full BeOS experience since I didn’t have a system with enough RAM. I recall the disappointing experience of trying to boot the demo copy of BeOS (v4.5?) Personal Edition from a PC Plus cover disk: I could boot BeOS, but the system fell back to no sound and no colour!
It would’ve been nice to use the colour display capabilities of my CRT at the very least. 🙂

I have amassed more PDFs since and am currently reading a paper from 1996 on Scalability in the XFS File System, which closes with:

Adam Sweeney, Doug Doucette, Wei Hu, Curtis Anderson, Michael Nishimoto, and Geoff Peck are all members of the Server Technology group at Silicon Graphics. Adam went to Stanford, Doug to NYU and Berkeley, Wei to MIT, Curtis to Cal Poly, Michael to Berkeley and Stanford, and Geoff to Harvard and Berkeley. None of them holds a Ph.D. All together they have worked at somewhere around 27 companies, on projects including secure operating systems, distributed operating systems, fault tolerant systems, and plain old Unix systems. None of them intends to make a career out of building file systems, but they all enjoyed building one.

Scalability in the XFS File System

There’s an article on BFS at Ars Technica if you want to read more about the file system. The article features an interview with BFS developers at Be & Haiku, and a comment by Jean-Louis Gassée.

As an aside, the reMarkable 2 is physically a really nice device to hold and the display is great, but the ability to extract my highlighted items from a PDF could be a lot better. I could export a copy of this book as a PDF, but there’s no way to get a view of just the highlighted items, and it’s not possible to copy text from a PDF on the device, which meant I had to manually scan through the exported PDF and save the highlights in my notes.

Book review: BPF Performance Tools: Linux System and Application Observability

It’s more than 11 years since the shouting in the data centre video landed, and in 2020 I still manage to surprise folks who have never seen it with what is possible.
The idea that such transparency is a reality in some circles comes as a shock.

Without the facility to dynamically instrument a system, the operator is severely limited in the insight they can gain into what is happening using conventional tools alone. Having to resort to debugging tools to gain insight is usually a non-option, for several reasons:
1) it is disruptive (the application may need to be re-invoked via tooling);
2) it carries a considerable performance impact;
3) it cannot provide a holistic view (it may provide insight into one component, leaving the operator to correlate information from other sources).
If you do have the luxury, the problem is how you instrument the system.
The mechanism offers the ability to ask questions about the system, but can you formulate the right question? This book hopefully helps with that.

To observe an application, you need both resource analysis and application-level analysis. BPF tracing allows you to study the flow from the application and its code and context, through libraries and syscalls, kernel services, and device drivers. Imagine taking the various ways disk I/O was instrumented and adding the query string as another dimension for breakdowns.

The BPF performance tools book centres around bpftrace but covers BCC as well. bpftrace is a DTrace-like tool for one-liners and for writing scripts in a language similar to D, so if you are comfortable with DTrace, the syntax should be familiar, though it is slightly different.
BCC provides a more powerful and complex interface for writing scripts, leveraging other languages to compose a desired tool. I believe the majority of the BCC tools use Python, though LuaJIT is supported too.
Either way, in the background everything ends up as LLVM IR and goes through libLLVM to compile to BPF.
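To give a flavour of the one-liner style, here is a stock example of the genre (assuming a Linux kernel with the syscall tracepoints available) which traces file opens system-wide, printing the process name and the path being opened:

    bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }'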

The first part of the book covers the technology, starting by introducing eBPF and moving on to cover the history, the interfaces, how things work, and the tooling which complements eBPF, such as PMCs, flame graphs, perf_events, and more.
A quick introduction to performance analysis, followed by introductions to BCC and bpftrace, rounds off the first part of the book in preparation for applying them to different parts of a system, broken down by chapter, starting with CPU.

The methodology is clear cut: use the traditional tools commonly available to gauge the state of the system, then use bpftrace or BCC to home in on the problem, iterating through the layers of the system to find the root cause, as opposed to trying to solve things purely with eBPF.
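For disk I/O, for example, that iteration might look something like the following; iostat(1) and the BCC tool biolatency(8) are stock tools, while the bpftrace one-liner is an illustrative sketch:

    # Gauge the system with a traditional tool first:
    iostat -xz 1
    # Then pull a latency distribution with a BCC tool
    # (one 10-second histogram of block I/O latency):
    biolatency 10 1
    # And go a layer deeper if needed, e.g. counting block I/O
    # requests by process name:
    bpftrace -e 'tracepoint:block:block_rq_issue { @[comm] = count(); }'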

I did not read the third and fourth sections of the book, which cover additional topics and the appendixes, but I suspect I will be returning to read the “tips, tricks and common problems” chapter.
Of the first sixteen chapters, which I did read, the CPU chapter really helped me understand the way CPU usage is measured on Linux. I enjoyed the chapter dedicated to languages, especially the Bash Shell section.
Given a binary (in this case bash), it covers how you go about extracting information from it, whether it has been compiled with or without frame pointers preserved, and how you could extend the shell to add USDT probes.
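On the USDT side, the mechanics of adding a probe to a C program look something like this sketch; it is a standalone stand-in for the book's bash walkthrough, the provider/probe names are mine, and it assumes the systemtap sys/sdt.h header is installed:

    /* usdt_demo.c - compile with: cc -o usdt_demo usdt_demo.c */
    #include <stdio.h>
    #include <sys/sdt.h>

    static void run_command(const char *line)
    {
        /* Fires probe "cmd_start" under provider "shell"; costs
         * next to nothing when nothing is attached to it. */
        DTRACE_PROBE1(shell, cmd_start, line);
        printf("running: %s\n", line);
    }

    int main(void)
    {
        run_command("echo hello");
        return 0;
    }

Once built, the probe can be traced with bpftrace -e 'usdt:./usdt_demo:shell:cmd_start { printf("%s\n", str(arg0)); }'.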
I did not finish the Java section; it was too painful reading what needs to be done due to the nature of Java being a C++ code base with a JIT runtime (the book states it is a complex target to trace), and I couldn’t contain myself to read the containers *yawn* chapter.
All the scripts covered in the book have their history told in the footnotes of the page, which was nice to see (I like history):

I created the first execsnoop using DTrace on 24-Mar-2004, to solve a common performance problem I was seeing with short-lived processes in Solaris environments. My prior analysis technique was to enable process accounting or BSM auditing and pick the exec events out of the logs, but both of these came with caveats: Process accounting truncated the process name and arguments to only eight characters. By comparison, my execsnoop tool could be run on a system immediately, without needing special audit modes, and could show much more of the command string. execsnoop is installed by default on OS X, and some Solaris and BSD versions. I also developed the BCC version on 7-Feb-2016, and the bpftrace version on 15-Nov-2017, and for that I added the join() built-in to bpftrace.

and a heads-up is given on the impact running a script is likely to have, because some will have a noticeable impact:

The performance overhead of offcputime(8) can be significant, exceeding 5%, depending on the rate of context switches. This is at least manageable: it could be run for short periods in production as needed. Prior to BPF, performing off-CPU analysis involved dumping all stacks to user-space for post processing, and the overhead was usually prohibitive for production use.

I followed the book with a copy of Ubuntu 20.04 installed on my ThinkPad X230 and it mostly went smoothly; the only annoying thing was that user-space stack traces were usually broken due to things such as libc not being built with frame pointers preserved (i.e. without -fno-omit-frame-pointer).
Section 13.2.9 discusses the issue and the need to rebuild libc and libpthread, as well as pointing to the Debian bug tracking the issue.
I’m comfortable compiling and installing software, but I didn’t want to go down the rabbit hole of trying to rebuild my OS as I worked through the book just yet; the thought of maintaining such a system alongside binary updates from the vendor seemed like a hassle in this space. My next step is to address that so I have working stack traces. 🙂

Besides that, I enjoyed reading the book, especially the background/history parts, and I look forward to Systems Performance: Enterprise and the Cloud, 2nd Edition, which is out in a couple of months.

Book review: The Design and Implementation of the 4.3BSD UNIX Operating System

According to my photographs, I picked up this book in February of this year. With 105 sections spread over 13 chapters, I’ve been working through the book slowly, at a section a day. Despite being a technical subject, the book does a very good job of explaining the operating system at a high level without becoming a study of the source code. There are snippets of source code & pseudocode to complement the text, and an extensive list of papers for reference at the end of each chapter for those who wish to dig deeper.

Finished the design and implementation of 4.3BSD UNIX operating system book, now available for UNIBUS based multiuser system consultancy

— Sevan Janiyan (@sevanjaniyan) August 11, 2015

I had previously attempted to complete the Minix book, Operating Systems: Design and Implementation, but struggled with the extensive source reference; switching back and forth between chapters, or needing a computer to view the source code, was not a viable option. I took a chance on this book as used copies are available on Amazon for the cost of postage, which is less than a couple of pounds. The book is well written and enjoyable to read; while the implementation details may not be completely applicable to modern BSD variants, the fundamental details still hold true in most cases, and where they don’t they provide historical background on the technical challenges faced at the time. What I liked about the Minix book was that it provided lots of background to accommodate a beginner and get a reader up to speed, though I much preferred the ability to read this book by itself, without requiring access to the source code.

I found some of the details in the interprocess communication part a little unclear at times, but I enjoyed the filesystem and memory management chapters the most and the terminal handling chapter the least, though it was there that I learnt of Berknet, one of many historical artifacts throughout the book, some of which I tweeted under the hashtag di43bsd.

Berknet is an obsolete batch-oriented network that was used to connect PDP-11 and VAX UNIX systems using 9600-baud serial lines. Due to the overhead of input processing in the standard line discipline, a special reduced-function network discipline was devised.

The 4.3BSD kernel is not partitioned into multiple processes. This was a basic design decision in the earliest versions of UNIX. The first two implementations by Ken Thompson had no memory mapping at all, and thus made no hardware-enforced distinction between user and kernel space. A message-passing system could have been implemented as readily as the actually implemented model of kernel and user processes. The latter was chosen for simplicity. And the early kernels were small. It has been largely the introduction of more and larger facilities (such as networking) into the kernel that has made their separation into user processes an attractive prospect — one that is being pursued in, for example, Mach.

The book breaks down the percentage of components in each category (such as headers) which are platform independent and platform specific. With a total of 48,270 lines of platform-independent code versus 68,200 lines of platform-specific code (so only around 41% of the kernel was platform independent), the 4.3BSD kernel was largely targeted at the VAX.

From the details on the implementation of mmap() in the BSD memory-management design decisions section, it was interesting to read about the virtual memory subsystems of old:

The original virtual memory design was based on the assumption that computer memories were small and expensive, whereas disks were locally connected, fast, large, and inexpensive. Thus, the virtual-memory system was designed to be frugal with its use of memory at the expense of generating extra disk traffic.

It made me think of Mac OS X 10.4 (Tiger), which still struggled with the same issue many years on, and which I have to suffer when building from pkgsrc: despite the system having 2 GB of RAM, memory utilisation rarely goes above 512 MB.

The idea of having to compile the system timezone into the kernel amused me, though it is stated that 4.3BSD-Tahoe first added support for the Olson timezone database that we are now familiar with, allowing individual processes to select a set of rules.

I enjoyed the filesystem chapter as I learnt about the old Berkeley filesystem and the “new” one which evolved into what we use today: the performance issues with the old filesystem due to the free list becoming scrambled as the filesystem aged (over a matter of weeks), resulting in longer seek times, and the amount of space wasted as a function of block size.

Although the old filesystem provided transfer rates of up to 175 Kbyte per second when it was first created, the scrambling of the free list caused this rate to deteriorate to an average of 30 Kbyte per second after a few weeks of moderate use.

The idea of placing blocks rotationally optimally to reduce seek times, and the mechanisms implemented to account for that, was very interesting to read about.

To simplify the task of locating rotationally optimal blocks, the summary information for each cylinder group includes a count of the available blocks at different rotational positions. Eight rotational positions are distinguished, so the resolution of the summary information is 2 milliseconds for a 3600 revolution-per-minute-drive.
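The 2 millisecond figure falls out of the arithmetic: a 3600 RPM drive completes 60 revolutions per second, so one revolution takes 1/60 of a second, roughly 16.7 ms, and dividing that among 8 rotational positions gives about 2.1 ms per position.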

Though this is not so valid today with traditional spindle disks, as there is no longer a 1:1 mapping between the physical location & the logical representation of the blocks on disk.

The book is a bargain second hand and worth it for the BSD archaeology.

Two months after the beginning of the first implementation of the UNIX operating system, there were two processes, one for each of the terminals of the PDP-7. At age 10 months, and still on the PDP-7, UNIX had many processes, the fork operation, and something like the wait system call. A process executed a new program by reading a new program in on top of itself. The PDP-11 system (first edition UNIX) saw the introduction of exec. All these systems allowed only one process in memory at a time. When a PDP-11 with memory management (a KS-11) was obtained, the system was modified to permit several processes to remain in memory simultaneously, in order to reduce swapping. But this modification did not apply to multiprogramming, because disk I/O was synchronous. This state of affairs persisted into 1972 and the first PDP-11/45 system. True multiprogramming was finally introduced when the system was rewritten in C. Disk I/O for one process could then proceed while another process ran. The basic structure of process management in UNIX has not changed since that time.

Book review: The Art of Unix Programming

I picked this book by mistake, assuming that it was going to be a technically detailed book in line with Advanced Programming in the UNIX Environment by the late Richard Stevens. It turned out to be much more high-level than that, but I was not disappointed; it’s been a pleasure to read whilst travelling over the last month.
The book is 20 chapters split across four parts (context, design, implementation, community), with commentary from some big names of the UNIX world. There is lots of great advice in the book, but I would check what’s available in software today if I were looking to implement something. It does explain why lots of software relies on some common (and heavyweight?) components. Let me explain: long ago, I was unaware that packages for the -current branch of OpenBSD were being built, so whenever I grudgingly tried a new snapshot I went through & built my packages from the ports tree after a fresh install. Then something would depend on XML-related components & pull in a bunch of things which would involve building ghostscript; on a Sun Blade 100, between Firefox & ghostscript, 24 hours would easily be wasted. I now understand that all that wasted time was thanks to someone taking ESR’s advice on how to prepare documentation for a software project.
Besides the dubious software recommendations (it is an 11-year-old book), everything is explained in a clear manner that’s very easy to read.

Rule of Robustness: Robustness is the child of transparency and simplicity.
Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.
Rule of Optimization: Prototype before polishing. Get it working before you optimize it.
Rule of Diversity: Distrust all claims for “one true way”.
Rule of Extensibility: Design for the future, because it will be here sooner than you think.

The Pragmatic Programmer articulates a rule for one particular kind of orthogonality that is especially important. Their “Don’t Repeat Yourself” rule is: every piece of knowledge must have a single, unambiguous, authoritative representation within a system. In this book we prefer, following a suggestion by Brian Kernighan, to call this the Single Point Of Truth or SPOT rule.

The book is critical of Microsoft & their approach to software, explaining some of the design decisions (some inherited from the world of VMS).

From a complexity-control point of view, threads are a bad substitute for lightweight processes with their own address spaces; the idea of threads is native to operating systems with expensive process-spawning and weak IPC facilities.

the Microsoft version of CSV is a textbook example of how not to design a textual file format.

Criticisms of MacOS are of version 9 and prior, which don’t really apply to OS X (e.g. the single shared address space). There are explanations of why things are the way they are in the world of Unix, and lots of great advice.

The ’rc’ suffix goes back to Unix’s grandparent, CTSS. It had a command-script feature called “runcom”. Early Unixes used ’rc’ for the name of the operating system’s boot script, as a tribute to CTSS runcom.

most Unix programs first check VISUAL, and only if that’s not set will they consult EDITOR. That’s a relic from the days when people had different preferences for line-oriented editors and visual editors
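That lookup order is a one-liner to honour in your own scripts; a sketch in shell:

    editor=${VISUAL:-${EDITOR:-vi}}
    "$editor" "$1"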

When you feel the urge to design a complex binary file format, or a complex binary application protocol, it is generally wise to lie down until the feeling passes.

One of the main lessons of Zen is that we ordinarily see the world through a haze of preconceptions and fixed ideas that proceed from our desires.

Doug McIlroy provides some great commentary too

As, in a different way, was old-school Unix. Bell Labs had enough resources so that Ken was not confined by demands to have a product yesterday. Recall Pascal’s apology for writing a long letter because he didn’t have enough time to write a short one. —Doug McIlroy

I’d recommend the book for anyone involved with computers, not necessarily those involved with Unix or open source variants/likes. The author does a great job of explaining the theory of an approach to developing software and the operating system it typically runs on. It’s accessible, easy to read, and doesn’t require a computer to work through. You may need one, however, if you want to read it online for free.

My ideal for the future is to develop a filesystem remote interface (a la Plan 9) and then have it implemented across the Internet as the standard rather than HTML. That would be ultimate cool. —Ken Thompson

Book review: Kerberos, The Definitive Guide

Kerberos & AFS have been two technologies I’ve wanted to deploy for a long time, but based on my experience with Kerberos in Windows 2000 & my studies for the MCSE, I had made myself believe that it would be a painful task. I purchased this book a couple of years back but never got around to reading it properly until the start of the new year. The book is divided into 10 chapters; the first 3 explain how Kerberos works conceptually, and from there on the book covers the practical aspects: how to deploy Kerberos using the MIT, Heimdal & Windows implementations, how to troubleshoot common issues, improve security, integrate applications & services, implement cross-realm authentication, and integrate Windows & UNIX, finishing off with the future of Kerberos.
The book uses FreeBSD as the OS on which the UNIX examples are demonstrated, though Kerberos is built from source. I also used FreeBSD to perform my test installation, but opted instead for the Heimdal implementation which comes bundled as standard in the base OS of the BSDs. Implementation was really simple: once the KDC was up & the necessary SRV records were in place, telnet authentication worked seamlessly, and after I’d set GSSAPIAuthentication yes in my ssh(1) & sshd(8) config files, SSH also worked seamlessly. The only thing that caught me out was that Heimdal in FreeBSD base uses DNS, whereas the book assumes this is switched off (I’m not sure if this feature was off by default at the time & has since changed, or if it’s just the FreeBSD bundled version which has it on by default). The information on troubleshooting & some of the security advice is still relevant, but other than that the book is badly outdated, discussing DES encryption & the lack of support for RC4 encryption, which was the default cipher used by Windows 2000. Setting up a slave KDC in Heimdal has also changed since this book was published: you now need a hprop/hostname principal for each slave server, whereas the book recommends host/hostname principals, which no longer works with Heimdal.
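For reference, the client-side moving parts amount to only a few lines; a minimal sketch, assuming a hypothetical realm EXAMPLE.COM with a KDC at kdc.example.com:

    # /etc/krb5.conf
    [libdefaults]
        default_realm = EXAMPLE.COM

    [realms]
        EXAMPLE.COM = {
            kdc = kdc.example.com
        }

    # ssh(1) client configuration
    GSSAPIAuthentication yes

    # sshd(8) server configuration
    GSSAPIAuthentication yes

With SRV records such as _kerberos._udp.example.com in place, an implementation which resolves the KDC via DNS (the Heimdal behaviour that caught me out above) can even do without the [realms] stanza.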

Looking around, you will still see references to Windows 2000 when reading about Kerberos implementation, e.g. in the current Heimdal documentation; I’m not sure if this is still applicable to the latest version of Windows or whether it’s there for historical reference.
If I were looking to learn about Kerberos, specifically Heimdal, I would use the official documentation & the Kerberos5 article in the FreeBSD handbook instead of buying this book, as there is too much outdated advice in it that no longer applies.
Ignoring the outdated best practices, the initial implementation information has remained the same over the years & it’s amazingly easy to deploy in a lab scenario for testing.

Book Review: Pro DNS and BIND

So this is not a new book by any means: bought in 2007, published in 2005, covering BIND 9.x & now succeeded by Pro DNS and BIND 10. I’m on a mission to clear as many books off my bookshelf as I can; with ebooks & daily deals from publishers, the digital shelf in iBooks is by no means shrinking, while I’ve stopped buying books in print. My back is thankful for it & large reference books happily sit in digital format, within reach when onsite. Anyway, back to the book this post is about: it is a polished-up version of the DNS for Rocket Scientists guide, which you most certainly would’ve come across if searching the web for answers to BIND & DNS related questions, with a chapter on DNSSEC, which is not on the website, for added value.
The book is split into six parts:

  • Principles and Overview
  • Get Something Running
  • DNS Security
  • Reference
  • Programming
  • Appendixes
I read the first eleven of fifteen chapters, which took me to the end of the DNS Security part; the last three parts are all reference material such as the BIND API, RFCs & configuration file parameter lists.
Like the online guide, the book is full of useful information & a very easy read, apart from the DNS Security part. The “Securing Zone Transfers” section felt out of place and jumped into using dnssec-keygen with no prior reference to it; I struggled with the DNSSEC chapter too, but that may have been more to do with it being my first exposure to the topic. The only thing I found annoying was the repeated reminder, in every paragraph preceding a command snippet, that the backslash represents continuation onto a new line.
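As context for where that section ends up: dnssec-keygen also generates the shared TSIG keys used to secure zone transfers. A minimal BIND 9 sketch under that assumption (key name, zone & file names are illustrative):

    # Generate a TSIG key (HMAC-MD5 being the usual choice in BIND 9
    # of this era); the secret ends up in the generated key files:
    dnssec-keygen -a HMAC-MD5 -b 128 -n HOST transfer-key

    # named.conf on both master & slave: share the secret and restrict
    # transfers to holders of the key.
    key "transfer-key" {
        algorithm hmac-md5;
        secret "<base64 secret from the generated key file>";
    };
    zone "example.com" {
        type master;
        file "db.example.com";
        allow-transfer { key transfer-key; };
    };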

With reading this book and a review of deploying DNSSEC via the Intro to DNSSEC video from BSDCan 2012, I am looking forward to deploying DNSSEC via both DS & DLV, as I have registrar support for some TLDs but not ccTLDs.
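The signing side is a short loop with the stock BIND tools; a minimal sketch (zone name illustrative, algorithm & key sizes of that era):

    # Generate a zone-signing key and a key-signing key:
    dnssec-keygen -a RSASHA1 -b 1024 -n ZONE example.com
    dnssec-keygen -a RSASHA1 -b 2048 -n ZONE -f KSK example.com

    # Include the public keys in the zone file, then sign it; this also
    # emits a dsset-example.com. file with the DS records to hand to the
    # registrar (or publish via DLV where the registrar lacks support):
    dnssec-signzone -o example.com db.example.com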

Book Review: Implementing Cisco IOS Network Security (IINS)

So I wrote up a review of the Cisco Press self-study guide for the 640-553 exam, which I finished reading this weekend, & while double-checking things I noticed that the 640-554 exam topics were announced last month, with the self-study guide for 640-554 due to be published at the end of August; the new exams follow on from the 1st of October.
The new book will again be authored by Catherine Paquet, so I’m curious how much new content there will be in the new revision.

There are seven chapters in the current 640-553 book:

  • Introduction to Network Security Principles
  • Perimeter Security
  • Network Security Using Cisco IOS Firewalls
  • Fundamentals of Cryptography
  • Site-to-Site VPNs
  • Network Security Using Cisco IOS IPS
  • LAN, SAN, Voice, and Endpoint Security Overview
Chapter 1, “Introduction to Network Security Principles”, was the most tedious of the seven to read: a long, drawn-out chapter covering ethics, risk analysis, lots of charts, graphs & cost figures (I managed to get through the chapter by thinking of Brass Eye every time I came across one), marketing info on Cisco’s “self-defending network”, & buried amongst all that, some introductory info on different types of attack.

Chapter 2, “Perimeter Security”, covers getting set up (ACS Server on Windows, logging, AAA, views), more product line info & navigating the SDM.

Chapter 3, “Network Security Using Cisco IOS Firewalls”, covers the fundamentals of firewalls; quite a large portion of the chapter is on ACLs & configuring them, which didn’t make sense as this is covered in ICND2, followed by a shorter section on configuring the zone-based firewall via the SDM & the firewall wizard.

Chapter 4, “Fundamentals of Cryptography”, was good but contained some mistakes, like “DES is considered trustworthy” & “Cryptography researchers have scrutinized DES for nearly 35 years and have found no significant flaws”. These statements are wrong, as the EFF’s DES Cracker proved in the late ’90s; or perhaps this is what they were referring to: “because DES is based on simple mathematical functions, it can easily be implemented and accelerated in hardware”.

Chapter 5, “Site-to-Site VPNs”, was enjoyable & led on from the foundation laid in the previous chapter; setup was also covered from the console this time.

Chapter 6, “Network Security Using Cisco IOS IPS” covers the fundamentals on the theory side, how to configure it via SDM & more product intro. This chapter is available as a free sample for download.

Chapter 7, “LAN, SAN, Voice, and Endpoint Security Overview”, was 50/50. I enjoyed the SAN section because it was new to me, so there was new information to learn; the endpoint security section covered various attacks & vulnerabilities mixed up with product line info; the voice section was brief, covering fundamentals, threats & defence, & I didn’t find it very interesting. The chapter finished up with mitigating L2 attacks.

I didn’t particularly enjoy this book; the first three chapters were pretty tedious to read, but it got better in the later ones. Overall it lacked flow & felt thrown together.
It was also disappointing to see the use of TFTP being encouraged in a security book:
“The system that you choose should support TFTP to make it easy to transfer any resulting configuration files to the router” &
“The added layer of MD5 protection is useful in environments in which the password crosses the network or is stored on a TFTP server”.
The book is a combination of marketing material on the product line, some technical theory & mainly instructions for navigating the SDM, though the console is covered here & there (the main focus is the SDM, but that looks to change to CCP for the new exam).
As self-study guides go, I thought it was better than Stephen McQuerry’s two books for the R&S CCNA. I’m looking forward to seeing how the CCNA Security book is; I really enjoyed reading Odom’s CCNA books, & though I’ve not read any of Kevin Wallace’s books before, I found the video content he’s published on YouTube very good, so I’m looking forward to reading his book to prepare for the 640-553 exam.
If the exam certification guides are generally on par with Odom’s books, then in the future I think I will skip the self-study guides & move straight on to the exam certification guides.