Embedded Systems Design, Arnold Berger


Arnold Berger’s new book “Embedded Systems Design” (CMP Books, 2002, ISBN 1-57820-073-3) is an introduction to how we go about building embedded systems. Arnie teaches a class on the subject, so this book distills his wisdom in getting the message across to EE students. At $34.95 and 236 pages it offers welcome value to the world of overpriced technical books.

The book has a very powerful focus on tools, a direction no doubt gleaned from Arnie’s many years at AMD and Applied Microsystems. The explanation of BDM/JTAG debuggers is one of the best I’ve seen. In addition to the thorough coverage of standard BDMs, he also discusses the very important Nexus efforts to standardize these debuggers, and to extend them to offer more capability for dealing with real time systems.

 You’ll immediately note his bias towards using the most powerful debuggers, especially In-Circuit Emulators. Not sure how an ICE works? Read this book for a great overview and advice about using the most important features of these tools.

Most welcome are the 23 pages devoted to selecting a CPU. That subject gets too little attention, and can sometimes be more a matter of faith than of science. The book covers all of the selection criteria in a readable and comprehensive manner.

The book is a needed addition to our art. It’s not aimed at the experienced developer, though. Couple this with Simon’s An Embedded Software Primer and you’ll have a good start on the basics of building embedded systems.

Embedded Systems Design using the Rabbit 3000 Microprocessor, Kamal Hyder and Bob Perrin


Embedded Systems Design using the Rabbit 3000 Microprocessor, by Kamal Hyder and Bob Perrin, (ISBN: 0750678720) is a complete introduction to programming with this popular microprocessor.
 
Rabbit Semiconductor (http://rabbitsemiconductor.com/) sells a popular range of 8-bit microprocessors that offer quite high-end performance. My son and I just finished a project for his high school with one, and I've used them for a number of other applications. The R3000 is sort of like a Z80 on steroids, with many new instructions, a wider address bus and a wealth of on-board peripherals.
 
Like any modern high-integration CPU the Rabbit offers so much it's sometimes hard to get a handle on managing all of the I/O. This book will get you started, and is a must-read for developers using the part.
 
The first few chapters describe the CPU in general and the development environment provided by Rabbit (Dynamic C).
 
Chapter 5, though, is a description of interfacing to the real world, using all sorts of devices. It's aimed at engineers, not raw newbies, but, for an engineer at least, is an easy and descriptive read.

 

The chapter on interrupts is one of the best I've seen in any book. It covers the hard stuff, like writing ISRs in C and assembly, with real-world examples. If you're using the R3000 just cut and paste the code into your application.
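The book's examples are naturally Rabbit- and Dynamic C-specific, but the basic shape of a well-behaved C interrupt handler is universal. Here's a generic sketch (mine, not the book's; the function names and the byte-passing mechanism are purely illustrative): do the minimum inside the ISR, set a volatile flag, and let the main loop do the real work.

```c
#include <stdbool.h>
#include <stdint.h>

/* Shared between ISR and main loop: volatile so the compiler
   re-reads them on every pass through the loop. */
static volatile bool rx_ready = false;
static volatile uint8_t rx_byte = 0;

/* Hypothetical serial-receive ISR: keep it short -- grab the data,
   set a flag, get out. */
void serial_rx_isr(uint8_t data_register_value)
{
    rx_byte = data_register_value;
    rx_ready = true;
}

/* Main-loop side: poll the flag and do the real work outside
   interrupt context. Returns true if a new byte was delivered. */
bool poll_serial(uint8_t *out)
{
    if (!rx_ready)
        return false;
    *out = rx_byte;
    rx_ready = false;
    return true;
}
```

On real hardware the ISR would be registered in the vector table and would read a peripheral register; here it takes the data as a parameter so the idea stands alone.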
 
It seems today that if there's a transistor in a product then it needs an Internet connection. Rabbit has several development kits that include everything needed to connect to the 'net. The authors devote considerable space to networking, but thankfully with only a cursory explanation of protocols. Rather, they give step-by-step instructions on implementing a working network, and conclude with a complete web server for monitoring water sprinklers.
 
The final chapter covers an alternative toolchain from Softools. Dynamic C is a single-module compile-it-all paradigm that's highly interactive. Softools (http://www.softools.com/ ) sells a well-supported, reasonably-priced conventional C compiler, assembler and IDE. I only recommend products I've used and like, and the Softools products are first-rate.
 
Embedded Systems Design using the Rabbit 3000 Microprocessor is required reading for users of the R3000, and a pretty darn good introduction to the entire realm of embedded systems development as well.
 

 

An Embedded Software Primer, David Simon 


An excellent book, “An Embedded Software Primer”, by David E. Simon (Addison-Wesley, 1999, ISBN 0-201-61653-X), came across my desk. Embedded titles are becoming more common – not so long ago ANYTHING embedded was worthy of attention – but this book is a standout.

It’s aimed at the novice or nearly novice embedded person, one with experience in C but little feel for the unique issues of embedded systems. The book starts with the standard introduction to microprocessor hardware (which could have been left out), but quickly moves on to a very good description of interrupts; this section alone is quite worthwhile.

Three of the 11 chapters are devoted to real time operating systems. The included CD has a copy of the older version of Jean LaBrosse’s uC/OS RTOS. Whether you use uC/OS or a commercial product, Mr. Simon’s discussion of RTOS issues is a very good introduction to the subject.

If you’ve never used an RTOS, this is a pretty good reference (but also check out MicroC/OS-II, LaBrosse’s new companion volume to his upgraded RTOS). If you’re trying to figure out what firmware is all about, and get a sense of how one should write code, this book is for you.

This is one of my all time favorite books on embedded systems. Very highly recommended.

Extreme Programming Explained, Kent Beck 


Software engineering is a field that seems to proceed in fits and starts. Most of us write code the same way we did back in college, though occasionally a new approach does come along. I’d count Fagan Inspections as one, OOP another.

In the last couple of years, though, Kent Beck’s Extreme Programming (XP) has surfaced as another interesting approach to writing code. And *code* is the operative word. XP starts with the requirements in the form of user stories. The customers deliver and prioritize the user stories. The developers analyze the stories and write tests for them.

Everything ends with code. The code is developed by pairs of programmers to increase quality. Quality code is the goal, and that’s obtained by constantly rewriting it (refactoring, in XP lingo), pair programming so two pairs of eyes look at it all, and constant testing and integration. The output is clean, clear code that fulfills the customer’s wishes, with no extra frills or hooks for extensibility.

One book that does a great job of describing XP is Kent Beck’s Extreme Programming Explained (ISBN 0-201-61641-6), a slender but complete $29.95 volume.

I sometimes find these sorts of books tiresome. An evangelist pushes what some might see as a wild-eyed new way to create software, while the evening wears on and my interest flags. This one is different. Between the writing style and the quite fascinating ideas behind XP I found the book compelling.

XP requires a customer who lives on-site, constantly providing feedback to the development team. A very cool idea. Practical? I have doubts, especially in the embedded world where so many of us build products used by thousands of disparate customers. But a cool idea nonetheless.

XP demands conformance to a coding standard. Excellent! The pair programming I’d find a little too “in your face”, but is an interesting concept that builds on the often-proven benefits of code inspections, though in my experience two pairs of eyes are not enough.

XP teams focus on validation of the software at all times. Programmers develop software by writing tests first, then software that fulfills the requirements reflected in the tests. Customers provide acceptance tests that enable them to be certain that the features they need are provided. There’s no such thing as a big integration phase. This is the XP practice I find most brilliant. Even if you’re not going to pursue XP, study it and take the testing ideas to heart.
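The test-first practice is easy to sketch in C (a made-up example of mine, not from Beck's book): the test exists before the function it exercises, and the function is then written only to make the test pass.

```c
#include <assert.h>

/* Requirement, expressed as a test written FIRST: clamp a raw
   sensor reading into the range 0..1023. The implementation
   doesn't exist yet when this test is written. */
int clamp_reading(int raw);

void test_clamp_reading(void)
{
    assert(clamp_reading(-5)   == 0);     /* below range */
    assert(clamp_reading(500)  == 500);   /* in range    */
    assert(clamp_reading(2000) == 1023);  /* above range */
}

/* Only now is the code written -- just enough to pass the test. */
int clamp_reading(int raw)
{
    if (raw < 0)    return 0;
    if (raw > 1023) return 1023;
    return raw;
}
```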

Constant testing plays into the “frequent releases” XP requirement. Don’t build a big thing and then dump it on a customer. Release portions of the project often to get feedback. This is much like the spiral development model, which seems to offer the only practical hope to meet schedules. Of course, neither spiral nor XP development promises that we’ll know a real delivery time at the project’s outset; instead, we evolve the product and schedule together. Most managers can’t accept that.

Finally, I’d imagine most of us would quickly buy in to XP’s belief in 40 hour work weeks. Tired programmers make mistakes. Go home!

Extreme Programming Refactored, by Matt Stephens and Doug Rosenberg

 

Like Martin Luther’s 95 Theses, Matt Stephens and Doug Rosenberg’s new book “Extreme Programming Refactored” (Springer-Verlag, New York, 2003, ISBN 1-59059-096-1) lifts the hood on the hype and exposes the problems that come with XP.

Just as educated Christians should read what’s available of the Talmud (at least, the little that’s been translated into English) to understand better an important and interesting part of our world, all educated developers should go dig through a couple of XP tomes. And then read this book, which in the Agile spirit I’ll acronym to XPR.

It’s the most infuriating programming book I’ve read. The message is spot-on, but it’s told in such an awful manner that it’s sometimes hard to hear the reasonable thoughts over the noise, like the lyrics to 40 (I counted) annoying XP-bashing songs littered randomly through every chapter.

Sometimes witty, it’s often entertaining in the manner of the National Enquirer or a car wreck. Though the authors repeatedly express dismay at how XP zealots attack their doubting Thomases, XPR wages near-war against the XP personalities. An entire chapter belittles the opposition’s personas. A special overused icon warns the reader of yet another tiresome bout of sarcasm.

XPR carefully and correctly demonstrates how all 12 of XP’s practices are interrelated. Drop one and the entire game falls apart like a house of cards. Testing is the only defense against poor specs; pair programming an effort to save the code base from a poorly thought-out, frantically hacked-together creation. The book is worthwhile for this analysis alone. The XPers don’t stress how vital all 12 steps are to success on a project.

Yet the authors, in the few demonstrations of failed XP projects they present (no successes are noted), sheepishly admit that none of these programs were built using an unmodified form of XP. All used subsets… the very approach XPR demonstrates cannot succeed. So the credibility of these examples suffers.

A sidebar cleverly titled “The Voice of eXPerience” quotes disgruntled programmers who used (subsetted) XP. Actually, I think there are only two programmers quoted, the same ones over and over. One pontificates: “My feeling is that XP wouldn’t score highly at all when compared to other available principles”. That may be true… but isn’t a very convincing proof.

The authors do miss a couple of other arguments that indict XP-like development processes. The Agile community calls the test strategy XP’s “safety net”; they say it ensures bad code never makes it to the field. Yet study after study shows tests don’t exercise all of the software - in some cases less than half! I’d argue that tests are the safety net that catches problems that leak through the code inspections, design checks, and careful design. In the embedded world, the automated tests required by XP are devilishly hard to implement, since our programs interact with users and the real world.

XPR completely ignores embedded systems, rather like, well, rather like every other software book. One anti-XP argument for an embedded project is that without some level of up-front design you can’t even select the hardware. Do we need an 8051 or a PowerPC? Is data trickling in or gushing at 100k samples per second?

XPR concludes with a modified version of XP that’s less eXtreme, more logical, and better suited to firmware development. That chapter is the best part of the book.

Now don’t get me wrong- I do believe there are some programs that can do well with XP. Examples include non-safety critical apps with rapidly changing requirements that simply can’t be nailed down. Web services come to mind. I know of one group that has been quite successful with XP in the embedded space, and numerous others who have failed.

Should you read the book? If the siren song of XP is ringing in your ears, if pair programming sounds like the environment you’re dying to work in, read XPR today. Others wishing for a balance to the torrent of pro-XP words flooding most software magazines will find this book interesting as well. If it had been a third as long, without the revisionist Beatles lyrics, and, well, more polite, it would deserve 5 stars.

 

Feature Driven Development, A Practical Guide to, by Stephen Palmer and John Felsing

The last decade or so has been an exciting time to be in software development. Hardware design has, in my opinion, lost some of the fun now that ICs are so dense and speeds so high. But the software world has been flooded with new ideas and methodologies. Some are brilliant, others goofy, but all are fascinating.
 
One is Feature-Driven Development (FDD). Do read “A Practical Guide to Feature-Driven Development”, by Stephen Palmer and John Felsing (ISBN 0-13-067615-2, Prentice Hall, 2002), which is a readable treatise on this important topic.
 
Feature-Driven Development (FDD) is a relatively agile development methodology that is, in my opinion, much more suited to most embedded efforts than techniques like eXtreme Programming. XP is an interesting idea with lots of fabulous concepts we should steal, but I’m concerned about how XP shortchanges design. FDD requires considerable initial design, yet preserves much agility in the feature implementation phase.
 
As an aside, a new article (http://www.stsc.hill.af.mil/crosstalk/2003/12/0312Turner.html, People Factors in Software Management: Lessons From Comparing Agile and Plan-Driven Methods by Richard Turner and Barry Boehm) gives a quite good analysis of where Agile methods fit in the spectrum of projects. The quick summary: Agile methods are best if you have lots of superior people, a project whose reliability isn’t critical, quickly-changing requirements, a small team, and a culture that thrives on chaos.
 
FDD, too, requires above average to superior developers. That seems to be a characteristic of most new methods. Where do all of the average and below-average people go? Obviously, simple math tells us an awful lot of developers won’t score in the superprogrammer category.
 
FDD has a Project Manager who owns the project and acts as a force field to shield the developers from interruptions and administrivia. Day to day direction falls to the Development Manager, a person endowed with both people and technical skills.
 
A Chief Architect is responsible for the system’s overall design.
 
Chief Programmers own feature sets and lead small teams implementing the code. They rely on Class Owners, the actual workers cranking out software. Unlike XP, where everyone owns all of the code, in FDD the Class Owner is responsible for a particular class.
 
FDD has 5 processes. The project starts with an overall design, called a “domain object model”. From there a features list is constructed. A plan is made, grouping features into sets which are assigned to Chief Programmers.
 
The fourth and fifth processes comprise the project’s engine room. A Chief Programmer selects a set of features that can be implemented in two weeks or so. He or she designs each feature, creating a list of classes which are designed and implemented. This closely resembles Gilb’s well-known Evolutionary process, which focuses on “time-boxing” a schedule, that is, figuring out what can be done in a two-week timeframe, implementing that, and iterating.
 
The book includes a 10 page cheat-sheet that details each part of FDD. It’s a handy guide for outfits just embarking on a new project using the methodology.
 
The book has frequent sidebars featuring a dialog between an experienced FDD developer and one just embarking on a project using the technique. I found this distracting and not terribly enlightening. And the authors push TogetherSoft’s product just enough to be annoying.
 
But these are minor complaints. Unlike some programming books that are long on passion while shortchanging substance, this volume gives a clear introduction to FDD, with enough “how-to” to use the method in your own projects.
 
Highly recommended.
 

Guidelines for the Use of the C Language in Vehicle Based Software, by MISRA

Frequent contributors to the comp.arch.embedded newsgroup sometimes refer to the MISRA (Motor Industry Software Reliability Association) publication “Guidelines For the Use of The C Language in Vehicle Based Software”. As one interested in firmware reliability (is that an oxymoron?) I wanted to check out this publication, but was frustrated by its unavailability on the net. So I ordered a copy from England (35 pounds for overseas shipments) through the web site (http://www.misra.org.uk).

In just a few weeks the 70 page bound booklet arrived. It’s emphatically NOT a software standard; rather, the authors define safe ways to use some C constructs and identify others that must be avoided. Use these guidelines in concert with a real standard, one that defines coding styles, commenting conventions, and the like (you’re welcome to download the one I use from http://www.ganssle.com/misc/fsm.doc).

While C is indeed a very powerful language, it should come with a warning label: “danger: experts only”. It’s so easy to create programs that leak memory, run pointers wildly all over memory, or create other difficult-to-find havoc.

The MISRA standard, a collection of 127 coding rules, tries to prevent problems by limiting the types of C constructs we use, and defining safe ways to use others.

Quite a few of the MISRA rules make tremendous sense: don’t redefine reserved words and standard library function names. Document and explain all uses of #pragma. When a function may return an error, always test for that error. Functions should have a single exit point.

Some are interesting: never use recursion. Keep pointer indirection to no more than two levels.

A couple are hard but possibly quite valuable: check every value passed to every library routine. Avoid many common library functions.

Others are trivial: only use characters defined by the ISO C standard. Don’t nest comments. Write code conforming to ANSI C. Don’t confuse logical and bitwise operators. Don’t have unreachable code.

Some of the requirements I find disturbing. For instance, rule 118 prohibits the use of dynamic memory allocation. Not a bad idea, due to problems associated with fragmentation. But there are alternatives to malloc/free that still give us the benefits of dynamic memory allocation without the pitfalls. More problematic, this rule tells us not to use library functions which employ dynamic memory, specifically mentioning string.h. This seems awfully restrictive to me… I sure don’t want to write my own string handlers… and further, how is one to identify the suspect libraries?

Rule 122 prohibits the use of setjmp and longjmp. These are worse than gotos, of course, in that they let us branch to specific memory addresses. Yet in a few cases longjmp is almost unavoidable.
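To make a few of these rules concrete, here's a toy function of mine (not an example from the MISRA document) that honors three of them: a single exit point, always testing a library call's error indications, and shallow pointer indirection.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Parse a channel number in the range 0..15 from a string.
   Single exit point, and strtol()'s error indication (the end
   pointer) is always tested -- in the spirit of the rules above. */
bool parse_channel(const char *text, long *channel)
{
    bool ok = false;             /* one result variable...       */
    char *end = NULL;
    long value;

    if ((text != NULL) && (channel != NULL)) {
        value = strtol(text, &end, 10);
        if ((end != text) && (*end == '\0') &&
            (value >= 0L) && (value <= 15L)) {
            *channel = value;
            ok = true;
        }
    }
    return ok;                   /* ...and one return statement  */
}
```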

I think there’s much value to the document, but as a stand-alone set of rules it’s incomplete. Better, incorporate the rules into your in-house software standard. It’s just too hard to conform to two sets of rules living in two different documents.

If MISRA published the rules on-line, they’d be more accessible to the embedded community, hopefully improving the quality of code everywhere. Without such an electronic copy, I doubt if many will ever incorporate these rules into their own standards.

Ham Radio for Dummies, Ward Silver


Ward Silver's Ham Radio for Dummies appeared in my in-box recently. Published in 2004 by Wiley, it's a moderately hefty 360-page introduction to the world of Amateur Radio (aka "ham radio").

For those not in the know, Amateur Radio is a means of communicating world-wide with surprisingly sophisticated equipment using a vast array of frequencies. It's internationally regulated; all hams must have a license which comes only after passing a test.

Ham radio is sort of outside the purview of embedded systems, but this hobby pushed many of us into the world of electronics and computers. I've had a license for many decades; as a teenager building (vacuum tube!) radios I learned an awful lot about electronics. For me designing and building equipment was more fun than chatting with other hams, but that's ham radio's appeal. There are many different facets to the avocation.

First I have to admit that the "For Dummies" books irritate me. I've spent a lifetime studying many subjects and may be uneducated on some, but never consider myself a "dummy." A title like "For Novices" or "An Introduction To" is a bit more seemly, yet for some reason these dummy books have a wide appeal.
 
This is a book for rank novices - not dummies, but for people who are interested in the hobby but just don't know where to go to learn more. Though the ARRL, the ham radio advocacy group, (http://www.arrl.org) does offer lots of useful information, this book packages the data in a more convenient form than any other publication I know of. The author does a superb job of describing what the hobby is all about. In fact, perhaps half the book discusses different aspects of ham radio. Did you know you can run your own TV station? Mr. Silver shows how. How about radioteletype, moonbounce, or other operating modes? This book gives an overview of each, with good links for more information.
 
It's peppered with amusing anecdotes and cartoons. The writing is lively and non-technical, easy enough for anyone to grasp.
 
You can't operate as a ham without a license and Mr. Silver clearly describes the testing process, as well as the different kinds of licenses available. This is not, however, a test preparation manual. You'll need other books, such as those at http://www.arrl.org/catalog/lm/ . Thus the book is totally tech-free.
 
What's the test like? In the US there are 35 or 50 multiple choice questions. Get 75% and you pass. Questions are both technical (electronics) and regulatory (the operating rules). Trust me, it's not hard to pass, especially using the aforementioned study material.
 
Though there is a license that doesn't need Morse code, any serious operator will want a license with more privileges. That requires passing a code test at 5 words per minute, 25 characters a minute or about two seconds per character. This requirement has been substantially downgraded from the 13 or 20 word per minute test of just a few years ago. With a little study and practice 5 WPM is a breeze.
 
One strength of the book is that Mr. Silver clearly explains actual operating procedures in a fashion that's more engaging than the ARRL publications.
 
Even after 35 years as a ham I didn't know about beacons used to check radio propagation (covered on page 101). And he discusses the digital modes that are all the rage today, which I have no experience with, so I learned a few things.
 
Ham radio exists in a very different environment than when I first became interested in the hobby. It was relatively easy to build a rig when radios had only a handful of vacuum tubes. Today's multimode transceivers are packed full of surface-mounted ICs. It's harder to build this sort of equipment in a typical home shop. Yet there are still sources for kits and equipment, and a surprising number of hams build their own gear, especially "QRP" (very low power) gear. This book has a list of companies that sell kits.
 
Appendix B, a list of links and other sources, is invaluable.
 
The "bible" of ham radio is the ARRL Handbook for Radio Communication, which has a mediocre introduction to the hobby, but is fantastically complete in electronics and radio theory, coupled with plenty of build-it projects. Ward Silver's Ham Radio for Dummies fills the introductory niche left blank by the Handbook.
 

High Integrity Software, John Barnes


“High Integrity Software” – the title alone got me interested in this book by John Barnes. Subtitled “The SPARK Approach to Safety and Security”, the book is a description of the SPARK programming language’s syntax and rationale.

The very first page quotes C.A.R. Hoare’s famous and profound statement: “There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.” This meme has always rung true, and is one measure I use when looking at the quality of code. It’s the basis of the SPARK philosophy.

What is SPARK? It’s a language, a subset of Ada that will run on any Ada compiler, with extensions that automated tools can analyze to prove the correctness of programs. As the author says in the Preface, “I would like my programs to work without spending ages debugging the wretched things.” SPARK is designed to minimize debugging time (which averages 50% of a project’s duration in most cases).

SPARK relies on Ada’s idea of programming by contract, which separates the ability to describe a software interface (the contract) from its implementation (the code). This permits each to be compiled and analyzed separately.

It specifically attempts to ensure the program is correct as built, in contrast to modern Agile methods, which stress cranking out a lot of code fast and then making it work via testing. Though Agility is appealing in some areas, I believe that, especially for safety-critical systems, a focus on careful design and implementation beats a code-centric view hands down.

SPARK mandates adding numerous instrumentation constructs to the code for the sake of analysis. An example from the book:

procedure Add(X : in Integer);
--# global in out Total;
--# post Total = Total~ + X;
--# pre X > 0;

The procedure definition statement is pure Ada, but the following three statements are SPARK-specific annotations. The first tells the analysis tool that the only global used is Total, and that it’s both an input and an output variable. The next tells the tool how the procedure will use and modify Total. Finally, a precondition is specified for the passed argument X.

Wow! Sounds like a TON of work! Not only do we have to write all of the normal code, we’re also constructing an almost parallel pseudo-execution stream for the analysis tool. But isn’t this what we do (much more crudely) when building unit tests? In effect we’re putting the system specification into the code, in a clear manner that the tool can use to automatically check against the code. What a powerful and interesting idea!
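C has nothing like SPARK's statically-analyzable annotations, but the underlying idea can be crudely approximated at run time with assertions (my sketch, not from the book). The difference is profound: SPARK's tools prove these conditions before the program ever runs, while assert() merely checks them as the program runs.

```c
#include <assert.h>

static int total = 0;                /* the "global in out" variable */

int get_total(void) { return total; }

/* Run-time approximation of a SPARK-style contract:
   precondition x > 0, postcondition total == old total + x. */
void add(int x)
{
    int old_total = total;           /* capture "Total~" by hand     */

    assert(x > 0);                   /* like  --# pre  X > 0         */
    total = total + x;
    assert(total == old_total + x);  /* like  --# post Total = Total~ + X */
}
```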

And it’s similar to some approaches we already use, like strong typing and function prototyping (though God knows C mandates nothing and encourages any level of software anarchy).

There’s no dynamic memory usage in SPARK – not that malloc() is inherently evil, but because use of those sorts of constructs can’t be automatically analyzed. SPARK’s philosophy is one of provable correctness. Again… WOW!

SPARK isn’t perfect, of course. It’s possible for a code terrorist to cheat the language, defining, for instance, that all globals are used everywhere as in and out parameters. A good program of code inspections would serve as a valuable deterrent to lazy abuse. And it is very wordy; in some cases the excess of instrumentation seems to make the software less readable. Yet SPARK is still concise compared to, say, the specifications document. Where C allows a starkness that makes code incomprehensible, SPARK lies in a domain between absolute computerese and some level of embedded specification.

The book has some flaws – it assumes the reader knows Ada, or can at least stumble through the language. That’s not a valid assumption anymore. And I’d like to see real life examples of SPARK’s successes, though there’s more info on that at http://www.sparkada.com/.

I found myself making hundreds of comments and annotations in the book, underlining powerful points and turning down corners of pages I wanted to reread and think about more deeply.

A great deal of the book covers SPARK’s syntax and the use of the automated analysis tools. If you’re not planning to actually use the language your eyes may glaze over in these chapters. But Part 1 of the tome, the first 80 pages which describes the philosophy and fundamentals of the language and the tools, is breathtaking. I’d love to see Mr. Barnes publish just this section as a manifesto of sorts, a document for advocates of great software to rally around. For I fear the real issue facing software development today is a focus on code uber alles, versus creating provably correct code from the outset.

High Integrity Software, The SPARK Approach to Safety and Security, by John Barnes. Published by Addison-Wesley, ISBN: 0321136160.

 

High Speed Digital Design, Howard Johnson and Martin Graham 


Every embedded hardware designer simply must read High Speed Digital Design (a Handbook of Black Magic) by Howard Johnson and Martin Graham (1993 PTR Prentice Hall, NJ). Though the book will challenge you if your grasp of theory is rusty, it's worth reading even if you must skip the math.

Modern components are so fast that even slowly-clocked systems suffer from all sorts of speed problems. This book leaves no stone unturned in the quest for reliable digital designs.

The authors cover transmission line theory in detail. At first glance I shuddered, remembering with no joy two incomprehensible semesters of electromagnetics. Johnson and Graham balance theory with lots of practical information. For example, a right-angle bend on a PCB trace is a transmission disaster... one that you can cure simply by rounding the edges of the track.

Most of us vaguely know that corrupting a PCB ground or power plane is not a good thing to do. Yet we sometimes yield to temptation when a board simply will not route on 6 layers, and run a couple of tracks on the plane. In a few paragraphs this book shows why this is a horrible idea, as the current return for any track runs under the track itself. A slot etched in the ground plane to allow the routing of tracks may block a return path. Current will flow around the slot, greatly increasing the path's inductance. Even designers with the best of intentions may accidentally create this situation by poorly specifying hole sizes for connectors. If the holes are too large, they may intersect, creating a similar, though unintended, slot.

What's the best way to stack layers on a PCB? The book includes an entire chapter about this, though I would have liked to see more discussion about how signals couple with different stack configurations.

Vias, too, get their own chapter. There's lots of good advice. The best sound bite is that small vias are much faster than larger ones. Small sure helps routing as well, especially with SMT boards, so there's a ray of hope for us yet!

One of the biggest challenges faced by digital designers is propagating signals off-board through cables. A chapter about this subject is worth the price of the book alone. Ribbon cable is far better than I realized, especially when you run grounds as the authors recommend.

What's the best way to use a scope on a high speed system? What is the effect of that short little ground wire coming from the probe? It turns out that the 3 inch ground lead can degrade the displayed risetime by more than 4 nsec! The authors offer the best description of scope probe problems, and solutions, I've ever seen. They show how to build a better probe using parts found in any shop.

Did you know that skin effect, the tendency of high frequency signals to travel only in the outer edges of a conductor, can become important on PCB tracks at frequencies as low as 4 MHz? Halving the length of a conductor improves its frequency response by a factor of 4. Until reading this book I was under the impression that only RF designers needed to worry about this effect.
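The standard skin-depth formula makes the 4 MHz claim plausible. This sketch is mine, not the book's worked example; it uses the textbook expression for skin depth in a good conductor with copper's resistivity:

```python
import math

RHO_CU = 1.68e-8             # resistivity of copper, ohm-m
MU0 = 4.0 * math.pi * 1e-7   # permeability of free space, H/m

def skin_depth(f_hz: float) -> float:
    """Depth (m) at which current density falls to 1/e of its surface value."""
    return math.sqrt(RHO_CU / (math.pi * f_hz * MU0))

delta = skin_depth(4e6)
print(f"skin depth at 4 MHz: {delta * 1e6:.1f} um")  # roughly 33 um
```

At 4 MHz the skin depth works out to about 33 microns, which is comparable to the roughly 35 micron thickness of 1 oz PCB copper. So at that frequency the current already fails to fill the trace's cross-section, and resistance starts climbing with frequency.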

Read this book. Pass it along to your PCB designers. Then, read it again.

How Computers Do Math, Clive "Max" Maxfield and Alvin Brown

  Click picture to order from Amazon.com

Clive "Max" Maxfield and Alvin Brown have written a wonderful book called "How Computers Do Math" about the essential workings of computers. All of Max's writings are entertaining and offbeat (e.g., "Bebop to the Boolean Boogie").  
The book is aimed at people starting out in computers; we embedded experts know this stuff cold. But an interested 15 year old could get truly in-depth insight into the mysteries of computing from this volume.
 
It's a very readable book laid out with easy-on-the-eyes formatting and a plethora of clear illustrations. The illustration of a LIFO stack is a model of clarity. Chapters start with relevant and often amusing quotes; one of my favorites is Lewis Carroll's "The four branches of arithmetic: ambition, distraction, uglification, and derision."
 
Quickly page through the book and you'll be puzzled by its organization. The first 55 pages (out of 450) comprise its ostensible meat. The rest are labs for each chapter, a series of problems the authors pose to illustrate important concepts. They nudge you through the solutions - there are no proofs left to the confused student.
 
The labs are very well-written accessible activities in which the authors take the reader along hand-in-hand. They're a bit insidious: work through them and the reader will become a reasonably competent assembly-language programmer, without realizing he's learning one of the more difficult aspects of programming. There's a perverse genius in covertly slipping assembly language into one's head without pain.
 
The authors' sure hands guide one along each lab, with descriptions and demonstrations till the code that's required is almost anticlimactic: "of *course* it must be like this!"
 
But how is one to do a lab? You need a computer, right? Well, sure, but the authors provide a DIY Calculator on CD, an interactive and sophisticated bit of code that runs on a PC. It sports the usual display and math functions, plus its own low-level programming language. And, it's extensible. The companion website (http://www.diycalculator.com/aboutdiy.shtml ) contains plenty of downloadable extension code, plus the calculator itself. Like open source advocates they hope the community will contribute to the set of routines.
 
The web site also has a fabulous background to the field of computing (http://www.diycalculator.com/cool.shtml#PhyVer ) that, if you're a history buff like me, will suck you in and surely doom the schedule of whatever product you're working on now.
 
Where too many computer books have a dreary chapter about number systems, "How Computers Do Math" covers the subject in an entertaining and very complete fashion. From basic binary math they go on to show how one constructs an adder out of gates. Signed, unsigned, multiplication, rounding (9 different approaches!), BCD - it's all there, and it's all extremely comprehensible.
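The gates-to-adder progression can be sketched in a few lines of code. This toy 4-bit ripple-carry adder is my own illustration in the spirit of that chapter, not code from the book: every operation below bottoms out in AND, OR, and XOR gates.

```python
# Gate primitives - the only "hardware" the adder is allowed to use.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, cin):
    """One bit position: sum and carry-out from two bits plus carry-in."""
    s = XOR(XOR(a, b), cin)
    cout = OR(AND(a, b), AND(cin, XOR(a, b)))
    return s, cout

def add4(x, y):
    """Add two 4-bit values by rippling the carry through four full adders."""
    carry, total = 0, 0
    for bit in range(4):
        a = (x >> bit) & 1
        b = (y >> bit) & 1
        s, carry = full_adder(a, b, carry)
        total |= s << bit
    return total, carry  # 4-bit sum, plus the carry-out

print(add4(9, 5))   # 9 + 5 = 14 -> (14, 0)
print(add4(12, 7))  # 12 + 7 = 19 -> (3, 1): the sum wraps, carry-out set
```

The second example shows overflow in miniature: 19 doesn't fit in 4 bits, so the result wraps to 3 with the carry-out flag set, exactly the behavior the book's labs explore.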
 
The book is published by John Wiley & Sons, Hoboken, NJ, copyright 2005, and sells for $26 on Amazon.
 

Introduction to the Personal Software Process, Watts Humphrey

  Click picture to order from Amazon.com.

The Software Engineering Institute (www.sei.cmu.edu) wages a war on poor software practices via their seminars, conferences, on-line materials, and their Capability Maturity Model (CMM). 

 The CMM, though, is a bitter pill to swallow. Without total commitment from the top of the organization on down it is doomed to failure, as the practices it entails are far from easy. Going the CMM route is surely as difficult and costly as getting ISO9000 certified.

 Watts Humphrey, one of the architects of the CMM, realized that too many organizations will never be able to climb the rungs of the CMM ladder, yet are crying for ways to improve their software processes. His seminal work A Discipline for Software Engineering (1995 Addison-Wesley NY NY) outlined a process he calls the Personal Software Process (PSP) that lets us as individuals take charge of things, and improve the way we generate code, on our own, with no commitment from management.

 Now he’s followed that book with Introduction to the Personal Software Process (1997, Addison Wesley Longman, NY NY, ISBN 0-201-54809-7). Where the original book was long-winded and filled with heady statistics, Introduction is practical, down-to-earth, and filled with action plans. Introduction is the book to get if you want a step-by-step way to improve your estimation and coding skills. Humphrey claims that most engineers can achieve a 70% improvement - or better - from a “one semester” exposure to the PSP.

 I presume most people reading this have left “semesters” long behind in a happily-forgotten past! However, as professionals we can never stop learning new things, even if management is unsupportive of our efforts. Humphrey’s original book feels, smells, and reads like a conventional college textbook; this successor is more of an “Idiot’s Guide” to the PSP, and is much more accessible.

 However, nothing important ever comes easily. In my experience it takes the average engineer who pursues the PSP on his or her own about 6 months of steady work, a couple of evenings a week, to master the concepts. Though this could be shortened considerably by management that makes a few hours during the workweek available, it’s rare to find such enlightened bosses.

 If your company won’t give you the time to do the PSP, go after it yourself, at night. Shut down the TV for a few hours here and there; the benefits are too great to ignore. Use Humphrey’s new book, Introduction, as it’s so much more tractable than the first.

But this book and process is not a cakewalk. If you're not willing to put some serious hours into it, don't buy the book.

On to more book reviews. 

Back to home page.

The Ganssle Group 
PO Box 38346, Baltimore, MD 21231 
Tel: 410-504-6660, Fax: 647-439-1454
Email info@ganssle.com 
© 2008 The Ganssle Group