A Look Back

A look back at our field. Originally in Embedded Systems Programming, December, 1999.

In
the blink of an eye our children grow into adults, middle age assaults us while
we’re still wrestling with becoming more or less responsible citizens, and the
technology of yesterday ages, becomes obsolete, and then forgotten. Once in a
while it’s important to step back, take a breath, and remember where we’ve
been. In
this, my last column for the millennium, instead of projecting ahead I’d like
to take a look back at the early days of the industry. Since the embedded
industry lacks a coherent record of its history the best I can offer is a
personal history, a very abridged chronicle of my early years building small
real time systems. Graybeards out there will no doubt have different stories and
recollections, but they all form the fabric of our experience. In
1971 Intel shocked the electronics world with the announcement that they had
produced a computer on a chip. A confirmed 18-year-old computer geek at the
time, I remember the press hoopla well. Most engineers viewed the announcement
as marketing hyperbole. Everyone knew that computers cost tens of thousands of
dollars. A computer on a chip? Impossible. Time
proved that such an advance was indeed possible, though that 4-bit chip, the
4004, required an entire circuit board of support components. Soon thereafter
they followed this part with the 8008, the first 8-bitter. Engineers realized
that a byte-wide machine might just be able to do something useful. After all,
DEC’s PDP-8, a well-accepted “serious” machine, used a word just four bits
wider. The
8008 needed three separate voltages: +5, -9, and +12, plus a two-phase clock
input. This didn’t leave many of the part’s 18 pins for the address and data
bus! Intel decided to multiplex data and address on the same pins, an approach
later used on their 8085 and 8088 as well. It’s only recently that high pin
count surface mount devices from Intel have come with separate busses. The
8008 used PMOS technology, which was later supplanted by NMOS and now CMOS. In
the early 70s CMOS was an odd little technology used solely by RCA in their
CD4000 series of logic gates. It had astonishingly low current needs, but
propagation delays were measured in fortnights rather than nanoseconds. At the
time we all knew that it was a technological dead end. Time proved us wrong, as
increasing transistor counts meant ICs dissipated self-destructing amounts of
power. CMOS’s low current requirements saved the day; in more modern guises
offering very high speeds it became the silicon de rigueur. In
1972 no one dreamed that large microprocessor programs would be important. The
8008 had only 14 address lines, limiting its address space to a measly 16k. It
would be tough to write a device driver in 16k now, but back then we were
thrilled to have so much memory. In fact, memory was so expensive that none of
the embedded 8008 systems I worked on ever used more than 12k. Typical static
RAMs were 256 bits to 1k bits per part; dynamic devices weighed in with all of
4k. That’s about a millionth of the size of the biggest memories available
today. Programs
all lived in EPROMs, another Intel invention. Their 1702 stored a mere 256
bytes. Sixteen of these 24-pin parts, filling an entire circuit board, gave us 4k of
program store. There’s a clear symmetry between the 8008’s 16k address space
and limited memory sizes of the day. The 1702’s read cycle was 1.3
microseconds, about 2.5 orders of magnitude slower than those of today. The
8008 had a 7-word-deep stack built into the silicon. Issue more than 7 calls or
pushes without corresponding returns and pops, and the code crashed horribly.
Finding code that pushed just once too many times was a nightmare we fought
constantly.
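For a sense of why that hurt so much, here is a minimal C sketch, entirely my own and nothing like anything we ran then, of a fixed seven-entry hardware return stack. Whether the real part wrapped around on overflow is beside the point being illustrated: the silicon neither warned nor recovered, and the oldest return address simply vanished.

    #include <stdio.h>
    #include <stdint.h>

    #define STACK_DEPTH 7   /* the 8008 kept only 7 return addresses on chip */

    /* Toy model of a fixed-depth hardware call stack. This version wraps
       silently on overflow (an assumption for illustration); either way,
       the oldest entry is lost and nothing tells you. */
    static uint16_t stack[STACK_DEPTH];
    static int sp = 0;

    static void hw_call(uint16_t return_addr) {
        stack[sp] = return_addr;
        sp = (sp + 1) % STACK_DEPTH;
    }

    static uint16_t hw_return(void) {
        sp = (sp + STACK_DEPTH - 1) % STACK_DEPTH;
        return stack[sp];
    }

    int main(void) {
        for (uint16_t i = 1; i <= 8; i++)   /* eight nested calls: one too many */
            hw_call(i * 0x100);
        for (int i = 0; i < 8; i++)
            printf("return #%d -> %04X\n", i + 1, (unsigned)hw_return());
        /* The first address pushed (0x0100) never comes back; the outermost
           routine "returns" to whatever replaced it, and the code crashes. */
        return 0;
    }

One call too deep and the program wanders off to a bogus address, usually long after the call that did the damage, which is exactly why those bugs were so hard to find.

Through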
the luck of being in the right place at the right time I managed to land an
engineering job while barely in college. No one had a clue how to program these
beasts; I knew as little as anyone but was cheap and available. I had been
working as a technician for a company that built grain analysis equipment. At
the time our product used an analog process, managed by hundreds of small scale
logic chips, to beam infrared light at a sample of wheat and compute percent
protein by measuring reflected wavelengths of light. An enlightened management
team immediately saw the benefit of replacing much of the random logic and even
analog components with an 8008, and I was drafted onto the development team. Our
development environment was an Intellec 8, a computer Intel built around the
8008. It had a modular bus with 18 slots. Given enough money you could populate
the computer with a whopping 16k of RAM. We built an interface to connect the
Intellec’s bus to the backplane in the system we were designing, building what
in effect was a crude bus-level emulator. Booting
the Intellec 8 was one of those rare “pleasures” of the era. Its front panel
was covered with switches. We’d key in a boot loader in binary, instruction by
instruction, and then hit the EXECUTE button. If your fingers set all the
switches perfectly the teletype would read in a paper tape loader program. A
single bit error required reentering all of the hundreds of switch settings. The
first ten or a hundred times I thought this was very cool: it was like operating
a piece of futuristic machinery, flipping switches in the way I imagined the
astronauts did. Before long it was just a dreary process repeated every time my
programs crashed, which is to say much too often. A
later upgrade put the boot loader in ROM. Then all one had to do was enter the
binary for a JMP 0000 to start the code. I still remember those codes: 01000100
00000000 00000000.
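Decoding those bits: the first byte is the JMP opcode and the two zero bytes are the 14-bit target address. Here is a throwaway C fragment, purely my own illustration (the low-byte-first operand order is from memory), that rebuilds the same three bytes:

    #include <stdio.h>
    #include <stdint.h>

    /* Build the three bytes of an 8008-style JMP to a 14-bit address:
       opcode 01000100 (0x44), then the low address byte, then the high.
       The operand byte order here is a recollection, not gospel. */
    static void emit_jmp(uint16_t target, uint8_t out[3]) {
        out[0] = 0x44;
        out[1] = target & 0xFF;
        out[2] = (target >> 8) & 0x3F;   /* only 14 address bits exist */
    }

    int main(void) {
        uint8_t bytes[3];
        emit_jmp(0x0000, bytes);         /* the boot jump: JMP 0000 */
        for (int i = 0; i < 3; i++) {
            for (int b = 7; b >= 0; b--)
                putchar((bytes[i] >> b & 1) ? '1' : '0');
            putchar(' ');
        }
        putchar('\n');                   /* prints 01000100 00000000 00000000 */
        return 0;
    }

Our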
only I/O device, other than the lights and switches on the Intellec’s front
panel, was a not-so-trusty ASR-33 teletype. There were no CRT monitors in those
days. Operating at a blinding 10 characters per second, this mechanical beast
was no doubt the cause of many ulcers and crimes of frustration. 10 characters
per second means 8 seconds to print a single line. The
ASR-33 included a paper tape punch and reader. For those of you whose careers
bypassed those times, the paper tape everyone used was an inch-wide spool of
oiled paper, hundreds of feet long, that stored data as a series of holes
punched across the width of the tape in ASCII. Storage density was about 100
characters per inch of tape. With time we learned to read the ASCII codes by
looking at the pattern of holes. The ASR-33 sensed the holes with moving metal
fingers.
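If you have never handled one of those tapes, the following toy C routine, my own and nothing more than an illustration (it ignores the narrow sprocket-feed hole, and the bit order across the tape is from memory), gives a feel for the patterns we learned to read by eye:

    #include <stdio.h>

    /* Render each byte of a string as one row of paper tape:
       'O' for a punched hole (bit set), '.' for no hole, MSB first. */
    static void punch(const char *s) {
        for (; *s; s++) {
            for (int b = 7; b >= 0; b--)
                putchar((*s >> b & 1) ? 'O' : '.');
            printf("   %c\n", *s);
        }
    }

    int main(void) {
        punch("JMP");    /* three rows of tape, one ASCII character per row */
        return 0;
    }

The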
floppy disk didn’t appear till much later; small hard disks didn’t exist,
even magnetic tape was confined to mainframes. Paper tape was our only external
storage medium. Needless
to say, high level languages were just not feasible. We did have one brief
flirtation with PL/M, cross compiling on a mainframe and downloading the
resulting tape to the Intellec. The compiler was notoriously unreliable so we
eventually switched to assembly language and a development environment that
lived entirely on the 8008-based Intellec. Clearly,
given the nature of the ASR-33, the editor could not offer a full screen view of
the source. Instead it accepted our code a line at a time, storing the program
in RAM, and let us edit individual lines rather than whole screens. At 10 cps,
displaying a module just one page long took several minutes. When the code
looked right we’d tell the editor to print the module, turning on the tape
punch to place the code in “permanent memory”. But
how did the editor itself wind up in the Intellec’s memory? Why, via paper
tape, of course. All of the tools came as long streams of tape we loaded through
the teletype’s pathetic 10 cps reader. The editor, being rather a small
program, loaded quickly in under half an hour. Next
step: load the assembler’s binary tape (about an hour). This was rather a
sophisticated tool that even accepted macro definitions… which meant it had to
make three passes over the source code. With only 16k of memory (shared by both
the assembler itself and the relocatable code it generated) there was no room to
store the source code. Instead we fed the source tape through three times, once
per assembler pass. The
ASR-33 was a beast of astonishing mechanical complexity. A mind-boggling array
of mechanical levers and arms moved in mysterious ways to read tapes and produce
printed output. None of us managed to decipher its operation, but we did find
that when problems arose a few magic spots, properly tweaked with a hammer,
often brought the unit back to life. Needless to say, occasionally the teletype
misread a character. When the assembler saw any change in the source on each of
the three passes it shut down with a “phase error” message, causing us to
restart the time-consuming process. Syntax
errors sent the developers back to the editor (load editor tape, load source
tape, fix the code, punch an output file, reload the assembler, etc.). Eventually
we were rewarded with a successful assemble, followed by the paper tape punch
spitting out a short relocatable image of the source. Repeating
this for each module we’d eventually have a collection of binary tapes. Next,
load the linker, and feed each relocatable through twice (the linker needing two
passes to resolve forward references). Undefined and multiply-defined external
references sent us back through the editing cycle, but eventually these
syntactical issues gave way to a final linker output: the absolute binary image
(on tape) of our code. Only now could we start debugging.
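The forward-reference problem those two passes solved is the same one linkers face today: a module can call a routine whose address isn't known until a later module has been read. Here is a compact C sketch of the idea, my own illustration and nothing like Intel's actual tool: pass one records where every label is defined, pass two patches the references and flags exactly the undefined and multiply-defined externals that sent us back to the editor.

    #include <stdio.h>
    #include <string.h>

    #define MAXSYM 32

    /* One entry per external label: its name, its address, and how
       many modules claimed to define it. (No bounds checks; it's a sketch.) */
    struct sym { char name[16]; int addr; int defs; };
    static struct sym tab[MAXSYM];
    static int nsym = 0;

    static struct sym *lookup(const char *name) {
        for (int i = 0; i < nsym; i++)
            if (strcmp(tab[i].name, name) == 0) return &tab[i];
        strcpy(tab[nsym].name, name);
        tab[nsym].addr = 0;
        tab[nsym].defs = 0;
        return &tab[nsym++];
    }

    /* A "module" here is just the labels it defines and the labels it uses. */
    struct module { const char *defs[4]; int addrs[4]; const char *refs[4]; };

    int main(void) {
        struct module mods[] = {
            { { "start" },        { 0x0000 },         { "fadd", "print" } },
            { { "fadd", "fsub" }, { 0x0400, 0x0480 }, { 0 } },
            { { "fadd" },         { 0x0800 },         { "missing" } },
        };
        int nmods = 3;

        /* Pass 1: read every relocatable, noting where each label is defined. */
        for (int m = 0; m < nmods; m++)
            for (int i = 0; i < 4 && mods[m].defs[i]; i++) {
                struct sym *s = lookup(mods[m].defs[i]);
                if (s->defs++ == 0) s->addr = mods[m].addrs[i];
                else printf("multiply defined: %s\n", s->name);
            }

        /* Pass 2: read them all again, patching each reference. */
        for (int m = 0; m < nmods; m++)
            for (int i = 0; i < 4 && mods[m].refs[i]; i++) {
                struct sym *s = lookup(mods[m].refs[i]);
                if (s->defs == 0) printf("undefined external: %s\n", s->name);
                else              printf("%s resolves to %04X\n", s->name, s->addr);
            }
        return 0;
    }

Compared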
to today’s infinitely fast edit-compile-link cycle our development process
offered not much more than excruciating tedium and long coffee breaks. The
process seems absurd by today’s standards, yet at the time, perhaps because of
everyone’s naïve youth, the fact that we could do any sort of development seemed amazing. Our
first product used every byte of the 4k EPROM space the designers included. That
4k of binary represented perhaps 30k of ASCII source, though no one measured
such things then. To reassemble and relink the entire program consumed three
days. Three days. Needless to say, one reassembled only rarely. The
Intellec had a simple debugger (much like a scaled-down version of the PC’s
old DEBUG utility, though that too is mostly ancient history now) that let us
set two software breakpoints. Debugging was surprisingly interactive, though a
runaway program often trashed both the code and the debugger.
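Those software breakpoints worked on the same principle debuggers use now: save the instruction byte at the break address, overwrite it with a one-byte instruction that vectors into the debugger, and put the original byte back afterwards. A rough C sketch of the bookkeeping follows; the array standing in for memory, the trap opcode, and the routine names are all mine, not the Intellec monitor's.

    #include <stdio.h>
    #include <stdint.h>

    #define TRAP_OPCODE 0x05u   /* made-up stand-in for a "call the debugger" byte */

    static uint8_t memory[256];          /* pretend target memory */

    struct breakpoint { uint16_t addr; uint8_t saved; int armed; };
    static struct breakpoint bp[2];      /* the debugger allowed just two */

    static void bp_set(int n, uint16_t addr) {
        bp[n].addr  = addr;
        bp[n].saved = memory[addr];      /* remember the real instruction byte */
        memory[addr] = TRAP_OPCODE;      /* plant the trap in its place */
        bp[n].armed = 1;
    }

    static void bp_clear(int n) {
        if (bp[n].armed) {
            memory[bp[n].addr] = bp[n].saved;   /* restore the original byte */
            bp[n].armed = 0;
        }
    }

    int main(void) {
        memory[0x40] = 0xC6;             /* some instruction byte at address 0x40 */
        bp_set(0, 0x40);
        printf("armed:   mem[40] = %02X\n", (unsigned)memory[0x40]);
        bp_clear(0);
        printf("cleared: mem[40] = %02X\n", (unsigned)memory[0x40]);
        return 0;
    }

That arrangement also explains why a runaway program was so destructive: scribble over a planted trap, or over the debugger itself, and the breakpoints quietly evaporate.

Since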
the debugger had no mini-assembler, we quickly learned all of the 8008’s
machine codes. Very simple bugs might get fixed with a quick hex instruction
change. When the fix wouldn’t fit on top of an instruction, we’d enter a
jump to a “patch area” (unused memory) and type in new code in hex. With
each change we’d carefully annotate the source listings with our
modifications. Later we’d edit these back into the source. Forget to record a
mod, and you’d wind up chasing the same bug later. Our casual debugging
techniques led to too many such repetitions; most of us old-timers are now
extremely methodical with our record-keeping, having learned via oh-so-much pain
the cost of forgetfulness. After a day of debugging, we’d have lots of changes. So many changes. We were all EEs, without the faintest clue about software engineering. Write the code, and then go to heroic lengths to make it work! Programs were small by today’s standards, making it possible – though in retrospect inadvisable – to engage in such casual strategies. So we’d punch a tape of the current memory image and go home, reloading it the next day to pick up from the same spot. The three-day cost of creating and assembling clean source tapes prohibited frequent source updates. Despite
the crude capabilities of the tools and the processor itself, it’s amazing how
sophisticated early microprocessor-based instruments were. Our very first
product ran a quite complex mathematical analysis on large data sets, including
least squares fits to big polynomials. We needed floating point support, yet
assembly language has no built-in floating point libraries… or any libraries, for that matter.
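To give a feel for the math those instruments were grinding through in software floating point: a least squares polynomial fit boils down to building the normal equations from sums of powers of x and solving a small linear system. The C sketch below is modern, double precision, and purely illustrative; it is not the product's algorithm, just the textbook version of the same idea.

    #include <stdio.h>
    #include <math.h>

    #define DEG 2                    /* fit y = c0 + c1*x + c2*x^2 */
    #define N   (DEG + 1)

    /* Least squares polynomial fit via the normal equations:
       A[j][k] = sum(x^(j+k)),  b[j] = sum(y * x^j),  then solve A*c = b. */
    static void polyfit(const double *x, const double *y, int npts, double c[N]) {
        double A[N][N] = {{0}}, b[N] = {0};

        for (int i = 0; i < npts; i++)
            for (int j = 0; j < N; j++) {
                b[j] += y[i] * pow(x[i], j);
                for (int k = 0; k < N; k++)
                    A[j][k] += pow(x[i], j + k);
            }

        /* Gaussian elimination without pivoting: fine for a small, tame fit. */
        for (int p = 0; p < N; p++)
            for (int r = p + 1; r < N; r++) {
                double f = A[r][p] / A[p][p];
                for (int k = p; k < N; k++) A[r][k] -= f * A[p][k];
                b[r] -= f * b[p];
            }
        for (int r = N - 1; r >= 0; r--) {          /* back substitution */
            c[r] = b[r];
            for (int k = r + 1; k < N; k++) c[r] -= A[r][k] * c[k];
            c[r] /= A[r][r];
        }
    }

    int main(void) {
        /* Samples lying near y = 1 + 2x + 3x^2, as a quick sanity check. */
        double x[] = { 0, 1, 2, 3, 4 };
        double y[] = { 1.0, 6.1, 16.9, 34.0, 57.2 };
        double c[N];
        polyfit(x, y, 5, c);
        printf("c0=%.3f  c1=%.3f  c2=%.3f\n", c[0], c[1], c[2]);
        return 0;
    }

Intel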
did have a wonderful users group back then. Embedded designers could submit code
that someone disseminated to all subscribing developers. We found a priceless
treasure in this library, a floating point package designed by one Cal Ohne.
I’ve never met the man, but have spent a tremendous amount of time modifying
his library, and even porting it to different processors. Cal’s float code
wound up in many different products. It needed less than a K of memory yet
provided many developers of the day with all basic floating point operations.
Though very complex, as I recall we never found a problem with his package. After
an amazing amount of effort for a lousy 4K program we started shipping. Time
moved on and we later used better processors and algorithms; people left the
company, replaced with others not familiar with the older products. The
original units calibrated themselves using an iterative least squares
regression, which, if it didn’t converge to an answer within 20 minutes,
displayed “HELP” in the seven-segment LEDs. I’ll never forget how, some years
later, a technician came in, ashen-faced, and told me “I’ve been trying to
repair this ancient unit, and after I fiddled with it for a while it started
flashing HELP at me!” Ancient evils and ghosts lurk in old machines… Intel
had promised us that the 1702 EPROMs would retain their data for 10 years, a
time that seemed infinitely far away. Hey, we’d be in our thirties by then!
Years passed, most of us moved on to other jobs. One day I received a panicked
call from the remnants of that company. It
seems that a decade to the month after shipping the earliest of these
instruments some were starting to lose bits! No one remembered how to load the
now fragile paper tapes and reburn the EPROMs; I was enlisted to help. I
learned an important lesson: embedded systems last forever. These units had been
quietly computing, unobtrusive till finally becoming victims of the passing
years. Perhaps PC applications have lifetimes measured in minutes or “Internet
years”, but embedded systems simply never go away. It’s up to us to write
code that can stand years or decades of maintenance.