Radio Days

Radios and noise reduction. Published in ESP, August 2000
“Hi,
I’m from the government and I’m here to help you!” We cringe when
bureaucrats step into our lives with their helping hands that usually take so
much more than they give. Yet in the past few months Big Brother has indeed
delighted a lot of its citizens. In
May the Department of Defense turned off selective availability, the
“feature” in the GPS system that deliberately introduced position errors
into what is otherwise an amazingly accurate system. The military once worried
that giving the world high accuracy GPS increased the threat of accurate bombing
of American cities. With our traditional enemies gone, and with the knowledge
that the DOD can indeed turn off GPS to any particular area at any time,
selective availability was clearly an anachronism. I
for one have always been infuriated that my tax dollars went for both reducing
the accuracy of GPS, and for
cheating the system to recover that lost accuracy. For the Coast Guard has long
offered differential GPS, a system that restores the lost accuracy by
transmitting correction data from sites whose position has been determined with
great care. So
now a little $100 navigation set shows your location to better than 15 meters
– sometimes much better. This week my unit was showing an estimated position
error of just one meter. Perhaps
this new precision will change many of the embedded systems we’re building
now. Car navigation units, for instance, often watch for abrupt course changes
– like turning a corner – to determine that the vehicle is right at a
particular intersection. That may no longer be needed. In
December the FCC also gave some of the country’s citizens a present when they
amended the license structure for ham radio operators. The ruling affected many
aspects of the hobby, but most profoundly eliminated the insane high speed Morse
code requirements. Previously, any ham wishing to transmit voice in most of the
HF bands (under 30 MHz) had to pass a code test at 13 words per minute. The
“Extra” license, ham radio’s peak of achievement, required 20 WPM. No doubt lots of you have struggled with the code. Most folks quickly reach a plateau around 10 WPM. Getting to 13 requires an awful lot of effort. With the stroke of a pen the FCC maxed the speed at 5 WPM for all license grades. 5 is so slow – two seconds per character - that anyone can quickly pass the test just by memorizing the dits and dahs. Ham
radio’s ranks have been thinning for decades, partly due to the difficulty of
passing the code test, and partly due to young people’s fascination with
computers. In the olden days ham radio was the Internet of the age;
technically-oriented people played with radios as computers were unobtainable.
Now computer obsession and cheap worldwide communication supplant most folks’
willingness to struggle making contacts on noisy, unreliable HF frequencies. Though
I’ve had a license for decades, the fascination of working with radios died
long ago. It is pretty cool to make a contact with someone a continent
away, but it’s so much easier to pick up the phone or pop out an email. Being
busy, I’ve little time or desire to contact a more or less random person just
to chat. Too much like a blind date. I’d rather spend time talking with
friends and neighbors. But
when sailing I do find the ham bands useful since it’s about the only way to
talk to pals on other, distant boats, or to get messages to friends and family
ashore. At sea, 1000 miles from land, that radio suddenly becomes awfully
compelling. Today
we’re surrounded by radio transmissions, from the diminishing ranks of ham
signals, to the dozens of FM-stereo stations in every market, to high-powered AM
talk shows, TV stations, Bluetooth embedded systems chatting with each other,
GPS signals beamed from space, and of course the ubiquitous cellular phone.
Wireless is the future. We’re in a dense fog of electromagnetic radiation that
surrounds and pervades us. The
magazines abound with stories of these wireless marvels, yet I suspect that the
majority of embedded developers have little to do with radio-based devices.
That’s a shame, since the technology underlying radio has much to offer even
non-wireless developers.

The Guts of Radio
Think
of that pea soup fog of electromagnetic waves that surrounds the planet. An
antenna funnels all of it into your radio, an incredible mush of
frequencies and modulation methods that amounts to an ineffable blather of
noise. Where is that one rock ‘n roll station you’re looking for in all of
the mess? How can a $30 portable FM receiver extract ultra high fidelity
renditions of Weird Al’s scatological riffs from the noise? Today’s
radios invariably use a design called the superheterodyne, or superhet for short.
Heterodyning is the process of mixing two AC signals, creating a third at a
lower frequency which is easier to amplify and use. The
radio amplifies the antenna’s output just a bit, and then dumps it into a
mixer, where it’s multiplied by a simple sine wave (produced by what’s called a
“local oscillator”) near the frequency of the station you’d like to hear.
The mixer’s output contains both the sum and the difference of the antenna signal’s
frequency and the local oscillator’s. Turning the dial changes the frequency of
the local oscillator. Suppose
you’d like to hear Weird Al on 102 MHz. You set the dial to 102, but the local
oscillator might actually produce a sine wave about 10 MHz lower – 92 MHz –
resulting in a 10 MHz difference frequency and a 194 MHz sum (which gets
rejected). Thus, the mixer outputs a copy of the station’s signal at 10 MHz,
no matter where you’ve tuned the dial.
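To make the arithmetic concrete, here is a tiny C sketch of the tuning math (the names and the program are mine, purely for illustration, not from any real tuner): pick a station, compute where the local oscillator must sit, and print the two mixer products.

#include <stdio.h>

#define IF_MHZ 10.0   /* fixed intermediate frequency the rest of the radio expects */

/* Turning the dial really just moves the local oscillator so that
 * (station - LO) always lands on the intermediate frequency. */
static double lo_for_station(double station_mhz)
{
    return station_mhz - IF_MHZ;
}

int main(void)
{
    double station = 102.0;              /* Weird Al's station */
    double lo      = lo_for_station(station);
    double diff    = station - lo;       /* the product we keep: always the 10 MHz IF */
    double sum     = station + lo;       /* the product the following filter rejects  */

    printf("Station %.0f MHz: LO %.0f MHz, difference %.0f MHz, sum %.0f MHz\n",
           station, lo, diff, sum);
    return 0;
}

Run it and you get exactly the numbers above: a 92 MHz local oscillator, a 10 MHz difference, and a 194 MHz sum.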
A filter then rejects everything other than that 10 MHz signal. The air traffic
controller on 121 MHz, the trucker’s CB at 27 MHz, and adjacent FM stations
all disappear due to the filter’s selectivity. All that other stuff – all of
that noise – is gone. More amplifiers boost the signal, another mixer drops
the frequency even more, and a detector strips away the RF carrier,
putting nothing more than unadulterated Weird Al out the speakers. But
how does the technology behind radio affect embedded systems? I’ve found it to
be one of the most useful ways to eliminate noise coming from analog sensors,
particularly from sensors operating near DC frequencies. Consider
a scale, the kind that weighs packages or people. A strain gauge typically
registers the load as a change in resistance. Feed a bit of current through the gauge and
you can calculate weight pretty easily. The problem comes in trying to measure
the sample’s weight with a lot of digits of resolution – system noise at some
point overwhelms the signal.
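The arithmetic itself is just Ohm’s law plus a calibration constant. Here is a minimal sketch (the 350 ohm gauge, 1 mA excitation, and sensitivity are made-up illustrative figures, and a real scale would likely use a Wheatstone bridge rather than a single gauge); it also shows just how tiny the signal is:

/* Illustrative constants only - not from any real scale. */
#define EXCITATION_MA   1.0      /* constant current forced through the gauge, in mA  */
#define R_UNLOADED_OHMS 350.0    /* gauge resistance with nothing on the platform     */
#define OHMS_PER_KG     0.0007   /* resistance change per kilogram, from calibration  */

/* Convert the measured gauge voltage (in millivolts) to weight.
 * With these numbers a 100 kg load changes the reading by only about
 * 70 microvolts - which is why the noise matters so much. */
static double weight_kg(double gauge_mv)
{
    double ohms = gauge_mv / EXCITATION_MA;   /* R = V / I; mV divided by mA gives ohms */
    return (ohms - R_UNLOADED_OHMS) / OHMS_PER_KG;
}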
Noise comes from all sorts of sources. That sea of radio signals gets coupled into the
strain gauge’s wiring. Distant lightning strikes do as well. The analog
sensing electronics itself inherently adds noise to the signal. The challenge is
to reduce these erroneous signals to extract as much meaningful data as
possible. In
their anti-noise quest analog designers first round up all of the usual
suspects. Shield all sensor wires. Twist them together so induced noise cancels.
Wrap mu-metal (a magnetic shielding alloy) around critical parts of the
circuit. When
the analog folks can’t quite get the desired signal to noise ratios they ask
the firmware folks to write code that averages… and averages… and
averages… to get quieter responses. Averaging yields diminishing returns (random
noise shrinks only with the square root of the number of samples, so you must
quadruple the sample count just to halve the noise), and eats into system response
time. When the system finally gets too slow we go to much more complex algorithms
like convolutions, but then lose some of the noise minimization.
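The brute-force averaging is nothing more than this in code (a sketch; read_adc stands in for whatever the real driver provides):

#include <stdint.h>

/* Average n readings from the A/D converter.  Uncorrelated noise shrinks
 * only with the square root of n: four samples halve it, and it takes a
 * hundred to cut it by a factor of ten - while the response time grows
 * in direct proportion to n. */
uint32_t read_averaged(uint16_t (*read_adc)(void), uint32_t n)
{
    uint64_t sum = 0;

    for (uint32_t i = 0; i < n; i++)
        sum += read_adc();

    return (uint32_t)(sum / n);
}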
None of these approaches is bad. In some cases, though, we can take a lesson from RF
engineers and quiet the system by just not looking at the noise. Analog noise is
quite broadband; it’s scattered all over the frequency domain. We can hang
giant capacitors on the strain gauge to create a filter that eliminates all
non-DC sources… but at the expense of greatly slowing system response time
(change the weight on the scale and the capacitor will take seconds or longer to
charge). This sort of DC filter is exactly analogous to averaging. It’s
better to excite the gauge with RF, say at 100 MHz, instead of the usual DC
current source. Then build what is essentially a radio front-end to mix perhaps
a 90 MHz sine wave with the signals, amplify it as much as is needed, and then
mix the result down to a low frequency suitable for processing by the system’s
A/D converter.
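Once the signal is within reach of the A/D, the final detection stage can even live in firmware. Here is a minimal sketch of synchronous demodulation, assuming (purely for illustration) that the analog front end has left the gauge signal on a 10 kHz carrier sampled at 80 kHz; the numbers and names are mine, not a prescription:

#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE_HZ  80000.0   /* A/D sample rate - illustrative              */
#define CARRIER_HZ      10000.0   /* carrier the analog front end mixed down to  */

/* Synchronous (lock-in) demodulation: multiply the sampled signal by
 * quadrature references at the carrier frequency and average.  Anything
 * not at the carrier - which is to say nearly all of the noise - averages
 * toward zero; what survives is the carrier's amplitude, which tracks the
 * strain gauge and hence the weight on the scale. */
double demodulate(const double *samples, size_t n)
{
    double i_sum = 0.0, q_sum = 0.0;

    for (size_t k = 0; k < n; k++) {
        double phase = 2.0 * M_PI * CARRIER_HZ * (double)k / SAMPLE_RATE_HZ;
        i_sum += samples[k] * cos(phase);
        q_sum += samples[k] * sin(phase);
    }

    /* Amplitude of the carrier; the factor of two undoes the mixing loss. */
    return 2.0 * sqrt(i_sum * i_sum + q_sum * q_sum) / (double)n;
}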
Off-the-shelf chips implement most of the guts of a radio. These offer high “Q”
factors; Q measures a filter’s narrowness, the center frequency divided by the
bandwidth it passes. A filter that passes just 1000 Hz of the spectrum (meanwhile
rejecting all other frequencies and thus most of the noise) has ten times the Q of
one that passes 10,000 Hz; at the 10 MHz intermediate frequency of our example the
narrow filter’s Q is 10,000. Morse
code hung in as a viable communications method for 150 years because it is
incredibly bandwidth-efficient and noise-immune. You can filter out all of the
RF spectrum except for just a tiny 100 Hz slice, and still copy the information
intact. Virtually all noise disappears. Voice communication, by comparison,
requires at least 3 kHz of bandwidth, thus a much lower-Q filter; since broadband
noise power scales with bandwidth, the wider channel admits roughly 15 dB more
noise. The
scale is a very Morse-like situation since the data – the person peering over
a bulging tummy at the readout – changes slowly. The high-Q filter yields all
of the information with almost none of the noise. A
radio design also greatly simplifies building very high gain amplifiers – your
FM set converts mere microvolts out of the antenna into volts of speaker drive.
It further removes the strain gauge’s large DC offset from its
comparatively small signal excursions. Another
example application is a color-measuring instrument. Many of these operate at
near-DC frequencies since the input sample rests on the sensor for seconds or
minutes. High resolution requires massive noise reduction. The radio design is
simple, cheap (due to the many chip solutions now available), and very quiet. Many
eons ago I worked as a technician on a colorimeter designed in the 60s. The
design was quite fascinating as the (then) high cost of electronics resulted in
a design mixing both mechanical and electronic elements. The beam of light was
interrupted by a rotating bow-tie
shaped piece of plastic painted perfectly white. The effect was to change the DC
sensor output into an AC signal at about 1000 Hz. A narrow filter rejected all but
this one frequency. The
known white color of the bow-tie also acted as a standard, so the instrument
could constantly calibrate itself. The
same company later built devices that measured the protein content of wheat by
sensing infrared light reflected from the sample. Signal levels were buried deep
in the noise. Somehow we all forgot the lessons of the colorimeter – perhaps
we really didn’t understand them at the time – and slaved away on every single
instrument to reduce noise using all of the standard shielding techniques,
coupled with healthy doses of blood, sweat and tears. No other technical problem
at this company ever approached the level of trouble created by millivolts of
noise. Our analog amplifiers were expensive, quirky, and sensitive to just about
everything other than the signal we were trying to measure. Years
of struggling with the noise in these beasts killed my love of analog. Now I
consider non-digital circuits a nuisance we have to tolerate to deal with this
very analog world.

The Murky Future
It
scares me that we could have learned so little from the rotating bow-tie. I’m
worried as well that increasing specialization reduces cross-pollination of
ideas even from within the same industry. The principle behind radio eludes most
embedded folks despite its clear benefits. Knowledge
is growing at staggering rates, with some pundits predicting that the total sum
of information will double every decade before long. At this year’s Embedded Executive Conference, Regis McKenna
played down this issue, arguing that machines will manage the data. I’m
not so sanguine. Skyrocketing knowledge means increasing specialization. So we
see doctors specializing in hand surgery, process engineers whose whole career
is designing new Clorox bottles, and embedded developers expert at C but all too
often with little understanding of what’s going on under the hood of their
creations. A
generalist, or at least an expert in a field who has a broad knowledge of
related areas, can bring some well-known techniques – well known in one field
- to solve problems in many other areas. Perhaps we need an embedded Renaissance
person, who wields C, digital design, and analog op amps with aplomb. The few
who exist are on the endangered species list as each area requires so much
knowledge. Each can consume a lifetime. The
answer to this dilemma is unclear. Perhaps when machines do become truly
intelligent they’ll be able to marshal great hoards of data across application
domains. Our role then seems redundant, which will create newer and much more
difficult challenges for Homo sapiens. I
hope to live to see some of this future, but doubt that we’ll learn to deal with
the implications of our inventions. We historically innovate much faster than we
adapt.