
Power Amplifier Architecture and Negative Feedback

Amplifier Architectures
This grandiose title simply refers to the large-scale structure of the amplifier; that is, the block diagram of the circuit one level below that representing it as a single white block labeled Power Amplifier. Almost all solid-state amplifiers have a three-stage architecture as described below, though they vary in the detail of each stage. Two-stage architectures have occasionally been used, but their distortion performance is not very satisfactory. Four-stage architectures have been used in significant numbers, but they are still much rarer than three-stage designs, and usually involve relatively complex compensation schemes to deal with the fact that there is an extra stage to add phase shift and potentially imperil high-frequency stability.

The Three-Stage Amplifier Architecture

The vast majority of audio amplifiers use the conventional architecture, shown in Figure 2.1, and so it is dealt with first. There are three stages, the first being a transconductance stage (differential voltage in, current out), the second a transimpedance stage (current in, voltage out), and the third a unity-voltage-gain output stage. The second stage clearly has to provide all the voltage gain and I have therefore called it the voltage-amplifier stage or VAS. Other authors have called it the pre-driver stage but I prefer to reserve this term for the first transistors in output triples. This three-stage architecture has several advantages, not least being that it is easy to arrange things so that interaction between stages is negligible. For example, there is very little signal voltage at the input to the second stage, due to its current-input (virtual-earth) nature, and therefore very little on the first stage output; this minimizes Miller phase shift and possible Early effect in the input devices. Similarly, the compensation capacitor reduces the second stage output impedance, so that the nonlinear loading on it due to the input impedance of the third stage generates less distortion than might be expected. The conventional three-stage structure, familiar though it may be, holds several elegant mechanisms such as this. They will be fully revealed in later chapters. Since the amount of linearizing global negative feedback (NFB) available depends upon amplifier open-loop gain, how the stages contribute to this is of great interest. The three-stage architecture always has a unity-gain output stage – unless you really want to make life difficult for yourself – and so the total forward gain is simply the product of the transconductance of the input stage and the transimpedance of the VAS, the latter being determined solely by the Miller capacitor Cdom, except at very low frequencies. Typically, the closed-loop gain will be between 20 and 30 dB. The NFB factor at 20 kHz will be 25–40 dB, increasing at 6 dB/octave with falling frequency until it reaches the dominant pole frequency P1, when it flattens out. What matters for the control of distortion is the amount of NFB available, rather than the open-loop bandwidth, to which it has no direct relationship. In my Electronics World Class-B design, the input stage gm is about 9 mA/V, and Cdom is 100 pF, giving an NFB factor of 31 dB at 20 kHz. In other designs I have used as little as 26 dB (at 20 kHz) with good results.
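As a quick check on these numbers, the NFB factor can be estimated from the input-stage transconductance and Cdom alone, since above the dominant pole the open-loop gain is simply gm/(2πf·Cdom). The sketch below is purely illustrative; the 26 dB closed-loop gain is an assumed value from within the 20–30 dB range quoted above, and any real design will differ in detail.

```python
import math

def nfb_factor_db(gm, c_dom, f, closed_loop_db):
    """NFB factor (dB) at frequency f, for open-loop gain set by
    gm / (2*pi*f*Cdom) above the dominant pole."""
    open_loop = gm / (2 * math.pi * f * c_dom)
    return 20 * math.log10(open_loop) - closed_loop_db

# Figures quoted in the text: gm = 9 mA/V, Cdom = 100 pF, f = 20 kHz;
# closed-loop gain of 26 dB is an assumption for this sketch.
print(round(nfb_factor_db(9e-3, 100e-12, 20e3, 26), 1))  # 31.1 dB
```

This reproduces the 31 dB NFB factor at 20 kHz quoted for the Electronics World Class-B design.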
Compensating a three-stage amplifier is relatively simple; since the pole at the VAS is already dominant, it can be easily increased to lower the HF negative-feedback factor to a safe level. The local NFB working on the VAS through Cdom has an extremely valuable linearizing effect.
The conventional three-stage structure represents at least 99% of the solid-state amplifiers built, and I make no apology for devoting much of this book to its behavior. I am quite sure I have not exhausted its subtleties.

The Two-Stage Amplifier Architecture

In contrast with the three-stage approach, the architecture in Figure 2.2 is a two-stage amplifier, the first stage being once more a transconductance stage, though now without a guaranteed low impedance to accept its output current. The second stage combines VAS and output stage in one block; it is inherent in this scheme that the VAS must double as a phase splitter as well as a generator of raw gain. There are then two quite dissimilar signal paths to the output, and it is not at all clear that trying to break this block down further will assist a linearity analysis. The use of a phase-splitting stage harks back to valve amplifiers, where it was inescapable, as a complementary valve technology has so far eluded us.
Paradoxically, a two-stage amplifier is likely to be more complex in its gain structure than a three-stage. The forward gain depends on the input stage gm, the input stage collector load (because the input stage can no longer be assumed to be feeding a virtual earth) and the gain of the output stage, which will be found to vary in a most unsettling manner with bias and loading. Choosing the compensation is also more complex for a two-stage amplifier, as the VAS/phase splitter has a significant signal voltage on its input and so the usual pole-splitting mechanism that enhances Nyquist stability by increasing the pole frequency associated with the input stage collector will no longer work so well. (I have used the term Nyquist stability, or Nyquist oscillation, throughout this book to denote oscillation due to the accumulation of phase shift in a global NFB loop, as opposed to local parasitics, etc.)
The LF feedback factor is likely to be about 6 dB less with a 4 Ω load, due to lower gain in the output stage. However, this variation is much reduced above the dominant pole frequency, as there is then increasing local NFB acting in the output stage.
Here are two examples of two-stage amplifiers: Linsley-Hood [1] and Olsson [2]. The two-stage amplifier offers little or no reduction in parts cost, is harder to design, and in my experience invariably gives a poor distortion performance.

The Four-Stage Amplifier Architecture
The best-known example of a four-stage architecture is probably that published by Lohstroh and Otala in their influential paper, which was confidently entitled 'An audio power amplifier for ultimate quality requirements' and appeared in December 1973 [3]. A simplified circuit diagram of their design is shown in Figure 2.3. One of their design objectives was the use of a low value of overall feedback, made possible by heavy local feedback in the first three amplifier stages, in the form of emitter degeneration; the closed-loop gain was 32 dB (40 times) and the feedback factor 20 dB, allegedly flat across the audio band. Another objective was the elimination of so-called transient intermodulation distortion, which after many years of argument and futile debate has at last been accepted to mean nothing more than old-fashioned slew-rate limiting. To this end dominant-pole compensation was avoided in this design. The compensation scheme that was used was complex, but basically the lead capacitors C1, C2 and the lead-lag network R19, C3 were intended to cancel out the internal poles of the amplifier. According to Lohstroh and Otala, these lay between 200 kHz and 1 MHz, but after compensation the open-loop frequency response had its first pole at 1 MHz. A final lag compensation network R15, C4 was located outside the feedback loop. An important point is that the third stage was heavily loaded by the two resistors R11, R12. The emitter-follower (EF)-type output stage was biased far into Class-AB by a conventional Vbe-multiplier, drawing 600 mA of quiescent current. As explained later in Chapter 6, this gives poor linearity when you run out of the Class-A region.
You will note that the amplifier uses shunt feedback; this certainly prevents any possibility of common-mode distortion in the input stage, as there is no common-mode voltage, but it does have the frightening drawback of going berserk if the source equipment is disconnected, as there is then a greatly increased feedback factor, and high-frequency instability is pretty much inevitable. Input common-mode nonlinearity is dealt with in Chapter 4, where it is shown that in normal amplifier designs it is of negligible proportions, and certainly not a good reason to adopt overall shunt feedback.
Many years ago I was asked to put a version of this amplifier circuit into production for one of the major hi-fi companies of the time. It was not a very happy experience. High-frequency stability was very doubtful and the distortion performance was distinctly unimpressive, being in line with that quoted in the original paper as 0.09% at 50 W, 1 kHz [3]. After a few weeks of struggle the four-stage architecture was abandoned and a more conventional (and much more tractable) three-stage architecture was adopted instead.
Another version of the four-stage architecture is shown in Figure 2.4; it is a simplified version of a circuit used for many years by another of the major hi-fi companies. There are two differential stages, the second one driving a push-pull VAS Q8, Q9. Once again the differential stages have been given a large amount of local negative feedback in the form of emitter degeneration. Compensation is by the lead-lag network R14, C1 between the two input stage collectors and the two lead-lag networks R15, C2 and R16, C3 that shunt the collectors of Q5, Q7 in the second differential stage. Unlike the Lohstroh and Otala design, series overall feedback was used, supplemented with an op-amp DC servo to control the DC offset at the output.
Having had some experience with this design (no, it's not one of mine) I have to report that while in general the amplifier worked soundly and reliably, it was unduly fussy about transistor types and the distortion performance was not of the best.
The question now obtrudes itself: what is gained by using the greater complexity of a four-stage architecture? So far as I can see at the moment, little or nothing. The three-stage architecture appears to provide as much open-loop gain as can be safely used with a conventional output stage; if more is required then the Miller compensation capacitor can be reduced, which will also improve the maximum slew rates. A four-stage architecture does, however, present some interesting possibilities for using nested Miller compensation, a concept which has been extensively used in op-amps.

Power Amplifier Classes
For a long time the only amplifier classes relevant to high-quality audio were Class-A and Class-AB. This is because valves were the only active devices, and Class-B valve amplifiers generated so much distortion that they were barely acceptable even for public address purposes. All amplifiers with pretensions to high fidelity operated in push-pull Class-A.
Solid-state gives much more freedom of design; all of the amplifier classes below have been commercially exploited. This book deals in detail with Classes A, AB, B, D and G, and this certainly covers the vast majority of solid-state amplifiers. For the other classes plentiful references are given so that the intrigued can pursue matters further. In particular, my book Self On Audio [4] contains a thorough treatment of all known audio amplifier classes, and indeed suggests some new ones.
Class-A
In a Class-A amplifier current flows continuously in all the output devices, which enables the nonlinearities of turning them on and off to be avoided. They come in two rather different kinds, although this is rarely explicitly stated, which work in very different ways. The first kind is simply a Class-B stage (i.e. two emitter-followers working back to back) with the bias voltage increased so that sufficient current flows for neither device to cut off under normal loading. The great advantage of this approach is that it cannot abruptly run out of output current; if the load impedance becomes lower than specified then the amplifier simply takes brief excursions into Class-AB, hopefully with a modest increase in distortion and no seriously audible distress.
The other kind could be called the controlled-current-source (VCIS) type, which is in essence a single emitter-follower with an active emitter load for adequate current-sinking. If this latter element runs out of current capability it makes the output stage clip much as if it had run out of output voltage. This kind of output stage demands a very clear idea of how low an impedance it will be asked to drive before design begins.
Valve textbooks will be found to contain enigmatic references to classes of operation called AB1 and AB2; in the former grid current did not flow for any part of the cycle, but in the latter it did. This distinction was important because the flow of output-valve grid current in AB2 made the design of the previous stage much more difficult.
AB1 or AB2 has no relevance to semiconductors, for in BJTs base current always flows when a device is conducting, while in power FETs gate current never does, apart from charging and discharging internal capacitances.
Class-AB
This is not really a separate class of its own, but a combination of A and B. If an amplifier is biased into Class-B, and then the bias further increased, it will enter AB. For outputs below a certain level both output devices conduct, and operation is Class-A. At higher levels, one device will be turned completely off as the other provides more current, and the distortion jumps upward at this point as AB action begins. Each device will conduct between 50% and 100% of the time, depending on the degree of excess bias and the output level.
Class-AB is less linear than either A or B, and in my view its only legitimate use is as a fallback mode to allow Class-A amplifiers to continue working reasonably when faced with a low load impedance.
Class-B
Class-B is by far the most popular mode of operation, and probably more than 99% of the amplifiers currently made are of this type. Most of this book is devoted to it. My definition of Class-B is that unique amount of bias voltage which causes the conduction of the two output devices to overlap with the greatest smoothness and so generate the minimum possible amount of crossover distortion.
Class-C
Class-C implies device conduction for significantly less than 50% of the time, and is normally only usable in radio work, where an LC circuit can smooth out the current pulses and filter harmonics. Current-dumping amplifiers can be regarded as combining Class-A (the correcting amplifier) with Class-C (the current-dumping devices); however, it is hard to visualize how an audio amplifier using devices in Class-C only could be built. I regard a Class-B stage with no bias voltage as working in Class-C.
Class-D
These amplifiers continuously switch the output from one rail to the other at a supersonic frequency, controlling the mark/space ratio to give an average representing the instantaneous level of the audio signal; this is alternatively called pulse width modulation (PWM). Great effort and ingenuity has been devoted to this approach, for the efficiency is in theory very high, but the practical difficulties are severe, especially so in a world of tightening EMC legislation, where it is not at all clear that a 200 kHz high-power square wave is a good place to start. Distortion is not inherently low [5], and the amount of global negative feedback that can be applied is severely limited by the pole due to the effective sampling frequency in the forward path. A sharp cut-off low-pass filter is needed between amplifier and speaker, to remove most of the RF; this will require at least four inductors (for stereo) and will cost money, but its worst feature is that it will only give a flat frequency response into one specific load impedance.
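The mark/space relationship can be sketched numerically. This is an illustrative fragment only, assuming a signal normalized to ±1 and ideal switching between rails of ±Vr; a real Class-D modulator involves carrier generation, dead time and feedback, all ignored here.

```python
def pwm_duty(x):
    """Duty cycle for an ideal PWM stage: x = -1 maps to 0 (always on
    the negative rail), x = +1 maps to 1 (always on the positive rail)."""
    return 0.5 * (1.0 + x)

def cycle_average(x, v_rail):
    """Average output over one switching period: the two rail voltages
    weighted by the fraction of time spent on each."""
    d = pwm_duty(x)
    return d * v_rail + (1.0 - d) * (-v_rail)

# The cycle average reconstructs the instantaneous signal level:
print(cycle_average(0.5, 40.0))  # 20.0 with 40 V rails
```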
Chapter 13 in this book is devoted to Class-D. Important references to consult for further information are Goldberg and Sandler [6] and Hancock [7].
Class-E
This is an extremely ingenious way of operating a transistor so that it has either a small voltage across it or a small current through it almost all the time, so that the power dissipation is kept very low [8]. Regrettably this is an RF technique that seems to have no sane application to audio.
Class-F
There is no Class-F, as far as I know. This seems like a gap that needs filling . . .
Class-G
This concept was introduced by Hitachi in 1976 with the aim of reducing amplifier power dissipation. Musical signals have a high peak/mean ratio, spending most of the time at low levels, so internal dissipation is much reduced by running from low-voltage rails for small outputs, switching to the higher rails for larger excursions [9,10].
The basic series Class-G with two rail voltages (i.e. four supply rails, as both voltages are ±) is shown in Figure 2.5. Current is drawn from the lower V1 supply rails whenever possible; should the signal exceed V1, TR6 conducts and D3 turns off, so the output current is now drawn entirely from the higher V2 rails, with power dissipation shared between TR3 and TR6. The inner stage TR3, TR4 is usually operated in Class-B, although AB or A are equally feasible if the output stage bias is suitably increased. The outer devices are effectively in Class-C as they conduct for significantly less than 50% of the time.
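The dissipation saving from rail commutation is easy to demonstrate numerically. The sketch below is an idealized model only: device saturation voltages, diode drops and commutation glitches are ignored, and the rail and load values are illustrative rather than taken from the text.

```python
import math

def avg_dissipation(amp, r_load, rails, n=10000):
    """Average output-device dissipation for a sine of peak 'amp' volts
    into r_load, drawing from the lowest rail in 'rails' that exceeds
    the instantaneous output voltage (ideal series Class-G; a single
    rail value gives plain Class-B). Device voltage drops ignored."""
    total = 0.0
    for k in range(n):
        v = amp * math.sin(2 * math.pi * k / n)
        rail = next(r for r in sorted(rails) if r >= abs(v))
        i = abs(v) / r_load
        total += (rail - abs(v)) * i   # volts across devices * load current
    return total / n

# Illustrative values: 50 V outer rails, 20 V inner rails, 35 V peak into 8 ohms
class_b = avg_dissipation(35.0, 8.0, [50.0])
class_g = avg_dissipation(35.0, 8.0, [20.0, 50.0])
print(class_g < class_b)  # True: Class-G dissipates less for the same output
```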
In principle movements of the collector voltage on the inner device collectors should not significantly affect the output voltage, but in practice Class-G is often considered to have poorer linearity than Class-B because of glitching due to charge storage in commutation diodes D3, D4. However, if glitches occur they do so at moderate power, well displaced from the crossover region, and so appear relatively infrequently with real signals.
An obvious extension of the Class-G principle is to increase the number of supply voltages. Typically the limit is three. Power dissipation is further reduced and efficiency increased as the average voltage from which the output current is drawn is kept closer to the minimum. The inner devices operate in Class-B/AB as before, and the middle devices are in Class-C. The outer devices are also in Class-C, but conduct for even less of the time.
To the best of my knowledge three-level Class-G amplifiers have only been made in Shunt mode, as described below, probably because in Series mode the cumulative voltage drops become too great and compromise the efficiency gains. The extra complexity is significant, as there are now six supply rails and at least six power devices, all of which must carry the full output current. It seems most unlikely that this further reduction in power consumption could ever be worthwhile for domestic hi-fi.
A closely related type of amplifier is Class-G Shunt [11]. Figure 2.6 shows the principle; at low outputs only Q3, Q4 conduct, delivering power from the low-voltage rails. Above a threshold set by Vbias3 and Vbias4, D1 or D2 conduct and Q6, Q8 turn on, drawing current from the high-voltage rails, with D3, D4 protecting Q3, Q4 against reverse bias. The conduction periods of the Q6, Q8 Class-C devices are variable, but inherently less than 50%. Normally the low-voltage section runs in Class-B to minimize dissipation. Such shunt Class-G arrangements are often called 'commutating amplifiers'.
Some of the more powerful Class-G Shunt PA amplifiers have three sets of supply rails to further reduce the average voltage drop between rail and output. This is very useful in large PA amplifiers.
Chapter 12 in this book is devoted to Class-G.
Class-H
Class-H is once more basically Class-B, but with a method of dynamically boosting the single supply rail (as opposed to switching to another one) in order to increase efficiency [12]. The usual mechanism is a form of bootstrapping. Class-H is occasionally used to describe Class-G as above; this sort of confusion we can do without.
Class-S
Class-S, so named by Dr Sandman [13], uses a Class-A stage with very limited current capability, backed up by a Class-B stage connected so as to make the load appear as a higher resistance that is within the first amplifier's capability. The method used by the Technics SE-A100 amplifier is extremely similar [14]. I hope that this necessarily brief catalog is comprehensive; if anyone knows of other bona fide classes I would be glad to add them to the collection. This classification does not allow a completely consistent nomenclature; for example, Quad-style current-dumping can only be specified as a mixture of Classes A and C, which says nothing about the basic principle of operation, which is error correction.
Variations on Class-B
The solid-state Class-B three-stage amplifier has proved both successful and flexible, so many attempts have been made to improve it further, usually by trying to combine the efficiency of Class-B with the linearity of Class-A. It would be impossible to give a comprehensive list of the changes and improvements attempted, so I give only those that have been either commercially successful or particularly thought-provoking to the amplifier-design community.
Error-Correcting Amplifiers
This refers to error-cancelation strategies rather than the conventional use of negative feedback. This is a complex field, for there are at least three different forms of error correction, of which the best known is error feedforward as exemplified by the groundbreaking Quad 405 [15]. Other versions include error feedback and other even more confusingly named techniques, some at least of which turn out on analysis to be conventional NFB in disguise. For a highly ingenious treatment of the feedforward method see a design by Giovanni Stochino [16]. A most interesting design using the Hawksford correction topology has recently been published by Jan Didden [17].
Non-Switching Amplifiers
Most of the distortion in Class-B is crossover distortion, and results from gain changes in the output stage as the power devices turn on and off. Several researchers have attempted to avoid this by ensuring that each device is clamped to pass a certain minimum current at all times [18]. This approach has certainly been exploited commercially, but few technical details have been published. It is not intuitively obvious (to me, anyway) that stopping the diminishing device current in its tracks will give less crossover distortion (see also Chapter 10).
Current-Drive Amplifiers
Almost all power amplifiers aspire to be voltage sources of zero output impedance. This minimizes frequency-response variations caused by the peaks and dips of the impedance curve, and gives a universal amplifier that can drive any loudspeaker directly.
The opposite approach is an amplifier with a sufficiently high output impedance to act as a constant-current source. This eliminates some problems – such as rising voice-coil resistance with heat dissipation – but introduces others such as control of the cone resonance. Current amplifiers therefore appear to be only of use with active crossovers and velocity feedback from the cone [19].
It is relatively simple to design an amplifier with any desired output impedance (even a negative one), and so any compromise between voltage and current drive is attainable. The snag is that loudspeakers are universally designed to be driven by voltage sources, and higher amplifier impedances demand tailoring to specific speaker types [20].
The Blomley Principle
The goal of preventing output transistors from turning off completely was introduced by Peter Blomley in 1971 [21]; here the positive/negative splitting is done by circuitry ahead of the output stage, which can then be designed so that a minimum idling current can be separately set up in each output device. However, to the best of my knowledge this approach has not yet achieved commercial exploitation.
I have built Blomley amplifiers twice (way back in 1975) and on both occasions I found that there were still unwanted artefacts at the crossover point, and that transferring the crossover function from one part of the circuit to another did not seem to have achieved much. Possibly this was because the discontinuity was narrower than the usual crossover region and was therefore linearized even less effectively by negative feedback that reduces as frequency increases. I did not have the opportunity to investigate very deeply and this is not to be taken as a definitive judgment on the Blomley concept.
Geometric Mean Class-AB
The classical explanations of Class-B operation assume that there is a fairly sharp transfer of control of the output voltage between the two output devices, stemming from an equally abrupt switch in conduction from one to the other. In practical audio amplifier stages this is indeed the case, but it is not an inescapable result of the basic principle. Figure 2.7 shows a conventional output stage, with emitter resistors Re1, Re2 included to increase quiescent-current stability and allow current sensing for overload protection; it is these emitter resistances that to a large extent make classical Class-B what it is.
However, if the emitter resistors are omitted, and the stage biased with two matched diode junctions, then the diode and transistor junctions form a translinear loop [22], around which the junction voltages sum to zero. This links the two output transistor currents Ip, In in the relationship In · Ip = constant, which in op-amp practice is known as Geometric-Mean Class-AB operation. This gives smoother changes in device current at the crossover point, but this does not necessarily mean lower THD. Such techniques are not very practical for discrete power amplifiers; first, in the absence of the very tight thermal coupling between the four junctions that exists in an IC, the quiescent-current stability will be atrocious, with thermal runaway and spontaneous combustion a near certainty. Second, the output device bulk emitter resistance will probably give enough voltage drop to turn the other device off anyway, when current flows. The need for drivers, with their extra junction-drops, also complicates things.
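The translinear constraint can be verified numerically. Assuming ideal exponential junctions with Vbe = Vt·ln(I/Is) and a bias of exactly two matched junction drops, the voltages summing to zero around the loop force In·Ip to a constant whatever output current is demanded. The device parameters below are illustrative only.

```python
import math

VT = 0.025    # thermal voltage, about 25 mV at room temperature
I_S = 1e-14   # junction saturation current (illustrative value)

def vbe(i):
    """Ideal exponential junction: Vbe = Vt * ln(I/Is)."""
    return VT * math.log(i / I_S)

# Bias the loop with two matched diode drops each carrying 10 mA.
v_bias = 2 * vbe(10e-3)

# vbe(In) + vbe(Ip) = v_bias around the loop, hence
# In * Ip = Is^2 * exp(v_bias / VT), a constant:
product = I_S**2 * math.exp(v_bias / VT)

# Sweep the output current, solving Ip - In = Iout with In * Ip = product:
for i_out in (0.0, 0.1, 1.0):
    i_p = (i_out + math.sqrt(i_out**2 + 4 * product)) / 2
    i_n = i_p - i_out
    print(round(i_n * i_p / product, 6))  # 1.0 every time: product is constant
```

Note how the "off" device current i_n shrinks smoothly but never reaches zero, which is exactly the smoother crossover behavior described above.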
A new extension of this technique is to redesign the translinear loop so that 1/In + 1/Ip = constant, this being known as Harmonic-Mean Class-AB operation [23]. It is too early to say whether this technique (assuming it can be made to work outside an IC) will be of use in reducing crossover distortion and thus improving amplifier performance.
Nested Differentiating Feedback Loops
This is a most ingenious but conceptually complex technique for significantly increasing the amount of NFB that can be applied to an amplifier. I wish I could tell you how well it works but I have never found the time to investigate it practically. For the original paper see Cherry [24], but it's tough going mathematically. A more readable account was published in Electronics Today International in 1983, and included a practical design for a 60 W NDFL amplifier [25].

Amplifier Bridging
When two power amplifiers are driven with anti-phase signals and the load connected between their outputs, with no connection to ground, this is called bridging. It is a convenient and inexpensive way to turn a stereo amplifier into a more powerful mono amplifier. It is called bridging because if you draw the four output transistors with the load connected between them, it looks something like the four arms of a Wheatstone bridge (see Figure 2.8). Doubling the voltage across a load of the same resistance naturally quadruples the output power – in theory. In harsh reality the available power will be considerably less, due to the power supply sagging and extra voltage losses in the two output stages. In most cases you will get something like three times the power rather than four, the ratio depending on how seriously the bridge mode was regarded when the initial design was done. It has to be said that in many designs the bridging mode looks like something of an afterthought.
In Figure 2.8 an 8 Ω load has been divided into two 4 Ω halves, to underline the point that the voltage at their center is zero, and so both amplifiers are effectively driving 4 Ω loads to ground, with all that that implies for increased distortion and increased losses in the output stages. A unity-gain inverting stage is required to generate the anti-phase signal; nothing fancy is required and the simple shunt-feedback stage shown does the job nicely. I have used it in several products. The resistors in the inverter circuit need to be kept as low in value as possible to reduce their Johnson noise contribution, but not of course so low that the op-amp distortion is increased by driving them; this is not too hard to arrange as the op-amp will only be working over a small fraction of its voltage output capability, because the power amplifier it is driving will clip a long time before the op-amp does. The capacitor assures stability – it causes a roll-off of 3 dB down at 5 MHz, so it does not in any way imbalance the audio frequency response of the two amplifiers.
You sometimes see the statement that bridging reduces the distortion seen across the load because the push-pull action causes cancelation of the distortion products. In brief, it is not true. Push-pull systems can only cancel even-order distortion products, and in a well-found amplifier these are in short supply. In such an amplifier the input stage and the output stage will both be symmetrical (it is hard to see why anyone would choose them to be anything else) and produce only odd-order harmonics, which will not be canceled. The only asymmetrical stage is the VAS, and the distortion contribution from that is, or at any rate should be, very low. In reality, switching to bridging mode will almost certainly increase distortion, because as noted above, the output stages are now in effect driving 4 Ω loads to ground instead of 8 Ω.
Fractional Bridging
I will now tell you how I came to invent the strange practice of 'fractional bridging'. I was tasked with designing a two-channel amplifier module for a multichannel unit. Five of these modules fitted into the chassis, and if each one was made independently bridgeable, you got a very flexible system that could be configured for anywhere between five and ten channels of amplification. The normal output of each amplifier was 85 W into 8 Ω, and the bridged output was about 270 W as opposed to the theoretical 340 W. And now the problem. The next unit up in the product line had modules that gave 250 W into 8 Ω unbridged, and the marketing department felt that having the small modules giving more power than the large ones was really not on; I'm not saying they were wrong. The problem was therefore to create an amplifier that only doubled its power when bridged. Hmm!
One way might have been to develop a power supply with deliberately poor regulation, but this implies a mains transformer with high-resistance windings that would probably have overheating problems. Another possibility was to make the bridged mode switch in a circuit that clipped the input signal before the power amplifiers clipped. The problem is that building a clipping circuit that does not exhibit poor distortion performance below the actual clipping level is actually surprisingly difficult – think about the nonlinear capacitance of signal diodes. I worked out a way to do it, but it took up an amount of PCB area that simply wasn't available. So the ultimate solution was to let one of the power amplifiers do the clipping, which it does cleanly because of the high level of negative feedback, and the fractional bridging concept was born.
Figure 2.9 shows how it works. An inverter is still used to drive the anti-phase amplifier, but now it is configured with a gain G that is less than unity. This means that the in-phase amplifier will clip when the anti-phase amplifier is still well below maximum output, and the bridged output is therefore restricted. Double output power means an output voltage increased by root-2 or 1.41 times, and so the anti-phase amplifier is driven with a signal attenuated by a factor of 0.41, which I call the bridging fraction, giving a total voltage swing across the load of 1.41 times. It worked very well, the product was a considerable success, and no salesmen were plagued with awkward questions about power output ratings.
There are two possible objections to this cunning plan, the first being that it is obviously inefficient compared with a normal Class-B amplifier. Figure 2.10 shows how the power is dissipated in the pair of amplifiers; this is derived from basic calculations and ignores output stage losses. PdissA is the power dissipated in the in-phase amplifier A, and varies in the usual way for a Class-B amplifier with a maximum at 63% of the maximum voltage output. PdissB is the dissipation in anti-phase amplifier B that receives a smaller drive signal and so never reaches its dissipation maximum; it dissipates more power because it is handling the same current but has more voltage left across the output devices, and this is what makes the overall efficiency low. Ptot is the sum of the two amplifier dissipations. The dotted lines show the output power contribution from each amplifier, and the total output power in the load.
The bridging fraction can of course be set to other values to get other maximum outputs. The lower it is, the lower the overall efficiency of the amplifier pair, reaching the limiting value when the bridging fraction is zero. In this (quite pointless) situation the anti-phase amplifier is simply being used as an expensive alternative to connecting one end of the load to ground, and so it dissipates a lot of heat. Figure 2.11 shows how the maximum efficiency (which always occurs at maximum output) varies with the bridging fraction. When it is unity, we get normal Class-B operation and the maximum efficiency is the familiar figure of 78.6%; when it is zero the overall efficiency is halved to 39.3%, with a linear variation between these two extremes.
The second possible objection is that you might think it is a grievous offence against engineering
ethics to deliberately restrict the output of an amplifier for marketing reasons, and you might be right, but it kept people employed, including me. Nevertheless, given the current concerns about energy, perhaps this sort of thing should not be encouraged. Chapter 9 gives another example of devious engineering, where I describe how an input clipping circuit (the one I thought up in an attempt to solve this problem, in fact) can be used to emulate the performance of a massive low-impedance power supply or a complicated regulated power supply. I have given semi-serious thought to writing a book called How to Cheat with Amplifiers.
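The efficiency relationship described above is easy to reproduce. This sketch assumes the linear variation stated in the text, anchored to the theoretical Class-B maximum of π/4 (about 78.5%; the text rounds it to 78.6%):

```python
import math

CLASS_B_MAX_EFF = math.pi / 4  # theoretical Class-B maximum efficiency

def max_efficiency(bridging_fraction):
    """Maximum overall efficiency of the fractionally bridged pair.

    Varies linearly from the full Class-B figure at a fraction of 1
    down to half that figure at a fraction of 0, as in Figure 2.11.
    """
    return CLASS_B_MAX_EFF * (1.0 + bridging_fraction) / 2.0

print(round(100 * max_efficiency(1.0), 1))   # 78.5 - normal Class-B
print(round(100 * max_efficiency(0.0), 1))   # 39.3 - one end grounded
print(round(100 * max_efficiency(0.41), 1))  # the 0.41 fraction used here
```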

AC- and DC-Coupled Amplifiers
All power amplifiers are either AC-coupled or DC-coupled. The first kind have a single supply rail, with the output biased to be halfway between this rail and ground to give the maximum symmetrical voltage swing; a large DC-blocking capacitor is therefore used in series with the output. The second kind have positive and negative supply rails, and the output is biased to be at 0 V, so no output DC-blocking is required in normal operation.
The Advantages of AC-Coupling
1. The output DC offset is always zero (unless the output capacitor is leaky).
2. It is very simple to prevent turn-on thump by purely electronic means; there is no need for an expensive output relay. The amplifier output must rise up to half the supply voltage at turn-on, but providing this occurs slowly there is no audible transient. Note that in many designs this is not simply a matter of making the input bias voltage rise slowly, as it also takes time for the DC feedback to establish itself, and it tends to do this with a snap action when a threshold is reached. The last AC-coupled power amplifier I designed (which was in 1980, I think) had a simple RC time-constant and diode arrangement that absolutely constrained the VAS collector voltage to rise slowly at turn-on, no matter what the rest of the circuitry was doing – cheap but very effective.
3. No protection against DC faults is required, providing the output capacitor is voltage-rated to withstand the full supply rail. A DC-coupled amplifier requires an expensive and possibly unreliable output relay for dependable speaker protection.
4. The amplifier should be easier to make short-circuit proof, as the output capacitor limits the amount of electric charge that can be transferred each cycle, no matter how low the load impedance. This is speculative; I have no data as to how much it really helps in practice.
5. AC-coupled amplifiers do not in general appear to require output inductors for stability. Large electrolytics have significant equivalent series resistance (ESR) and a little series inductance. For typical amplifier output sizes the ESR will be of the order of 100 mΩ; this resistance is probably the reason why AC-coupled amplifiers rarely had output inductors, as it is often enough resistance to provide isolation from capacitive loading and so gives stability. Capacitor series inductance is very low and probably irrelevant, being quoted by one manufacturer as 'a few tens of nanohenrys'. The output capacitor was often condemned in the past for reducing the low-frequency damping factor (DF), for its ESR alone is usually enough to limit the DF to 80 or so. As explained above, this is not a technical problem because 'damping factor' means virtually nothing.
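The DF figure quoted above follows directly from the definition; a trivial check, assuming a nominal 8 Ω load and negligible amplifier output impedance:

```python
def damping_factor(load_ohms, source_ohms):
    """Damping factor: load impedance divided by total source impedance."""
    return load_ohms / source_ohms

# 100 mOhm of capacitor ESR alone limits DF to 80 into 8 Ohms:
print(damping_factor(8.0, 0.1))  # 80.0
```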
The Advantages of DC-Coupling
1. No large and expensive DC-blocking capacitor is required. On the other hand, the dual supply
will need at least one more equally expensive reservoir capacitor, and a few extra components
such as fuses.
2. In principle there should be no turn-on thump, as the symmetrical supply rails mean the output voltage does not have to move through half the supply voltage to reach its bias point – it can just stay where it is. In practice the various filtering time-constants used to keep the bias voltages free from ripple are likely to make various sections of the amplifier turn on at different times, and the resulting thump can be substantial. This can be dealt with almost for free, when a protection relay is fitted, by delaying the relay pull-in until any transients are over. The delay required is usually less than a second.
3. Audio is a field where almost any technical eccentricity is permissible, so it is remarkable that AC-coupling appears to be the one technique that is widely regarded as unfashionable and unacceptable. DC-coupling avoids any marketing difficulties.
4. Some potential customers will be convinced that DC-coupled amplifiers give better speaker damping due to the absence of the output capacitor impedance. They will be wrong, as explained in Chapter 1, but this misconception has lasted at least 40 years and shows no sign of fading away.
5. Distortion generated by an output capacitor is avoided. This is a serious problem, as it is not confined to low frequencies, as is the case in small-signal circuitry (see page 212). For a 6800 μF output capacitor driving 40 W into an 8 Ω load, there is significant mid-band third harmonic distortion at 0.0025%, as shown in Figure 2.12. This is at least five times more than the amplifier generates in this part of the frequency range. In addition, the THD rise at the LF end is much steeper than in the small-signal case, for reasons that are not yet clear. There are two cures for output capacitor distortion. The straightforward approach uses a huge output capacitor, far larger in value than required for a good low-frequency response. A 100,000 μF/40 V Aerovox from BHC eliminated all distortion, as shown in Figure 2.13. An allegedly 'audiophile' capacitor gives some interesting results; a Cerafine Supercap of only moderate size (4700 μF/63 V) gave the result shown in Figure 2.14, where the mid-band distortion is gone but the LF distortion rise remains. What special audio properties this component is supposed to have are unknown; as far as I know electrolytics are never advertised as 'low mid-band THD', but that seems to be the case here. The volume of the capacitor case is about twice as great as conventional electrolytics of the same value, so it is possible the crucial difference may be a thicker dielectric film than is usual for this voltage rating. Either of these special capacitors costs more than the rest of the amplifier electronics put together. Their physical size is large. A DC-coupled amplifier with protective output relay will be a more economical option.
A little-known complication with output capacitors is that their series reactance increases the power dissipation in the output stage at low frequencies. This is counter-intuitive as it would seem that any impedance added in series must reduce the current drawn and hence the power dissipation. In fact it is the load phase shift that increases the amplifier dissipation.
6. The supply currents can be kept out of the ground system. A single-rail AC amplifier has half-wave Class-B currents flowing in the 0 V rail, and these can have a serious effect on distortion and crosstalk performance.
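The 'far larger in value than required' point in item 5 above is easy to quantify. A sketch of the low-frequency −3 dB point set by the output capacitor into a resistive load, assuming the nominal 8 Ω figure used in the text:

```python
import math

def lf_corner_hz(c_farads, load_ohms):
    """-3 dB frequency of the output-capacitor/load high-pass filter."""
    return 1.0 / (2.0 * math.pi * c_farads * load_ohms)

# 6800 uF into 8 Ohms already puts the roll-off below 3 Hz:
print(round(lf_corner_hz(6800e-6, 8.0), 2))    # ~2.93 Hz
# so a 100,000 uF capacitor (corner ~0.2 Hz) is chosen for its
# distortion behavior, not for frequency response:
print(round(lf_corner_hz(100000e-6, 8.0), 2))
```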

Negative Feedback in Power Amplifiers
It is not the role of this book to step through elementary theory that can be easily found in any number of textbooks. However, correspondence in audio and technical journals shows that considerable confusion exists on negative feedback as applied to power amplifiers; perhaps there is something inherently mysterious in a process that improves almost all performance parameters simply by feeding part of the output back to the input, but inflicts dire instability problems if used to excess. I therefore deal with a few of the less obvious points here; more information is provided in Chapter 8.
The main use of NFB in power amplifiers is the reduction of harmonic distortion, the reduction of output impedance, and the enhancement of supply-rail rejection. There are also analogous improvements in frequency response and gain stability, and reductions in DC drift.
The basic feedback equation is dealt with in a myriad of textbooks, but it is so fundamental to power amplifier design that it is worth a look here. In Figure 2.15, the open-loop amplifier is the big block with open-loop gain A. The negative-feedback network is the block marked β; this could contain anything, but for our purposes it simply scales down its input, multiplying it by β, and is usually in the form of a potential divider. The funny round thing with the cross on is the conventional control theory symbol for a block that adds or subtracts and does nothing else.
Firstly, it is pretty clear that one input to the subtractor is simply Vin, and the other is Vout · β, so subtract these two, multiply by A, and you get the output signal Vout:

   Vout = A · (Vin − β · Vout)

which rearranges to:

   Vout/Vin = A / (1 + A · β)

This is the feedback equation, and it could not be more important. The first thing it shows is that negative feedback stabilizes the gain. In real-life circuitry A is a high but uncertain and variable quantity, while β is firmly fixed by resistor values. Looking at the equation, you can see that the higher A is, the less significant the 1 on the bottom is; the A values cancel out, and so with high A the equation can be regarded as simply:

   Vout/Vin = 1/β

This is demonstrated in Table 2.1, where β is set at 0.04 with the intention of getting a closed-loop gain of 25 times. With a low open-loop gain of 100, the closed-loop gain is only 20, a long way short of 25. But as the open-loop gain increases, the closed-loop gain gets closer to the target. If you look at the bottom two rows, you will see that an increase in open-loop gain of more than a factor of 2 only alters the closed-loop gain by a trivial second decimal place.
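The behavior of Table 2.1 can be reproduced in a couple of lines. The specific open-loop gain values below are illustrative, not copied from the table:

```python
def closed_loop_gain(a_ol, beta=0.04):
    """The feedback equation: Vout/Vin = A / (1 + A*beta)."""
    return a_ol / (1.0 + a_ol * beta)

# beta = 0.04 targets a closed-loop gain of 1/0.04 = 25
for a in (100, 1000, 10000, 20000, 40000):
    print(a, round(closed_loop_gain(a), 2))
# 100 gives only 20.0; by 20000 vs 40000 the change is in the
# second decimal place, just as the text describes.
```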
In simple circuits with low open-loop gain you just apply negative feedback and that is the end of the matter. In a typical power amplifier, which cannot be operated without NFB, if only because it would be saturated by its own DC offset voltages, there are several stages that may accumulate phase shift, and simply closing the loop usually brings on severe Nyquist oscillation at HF. This is a serious matter, as it will not only burn out any tweeters that are unlucky enough to be connected, but can also destroy the output devices by overheating, as they may be unable to turn off fast enough at ultrasonic frequencies.
The standard cure for this instability is compensation. A capacitor is added, usually in Miller-integrator format, to roll off the open-loop gain at 6 dB/octave, so it reaches unity loop-gain before enough phase shift can build up to allow oscillation. This means the NFB factor varies strongly with frequency, an inconvenient fact that many audio commentators seem to forget.
It is crucial to remember that a distortion harmonic, subjected to a frequency-dependent NFB factor as above, will be reduced by the NFB factor corresponding to its own frequency, not that of its fundamental. If you have a choice, generate low-order rather than high-order distortion harmonics, as the NFB deals with them much more effectively.
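A sketch of this frequency dependence, assuming a single dominant-pole open-loop response falling at 6 dB/octave; the LF gain, pole frequency and feedback fraction below are illustrative assumptions, not values from the text:

```python
import math

A0 = 20000.0      # assumed LF open-loop gain (illustrative)
F_POLE = 100.0    # assumed dominant-pole frequency, Hz (illustrative)
BETA = 0.04       # feedback network for a closed-loop gain of 25

def open_loop_gain(f_hz):
    """Single-pole open-loop gain magnitude, -6 dB/octave above F_POLE."""
    return A0 / math.sqrt(1.0 + (f_hz / F_POLE) ** 2)

def nfb_factor(f_hz):
    """NFB factor (1 + loop gain) at a given frequency."""
    return 1.0 + BETA * open_loop_gain(f_hz)

# The 3rd harmonic of a 7 kHz fundamental sits at 21 kHz and is
# reduced by the (much smaller) feedback factor at 21 kHz:
print(round(nfb_factor(7000), 1))   # feedback at the fundamental
print(round(nfb_factor(21000), 1))  # roughly a third of it at 3x frequency
```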
Negative feedback can be applied either locally (i.e. to each stage, or each active device) or globally, in other words right around the whole amplifier. Global NFB is more efficient at distortion reduction than the same amount distributed as local NFB, but places much stricter limits on the amount of phase shift that may be allowed to accumulate in the forward path (more on this later in this chapter).
Above the dominant-pole frequency, the VAS acts as a Miller integrator, and introduces a constant 90° phase lag into the forward path. In other words, the output from the input stage must be in quadrature if the final amplifier output is to be in phase with the input, which to a close approximation it is. This raises the question of how the 90° phase shift is accommodated by the negative-feedback loop; the answer is that the input and feedback signals applied to the input stage are subtracted there, and the small difference between two relatively large signals with a small phase shift between them has a much larger phase shift. This is the signal that drives the VAS input of the amplifier.
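This quadrature behavior can be demonstrated numerically. The sketch treats the forward path as a pure integrator and uses illustrative values (gm = 9 mA/V, Cdom = 100 pF, closed-loop gain 23, as in the model used later in this chapter):

```python
import cmath
import math

K = 9e-3 / 100e-12  # integrator constant gm/Cdom, rad/s
BETA = 1.0 / 23.0   # feedback fraction for a closed-loop gain of 23x

def error_phase_deg(f_hz):
    """Phase (degrees) of the subtractor output relative to Vin.

    Forward path A(jw) = K/(jw), so error = Vin / (1 + A*beta)."""
    w = 2 * math.pi * f_hz
    a = K / (1j * w)
    error = 1.0 / (1.0 + a * BETA)
    return math.degrees(cmath.phase(error))

# Well inside the loop bandwidth the error signal is almost exactly in
# quadrature with Vin - which is just what the integrator needs to
# produce an in-phase output:
print(round(error_phase_deg(1000), 1))  # ~89.9
```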
Solid-state power amplifiers, unlike many valve designs, are almost invariably designed to work at a fixed closed-loop gain. If the circuit is compensated by the usual dominant-pole method, the HF open-loop gain is also fixed, and therefore so is the important negative-feedback factor. This is in contrast to valve amplifiers, where the amount of negative feedback applied was regarded as a variable, and often user-selectable, parameter; it was presumably accepted that varying the negative-feedback factor caused significant changes in input sensitivity. A further complication was serious peaking of the closed-loop frequency response at both LF and HF ends of the spectrum as negative feedback was increased, due to the inevitable bandwidth limitations in a transformer-coupled forward path. Solid-state amplifier designers go cold at the thought of the customer tampering with something as vital as the NFB factor, and such an approach is only acceptable in cases like valve amplification where global NFB plays a minor role.
Some Common Misconceptions about Negative Feedback
All of the comments quoted below have appeared many times in the hi-fi literature. All are wrong.
Negative feedback is a bad thing. Some audio commentators hold that, without qualification, negative feedback is a bad thing. This is of course completely untrue and based on no objective reality. Negative feedback is one of the fundamental concepts of electronics, and to avoid its use altogether is virtually impossible; apart from anything else, a small amount of local NFB exists in every common-emitter transistor because of the internal emitter resistance. I detect here distrust of good fortune; the uneasy feeling that if something apparently works brilliantly then there must be something wrong with it.
A low negative-feedback factor is desirable. Untrue – global NFB makes just about everything better, and the sole effect of too much is HF oscillation, or poor transient behavior on the brink of instability. These effects are painfully obvious on testing and not hard to avoid unless there is something badly wrong with the basic design.
In any case, just what does low mean? One indicator of imperfect knowledge of negative feedback is that the amount enjoyed by an amplifier is almost always badly specified as so many decibels on the very few occasions it is specified at all – despite the fact that most amplifiers have a feedback factor that varies considerably with frequency. A decibel figure quoted alone is meaningless, as it cannot be assumed that this is the figure at 1 kHz or any other standard frequency.
My practice is to quote the NFB factor at 20 kHz, as this can normally be assumed to be above the
dominant pole frequency, and so in the region where open-loop gain is set by only two or three
components. Normally the open-loop gain is falling at a constant 6 dB/octave at this frequency on
its way down to intersect the unity-loop-gain line and so its magnitude allows some judgment as
to Nyquist stability. Open-loop gain at LF depends on many more variables such as transistor beta,
and consequently has wide tolerances and is a much less useful quantity to know. This is dealt with
in more detail in the chapter on voltage-amplifi er stages.
Negative feedback is a powerful technique, and therefore dangerous when misused. This bland truism usually implies an audio Rake's Progress that goes something like this: an amplifier has too much distortion, and so the open-loop gain is increased to augment the NFB factor. This causes HF instability, which has to be cured by increasing the compensation capacitance. This in turn reduces the slew-rate capability, and results in a sluggish, indolent, and generally bad amplifier.
The obvious flaw in this argument is that the amplifier so condemned no longer has a high NFB factor, because the increased compensation capacitor has reduced the open-loop gain at HF; therefore feedback itself can hardly be blamed. The real problem in this situation is probably unduly low standing current in the input stage; this is the other parameter determining slew rate.
NFB may reduce low-order harmonics but increases the energy in the discordant higher harmonics. A less common but recurring complaint is that the application of global NFB is a shady business because it transfers energy from low-order distortion harmonics – considered musically consonant – to higher-order ones that are anything but. This objection contains a grain of truth, but appears to be based on a misunderstanding of one article in an important series by Peter Baxandall [26] in which he showed that if you took an amplifier with only second-harmonic distortion, and then introduced NFB around it, higher-order harmonics were indeed generated as the second harmonic was fed back round the loop. For example, the fundamental and the second harmonic intermodulate to give a component at third-harmonic frequency. Likewise, the second and third intermodulate to give the fifth harmonic. If we accept that high-order harmonics should be numerically weighted to reflect their greater unpleasantness, there could conceivably be a rise rather than a fall in the weighted THD when negative feedback is applied.
All active devices, in Class A or B (including FETs, which are often erroneously thought to be
purely square law), generate small amounts of high-order harmonics. Feedback could and would
generate these from nothing, but in practice they are already there.
The vital point is that if enough NFB is applied, all the harmonics can be reduced to a lower level
than without it. The extra harmonics generated, effectively by the distortion of a distortion, are at an
extremely low level providing a reasonable NFB factor is used. This is a powerful argument against
low feedback factors like 6 dB, which are most likely to increase the weighted THD. For a full
understanding of this topic, a careful reading of the Baxandall series is absolutely indispensable.
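The Baxandall mechanism is easy to reproduce numerically. The sketch below wraps feedback around a memoryless amplifier with a purely square-law (second-harmonic-only) nonlinearity and inspects the output spectrum; the gain, distortion coefficient and feedback fraction are illustrative assumptions, not values from Baxandall's article:

```python
import numpy as np

A, K, BETA = 1000.0, 1e-5, 0.04  # open-loop gain, square-law coeff., feedback

def open_loop(v):
    """Amplifier with pure second-harmonic (square-law) distortion."""
    out = A * v
    return out + K * out * out

def closed_loop(x, iters=60):
    """Solve y = open_loop(x - BETA*y) per sample by damped Newton iteration."""
    y = np.zeros_like(x)
    for _ in range(iters):
        y = y + (open_loop(x - BETA * y) - y) / (1.0 + A * BETA)
    return y

n = 4096
x = 0.5 * np.sin(2 * np.pi * 8 * np.arange(n) / n)  # 8 cycles of the fundamental

def rel_harmonic(sig, k):
    """k-th harmonic level relative to the fundamental."""
    spec = np.abs(np.fft.rfft(sig))
    return spec[8 * k] / spec[8]

ol, cl = open_loop(x), closed_loop(x)
print(rel_harmonic(ol, 2))          # 0.0025: open-loop 2nd-harmonic fraction
print(rel_harmonic(cl, 2))          # vastly lower once feedback is applied
print(rel_harmonic(cl, 3) > 1e-12)  # True: a 3rd harmonic has appeared -
                                    # but at an extremely low level
```

The open-loop output contains only the second harmonic; closing the loop creates a third harmonic 'from nothing', yet every harmonic ends up far below the open-loop distortion level, which is exactly the text's point about adequate feedback factors.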
A low open-loop bandwidth means a sluggish amplifier with a low slew rate. Great confusion exists in some quarters between open-loop bandwidth and slew rate. In truth open-loop bandwidth and slew rate are nothing to do with each other, and may be altered independently. Open-loop bandwidth is determined by compensation Cdom, VAS β, and the resistance at the VAS collector, while slew rate is set by the input stage standing current and Cdom. Cdom affects both, but all the other parameters are independent (see Chapter 3 for more details).
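The independence of the two quantities can be sketched with the standard first-order relationships. The tail current, VAS beta and collector resistance below are illustrative assumptions, not figures from the text:

```python
import math

CDOM = 100e-12  # Miller compensation capacitor

def slew_rate(i_tail_amps, cdom=CDOM):
    """Maximum slew rate: the input-pair tail current charging Cdom."""
    return i_tail_amps / cdom  # V/s

def open_loop_bandwidth(vas_beta, r_collector, cdom=CDOM):
    """Dominant-pole frequency set by Cdom, VAS beta and collector R."""
    return 1.0 / (2 * math.pi * vas_beta * r_collector * cdom)

# Doubling the tail current doubles the slew rate but leaves the
# open-loop bandwidth untouched:
print(slew_rate(1e-3) / 1e6)  # 10 V/us
print(slew_rate(2e-3) / 1e6)  # 20 V/us
print(round(open_loop_bandwidth(100, 20e3), 1))  # unchanged either way
```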
In an amplifier, there is a maximum amount of NFB you can safely apply at 20 kHz; this does not mean that you are restricted to applying the same amount at 1 kHz, or indeed 10 Hz. The obvious thing to do is to allow the NFB to continue increasing at 6 dB/octave – or faster if possible – as frequency falls, so that the amount of NFB applied doubles with each octave as we move down in frequency, and we derive as much benefit as we can. This obviously cannot continue indefinitely, for eventually open-loop gain runs out, being limited by transistor beta and other factors. Hence the NFB factor levels out at a relatively low and ill-defined frequency; this frequency is the open-loop bandwidth, and for an amplifier that can never be used open-loop, has very little importance.
It is difficult to convince people that this frequency is of no relevance whatever to the speed of amplifiers, and that it does not affect the slew rate. Nonetheless, it is so, and any first-year electronics textbook will confirm this. High-gain op-amps with sub-1 Hz bandwidths and blindingly fast slewing are as common as the grass (if somewhat less cheap) and if that does not demonstrate the point beyond doubt then I really do not know what will.
Limited open-loop bandwidth prevents the feedback signal from immediately following the system input, so the utility of this delayed feedback is limited. No linear circuit can introduce a pure time delay; the output must begin to respond at once, even if it takes a long time to complete its response. In the typical amplifier the dominant-pole capacitor introduces a 90° phase shift between input pair and output at all but the lowest audio frequencies, but this is not a true time delay. The phrase delayed feedback is often used to describe this situation, and it is a wretchedly inaccurate term; if you really delay the feedback to a power amplifier (which can only be done by adding a time-constant to the feedback network rather than the forward path) it will quickly turn into the proverbial power oscillator as sure as night follows day.
Amplifier Stability and NFB
In controlling amplifier distortion, there are two main weapons. The first is to make the linearity of the circuitry as good as possible before closing the feedback loop. This is unquestionably important, but it could be argued it can only be taken so far before the complexity of the various amplifier stages involved becomes awkward. The second is to apply as much negative feedback as possible while maintaining amplifier stability. It is well known that an amplifier with a single time-constant is always stable, no matter how high the feedback factor. The linearization of the VAS by local Miller feedback is a good example. However, more complex circuitry, such as the generic three-stage power amplifier, has more than one time-constant, and these extra poles will cause poor transient response or instability if a high feedback factor is maintained up to the higher frequencies where they start to take effect. It is therefore clear that if these higher poles can be eliminated or moved upward in frequency, more feedback can be applied and distortion will be less for the same stability margins. Before they can be altered – if indeed this is practical at all – they must be found and their impact assessed.
The dominant-pole frequency of an amplifier is, in principle, easy to calculate; the mathematics is very simple (see Chapter 3). In practice, two of the most important factors, the effective beta of the VAS and the VAS collector impedance, are only known approximately, so the dominant-pole frequency is a rather uncertain thing. Fortunately this parameter in itself has no effect on amplifier stability. What matters is the amount of feedback at high frequencies.
Things are different with the higher poles. To begin with, where are they? They are caused by internal transistor capacitances and so on, so there is no physical component to show where the roll-off is. It is generally regarded as fact that the next poles occur in the output stage, which will use power devices that are slow compared with small-signal transistors. Taking the Class-B design in Chapter 7, the TO92 MPSA06 devices have an Ft of 100 MHz, the MJE340 drivers about 15 MHz (for some reason this parameter is missing from the data sheet) and the MJ802 output devices an Ft of 2.0 MHz. Clearly the output stage is the prime suspect. The next question is at what frequencies these poles exist. There is no reason to suspect that each transistor can be
modeled by one simple pole.
There is a huge body of knowledge devoted to the art of keeping feedback loops stable while optimizing their accuracy; this is called Control Theory, and any technical bookshop will yield some intimidatingly fat volumes called things like 'Control System Design'. Inside, system stability is tackled by Laplace-domain analysis, eigenmatrix methods, and joys like the Lyapunov stability criterion. I think that makes it clear that you need to be pretty good at mathematics to appreciate this kind of approach.
Even so, it is puzzling that there seems to have been so little application of Control Theory to audio amplifier design. The reason may be that so much Control Theory assumes that you know fairly accurately the characteristics of what you are trying to control, especially in terms of poles and zeros.
One approach to appreciating negative feedback and its stability problems is SPICE simulation.
Some SPICE simulators have the ability to work in the Laplace or s-domain, but my own
experiences with this have been deeply unhappy. Otherwise respectable simulator packages output
complete rubbish in this mode. Quite what the issues are here I do not know, but it does seem that
s-domain methods are best avoided. The approach suggested here instead models poles directly as
poles, using RC networks to generate the time-constants. This requires minimal mathematics and
is far more robust. Almost any SPICE simulator – evaluation versions included – should be able to
handle the simple circuit used here.
Figure 2.17 shows the basic model, with SPICE node numbers. The scheme is to idealize the situation enough to highlight the basic issues and exclude distractions like nonlinearities or clipping. The forward gain is simply the transconductance of the input stage multiplied by the transadmittance of the VAS integrator. An important point is that with correct parameter values, the current from the input stage is realistic, and so are all the voltages.
The input differential amplifier is represented by G. This is a standard SPICE element – the VCIS, or voltage-controlled current source. It is inherently differential, as the output current from Node 4 is the scaled difference between the voltages at Nodes 3 and 7. The scaling factor of 0.009 sets the input stage transconductance (gm) to 9 mA/V, a typical figure for a bipolar input with some local
feedback.
Stability in an amplifier depends on the amount of negative feedback available at 20 kHz. This is set at the design stage by choosing the input gm and Cdom, which are the only two factors affecting the open-loop gain. In simulation it would be equally valid to change gm instead; however, in real life it is easier to alter Cdom, as the only other parameter this affects is slew rate. Changing input stage transconductance is likely to mean altering the standing current and the amount of local feedback, which will in turn impact input stage linearity.
The VAS with its dominant pole is modeled by the integrator Evas, which is given a high but finite open-loop gain, so there really is a dominant pole P1 created when the gain demanded becomes equal to that available. With Cdom 100 pF this is below 1 Hz. With infinite (or as near infinite as SPICE allows) open-loop gain the stage would be a perfect integrator. As explained elsewhere, the amount of open-loop gain available in real versions of this stage is not a well-controlled quantity, and P1 is liable to wander about in the 1–100 Hz region; fortunately this has no effect at all on HF stability. Cdom is the Miller capacitor that defines the transadmittance, and since the input stage has a realistic transconductance Cdom can be set to 100 pF, its usual real-life value. Even with this simple model we have a nested feedback loop. This apparent complication here has little effect, so long as the open-loop gain of the VAS is kept high.
The output stage is modeled as a unity-gain buffer, to which we add extra poles modeled by R1,
C1 and R2, C2. Eout1 is a unity-gain buffer internal to the output stage model, added so the second
pole does not load the first. The second buffer Eout2 is not strictly necessary as no real loads are
being driven, but it is convenient if extra complications are introduced later. Both are shown here
as a part of the output stage but the first pole could equally well be due to input stage limitations instead; the order in which the poles are connected makes no difference to the final output. Strictly
speaking, it would be more accurate to give the output stage a gain of 0.95, but this is so small a
factor that it can be ignored.
The component values used to make the poles are of course completely unrealistic, and chosen
purely to make the maths simple. It is easy to remember that 1 Ω and 1 μF make up a 1 μs time-constant. This is a pole at 159 kHz. Remember that the voltages in the latter half of the circuit are
realistic, but the currents most certainly are not.
The feedback network is represented simply by scaling the output as it is fed back to the input
stage. The closed-loop gain is set to 23 times, which is representative of many power amplifi ers.
Note that this is strictly a linear model, so the slew-rate limiting that is associated with Miller compensation is not modeled here. It would be done by placing limits on the amount of current that can flow in and out of the input stage.
Figure 2.18 shows the response to a 1 V step input, with the dominant pole the only time element in the circuit. (The other poles are disabled by making C1, C2 0.00001 pF, because this is quicker than changing the actual circuit.) The output is an exponential rise to an asymptote of 23 V, which is exactly what elementary theory predicts. The exponential shape comes from the way that the error signal that drives the integrator becomes less as the output approaches the desired level. The error, in the shape of the output current from G, is the smaller signal shown; it has been multiplied by 1000 to get mA onto the same scale as volts. The speed of response is inversely proportional to the size of Cdom, and is shown here for values of 50 and 220 pF as well as the standard 100 pF.
This simulation technique works well in the frequency domain, as well as the time domain. Simply tell SPICE to run an AC simulation instead of a TRANS (transient) simulation. The frequency response in Figure 2.19 exploits this to show how the closed-loop gain in an NFB amplifier depends on the open-loop gain available. Once more elementary feedback theory is brought to life. The value of Cdom controls the bandwidth, and it can be seen that the values used in the simulation do not give a very extended response compared with a 20 kHz audio bandwidth.
In Figure 2.20, one extra pole P2 at 1.59 MHz (a time-constant of only 100 ns) is added to the
output stage, and Cdom stepped through 50, 100 and 200 pF as before: 100 pF shows a slight
overshoot that was not there before; with 50 pF there is a serious overshoot that does not bode
well for the frequency response. Actually, it’s not that bad; Figure 2.21 returns to the frequencyresponse
domain to show that an apparently vicious overshoot is actually associated with a very
mild peaking in the frequency domain.
From here on Cdom is left set to 100 pF, its real value in most cases. In Figure 2.22 P2 is stepped instead, increasing from 100 ns to 5 μs, and while the response gets slower and shows more overshoot, the system does not become unstable. The reason is simple: sustained oscillation (as opposed to transient ringing) in a feedback loop requires positive feedback, which means that a total phase shift of 180° must have accumulated in the forward path, reversing the phase of the feedback connection. With only two poles in a system the phase shift cannot reach 180°. The VAS integrator gives a dependable 90° phase shift above P1, being an integrator, but P2 is instead a simple lag and can only give 90° phase lag at infinite frequency. So even this very simple model gives some insight. Real amplifiers do oscillate if Cdom is too small, so we know that the frequency response of the output stage cannot be meaningfully modeled with one simple lag.
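The two-pole argument can be checked numerically. Taking P2 as the 100 ns lag (1.59 MHz) used in Figure 2.20, the total forward-path lag creeps towards 180° as frequency rises but never arrives, so the Barkhausen phase condition for sustained oscillation is never met:

```python
import math

# Forward-path phase lag: the Miller-compensated VAS behaves as an
# integrator (a fixed 90 degrees above P1), and P2 is a simple lag whose
# contribution atan(w * tau2) approaches but never reaches 90 degrees.
tau2 = 100e-9  # P2 time-constant (100 ns, i.e. a pole at 1.59 MHz)

def phase_lag_deg(f):
    w = 2.0 * math.pi * f
    return 90.0 + math.degrees(math.atan(w * tau2))

for f in (100e3, 1e6, 10e6, 100e6):
    print(f"{f / 1e6:>7.1f} MHz: total lag {phase_lag_deg(f):6.2f} degrees")
```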
As President Nixon is alleged to have said: 'Two wrongs don't make a right – so let's see if three will do it!' Adding in a third pole P3, in the shape of another simple lag, gives the possibility of sustained oscillation. This is case A in Table 2.2.
Stepping the value of P2 from 0.1 to 5 μs with P3 = 500 ns in Figure 2.23 shows that damped oscillation is present from the start. Figure 2.23 also shows over 50 μs what happens when the amplifier is made very unstable (there are degrees of this) by setting P2 = 5 μs and P3 = 500 ns. It still takes time for the oscillation to develop, but exponentially diverging oscillation like this is a sure sign of disaster. Even in the short time examined here the amplitude has exceeded a rather theoretical half a kilovolt. In reality oscillation cannot increase indefinitely, if only because the supply rail voltages would limit the amplitude. In practice slew-rate limiting is probably the major controlling factor in the amplitude of high-frequency oscillation.
We have now modeled a system that will show instability. But does it do it right? Sadly, no. The oscillation is at about 200 kHz, which is a rather lower frequency than is usually seen when an amplifier misbehaves. This low frequency stems from the low P2 frequency we have to use to provoke oscillation; apart from anything else, this seems out of line with the known fT of power transistors. Practical amplifiers are likely to take off at around 500 kHz to 1 MHz when Cdom is reduced, and this suggests that phase shift is accumulating quickly at this sort of frequency. One possible explanation is that there are a large number of poles close together at a relatively high frequency.
A fourth pole can be simply added to Figure 2.17 by inserting another RC-buffer combination into the system. With P2 = 0.5 μs and P3 = P4 = 0.2 μs, instability occurs at 345 kHz, which is a step towards a realistic frequency of oscillation. This is case B in Table 2.2.
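A quick sanity check on case B: sustained oscillation can only occur where the total forward-path lag reaches 180°, and that frequency can be found by bisection. With the integrator's fixed 90° plus lags of 0.5 μs, 0.2 μs, and 0.2 μs, the phase-only estimate lands a little below the 345 kHz the simulation reports, which is reasonable since the actual take-off point also depends on the loop-gain magnitude:

```python
import math

# Case B: integrator (fixed 90 degrees) plus simple lags of 0.5 us,
# 0.2 us, and 0.2 us.  Locate the frequency where the total forward-path
# lag reaches 180 degrees by bisection on a bracketing interval.
taus = [0.5e-6, 0.2e-6, 0.2e-6]

def total_lag_deg(f):
    w = 2.0 * math.pi * f
    return 90.0 + sum(math.degrees(math.atan(w * t)) for t in taus)

lo, hi = 1e3, 10e6  # lag is below 180 at lo and above 180 at hi
for _ in range(60):
    mid = (lo + hi) / 2.0
    if total_lag_deg(mid) < 180.0:
        lo = mid
    else:
        hi = mid
print(f"180-degree crossing near {lo / 1e3:.0f} kHz")
```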
When a fifth output-stage pole is grafted on, so that P3 = P4 = P5 = 0.2 μs, the system just oscillates at 500 kHz with P2 set to 0.01 μs. This takes us close to a realistic frequency of oscillation. Rearranging the order of poles so that P2 = P3 = P4 = 0.2 μs, while P5 = 0.01 μs, is tidier, and the stability results are of course the same; this is a linear system, so the order does not matter. This is case C in Table 2.2.
Having P2, P3, and P4 all at the same frequency does not seem very plausible in physical terms, so case D shows what happens when the five poles are staggered in frequency. P2 needs to be increased to 0.3 μs to start the oscillation, which is now at 400 kHz. Case E is another version with five poles, showing that if P5 is reduced, P2 needs to be doubled to 0.4 μs for instability to begin.
In the final case F, a sixth pole is added to see if this permits sustained oscillation above 500 kHz. This seems not to be the case; the highest frequency that could be obtained after a lot of pole twiddling was 475 kHz. This makes it clear that this model is of limited accuracy (as indeed are all models – it is a matter of degree) at high frequencies, and that further refinement is required to gain further insight.
Maximizing the NFB
Having hopefully freed ourselves from fear of feedback, and appreciating the dangers of using only a little of it, the next step is to see how much can be used. It is my view that the amount of negative feedback applied should be maximized at all audio frequencies to maximize linearity, and the only limit is the requirement for reliable HF stability. In fact, global or Nyquist oscillation is not normally a difficult design problem in power amplifiers; the HF feedback factor can be calculated simply and accurately, and set to whatever figure is considered safe. (Local oscillations and parasitics are beyond the reach of design calculations and simulations, and cause much more trouble in practice.)
In classical Control Theory, the stability of a servomechanism is specified by its phase margin, the amount of extra phase shift that would be required to induce sustained oscillation, and its gain margin, the amount by which the open-loop gain would need to be increased for the same result. These concepts are not very useful in audio power amplifier work, where many of the significant time-constants are only vaguely known. However, it is worth remembering that the phase margin will never be better than 90°, because of the phase lag caused by the VAS Miller capacitor; fortunately this is more than adequate.
In practice designers must use their judgment and experience to determine an NFB factor that will give reliable stability in production. My own experience leads me to believe that when the conventional three-stage architecture is used, 30 dB of global feedback at 20 kHz is safe, provided an output inductor is used to prevent capacitive loads from eroding the stability margins. I would say that 40 dB was distinctly risky, and I would not care to pin it down any more closely than that.
The 30 dB figure assumes simple dominant-pole compensation with a 6 dB/octave roll-off for the open-loop gain. The phase and gain margins are determined by the angle at which this slope cuts the horizontal unity-loop-gain line. (I am deliberately terse here; almost all textbooks give a very full treatment of this stability criterion.) An intersection at 12 dB/octave is definitely unstable.
Working within this, there are two basic ways in which to maximize the NFB factor:
1. While a 12 dB/octave gain slope is unstable, intermediate slopes greater than 6 dB/octave can be made to work. The maximum usable is normally considered to be 10 dB/octave, which gives a phase margin of 30°. This may be acceptable in some cases, but I think it cuts it a little fine. The steeper fall in gain means that more NFB is applied at lower frequencies, and so less distortion is produced. Electronic circuitry only provides slopes in multiples of 6 dB/octave, so 10 dB/octave requires multiple overlapping time-constants to approximate a straight line at an intermediate slope. This gets complicated, and this method of maximizing NFB is not popular.
2. The gain slope varies with frequency, so that maximum open-loop gain and hence NFB factor is sustained as long as possible as frequency increases; the gain then drops quickly, at 12 dB/octave or more, but flattens out to 6 dB/octave before it reaches the critical unity-loop-gain intersection. In this case the stability margins should be relatively unchanged compared with the conventional situation. This approach is dealt with in Chapter 8.
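The figures in method 1 follow from the rule of thumb that a gain slope of m dB/octave, maintained over a wide band, corresponds to m × 15° of phase lag at the unity-loop-gain crossing; a minimal sketch:

```python
# Phase margin implied by a constant gain slope at the unity-loop-gain
# crossing: 6 dB/octave corresponds to 90 degrees of lag, i.e. 15 degrees
# of lag per dB/octave of slope, so margin = 180 - 15 * slope.
def phase_margin_deg(slope_db_per_octave):
    return 180.0 - 15.0 * slope_db_per_octave

for slope in (6, 10, 12):
    print(f"{slope:>2} dB/octave -> phase margin "
          f"{phase_margin_deg(slope):.0f} degrees")
```

6 dB/octave gives the classic 90° margin, 10 dB/octave the marginal 30° quoted above, and 12 dB/octave gives zero margin: oscillation.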
Overall Feedback versus Local Feedback
It is one of the fundamental principles of negative feedback that if you have more than one stage in an amplifier, each with a fixed amount of open-loop gain, it is more effective to close the feedback loop around all the stages, in what is called an overall or global feedback configuration, rather than applying the feedback locally by giving each stage its own feedback loop. I hasten to add that this does not mean you cannot or should not use local feedback as well as overall feedback – indeed, one of the main themes of this book is that it is a very good idea, and probably the only practical route to very low distortion levels. This is dealt with in more detail in the chapters on input stages and voltage-amplifier stages.
It is worth underlining the effectiveness of overall feedback because some of the less informed audio commentators have been known to imply that overall feedback is in some way decadent or unhealthy, as opposed to the upright moral rigor of local feedback. The underlying thought, insofar as there is one, appears to be that overall feedback encloses more stages each with their own phase shift, and therefore requires compensation that will reduce the maximum slew rate. The truth, as is usual with this sort of moan, is that this could happen if you get the compensation all wrong; so get it right – it isn't hard.
It has been proposed on many occasions that if there is an overall feedback loop, the output stage
should be left outside it. I have tried this, and believe me, it is not a good idea. The distortion
produced by an output stage so operated is jagged and nasty, and I think no one could convince
themselves it was remotely acceptable if they had seen the distortion residuals.
Figure 2.24 shows a negative-feedback system based on that in Figure 2.12, but with two stages. Each has its own open-loop gain A, its own NFB factor β, and its own open-loop error Vd added to the output of the amplifier. We want to achieve the same closed-loop gain of 25 as in Table 2.1, and we will make the wild assumption that the open-loop error of 1 in that table is now distributed equally between the two amplifiers A1 and A2. There are many ways the open- and closed-loop gains could be distributed between the two sections, but for simplicity we will give each section a closed-loop gain of 5; this means the conditions on the two sections are identical. The open-loop gains are also equally distributed between the two amplifiers so that their product is equal to column 3 in Table 2.1. The results are shown in Table 2.3: columns 1–7 show what's happening in each loop, and columns 8 and 9 give the results for the output of the two loops together, assuming for simplicity that the errors from each section can be simply added together; in other words, there is no partial cancelation due to differing phases and so on.
This final result is compared with the overall feedback case of Table 2.1 in Table 2.4, where column 1 gives total open-loop gain, and column 2 is a copy of column 7 in Table 2.1 and gives the closed-loop error for the overall feedback case. Column 3 gives the closed-loop error for the two-stage feedback case, and it is brutally obvious that splitting the overall feedback situation into two local feedback stages has been a pretty bad move. With a modest total open-loop gain of 100, the local feedback system is almost twice as bad. Moving up to total open-loop gains that are more realistic for real power amplifiers, the factor of deterioration is between six and 40 times – an amount that cannot be ignored. With higher open-loop gains the ratio gets even worse. Overall feedback is totally and unarguably superior at dealing with all kinds of amplifier errors, though in this book distortion is often the one at the front of our minds.
While there is space here to give only one illustration in detail, you may be wondering what happens if the errors are not equally distributed between the two stages; the signal level at the output of the second stage will be greater than that at the output of the first stage, so it is plausible (but by no means automatically true in the real world) that the second stage will generate more distortion than the first. If this is so, and we stick with the assumption that open-loop gain is equally distributed between the two stages, then the best way to distribute the closed-loop gain is to put most of it in the first stage so we can get as high a feedback factor as possible in the second stage. As an example, take the case where the total open-loop gain is 40,000.
Assume that all the distortion is in the second stage, so its open-loop error is 1 while that of the first stage is zero. Now redistribute the total closed-loop gain of 25 so the first stage has a closed-loop gain of 10 and the second stage has a closed-loop gain of 2.5. This gives a closed-loop error of 0.0123, which is about half of 0.0244, the result we got with the closed-loop gain equally distributed. Clearly things have been improved by applying the greater part of the local negative feedback where it is most needed. But our improved figure is still about 20 times worse than if we had used overall feedback.
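The arithmetic behind these comparisons can be sketched with the simple approximation that each loop divides its open-loop error by its feedback factor (taken here as open-loop gain over closed-loop gain), with relative errors from cascaded stages adding; the small differences from the book's 0.0244 and 0.0123 will come from the exact β values used in its tables:

```python
# Closed-loop error of one stage: open-loop error divided by the feedback
# factor, approximated as (open-loop gain / closed-loop gain).
def closed_loop_error(a_ol, a_cl, err_ol):
    return err_ol * a_cl / a_ol

A_TOTAL = 40000.0

# One overall loop: closed-loop gain 25, open-loop error 1.
overall = closed_loop_error(A_TOTAL, 25.0, 1.0)

# Two local loops, everything split equally: open-loop gain 200 each,
# closed-loop gain 5 each, open-loop error 0.5 each; errors simply add.
equal_split = 2 * closed_loop_error(200.0, 5.0, 0.5)

# All the error in the second stage; closed-loop gains redistributed
# as 10 x 2.5, so the second stage gets the larger feedback factor.
skewed = closed_loop_error(200.0, 2.5, 1.0)

print(f"overall {overall:.6f}, equal split {equal_split:.4f}, "
      f"skewed {skewed:.4f}, skewed/overall = {skewed / overall:.0f}x")
```

Even with the closed-loop gain optimally skewed, the two local loops remain about twenty times worse than one overall loop, which is the point of Table 2.4.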
In a real power amplifier, the situation is of course much more complex than this. To start with, there are usually three rather than two stages, the distortion produced by each one is level-dependent, and in the case of the voltage-amplifier stage the amount of local feedback (and hence also the amount of overall feedback) varies with frequency. Nonetheless, it will be found that overall feedback always gives better results.
Maximizing Linearity before Feedback
'Make your amplifier as linear as possible before applying NFB' has long been a cliché. It blithely ignores the difficulty of running a typical solid-state amplifier without any feedback to determine its basic linearity.
Virtually no dependable advice on how to perform this desirable linearization has been published. The two factors are the basic linearity of the forward path, and the amount of negative feedback applied to further straighten it out. The latter cannot be increased beyond certain limits or high-frequency stability is put in peril, whereas there seems no reason why open-loop linearity could not be improved without limit, leading us to what in some senses must be the ultimate goal – a distortionless amplifier. This book therefore takes as one of its main aims the understanding and improvement of open-loop linearity; as it proceeds we will develop circuit blocks culminating in some practical amplifier designs that exploit the techniques presented here.

Friday, 27 August 2010

Introduction and General Survey

The Economic Importance of Power Amplifiers
Audio power amplifiers are of considerable economic importance. They are built in their hundreds of thousands every year, and have a history extending back to the 1920s. It is therefore surprising that there have been so few books dealing in any depth with solid-state power amplifier design.
The first aim of this text is to fill that need, by providing a detailed guide to the many design decisions that must be taken when a power amplifier is designed.
The second aim is to disseminate the results of the original work done on amplifier design in the last few years. The unexpected result of these investigations was to show that power amplifiers of extraordinarily low distortion could be designed as a matter of routine, without any unwelcome side-effects, so long as a relatively simple design methodology was followed. This methodology will be explained in detail.
Assumptions
To keep its length reasonable, a book such as this must assume a basic knowledge of audio electronics. I do not propose to plough through the definitions of frequency response, total harmonic distortion (THD) and signal-to-noise ratio; these can be found anywhere. Commonplace facts have been ruthlessly omitted where their absence makes room for something new or unusual, so this is not the place to start learning electronics from scratch. Mathematics has been confined to a few simple equations determining vital parameters such as open-loop gain; anything more complex is best left to a circuit simulator you trust. Your assumptions, and hence the output, may be wrong, but at least the calculations in between will be correct . . .
The principles of negative feedback as applied to power amplifiers are explained in detail, as there is still widespread confusion as to exactly how it works.
Origins and Aims
The core of this book is based on a series of eight articles originally published in Electronics World as 'Distortion in Power Amplifiers'. This series was primarily concerned with distortion as the most variable feature of power amplifier performance. You may have two units placed side by side, one giving 2% THD and the other 0.0005% at full power, and both claiming to provide the ultimate audio experience. The ratio between the two figures is a staggering 4000:1, and this is clearly a remarkable state of affairs. One might be forgiven for concluding that distortion was not a very important parameter. What is even more surprising to those who have not followed the evolution of audio over the last two decades is that the more distortive amplifier will almost certainly be the more expensive. I shall deal in detail with the reasons for this astonishing range of variation.
The original series was inspired by the desire to invent a new output stage that would be as linear as Class-A, without the daunting heat problems. In the course of this work it emerged that output stage distortion was completely obscured by nonlinearities in the small-signal stages, and it was clear that these distortions would need to be eliminated before any progress could be made. The small-signal stages were therefore studied in isolation, using model amplifiers with low-power and very linear Class-A output stages, until the various overlapping distortion mechanisms had been separated out. It has to be said this was not an easy process. In each case there proved to be a simple, and sometimes well-known, cure, and perhaps the most novel part of my approach is that all these mechanisms are dealt with, rather than one or two, and the final result is an amplifier with unusually low distortion, using only modest and safe amounts of global negative feedback.
Much of this book concentrates on the distortion performance of amplifiers. One reason is that this varies more than any other parameter – by up to a factor of 1000. Amplifier distortion was until recently an enigmatic field – it was clear that there were several overlapping distortion mechanisms in the typical amplifier, but it is the work reported here that shows how to disentangle them, so they may be separately studied and then, with the knowledge thus gained, minimized.
I assume here that distortion is a bad thing, and should be minimized; I make no apology for putting it as plainly as that. Alternative philosophies hold that as some forms of nonlinearity are considered harmless or even euphonic, they should be encouraged, or at any rate not positively discouraged. I state plainly that I have no sympathy with the latter view; to my mind the goal is to make the audio path as transparent as possible. If some sort of distortion is considered desirable, then surely the logical way to introduce it is by an outboard processor, working at line level. This is not only more cost-effective than generating distortion with directly heated triodes, but has the important attribute that it can be switched off. Those who have brought into being our current signal-delivery chain, i.e. mixing consoles, multitrack recorders, CDs, etc., have done us proud in the matter of low distortion, and to willfully throw away this achievement at the very last stage strikes me as curious at best.
In this book I hope to provide information that is useful to all those interested in power amplifiers. Britain has a long tradition of small and very small audio companies, whose technical and production resources may not differ very greatly from those available to the committed amateur. I hope this volume will be of service to both.
I have endeavored to address both the quest for technical perfection – which is certainly not over, as far as I am concerned – and also the commercial necessity of achieving good specifications at minimum cost.
The field of audio is full of statements that appear plausible but in fact have never been tested and often turn out to be quite untrue. For this reason, I have confined myself as closely as possible to facts that I have verified myself. This volume may therefore appear somewhat idiosyncratic in places. For example, field-effect transistor (FET) output stages receive much less coverage than bipolar ones because the conclusion appears to be inescapable that FETs are both more expensive and less linear; I have therefore not pursued the FET route very far. Similarly, most of my practical design experience has been on amplifiers of less than 300 W power output, and so heavy-duty designs for large-scale public address (PA) work are also under-represented. I think this is preferable to setting down untested speculation.
The Study of Amplifier Design
Although solid-state amplifiers have been around for some 40 years, it would be a great mistake to assume that everything possible is known about them. In the course of my investigations, I discovered several matters which, not appearing in the technical literature, appear to be novel, at least in their combined application:
● The need to precisely balance the input pair to prevent second-harmonic generation.
● The demonstration of how a beta-enhancement transistor increases the linearity and reduces the collector impedance of the voltage-amplifier stage (VAS).
● An explanation of why BJT output stages always distort more into 4 Ω than 8 Ω.
● In a conventional BJT output stage, quiescent current as such is of little importance. What is crucial is the voltage between the transistor emitters.
● Power FETs, though for many years touted as superior in linearity, are actually far less linear than bipolar output devices.
● In most amplifiers, the major source of distortion is not inherent in the amplifying stages, but results from avoidable problems such as induction of supply-rail currents and poor power-supply rejection.
● Any number of oscillograms of square waves with ringing have been published that claim to be the transient response of an amplifier into a capacitive load. In actual fact this ringing is due to the output inductor resonating with the load, and tells you precisely nothing about amplifier stability.
The above list is by no means complete.
As in any developing field, this book cannot claim to be the last word on the subject; rather it hopes to be a snapshot of the state of understanding at this time. Similarly, I certainly do not claim that this book is fully comprehensive; a work that covered every possible aspect of every conceivable power amplifier would run to thousands of pages. On many occasions I have found myself about to write: 'It would take a whole book to deal properly with . . .'. Within a limited compass I have tried to be innovative as well as comprehensive, but in many cases the best I can do is to give a good selection of references that will enable the interested to pursue matters further. The appearance of a reference means that I consider it worth reading, and not that I think it to be correct in every respect.
Sometimes it is said that discrete power amplifier design is rather unenterprising, given the enormous outpouring of ingenuity in the design of analog integrated circuits. Advances in op-amp design would appear to be particularly relevant. I have therefore spent some considerable time studying this massive body of material, and I have had to regretfully conclude that it is actually a very sparse source of inspiration for new audio power amplifier techniques; there are several reasons for this, and it may spare the time of others if I quickly enumerate them here:
● A large part of the existing data refers only to small-signal MOSFETs, such as those used in (CMOS) op-amps, and is dominated by the ways in which they differ from BJTs, for example in their low transconductance. CMOS devices can have their characteristics customized to a certain extent by manipulating the width/length ratio of the channel.
● In general, only the earlier material refers to bipolar junction transistor (BJT) circuitry, and then it is often mainly concerned with the difficulties of making complementary circuitry when the only PNP transistors available are the slow lateral kind with limited beta and poor frequency response.
● Many of the CMOS op-amps studied are transconductance amplifiers, i.e. voltage difference in, current out. Compensation is usually based on putting a specified load capacitance across the high-impedance output. This does not appear to be a promising approach to making audio power amplifiers.
● Much of the op-amp material is concerned with the common-mode performance of the input stage. This is pretty much irrelevant to power amplifier design.
● Many circuit techniques rely heavily on the matching of device characteristics possible in IC fabrication, and there is also an emphasis on minimizing chip area to reduce cost.
● A good many IC techniques are only necessary because it is (or was) difficult to make precise and linear IC resistors. Circuit design is also influenced by the need to keep compensation capacitors as small as possible, as they take up a disproportionately large amount of chip area for their function.
The material here is aimed at all audio power amplifiers that are still primarily built from discrete components, which can include anything from 10 W mid-fi systems to the most rarefied reaches of what is sometimes called the 'high end', though the 'expensive end' might be a more accurate term. There are of course a large number of IC and hybrid amplifiers, but since their design details are fixed and inaccessible they are not dealt with here. Their use is (or at any rate should be) simply a matter of following the relevant application note. The quality and reliability of IC power amps has improved noticeably over the last decade, but low distortion and high power still remain the province of discrete circuitry, and this situation seems likely to persist for the foreseeable future.
Power amplifier design has often been treated as something of a black art, with the implication that the design process is extremely complex and its outcome not very predictable. I hope to show that this need no longer be the case, and that power amplifiers are now designable – in other words it is possible to predict reasonably accurately the practical performance of a purely theoretical design. I have done a considerable amount of research work on amplifier design, much of which appears to have been done for the first time, and it is now possible for me to put forward a design methodology that allows an amplifier to be designed for a specific negative-feedback factor at a given frequency, and to a large extent allows the distortion performance to be predicted. I shall show that this methodology allows amplifiers of extremely low distortion (sub-0.001% at 1 kHz) to be designed and built as a matter of routine, using only modest amounts of global negative feedback.
Misinformation in Audio
Few fields of technical endeavor are more plagued with errors, misstatements and confusion than audio. In the last 20 years, the rise of controversial and non-rational audio hypotheses, gathered under the title Subjectivism, has deepened these difficulties. It is commonplace for hi-fi reviewers to claim that they have perceived subtle audio differences that cannot be related to electrical performance measurements. These claims include the alleged production of a 'three-dimensional sound stage' and protests that 'the rhythm of the music has been altered'; these statements are typically produced in isolation, with no attempt made to correlate them to objective test results. The latter in particular appears to be a quite impossible claim.
This volume does not address the implementation of subjectivist notions, but confines itself to the measurable, the rational, and the repeatable. This is not as restrictive as it may appear; there is nothing to prevent you using the methodology presented here to design an amplifier that is technically excellent, and then gilding the lily by using whatever brands of expensive resistor or capacitor are currently fashionable, and doing the internal wiring with cable that costs more per meter than the rest of the unit put together. Such nods to subjectivist convention are unlikely to damage the real performance; this is, however, not the case with some of the more damaging hypotheses, such as the claim that negative feedback is inherently harmful. Reduce the feedback factor and you will degrade the real-life operation of almost any design.
Such problems arise because audio electronics is a more technically complex subject than it at first appears. It is easy to cobble together some sort of power amplifier that works, and this can give people an altogether exaggerated view of how deeply they understand what they have created. In contrast, no one is likely to take a 'subjective' approach to the design of an aeroplane wing or a rocket engine; the margins for error are rather smaller, and the consequences of malfunction somewhat more serious.
The subjectivist position is of no help to anyone hoping to design a good power amplifier. However, it promises to be with us for some further time yet, and it is appropriate to review it here and show why it need not be considered at the design stage. The marketing stage is of course another matter.
Science and Subjectivism
Audio engineering is in a singular position. There can be few branches of engineering science rent from top to bottom by such a basic division as the subjectivist/rationalist dichotomy. Subjectivism is still a significant issue in the hi-fi section of the industry, but mercifully has made little headway in professional audio, where an intimate acquaintance with the original sound, and the need to earn a living with reliable and affordable equipment, provide an effective barrier against most of the irrational influences. (Note that the opposite of subjectivist is not 'objectivist'. This term refers to the followers of the philosophy of Ayn Rand.)
Most fields of technology have defined and accepted measures of excellence; car makers compete to improve mph and mpg; computer manufacturers boast of MIPS (millions of instructions per second), and so on. Improvement in these real quantities is regarded as unequivocally a step forward. In the field of hi-fi, many people seem to have difficulty in deciding which direction forward is.
Working as a professional audio designer, I often encounter opinions which, while an integral part
of the subjectivist offshoot of hi-fi , are treated with ridicule by practitioners of other branches of
electrical engineering. The would-be designer is not likely to be encouraged by being told that
audio is not far removed from witchcraft, and that no one truly knows what they are doing. I have
been told by a subjectivist that the operation of the human ear is so complex that its interaction
with measurable parameters lies forever beyond human comprehension. I hope this is an extreme
position; it was, I may add, proffered as a flat statement rather than a basis for discussion.
I have studied audio design from the viewpoints of electronic design, psychoacoustics, and my own
humble efforts at musical creativity. I have found complete skepticism towards subjectivism to be
the only tenable position. Nonetheless, if hitherto unsuspected dimensions of audio quality are ever
shown to exist, then I look forward keenly to exploiting them. At this point I should say that no
doubt most of the esoteric opinions are held in complete sincerity.
The Subjectivist Position
A short definition of the subjectivist position on power amplifiers might read as follows:
● Objective measurements of an amplifier’s performance are unimportant compared with the
subjective impressions received in informal listening tests. Should the two contradict, the
objective results may be dismissed.
● Degradation effects exist in amplifiers that are unknown to orthodox engineering science,
and are not revealed by the usual objective tests.
● Considerable latitude may be employed in suggesting hypothetical mechanisms of audio
impairment, such as mysterious capacitor shortcomings and subtle cable defects, without
reference to the plausibility of the concept, or the gathering of objective evidence of any
kind.
I hope that this is considered a reasonable statement of the situation; meanwhile the great majority
of the paying public continue to buy conventional hi-fi systems, ignoring the expensive and esoteric
high-end sector where the debate is fiercest.
It may appear unlikely that a sizeable part of an industry could have set off in a direction that is
quite counter to the facts; it could be objected that such a loss of direction in a scientific subject
would be unprecedented. This is not so.
Parallel events that suggest themselves include the destruction of the study of genetics under
Lysenko in the USSR [1]. Another possibility is the study of parapsychology, now in deep trouble
because after some 100 years of investigation it has not uncovered the ghost (sorry) of a repeatable
phenomenon [2]. This sounds all too familiar. It could be argued that parapsychology is a poor
analogy because most people would accept that there was nothing there to study in the first place,
whereas nobody would assert that objective measurements and subjective sound quality have no
correlation at all; one need only pick up the telephone to remind oneself what a 4 kHz bandwidth
and 10% or so THD sounds like.
The most startling parallel I have found in the history of science is the almost forgotten affair of
Blondlot and the N-rays [3]. In 1903, René Blondlot, a respected French physicist, claimed to have
discovered a new form of radiation he called ‘N-rays’. (This was shortly after the discovery of
X-rays by Roentgen, so rays were in the air, as it were.) This invisible radiation was apparently
mysteriously refracted by aluminum prisms; but the crucial factor was that its presence could only
be shown by subjective assessment of the brightness of an electric arc allegedly affected by N-rays.
No objective measurement appeared to be possible. To Blondlot, and at least 14 of his professional
colleagues, the subtle changes in brightness were real, and the French Academy published more
than 100 papers on the subject.
Unfortunately N-rays were completely imaginary, a product of the ‘experimenter-expectancy’
effect. This was demonstrated by American scientist Robert Wood, who quietly pocketed the
aluminum prism during a demonstration, without affecting Blondlot’s recital of the results. After
this the N-ray industry collapsed very quickly, and while it was a major embarrassment at the time,
it is now almost forgotten.
The conclusion is inescapable that it is quite possible for large numbers of sincere people to
deceive themselves when dealing with subjective assessments of phenomena.
A Short History of Subjectivism
The early history of sound reproduction is notable for the number of times that observers reported
that an acoustic gramophone gave results indistinguishable from reality. The mere existence of such
statements throws light on how powerfully mindset affects subjective impressions. Interest in sound
reproduction intensified in the postwar period, and technical standards such as DIN 45-500 were
set, though they were soon criticized as too permissive. By the late 1960s it was widely accepted
that the requirements for hi-fi would be satisfied by ‘THD less than 0.1%, with no significant
crossover distortion, frequency response 20 Hz–20 kHz and as little noise as possible, please’.
The early 1970s saw this expanded to include slew rates and properly behaved overload protection,
but the approach was always scientific and it was normal to read amplifier reviews in which
measurements were dissected but no mention made of listening tests.
Following the growth of subjectivism through the pages of one of the leading subjectivist magazines
(Hi-Fi News), the first intimation of what was to come was the commencement of Paul Messenger’s
column ‘Subjective Sounds’ in September 1976, in which he said: ‘The assessment will be (almost)
purely subjective, which has both strengths and weaknesses, as the inclusion of laboratory data
would involve too much time and space, and although the ear may be the most fallible, it is also
the most sensitive evaluation instrument.’ This is subjectivism as expedient rather than policy.
Significantly, none of the early installments contained references to amplifier sound. In March 1977,
an article by Jean Hiraga was published vilifying high levels of negative feedback and praising
the sound of an amplifier with 2% THD. In the same issue, Paul Messenger stated that a Radford
valve amplifier sounded better than a transistor one, and by the end of the year the amplifier-sound
bandwagon was rolling. Hiraga returned in August 1977 with a highly contentious set of claims
about audible speaker cables, and after that no hypothesis was too unlikely to receive attention.
The Limits of Hearing
In evaluating the subjectivist position, it is essential to consider the known abilities of the human
ear. Contrary to the impression given by some commentators, who call constantly for more
psychoacoustical research, a vast amount of hard scientific information already exists on this
subject, and some of it may be briefly summarized thus:
● The smallest step-change in amplitude that can be detected is about 0.3 dB for a pure tone.
In more realistic situations it is 0.5–1.0 dB. This is about a 10% change [4].
● The smallest detectable change in frequency of a tone is about 0.2% in the band 500 Hz–
2 kHz. In percentage terms, this is the parameter for which the ear is most sensitive [5].
● The least detectable amount of harmonic distortion is not an easy figure to determine,
as there is a multitude of variables involved, and in particular the continuously varying
level of program means that the level of THD introduced is also dynamically changing.
With mostly low-order harmonics present the just-detectable amount is about 1%, though
crossover effects can be picked up at 0.3%, and probably lower. There is certainly no
evidence that an amplifier producing 0.001% THD sounds any cleaner than one producing
0.005% [6].
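The level-change figures quoted above are easy to check: a step of n dB corresponds to a voltage ratio of 10^(n/20), so the percentage change follows directly. A quick sketch (the dB values are the ones from the list above):

```python
import math

def db_to_percent(db: float) -> float:
    """Convert a level change in dB to a percentage voltage change."""
    return (10 ** (db / 20) - 1) * 100

# Just-detectable steps quoted in the text
for step in (0.3, 0.5, 1.0):
    print(f"{step} dB step = {db_to_percent(step):.1f}% voltage change")
```

A 0.3 dB step works out to about 3.5%, and 0.5–1.0 dB to roughly 6–12%, consistent with the "about 10%" figure given for the realistic case.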
It is acknowledged that THD measurements, taken with the usual notch-type analyzer, are of
limited use in predicting the subjective impairment produced by an imperfect audio path. With
music, etc. intermodulation effects are demonstrably more important than harmonics. However,
THD tests have the unique advantage that visual inspection of the distortion residual gives an
experienced observer a great deal of information about the root cause of the nonlinearity. Many other distortion tests exist which, while yielding very little information to the designer, exercise
the whole audio bandwidth at once and correlate well with properly conducted tests for subjective
impairment by distortion. The Belcher intermodulation test (the principle is shown in Figure 1.1 )
deserves more attention than it has received, and may become more popular now that DSP chips
are cheaper.
One of the objections often made to THD tests is that their resolution does not allow verification
that no nonlinearities exist at very low level – a sort of micro-crossover distortion. Hawksford,
for example, has stated ‘Low-level threshold phenomena . . . set bounds upon the ultimate
transparency of an audio system’ [7], and several commentators have stated their belief that some
metallic contacts consist of a net of so-called ‘micro-diodes’. In fact, this kind of mischievous
hypothesis can be disposed of using THD techniques.
I evolved a method of measuring THD down to 0.01% at 200 µV rms, and applied it to large
electrolytics, connectors of varying provenance, and lengths of copper cable with and without alleged
magic properties. The method required the design of an ultra-low-noise (EIN −150 dBu for a 10 Ω
source resistance) and very-low-THD amplifier [8]. The measurement method is shown in Figure 1.2;
using an attenuator with a very low value of resistance to reduce the incoming signal keeps the
Johnson noise to a minimum. In no case was any unusual distortion detected, and it would be nice to
think that this red herring at least has been laid to rest.
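The reason for keeping the attenuator resistance very low is that Johnson (thermal) noise scales with the square root of resistance: V_n = sqrt(4kTRB). A sketch of the arithmetic follows; the bandwidth and temperature are illustrative assumptions, not figures from the text:

```python
import math

def johnson_noise_dbu(r_ohms: float, bandwidth_hz: float = 22_000,
                      temp_k: float = 290.0) -> float:
    """Johnson noise of a resistance, expressed in dBu (0 dBu = 775 mV rms)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    v_rms = math.sqrt(4 * k * temp_k * r_ohms * bandwidth_hz)
    return 20 * math.log10(v_rms / 0.775)

# A 10 ohm source sits far below a 10 kohm one (comparison values assumed)
print(johnson_noise_dbu(10))      # roughly -142 dBu
print(johnson_noise_dbu(10_000))  # roughly -112 dBu
```

A 10 Ω source thus contributes a noise floor around −142 dBu over the audio band, which is why an amplifier EIN of −150 dBu makes measurements at 200 µV practical.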
● Interchannel crosstalk can obviously degrade stereo separation, but the effect is not
detectable until it is worse than 20 dB, which would be a very bad amplifier indeed [9].
● Phase and group delay have been an area of dispute for a long time. As Stanley Lipshitz
et al. have pointed out, these effects are obviously perceptible if they are gross enough;
if an amplifier was so heroically misconceived as to produce the top half of the audio
spectrum 3 hours after the bottom, there would be no room for argument. In more practical
terms, concern about phase problems has centered on loudspeakers and their crossovers,
as this would seem to be the only place where a phase shift might exist without an
accompanying frequency-response change to make it obvious. Lipshitz appears to have
demonstrated [10] that a second-order all-pass filter (an all-pass filter gives a frequency-dependent
phase shift without level changes) is audible, whereas BBC findings reported
by Harwood [11] indicate the opposite, and the truth of the matter is still not clear. This
controversy is of limited importance to amplifier designers, as it would take spectacular
incompetence to produce a circuit that included an accidental all-pass filter. Without such,
the phase response of an amplifier is completely defined by its frequency response, and
vice versa; in Control Theory this is Bode’s Second Law [12], and it should be much more
widely known in the hi-fi world than it is. A properly designed amplifier has its response
roll-off points not too far outside the audio band, and these will have accompanying phase
shifts; there is no evidence that these are perceptible [8].
The picture of the ear that emerges from psychoacoustics and related fields is not that of a precision
instrument. Its ultimate sensitivity, directional capabilities and dynamic range are far more
impressive than its ability to measure small level changes or detect correlated low-level signals
like distortion harmonics. This is unsurprising; from an evolutionary viewpoint the functions of the
ear are to warn of approaching danger (sensitivity and direction-fi nding being paramount) and for
speech. In speech perception the identification of formants (the bands of harmonics from vocal-chord
pulse excitation, selectively emphasized by vocal-tract resonances) and vowel/consonant
discriminations are infinitely more important than any hi-fi parameter. Presumably the whole
existence of music as a source of pleasure is an accidental side-effect of our remarkable powers of
speech perception: how it acts as a direct route to the emotions remains profoundly mysterious.
Articles of Faith: The Tenets of Subjectivism
All of the alleged effects listed below have received considerable affirmation in the audio press, to
the point where some are treated as facts. The reality is that none of them has in the last 15 years
proved susceptible to objective confirmation. This sad record is perhaps equalled only by students
of parapsychology. I hope that the brief statements below are considered fair by their proponents. If
not I have no doubt I shall soon hear about it:
● Sine waves are steady-state signals that represent too easy a test for amplifiers, compared
with the complexities of music.
This is presumably meant to imply that sine waves are in some way particularly easy for an
amplifier to deal with, the implication being that anyone using a THD analyzer must be hopelessly
naive. Since sines and cosines have an unending series of non-zero differentials, ‘steady’ hardly
comes into it. I know of no evidence that sine waves of randomly varying amplitude (for example)
would provide a more searching test of amplifier competence.
I hold this sort of view to be the result of anthropomorphic thinking about amplifiers, treating them
as though they think about what they amplify. Twenty sine waves of different frequencies may
be conceptually complex to us, and the output of a symphony orchestra even more so, but to an
amplifier both composite signals resolve to a single instantaneous voltage that must be increased in
amplitude and presented at low impedance. An amplifier has no perspective on the signal arriving
at its input, but must literally take it as it comes.
● Capacitors affect the signal passing through them in a way invisible to distortion
measurements.
Several writers have praised the technique of subtracting pulse signals passed through two different
sorts of capacitor, claiming that the non-zero residue proves that capacitors can introduce audible
errors. My view is that these tests expose only well-known capacitor shortcomings such as
dielectric absorption and series resistance, plus perhaps the vulnerability of the dielectric film in
electrolytics to reverse-biasing. No one has yet shown how these relate to capacitor audibility in
properly designed equipment.
● Passing an audio signal through cables, printed-circuit board (PCB) tracks or switch
contacts causes a cumulative deterioration. Precious metal contact surfaces alleviate but
do not eliminate the problem. This too is undetectable by tests for nonlinearity.
Concern over cables is widespread, but it can be said with confidence that there is as yet not a shred
of evidence to support it. Any piece of wire passes a sine wave with unmeasurable distortion, and
so simple notions of inter-crystal rectification or ‘micro-diodes’ can be discounted, quite apart from
the fact that such behaviour is absolutely ruled out by established materials science. No plausible
means of detecting, let alone measuring, cable degradation has ever been proposed.
The most significant parameter of a loudspeaker cable is probably its lumped inductance. This can
cause minor variations in frequency response at the very top of the audio band, given a demanding
load impedance. These deviations are unlikely to exceed 0.1 dB for reasonable cable constructions
(say, inductance less than 4 µH). The resistance of a typical cable (say, 0.1 Ω) causes response
variations across the band, following the speaker impedance curve, but these are usually even
smaller at around 0.05 dB. This is not audible.
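The inductance figure above can be checked with a simple series-impedance model. Treating the loudspeaker load as purely resistive is an assumption made here for illustration; real speaker impedances are complex:

```python
import math

def hf_loss_db(l_henries: float, f_hz: float, load_ohms: float) -> float:
    """Level loss at frequency f caused by cable series inductance
    feeding a purely resistive load (potential-divider model)."""
    x_l = 2 * math.pi * f_hz * l_henries                 # inductive reactance
    return 20 * math.log10(math.hypot(load_ohms, x_l) / load_ohms)

# 4 uH of cable inductance at 20 kHz, the worst-case figures quoted above
print(hf_loss_db(4e-6, 20_000, 8))  # about 0.017 dB into 8 ohms
print(hf_loss_db(4e-6, 20_000, 4))  # about 0.068 dB into a demanding 4 ohms
```

Even into a demanding 4 Ω load the top-of-band loss stays under the 0.1 dB ceiling quoted in the text.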
Corrosion is often blamed for subtle signal degradation at switch and connector contacts; this is
unlikely. By far the most common form of contact degradation is the formation of an insulating
sulfide layer on silver contacts, derived from hydrogen sulfide air pollution. This typically cuts
the signal altogether, except when signal peaks temporarily punch through the sulfide layer. The
effect is gross and seems inapplicable to theories of subtle degradation. Gold-plating is the only
certain cure. It costs money.
● Cables are directional, and pass audio better in one direction than the other.
Audio signals are AC. Cables cannot be directional any more than 2 + 2 can equal 5. Anyone
prepared to believe this nonsense will not be capable of designing amplifiers, so there seems no
point in further comment.
● The sound of valves is inherently superior to that of any kind of semiconductor.
The ‘valve sound’ is one phenomenon that may have a real existence; it has been known for a long
time that listeners sometimes prefer to have a certain amount of second-harmonic distortion added
in [13], and most valve amplifiers provide just that, due to grave difficulties in providing good linearity
with modest feedback factors. While this may well sound nice, hi-fi is supposedly about accuracy, and
if the sound is to be thus modified it should be controllable from the front panel by a ‘niceness’ knob.
The use of valves leads to some intractable problems of linearity, reliability and the need for
intimidatingly expensive (and, once more, nonlinear) iron-cored transformers. The current fashion
is for exposed valves, and it is not at all clear to me that a fragile glass bottle, containing a red-hot
anode with hundreds of volts DC on it, is wholly satisfactory for domestic safety.
A recent development in subjectivism is enthusiasm for single-ended directly heated triodes,
usually in extremely expensive monoblock systems. Such an amplifier generates large amounts of
second-harmonic distortion, due to the asymmetry of single-ended operation, and requires a very
large output transformer as its primary carries the full DC anode current, and core saturation must
be avoided. Power outputs are inevitably very limited at 10 W or less. In a recent review, the Cary
CAD-300SEI triode amplifier yielded 3% THD at 9 W, at a cost of £3400 [14]. And you still need to
buy a pre-amp.
● Negative feedback is inherently a bad thing; the less it is used, the better the amplifier
sounds, without qualification.
Negative feedback is not inherently a bad thing; it is an absolutely indispensable principle of
electronic design, and if used properly has the remarkable ability to make just about every parameter
better. It is usually global feedback that the critic has in mind. Local negative feedback is grudgingly
regarded as acceptable, probably because making a circuit with no feedback of any kind is near
impossible. It is often said that high levels of NFB enforce a low slew rate. This is quite untrue, and
this thorny issue is dealt with in detail in Chapters 4 and 8. For more on slew rate, see also Ref. [15].
● Tone controls cause an audible deterioration even when set to the flat position.
This is usually blamed on ‘phase shift’. At the time of writing, tone controls on a pre-amp badly
damage its chances of street (or rather sitting-room) credibility, for no good reason. Tone controls
set to ‘flat’ cannot possibly contribute any extra phase shift and must be inaudible. My view is
that they are absolutely indispensable for correcting room acoustics, loudspeaker shortcomings, or
tonal balance of the source material, and that a lot of people are suffering suboptimal sound as a
result of this fashion. It is now commonplace for audio critics to suggest that frequency-response
inadequacies should be corrected by changing loudspeakers. This is an extraordinarily expensive
way of avoiding tone controls.
● The design of the power supply has subtle effects on the sound, quite apart from ordinary
dangers like ripple injection.
All good amplifier stages ignore imperfections in their power supplies, op-amps in particular
excelling at power-supply rejection ratio. More nonsense has been written on the subject of subtle
PSU failings than on most audio topics; recommendations of hard-wiring the mains or using
gold-plated 13 A plugs would seem to hold no residual shred of rationality, in view of the usual
processes of rectification and smoothing that the raw AC undergoes. And where do you stop? At
the local substation? Should we gold-plate the pylons?
● Monobloc construction (i.e. two separate power amplifier boxes) is always audibly
superior, due to the reduction in crosstalk.
There is no need to go to the expense of monobloc power amplifiers in order to keep crosstalk
under control, even when making it substantially better than the 20 dB that is actually necessary.
The techniques are conventional; the last stereo power amplifier I designed managed an
easy 90 dB at 10 kHz without anything other than the usual precautions. In this area dedicated
followers of fashion pay dearly for the privilege, as the cost of the mechanical parts will be nearly
doubled.
● Microphony is an important factor in the sound of an amplifier, so any attempt at vibration
damping is a good idea.
Microphony is essentially something that happens in sensitive valve preamplifiers. If it happens in
solid-state power amplifiers the level is so far below the noise it is effectively nonexistent.
Experiments on this sort of thing are rare (if not unheard of) and so I offer the only scrap of
evidence I have. Take a microphone pre-amp operating at a gain of 70 dB, and tap the input capacitors (assumed electrolytic) sharply with a screwdriver; the pre-amp output will be a dull
thump, at low level. The physical impact on the electrolytics (the only components that show this
effect) is hugely greater than that of any acoustic vibration; and I think the effect in power amps, if
any, must be so vanishingly small that it could never be found under the inherent circuit noise.
Let us for a moment assume that some or all of the above hypotheses are true, and explore the
implications. The effects are not detectable by conventional measurement, but are assumed to be
audible. First, it can presumably be taken as axiomatic that for each audible defect some change
occurs in the pattern of pressure fluctuations reaching the ears, and therefore a corresponding
modification has occurred to the electrical signal passing through the amplifier. Any other starting
point supposes that there is some other route conveying information apart from the electrical
signals, and we are faced with magic or forces unknown to science. Mercifully no commentator has
(so far) suggested this. Hence there must be defects in the audio signals, but they are not revealed by
the usual test methods. How could this situation exist? There seem to be two possible explanations
for this failure of detection: one is that the standard measurements are relevant but of insufficient
resolution, and we should be measuring frequency response, etc., to thousandths of a decibel. There
is no evidence whatsoever that such micro-deviations are audible under any circumstances.
An alternative (and more popular) explanation is that standard sine-wave THD measurements miss
the point by failing to excite subtle distortion mechanisms that are triggered only by music, the
spoken word, or whatever. This assumes that these music-only distortions are also left undisturbed
by multi-tone intermodulation tests, and even the complex pseudorandom signals used in the
Belcher distortion test [16]. The Belcher method effectively tests the audio path at all frequencies at
once, and it is hard to conceive of a real defect that could escape it.
The most positive proof that subjectivism is fallacious is given by subtraction testing. This is the
devastatingly simple technique of subtracting before-and-after amplifier signals and demonstrating
that nothing audibly detectable remains.
It transpires that these alleged music-only mechanisms are not even revealed by music, or indeed
anything else, and it appears the subtraction test has finally shown as nonexistent these elusive
degradation mechanisms.
The subtraction technique was proposed by Baxandall in 1977 [17]. The principle is shown in
Figure 1.3; careful adjustment of the roll-off balance network prevents minor bandwidth variations
from swamping the true distortion residual. In the intervening years the subjectivist camp has made
no effective reply.
A simplified version of the test was introduced by Hafler [18]. This method is less sensitive, but has the
advantage that there is less electronics in the signal path for anyone to argue about (see Figure 1.4).
A prominent subjectivist reviewer, on trying this demonstration, was reduced to claiming that the
passive switchbox used to implement the Hafler test was causing so much sonic degradation that
all amplifier performance was swamped [19]. I do not feel that this is a tenable position. So far all
experiments such as these have been ignored or brushed aside by the subjectivist camp; no attempt
has been made to answer the extremely serious objections that this demonstration raises.
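The principle behind these subtraction tests can be illustrated numerically: scale the amplifier output back down to the input level, subtract the input, and only the distortion residual remains. The amplifier model and distortion level below are invented for illustration; they are not taken from Baxandall's or Hafler's circuits:

```python
import math

def toy_amplifier(x: float, gain: float = 20.0, k3: float = 1e-4) -> float:
    """Hypothetical amplifier: linear gain plus a small cubic nonlinearity."""
    y = gain * x
    return y + k3 * y ** 3

n = 1000
signal = [math.sin(2 * math.pi * i / n) for i in range(n)]  # one cycle of a sine
output = [toy_amplifier(s) for s in signal]

# Gain-matched subtraction: the linear part cancels, leaving distortion alone
residual = [o / 20.0 - s for o, s in zip(output, signal)]

def rms(xs):
    return math.sqrt(sum(v * v for v in xs) / len(xs))

ratio = rms(residual) / rms(signal)
print(f"residual is {20 * math.log10(ratio):.0f} dB below the signal")
```

The residual here sits about 30 dB down; if nothing in that residual is audible when auditioned, the amplifier's imperfections are audibly irrelevant, which is the force of the Baxandall/Hafler argument.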
The Length of the Audio Chain
An apparently insurmountable objection to the existence of non-measurable amplifier quirks is
that recorded sound of almost any pedigree has passed through a complex mixing console at least
once; prominent parts like vocals or lead guitar will almost certainly have passed through at least
twice, once for recording and once at mix-down. More significantly, it must have passed through
the potential quality bottleneck of an analog tape machine or, more likely, the A–D converters
of digital equipment. In its long path from here to ear the audio passes through at least 100
op-amps, dozens of connectors, and several hundred meters of ordinary screened cable. If mystical
degradations can occur, it defies reason to insist that those introduced by the last 1% of the path are
the critical ones.
The Implications
This confused state of amplifier criticism has negative consequences. First, if equipment is
reviewed with results that appear arbitrary, and which are in particular incapable of replication
or confirmation, this can be grossly unfair to manufacturers who lose out in the lottery. Since
subjective assessments cannot be replicated, the commercial success of a given make can depend
entirely on the vagaries of fashion. While this is fine in the realm of clothing or soft furnishings, the
hi-fi business is still claiming accuracy of reproduction as its raison d’être, and therefore you would
expect the technical element to be dominant.
A second consequence of placing subjectivism above measurements is that it places designers in
a most unenviable position. No degree of ingenuity or attention to technical detail can ensure a
good review, and the pressure to adopt fashionable and expensive expedients (such as linear-crystal
internal wiring) is great, even if the designer is certain that they have no audible effect for good or
evil. Designers are faced with a choice between swallowing the subjectivist credo whole or keeping
very quiet and leaving the talking to the marketing department.
If objective measurements are disregarded, it is inevitable that poor amplifiers will be produced,
some so bad that their defects are unquestionably audible. In recent reviews [20] it was easy to
find a £795 preamplifier (Counterpoint SA7) that boasted a feeble 12 dB disk overload margin
(another pre-amp costing £2040 struggled up to 15 dB – Burmester 838/846) and another costing
£1550 that could only manage a 1 kHz distortion performance of 1%, a lack of linearity that would
have caused consternation 10 years ago (Quicksilver). However, by paying £5700 one could inch
this down to 0.3% (Audio Research M100-2 monoblocs). This does not of course mean that it is
impossible to buy an ‘audiophile’ amplifier that does measure well; another example would be the
preamplifier/power amplifier combination that provides a very respectable disk overload margin
of 31 dB and 1 kHz rated-power distortion below 0.003%, the total cost being £725 (Audiolab
8000C/8000P). I believe this to be a representative sample, and we appear to be in the paradoxical
situation that the most expensive equipment provides the worst objective performance. Whatever
the rights and wrongs of subjective assessment, I think that most people would agree that this is a
strange state of affairs. Finally, it is surely a morally ambiguous position to persuade non-technical
people that to get a really good sound they have to buy £2000 pre-amps and so on, when both
technical orthodoxy and common sense indicate that this is quite unnecessary.
The Reasons Why
Some tentative conclusions are possible as to why hi-fi engineering has reached the pass that it
has. I believe one basic reason is the difficulty of defining the quality of an audio experience; you
cannot draw a diagram to communicate what something sounded like. In the same way, acoustical
memory is more evanescent than visual memory. It is far easier to visualize what a London bus
looks like than to recall the details of a musical performance. Similarly, it is difficult to ‘look more
closely’: turning up the volume is more like turning up the brightness of a TV picture; once an
optimal level is reached, any further increase becomes annoying, then painful.
It has been universally recognized for many years in experimental psychology, particularly in
experiments about perception, that people tend to perceive what they want to perceive. This is
often called the experimenter-expectancy effect; it is more subtle and insidious than it sounds, and
the history of science is littered with the wrecked careers of those who failed to guard against it.
Such self-deception has most often occurred in fields like biology, where although the raw data
may be numerical, there is no real mathematical theory to check it against. When the only ‘results’
are vague subjective impressions, the danger is clearly much greater, no matter how absolute the
integrity of the experimenter. Thus in psychological work great care is necessary in the use of
impartial observers, double-blind techniques, and rigorous statistical tests for significance. The vast
majority of subjectivist writings wholly ignore these precautions, with predictable results. In a few
cases properly controlled listening tests have been done, and at the time of writing all have resulted
in different amplifi ers sounding indistinguishable. I believe the conclusion is inescapable that
experimenter expectancy has played a dominant role in the growth of subjectivism.
It is notable that in subjectivist audio the ‘correct’ answer is always the more expensive or
inconvenient one. Electronics is rarely as simple as that. A major improvement is more likely to be
linked with a new circuit topology or new type of semiconductor than with mindlessly specifying
more expensive components of the same type; cars do not go faster with platinum pistons.
It might be difficult to produce a rigorous statistical analysis, but it is my view that the reported
subjective quality of a piece of equipment correlates far more with the price than with anything else.
There is perhaps here an echo of the Protestant work ethic: you must suffer now to enjoy yourself
later. Another reason for the relatively effortless rise of subjectivism is the me-too effect; many people
are reluctant to admit that they cannot detect acoustic subtleties as nobody wants to be labeled as
insensitive, outmoded, or just plain deaf. It is also virtually impossible to absolutely disprove any
claims, as the claimant can always retreat a fraction and say that there was something special about
the combination of hardware in use during the disputed tests, or complain that the phenomena are too
delicate for brutal logic to be used on them. In any case, most competent engineers with a taste for
rationality probably have better things to do than dispute every controversial report.
Under these conditions, vague claims tend, by a kind of intellectual inflation, to gradually become
regarded as facts. Manufacturers have some incentive to support the subjectivist camp as they can
claim that only they understand a particular non-measurable effect, but this is no guarantee that the
dice may not fall badly in a subjective review.
The Outlook
It seems unlikely that subjectivism will disappear for a long time, if ever, given the momentum
that it has gained, the entrenched positions that some people have taken up, and the sadly uncritical
way in which people accept an unsupported assertion as the truth simply because it is asserted
with frequency and conviction. In an ideal world every such statement would be greeted by
loud demands for evidence. However, the history of the world sometimes leads one to suppose
pessimistically that people will believe anything. By analogy, one might suppose that subjectivism would persist for the same reason that parapsychology has; there will always be people who will
believe what they want to believe rather than what the hard facts indicate.
More than 10 years have passed since the above material on subjectivism was written, but there
seems to be no reason to change a word of it. Amplifier reviews continue to make completely
unsupportable assertions, of which the most obtrusive these days is the notion that an amplifier
can in some way alter the ‘timing’ of music. This would be a remarkable feat to accomplish with a
handful of transistors, were it not wholly imaginary.
During my sojourn at TAG-McLaren Audio, we conducted an extensive set of double-blind
listening tests, using a lot of experienced people from various quarters of the hi-fi industry. An
amplifier loosely based on the Otala four-stage architecture was compared with a Blameless three-stage
architecture perpetrated by myself (these terms are fully explained in Chapter 2). The two
amplifiers could not have been more different – the four-stage had complex lead-lag compensation
and a buffered complementary feedback pair (CFP) output, while my three-stage had conventional
Miller dominant-pole compensation. There were too many other detail differences to list here.
After a rigorous statistical analysis the result – as you may have guessed – was that nobody could
tell the two amplifiers apart.
Technical Errors
Misinformation also arises in the purely technical domain; I have found some of the most
enduring and widely held technical beliefs to be unfounded. For example, if you take a Class-B
amplifier and increase its quiescent current so that it runs in Class-A at low levels, i.e. in Class-AB,
most people will tell you that the distortion will be reduced as you have moved nearer to the full
Class-A condition. This is untrue. A correctly configured amplifier gives more distortion in
Class-AB, not less, because of the abrupt gain changes inherent in switching from A to B every cycle.
Discoveries like this can only be made because it is now straightforward to make testbed amplifiers
with ultra-low distortion – lower than that which used to be thought possible. The reduction of
distortion to the basic or inherent level that a circuit configuration is capable of is a fundamental
requirement for serious design work in this field; in Class-B at least this gives a defined and
repeatable standard of performance that in later chapters I name a Blameless amplifier, so called
because it avoids error rather than claiming new virtues.
It has proved possible to take the standard Class-B power amplifier configuration, and by minor
modifications reduce the distortion to below the noise floor at low frequencies. This represents
approximately 0.0005 – 0.0008% THD, depending on the exact design of the circuitry, and the
actual distortion can be shown to be substantially below this if spectrum-analysis techniques are
used to separate the harmonics from the noise.
The Performance Requirements for Amplifiers
This section is not a recapitulation of international standards, which are intended to provide a
minimum level of quality rather than extend the art. It is rather my own view of what you should be worrying about at the start of the design process, and the first items to consider are the brutally
pragmatic ones related to keeping you in business and out of prison.
Safety
In the drive to produce the finest amplifier ever made, do not forget that the Prime Directive of audio
design is – Thou Shalt Not Kill. Every other consideration comes a poor second, not only for ethical
reasons, but also because one serious lawsuit will close down most audio companies forever.
Reliability
If you are in the business of manufacturing, you had better make sure that your equipment keeps
working, so that you too can keep working. It has to be admitted that power amplifiers – especially
the more powerful ones – have a reputation for reliability that is poor compared with most
branches of electronics. The ‘high end’ in particular has gathered to itself a bad reputation for
dependability [21].
Power Output
In commercial practice, this is decided for you by the marketing department. Even if you can
please yourself, the power output capability needs careful thought as it has a powerful and
nonlinear effect on the cost.
The last statement requires explanation. As the output power increases, a point is reached when
single output devices are incapable of sustaining the thermal dissipation; parallel pairs are
required, and the price jumps up. Similarly, transformer laminations come in standard sizes, so the
transformer size and cost will also increase in discrete steps.
Domestic hi-fi amplifiers usually range from 20 to 150 W into 8 Ω, though with a scattering of
much higher powers. PA units will range from 50 W, for foldback purposes (i.e. the sound the
musician actually hears, to monitor his/her playing, as opposed to that thrown out forwards by the
main PA stacks, also called stage monitoring) to 1 kW or more. Amplifiers of extremely high power
are not popular, partly because the economies of scale are small, but mainly because it means
putting all your eggs in one basket, and a failure becomes disastrous. This is accentuated by the
statistically unproven but almost universally held opinion that high-power solid-state amplifiers are
inherently less reliable than others.
If an amplifier gives a certain output into 8 Ω, it will not give exactly twice as much into 4 Ω loads;
in fact it will probably be much less than this, due to the increased resistive losses in 4 Ω operation,
and the way that power alters as the square of voltage. Typically, an amplifier giving 180 W into 8 Ω
might be expected to yield 260 W into 4 Ω and 350 W into 2 Ω, if it can drive so low a load at all.
These figures are approximate, depending very much on power supply design.
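The arithmetic here is easily sketched. A perfect voltage source would exactly double its power each time the load impedance is halved, since power goes as V²/R; real amplifiers fall well short. The short Python below is purely illustrative, using the example figures quoted above rather than measurements of any particular design:

```python
# Sketch: ideal vs. typical power scaling into lower load impedances.
# For a perfect voltage source, P = V^2 / R, so halving the load
# impedance would exactly double the output power. The 'typical'
# figures are the illustrative ones from the text, not measurements.

def ideal_power(p_8ohm, load):
    """Power an ideal voltage source delivering p_8ohm into 8 ohms
    would deliver into the given load."""
    return p_8ohm * 8.0 / load

p8 = 180.0  # W into 8 ohms, as in the example above
for load, typical in [(8.0, 180.0), (4.0, 260.0), (2.0, 350.0)]:
    print(f"{load:>3.0f} ohm: ideal {ideal_power(p8, load):5.0f} W, "
          f"typical {typical:5.0f} W")
```

The gap between the ideal 360 W and the typical 260 W into 4 Ω is the resistive-loss and supply-sag shortfall described above.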
Nominally 8 Ω loudspeakers are the most common in hi-fi applications. The ‘nominal’ title
accommodates the fact that all loudspeakers, especially multi-element types, have marked changes in input impedance with frequency, and are only resistive at a few spot frequencies. Nominal 8 Ω
loudspeakers may be expected to drop to at least 6 Ω in some part of the audio spectrum. To allow
for this, almost all amplifiers are rated as capable of 4 Ω as well as 8 Ω loads. This takes care of
almost any nominal 8 Ω speaker, but leaves no safety margin for nominal 4 Ω designs, which are
likely to dip to 3 Ω or less. Extending amplifier capability to deal with lower load impedances
for anything other than very short periods has serious cost implications for the power-supply
transformer and heat-sinking; these already represent the bulk of the cost.
The most important thing to remember in specifying output power is that you have to increase it
by an awful lot to make the amplifier significantly louder. We do not perceive acoustic power as
such – there is no way we could possibly integrate the energy liberated in a room, and it would be
a singularly useless thing to perceive if we could. It is much nearer the truth to say that we perceive
pressure. It is well known that power in watts must be quadrupled to double sound pressure level
(SPL), but this is not the same as doubling subjective loudness; this is measured in Sones rather
than dB above threshold, and some psychoacousticians have reported that doubling subjective
loudness requires a 10 dB rather than 6 dB rise in SPL, implying that amplifier power must be
increased tenfold, rather than merely quadrupled [22]. It is at any rate clear that changing from a
25 W to a 30 W amplifier will not give an audible increase in level.
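The decibel arithmetic behind that last claim is easily checked; a minimal sketch in Python, using the power ratios discussed above:

```python
import math

# Sketch of the power/loudness arithmetic. Doubling SPL (+6 dB)
# needs 4x the power, since dB = 10*log10(P2/P1); doubling
# *subjective* loudness is reported to need about +10 dB, i.e.
# roughly 10x the power.

def db_gain(p_new, p_old):
    """Level change in dB for a given power ratio."""
    return 10.0 * math.log10(p_new / p_old)

print(db_gain(100.0, 25.0))  # quadrupling power: ~6 dB (doubled SPL)
print(db_gain(250.0, 25.0))  # tenfold power: 10 dB (doubled loudness)
print(db_gain(30.0, 25.0))   # 25 W -> 30 W: under 1 dB, inaudible
```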
This does not mean that fractions of a watt are never of interest. They can matter either in pursuit of
maximum efficiency for its own sake, or because a design is only just capable of meeting its output
specification.
Some hi-fi reviewers set great value on very high peak current capability for short periods. While
it is possible to think up special test waveforms that demand unusually large peak currents, any
evidence that this effect is important in use is so far lacking.
Frequency Response
This can be dealt with crisply; the minimum is 20 Hz – 20 kHz, ±0.5 dB, though there should never
be any plus about it when solid-state amplifi ers are concerned. Any hint of a peak before the rolloff
should be looked at with extreme suspicion, as it probably means doubtful HF stability. This is
less true of valve amplifiers, where the bandwidth limits of the output transformer mean that even
modest NFB factors tend to cause peaking at both high and low ends of the spectrum.
Having dealt with the issue crisply, there is no hope that everyone will agree that this is adequate.
CDs do not have the built-in LF limitations of vinyl and could presumably encode the barometric
pressure in the recording studio if this was felt to be desirable, and so an extension to −0.5 dB
at 5 or 10 Hz is perfectly feasible. However, if infrabass information does exist down at these
frequencies, no domestic loudspeaker will reproduce it.
Noise
There should be as little as possible without compromising other parameters. The noise
performance of a power amplifier is not an irrelevance [23], especially in a domestic setting.
Distortion
Once more, a sensible target might be: as little as possible without messing up something else. This
ignores the views of those who feel a power amplifier is an appropriate device for adding distortion
to a musical performance. Such views are not considered in the body of this book; it is, after all,
not a treatise on fuzz-boxes or other guitar effects.
I hope that the techniques explained in this book have a relevance beyond power amplifiers.
Applications obviously include discrete op-amp-based preamplifiers [24], and extend to any
amplifi er aiming at static or dynamic precision.
My philosophy is the simple one that distortion is bad and high-order distortion is worse. The
first part of this statement is, I suggest, beyond argument, and the second part has a good deal
of evidence to back it. The distortion of the nth harmonic should be weighted by n²/4,
according to many authorities [25]. This leaves the second harmonic unchanged, but scales up the
third by 9/4, i.e. 2.25 times, the fourth by 16/4, i.e. 4 times, and so on. It is clear that even small
amounts of high-order harmonics could be unpleasant, and this is one reason why even modest
crossover distortion is of such concern.
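The n²/4 weighting is easy to apply numerically. The sketch below is an illustration only; the harmonic amplitudes are invented for the example, and the weighting rule is the one cited above:

```python
# Sketch of the n^2/4 harmonic weighting. Each harmonic's amplitude
# is multiplied by n^2/4 before root-sum-squaring, so high-order
# products count for much more than the raw THD figure suggests.
# The example amplitudes are invented purely for illustration.

def weighted_thd(harmonics):
    """harmonics: dict mapping harmonic number n (>= 2) to amplitude
    as a fraction of the fundamental. Returns (plain, weighted) THD."""
    plain = sum(a * a for a in harmonics.values()) ** 0.5
    weighted = sum((a * n * n / 4.0) ** 2
                   for n, a in harmonics.items()) ** 0.5
    return plain, weighted

# Same raw level split two ways: low-order vs. high-order content.
low_order = {2: 0.001, 3: 0.001}    # 2nd and 3rd harmonics
high_order = {7: 0.001, 9: 0.001}   # 7th and 9th harmonics
for name, h in [("low-order", low_order), ("high-order", high_order)]:
    plain, weighted = weighted_thd(h)
    print(f"{name}: plain {plain:.4%}, weighted {weighted:.4%}")
```

Both spectra measure the same plain THD, but the high-order one comes out many times worse after weighting, which is exactly why modest crossover distortion is such a concern.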
Digital audio now routinely delivers the signal with less than 0.002% THD, and I can earnestly
vouch for the fact that analog console designers work furiously to keep the distortion in long
complex signal paths down to similar levels. I think it an insult to allow the very last piece of
electronics in the chain to make nonsense of these efforts.
I would like to make it clear that I do not believe that an amplifier yielding 0.001% THD is going
to sound much better than its fellow giving 0.002%. However, if there is ever a scintilla of doubt
as to what level of distortion is perceptible, then using the techniques I have presented it should be
possible to routinely reduce the THD below the level at which there can be any rational argument.
I am painfully aware that there is a school of thought that regards low THD as inherently immoral,
but this is to confuse electronics with religion. The implication is that very low THD can only be
obtained by huge global NFB factors that require heavy dominant-pole compensation that severely
degrades slew rate; the obvious flaw in this argument is that once the compensation is applied the
amplifier no longer has a large global NFB factor, and so its distortion performance presumably
reverts to mediocrity, further burdened with a slew rate of 4 V per fortnight.
To me low distortion has its own aesthetic and philosophical appeal; it is satisfying to know that the
amplifier you have just designed and built is so linear that there simply is no realistic possibility of it
distorting your favorite material. Most of the linearity-enhancing strategies examined in this book are
of minimal cost (the notable exception being resort to Class-A) compared with the essential heat-sinks,
transformer, etc., and so why not have ultra-low distortion? Why put up with more than you must?
Damping Factor
Audio amplifiers, with a few very special exceptions [26], approximate to perfect voltage sources, i.e.
they aspire to a zero output impedance across the audio band. The result is that amplifier output is unaffected by loading, so that the frequency-variable impedance of loudspeakers does not give an
equally variable frequency response, and there is some control of speaker cone resonances.
While an actual zero impedance is impossible, a very close approximation is possible if large
negative-feedback factors are used. (Actually, a judicious mixture of voltage and current feedback
will make the output impedance zero, or even negative – i.e. increasing the loading makes the
output voltage increase. This is clever, but usually pointless, as will be seen.) Solid-state amplifiers
are quite happy with lots of feedback, but it is usually impractical in valve designs.
Damping factor (DF) is defined as the ratio of the load impedance Rload to the amplifier output
resistance Rout:

Damping factor = Rload / Rout          Equation 1.1
A solid-state amplifier typically has output resistance of the order of 0.05 Ω, so if it drives an 8 Ω
speaker we get a damping factor of 160 times. This simple definition ignores the fact that amplifier
output impedance usually varies considerably across the audio band, increasing with frequency
as the negative feedback factor falls; this indicates that the output resistance is actually more like
an inductive reactance. The presence of an output inductor to give stability with capacitive loads
further complicates the issue.
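To illustrate, Equation 1.1 can be evaluated with an assumed resistive-plus-inductive output impedance; the component values below are plausible assumptions for the sake of the sketch, not taken from any particular design:

```python
import math

# Sketch of Equation 1.1 and its frequency dependence. If the output
# impedance behaves like a small resistance in series with an
# inductance (as the feedback factor falls with frequency), the
# damping factor shrinks toward the top of the audio band.
# Values below are illustrative assumptions.

def damping_factor(r_load, z_out):
    return r_load / z_out

r_load = 8.0   # ohms, nominal speaker impedance
r_out = 0.05   # ohms, low-frequency output resistance (as above)
l_out = 2e-6   # henries, assumed effective output inductance

print(damping_factor(r_load, r_out))  # 160.0 at low frequency

for f in (100.0, 1e3, 10e3, 20e3):
    z = math.hypot(r_out, 2 * math.pi * f * l_out)  # |R + jwL|
    print(f"{f:>7.0f} Hz: DF = {damping_factor(r_load, z):6.1f}")
```

With these assumed values the DF falls from 160 at low frequencies to a few tens at 20 kHz, which is why a single quoted DF figure says little.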
Mercifully, damping factor as such has very little effect on loudspeaker performance. A damping
factor of 160 times, as derived above, seems to imply a truly radical effect on cone response – it
implies that resonances and such have been reduced by 160 times as the amplifier output takes an
iron grip on the cone movement. Nothing could be further from the truth.
The resonance of a loudspeaker unit depends on the total resistance in the circuit. Ignoring the
complexities of crossover circuitry in multi-element speakers, the total series resistance is the sum
of the speaker coil resistance, the speaker cabling and, last of all, the amplifier output impedance.
The values will be typically 7, 0.5, and 0.05 Ω respectively, so the amplifier only contributes 0.67%
to the total, and its contribution to speaker dynamics must be negligible.
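The calculation is trivial but worth making explicit; a one-function sketch using the typical values quoted above:

```python
# Sketch of the series-resistance argument: the damping the amplifier
# can apply to cone resonance is limited by the total resistance in
# the loop, most of which is the voice-coil itself.

def amplifier_share(r_coil, r_cable, r_out):
    """Fraction of the total series resistance contributed by the
    amplifier's output impedance."""
    return r_out / (r_coil + r_cable + r_out)

share = amplifier_share(r_coil=7.0, r_cable=0.5, r_out=0.05)
print(f"amplifier contribution: {share:.2%}")  # about 0.66%
```

Even an amplifier with ten times the output resistance would still contribute well under 10% of the loop resistance, which is why damping factor has so little effect in practice.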
The highest output impedances are usually found in valve equipment, where global feedback
including the output transformer is low or nonexistent; values around 0.5 Ω are usual. However,
idiosyncratic semiconductor designs sometimes also have high output resistances; see Olsher [27] for
a design with Rout = 0.6 Ω, which I feel is far too high.
This view of the matter was practically investigated and fully confirmed by James Moir as far back
as 1950 [28] , though this has not prevented periodic resurgences of controversy.
The only reason to strive for a high damping factor – which can, after all, do no harm – is the
usual numbers game of impressing potential customers with specification figures. It is as certain
as anything can be that the subjective difference between two amplifiers, one with a DF of 100 and
the other boasting 2000, is undetectable by human perception. Nonetheless, the specifications look very different in the brochure, so means of maximizing the DF may be of some interest. This is
examined further in Chapter 8.
Absolute Phase
Concern for absolute phase has for a long time hovered ambiguously between real audio concerns
like noise and distortion, and the subjective realm where solid copper is allegedly audible. Absolute
phase means the preservation of signal phase all the way from microphone to loudspeaker, so that a
drum impact that sends an initial wave of positive pressure towards the live audience is reproduced
as a similar positive pressure wave from the loudspeaker. Since it is known that the neural impulses
from the ear retain the periodicity of the waveform at low frequencies, and distinguish between
compression and rarefaction, there is a prima facie case for the audibility of absolute phase.
It is unclear how this applies to instruments less physical than a kickdrum. For the drum the
situation is simple – you kick it, the diaphragm moves outwards and the start of the transient
must be a wave of compression in the air (followed almost at once by a wave of rarefaction). But
what about an electric guitar? A similar line of reasoning – plucking the string moves it in a given
direction, which gives such and such a signal polarity, which leads to whatever movement of the
cone in the guitar amp speaker cabinet – breaks down at every point in the chain. There is no way
to know how the pickups are wound, and indeed the guitar will almost certainly have a switch for
reversing the phase of one of them. I also suggest that the preservation of absolute phase is not the
prime concern of those who design and build guitar amplifiers.
The situation is even less clear if more than one instrument is concerned, which is of course almost
all the time. It is very difficult to see how two electric guitars played together could have a ‘correct’
phase in which to listen to them.
Recent work on the audibility of absolute phase [29,30] shows it is sometimes detectable. A
single tone flipped back and forth in phase, providing it has a spiky asymmetrical waveform
and an associated harsh sound, will show a change in perceived timbre and, according to some
experimenters, a perceived change in pitch. A monaural presentation has to be used to yield a
clear effect. A complex sound, however, such as that produced by a musical ensemble, does not in
general show a detectable difference.
Proposed standards for the maintenance of absolute phase have just begun to appear [31], and the
implication for amplifier designers is clear; whether absolute phase really matters or not, it is
simple to maintain phase in a power amplifier and so it should be done (compare a complex mixing
console, where correct phase is absolutely vital, and there are hundreds of inputs and outputs, all
of which must be in phase in every possible configuration of every control). In fact, it probably
already has been done, even if the designer has not given absolute phase a thought, because almost
all power amplifiers use series negative feedback, and this is inherently non-inverting. Care is,
however, required if there are stages such as balanced line input amplifiers before the power
amplifier itself; if the hot and cold inputs get swapped by mistake then the amplifier output will be
phase inverted.

Amplifier Formats
When the first edition of this book appeared in 1996, the vast majority of domestic amplifiers were
two-channel stereo units. Since then there has been a great increase in other formats, particularly
in multichannel units having seven or more channels for audio-visual use, and in single-channel
amplifi ers built into subwoofer loudspeakers.
Multichannel amplifiers come in two kinds. The most cost-effective way to build a multichannel
amplifier is to put as many power amplifier channels as convenient on each PCB, and group
them around a large toroidal transformer that provides a common power supply for all of them.
While this keeps the costs down there are inevitable compromises on interchannel crosstalk and
rejection of the transformer’s stray magnetic fields. The other method is to make each channel (or,
in some cases, each pair of channels) into a separate amplifi er module with its own transformer,
power supply, heat-sinks, and separate input and output connections – a sort of multiple-monobloc
format. The modules usually share a microcontroller housekeeping system but nothing else. This
form of construction gives much superior interchannel crosstalk, as the various audio circuits need
have no connection with each other, and much less trouble with transformer hum as the modules
are relatively long and thin so that a row of them can be fitted into a chassis, and thus the mains
transformer can be put right at one end and the sensitive input circuitry right at the other. Inevitably
this is a more expensive form of construction.
Subwoofer amplifiers are single channel and of high power. There seems to be a general consensus
that the quality of subwoofer amplifiers is less critical than that of other amplifiers, and this
has meant that both Class-G and Class-D designs have found homes in subwoofer enclosures.
Subwoofer amplifiers differ from others in that they often incorporate their own specialized
filtering (typically at 200 Hz) and equalization circuitry.