
Power Amplifier Architecture and Negative Feedback

Amplifier Architectures
This grandiose title simply refers to the large-scale structure of the amplifier; that is, the block diagram of the circuit one level below that representing it as a single white block labeled Power Amplifier. Almost all solid-state amplifiers have a three-stage architecture as described below, though they vary in the detail of each stage. Two-stage architectures have occasionally been used, but their distortion performance is not very satisfactory. Four-stage architectures have been used in significant numbers, but they are still much rarer than three-stage designs, and usually involve relatively complex compensation schemes to deal with the fact that there is an extra stage to add phase shift and potentially imperil high-frequency stability.

The Three-Stage Amplifier Architecture

The vast majority of audio amplifiers use the conventional architecture, shown in Figure 2.1, and so it is dealt with first. There are three stages, the first being a transconductance stage (differential voltage in, current out), the second a transimpedance stage (current in, voltage out), and the third a unity-voltage-gain output stage. The second stage clearly has to provide all the voltage gain and I have therefore called it the voltage-amplifier stage or VAS. Other authors have called it the pre-driver stage but I prefer to reserve this term for the first transistors in output triples. This three-stage architecture has several advantages, not least being that it is easy to arrange things so that interaction between stages is negligible. For example, there is very little signal voltage at the input to the second stage, due to its current-input (virtual-earth) nature, and therefore very little on the first stage output; this minimizes Miller phase shift and possible Early effect in the input devices. Similarly, the compensation capacitor reduces the second stage output impedance, so that the nonlinear loading on it due to the input impedance of the third stage generates less distortion than might be expected. The conventional three-stage structure, familiar though it may be, holds several elegant mechanisms such as this. They will be fully revealed in later chapters. Since the amount of linearizing global negative feedback (NFB) available depends upon amplifier open-loop gain, how the stages contribute to this is of great interest. The three-stage architecture always has a unity-gain output stage – unless you really want to make life difficult for yourself – and so the total forward gain is simply the product of the transconductance of the input stage and the transimpedance of the VAS, the latter being determined solely by the Miller capacitor Cdom, except at very low frequencies. Typically, the closed-loop gain will be between 20 and 30 dB. The NFB factor at 20 kHz will be 25-40 dB, increasing at 6 dB/octave with falling frequency until it reaches the dominant pole frequency P1, when it flattens out. What matters for the control of distortion is the amount of NFB available, rather than the open-loop bandwidth, to which it has no direct relationship. In my Electronics World Class-B design, the input stage gm is about 9 mA/V, and Cdom is 100 pF, giving an NFB factor of 31 dB at 20 kHz. In other designs I have used as little as 26 dB (at 20 kHz) with good results.
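As a quick check on these figures, the open-loop gain above the dominant pole is simply gm/(2πf·Cdom), and the NFB factor is that gain divided by the closed-loop gain. The short sketch below illustrates the arithmetic; the closed-loop gain of 23 times assumed here is purely illustrative and may not be the exact figure for the design quoted.

```python
import math

def nfb_factor_db(gm, cdom, closed_loop_gain, freq):
    """Approximate NFB factor above the dominant pole frequency.

    Open-loop gain there is set only by the input-stage transconductance
    and the Miller capacitor: A(f) = gm / (2 * pi * f * Cdom).
    """
    open_loop_gain = gm / (2 * math.pi * freq * cdom)
    return 20 * math.log10(open_loop_gain / closed_loop_gain)

# gm = 9 mA/V and Cdom = 100 pF as quoted above; closed-loop gain of 23x assumed.
print(round(nfb_factor_db(gm=9e-3, cdom=100e-12, closed_loop_gain=23, freq=20e3), 1))
# Roughly 30 dB, in reasonable agreement with the 31 dB quoted for that design.
```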
Compensating a three-stage amplifier is relatively simple; since the pole at the VAS is already dominant, it can be easily increased to lower the HF negative-feedback factor to a safe level. The local NFB working on the VAS through Cdom has an extremely valuable linearizing effect.
The conventional three-stage structure represents at least 99% of the solid-state amplifiers built, and I make no apology for devoting much of this book to its behavior. I am quite sure I have not exhausted its subtleties.

The Two-Stage Amplifier Architecture

In contrast with the three-stage approach, the architecture in Figure 2.2 is a two-stage amplifier, the first stage being once more a transconductance stage, though now without a guaranteed low impedance to accept its output current. The second stage combines VAS and output stage in one block; it is inherent in this scheme that the VAS must double as a phase splitter as well as a generator of raw gain. There are then two quite dissimilar signal paths to the output, and it is not at all clear that trying to break this block down further will assist a linearity analysis. The use of a phase-splitting stage harks back to valve amplifiers, where it was inescapable, as a complementary valve technology has so far eluded us.
Paradoxically, a two-stage amplifier is likely to be more complex in its gain structure than a three-stage. The forward gain depends on the input stage gm, the input stage collector load (because the input stage can no longer be assumed to be feeding a virtual earth) and the gain of the output stage, which will be found to vary in a most unsettling manner with bias and loading. Choosing the compensation is also more complex for a two-stage amplifier, as the VAS/phase splitter has a significant signal voltage on its input and so the usual pole-splitting mechanism that enhances Nyquist stability by increasing the pole frequency associated with the input stage collector will no longer work so well. (I have used the term Nyquist stability, or Nyquist oscillation, throughout this book to denote oscillation due to the accumulation of phase shift in a global NFB loop, as opposed to local parasitics, etc.)
The LF feedback factor is likely to be about 6 dB less with a 4 Ω load, due to lower gain in the output stage. However, this variation is much reduced above the dominant pole frequency, as there is then increasing local NFB acting in the output stage.
Here are two examples of two-stage amplifiers: Linsley-Hood [1] and Olsson [2]. The two-stage amplifier offers little or no reduction in parts cost, is harder to design, and in my experience invariably gives a poor distortion performance.

The Four-Stage Amplifier Architecture
The best-known example of a four-stage architecture is probably that published by Lohstroh and Otala in their influential paper, which was confidently entitled 'An audio power amplifier for ultimate quality requirements' and appeared in December 1973 [3]. A simplified circuit diagram of their design is shown in Figure 2.3. One of their design objectives was the use of a low value of overall feedback, made possible by heavy local feedback in the first three amplifier stages, in the form of emitter degeneration; the closed-loop gain was 32 dB (40 times) and the feedback factor 20 dB, allegedly flat across the audio band. Another objective was the elimination of so-called transient intermodulation distortion, which after many years of argument and futile debate has at last been accepted to mean nothing more than old-fashioned slew-rate limiting. To this end dominant-pole compensation was avoided in this design. The compensation scheme that was used was complex, but basically the lead capacitors C1, C2 and the lead-lag network R19, C3 were intended to cancel out the internal poles of the amplifier. According to Lohstroh and Otala, these lay between 200 kHz and 1 MHz, but after compensation the open-loop frequency response had its first pole at 1 MHz. A final lag compensation network R15, C4 was located outside the feedback loop. An important point is that the third stage was heavily loaded by the two resistors R11, R12. The emitter-follower (EF)-type output stage was biased far into Class-AB by a conventional Vbe-multiplier, drawing 600 mA of quiescent current. As explained later in Chapter 6, this gives poor linearity when you run out of the Class-A region.
You will note that the amplifier uses shunt feedback; this certainly prevents any possibility of common-mode distortion in the input stage, as there is no common-mode voltage, but it does have the frightening drawback of going berserk if the source equipment is disconnected, as there is then a greatly increased feedback factor, and high-frequency instability is pretty much inevitable. Input common-mode nonlinearity is dealt with in Chapter 4, where it is shown that in normal amplifier designs it is of negligible proportions, and certainly not a good reason to adopt overall shunt feedback.
Many years ago I was asked to put a version of this amplifier circuit into production for one of the major hi-fi companies of the time. It was not a very happy experience. High-frequency stability was very doubtful and the distortion performance was distinctly unimpressive, being in line with that quoted in the original paper as 0.09% at 50 W, 1 kHz [3]. After a few weeks of struggle the four-stage architecture was abandoned and a more conventional (and much more tractable) three-stage architecture was adopted instead.
Another version of the four-stage architecture is shown in Figure 2.4; it is a simplified version of a circuit used for many years by another of the major hi-fi companies. There are two differential stages, the second one driving a push-pull VAS Q8, Q9. Once again the differential stages have been given a large amount of local negative feedback in the form of emitter degeneration. Compensation is by the lead-lag network R14, C1 between the two input stage collectors and the two lead-lag networks R15, C2 and R16, C3 that shunt the collectors of Q5, Q7 in the second differential stage. Unlike the Lohstroh and Otala design, series overall feedback was used, supplemented with an op-amp DC servo to control the DC offset at the output.
Having had some experience with this design (no, it's not one of mine) I have to report that while in general the amplifier worked soundly and reliably, it was unduly fussy about transistor types and the distortion performance was not of the best.
The question now obtrudes itself: what is gained by using the greater complexity of a four-stage architecture? So far as I can see at the moment, little or nothing. The three-stage architecture appears to provide as much open-loop gain as can be safely used with a conventional output stage; if more is required then the Miller compensation capacitor can be reduced, which will also improve the maximum slew rates. A four-stage architecture does, however, present some interesting possibilities for using nested Miller compensation, a concept which has been extensively used in op-amps.

Power Amplifier Classes
For a long time the only amplifier classes relevant to high-quality audio were Class-A and Class-AB. This is because valves were the only active devices, and Class-B valve amplifiers generated so much distortion that they were barely acceptable even for public address purposes. All amplifiers with pretensions to high fidelity operated in push-pull Class-A.
Solid-state gives much more freedom of design; all of the amplifier classes below have been commercially exploited. This book deals in detail with Classes A, AB, B, D and G, and this certainly covers the vast majority of solid-state amplifiers. For the other classes plentiful references are given so that the intrigued can pursue matters further. In particular, my book Self On Audio [4] contains a thorough treatment of all known audio amplifier classes, and indeed suggests some new ones.
Class-A
In a Class-A amplifier current flows continuously in all the output devices, which enables the nonlinearities of turning them on and off to be avoided. They come in two rather different kinds, although this is rarely explicitly stated, which work in very different ways. The first kind is simply a Class-B stage (i.e. two emitter-followers working back to back) with the bias voltage increased so that sufficient current flows for neither device to cut off under normal loading. The great advantage of this approach is that it cannot abruptly run out of output current; if the load impedance becomes lower than specified then the amplifier simply takes brief excursions into Class-AB, hopefully with a modest increase in distortion and no seriously audible distress.
The other kind could be called the controlled-current-source (VCIS) type, which is in essence a single emitter-follower with an active emitter load for adequate current-sinking. If this latter element runs out of current capability it makes the output stage clip much as if it had run out of output voltage. This kind of output stage demands a very clear idea of how low an impedance it will be asked to drive before design begins.
Valve textbooks will be found to contain enigmatic references to classes of operation called AB1 and AB2; in the former grid current did not flow for any part of the cycle, but in the latter it did. This distinction was important because the flow of output-valve grid current in AB2 made the design of the previous stage much more difficult.
AB1 or AB2 has no relevance to semiconductors, for in BJTs base current always flows when a device is conducting, while in power FETs gate current never does, apart from charging and discharging internal capacitances.
Class-AB
This is not really a separate class of its own, but a combination of A and B. If an amplifier is biased into Class-B, and then the bias further increased, it will enter AB. For outputs below a certain level both output devices conduct, and operation is Class-A. At higher levels, one device will be turned completely off as the other provides more current, and the distortion jumps upward at this point as AB action begins. Each device will conduct between 50% and 100% of the time, depending on the degree of excess bias and the output level.
Class-AB is less linear than either A or B, and in my view its only legitimate use is as a fallback mode to allow Class-A amplifiers to continue working reasonably when faced with a low load impedance.
Class-B
Class-B is by far the most popular mode of operation, and probably more than 99% of the amplifiers currently made are of this type. Most of this book is devoted to it. My definition of Class-B is that unique amount of bias voltage which causes the conduction of the two output devices to overlap with the greatest smoothness and so generate the minimum possible amount of crossover distortion.
Class-C
Class-C implies device conduction for significantly less than 50% of the time, and is normally only usable in radio work, where an LC circuit can smooth out the current pulses and filter harmonics. Current-dumping amplifiers can be regarded as combining Class-A (the correcting amplifier) with Class-C (the current-dumping devices); however, it is hard to visualize how an audio amplifier using devices in Class-C only could be built. I regard a Class-B stage with no bias voltage as working in Class-C.
Class-D
These amplifiers continuously switch the output from one rail to the other at a supersonic frequency, controlling the mark/space ratio to give an average representing the instantaneous level of the audio signal; this is alternatively called pulse width modulation (PWM). Great effort and ingenuity has been devoted to this approach, for the efficiency is in theory very high, but the practical difficulties are severe, especially so in a world of tightening EMC legislation, where it is not at all clear that a 200 kHz high-power square wave is a good place to start. Distortion is not inherently low [5], and the amount of global negative feedback that can be applied is severely limited by the pole due to the effective sampling frequency in the forward path. A sharp cut-off low-pass filter is needed between amplifier and speaker, to remove most of the RF; this will require at least four inductors (for stereo) and will cost money, but its worst feature is that it will only give a flat frequency response into one specific load impedance.
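The load dependence of that reconstruction filter is easy to demonstrate numerically. A minimal sketch follows, assuming a second-order LC low-pass designed for a roughly Butterworth response into 8 Ω; the component values are purely illustrative and are not taken from any particular Class-D design.

```python
import math

L = 20e-6    # series inductor (illustrative)
C = 150e-9   # shunt capacitor, roughly Butterworth into 8 ohms (illustrative)

def gain_db(freq, r_load):
    """Magnitude of H = 1 / (1 + s*L/R + s^2*L*C) with a resistive load R."""
    w = 2 * math.pi * freq
    h = 1 / complex(1 - w * w * L * C, w * L / r_load)
    return 20 * math.log10(abs(h))

for r in (8, 4):
    print(f"{r} ohm load: {gain_db(20e3, r):+.2f} dB at 20 kHz")
# The top of the audio band droops by about a decibel when the load halves,
# which is the point made above: the response is flat into only one impedance.
```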
Chapter 13 in this book is devoted to Class-D. Important references to consult for further information are Goldberg and Sandler [6] and Hancock [7].
Class-E
This is an extremely ingenious way of operating a transistor so that it has either a small voltage across it or a small current through it almost all the time, so that the power dissipation is kept very low [8]. Regrettably this is an RF technique that seems to have no sane application to audio.
Class-F
There is no Class-F, as far as I know. This seems like a gap that needs filling . . .
Class-G
This concept was introduced by Hitachi in 1976 with the aim of reducing amplifier power dissipation. Musical signals have a high peak/mean ratio, spending most of the time at low levels, so internal dissipation is much reduced by running from low-voltage rails for small outputs, switching to the higher rails for larger excursions [9,10].
The basic series Class-G with two rail voltages (i.e. four supply rails, as both voltages are ±) is shown in Figure 2.5. Current is drawn from the lower ±V1 supply rails whenever possible; should the signal exceed ±V1, TR6 conducts and D3 turns off, so the output current is now drawn entirely from the higher ±V2 rails, with power dissipation shared between TR3 and TR6. The inner stage TR3, TR4 is usually operated in Class-B, although AB or A are equally feasible if the output stage bias is suitably increased. The outer devices are effectively in Class-C as they conduct for significantly less than 50% of the time.
In principle movements of the collector voltage on the inner device collectors should not significantly affect the output voltage, but in practice Class-G is often considered to have poorer linearity than Class-B because of glitching due to charge storage in commutation diodes D3, D4. However, if glitches occur they do so at moderate power, well displaced from the crossover region, and so appear relatively infrequently with real signals.
An obvious extension of the Class-G principle is to increase the number of supply voltages. Typically the limit is three. Power dissipation is further reduced and efficiency increased as the average voltage from which the output current is drawn is kept closer to the minimum. The inner devices operate in Class-B/AB as before, and the middle devices are in Class-C. The outer devices are also in Class-C, but conduct for even less of the time.
To the best of my knowledge three-level Class-G amplifiers have only been made in Shunt mode, as described below, probably because in Series mode the cumulative voltage drops become too great and compromise the efficiency gains. The extra complexity is significant, as there are now six supply rails and at least six power devices, all of which must carry the full output current. It seems most unlikely that this further reduction in power consumption could ever be worthwhile for domestic hi-fi.
A closely related type of amplifier is Class-G Shunt [11]. Figure 2.6 shows the principle; at low outputs only Q3, Q4 conduct, delivering power from the low-voltage rails. Above a threshold set by Vbias3 and Vbias4, D1 or D2 conduct and Q6, Q8 turn on, drawing current from the high-voltage rails, with D3, D4 protecting Q3, Q4 against reverse bias. The conduction periods of the Q6, Q8 Class-C devices are variable, but inherently less than 50%. Normally the low-voltage section runs in Class-B to minimize dissipation. Such shunt Class-G arrangements are often called 'commutating amplifiers'.
Some of the more powerful Class-G Shunt PA amplifiers have three sets of supply rails to further reduce the average voltage drop between rail and output. This is very useful in large PA amplifiers.
Chapter 12 in this book is devoted to Class-G.
Class-H
Class-H is once more basically Class-B, but with a method of dynamically boosting the single supply rail (as opposed to switching to another one) in order to increase efficiency [12]. The usual mechanism is a form of bootstrapping. Class-H is occasionally used to describe Class-G as above; this sort of confusion we can do without.
Class-S
Class-S, so named by Dr Sandman [13], uses a Class-A stage with very limited current capability, backed up by a Class-B stage connected so as to make the load appear as a higher resistance that is within the first amplifier's capability. The method used by the Technics SE-A100 amplifier is extremely similar [14]. I hope that this necessarily brief catalog is comprehensive; if anyone knows of other bona fide classes I would be glad to add them to the collection. This classification does not allow a completely consistent nomenclature; for example, Quad-style current-dumping can only be specified as a mixture of Classes A and C, which says nothing about the basic principle of operation, which is error correction.
Variations on Class-B
The solid-state Class-B three-stage amplifier has proved both successful and flexible, so many attempts have been made to improve it further, usually by trying to combine the efficiency of Class-B with the linearity of Class-A. It would be impossible to give a comprehensive list of the changes and improvements attempted, so I give only those that have been either commercially successful or particularly thought-provoking to the amplifier-design community.
Error-Correcting Amplifiers
This refers to error-cancelation strategies rather than the conventional use of negative feedback. This is a complex field, for there are at least three different forms of error correction, of which the best known is error feedforward as exemplified by the groundbreaking Quad 405 [15]. Other versions include error feedback and other even more confusingly named techniques, some at least of which turn out on analysis to be conventional NFB in disguise. For a highly ingenious treatment of the feedforward method see a design by Giovanni Stochino [16]. A most interesting recent design using the Hawksford correction topology has recently been published by Jan Didden [17].
Non-Switching Amplifiers
Most of the distortion in Class-B is crossover distortion, and results from gain changes in the output stage as the power devices turn on and off. Several researchers have attempted to avoid this by ensuring that each device is clamped to pass a certain minimum current at all times [18]. This approach has certainly been exploited commercially, but few technical details have been published. It is not intuitively obvious (to me, anyway) that stopping the diminishing device current in its tracks will give less crossover distortion (see also Chapter 10).
Current-Drive Amplifiers
Almost all power amplifiers aspire to be voltage sources of zero output impedance. This minimizes frequency-response variations caused by the peaks and dips of the impedance curve, and gives a universal amplifier that can drive any loudspeaker directly.
The opposite approach is an amplifier with a sufficiently high output impedance to act as a constant-current source. This eliminates some problems – such as rising voice-coil resistance with heat dissipation – but introduces others such as control of the cone resonance. Current amplifiers therefore appear to be only of use with active crossovers and velocity feedback from the cone [19].
It is relatively simple to design an amplifier with any desired output impedance (even a negative one), and so any compromise between voltage and current drive is attainable. The snag is that loudspeakers are universally designed to be driven by voltage sources, and higher amplifier impedances demand tailoring to specific speaker types [20].
The Blomley Principle
The goal of preventing output transistors from turning off completely was introduced by Peter Blomley in 1971 [21]; here the positive/negative splitting is done by circuitry ahead of the output stage, which can then be designed so that a minimum idling current can be separately set up in each output device. However, to the best of my knowledge this approach has not yet achieved commercial exploitation.
I have built Blomley amplifiers twice (way back in 1975) and on both occasions I found that there were still unwanted artefacts at the crossover point, and that transferring the crossover function from one part of the circuit to another did not seem to have achieved much. Possibly this was because the discontinuity was narrower than the usual crossover region and was therefore linearized even less effectively by negative feedback that reduces as frequency increases. I did not have the opportunity to investigate very deeply and this is not to be taken as a definitive judgment on the Blomley concept.
Geometric Mean Class-AB
The classical explanations of Class-B operation assume that there is a fairly sharp transfer of control of the output voltage between the two output devices, stemming from an equally abrupt switch in conduction from one to the other. In practical audio amplifier stages this is indeed the case, but it is not an inescapable result of the basic principle. Figure 2.7 shows a conventional output stage, with emitter resistors Re1, Re2 included to increase quiescent-current stability and allow current sensing for overload protection; it is these emitter resistances that to a large extent make classical Class-B what it is.
However, if the emitter resistors are omitted, and the stage biased with two matched diode junctions, then the diode and transistor junctions form a translinear loop [22], around which the junction voltages sum to zero. This links the two output transistor currents Ip, In in the relationship In · Ip = constant, which in op-amp practice is known as Geometric-Mean Class-AB operation. This gives smoother changes in device current at the crossover point, but this does not necessarily mean lower THD. Such techniques are not very practical for discrete power amplifiers; first, in the absence of the very tight thermal coupling between the four junctions that exists in an IC, the quiescent-current stability will be atrocious, with thermal runaway and spontaneous combustion a near certainty. Second, the output device bulk emitter resistance will probably give enough voltage drop to turn the other device off anyway, when current flows. The need for drivers, with their extra junction-drops, also complicates things.
A new extension of this technique is to redesign the translinear loop so that 1/In + 1/Ip = constant, this being known as Harmonic-Mean Class-AB operation [23]. It is too early to say whether this technique (assuming it can be made to work outside an IC) will be of use in reducing crossover distortion and thus improving amplifier performance.
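To see why these constraints give a smoother hand-over than classical Class-B, it is worth solving them for the individual device currents. The sketch below does this for an assumed quiescent current Iq; it is purely illustrative and ignores drivers, emitter resistance, and every other practical complication mentioned above.

```python
import math

def geometric_mean_ab(i_out, i_q):
    """Translinear loop forces Ip * In = Iq^2, with Ip - In = Iout."""
    i_p = (i_out + math.sqrt(i_out ** 2 + 4 * i_q ** 2)) / 2
    return i_p, i_p - i_out

def harmonic_mean_ab(i_out, i_q):
    """Redesigned loop forces 1/Ip + 1/In = 2/Iq, with Ip - In = Iout."""
    s = i_q + math.sqrt(i_q ** 2 + i_out ** 2)   # Ip + In
    return (s + i_out) / 2, (s - i_out) / 2

for i_out in (-0.5, -0.1, 0.0, 0.1, 0.5):        # output current in amps, illustrative
    print(i_out, geometric_mean_ab(i_out, 0.05), harmonic_mean_ab(i_out, 0.05))
# In both schemes neither device current ever falls to zero; each tails off
# smoothly as the other takes over, instead of cutting off abruptly as in
# classical Class-B with emitter resistors.
```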
Nested Differentiating Feedback Loops
This is a most ingenious but conceptually complex technique for significantly increasing the amount of NFB that can be applied to an amplifier. I wish I could tell you how well it works but I have never found the time to investigate it practically. For the original paper see Cherry [24], but it's tough going mathematically. A more readable account was published in Electronics Today International in 1983, and included a practical design for a 60 W NDFL amplifier [25].

Amplifier Bridging
When two power amplifiers are driven with anti-phase signals and the load connected between their outputs, with no connection to ground, this is called bridging. It is a convenient and inexpensive way to turn a stereo amplifier into a more powerful mono amplifier. It is called bridging because if you draw the four output transistors with the load connected between them, it looks something like the four arms of a Wheatstone bridge (see Figure 2.8). Doubling the voltage across a load of the same resistance naturally quadruples the output power – in theory. In harsh reality the available power will be considerably less, due to the power supply sagging and extra voltage losses in the two output stages. In most cases you will get something like three times the power rather than four, the ratio depending on how seriously the bridge mode was regarded when the initial design was done. It has to be said that in many designs the bridging mode looks like something of an afterthought.
In Figure 2.8 an 8 Ω load has been divided into two 4 Ω halves, to underline the point that the voltage at their center is zero, and so both amplifiers are effectively driving 4 Ω loads to ground, with all that that implies for increased distortion and increased losses in the output stages. A unity-gain inverting stage is required to generate the anti-phase signal; nothing fancy is required and the simple shunt-feedback stage shown does the job nicely. I have used it in several products. The resistors in the inverter circuit need to be kept as low in value as possible to reduce their Johnson noise contribution, but not of course so low that the op-amp distortion is increased by driving them; this is not too hard to arrange as the op-amp will only be working over a small fraction of its voltage output capability, because the power amplifier it is driving will clip a long time before the op-amp does. The capacitor assures stability – it causes a roll-off of 3 dB down at 5 MHz, so it does not in any way imbalance the audio frequency response of the two amplifiers.
You sometimes see the statement that bridging reduces the distortion seen across the load because the push-pull action causes cancelation of the distortion products. In brief, it is not true. Push-pull systems can only cancel even-order distortion products, and in a well-found amplifier these are in short supply. In such an amplifier the input stage and the output stage will both be symmetrical (it is hard to see why anyone would choose them to be anything else) and produce only odd-order harmonics, which will not be canceled. The only asymmetrical stage is the VAS, and the distortion contribution from that is, or at any rate should be, very low. In reality, switching to bridging mode will almost certainly increase distortion, because as noted above, the output stages are now in effect driving 4 Ω loads to ground instead of 8 Ω.
Fractional Bridging
I will now tell you how I came to invent the strange practice of 'fractional bridging'. I was tasked with designing a two-channel amplifier module for a multichannel unit. Five of these modules fitted into the chassis, and if each one was made independently bridgeable, you got a very flexible system that could be configured for anywhere between five and ten channels of amplification. The normal output of each amplifier was 85 W into 8 Ω, and the bridged output was about 270 W as opposed to the theoretical 340 W. And now the problem. The next unit up in the product line had modules that gave 250 W into 8 Ω unbridged, and the marketing department felt that having the small modules giving more power than the large ones was really not on; I'm not saying they were wrong. The problem was therefore to create an amplifier that only doubled its power when bridged. Hmm!
One way might have been to develop a power supply with deliberately poor regulation, but this implies a mains transformer with high-resistance windings that would probably have overheating problems. Another possibility was to make the bridged mode switch in a circuit that clipped the input signal before the power amplifiers clipped. The problem is that building a clipping circuit that does not exhibit poor distortion performance below the actual clipping level is actually surprisingly difficult – think about the nonlinear capacitance of signal diodes. I worked out a way to do it, but it took up an amount of PCB area that simply wasn't available. So the ultimate solution was to let one of the power amplifiers do the clipping, which it does cleanly because of the high level of negative feedback, and the fractional bridging concept was born.
Figure 2.9 shows how it works. An inverter is still used to drive the anti-phase amplifier, but now it is configured with a gain G that is less than unity. This means that the in-phase amplifier will clip when the anti-phase amplifier is still well below maximum output, and the bridged output is therefore restricted. Double output power means an output voltage increased by root-2 or 1.41 times, and so the anti-phase amplifier is driven with a signal attenuated by a factor of 0.41, which I call the bridging fraction, giving a total voltage swing across the load of 1.41 times. It worked very well, the product was a considerable success, and no salesmen were plagued with awkward questions about power output ratings.
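The arithmetic behind the 0.41 figure is simple enough to set down explicitly; a minimal sketch, assuming ideal amplifiers and ignoring supply sag:

```python
import math

def bridging_fraction(power_ratio):
    """Gain of the anti-phase amplifier relative to the in-phase one, chosen so
    the bridged pair gives power_ratio times the single-amplifier power."""
    # The load sees (1 + f) times one amplifier's swing, and power goes as
    # voltage squared, so (1 + f)^2 = power_ratio.
    return math.sqrt(power_ratio) - 1

print(bridging_fraction(2))   # 0.414...  double power, the case described above
print(bridging_fraction(4))   # 1.0       full bridging, power quadrupled in theory
```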
There are two possible objections to this cunning plan, the first being that it is obviously inefficient compared with a normal Class-B amplifier. Figure 2.10 shows how the power is dissipated in the pair of amplifiers; this is derived from basic calculations and ignores output stage losses. PdissA is the power dissipated in the in-phase amplifier A, and varies in the usual way for a Class-B amplifier with a maximum at 63% of the maximum voltage output. PdissB is the dissipation in anti-phase amplifier B that receives a smaller drive signal and so never reaches its dissipation maximum; it dissipates more power because it is handling the same current but has more voltage left across the output devices, and this is what makes the overall efficiency low. Ptot is the sum of the two amplifier dissipations. The dotted lines show the output power contribution from each amplifier, and the total output power in the load.
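The curves of Figure 2.10 can be reproduced from first principles. A minimal sketch, assuming ideal Class-B stages, a sinusoidal signal, and no output-stage losses (the same assumptions as the figure):

```python
import math

def bridged_pair(level, frac, v_rail=1.0, r_load=1.0):
    """Dissipation and output power for a fractionally bridged pair of ideal
    Class-B amplifiers.  level = in-phase drive as a fraction of clipping,
    frac = bridging fraction (1 = full bridging, 0 = anti-phase amp idle)."""
    k = (1 + frac) * level * v_rail ** 2 / r_load
    p_diss_a = k * (2 / math.pi - level / 2)         # in-phase amplifier A
    p_diss_b = k * (2 / math.pi - frac * level / 2)  # anti-phase amplifier B
    p_out = ((1 + frac) * level * v_rail) ** 2 / (2 * r_load)
    return p_diss_a, p_diss_b, p_out

for frac in (1.0, 0.41, 0.0):
    pa, pb, po = bridged_pair(1.0, frac)
    print(frac, round(100 * po / (po + pa + pb), 1), "% efficiency at full output")
# Maximum efficiency works out as (1 + frac) * pi / 8: 78.5% for full bridging,
# about 55% for a 0.41 bridging fraction, and 39.3% with the fraction at zero,
# which is the linear variation described in the next paragraph.
```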
The bridging fraction can of course be set to other values to get other maximum outputs. The lower it is, the lower the overall efficiency of the amplifier pair, reaching the limiting value when the bridging fraction is zero. In this (quite pointless) situation the anti-phase amplifier is simply being used as an expensive alternative to connecting one end of the load to ground, and so it dissipates a lot of heat. Figure 2.11 shows how the maximum efficiency (which always occurs at maximum output) varies with the bridging fraction. When it is unity, we get normal Class-B operation and the maximum efficiency is the familiar figure of 78.6%; when it is zero the overall efficiency is halved to 39.3%, with a linear variation between these two extremes.
The second possible objection is that you might think it is a grievous offence against engineering ethics to deliberately restrict the output of an amplifier for marketing reasons, and you might be right, but it kept people employed, including me. Nevertheless, given the current concerns about energy, perhaps this sort of thing should not be encouraged. Chapter 9 gives another example of devious engineering, where I describe how an input clipping circuit (the one I thought up in an attempt to solve this problem, in fact) can be used to emulate the performance of a massive low-impedance power supply or a complicated regulated power supply. I have given semi-serious thought to writing a book called How to Cheat with Amplifiers.

AC- and DC-Coupled Amplifiers
All power amplifiers are either AC-coupled or DC-coupled. The first kind have a single supply rail, with the output biased to be halfway between this rail and ground to give the maximum symmetrical voltage swing; a large DC-blocking capacitor is therefore used in series with the output. The second kind have positive and negative supply rails, and the output is biased to be at 0 V, so no output DC-blocking is required in normal operation.
The Advantages of AC-Coupling
1. The output DC offset is always zero (unless the output capacitor is leaky).
2. It is very simple to prevent turn-on thump by purely electronic means; there is no need for an expensive output relay. The amplifier output must rise up to half the supply voltage at turn-on, but providing this occurs slowly there is no audible transient. Note that in many designs this is not simply a matter of making the input bias voltage rise slowly, as it also takes time for the DC feedback to establish itself, and it tends to do this with a snap action when a threshold is reached. The last AC-coupled power amplifier I designed (which was in 1980, I think) had a simple RC time-constant and diode arrangement that absolutely constrained the VAS collector voltage to rise slowly at turn-on, no matter what the rest of the circuitry was doing – cheap but very effective.
3. No protection against DC faults is required, providing the output capacitor is voltage-rated to withstand the full supply rail. A DC-coupled amplifier requires an expensive and possibly unreliable output relay for dependable speaker protection.
4. The amplifier should be easier to make short-circuit proof, as the output capacitor limits the amount of electric charge that can be transferred each cycle, no matter how low the load impedance. This is speculative; I have no data as to how much it really helps in practice.
5. AC-coupled amplifiers do not in general appear to require output inductors for stability. Large electrolytics have significant equivalent series resistance (ESR) and a little series inductance. For typical amplifier output sizes the ESR will be of the order of 100 mΩ; this resistance is probably the reason why AC-coupled amplifiers rarely had output inductors, as it is often enough resistance to provide isolation from capacitive loading and so gives stability. Capacitor series inductance is very low and probably irrelevant, being quoted by one manufacturer as 'a few tens of nanohenrys'. The output capacitor was often condemned in the past for reducing the low-frequency damping factor (DF), for its ESR alone is usually enough to limit the DF to 80 or so. As explained above, this is not a technical problem because 'damping factor' means virtually nothing.
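The numbers in point 5 take only a line or two to reproduce; a minimal sketch (the 6800 μF value anticipates the capacitor discussed under DC-coupling below):

```python
import math

def damping_factor(load_ohms, source_ohms):
    """Damping factor as conventionally (if not very meaningfully) defined."""
    return load_ohms / source_ohms

def lf_cutoff_hz(c_farads, load_ohms):
    """-3 dB frequency of a series output capacitor into a resistive load."""
    return 1 / (2 * math.pi * c_farads * load_ohms)

print(damping_factor(8, 0.1))               # 100 milliohm ESR alone limits DF to 80
print(round(lf_cutoff_hz(6800e-6, 8), 1))   # 6800 uF into 8 ohms rolls off below about 3 Hz
```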
The Advantages of DC-Coupling
1. No large and expensive DC-blocking capacitor is required. On the other hand, the dual supply
will need at least one more equally expensive reservoir capacitor, and a few extra components
such as fuses.
2. In principle there should be no turn-on thump, as the symmetrical supply rails mean the output voltage does not have to move through half the supply voltage to reach its bias point – it can just stay where it is. In practice the various filtering time-constants used to keep the bias voltages free from ripple are likely to make various sections of the amplifier turn on at different times, and the resulting thump can be substantial. This can be dealt with almost for free, when a protection relay is fitted, by delaying the relay pull-in until any transients are over. The delay required is usually less than a second.
3. Audio is a field where almost any technical eccentricity is permissible, so it is remarkable that AC-coupling appears to be the one technique that is widely regarded as unfashionable and unacceptable. DC-coupling avoids any marketing difficulties.
4. Some potential customers will be convinced that DC-coupled amplifiers give better speaker damping due to the absence of the output capacitor impedance. They will be wrong, as explained in Chapter 1, but this misconception has lasted at least 40 years and shows no sign of fading away.
5. Distortion generated by an output capacitor is avoided. This is a serious problem, as it is not confined to low frequencies, as is the case in small-signal circuitry (see page 212). For a 6800 μF output capacitor driving 40 W into an 8 Ω load, there is significant mid-band third harmonic distortion at 0.0025%, as shown in Figure 2.12. This is at least five times more than the amplifier generates in this part of the frequency range. In addition, the THD rise at the LF end is much steeper than in the small-signal case, for reasons that are not yet clear. There are two cures for output capacitor distortion. The straightforward approach uses a huge output capacitor, far larger in value than required for a good low-frequency response. A 100,000 μF/40 V Aerovox from BHC eliminated all distortion, as shown in Figure 2.13. An allegedly 'audiophile' capacitor gives some interesting results; a Cerafine Supercap of only moderate size (4700 μF/63 V) gave the result shown in Figure 2.14, where the mid-band distortion is gone but the LF distortion rise remains. What special audio properties this component is supposed to have are unknown; as far as I know electrolytics are never advertised as 'low mid-band THD', but that seems to be the case here. The volume of the capacitor case is about twice as great as conventional electrolytics of the same value, so it is possible the crucial difference may be a thicker dielectric film than is usual for this voltage rating.
Either of these special capacitors costs more than the rest of the amplifier electronics put together. Their physical size is large. A DC-coupled amplifier with protective output relay will be a more economical option.
A little-known complication with output capacitors is that their series reactance increases the power dissipation in the output stage at low frequencies. This is counter-intuitive as it would seem that any impedance added in series must reduce the current drawn and hence the power dissipation. In fact it is the load phase shift that increases the amplifier dissipation.
6. The supply currents can be kept out of the ground system. A single-rail AC amplifier has half-wave Class-B currents flowing in the 0 V rail, and these can have a serious effect on distortion and crosstalk performance.

Negative Feedback in Power Amplifiers
It is not the role of this book to step through elementary theory that can be easily found in any number of textbooks. However, correspondence in audio and technical journals shows that considerable confusion exists on negative feedback as applied to power amplifiers; perhaps there is something inherently mysterious in a process that improves almost all performance parameters simply by feeding part of the output back to the input, but inflicts dire instability problems if used to excess. I therefore deal with a few of the less obvious points here; more information is provided in Chapter 8.
The main use of NFB in power amplifiers is the reduction of harmonic distortion, the reduction of output impedance, and the enhancement of supply-rail rejection. There are also analogous improvements in frequency response and gain stability, and reductions in DC drift.
The basic feedback equation is dealt with in a myriad of textbooks, but it is so fundamental to power amplifier design that it is worth a look here. In Figure 2.15, the open-loop amplifier is the big block with open-loop gain A. The negative-feedback network is the block marked β; this could contain anything, but for our purposes it simply scales down its input, multiplying it by β, and is usually in the form of a potential divider. The funny round thing with the cross on is the conventional control theory symbol for a block that adds or subtracts and does nothing else.
Firstly, it is pretty clear that one input to the subtractor is simply Vin, and the other is Vout · β, so subtract these two, multiply by A, and you get the output signal Vout:

Vout = A (Vin − β · Vout),   which rearranges to   Vout / Vin = A / (1 + Aβ)

This is the feedback equation, and it could not be more important. The first thing it shows is that negative feedback stabilizes the gain. In real-life circuitry A is a high but uncertain and variable quantity, while β is firmly fixed by resistor values. Looking at the equation, you can see that the higher A is, the less significant the 1 on the bottom is; the A values cancel out, and so with high A the equation can be regarded as simply:

Vout / Vin = 1 / β

This is demonstrated in Table 2.1, where β is set at 0.04 with the intention of getting a closed-loop gain of 25 times. With a low open-loop gain of 100, the closed-loop gain is only 20, a long way short of 25. But as the open-loop gain increases, the closed-loop gain gets closer to the target. If you look at the bottom two rows, you will see that an increase in open-loop gain of more than a factor of 2 only alters the closed-loop gain by a trivial second decimal place.
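The arithmetic of Table 2.1 takes only a couple of lines to reproduce; the open-loop gain values below are illustrative rather than copied from the table.

```python
def closed_loop_gain(a_ol, beta):
    """The basic feedback equation: Vout/Vin = A / (1 + A * beta)."""
    return a_ol / (1 + a_ol * beta)

beta = 0.04   # aiming for a closed-loop gain of 1/beta = 25 times
for a_ol in (100, 1000, 10_000, 100_000, 200_000):
    print(f"A = {a_ol:>7}   closed-loop gain = {closed_loop_gain(a_ol, beta):.3f}")
# With A = 100 the closed-loop gain is only 20; as A rises the result converges
# on the target of 25, and doubling A from 100,000 to 200,000 changes it by
# only a few thousandths.
```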
In simple circuits with low open-loop gain you just apply negative feedback and that is the end of the matter. In a typical power amplifier, which cannot be operated without NFB, if only because it would be saturated by its own DC offset voltages, there are several stages that may accumulate phase shift, and simply closing the loop usually brings on severe Nyquist oscillation at HF. This is a serious matter, as it will not only burn out any tweeters that are unlucky enough to be connected, but can also destroy the output devices by overheating, as they may be unable to turn off fast enough at ultrasonic frequencies.
The standard cure for this instability is compensation. A capacitor is added, usually in Miller-integrator format, to roll off the open-loop gain at 6 dB/octave, so it reaches unity loop-gain before enough phase shift can build up to allow oscillation. This means the NFB factor varies strongly with frequency, an inconvenient fact that many audio commentators seem to forget.
It is crucial to remember that a distortion harmonic, subjected to a frequency-dependent NFB factor as above, will be reduced by the NFB factor corresponding to its own frequency, not that of its fundamental. If you have a choice, generate low-order rather than high-order distortion harmonics, as the NFB deals with them much more effectively.
Negative feedback can be applied either locally (i.e. to each stage, or each active device) or globally, in other words right around the whole amplifier. Global NFB is more efficient at distortion reduction than the same amount distributed as local NFB, but places much stricter limits on the amount of phase shift that may be allowed to accumulate in the forward path (more on this later in this chapter).
Above the dominant-pole frequency, the VAS acts as a Miller integrator, and introduces a constant 90° phase lag into the forward path. In other words, the output from the input stage must be in quadrature if the final amplifier output is to be in phase with the input, which to a close approximation it is. This raises the question of how the 90° phase shift is accommodated by the negative-feedback loop; the answer is that the input and feedback signals applied to the input stage are there subtracted, and the small difference between two relatively large signals with a small phase shift between them has a much larger phase shift. This is the signal that drives the VAS input of the amplifier.
Solid-state power amplifiers, unlike many valve designs, are almost invariably designed to work at a fixed closed-loop gain. If the circuit is compensated by the usual dominant-pole method, the HF open-loop gain is also fixed, and therefore so is the important negative-feedback factor. This is in contrast to valve amplifiers, where the amount of negative feedback applied was regarded as a variable, and often user-selectable, parameter; it was presumably accepted that varying the negative-feedback factor caused significant changes in input sensitivity. A further complication was serious peaking of the closed-loop frequency response at both LF and HF ends of the spectrum as negative feedback was increased, due to the inevitable bandwidth limitations in a transformer-coupled forward path. Solid-state amplifier designers go cold at the thought of the customer tampering with something as vital as the NFB factor, and such an approach is only acceptable in cases like valve amplification where global NFB plays a minor role.
Some Common Misconceptions about Negative Feedback
All of the comments quoted below have appeared many times in the hi-fi literature. All are wrong.
Negative feedback is a bad thing. Some audio commentators hold that, without qualification, negative feedback is a bad thing. This is of course completely untrue and based on no objective reality. Negative feedback is one of the fundamental concepts of electronics, and to avoid its use altogether is virtually impossible; apart from anything else, a small amount of local NFB exists in every common-emitter transistor because of the internal emitter resistance. I detect here distrust of good fortune; the uneasy feeling that if something apparently works brilliantly then there must be something wrong with it.
A low negative-feedback factor is desirable. Untrue – global NFB makes just about everything better, and the sole effect of too much is HF oscillation, or poor transient behavior on the brink of instability. These effects are painfully obvious on testing and not hard to avoid unless there is something badly wrong with the basic design.
In any case, just what does low mean? One indicator of imperfect knowledge of negative feedback is that the amount enjoyed by an amplifier is almost always badly specified as so many decibels on the very few occasions it is specified at all – despite the fact that most amplifiers have a feedback factor that varies considerably with frequency. A decibel figure quoted alone is meaningless, as it cannot be assumed that this is the figure at 1 kHz or any other standard frequency.
My practice is to quote the NFB factor at 20 kHz, as this can normally be assumed to be above the dominant pole frequency, and so in the region where open-loop gain is set by only two or three components. Normally the open-loop gain is falling at a constant 6 dB/octave at this frequency on its way down to intersect the unity-loop-gain line and so its magnitude allows some judgment as to Nyquist stability. Open-loop gain at LF depends on many more variables such as transistor beta, and consequently has wide tolerances and is a much less useful quantity to know. This is dealt with in more detail in the chapter on voltage-amplifier stages.
Negative feedback is a powerful technique, and therefore dangerous when misused. This bland truism usually implies an audio Rake's Progress that goes something like this: an amplifier has too much distortion, and so the open-loop gain is increased to augment the NFB factor. This causes HF instability, which has to be cured by increasing the compensation capacitance. This in turn reduces the slew-rate capability, and results in a sluggish, indolent, and generally bad amplifier.
The obvious flaw in this argument is that the amplifier so condemned no longer has a high NFB factor, because the increased compensation capacitor has reduced the open-loop gain at HF; therefore feedback itself can hardly be blamed. The real problem in this situation is probably unduly low standing current in the input stage; this is the other parameter determining slew rate.
NFB may reduce low-order harmonics but increases the energy in the discordant higher harmonics. A less common but recurring complaint is that the application of global NFB is a shady business because it transfers energy from low-order distortion harmonics – considered musically consonant – to higher-order ones that are anything but. This objection contains a grain of truth, but appears to be based on a misunderstanding of one article in an important series by Peter Baxandall [26] in which he showed that if you took an amplifier with only second-harmonic distortion, and then introduced NFB around it, higher-order harmonics were indeed generated as the second harmonic was fed back round the loop. For example, the fundamental and the second harmonic intermodulate to give a component at third-harmonic frequency. Likewise, the second and third intermodulate to give the fifth harmonic. If we accept that high-order harmonics should be numerically weighted to reflect their greater unpleasantness, there could conceivably be a rise rather than a fall in the weighted THD when negative feedback is applied.
All active devices, in Class A or B (including FETs, which are often erroneously thought to be purely square law), generate small amounts of high-order harmonics. Feedback could and would generate these from nothing, but in practice they are already there.
The vital point is that if enough NFB is applied, all the harmonics can be reduced to a lower level than without it. The extra harmonics generated, effectively by the distortion of a distortion, are at an extremely low level providing a reasonable NFB factor is used. This is a powerful argument against low feedback factors like 6 dB, which are most likely to increase the weighted THD. For a full understanding of this topic, a careful reading of the Baxandall series is absolutely indispensable.
A low open-loop bandwidth means a sluggish amplifier with a low slew rate. Great confusion exists in some quarters between open-loop bandwidth and slew rate. In truth open-loop bandwidth and slew rate are nothing to do with each other, and may be altered independently. Open-loop bandwidth is determined by compensation Cdom, VAS β, and the resistance at the VAS collector, while slew rate is set by the input stage standing current and Cdom. Cdom affects both, but all the other parameters are independent (see Chapter 3 for more details).
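This independence is easy to make concrete with the standard first-order expressions for a Miller-compensated three-stage amplifier. All the component values below are illustrative assumptions, not taken from any particular design; the point is simply which parameters appear in which expression.

```python
import math

def first_order_estimates(gm, cdom, vas_beta, r_collector, i_tail):
    """Textbook first-order estimates for a dominant-pole-compensated amplifier."""
    a0 = gm * vas_beta * r_collector                           # LF open-loop gain (unity-gain output stage)
    f_p1 = 1 / (2 * math.pi * cdom * vas_beta * r_collector)   # open-loop bandwidth
    slew = i_tail / cdom                                       # input-stage current available into Cdom
    return a0, f_p1, slew

# Assumed values: gm = 9 mA/V, Cdom = 100 pF, VAS beta = 350, tail current 6 mA.
for r_c in (22e3, 4.7e3):
    a0, f_p1, slew = first_order_estimates(9e-3, 100e-12, 350, r_c, 6e-3)
    print(f"Rc = {r_c/1e3:.1f}k: A0 = {a0:.0f}, open-loop bandwidth = {f_p1:.0f} Hz, "
          f"slew rate = {slew / 1e6:.0f} V/us")
# Loading the VAS collector more heavily moves the open-loop bandwidth (and A0)
# around considerably, but the slew rate does not change; only Cdom appears in both.
```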
In an amplifier, there is a maximum amount of NFB you can safely apply at 20 kHz; this does not mean that you are restricted to applying the same amount at 1 kHz, or indeed 10 Hz. The obvious thing to do is to allow the NFB to continue increasing at 6 dB/octave – or faster if possible – as frequency falls, so that the amount of NFB applied doubles with each octave as we move down in frequency, and we derive as much benefit as we can. This obviously cannot continue indefinitely, for eventually open-loop gain runs out, being limited by transistor beta and other factors. Hence the NFB factor levels out at a relatively low and ill-defined frequency; this frequency is the open-loop bandwidth, and for an amplifier that can never be used open-loop, has very little importance.
It is difficult to convince people that this frequency is of no relevance whatever to the speed of amplifiers, and that it does not affect the slew rate. Nonetheless, it is so, and any first-year electronics textbook will confirm this. High-gain op-amps with sub-1 Hz bandwidths and blindingly fast slewing are as common as the grass (if somewhat less cheap) and if that does not demonstrate the point beyond doubt then I really do not know what will.
Limited open-loop bandwidth prevents the feedback signal from immediately following the system input, so the utility of this delayed feedback is limited. No linear circuit can introduce a pure time delay; the output must begin to respond at once, even if it takes a long time to complete its response. In the typical amplifier the dominant-pole capacitor introduces a 90° phase shift between input pair and output at all but the lowest audio frequencies, but this is not a true time delay. The phrase delayed feedback is often used to describe this situation, and it is a wretchedly inaccurate term; if you really delay the feedback to a power amplifier (which can only be done by adding a time-constant to the feedback network rather than the forward path) it will quickly turn into the proverbial power oscillator as sure as night follows day.
Amplifier Stability and NFB
In controlling amplifier distortion, there are two main weapons. The first is to make the linearity of the circuitry as good as possible before closing the feedback loop. This is unquestionably important, but it could be argued it can only be taken so far before the complexity of the various amplifier stages involved becomes awkward. The second is to apply as much negative feedback as possible while maintaining amplifier stability. It is well known that an amplifier with a single time-constant is always stable, no matter how high the feedback factor. The linearization of the VAS by local Miller feedback is a good example. However, more complex circuitry, such as the generic three-stage power amplifier, has more than one time-constant, and these extra poles will cause poor transient response or instability if a high feedback factor is maintained up to the higher frequencies where they start to take effect. It is therefore clear that if these higher poles can be eliminated or moved upward in frequency, more feedback can be applied and distortion will be less for the same stability margins. Before they can be altered – if indeed this is practical at all – they must be found and their impact assessed.
The dominant-pole frequency of an amplifier is, in principle, easy to calculate; the mathematics is very simple (see Chapter 3). In practice, two of the most important factors, the effective beta of the VAS and the VAS collector impedance, are only known approximately, so the dominant pole frequency is a rather uncertain thing. Fortunately this parameter in itself has no effect on amplifier stability. What matters is the amount of feedback at high frequencies.
Things are different with the higher poles. To begin with, where are they? They are caused by
internal transistor capacitances and so on, so there is no physical component to show where the
roll-off is. It is generally regarded as fact that the next poles occur in the output stage, which
will use power devices that are slow compared with small-signal transistors. Taking the Class-B design in Chapter 7, the TO92 MPSA06 devices have an Ft of 100 MHz, the MJE340 drivers
about 15 MHz (for some reason this parameter is missing from the data sheet) and the MJ802
output devices an Ft of 2.0 MHz. Clearly the output stage is the prime suspect. The next question
is at what frequencies these poles exist. There is no reason to suspect that each transistor can be
modeled by one simple pole.

There is a huge body of knowledge devoted to the art of keeping feedback loops stable while optimizing their accuracy; this is called Control Theory, and any technical bookshop will yield some intimidatingly fat volumes called things like ‘Control System Design’. Inside, system
stability is tackled by Laplace-domain analysis, eigenmatrix methods, and joys like the Lyapunov
stability criterion. I think that makes it clear that you need to be pretty good at mathematics to
appreciate this kind of approach.
Even so, it is puzzling that there seems to have been so little application of Control Theory to audio
amplifier design. The reason may be that so much Control Theory assumes that you know fairly
accurately the characteristics of what you are trying to control, especially in terms of poles and zeros.
One approach to appreciating negative feedback and its stability problems is SPICE simulation.
Some SPICE simulators have the ability to work in the Laplace or s-domain, but my own
experiences with this have been deeply unhappy. Otherwise respectable simulator packages output
complete rubbish in this mode. Quite what the issues are here I do not know, but it does seem that
s-domain methods are best avoided. The approach suggested here instead models poles directly as
poles, using RC networks to generate the time-constants. This requires minimal mathematics and
is far more robust. Almost any SPICE simulator – evaluation versions included – should be able to
handle the simple circuit used here.
Figure 2.17 shows the basic model, with SPICE node numbers. The scheme is to idealize the
situation enough to highlight the basic issues and exclude distractions like nonlinearities or
clipping. The forward gain is simply the transconductance of the input stage multiplied by the
transadmittance of the VAS integrator. An important point is that with correct parameter values, the
current from the input stage is realistic, and so are all the voltages.
The input differential amplifier is represented by G. This is a standard SPICE element – the VCIS, or voltage-controlled current source. It is inherently differential, as the output current from Node 4 is the scaled difference between the voltages at Nodes 3 and 7. The scaling factor of 0.009 sets the input stage transconductance (gm) to 9 mA/V, a typical figure for a bipolar input with some local feedback.

Stability in an amplifier depends on the amount of negative feedback available at 20 kHz. This is set at the design stage by choosing the input gm and Cdom, which are the only two factors
affecting the open-loop gain. In simulation it would be equally valid to change gm instead; however,
in real life it is easier to alter Cdom as the only other parameter this affects is slew rate. Changing
input stage transconductance is likely to mean altering the standing current and the amount of local
feedback, which will in turn impact input stage linearity.
The VAS with its dominant pole is modeled by the integrator Evas, which is given a high but finite open-loop gain, so there really is a dominant pole P1 created when the gain demanded becomes equal to that available. With Cdom = 100 pF this is below 1 Hz. With infinite (or as near infinite as SPICE allows) open-loop gain the stage would be a perfect integrator. As explained elsewhere, the amount of open-loop gain available in real versions of this stage is not a well-controlled quantity, and P1 is liable to wander about in the 1 – 100 Hz region; fortunately this has no effect at all on HF stability. Cdom is the Miller capacitor that defines the transadmittance, and since the input stage
has a realistic transconductance Cdom can be set to 100 pF, its usual real-life value. Even with this
simple model we have a nested feedback loop. This apparent complication here has little effect, so
long as the open-loop gain of the VAS is kept high.
The output stage is modeled as a unity-gain buffer, to which we add extra poles modeled by R1,
C1 and R2, C2. Eout1 is a unity-gain buffer internal to the output stage model, added so the second
pole does not load the first. The second buffer Eout2 is not strictly necessary as no real loads are being driven, but it is convenient if extra complications are introduced later. Both are shown here as a part of the output stage but the first pole could equally well be due to input stage limitations instead; the order in which the poles are connected makes no difference to the final output. Strictly
speaking, it would be more accurate to give the output stage a gain of 0.95, but this is so small a
factor that it can be ignored.
The component values used to make the poles are of course completely unrealistic, and chosen
purely to make the maths simple. It is easy to remember that 1 Ω and 1 μF make up a 1 μs time-constant. This is a pole at 159 kHz. Remember that the voltages in the latter half of the circuit are
realistic, but the currents most certainly are not.
The feedback network is represented simply by scaling the output as it is fed back to the input
stage. The closed-loop gain is set to 23 times, which is representative of many power amplifi ers.
Note that this is strictly a linear model, so the slew-rate limiting that is associated with Miller
compensation is not modeled here. It would be done by placing limits on the amount of current
that can flow in and out of the input stage.
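The netlist itself is not reproduced here, but much the same linear model can be put together as a transfer function in a few lines of Python with scipy.signal, as a rough stand-in for the SPICE circuit. The sketch below uses the values from the text (gm of 9 mA/V, a feedback fraction of 1/23, a selectable Cdom, extra output-stage lags added as required); the finite low-frequency gain A0 is an assumed figure chosen only to put P1 below 1 Hz.

import numpy as np
from scipy import signal

gm, beta, A0 = 9e-3, 1 / 23.0, 2e7            # gm, feedback fraction; A0 is assumed

def closed_loop(Cdom, taus=()):
    """Closed-loop transfer function of the Figure 2.17 model: forward path is
    A0/(1 + s/w1), with w1 chosen so the HF gain is gm/(w*Cdom), followed by one
    1/(1 + s*tau) lag per entry in taus; feedback fraction is beta."""
    w1 = gm / (Cdom * A0)                     # dominant pole P1 in rad/s
    num = np.array([A0], dtype=float)
    den = np.array([1.0 / w1, 1.0])
    for tau in taus:                          # extra output-stage poles
        den = np.polymul(den, [tau, 1.0])
    return signal.TransferFunction(num, np.polyadd(den, beta * num))

# Dominant pole only (cf. Figure 2.18): an exponential rise towards 23 V,
# faster for smaller Cdom.
t = np.linspace(0, 5e-6, 2000)
for C in (50e-12, 100e-12, 220e-12):
    tt, y = signal.step(closed_loop(C), T=t)
    print(f"Cdom = {C*1e12:3.0f} pF: output at 1 us = {np.interp(1e-6, tt, y):5.2f} V "
          f"(asymptote 23 V)")

# The same object answers frequency-domain questions (cf. Figure 2.19).
w, mag, _ = signal.bode(closed_loop(100e-12), np.logspace(4, 8, 1000) * 2 * np.pi)
print(f"Closed-loop -3 dB bandwidth with Cdom = 100 pF: "
      f"{w[np.argmin(abs(mag - (mag[0] - 3)))] / (2*np.pi*1e3):.0f} kHz")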
Figure 2.18 shows the response to a 1 V step input, with the dominant pole the only time element
in the circuit. (The other poles are disabled by making C1, C2 = 0.00001 pF, because this is quicker
than changing the actual circuit.) The output is an exponential rise to an asymptote of 23 V, which
is exactly what elementary theory predicts. The exponential shape comes from the way that the
error signal that drives the integrator becomes less as the output approaches the desired level. The
error, in the shape of the output current from G, is the smaller signal shown; it has been multiplied by 1000 to get mA onto the same scale as volts. The speed of response is inversely proportional to the size of Cdom, and is shown here for values of 50 and 220 pF as well as the standard 100 pF.

This
simulation technique works well in the frequency domain, as well as the time domain. Simply tell
SPICE to run an AC simulation instead of a TRANS (transient) simulation. The frequency response
in Figure 2.19 exploits this to show how the closed-loop gain in an NFB amplifier depends on the open-loop gain available. Once more elementary feedback theory is brought to life. The value of Cdom controls the bandwidth, and it can be seen that the values used in the simulation do not give a very extended response compared with a 20 kHz audio bandwidth.

In Figure 2.20, one extra pole P2 at 1.59 MHz (a time-constant of only 100 ns) is added to the output stage, and Cdom stepped through 50, 100 and 200 pF as before: 100 pF shows a slight overshoot that was not there before; with 50 pF there is a serious overshoot that does not bode well for the frequency response. Actually, it’s not that bad; Figure 2.21 returns to the frequency-response
domain to show that an apparently vicious overshoot is actually associated with a very
mild peaking in the frequency domain.
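The same flavour of experiment can be run with the closed_loop() sketch given earlier: add a 100 ns lag for the 1.59 MHz pole, step Cdom, and compare the time-domain overshoot with the height of the peak in the frequency response. The numbers are only indicative, since that sketch only approximates the real netlist.

import numpy as np
from scipy import signal

t = np.linspace(0, 5e-6, 5000)
w = np.logspace(4, 8, 2000) * 2 * np.pi       # 10 kHz to 100 MHz
for C in (50e-12, 100e-12, 200e-12):
    sys = closed_loop(C, taus=(100e-9,))      # extra pole P2 at 1.59 MHz
    _, y = signal.step(sys, T=t)
    _, mag, _ = signal.bode(sys, w)
    print(f"Cdom = {C*1e12:3.0f} pF: overshoot {100 * (y.max() / 23 - 1):5.1f} %, "
          f"response peak {mag.max() - mag[0]:4.2f} dB above the LF gain")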
From here on Cdom is left set to 100 pF, its real value in most cases. In Figure 2.22 P2 is stepped instead, increasing from 100 ns to 5 μs, and while the response gets slower and shows more overshoot, the system does not become unstable. The reason is simple: sustained oscillation (as opposed to transient ringing) in a feedback loop requires positive feedback, which means that a total phase shift of 180° must have accumulated in the forward path, and reversed the phase of the feedback connection. With only two poles in a system the phase shift cannot reach 180°. The VAS integrator gives a dependable 90° phase shift above P1, being an integrator, but P2 is instead a simple lag and can only give 90° phase lag at infinite frequency. So, even this very simple model gives some insight. Real amplifiers do oscillate if Cdom is too small, so we know that the frequency
response of the output stage cannot be meaningfully modeled with one simple lag.
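A two-line check makes the point concrete. With a pure integrator and one simple lag in the loop, the closed-loop characteristic equation is τs² + s + K = 0, and its roots keep negative real parts however large the loop-gain constant K is made; the response can ring, but it cannot break into sustained oscillation. The values below are arbitrary.

import numpy as np

tau = 100e-9                                  # the single extra lag, 100 ns
for K in (1e5, 1e7, 1e9, 1e12):               # ever larger loop-gain constants
    poles = np.roots([tau, 1.0, K])
    print(f"K = {K:.0e}: closed-loop pole real parts {poles.real.round(0)}")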
As President Nixon is alleged to have said: ‘Two wrongs don’t make a right – so let’s see if three will do it!’ Adding in a third pole P3 in the shape of another simple lag gives the possibility of sustained oscillation. This is case A in Table 2.2.

Stepping the value of P2 from 0.1 to 5 μs with P3 = 500 ns in Figure 2.23 shows that damped oscillation is present from the start. Figure 2.23 also shows over 50 μs what happens when the amplifier is made very unstable (there are degrees of this) by setting P2 = 5 μs and P3 = 500 ns. It still takes time for the oscillation to develop, but exponentially diverging oscillation like this is a sure sign of disaster. Even in the short time examined here the amplitude has exceeded a rather theoretical half a kilovolt. In reality oscillation cannot increase indefinitely, if only because the
supply rail voltages would limit the amplitude. In practice slew-rate limiting is probably the major
controlling factor in the amplitude of high-frequency oscillation.

We have now modeled a system that will show instability. But does it do it right? Sadly, no. The oscillation is about 200 kHz, which is a rather lower frequency than is usually seen when an amplifier misbehaves. This low frequency stems from the low P2 frequency we have to use to provoke oscillation; apart from anything else this seems out of line with the known fT of power transistors. Practical amplifiers are likely to take off at around 500 kHz to 1 MHz when Cdom is reduced, and
this seems to suggest that phase shift is accumulating quickly at this sort of frequency. One possible
explanation is that there are a large number of poles close together at a relatively high frequency.
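For those following along with the closed_loop() sketch rather than SPICE, the same three-pole experiment amounts to handing it two extra lags and inspecting the closed-loop poles; a right-half-plane pair means growing oscillation, at roughly the frequency given by its imaginary part. Being only an approximation of the netlist (and dependent on the assumed A0), it will not reproduce the simulated frequency exactly, but the behaviour is the same.

import numpy as np

sys = closed_loop(100e-12, taus=(5e-6, 500e-9))   # the very unstable case above
rhp = sys.poles[sys.poles.real > 0]
if rhp.size:
    print(f"Unstable: growing oscillation near {abs(rhp.imag[0]) / (2*np.pi) / 1e3:.0f} kHz")
else:
    print("Stable: no right-half-plane poles")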
A fourth pole can be simply added to Figure 2.17 by inserting another RC-buffer combination into
the system. With P2 = 0.5 μs and P3 = P4 = 0.2 μs, instability occurs at 345 kHz, which is a step towards a realistic frequency of oscillation. This is case B in Table 2.2.

When a fifth output stage pole is grafted on, so that P3 = P4 = P5 = 0.2 μs the system just oscillates at 500 kHz with P2 set to 0.01 μs. This takes us close to a realistic frequency of oscillation. Rearranging the order of poles so P2 = P3 = P4 = 0.2 μs, while P5 = 0.01 μs, is
tidier, and the stability results are of course the same; this is a linear system so the order does not
matter. This is case C in Table 2.2.

Having P2, P3, and P4 all at the same frequency does not seem very plausible in physical terms, so case D shows what happens when the five poles are staggered in frequency. P2 needs to be increased to 0.3 μs to start the oscillation, which is now at 400 kHz. Case E is another version with five poles, showing that if P5 is reduced P2 needs to be doubled to 0.4 μs for instability to begin.

In the final case F, a sixth pole is added to see if this permitted sustained oscillation above 500 kHz. This seems not to be the case; the highest frequency that could be obtained after a lot of pole twiddling was 475 kHz. This makes it clear that this model is of limited accuracy (as indeed are all models – it is a matter of degree) at high frequencies, and that further refinement is required
to gain further insight.
Maximizing the NFB
Having hopefully freed ourselves from fear of feedback, and appreciating the dangers of using only
a little of it, the next step is to see how much can be used. It is my view that the amount of negative
feedback applied should be maximized at all audio frequencies to maximize linearity, and the only
limit is the requirement for reliable HF stability. In fact, global or Nyquist oscillation is not normally
a difficult design problem in power amplifiers; the HF feedback factor can be calculated simply and accurately, and set to whatever figure is considered safe. (Local oscillations and parasitics are
beyond the reach of design calculations and simulations, and cause much more trouble in practice.)
In classical Control Theory, the stability of a servomechanism is specified by its phase margin, the amount of extra phase shift that would be required to induce sustained oscillation, and its gain margin, the amount by which the open-loop gain would need to be increased for the same result. These concepts are not very useful in audio power amplifier work, where many of the significant time-constants are only vaguely known. However, it is worth remembering that the phase margin will never be better than 90°, because of the phase lag caused by the VAS Miller capacitor;
fortunately this is more than adequate.
In practice designers must use their judgment and experience to determine an NFB factor that
will give reliable stability in production. My own experience leads me to believe that when the
conventional three-stage architecture is used, 30 dB of global feedback at 20 kHz is safe, providing
an output inductor is used to prevent capacitive loads from eroding the stability margins. I would
say that 40 dB was distinctly risky, and I would not care to pin it down any more closely than that.
The 30 dB figure assumes simple dominant-pole compensation with a 6 dB/octave roll-off for the open-loop gain. The phase and gain margins are determined by the angle at which this slope cuts the horizontal unity-loop-gain line. (I am deliberately terse here; almost all textbooks give a very full treatment of this stability criterion.) An intersection at 12 dB/octave is definitely unstable.
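As a back-of-envelope check, using the same assumed gm, Cdom, and closed-loop gain as before: with pure dominant-pole compensation the loop gain falls through unity at roughly gm·β/(2πCdom), and each extra lag of time-constant τ then takes about arctan(2πfcτ) out of the 90° margin. (The lag also reduces the loop-gain magnitude and so moves the crossing down slightly, which actually helps; that second-order effect is ignored here.)

import numpy as np

gm, Cdom, beta = 9e-3, 100e-12, 1 / 23.0
fc = gm * beta / (2 * np.pi * Cdom)           # unity loop-gain frequency, Hz
for tau in (0.0, 100e-9, 500e-9):             # no extra lag, then 100 ns and 500 ns
    pm = 90.0 - np.degrees(np.arctan(2 * np.pi * fc * tau))
    print(f"extra lag {tau*1e9:5.1f} ns: crossing ~{fc/1e3:.0f} kHz, "
          f"phase margin ~{pm:.0f} degrees")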
Working within this, there are two basic ways in which to maximize the NFB factor:
1. While a 12 dB/octave gain slope is unstable, intermediate slopes greater than 6 dB/octave can be
made to work. The maximum usable is normally considered to be 10 dB/octave, which gives a
phase margin of 30°. This may be acceptable in some cases, but I think it cuts it a little fine. The
steeper fall in gain means that more NFB is applied at lower frequencies, and so less distortion is
produced. Electronic circuitry only provides slopes in multiples of 6 dB/octave, so 10 dB/octave requires multiple overlapping time-constants to approximate a straight line at an intermediate
slope. This gets complicated, and this method of maximizing NFB is not popular.
2. The gain slope varies with frequency, so that maximum open-loop gain and hence NFB factor
is sustained as long as possible as frequency increases; the gain then drops quickly, at 12 dB/
octave or more, but flattens out to 6 dB/octave before it reaches the critical unity loop-gain intersection. In this case the stability margins should be relatively unchanged compared with the conventional situation. This approach is dealt with in Chapter 8; a rough numerical sketch of the idea is given below.
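As a crude numerical sketch of the second approach (the real circuit techniques are a matter for Chapter 8), compare a plain 6 dB/octave loop gain with one that falls at 12 dB/octave through the middle of the band but is returned to 6 dB/octave by a zero placed below the unity-gain crossing. The crossing frequency and zero position below are assumed purely for illustration.

import numpy as np

wc = 2 * np.pi * 623e3                        # assumed unity loop-gain frequency
wz = wc / 3                                   # zero placed below the crossing

def plain(w):                                 # 6 dB/octave throughout
    return wc / (1j * w)

def two_pole(w):                              # 12 dB/octave, returning to 6 dB/octave
    return wz * wc * (1 + 1j * w / wz) / (1j * w) ** 2

for name, L in (("plain dominant pole", plain), ("two-pole shaping", two_pole)):
    nfb_20k = 20 * np.log10(abs(L(2 * np.pi * 20e3)))
    pm = 180 + np.degrees(np.angle(L(wc)))    # margin near the unity crossing
    print(f"{name:20s}: NFB at 20 kHz {nfb_20k:4.1f} dB, phase margin ~{pm:.0f} degrees")

Both shapes cross unity loop gain at much the same frequency with a healthy margin, but the shaped version has roughly 20 dB more feedback at 20 kHz, which is the whole point of the exercise.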
Overall Feedback versus Local Feedback
It is one of the fundamental principles of negative feedback that if you have more than one stage in
an amplifier, each with a fixed amount of open-loop gain, it is more effective to close the feedback loop around all the stages, in what is called an overall or global feedback configuration, rather than
applying the feedback locally by giving each stage its own feedback loop. I hasten to add that this
does not mean you cannot or should not use local feedback as well as overall feedback – indeed,
one of the main themes of this book is that it is a very good idea, and indeed probably the only
practical route to very low distortion levels. This is dealt with in more detail in the chapters on
input stages and voltage-amplifi er stages.
It is worth underlining the effectiveness of overall feedback because some of the less informed
audio commentators have been known to imply that overall feedback is in some way decadent or
unhealthy, as opposed to the upright moral rigor of local feedback. The underlying thought, insofar
as there is one, appears to be that overall feedback encloses more stages each with their own phase
shift, and therefore requires compensation which will reduce the maximum slew rate. The truth, as
is usual with this sort of moan, is that this could happen if you get the compensation all wrong; so
get it right – it isn’t hard.
It has been proposed on many occasions that if there is an overall feedback loop, the output stage
should be left outside it. I have tried this, and believe me, it is not a good idea. The distortion
produced by an output stage so operated is jagged and nasty, and I think no one could convince
themselves it was remotely acceptable if they had seen the distortion residuals.
Figure 2.24 shows a negative-feedback system based on that in Figure 2.12, but with two stages. Each has its own open-loop gain A, its own NFB factor β, and its own open-loop error Vd added to the output of the amplifier. We want to achieve the same closed-loop gain of 25 as in Table 2.1 and we will make the wild assumption that the open-loop error of 1 in that table is now distributed equally between the two amplifiers A1 and A2. There are many ways the open- and closed-loop
gains could be distributed between the two sections, but for simplicity we will give each section
a closed-loop gain of 5; this means the conditions on the two sections are identical. The open-loop gains are also equally distributed between the two amplifiers so that their product is equal to column 3 in Table 2.1. The results are shown in Table 2.3: columns 1 – 7 show what’s happening in
each loop, and columns 8 and 9 give the results for the output of the two loops together, assuming
for simplicity that the errors from each section can be simply added together; in other words there
is no partial cancelation due to differing phases and so on.

This final result is compared with the overall feedback case of Table 2.1 in Table 2.4, where
column 1 gives total open-loop gain, and column 2 is a copy of column 7 in Table 2.1 and gives
the closed-loop error for the overall feedback case. Column 3 gives the closed-loop error for the
two-stage feedback case, and it is brutally obvious that splitting the overall feedback situation
into two local feedback stages has been a pretty bad move. With a modest total open-loop gain of
100, the local feedback system is almost twice as bad. Moving up to total open-loop gains that are
more realistic for real power amplifi ers, the factor of deterioration is between six and 40 times – an
amount that cannot be ignored. With higher open-loop gains the ratio gets even worse. Overall
feedback is totally and unarguably superior at dealing with all kinds of amplifier errors, though
in this book distortion is often the one at the front of our minds.
While there is space here to give only one illustration in detail, you may be wondering what
happens if the errors are not equally distributed between the two stages; the signal level at the output of the second stage will be greater than that at the output of the first stage, so it is plausible (but by no means automatically true in the real world) that the second stage will generate more distortion than the first. If this is so, and we stick with the assumption that open-loop gain is equally distributed between the two stages, then the best way to distribute the closed-loop gain is to put most of it in the first stage so we can get as high a feedback factor as possible in the second
stage. As an example, take the case where the total open-loop gain is 40,000.
Assume that all the distortion is in the second stage, so its open-loop error is 1 while that of the
first stage is zero. Now redistribute the total closed-loop gain of 25 so the first stage has a closed-loop
gain of 10 and the second stage has a closed-loop gain of 2.5. This gives a closed-loop error
of 0.0123, which is about half of 0.0244, the result we got with the closed-loop gain equally
distributed. Clearly things have been improved by applying the greater part of the local negative
feedback where it is most needed. But our improved figure is still about 20 times worse than if we
had used overall feedback.
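The arithmetic behind these comparisons is easy to check. The fragment below reproduces the figures quoted above on the same assumptions: a total closed-loop gain of 25, an open-loop error of 1, and the errors of the two local loops simply added.

def cl_error(A, cl_gain, vd):
    """Closed-loop error of one stage: open-loop error divided by (1 + A*beta)."""
    return vd / (1 + A / cl_gain)

for A_total in (100, 40000):
    A_stage = A_total ** 0.5                  # open-loop gain split equally
    overall = cl_error(A_total, 25, 1.0)                                   # one global loop
    local_eq = 2 * cl_error(A_stage, 5, 0.5)                               # two equal local loops
    local_skew = cl_error(A_stage, 10, 0.0) + cl_error(A_stage, 2.5, 1.0)  # gains 10 and 2.5
    print(f"total open-loop gain {A_total:6d}: overall {overall:.4f}, "
          f"equal local loops {local_eq:.4f}, skewed local loops {local_skew:.4f}")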
In a real power amplifier, the situation is of course much more complex than this. To start with, there are usually three rather than two stages, the distortion produced by each one is level-dependent, and in the case of the voltage-amplifier stage the amount of local feedback (and hence
also the amount of overall feedback) varies with frequency. Nonetheless, it will be found that
overall feedback always gives better results.
Maximizing Linearity before Feedback
‘Make your amplifier as linear as possible before applying NFB’ has long been a cliché. It blithely ignores the difficulty of running a typical solid-state amplifier without any feedback, to determine
its basic linearity.
Virtually no dependable advice on how to perform this desirable linearization has been published.
The two factors are the basic linearity of the forward path, and the amount of negative feedback
applied to further straighten it out. The latter cannot be increased beyond certain limits or high-frequency stability is put in peril, whereas there seems no reason why open-loop linearity could not be improved without limit, leading us to what in some senses must be the ultimate goal – a distortionless amplifier. This book therefore takes as one of its main aims the understanding and improvement of open-loop linearity; as it proceeds we will develop circuit blocks culminating in some practical amplifier designs that exploit the techniques presented here.
