It's stunning: mathematics and geometry can be seen in an amazing sequence of numbers. The Fibonacci sequence presents, almost unbelievably, the patterns and orderliness of nature itself.
Nature by Numbers - Fibonacci Sequence
Broken Symmetry : Nature's Numbers Chapter 6
Something in the human mind is attracted to symmetry. Symmetry
appeals to our visual sense, and thereby plays a role in
our sense of beauty. However, perfect symmetry is repetitive
and predictable, and our minds also like surprises, so we
often consider imperfect symmetry to be more beautiful than
exact mathematical symmetry. Nature, too, seems to be
attracted to symmetry, for many of the most striking patterns
in the natural world are symmetric. And nature also seems to
be dissatisfied with too much symmetry, for nearly all the
symmetric patterns in nature are less symmetric than the
causes that give rise to them.
This may seem a strange thing to say; you may recall that
the great physicist Pierre Curie, who with his wife, Marie, discovered
radioactivity, stated the general principle that "effects
are as symmetric as their causes." However, the world is full
of effects that are not as symmetric as their causes, and the
reason for this is a phenomenon known as "spontaneous symmetry
breaking."
Symmetry is a mathematical concept as well as an aesthetic
one, and it allows us to classify different types of regular
pattern and distinguish between them. Symmetry breaking
is a more dynamic idea, describing changes in pattern. Before
we can understand where nature's patterns come from and
how they can change, we must find a language in which to
describe what they are.
What is symmetry?
Let's work our way to the general from the particular. One
of the most familiar symmetric forms is the one inside which
you spend your life. The human body is "bilaterally symmetric,"
meaning that its left half is (near enough) the same as its
right half. As noted, the bilateral symmetry of the human form
is only approximate: the heart is not central, nor are the two
sides of the face identical. But the overall form is very close to
one that has perfect symmetry, and in order to describe the
mathematics of symmetry we can imagine an idealized
human figure whose left side is exactly the same as its right
side. But exactly the same? Not entirely. The two sides of the
figure occupy different regions of space; moreover, the left
side is a reversal of the right-its mirror image.
As soon as we use words like "image," we are already
thinking of how one shape corresponds to the other-of how
you might move one shape to bring it into coincidence with
the other. Bilateral symmetry means that if you reflect the left
half in a mirror, then you obtain the right half. Reflection is a
mathematical concept, but it is not a shape, a number, or a
formula. It is a transformation-that is, a rule for moving
things around.
There are many possible transformations, but most are not
symmetries. To relate the halves correctly, the mirror must be
placed on the symmetry axis, which divides the figure into its
two related halves. Reflection then leaves the human form
invariant-that is, unchanged in appearance. So we have
found a precise mathematical characterization of bilateral
symmetry-a shape is bilaterally symmetric if it is invariant by
reflection. More generally, a symmetry of an object or system
is any transformation that leaves it invariant. This description
is a wonderful example of what I earlier called the "thingification
of processes": the process "move like this" becomes a
thing-a symmetry. This simple but elegant characterization
opens the door to an immense area of mathematics.
There are many different kinds of symmetry. The most
important ones are reflections, rotations, and translations-or,
less formally, flips, turns, and slides. If you take an object in
the plane, pick it up, and flip it over onto its back, you get the
same effect as if you had reflected it in a suitable mirror. To
find where the mirror should go, choose some point on the
original object and look at where that point ends up when the
object is flipped. The mirror must go halfway between the
point and its image, at right angles to the line that joins them
(see figure 3). Reflections can also be carried out in three-dimensional
space, but now the mirror is of a more familiar
kind-namely, a flat surface.
To rotate an object in the plane, you choose a point, called
the center, and turn the object about that center, as a wheel
turns about its hub. The number of degrees through which you
turn the object determines the "size" of the rotation. For example,
imagine a flower with four identical equally spaced petals.
If you rotate the flower 90°, it looks exactly the same, so the
transformation "rotate through a right angle" is a symmetry of
the flower. Rotations can occur in three-dimensional space
too, but now you have to choose a line, the axis, and spin
objects on that axis as the Earth spins on its axis. Again, you
can rotate objects through different angles about the same axis.
FIGURE 3.
Where is the mirror? Given an object and a mirror image of that object, choose any point of the object and the corresponding point of the image. Join them by a line. The mirror must pass through the midpoint of that line, at right angles to it.
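As a concrete rendering of the caption's construction, here is a small sketch in Python (my own illustration, not from the book) that computes the mirror from one point and its image:

```python
def mirror_line(p, q):
    """Given a point p of an object and the corresponding point q of its
    mirror image, return the mirror as (point, direction): the line through
    the midpoint of pq, at right angles to the line joining p and q."""
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    dx, dy = q[0] - p[0], q[1] - p[1]
    return mid, (-dy, dx)  # rotating the join by 90 degrees gives the mirror's direction

# A point at (1, 0) whose image is at (3, 0): the mirror is the vertical
# line through (2, 0).
print(mirror_line((1.0, 0.0), (3.0, 0.0)))  # ((2.0, 0.0), (-0.0, 2.0))
```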
Translations are transformations that slide objects along without rotating them. Think of a tiled bathroom wall. If you take a tile and slide it horizontally just the right distance, it will fit on top of a neighboring tile. That distance is the width of a tile. If you slide it two widths of a tile, or three, or any whole number, it also fits the pattern. The same is true if you slide it in a vertical direction, or even if you use a combination of horizontal and vertical slides. In fact, you can do more than just sliding one tile-you can slide the entire pattern of tiles. Again, the pattern fits neatly on top of its original position only when you use a combination of horizontal and vertical slides through distances that are whole number multiples of the width of a tile.
Reflections capture symmetries in which the left half of a pattern is the same as the right half, like the human body. Rotations capture symmetries in which the same units repeat around circles, like the petals of a flower. Translations capture symmetries in which units are repeated, like a regular array of tiles; the bees' honeycomb, with its hexagonal "tiles," is an excellent naturally occurring example.
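Because a symmetry is just a transformation that leaves an object invariant, flips and turns can be checked mechanically on a finite set of points. A small sketch (my illustration, using the four-petalled flower mentioned earlier):

```python
import math

def rotate(p, deg):
    """Turn a point about the origin through the given angle, rounding away
    floating-point dust so that sets of points can be compared exactly."""
    a = math.radians(deg)
    return (round(p[0] * math.cos(a) - p[1] * math.sin(a), 9),
            round(p[0] * math.sin(a) + p[1] * math.cos(a), 9))

def reflect(p):
    """Flip a point across the vertical axis (a mirror through the origin)."""
    return (-p[0], p[1])

# Petal tips of an idealized four-petalled flower, centered at the origin.
flower = {(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)}

# "Rotate through a right angle" and "reflect in the vertical axis" both map
# the flower onto itself, so both are symmetries; a 45-degree turn is not.
print({rotate(p, 90) for p in flower} == flower)   # True
print({reflect(p) for p in flower} == flower)      # True
print({rotate(p, 45) for p in flower} == flower)   # False
```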
Where do the symmetries of natural patterns come from?
Think of a still pond, so flat that it can be thought of as a mathematical plane, and large enough that it might as well be a plane for all that the edges matter. Toss a pebble into the pond. You see patterns, ripples, circular waves seemingly moving outward away from the point of impact of the pebble. We've all seen this, and nobody is greatly surprised. After all, we saw the cause: it was the pebble. If you don't throw pebbles in, or anything else that might disturb the surface, then you won't get waves. All you'll get is a still, flat, planar pond.
Ripples on a pond are examples of broken symmetry. An ideal mathematical plane has a huge amount of symmetry: every part of it is identical to every other part. You can translate the plane through any distance in any direction, rotate it through any angle about any center, reflect it in any mirror line, and it still looks exactly the same. The pattern of circular ripples, in contrast, has less symmetry. It is symmetric only with respect to rotations about the point of impact of the pebble, and reflections in mirror lines that run through that point. No translations, no other rotations, no other reflections. The pebble breaks the symmetry of the plane, in the sense that after the pebble has disturbed the pond, many of its symmetries are lost. But not all, and that's why we see a pattern.
However, none of this is surprising, because of the pebble. In fact, since the impact of the pebble creates a special point, different from all the others, the symmetries of the ripples are exactly what you would expect. They are precisely the symmetries that do not move that special point. So the symmetry of the pond is not spontaneously broken when the ripples appear, because you can detect the stone that causes the translational symmetries to be lost.
You would be more surprised-a lot more surprised-if a perfectly flat pond suddenly developed a series of concentric circular ripples without there being any obvious cause. You would imagine that perhaps a fish beneath the surface had disturbed it, or that something had fallen in and you had not seen it because it was moving too fast. So strong is the ingrained assumption that patterns must have evident causes that when in 1958 the Russian chemist B. P. Belousov discovered a chemical reaction that spontaneously formed patterns, apparently out of nothing, his colleagues refused to believe him. They assumed that he had made a mistake. They didn't bother checking his work: he was so obviously wrong that checking his work would be a waste of time.
Which was a pity, because he was right. The particular pattern that Belousov discovered existed not in space but in time: his reaction oscillated through a periodic sequence of chemical changes. By 1963, another Russian chemist, A. M. Zhabotinskii, had modified Belousov's reaction so that it formed patterns in space as well. In their honor, any similar chemical reaction is given the generic name "Belousov-Zhabotinskii [or B-Z] reaction." The chemicals used nowadays are different and simpler, thanks to some refinements made by the British reproductive biologist
Jack Cohen and the American mathematical biologist Arthur Winfree, and the experiment is now so simple that it can be done by anyone with access to the necessary chemicals. These are slightly esoteric, but there are only four of them. (The precise recipe is given in the notes to The Collapse of Chaos, by Jack Cohen and Ian Stewart.) In the absence of the appropriate apparatus, I'll tell you what happens if you do the experiment. The chemicals are all liquids: you mix them together in the right order and pour them into a flat dish. The mixture turns blue, then red: let it stand for a while. For ten or sometimes even twenty minutes, nothing happens; it's just like gazing at a featureless flat pond-except that it is the color of the liquid that is featureless, a uniform red. This uniformity is not surprising; after all, you blended the liquids. Then you notice a few tiny blue spots appearing-and that is a surprise. They spread, forming circular blue disks. Inside each disk, a red spot appears, turning the disk into a blue ring with a red center. Both the blue ring and the red disk grow, and when the red disk gets big enough, a blue spot appears inside it. The process continues, forming an ever-growing series of "target patterns"-concentric rings of red and blue. These target patterns have exactly the same symmetries as the rings of ripples on a pond; but this time you can't see any pebble. It is a strange and mysterious process in which pattern-order-appears to arise of its own accord from the disordered, randomly mixed liquid. No wonder the chemists didn't believe Belousov. But that's not the end of the B-Z reaction's party tricks. If you tilt the dish slightly and then put it back where it was, or dip a hot wire into it, you can break the rings and turn them into rotating red and blue spirals. If Belousov had claimed that, you would have seen steam coming out of his colleagues' ears.
This kind of behavior is not just a chemical conjuring trick. The regular beating of your heart relies on exactly the same patterns, but in that case they are patterns in waves of electrical activity. Your heart is not just a lump of undifferentiated muscle tissue, and it doesn't automatically contract all at once. Instead, it is composed of millions of tiny muscle fibers, each one of them a single cell. The fibers contract in response to electrical and chemical signals, and they pass those signals on to their neighbors. The problem is to make sure that they all contract roughly in synchrony, so that the heart beats as a whole. To achieve the necessary degree of synchronization, your brain sends electrical signals to your heart. These signals trigger electrical changes in some of the muscle fibers, which then affect the muscle fibers next to them-so that ripples of activity spread, just like the ripples on a pond or the blue disks in the B-Z reaction. As long as the waves form complete rings, the heart's muscle fibers contract in synchrony and the heart beats normally. But if the waves become spirals-as they can do in diseased hearts-the result is an incoherent set of local contractions, and the heart fibrillates. If fibrillation goes unchecked for more than a few minutes, it results in death. So every single one of us has a vested interest in circular and spiral wave patterns.
However, in the heart, as in the pond, we can see a specific cause for the wave patterns: the signals from the brain. In the B-Z reaction, we cannot: the symmetry breaks spontaneously-"of its own accord"-without any external stimulus.
The term "spontaneous" does not imply that there is no cause, however: it indicates that the cause can be as tiny and as insignificant as you please. Mathematically, the crucial point is that the uniform distribution of chemicals-the featureless red liquid-is unstable. If the chemicals cease to be equally mixed, then the delicate balance that keeps the solution red is upset, and the resulting chemical changes trigger the formation of a blue spot. From that moment on, the whole process becomes much more comprehensible, because now the blue spot acts like a chemical "pebble," creating sequential ripples of chemical activity. But-at least, as far as the mathematics goes-the imperfection in the symmetry of the liquid which triggers the blue spot can be vanishingly small, provided it is not zero. In a real liquid, there are always tiny bits of dust, or bubbles-or even just molecules undergoing the vibrations we call "heat"-to disturb the perfect symmetry.
That's all it takes. An infinitesimal cause produces a large-scale effect, and that effect is a symmetric pattern. Nature's symmetries can be found on every scale, from the structure of subatomic particles to that of the entire universe.
Many chemical molecules are symmetric. The methane molecule is a tetrahedron-a triangular-sided pyramid-with one carbon atom at its center and four hydrogen atoms at its corners. Benzene has the sixfold symmetry of a regular hexagon. The fashionable molecule buckminsterfullerene is a truncated icosahedral cage of sixty carbon atoms. (An icosahedron is a regular solid with twenty triangular faces; "truncated" means that the corners are cut off.) Its symmetry lends it a remarkable stability, which has opened up new possibilities for organic chemistry.
On a slightly larger scale than molecules, we find symmetries in cellular structure; at the heart of cellular replication lies a tiny piece of mechanical engineering. Deep within each living cell, there is a rather shapeless structure known as the centrosome, which sprouts long thin microtubules, basic components of the cell's internal "skeleton," like a diminutive sea urchin. Centrosomes were first discovered in 1887 and play an important role in organizing cell division. However, in one respect the structure of the centrosome is astonishingly symmetric. Inside it are two structures, known as centrioles, positioned at right angles to each other. Each centriole is cylindrical, made from twenty-seven microtubules fused together along their lengths in threes, and arranged with perfect ninefold symmetry. The microtubules themselves also have an astonishingly regular symmetric form. They are hollow tubes, made from a perfect regular checkerboard pattern of units that contain two distinct proteins, alpha- and beta-tubulin. One day, perhaps, we will understand why nature chose these symmetric forms. But it is amazing to see symmetric structures at the core of a living cell.
Viruses are often symmetric, too, the commonest shapes being helices and icosahedrons. The helix is the form of the influenza virus, for instance. Nature prefers the icosahedron above all other viral forms: examples include herpes, chickenpox, human wart, canine infectious hepatitis, turnip yellow mosaic, adenovirus, and many others. The adenovirus is another striking example of the artistry of molecular engineering. It is made from 252 virtually identical subunits, with 21 of them, fitted together like billiard balls before the break, making up each triangular face. (Subunits along the edges lie on more than one face and corner units lie on three, which is why 20 x 21 is not equal to 252.)
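That parenthetical count can be made explicit. A quick sketch of the arithmetic (my reconstruction of the remark above): 21 balls racked in a triangle form rows of 1 through 6, so each face has 6 interior balls of its own, while edge balls are shared by two faces and corner balls by several:

```python
faces, edges, corners = 20, 30, 12   # face, edge, and vertex counts of an icosahedron

per_face = sum(range(1, 7))          # 21 balls racked in rows of 1+2+3+4+5+6
edge_balls = 6 - 2                   # each side of the triangle: 6 balls minus 2 corners
interior = per_face - 3 - 3 * edge_balls  # 21 - 3 corners - 12 edge balls = 6

# Interior balls belong to one face, edge balls to two, corner balls to more,
# so counting each shared subunit exactly once gives:
total = faces * interior + edges * edge_balls + corners
print(per_face * faces, total)       # 420 252: the naive count double-counts shared units
```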
Nature exhibits symmetries on larger scales, too. A developing frog embryo begins life as a spherical cell, then loses symmetry step by step as it divides, until it has become a blastula, thousands of tiny cells whose overall form is again spherical. Then the blastula begins to engulf part of itself, in the process known as gastrulation. During the early stages of this collapse, the embryo has rotational symmetry about an axis, whose position is often determined by the initial distribution of yolk in the egg, or sometimes by the point of sperm entry. Later this symmetry is broken, and only a single mirror symmetry is retained, leading to the bilateral symmetry of the adult.
Volcanoes are conical, stars are spherical, galaxies are spiral or elliptical. According to some cosmologists, the universe itself resembles nothing so much as a gigantic expanding ball. Any understanding of nature must include an understanding of these prevalent patterns. It must explain why they are so common, and why many different aspects of nature show the same patterns. Raindrops and stars are spheres, whirlpools and galaxies are spirals, honeycombs and the Devil's Causeway are arrays of hexagons. There has to be a general principle underlying such patterns; it is not enough just to study each example in isolation and explain it in terms of its own internal mechanisms.
Symmetry breaking is just such a principle. But in order for symmetry to break, it has to be present to start with. At first this would seem to replace one problem of pattern formation with another: before we can explain the circular rings on the pond, in other words, we have to explain the pond. But there is a crucial difference between the rings and the pond. The symmetry of the pond is so extensive-every point on its surface being equivalent to every other-that we do not recognize it as being a pattern. Instead, we see it as bland uniformity. It is very easy to explain bland uniformity: it is what happens to systems when there is no reason for their component parts to differ from each other. It is, so to speak, nature's default option. If something is symmetric, its component features are replaceable or interchangeable. One corner of a square looks pretty much the same as any other, so we can interchange the corners without altering the square's appearance. One atom of hydrogen in methane looks pretty much like any other, so we can interchange those atoms. One region of stars in a galaxy looks pretty much like any other, so we can interchange parts of two different spiral arms without making an important difference.
In short, nature is symmetric because we live in a mass-produced universe-analogous to the surface of a pond. Every electron is exactly the same as every other electron, every proton is exactly the same as every other proton, every region of empty space is exactly the same as every other region of empty space, every instant of time is exactly the same as every other instant of time. And not only is the structure of space, time, and matter the same everywhere: so are the laws that govern them. Albert Einstein made such "invariance principles" the cornerstone of his approach to physics; he based his thinking on the idea that no particular point in spacetime is special. Among other things, this led him to the principle of relativity, one of the greatest physical discoveries ever made.
This is all very well, but it produces a deep paradox. If the laws of physics are the same at all places and at all times, why is there any "interesting" structure in the universe at all? Should it not be homogeneous and changeless? If every place in the universe were interchangeable with every other place, then all places would be indistinguishable; and the same would hold for all times. But they are not. The problem is, if anything, made worse by the cosmological theory that the universe began as a single point, which exploded from nothingness billions of years ago in the big bang. At the instant of the universe's formation, all places and all times were not only indistinguishable but identical. So why are they different now?
The answer is the failure of Curie's Principle, noted at the start of this chapter. Unless that principle is hedged around with some very subtle caveats about arbitrarily tiny causes, it offers a misleading intuition about how a symmetric system should behave. Its prediction that adult frogs should be bilaterally symmetric (because embryonic frogs are bilaterally symmetric, and according to Curie's Principle the symmetry cannot change) appears at first sight to be a great success; but the same argument applied at the spherical blastula stage predicts with equal force that an adult frog should be a sphere.
A much better principle is the exact opposite, the principle of spontaneous symmetry breaking. Symmetric causes often produce less symmetric effects. The evolving universe can break the initial symmetries of the big bang. The spherical blastula can develop into the bilateral frog. The 252 perfectly interchangeable units of adenovirus can arrange themselves into an icosahedron-an arrangement in which some units will occupy special places, such as corners. A set of twenty-seven perfectly ordinary microtubules can get together to create a centriole.
Fine, but why patterns? Why not a structureless mess, in which all symmetries are broken? One of the strongest threads that runs through every study ever made of symmetry breaking is that the mathematics does not work this way. Symmetries break reluctantly. There is so much symmetry lying around in our mass-produced universe that there is seldom a good reason to break all of it. So rather a lot survives. Even those symmetries that do get broken are still present, in a sense, but now as potential rather than actual form. For example, when the 252 units of the adenovirus began to link up, any one of them could have ended up in a particular corner. In that sense, they are interchangeable. But only one of them actually does end up there, and in that sense the symmetry is broken: they are no longer fully interchangeable. But some of the symmetry remains, and we see an icosahedron.
In this view, the symmetries we observe in nature are broken traces of the grand, universal symmetries of our mass-produced universe. Potentially the universe could exist in any of a huge symmetric system of possible states, but actually it must select one of them. In so doing, it must trade some of its actual symmetry for unobservable, potential symmetry. But some of the actual symmetry may remain, and when it does we observe a pattern. Most of nature's symmetric patterns arise out of some version of this general mechanism.
In a negative sort of way, this rehabilitates Curie's Principle: if we permit tiny asymmetric disturbances, which can trigger an instability of the fully symmetric state, then our mathematical system is no longer perfectly symmetric. But the important point is that the tiniest departure from symmetry in the cause can lead to a total loss of symmetry in the resulting effect-and there are always tiny departures. That makes Curie's principle useless for the prediction of symmetries. It is much more informative to model a real system after one with perfect symmetry, but to remember that such a model has many possible states, only one of which will be realized in practice. Small disturbances cause the real system to select states from the range available to the idealized perfect system. Today this approach to the behavior of symmetric systems provides one of the main sources of understanding of the general principles of pattern formation.
In particular, the mathematics of symmetry breaking unifies what at first sight appear to be very disparate phenomena. For example, think about the patterns in sand dunes mentioned in chapter 1. The desert can be modeled as a flat plane of sandy particles, the wind can be modeled as a fluid flowing across the plane. By thinking about the symmetries of such a system, and how they can break, many of the observed patterns of dunes can be deduced. For example, suppose the wind blows steadily in a fixed direction, so that the whole system is invariant under translations parallel to the wind. One way to break these translational symmetries is to create a periodic pattern of parallel stripes, at right angles to the wind direction. But this is the pattern that geologists call transverse dunes. If the pattern also becomes periodic in the direction along the stripes, then more symmetry breaks, and the wavy barchanoid ridges appear. And so on.
However, the mathematical principles of symmetry-breaking do not just work for sand dunes. They work for any system with the same symmetries-anything that flows across a planar surface creating patterns. You can apply the same basic model to a muddy river flowing across a coastal plain and depositing sediment, or the waters of a shallow sea in ebb and flow across the seabed-phenomena important in geology, because millions of years later the patterns that result have been frozen into the rock that the sandy seabed and the muddy delta became. The list of patterns is identical to that for dunes.
Or the fluid might be a liquid crystal, as found in digital-watch displays, which consist of a lot of long thin molecules that arrange themselves in patterns under the influence of a magnetic or electric field. Again, you find the same patterns. Or there might not be a fluid at all: maybe what moves is a chemical, diffusing through tissue and laying down genetic instructions for patterns on the skin of a developing animal. Now the analogue of transverse dunes is the stripes of a tiger or a zebra, and that of barchanoid ridges is the spots on a leopard or a hyena.
The same abstract mathematics; different physical and biological realizations. Mathematics is the ultimate in technology transfer-but with mental technology, ways of thinking, being transferred, rather than machines. This universality of symmetry breaking explains why living systems and nonliving ones have many patterns in common. Life itself is a process of symmetry creation-of replication; the universe of biology is just as mass-produced as the universe of physics, and the organic world therefore exhibits many of the patterns found in the inorganic world. The most obvious symmetries of living organisms are those of form-icosahedral viruses, the spiral shell of Nautilus, the helical horns of gazelles, the remarkable rotational symmetries of starfish and jellyfish and flowers. But symmetries in the living world go beyond form into behavior-and not just the symmetric rhythms of locomotion I mentioned earlier. The territories of fish in Lake Huron are arranged just like the cells in a honeycomb-and for the same reasons. The territories, like the bee grubs, cannot all be in the same place-which is what perfect symmetry would imply. Instead, they pack themselves as tightly as they can without one being different from another, and the behavioral constraint by itself produces a hexagonally symmetric tiling. And that resembles yet another striking instance of mathematical technology transfer, for the same symmetry breaking mechanism arranges the atoms of a crystal into a regular lattice-a physical process that ultimately supports Kepler's theory of the snowflake.
One of the more puzzling types of symmetry in nature is mirror symmetry, symmetry with respect to a reflection. Mirror symmetries of three-dimensional objects cannot be realized by turning the objects in space-a left shoe cannot be turned into a right shoe by rotating it. However, the laws of physics are very nearly mirror-symmetric, the exceptions being certain interactions of subatomic particles. As a result, any molecule that is not mirror-symmetric potentially exists in two different forms-left- and right-handed, so to speak.
On Earth, life has selected a particular molecular handedness: for example, for amino acids. Where does this particular handedness of terrestrial life come from? It could have been just an accident-primeval chance propagated by the mass-production techniques of replication. If so, we might imagine that on some distant planet, creatures exist whose molecules are mirror images of ours. On the other hand, there may be a deep reason for life everywhere to choose the same direction. Physicists currently recognize four fundamental forces in nature: gravity, electromagnetism, and the strong and weak nuclear interactions. It is known that the weak force violates mirror symmetry-that is, it behaves differently in left- or right-handed versions of the same physical problem. As the Austrian-born physicist Wolfgang Pauli put it, "The Lord is a weak left-hander." One remarkable consequence of this violation of mirror symmetry is the fact that the energy levels of molecules and those of their mirror images are not exactly equal. The effect is extremely small: the difference in energy levels between one particular amino acid and its mirror image is roughly one part in 10^17. This may seem very tiny-but we saw that symmetry breaking requires only a very tiny disturbance. In general, lower-energy forms of molecules should be favored in nature. For this amino acid, it can be calculated that with 98% probability the lower-energy form will become dominant within a period of about a hundred thousand years. And indeed, the version of this amino acid which is found in living organisms is the lower-energy one.
In chapter 5, I mentioned the curious symmetry of Maxwell's equations relating electricity and magnetism. Roughly speaking, if you interchange all the symbols for the electric field with those for the magnetic field, you re-create the same equations. This symmetry lies behind Maxwell's unification of electrical and magnetic forces into a single electromagnetic force. There is an analogous symmetry-though an imperfect one-in the equations for the four basic forces of nature, suggesting an even grander unification: that all four forces are different aspects of the same thing. Physicists have already achieved a unification of the weak and electromagnetic forces. According to current theories, all four fundamental forces should become unified-that is, symmetrically related-at the very high energy levels prevailing in the early universe. This symmetry of the early universe is broken in our own universe. In short, there is an ideal mathematical universe in which all of the fundamental forces are related in a perfectly symmetric manner-but we don't live in it.
That means that our universe could have been different; it could have been any of the other universes that, potentially, could arise by breaking symmetry in a different way. That's quite a thought. But there is an even more intriguing thought: the same basic method of pattern formation, the same mechanism of symmetry breaking in a mass-produced universe, governs the cosmos, the atom, and us.
The Rhythm of Life : Nature's Numbers Chapter 7
Nature is nothing if not rhythmic, and its rhythms are many
and varied. Our hearts and lungs follow rhythmic cycles
whose timing is adapted to our body's needs. Many of
nature's rhythms are like the heartbeat: they take care of
themselves, running "in the background." Others are like
breathing: there is a simple "default" pattern that operates as
long as nothing unusual is happening, but there is also a more
sophisticated control mechanism that can kick in when necessary
and adapt those rhythms to immediate needs. Controllable
rhythms of this kind are particularly common-and particularly
interesting-in locomotion. In legged animals, the
default patterns of motion that occur when conscious control
is not operating are called gaits.
Until the development of high-speed photography, it was
virtually impossible to find out exactly how an animal's legs
moved as it ran or galloped: the motion is too fast for the
human eye to discern. Legend has it that the photographic
technique grew out of a bet on a horse. In the 1870s, the railroad
tycoon Leland Stanford bet twenty-five thousand dollars
that at some times a trotting horse has all four feet completely
off the ground. To settle the issue, a photographer, who was
born Edward Muggeridge but changed his name to Eadweard
Muybridge, photographed the different phases of the gait of
the horse, by placing a line of cameras with tripwires for the
horse to trot past. Stanford, it is said, won his bet. Whatever
the truth of the story, we do know that Muybridge went on to
pioneer the scientific study of gaits. He also adapted a
mechanical device known as the zoetrope to display them as
"moving pictures," a road that in short order led to Hollywood.
So Muybridge founded both a science and an art.
Most of this chapter is about gait analysis, a branch of
mathematical biology that grew up around the questions
"How do animals move?" and "Why do they move like that?"
To introduce a little more variety, the rest is about rhythmic
patterns that occur in entire animal populations, one dramatic
example being the synchronized flashing of some species of
fireflies, which is seen in some regions of the Far East, including
Thailand. Although biological interactions that take place
in individual animals are very different from those that take
place in populations of animals, there is an underlying mathematical
unity, and one of the messages of this chapter is that
the same general mathematical concepts can apply on many
different levels and to many different things. Nature respects
this unity, and makes good use of it.
The organizing principle behind many such biological
cycles is the mathematical concept of an oscillator-a unit
whose natural dynamic causes it to repeat the same cycle of
behavior over and over again. Biology hooks together huge
"circuits" of oscillators, which interact with each other to create
complex patterns of behavior. Such "coupled oscillator
networks" are the unifying theme of this chapter.
Why do systems oscillate at all? The answer is that this is
the simplest thing you can do if you don't want, or are not
allowed, to remain still. Why does a caged tiger pace up and
down? Its motion results from a combination of two constraints.
First, it feels restless and does not wish to sit still.
Second, it is confined within the cage and cannot simply disappear
over the nearest hill. The simplest thing you can do
when you have to move but can't escape altogether is to oscillate.
Of course, there is nothing that forces the oscillation to
repeat a regular rhythm; the tiger is free to follow an irregular
path around the cage. But the simplest option-and therefore
the one most likely to arise both in mathematics and in
nature-is to find some series of motions that works, and
repeat it over and over again. And that is what we mean by a
periodic oscillation. In chapter 5, I described the vibration of
a violin string. That, too, moves in a periodic oscillation, and
it does so for the same reasons as the tiger. It can't remain still
because it has been plucked, and it can't get away altogether
because its ends are pinned down and its total energy cannot
increase.
Many oscillations arise out of steady states. As conditions
change, a system that has a steady state may lose it and begin
to wobble periodically. In 1942, the German mathematician
Eberhard Hopf found a general mathematical condition that
guarantees such behavior: in his honor, this scenario is
known as Hopf bifurcation. The idea is to approximate the
dynamics of the original system in a particularly simple way,
and to see whether a periodic wobble arises in this simplified
system. Hopf proved that if the simplified system wobbles,
then so does the original system. The great advantage of this
method is that the mathematical calculations are carried out
only for the simplified system, where they are relatively
straightforward, whereas the result of those calculations tells
us how the original system behaves. It is difficult to tackle the
original system directly, and Hopf's approach sidesteps the
difficulties in a very effective manner.
The word "bifurcation" is used because of a particular
mental image of what is happening, in which the periodic
oscillations "grow out from" the original steady state like a
ripple on a pond growing out from its center. The physical
interpretation of this mental picture is that the oscillations are
very small to start with, and steadily become larger. The
speed with which they grow is unimportant here.
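For readers who want a formula to anchor the idea, the simplest standard example (my addition; the text gives none) is the normal form of the Hopf bifurcation, written for a complex variable $z$ with control parameter $\mu$ and frequency $\omega$:

$$\dot{z} = (\mu + i\omega)\,z - |z|^{2}z$$

For $\mu < 0$ the steady state $z = 0$ is stable; as $\mu$ passes through zero it loses stability, and a periodic orbit of amplitude $|z| = \sqrt{\mu}$ grows out of it, small at first and steadily larger, which is exactly the expanding-ripple picture just described.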
For example, the sounds made by a clarinet depend on
Hopf bifurcation. As the clarinetist blows air into the instrument,
the reed-which was stationary-starts to vibrate. If the
air flows gently, the vibration is small and produces a soft
note. If the musician blows harder, the vibration grows and
the note becomes louder. The important thing is that the
musician does not have to blow in an oscillatory way (that is,
in a rapid series of short puffs) to make the reed oscillate.
This is typical of Hopf bifurcation: if the simplified system
passes Hopf's mathematical test, then the real system will
begin to oscillate of its own accord. In this case, the simplified
system can be interpreted as a fictitious mathematical
clarinet with a rather simple reed, although such an interpretation
is not actually needed to carry out the calculations.
Hopf bifurcation can be seen as a special type of symmetry
breaking. Unlike the examples of symmetry breaking
described in the previous chapter, the symmetries that break
relate not to space but to time. Time is a single variable, so
mathematically it corresponds to a line-the time axis. There
are only two types of line symmetry: translations and reflections.
What does it mean for a system to be symmetric under
time translation? It means that if you observe the motion of
the system and then wait for some fixed interval and observe
the motion of the system again, you will see exactly the same
behavior. That is a description of periodic oscillations: if you
wait for an interval equal to the period, you see exactly the
same thing. So periodic oscillations have time-translation
symmetry.
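In symbols (my notation, not the book's): a motion $x(t)$ has time-translation symmetry with period $T$ precisely when

$$x(t + T) = x(t) \quad \text{for all } t$$

so that shifting your observations by one period changes nothing.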
What about reflectional symmetries of time? Those correspond
to reversing the direction in which time flows, a more
subtle and philosophically difficult concept. Time reversal is
peripheral to this chapter, but it is an extremely interesting
question, which deserves to be discussed somewhere, so why
not here? The law of motion is symmetric under time reversal.
If you make a film of any "legal" physical motion (one that
obeys the laws), and run the movie backward, what you see is
also a legal motion. However, the legal motions common in
our world often look bizarre when run backward. Raindrops
falling from the sky to create puddles are an everyday sight;
puddles that spit raindrops skyward and vanish are not. The
source of the difference lies in the initial conditions. Most initial
conditions break time-reversal symmetry. For example,
suppose we decide to start with raindrops falling downward.
This is not a time-symmetric state: its time reversal would
have raindrops falling upward. Even though the laws are
time-reversible, the motion they produce need not be, because
once the time-reversal symmetry has been broken by the
choice of initial conditions, it remains broken.
Back to the oscillators. I've now explained that periodic
oscillations possess time-translation symmetry, but I haven't yet
told you what symmetry is broken to create that pattern. The
answer is "all time translations." A state that is invariant under
these symmetries must look exactly the same at all instants of
time-not just intervals of one period. That is, it must be a
steady state. So when a system whose state is steady begins to
oscillate periodically, its time-translational symmetries decrease
from all translations to only translations by a fixed interval.
This all sounds rather theoretical. However, the realization
that Hopf bifurcation is really a case of temporal symmetry
breaking has led to an extensive theory of Hopf bifurcation in
systems that have other symmetries as well-especially spatial
ones. The mathematical machinery does not depend on
particular interpretations and can easily work with several
different kinds of symmetry at once. One of the success stories
of this approach is a general classification of the patterns
that typically set in when a symmetric network of oscillators
undergoes a Hopf bifurcation, and one of the areas to which it
has recently been applied is animal locomotion.
Two biologically distinct but mathematically similar types
of oscillator are involved in locomotion. The most obvious
oscillators are the animal's limbs, which can be thought of as
mechanical systems-linked assemblies of bones, pivoting at
the joints, pulled this way and that by contracting muscles.
The main oscillators that concern us here, however, are to be
found in the creature's nervous system, the neural circuitry
that generates the rhythmic electrical signals that in turn stimulate
and control the limbs' activity. Biologists call such a circuit
a CPG, which stands for "central pattern generator." Correspondingly,
a student of mine took to referring to a limb by
the acronym LEG, allegedly for "locomotive excitation generator."
Animals have two, four, six, eight, or more LEGs, but we
know very little directly about the CPGs that control them, for
reasons I shall shortly explain. A lot of what we do know has
been arrived at by working backward-or forward, if you
like-from mathematical models.
Some animals possess only one gait-only one rhythmic
default pattern for moving their limbs. The elephant, for
example, can only walk. When it wants to move faster, it
ambles-but an amble is just a fast walk, and the patterns of
leg movement are the same. Other animals possess many different
gaits; take the horse, for example. At low speeds, horses
walk; at higher speeds, they trot; and at top speed they gallop.
Some insert yet another type of motion, a canter, between a
trot and a gallop. The differences are fundamental: a trot isn't
just a fast walk but a different kind of movement altogether.
In 1965, the American zoologist Milton Hildebrand
noticed that most gaits possess a degree of symmetry. That is,
when an animal bounds, say, both front legs move together
and both back legs move together; the bounding gait preserves
the animal's bilateral symmetry. Other symmetries are more
subtle: for example, the left half of a camel may follow the
same sequence of movements as the right, but half a period
out of phase-that is, after a time delay equal to half the
period. So the pace gait has its own characteristic symmetry:
"reflect left and right, and shift the phase by half a period."
You use exactly this type of symmetry breaking to move yourself
around: despite your bilateral symmetry, you don't move
both legs simultaneously! There's an obvious advantage to
bipeds in not doing so: if they move both legs slowly at the
same time they fall over.
The seven most common quadrupedal gaits are the trot,
pace, bound, walk, rotary gallop, transverse gallop, and canter.
In the trot, the legs are in effect linked in diagonal pairs.
First the front left and back right hit the ground together, then
the front right and back left. In the bound, the front legs hit
the ground together, then the back legs. The pace links the
movements fore and aft: the two left legs hit the ground, then
the two right. The walk involves a more complex but equally
rhythmic pattern: front left, back right, front right, back left,
then repeat. In the rotary gallop, the front legs hit the ground
almost together, but with the right (say) very slightly later
than the left; then the back legs hit the ground almost
together, but this time with the left very slightly later than the
right. The transverse gallop is similar, but the sequence is
reversed for the rear legs. The canter is even more curious:
first front left, then back right, then the other two legs simultaneously.
There is also a rarer gait, the pronk, in which all
four legs move simultaneously.
The pronk is uncommon, outside of cartoons, but is sometimes
seen in young deer. The pace is observed in camels, the
bound in dogs; cheetahs use the rotary gallop to travel at top
speed. Horses are among the more versatile quadrupeds,
using the walk, trot, transverse gallop, and canter, depending
on circumstances.
The ability to switch gaits comes from the dynamics of
CPGs. The basic idea behind CPG models is that the rhythms
and the phase relations of animal gaits are determined by the
natural oscillation patterns of relatively simple neural circuits.
What might such a circuit look like? Trying to locate a
specific piece of neural circuitry in an animal's body is like
searching for a particular grain of sand in a desert: to map out
the nervous system of all but the simplest of animals is well
beyond the capabilities even of today's science. So we have
to sneak up on the problem of CPG design in a less direct
manner.
One approach is to work out the simplest type of circuit
that might produce all the distinct but related symmetry patterns
of gaits. At first, this looks like a tall order, and we
might be forgiven if we tried to concoct some elaborate structure
with switches that effected the change from one gait to
another, like a car gearbox. But the theory of Hopf bifurcation
tells us that there is a simpler and more natural way. It turns
out that the symmetry patterns observed in gaits are strongly
reminiscent of those found in symmetric networks of oscillators.
Such networks naturally possess an entire repertoire of
symmetry-breaking oscillations, and can switch between
them in a natural manner. You don't need a complicated gearbox.
For example, a network representing the CPG of a biped
requires only two identical oscillators, one for each leg. The
mathematics shows that if two identical oscillators are coupled-
connected so that the state of each affects that of the
other-then there are precisely two typical oscillation patterns.
One is the in-phase pattern, in which both oscillators
behave identically. The other is the out-of-phase pattern, in
which both oscillators behave identically except for a half-period
phase difference. Suppose that this signal from the
CPG is used to drive the muscles that control a biped's legs,
by assigning one leg to each oscillator. The resulting gaits
inherit the same two patterns. For the in-phase oscillation of
the network, both legs move together: the animal performs a
two-legged hopping motion, like a kangaroo. In contrast, the
out-of-phase motion of the CPG produces a gait resembling
the human walk. These two gaits are the ones most commonly
observed in bipeds. (Bipeds can, of course, do other things;
for example, they can hop along on one leg-but in that case
they effectively turn themselves into one-legged animals.)
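Here is a minimal numerical sketch of that two-oscillator network (an illustration of the mathematics, not a biological model; the form and sign of the coupling are my assumptions). With symmetric coupling, the phase difference settles to either zero (hop) or half a period (walk):

```python
import math

def settle(theta1, theta2, k, omega=2 * math.pi, dt=0.001, t_max=20.0):
    """Integrate two identical, symmetrically coupled phase oscillators and
    return their final phase difference as a fraction of one period."""
    for _ in range(int(t_max / dt)):
        d1 = omega + k * math.sin(theta2 - theta1)
        d2 = omega + k * math.sin(theta1 - theta2)
        theta1, theta2 = theta1 + d1 * dt, theta2 + d2 * dt
    return ((theta2 - theta1) / (2 * math.pi)) % 1.0

# One sign of coupling selects the in-phase (hopping) pattern, the other the
# out-of-phase (walking) pattern; these are the only two typical outcomes.
print(round(settle(0.3, 2.9, k=+1.0), 3))  # ~0.0: legs move together
print(round(settle(0.3, 2.9, k=-1.0), 3))  # ~0.5: half a period apart
```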
What about quadrupeds? The simplest model is now a system
of four coupled oscillators-one for each leg. Now the
mathematics predicts a greater variety of patterns, and nearly
all of them correspond to observed gaits. The most symmetric
gait, the pronk, corresponds to all four oscillators being synchronized-
that is, to unbroken symmetry. The next most
symmetric gaits-the bound, the pace, and the trot-correspond
to grouping the oscillators as two out-of-phase pairs:
front/back, left/right, or diagonally. The walk is a circulating
figure-eight pattern and, again, occurs naturally in the mathematics.
The two kinds of gallop are more subtle. The rotary
gallop is a mixture of pace and bound, and the transverse gallop
is a mixture of bound and trot. The canter is even more
subtle and not as well understood.
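A compact way to record these patterns is by the relative phase, as a fraction of one period, at which each leg hits the ground. The tabulation below is my own, idealized from the verbal descriptions above rather than taken from measured data:

```python
# Relative phase (fraction of one period) at which each leg hits the ground,
# in the order (left front, right front, left hind, right hind).
GAITS = {
    "pronk": (0.0, 0.0, 0.0, 0.0),    # all four together: unbroken symmetry
    "bound": (0.0, 0.0, 0.5, 0.5),    # front pair, then back pair
    "pace":  (0.0, 0.5, 0.0, 0.5),    # left pair, then right pair
    "trot":  (0.0, 0.5, 0.5, 0.0),    # diagonal pairs
    "walk":  (0.0, 0.5, 0.75, 0.25),  # legs a quarter period apart in turn
}
# The gallops are near-misses of these patterns: a rotary gallop is roughly
# a bound with one side lagging slightly, mixing the bound and pace.
```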
The theory extends readily to six-legged creatures such as
insects. For example, the typical gait of a cockroach-and,
indeed, of most insects-is the tripod, in which the middle
leg on one side moves in phase with the front and back legs
on the other side, and then the other three legs move together,
half a period out of phase with the first set. This is one of the
natural patterns for six oscillators connected in a ring.
The symmetry-breaking theory also explains how animals
can change gait without having a gearbox: a single network of
oscillators can adopt different patterns under different conditions.
The possible transitions between gaits are also organized
by symmetry. The faster the animal moves, the less
symmetry its gait has: more speed breaks more symmetry. But
an explanation of why they change gait requires more detailed
information on physiology. In 1981, D. F. Hoyt and R. C. Taylor
discovered that when horses are permitted to select their
own speeds, depending on terrain, they choose whichever
gait minimizes their oxygen consumption.
I've gone into quite a lot of detail about the mathematics of
gaits because it is an unusual application of modern mathematical
techniques in an area that at first sight seems totally
unrelated. To end this chapter, I want to show you another
application of the same general ideas, except that in this case
it is biologically important that symmetry not be broken.
One of the most spectacular displays in the whole of
nature occurs in Southeast Asia, where huge swarms of fireflies
flash in synchrony. In his 1935 article "Synchronous
Flashing of Fireflies" in the journal Science, the American
biologist Hugh Smith provides a compelling description of
the phenomenon:
Imagine a tree thirty-five to forty feet high, apparently with a
firefly on every leaf, and all the fireflies flashing in perfect unison
at the rate of about three times in two seconds, the tree being
in complete darkness between flashes. Imagine a tenth of a mile
of river front with an unbroken line of mangrove trees with fireflies
on every leaf flashing in synchronism, the insects on the
trees at the ends of the line acting in perfect unison with those
between. Then, if one's imagination is sufficiently vivid, he may
form some conception of this amazing spectacle.
Why do the flashes synchronize? In 1990, Renato Mirollo
and Steven Strogatz showed that synchrony is the rule for
mathematical models in which every firefly interacts with
every other. Again, the idea is to model the insects as a population
of oscillators coupled together-this time by visual signals.
The chemical cycle used by each firefly to create a flash
of light is represented as an oscillator. The population of fireflies
is represented by a network of such oscillators with fully
symmetric coupling-that is, each oscillator affects all of the
others in exactly the same manner. The most unusual feature
of this model, which was introduced by the American biologist
Charles Peskin in 1975, is that the oscillators are pulse-coupled.
That is, an oscillator affects its neighbors only at the
instant when it creates a flash of light.
The mathematical difficulty is to disentangle all these
interactions, so that their combined effect stands out clearly.
Mirollo and Strogatz proved that no matter what the initial
conditions are, eventually all the oscillators become synchronized.
The proof is based on the idea of absorption, which
happens when two oscillators with different phases "lock
together" and thereafter stay in phase with each other. Because
the coupling is fully symmetric, once a group of oscillators has
locked together, it cannot unlock. A geometric and analytic
proof shows that a sequence of these absorptions must occur,
which eventually locks all the oscillators together.
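The flavor of the argument can be seen in a small simulation. The sketch below is my own illustration, not Mirollo and Strogatz's proof: it assumes a concretely chosen charging law (each unit charges concavely toward a level above the firing threshold, as their theorem requires) and all-to-all pulses of a fixed size. Watching the size of each synchronized flash grow shows absorption at work:

```python
import random

def flash_cascade(x, eps):
    """Units at threshold flash and reset; each flash kicks every other unit
    up by eps, and any unit pushed over threshold flashes too. A group that
    flashes together stays together: this is the "absorption" in the proof."""
    fired, pending = set(), [i for i, xi in enumerate(x) if xi >= 1.0]
    while pending:
        i = pending.pop()
        fired.add(i)
        for j in range(len(x)):
            if j not in fired and j not in pending:
                x[j] += eps
                if x[j] >= 1.0:
                    pending.append(j)
    for i in fired:
        x[i] = 0.0
    return len(fired)

def simulate(n=20, eps=0.05, rate=2.0, leak=1.0, dt=0.001, t_max=40.0):
    """Fully symmetric pulse-coupled network: each unit's state charges
    concavely toward rate/leak (= 2 here, above the threshold at 1)."""
    random.seed(1)
    x = [random.random() for _ in range(n)]
    t, flashes = 0.0, []
    while t < t_max:
        x = [xi + (rate - leak * xi) * dt for xi in x]
        if max(x) >= 1.0:
            flashes.append((t, flash_cascade(x, eps)))
        t += dt
    return flashes

# Group sizes grow as absorptions accumulate; with these parameters the
# population typically ends up flashing in unison.
for t, size in simulate()[-5:]:
    print(f"t = {t:6.3f}: {size} fireflies flash together")
```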
The big message in both locomotion and synchronization
is that nature's rhythms are often linked to symmetry, and
that the patterns that occur can be classified mathematically
by invoking the general principles of symmetry breaking. The
principles of symmetry breaking do not answer every question
about the natural world, but they do provide a unifying
framework, and often suggest interesting new questions. In
particular, they both pose and answer the question, Why
these patterns but not others?
The lesser message is that mathematics can illuminate
many aspects of nature that we do not normally think of as
being mathematical. This is a message that goes back to the
Scottish zoologist D'Arcy Thompson, whose classic but maverick
book On Growth and Form set out, in 1917, an enormous
variety of more or less plausible evidence for the role of
mathematics in the generation of biological form and behavior.
In an age when most biologists seem to think that the only
interesting thing about an animal is its DNA sequence, it is a
message that needs to be repeated, loudly and often.
Do Dice Play God : Nature's Numbers Chapter 8
The intellectual legacy of Isaac Newton was a vision of the
clockwork universe, set in motion at the instant of creation
but thereafter running in prescribed grooves, like a well-oiled
machine. It was an image of a totally deterministic world-one
leaving no room for the operation of chance, one whose
future was completely determined by its present. As the great
mathematical astronomer Pierre-Simon de Laplace eloquently
put it in 1812 in his Analytic Theory of Probabilities:
An intellect which at any given moment knew all the forces that
animate Nature and the mutual positions of the beings that comprise
it, if this intellect were vast enough to submit its data to
analysis, could condense into a single formula the movement of
the greatest bodies of the universe and that of the lightest atom:
for such an intellect nothing could be uncertain, and the future
just like the past would be present before its eyes.
This same vision of a world whose future is totally predictable
lies behind one of the most memorable incidents in
Douglas Adams's 1979 science-fiction novel The Hitchhiker's
Guide to the Galaxy, in which the philosophers Majikthise and
Vroomfondel instruct the supercomputer "Deep Thought" to
calculate the answer to the Great Question of Life, the Universe,
and Everything. Aficionados will recall that after five
million years the computer answered, "Forty-two," at which
point the philosophers realized that while the answer was
clear and precise, the question had not been. Similarly, the
fault in Laplace's vision lies, not in his answer-that the universe
is in principle predictable, which is an accurate statement
of a particular mathematical feature of Newton's law of
motion-but in his interpretation of that fact, which is a serious
misunderstanding based on asking the wrong question.
By asking a more appropriate question, mathematicians and
physicists have now come to understand that determinism
and predictability are not synonymous.
In our daily lives, we encounter innumerable cases where
Laplacian determinism seems to be a highly inappropriate
model. We walk safely down steps a thousand times, until
one day we turn our ankle and break it. We go to a tennis
match, and it is rained off by an unexpected thunderstorm.
We place a bet on the favorite in a horse race, and it falls at
the last fence when it is six lengths ahead of the field. It's not
so much a universe in which-as Albert Einstein memorably
refused to believe-God plays dice: it seems more a universe
in which dice play God.
Is our world deterministic, as Laplace claimed, or is it governed
by chance, as it so often seems to be? And if Laplace is
really right, why does so much of our experience indicate that
he is wrong? One of the most exciting new areas of mathematics,
nonlinear dynamics-popularly known as chaos theory-claims
to have many of the answers. Whether or not it does, it
is certainly creating a revolution in the way we think about
order and disorder, law and chance, predictability and
randomness.
According to modern physics, nature is ruled by chance
on its smallest scales of space and time. For instance, whether
a radioactive atom-of uranium, say-does or does not decay
at any given instant is purely a matter of chance. There is no
physical difference whatsoever between a uranium atom that
is about to decay and one that is not about to decay. None.
Absolutely none.
There are at least two contexts in which to discuss these
issues: quantum mechanics and classical mechanics. Most of
this chapter is about classical mechanics, but for a moment let
us consider the quantum-mechanical context. It was this view
of quantum indeterminacy that prompted Einstein's famous
statement (in a letter to his colleague Max Born) that "you
believe in a God who plays dice, and I in complete law and
order." To my mind, there is something distinctly fishy about
the orthodox physical view of quantum indeterminacy, and I
appear not to be alone, because, increasingly, many physicists
are beginning to wonder whether Einstein was right all along
and something is missing from conventional quantum
mechanics-perhaps "hidden variables," whose values tell an
atom when to decay. (I hasten to add that this is not the conventional
view.) One of the best known of them, the Princeton
physicist David Bohm, devised a modification of quantum
mechanics that is fully deterministic but entirely consistent
with all the puzzling phenomena that have been used to support
the conventional view of quantum indeterminacy.
Bohm's ideas have problems of their own, in particular a kind
of "action at a distance" that is no less disturbing than quantum
indeterminacy.
However, even if quantum mechanics is correct about
indeterminacy on the smallest scales, on macroscopic scales
of space and time the universe obeys deterministic laws. This
results from an effect called decoherence, which causes sufficiently
large quantum systems to lose nearly all of their indeterminacy
and behave much more like Newtonian systems. In
effect, this reinstates classical mechanics for most human-scale
purposes. Horses, the weather, and Einstein's celebrated
dice are not unpredictable because of quantum mechanics.
On the contrary, they are unpredictable within a Newtonian
model, too. This is perhaps not so surprising when it comes to
horses-living creatures have their own hidden variables,
such as what kind of hay they had for breakfast. But it was
definitely a surprise to those meteorologists who had been
developing massive computer simulations of weather in the
hope of predicting it for months ahead. And it is really rather
startling when it comes to dice, even though humanity perversely
uses dice as one of its favorite symbols for chance.
Dice are just cubes, and a tumbling cube should be no less
predictable than an orbiting planet: after all, both objects obey
the same laws of mechanical motion. They're different
shapes, but equally regular and mathematical ones.
To see how unpredictability can be reconciled with determinism,
think about a much less ambitious system than the
entire universe-namely, drops of water dripping from a tap.
This is a deterministic system: in principle, the flow of water
into the apparatus is steady and uniform, and what happens
to it when it emerges is totally prescribed by the laws of fluid
motion. Yet a simple but effective experiment demonstrates
that this evidently deterministic system can be made to
behave unpredictably; and this leads us to some mathematical
"lateral thinking," which explains why such a paradox is
possible.
If you turn on a tap very gently and wait a few seconds for
the flow to settle down, you can usually produce a regular
series of drops of water, falling at equally spaced times in a
regular rhythm. It would be hard to find anything more predictable
than this. But if you slowly turn the tap to increase
the flow, you can set it so that the sequence of drops falls in a
very irregular manner, one that sounds random. It may take a
little experimentation to succeed, and it helps if the tap turns
smoothly. Don't turn it so far that the water falls in an unbroken
stream; what you want is a medium-fast trickle. If you get
it set just right, you can listen for many minutes without any
obvious pattern becoming apparent.
In 1978, a bunch of iconoclastic young graduate students
at the University of California at Santa Cruz formed the
Dynamical Systems Collective. When they began thinking
about this water-drop system, they realized that it's not as
random as it appears to be. They recorded the dripping noises
with a microphone and analyzed the sequence of intervals
between each drop and the next. What they found was short-term
predictability. If I tell you the timing of three successive
drops, then you can predict when the next drop will fall. For
example, if the last three intervals between drops have been
0.63 seconds, 1.17 seconds, and 0.44 seconds, then you can be
sure that the next drop will fall after a further 0.82 seconds.
(These numbers are for illustrative purposes only.) In fact, if
you know the timing of the first three drops exactly, then you
can predict the entire future of the system.
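What the Collective actually did with their recordings is not spelled out here, but the flavor of it can be sketched in a few lines of code: treat each run of three successive intervals as a point in three-dimensional space, and predict the next interval by finding the most similar run of three in the recorded past. Everything below is illustrative rather than their method: the "drip data" is simulated with a standard chaotic formula (the logistic map), and all names and numbers are invented for the sketch.

import math

def simulated_intervals(n, x=0.31):
    # Stand-in for measured drop intervals: logistic map at r = 3.9,
    # rescaled into a plausible 0.4-1.2 second range (an assumption).
    out = []
    for _ in range(n):
        x = 3.9 * x * (1 - x)
        out.append(0.4 + 0.8 * x)
    return out

data = simulated_intervals(2000)
history, future = data[:1500], data[1500:]

def predict_next(a, b, c):
    # Find the past triple of intervals closest to (a, b, c)
    # and return whatever interval followed it.
    best, best_next = float("inf"), None
    for i in range(len(history) - 3):
        d = math.dist([a, b, c], history[i:i + 3])
        if d < best:
            best, best_next = d, history[i + 3]
    return best_next

print("predicted:", round(predict_next(future[0], future[1], future[2]), 3),
      "   actual:", round(future[3], 3))

One-step-ahead guesses of this kind come out close; chaining them together degrades quickly, for the reason the next paragraphs explain.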
So why is Laplace wrong? The point is that we can never
measure the initial state of a system exactly. The most precise
measurements yet made in any physical system are correct to
about ten or twelve decimal places. But Laplace's statement is
correct only if we can make measurements to infinite precision,
infinitely many decimal places-and of course there's
no way to do that. People knew about this problem of measurement
error in Laplace's day, but they generally assumed
that provided you made the initial measurements to, say, ten
decimal places, then all subsequent prediction would also be
accurate to ten decimal places. The error would not disappear,
but neither would it grow.
Unfortunately, it does grow, and this prevents us from
stringing together a series of short-term predictions to get one
that is valid in the long term. For example, suppose I know
the timing of the first three water drops to an accuracy of ten
decimal places. Then I can predict the timing of the next drop
to nine decimal places, the drop after that to eight decimal
places, and so on. At each step, the error grows by a factor of
about ten, so I lose confidence in one further decimal place.
Therefore, ten steps into the future, I really have no idea at all
what the timing of the next drop will be. (Again, the precise
figures will probably be different: it may take half a dozen
drops to lose one decimal place in accuracy, but even then it
takes only sixty drops until the same problem arises.)
This amplification of error is the logical crack through
which Laplace's perfect determinism disappears. Nothing
short of total perfection of measurement will do. If we could
measure the timing to a hundred decimal places, our predictions
would fail a mere hundred drops into the future (or six
hundred, using the more optimistic estimate). This phenomenon
is called "sensitivity to initial conditions," or more informally
"the butterfly effect." (When a butterfly in Tokyo flaps
its wings, the result may be a hurricane in Florida a month
later.) It is intimately associated with a high degree of irregularity
of behavior. Anything truly regular is by definition
fairly predictable. But sensitivity to initial conditions renders
behavior unpredictable-hence irregular. For this reason, a
system that displays sensitivity to initial conditions is said to
be chaotic. Chaotic behavior obeys deterministic laws, but it
is so irregular that to the untrained eye it looks pretty much
random. Chaos is not just complicated, patternless behavior;
it is far more subtle. Chaos is apparently complicated, apparently
patternless behavior that actually has a simple, deterministic
explanation.
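Here is that error arithmetic made concrete. A minimal sketch, using the logistic map x -> 4x(1 - x) as a stand-in chaotic rule (an assumption; the water drops obey different equations, but the effect is the same): start two copies of the system a ten-billionth apart and count how many decimal places they still share as time runs.

import math

def agreeing_places(a, b):
    # How many decimal places two numbers share.
    diff = abs(a - b)
    return 99 if diff == 0 else max(0, int(-math.log10(diff)))

x, y = 0.4, 0.4 + 1e-10          # identical to ten decimal places
for step in range(1, 31):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 5 == 0:
        print(f"step {step:2d}: agree to about {agreeing_places(x, y)} decimal places")

For this particular map the error roughly doubles each step rather than growing tenfold, so the agreement drains away one decimal place every few steps; within a few dozen steps the two copies are, for practical purposes, unrelated.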
The discovery of chaos was made by many people, too
numerous to list here. It came about because of the conjunction
of three separate developments. One was a change of scientific
focus, away from simple patterns such as repetitive
cycles, toward more complex kinds of behavior. The second
was the computer, which made it possible to find approximate
solutions to dynamical equations easily and rapidly.
The third was a new mathematical viewpoint on dynamics-a
geometric rather than a numerical viewpoint. The first provided
motivation, the second provided technique, and the
third provided understanding.
The geometrization of dynamics began about a hundred
years ago, when the French mathematician Henri Poincare-a
maverick if ever there was one, but one so brilliant that his
views became orthodoxies almost overnight-invented the
concept of a phase space. This is an imaginary mathematical
space that represents all possible motions of a given dynamical
system. To pick a nonmechanical example, consider the
population dynamics of a predator-prey ecological system.
The predators are pigs and the prey are those exotically pungent
fungi, truffles. The variables upon which we focus attention
are the sizes of the two populations-the number of pigs
(relative to some reference value such as one million) and the
number of truffles (ditto). This choice effectively makes the
variables continuous-that is, they take real-number values
with decimal places, not just whole-number values. For
example, if the reference number of pigs is one million, then a
population of 17,439 pigs corresponds to the value 0.017439.
Now, the natural growth of truffles depends on how many
truffles there are and the rate at which pigs eat them: the
growth of the pig population depends on how many pigs
there are and how many truffles they eat. So the rate of
change of each variable depends on both variables, an observation
that can be turned into a system of differential equations
for the population dynamics. I won't write them down,
because it's not the equations that matter here: it's what you
do with them.
These equations determine-in principle-how any initial
population values will change over time. For example, if we
start with 17,439 pigs and 788,444 truffles, then you plug in
the initial values 0.017439 for the pig variable and 0.788444
for the truffle variable, and the equations implicitly tell you
how those numbers will change. The difficulty is to make the
implicit become explicit: to solve the equations. But in what
sense? The natural reflex of a classical mathematician would
be to look for a formula telling us exactly what the pig population
and the truffle population will be at any instant. Unfortunately,
such "explicit solutions" are so rare that it is
scarcely worth the effort of looking for them unless the equations
have a very special and limited form. An alternative is
to find approximate solutions on a computer; but that tells us
only what will happen for those particular initial values, and
most often we want to know what will happen for a lot of different
initial values.
Poincare's idea is to draw a picture that shows what happens
for all initial values. The state of the system-the sizes of
the two populations at some instant of time-can be represented
as a point in the plane, using the old trick of coordinates.
For example, we might represent the pig population by
the horizontal coordinate and the truffle population by the
vertical one. The initial state described above corresponds to
the point with horizontal coordinate 0.017439 and vertical
coordinate 0.788444. Now let time flow. The two coordinates
change from one instant to the next, according to the rule
expressed by the differential equation, so the corresponding
point moves. A moving point traces out a curve; and that
curve is a visual representation of the future behavior of the
entire system. In fact, by looking at the curve, you can "see"
important features of the dynamics without worrying about
the actual numerical values of the coordinates.
For example, if the curve closes up into a loop, then the
two populations are following a periodic cycle, repeating the
same values over and over again-just as a car on a racetrack
keeps going past the same spectator every lap. If the curve
homes in toward some particular point and stops, then the
populations settle down to a steady state, in which neither
changes-like a car that runs out of fuel. By a fortunate coincidence,
cycles and steady states are of considerable ecological
significance-in particular, they set both upper and lower
limits to population sizes. So the features that the eye detects
most easily are precisely the ones that really matter.
Moreover, a lot of irrelevant detail can be ignored: for example, we
can see that there is a closed loop without having to work out
its precise shape (which represents the combined "waveforms"
of the two population cycles).
What happens if we try a different pair of initial values?
We get a second curve. Each pair of initial values defines a
new curve; and we can capture all possible behaviors of the
system, for all initial values, by drawing a complete set of
such curves. This set of curves resembles the flow lines of an
imaginary mathematical fluid, swirling around in the plane.
We call the plane the phase space of the system, and the set of
swirling curves is the system's phase portrait. Instead of the
symbol-based idea of a differential equation with various initial
conditions, we have a geometric, visual scheme of points
flowing through pig/truffle space. This differs from an ordinary
plane only in that many of its points are potential rather
than actual: their coordinates correspond to numbers of pigs
and truffles that could occur under appropriate initial conditions,
but may not occur in a particular case. So as well as the
mental shift from symbols to geometry, there is a philosophical
shift from the actual to the potential.
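The equations themselves are deliberately left unwritten above, but readers who want something to run can borrow the classic Lotka-Volterra predator-prey equations, which have exactly the structure just described: each rate of change depends on both variables. The coefficients below are arbitrary, and the crude step-by-step integration is only a sketch; the qualitative picture is what matters.

def step(pigs, truffles, dt=0.001):
    # Classic Lotka-Volterra form; all coefficients set to 1 for simplicity.
    d_truffles = truffles * (1.0 - pigs)    # truffle growth, minus predation
    d_pigs = pigs * (truffles - 1.0)        # pig growth tracks truffle supply
    return pigs + dt * d_pigs, truffles + dt * d_truffles

# Each pair of initial values traces its own curve in pig/truffle space.
for pigs0, truffles0 in [(0.017439, 0.788444), (0.5, 0.5), (1.5, 1.5)]:
    pigs, truffles = pigs0, truffles0
    low = high = truffles
    for _ in range(40000):                  # follow the moving point awhile
        pigs, truffles = step(pigs, truffles)
        low, high = min(low, truffles), max(high, truffles)
    print(f"start ({pigs0}, {truffles0}): truffle population cycles "
          f"between {low:.3f} and {high:.3f}")

For this model every curve is a closed loop around the steady state at one reference unit of pigs and one of truffles, which is why each starting point settles into its own upper and lower population limits: a tiny phase portrait, computed rather than drawn.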
The same kind of geometric picture can be imagined for
any dynamical system. There is a phase space, whose coordinates
are the values of all the variables; and there is a phase
portrait, a system of swirling curves that represents all possible
behaviors starting from all possible initial conditions, and
that are prescribed by the differential equations. This idea
constitutes a major advance, because instead of worrying
about the precise numerical details of solutions to the equations,
we can focus upon the broad sweep of the phase portrait,
and bring humanity's greatest asset, its amazing image
processing abilities, to bear. The image of a phase space as a
way of organizing the total range of potential behaviors, from
among which nature selects the behavior actually observed,
has become very widespread in science.
The upshot of Poincare's great innovation is that dynamics
can be visualized in terms of geometric shapes called attractors.
If you start a dynamical system from some initial point
and watch what it does in the long run, you often find that it
ends up wandering around on some well-defined shape in
phase space. For example, the curve may spiral in toward a
closed loop and then go around and around the loop forever.
Moreover, different choices of initial conditions may lead to
the same final shape. If so, that shape is known as an attractor.
The long-term dynamics of a system is governed by its
attractors, and the shape of the attractor determines what type
of dynamics occurs.
For example, a system that settles down to a steady state
has an attractor that is just a point. A system that settles down
to repeating the same behavior periodically has an attractor
that is a closed loop. That is, closed loop attractors correspond
to oscillators. Recall the description of a vibrating violin
string from chapter 5; the string undergoes a sequence of
motions that eventually puts it back where it started, ready to
repeat the sequence over and over forever. I'm not suggesting
that the violin string moves in a physical loop. But my
description of it is a closed loop in a metaphorical sense: the
motion takes a round trip through the dynamic landscape of
phase space.
Chaos has its own rather weird geometry: it is associated
with curious fractal shapes called strange attractors. The butterfly
effect implies that the detailed motion on a strange attractor
can't be determined in advance. But this doesn't alter the fact
that it is an attractor. Think of releasing a Ping-Pong ball into a
stormy sea. Whether you drop it from the air or release it from
underwater, it moves toward the surface. Once on the surface, it
follows a very complicated path in the surging waves, but however
complex that path is, the ball stays on-or at least very
near-the surface. In this image, the surface of the sea is an
attractor. So, chaos notwithstanding, no matter what the starting
point may be, the system will end up very close to its attractor.
Chaos is well established as a mathematical phenomenon,
but how can we detect it in the real world? We must perform
experiments-and there is a problem. The traditional role of
experiments in science is to test theoretical predictions, but if
the butterfly effect is in operation-as it is for any chaotic system-
how can we hope to test a prediction? Isn't chaos inherently
untestable, and therefore unscientific?
The answer is a resounding no, because the word "prediction"
has two meanings. One is "foretelling the future," and the
butterfly effect prevents this when chaos is present. But the
other is "describing in advance what the outcome of an experiment
will be." Think about tossing a coin a hundred times. In
order to predict-in the fortune-teller's sense-what happens,
you must list in advance the result of each of the tosses. But
you can make scientific predictions, such as "roughly half the
coins will show heads," without foretelling the future in
detail-even when, as here, the system is random. Nobody suggests
that statistics is unscientific because it deals with unpredictable
events, and therefore chaos should be treated in the
same manner. You can make all sorts of predictions about a
chaotic system; in fact, you can make enough predictions to
distinguish deterministic chaos from true randomness. One
thing that you can often predict is the shape of the attractor,
which is not altered by the butterfly effect. All the butterfly
effect does is to make the system follow different paths on the
same attractor. In consequence, the general shape of the attractor
can often be inferred from experimental observations.
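A concrete illustration, for readers who want one: the sketch below uses the Lorenz system, the textbook strange attractor (it is not discussed in this chapter; any chaotic system would do). Two all-but-identical starting points part company in detail, yet both wander forever over the same bounded region of phase space.

def lorenz_step(x, y, z, dt=0.001):
    # Lorenz's equations with his classic parameters 10, 28, 8/3.
    dx = 10.0 * (y - x)
    dy = x * (28.0 - z) - y
    dz = x * y - (8.0 / 3.0) * z
    return x + dt * dx, y + dt * dy, z + dt * dz

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)       # differs in the eighth decimal place
for i in range(1, 60001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if i % 15000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        bounded = all(abs(c) < 60 for c in a + b)
        print(f"time {i / 1000:4.1f}: paths {gap:10.6f} apart; "
              f"still on the attractor: {bounded}")

The gap between the two paths grows from invisible to the full width of the attractor, but the boundedness check never fails: the butterfly effect scrambles the route, not the shape.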
The discovery of chaos has revealed a fundamental misunderstanding
in our views of the relation between rules and the
behavior they produce-between cause and effect. We used to
think that deterministic causes must produce regular effects,
but now we see that they can produce highly irregular effects
that can easily be mistaken for randomness. We used to think
that simple causes must produce simple effects (implying that
complex effects must have complex causes), but now we
know that simple causes can produce complex effects. We
realize that knowing the rules is not the same as being able to
predict future behavior.
How does this discrepancy between cause and effect arise?
Why do the same rules sometimes produce obvious patterns
and sometimes produce chaos? The answer is to be found in
every kitchen, in the employment of that simple mechanical
device, an eggbeater. The motion of the two beaters is simple
and predictable, just as Laplace would have expected: each
beater rotates steadily. The motion of the sugar and the egg
white in the bowl, however, is far more complex. The two
ingredients get mixed up-that's what eggbeaters are for. But
the two rotary beaters don't get mixed up-you don't have to
disentangle them from each other when you've finished. Why
is the motion of the incipient meringue so different from that
of the beaters? Mixing is a far more complicated, dynamic
process than we tend to think. Imagine trying to predict
where a particular grain of sugar will end up! As the mixture
passes between the pair of beaters, it is pulled apart, to left
and right, and two sugar grains that start very close together
soon get a long way apart and follow independent paths. This
is, in fact, the butterfly effect in action-tiny changes in initial
conditions have big effects. So mixing is a chaotic process.
Conversely, every chaotic process involves a kind of mathematical
mixing in Poincare's imaginary phase space. This is
why tides are predictable but weather is not. Both involve the
same kind of mathematics, but the dynamics of tides does not
get phase space mixed up, whereas that of the weather does.
It's not what you do, it's the way that you do it.
Chaos is overturning our comfortable assumptions about
how the world works. It tells us that the universe is far
stranger than we think. It casts doubt on many traditional
methods of science: merely knowing the laws of nature is no
longer enough. On the other hand, it tells us that some things
that we thought were just random may actually be consequences
of simple laws. Nature's chaos is bound by rules. In
the past, science tended to ignore events or phenomena that
seemed random, on the grounds that since they had no obvious
patterns they could not be governed by simple laws. Not
so. There are simple laws right under our noses-laws governing
disease epidemics, or heart attacks, or plagues of locusts.
If we learn those laws, we may be able to prevent the disasters
that follow in their wake.
Already chaos has shown us new laws, even new types of
laws. Chaos contains its own brand of new universal patterns.
One of the first to be discovered occurs in the dripping tap.
Remember that a tap can drip rhythmically or chaotically,
depending on the speed of the flow. Actually, both the regularly
dripping tap and the "random" one are following slightly
different variants of the same mathematical prescription. But as
the rate at which water passes through the tap increases, the
type of dynamics changes. The attractor in phase space that
represents the dynamics keeps changing-and it changes in a
predictable but highly complex manner.
Start with a regularly dripping tap: a repetitive drip-drip-drip-drip
rhythm, each drop just like the previous one. Then
turn the tap slightly, so that the drips come slightly faster.
Now the rhythm goes drip-DRIP-drip-DRIP, and repeats every
two drops. Not only the size of the drop, which governs how
loud the drip sounds, but also the timing changes slightly
from one drop to the next.
If you allow the water to flow slightly faster still, you get a
four-drop rhythm: drip-DRIP-drip-DRIP. A little faster still,
and you produce an eight-drop rhythm: drip-DRIP-drip-DRIP-drip-DRIP-drip-DRIP.
The length of the repetitive sequence
of drops keeps on doubling. In a mathematical model, this
process continues indefinitely, with rhythmic groups of 16,
32, 64 drops, and so on. But it takes tinier and tinier changes
to the flow rate to produce each successive doubling of the
period; and there is a flow rate at which the size of the group
has doubled infinitely often. At this point, no sequence of
drops repeats exactly the same pattern. This is chaos.
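The cascade is easy to watch in a model. The sketch below uses the logistic map x -> rx(1 - x), the standard example (an assumption: the tap obeys some other equation, but the doubling pattern is universal, which is the point of the paragraphs that follow). The parameter r plays the role of the flow rate.

def settled_period(r, max_period=64):
    # Iterate the map past its transient, then measure the period.
    x = 0.5
    for _ in range(2000):               # let the transient die away
        x = r * x * (1 - x)
    orbit = []
    for _ in range(4 * max_period):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))       # round so near-equal values match
    for p in (1, 2, 4, 8, 16, 32, 64):
        if orbit[p:] == orbit[:-p]:     # does the sequence repeat every p?
            return p
    return None                         # no short repetition: chaos

for r in (2.8, 3.2, 3.5, 3.56, 3.6):
    p = settled_period(r)
    print(f"r = {r}: period {p if p else 'none -- chaos'}")

Turning up r is turning up the tap: the settled rhythm runs 1, 2, 4, 8, ... and then, past the accumulation point, never repeats at all.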
We can express what is happening in Poincare's geometric
language. The attractor for the tap begins as a closed loop,
representing a periodic cycle. Think of the loop as an elastic
band wrapped around your finger. As the flow rate increases,
this loop splits into two nearby loops, like an elastic band
wound twice around your finger. This band is twice as long as
the original, which is why the period is twice as long. Then in
exactly the same way, this already-doubled loop doubles
again, all the way along its length, to create the period-four
cycle, and so on. After infinitely many doublings, your finger
is decorated with elastic spaghetti, a chaotic attractor.
This scenario for the creation of chaos is called a period-doubling
cascade. In 1975, the physicist Mitchell Feigenbaum
discovered that a particular number, which can be measured
in experiments, is associated with every period-doubling cascade.
The number is roughly 4.669, and it ranks alongside π
(pi) as one of those curious numbers that seem to have extraordinary
significance in both mathematics and its relation to
the natural world. Feigenbaum's number has a symbol, too:
the Greek letter δ (delta). The number π tells us how the circumference
of a circle relates to its diameter. Analogously,
Feigenbaum's number δ tells us how the period of the drips
relates to the rate of flow of the water. To be precise, the extra
amount by which you need to turn on the tap decreases by a
factor of 4.669 at each doubling of the period.
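In symbols: if r_n is the setting at which the nth doubling happens, the ratios of successive gaps, (r_n - r_{n-1}) / (r_{n+1} - r_n), home in on 4.669... A sketch, substituting the logistic map's standard published doubling thresholds for real tap data:

# Values of r at which periods 2, 4, 8, 16, 32 first appear
# in the logistic map (standard published figures).
thresholds = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

gaps = [b - a for a, b in zip(thresholds, thresholds[1:])]
for g1, g2 in zip(gaps, gaps[1:]):
    print(f"gap ratio: {g1 / g2:.4f}")   # 4.7515, 4.6562, 4.6684 -> 4.669...

Each doubling needs a gap about 4.669 times smaller than the one before, which is why infinitely many doublings fit into a finite range of flow rates.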
The number π is a quantitative signature for anything
involving circles. In the same way, the Feigenbaum number δ
is a quantitative signature for any period-doubling cascade,
no matter how it is produced or how it is realized experimentally.
That very same number shows up in experiments on liquid
helium, water, electronic circuits, pendulums, magnets,
and vibrating train wheels. It is a new universal pattern in
nature, one that we can see only through the eyes of chaos; a
quantitative pattern, a number, emerges from a qualitative
phenomenon. One of nature's numbers, indeed. The Feigenbaum
number has opened the door to a new mathematical
world, one we have only just begun to explore.
The precise pattern found by Feigenbaum, and other patterns
like it, is a matter of fine detail. The basic point is that
even when the consequences of natural laws seem to be patternless,
the laws are still there and so are the patterns. Chaos
is not random: it is apparently random behavior resulting
from precise rules. Chaos is a cryptic form of order.
Science has traditionally valued order, but we are beginning
to appreciate the fact that chaos can offer science distinct
advantages. Chaos makes it much easier to respond quickly to
an outside stimulus. Think of tennis players waiting to
receive a serve. Do they stand still? Do they move regularly
from side to side? Of course not. They dance erratically from
one foot to the other. In part, they are trying to confuse their
opponents, but they are also getting ready to respond to any
serve sent their way. In order to be able to move quickly in
any particular direction, they make rapid movements in many
different directions. A chaotic system can react to outside
events much more quickly, and with much less effort, than a
nonchaotic one. This is important for engineering control
problems. For example, we now know that some kinds of turbulence
result from chaos-that's what makes turbulence look
random. It may prove possible to make the airflow past an aircraft's
skin much less turbulent, and hence less resistant to
motion, by setting up control mechanisms that respond
extremely rapidly to cancel out any small regions of incipient
turbulence. Living creatures, too, must behave chaotically in
order to respond rapidly to a changing environment.
This idea has been turned into an extremely useful practical
technique by a group of mathematicians and physicists,
among them William Ditto, Alan Garfinkel, and Jim Yorke:
they call it chaotic control. Basically, the idea is to make the
butterfly effect work for you. The fact that small changes in
initial conditions create large changes in subsequent behavior
can be an advantage; all you have to do is ensure that you get
the large changes you want. Our understanding of how
chaotic dynamics works makes it possible to devise control
strategies that do precisely this. The method has had several
successes. Space satellites use a fuel called hydrazine to make
course corrections. One of the earliest successes of chaotic
control was to divert a dead satellite from its orbit and send it
out for an encounter with an asteroid, using only the tiny
amount of hydrazine left on board. NASA arranged for the
satellite to swing around the Moon five times, nudging it
slightly each time with a tiny shot of hydrazine. Several such
encounters were achieved, in an operation that successfully
exploited the occurrence of chaos in the three-body problem
(here, Earth/Moon/satellite) and the associated butterfly
effect.
The same mathematical idea has been used to control a
magnetic ribbon in a turbulent fluid-a prototype for controlling
turbulent flow past a submarine or an aircraft. Chaotic
control has been used to make erratically beating hearts return
to a regular rhythm, presaging the invention of the intelligent
pacemaker. Very recently, it has been used both to set up and
to prevent rhythmic waves of electrical activity in brain tissue,
opening up the possibility of preventing epileptic attacks.
Chaos is a growth industry. Every week sees new discoveries
about the underlying mathematics of chaos, new applications
of chaos to our understanding of the natural world, or
new technological uses of chaos-including the chaotic dishwasher,
a Japanese invention that uses two rotating arms,
spinning chaotically, to get dishes cleaner using less energy;
and a British machine that uses chaos-theoretic data analysis
to improve quality control in spring manufacture.
Much, however, remains to be done. Perhaps the ultimate
unsolved problem of chaos is the strange world of the quantum,
where Lady Luck rules. Radioactive atoms decay "at random";
their only regularities are statistical. A large quantity of
radioactive atoms has a well-defined half-life-a period of
time during which half the atoms will decay. But we can't
predict which half. Albert Einstein's protest, mentioned earlier,
was aimed at just this question. Is there really no difference
at all between a radioactive atom that is not going to
decay, and one that's just about to? Then how does the atom
know what to do?
Might the apparent randomness of quantum mechanics be
fraudulent? Is it really deterministic chaos? Think of an atom
as some kind of vibrating droplet of cosmic fluid. Radioactive
atoms vibrate very energetically, and every so often a smaller
drop can split off-decay. The vibrations are so rapid that we
can't measure them in detail: we can only measure averaged
quantities, such as energy levels. Now, classical mechanics
tells us that a drop of real fluid can vibrate chaotically. When
it does so, its motion is deterministic but unpredictable. Occasionally,
"at random," the vibrations conspire to split off a
tiny droplet. The butterfly effect makes it impossible to say in
advance just when the drop will split; but that event has precise
statistical features, including a well-defined half-life.
Could the apparently random decay of radioactive atoms
be something similar, but on a microcosmic scale? After all,
why are there any statistical regularities at all? Are they traces
of an underlying determinism? Where else can statistical regularities
come from? Unfortunately, nobody has yet made this
seductive idea work-though it's similar in spirit to the fashionable
theory of superstrings, in which a subatomic particle
is a kind of hyped-up vibrating multidimensional loop. The
main similar feature here is that both the vibrating loop and
the vibrating drop introduce new "internal variables" into the
physical picture. A significant difference is the way these two
approaches handle quantum indeterminacy. Superstring theory,
like conventional quantum mechanics, sees this indeterminacy
as being genuinely random. In a system like the drop,
however, the apparent indeterminacy is actually generated by
a deterministic, but chaotic, dynamic. The trick-if only we
knew how to do it-would be to invent some kind of structure
that retains the successful features of superstring theory,
while making some of the internal variables behave chaotically.
It would be an appealing way to render the Deity's dice
deterministic, and keep the shade of Einstein happy.
Chapter 6 : Broken Symmetry
Chapter 7 : The Rhythm of Life
Chapter 8 : Do Dice Play God?
Chapter 9 : Drops, Dynamics, and Daisies