Historic Perspectives on Modern Physics

Reviews of Modern Physics, Volume 71, Special Issue, 1999

FOREWORD, Martin Blume
PREFACE, Benjamin Bederson
INTRODUCTION, Hans A. Bethe

HISTORIC PERSPECTIVES—personal essays on historic developments

This section presents articles describing historic developments in a number of major areas of physics, prepared by authors who played important roles in these developments. The section was organized and coordinated with the help of Peter Galison, professor of the History of Science at Harvard University.

S1 Quantum theory, Hans A. Bethe
S6 Nuclear physics, Hans A. Bethe
S16 Theoretical particle physics, A. Pais
S25 Elementary particle physics: The origins, Val L. Fitch
S33 Astrophysics, George Field
S41 A century of relativity, Irwin I. Shapiro
S54 From radar to nuclear magnetic resonance, Robert V. Pound
S59 An essay on condensed matter physics in the twentieth century, W. Kohn
S78 A short history of atomic physics in the twentieth century, Daniel Kleppner

PARTICLE PHYSICS AND RELATED TOPICS
S85 Quantum field theory, Frank Wilczek
S96 The standard model of particle physics, Mary K. Gaillard, Paul D. Grannis, Frank J. Sciulli
S112 String theory, supersymmetry, unification, and all that, John H. Schwarz, Nathan Seiberg
S121 Accelerators and detectors, W. K. H. Panofsky, M. Breidenbach
S133 Anomalous g values of the electron and muon, V. W. Hughes, T. Kinoshita
S140 Neutrino physics, L. Wolfenstein

ASTROPHYSICS
S145 Cosmology at the millennium, Michael S. Turner, J. Anthony Tyson
S165 Cosmic rays: the most energetic particles in the universe, James W. Cronin
S173 Cosmic microwave background radiation, Lyman Page, David Wilkinson
S180 Black holes, Gary T. Horowitz, Saul A. Teukolsky
S187 Gravitational radiation, Rainer Weiss
S197 Deciphering the nature of dark matter, Bernard Sadoulet

NUCLEAR PHYSICS
S205 Nuclear physics at the end of the century, E. M. Henley, J. P. Schiffer
S220 Stellar nucleosynthesis, Edwin E. Salpeter

ATOMIC, MOLECULAR, AND OPTICAL PHYSICS
S223 Atomic physics, Sheldon Datz, G. W. F. Drake, T. F. Gallagher, H. Kleinpoppen, G. zu Putlitz
S242 Laser spectroscopy and quantum optics, T. W. Hänsch, H. Walther
S253 Atom cooling, trapping, and quantum manipulation, Carl E. Wieman, David E. Pritchard, David J. Wineland
S263 Laser physics: quantum controversy in action, W. E. Lamb, W. P. Schleich, M. O. Scully, C. H. Townes
S274 Quantum effects in one- and two-photon interference, L. Mandel
S283 From nanosecond to femtosecond science, N. Bloembergen
S288 Experiment and foundations of quantum physics, Anton Zeilinger

CONDENSED MATTER PHYSICS
S298 The fractional quantum Hall effect, Horst L. Störmer, Daniel C. Tsui, Arthur C. Gossard
S306 Conductance viewed as transmission, Yoseph Imry, Rolf Landauer
S313 Superconductivity, J. R. Schrieffer, M. Tinkham
S318 Superfluidity, A. J. Leggett
S324 In touch with atoms, G. Binnig, H. Rohrer
S331 Materials physics, P. Chaudhari, M. S. Dresselhaus
S336 The invention of the transistor, Michael Riordan, Lillian Hoddeson, Conyers Herring

STATISTICAL PHYSICS AND FLUIDS
S346 Statistical mechanics: A selective review of two central issues, Joel L. Lebowitz
S358 Scaling, universality, and renormalization: Three pillars of modern critical phenomena, H. Eugene Stanley
S367 Insights from soft condensed matter, Thomas A. Witten
S374 Granular material: a tentative view, P. G. de Gennes
S383 Fluid turbulence, Katepalli R. Sreenivasan
S396 Pattern formation in nonequilibrium physics, J. P. Gollub, J. S. Langer

PLASMA PHYSICS
S404 The collisionless nature of high-temperature plasmas, T. M. O'Neil, F. V. Coroniti

CHEMICAL PHYSICS AND BIOLOGICAL PHYSICS
S411 Chemical physics: Molecular clouds, clusters, and corrals, Dudley Herschbach
S419 Biological physics, Hans Frauenfelder, Peter G. Wolynes, Robert H. Austin
S431 Brain, neural networks, and computation, J. J. Hopfield

COMPUTATIONAL PHYSICS
S438 Microscopic simulations in physics, D. M. Ceperley

APPLICATIONS OF PHYSICS TO OTHER AREAS
S444 Physics and applications of medical imaging, William R. Hendee
S451 Nuclear fission reactors, Charles E. Till
S456 Nuclear power—fusion, T. Kenneth Fowler
S460 Physics and U.S. national security, Sidney D. Drell
S471 Laser technology, R. E. Slusher
S480 Physics and the communications industry, W. F. Brinkman, D. V. Lang

Quantum theory
Hans A. Bethe
Floyd R. Newman Laboratory of Nuclear Studies, Cornell University, Ithaca, New York 14853
[S0034-6861(99)04202-6]

I. EARLY HISTORY

Twentieth-century physics began with Planck's postulate, in 1900, that electromagnetic radiation is not continuously absorbed or emitted, but comes in quanta of energy hν, where ν is the frequency and h Planck's constant. Planck got to this postulate in a complicated way, starting from statistical mechanics. He derived from it his famous law of the spectral distribution of blackbody radiation,

n(ν) = [exp(hν/kT) − 1]⁻¹,  (1)

which has been confirmed by many experiments. It is also accurately fulfilled by the cosmic background radiation, which is a relic of the big bang and has a temperature T = 2.7 K.

Einstein, in 1905, got to the quantum concept more directly, from the photoelectric effect: electrons can be extracted from a metal only by light of frequency above a certain minimum, where

hν_min = w,  (2)

with w the "work function" of the metal, i.e., the binding energy of the (most loosely bound) electron. This law was later confirmed for x rays releasing electrons from inner shells.

Niels Bohr, in 1913, applied quantum theory to the motion of the electron in the hydrogen atom. He found that the electron could be bound in energy levels of energy

E_n = −Ry/n²,  (3)

where n can be any integer. The Rydberg constant is

Ry = me⁴/2ℏ².  (4)

Light can be emitted or absorbed only at frequencies given by

hν = E_m − E_n,  (5)

where m and n are integers. This daring hypothesis explained the observed spectrum of the hydrogen atom. The existence of energy levels was later confirmed by the experiment of J. Franck and G. Hertz. Ernest Rutherford, who had earlier proposed the nuclear atom, declared that now, after Bohr's theory, he could finally believe that his proposal was right.

In 1917, Einstein combined his photon
theory with statistical mechanics and found that, in addition to absorption and spontaneous emission of photons, there had to be stimulated emission. This result, which at the time seemed purely theoretical, gave rise in the 1960s to the invention of the laser, an eminently practical and useful device.

A. H. Compton, in 1923, got direct evidence for light quanta: when x rays are scattered by electrons, their frequency is diminished, as if the quantum of energy hν and momentum hν/c had a collision with the electron in which momentum and energy were conserved. This Compton effect finally convinced most physicists of the reality of light quanta. Physicists were still confronted with the wave/particle duality of light quanta on the one hand and the phenomena of interference, which indicated a continuum theory, on the other. This paradox was not resolved until Dirac quantized the electromagnetic field in 1927.

Niels Bohr, ever after 1916, was deeply concerned with the puzzles and paradoxes of quantum theory, and these formed the subject of discussion among the many excellent physicists who gathered at his Institute, such as Kramers, Slater, W. Pauli, and W. Heisenberg. The correspondence principle was formulated, namely, that in the limit of high quantum numbers classical mechanics must be valid. The concept of oscillator strength f_mn for the transition from level m to n in an atom was developed, and dispersion theory was formulated in terms of oscillator strengths. Pauli formulated the exclusion principle, stating that only one electron can occupy a given quantum state, thereby giving a theoretical foundation to the periodic system of the elements, which Bohr had explained phenomenologically in terms of the occupation by electrons of various quantum orbits.

A great breakthrough was made in 1925 by Heisenberg, whose book, Physics and Beyond (Heisenberg, 1971), describes how the idea came to him while he was on vacation in
Heligoland. When he returned home to Göttingen and explained his ideas to Max Born, the latter told him, "Heisenberg, what you have found here are matrices." Heisenberg had never heard of matrices. Born had already worked in a similar direction with P. Jordan, and the three of them, Born, Heisenberg, and Jordan, then jointly wrote a definitive paper on "matrix mechanics." They found that the matrices representing the coordinate of a particle q and its momentum p do not commute, but satisfy the relation

qp − pq = iℏ1,  (6)

where 1 is a diagonal matrix with the number 1 in each diagonal element. This is a valid formulation of quantum mechanics, but it was very difficult to find the matrix elements for any but the simplest problems, such as the harmonic oscillator. The problem of the hydrogen atom was soon solved by the wizardry of W. Pauli in 1926. The problem of angular momentum is still best treated by matrix mechanics, in which the three components of the angular momentum are represented by noncommuting matrices.

Erwin Schrödinger in 1926 found a different formulation of quantum mechanics, which turned out to be most useful for solving concrete problems: A system of n particles is represented by a wave function in 3n dimensions, which satisfies a partial differential equation, the "Schrödinger equation." Schrödinger was stimulated by the work of L. V. de Broglie, who had conceived of particles as being represented by waves. This concept was beautifully confirmed in 1926 by the experiment of Davisson and Germer on electron diffraction by a crystal of nickel.

Schrödinger showed that his wave mechanics was equivalent to Heisenberg's matrix mechanics. The elements of Heisenberg's matrix could be calculated from Schrödinger's wave function. The eigenvalues of Schrödinger's wave equation gave the energy levels of the system.

It was relatively easy to solve the Schrödinger equation for specific physical systems: Schrödinger solved it for the hydrogen atom, as well as for the Zeeman and the Stark effects. For the latter problem, he developed perturbation theory, useful for an enormous number of problems.

A third formulation of quantum mechanics was found by P. A. M. Dirac (1926), while he was still a graduate student at Cambridge. It is more general than either of the former ones and has been used widely in the further development of the field.

In 1926 Born presented his interpretation of Schrödinger's wave function: |ψ(x₁, x₂, …, x_n)|² gives the probability of finding one particle at x₁, one at x₂, etc. When a single particle is represented by a wave function, this can be constructed so as to give maximum probability of finding the particle at a given position x and a given momentum p, but neither of them can be exactly specified. This point was emphasized by Heisenberg in his uncertainty principle: classical concepts of motion can be applied to a particle only to a limited extent. You cannot describe the orbit of an electron in the ground state of an atom. The uncertainty principle has been exploited widely, especially by Niels Bohr.

Pauli, in 1927, amplified the Schrödinger equation by including the electron spin, which had been discovered by G. Uhlenbeck and S. Goudsmit in 1925. Pauli's wave function has two components, spin up and spin down, and the spin is represented by a 2×2 matrix. The matrices representing the components of the spin, σ_x, σ_y, and σ_z, do not commute. In addition to their practical usefulness, they are the simplest operators for demonstrating the essential difference between classical and quantum theory.

Dirac, in 1928, showed that spin follows naturally if the wave equation is extended to satisfy the requirements of special relativity, and if at the same time one requires that the differential equation be first order in time. Dirac's wave function for an electron has four components,
more accurately 2×2: one factor refers to spin, the other to the sign of the energy, which in relativity is given by

E = ±c(p² + m²c²)^(1/2).  (7)

States of negative energy make no physical sense, so Dirac postulated that nearly all such states are normally occupied. The few that are empty appear as particles of positive electric charge. Dirac first believed that these particles represented protons. But H. Weyl and J. R. Oppenheimer, independently, showed that the positive particles must have the same mass as electrons. Pauli, in a famous article in the Handbuch der Physik (Pauli, 1933), considered this prediction of positively charged electrons a fundamental flaw of the theory. But within a year, in 1933, Carl Anderson and S. Neddermeyer discovered positrons in cosmic radiation.

Dirac's theory not only provided a natural explanation of spin, but also predicted that the interaction of the spin magnetic moment with the electric field in an atom is twice the strength that might be naively expected, in agreement with the observed fine structure of atomic spectra.

Empirically, particles of zero (or integral) spin obey Bose-Einstein statistics, and particles of spin ½ (or half-integral spin), including the electron, proton, and neutron, obey Fermi-Dirac statistics, i.e., they obey the Pauli exclusion principle. Pauli showed that spin and statistics should indeed be related in this way.

II. APPLICATIONS

1926, the year when I started graduate work, was a wonderful time for theoretical physicists. Whatever problem you tackled with the new tools of quantum mechanics could be successfully solved, and hundreds of problems, from the experimental work of many decades, were around, asking to be tackled.

A. Atomic physics

The fine structure of the hydrogen spectrum was derived by Dirac. Energy levels depend on the principal quantum number n and the total angular momentum j, orbital momentum plus spin. Two states of orbital momentum l = j + ½ and j − ½ are degenerate.

The He atom had been an insoluble problem for the old
(1913–1924) quantum theory. Using the Schrödinger equation, Heisenberg solved it in 1927. He found that the wave function, depending on the position of the two electrons, Ψ(r₁, r₂), could be symmetric or antisymmetric in r₁ and r₂. He postulated that the complete wave function should be antisymmetric, so a Ψ symmetric in r₁ and r₂ should be multiplied by a spin wave function antisymmetric in σ₁ and σ₂, hence belonging to a singlet state (parahelium). An antisymmetric spatial wave function describes a state with total spin S = 1, hence a triplet state (orthohelium). Heisenberg thus obtained a correct qualitative description of the He spectrum. The ground state is singlet, but for the excited states, the triplet has lower energy than the singlet. There is no degeneracy in orbital angular momentum L.

Heisenberg used a well-designed perturbation theory and thus got only qualitative results for the energy levels. To get accurate numbers, Hylleraas (in 1928 and later) used a variational method. The ground-state wave function is a function of r₁, r₂, and r₁₂, the distance of the two electrons from each other. He assumed a "trial function" depending on these variables and on some parameters, and then minimized the total energy as a function of these parameters. The resulting energy was very accurate. Others improved the accuracy further.

I also was intrigued by Hylleraas's success and applied his method to the negative hydrogen ion H⁻. I showed that this ion was stable. It is important for the outer layers of the sun and in the crystal LiH, which is ionic: Li⁺ and H⁻.

For more complicated atoms, the first task was to obtain the structure of the spectrum. J. von Neumann and E. Wigner applied group theory to this problem, and could reproduce many features of the spectrum, e.g., the feature that, for a given electron configuration, the state of highest total spin S and highest total orbital momentum L has the lowest energy. In the late 1920s J. Slater showed
that these (and other) results could be obtained without group theory, by writing the wave function of the atom as a determinant of the wave functions of the individual electrons. The determinant form ensured antisymmetry.

To obtain the electron orbitals, D. R. Hartree in 1928 considered each electron as moving in the potential produced by the nucleus and the charge distribution of all the other electrons. Fock extended this method to include the effect of the antisymmetry of the atomic wave function. Hartree calculated numerically the orbitals in several atoms, first using his and later Fock's formulation.

Group theory is important in the structure of crystals, as had been shown long before quantum mechanics. I applied group theory in 1929 to the quantum states of an atom inside a crystal. This theory has also been much used in the physical chemistry of atoms in solution.

With modern computers, the solution of the Hartree-Fock system of differential equations has become straightforward. Once the electron orbitals are known, the energy levels of the atom can be calculated. Relativity can be included. The electron density near the nucleus can be calculated, and hence the hyperfine structure, isotope effect, and similar effects of the nucleus.

B. Molecules

A good approximation to molecular structure is to consider the nuclei fixed and calculate the electron wave function in the field of these fixed nuclei (Born and Oppenheimer, 1927). The eigenvalue of the electron energy, as a function of the position of the nuclei, can then be considered as a potential in which the nuclei move.

Heitler and F. London, in 1927, considered the simplest molecule, H₂. They started from the wave function of two H atoms in the ground state and calculated the energy perturbation when the nuclei are at a distance R. If the wave function of the electrons is symmetric with respect to the position of the nuclei, the energy is lower than that of two separate H atoms, and they
could calculate the binding energy of H₂ and the equilibrium distance R₀ of the two nuclei. Both agreed reasonably well with observation. At distances R < R₀, there is repulsion. If the wave function is antisymmetric in the positions of the two electrons, there is repulsion at all distances. For a symmetric wave function, more accurate results can be obtained by the variational method. Linus Pauling was able to explain molecular binding generally, in terms of quantum mechanics, and thereby helped create theoretical chemistry—see Herschbach (1999).

An alternative to the Heitler-London theory is the picture of molecular orbitals: Given the distance R between two nuclei, one may describe each electron by a wave function in the field of the nuclei. Since this field has only cylindrical symmetry, electronic states are described by two quantum numbers, the total angular momentum and its projection along the molecular axis; for example, pσ means a state of total angular momentum 1 and component 0 in the direction of the axis.

C. Solid state

In a metal, the electrons are (reasonably) free to move between atoms. In 1927 Arnold Sommerfeld showed that the concept of free electrons obeying the Pauli principle could explain many properties of metals, such as the relation between electric and thermal conductivity.

One phenomenon in solid-state physics, superconductivity, defied theorists for a long time. Many wrong theories were published. Finally, the problem was solved by John Bardeen, Leon Cooper, and Robert Schrieffer: pairs of electrons travel together, at a considerable distance from each other, interacting strongly with lattice vibrations [see Schrieffer and Tinkham (1999)].

D. Collisions

The old (pre-1925) quantum theory could not treat collisions. In quantum mechanics the problem was solved by Born. If a particle of momentum p₁ collides with a system Ψ₀, excites that system to a state Ψₙ, and thereby gets scattered to a momentum p₂, then in first approximation the probability of
this process is proportional to the absolute square of the matrix element,

M = ∫ exp[i(p₁ − p₂)·r/ℏ] Ψ₀ Ψₙ* V dτ,  (8)

where V is the interaction potential between particle and system, and the integration goes over the coordinates of the particle and all the components of the system. More accurate prescriptions were also given by Born. There is an extensive literature on the subject. Nearly all physics beyond spectroscopy depends on the analysis of collisions—see Datz et al. (1999).

E. Radiation and electrodynamics

The paradox of radiation's being both quanta and waves is elucidated by second quantization. Expanding the electromagnetic field in a Fourier series,

F(r,t) = Σ_k a_k exp[i(k·r − ωt)],  (9)

one can consider the amplitudes a_k as dynamic variables, with a conjugate variable a_k†. They are quantized, using the commutation relation

a_k a_k† − a_k† a_k = 1.  (10)

The energy of each normal mode is ℏω(n + ½). Emission and absorption of light is straightforward. The width of the spectral line corresponding to the transition of an atomic system from state m to state n was shown by E. Wigner and V. Weisskopf to be

Δω = ½(γ_m + γ_n),  (11)

where γ_m is the rate of decay of state m (reciprocal of its lifetime) due to spontaneous emission of radiation.

Heisenberg and Pauli (1929, 1930) set out to construct a theory of quantum electrodynamics, quantizing the electric field at a given position r_m. Their theory is self-consistent, but it had the unfortunate feature that the electron's self-energy, i.e., its interaction with its own electromagnetic field, turned out to be infinite. E. Fermi (1932) greatly simplified the theory by considering the Fourier components of the field, rather than the field at a given point. But the self-energy remained infinite.

This problem was only solved after World War II. The key was the recognition, primarily due to Kramers, that the self-energy is necessarily included in the mass of the electron and cannot be separately
measured. The only observable quantity is then a possible change of that self-energy when the electron is subject to external forces, as in an atom. J. Schwinger (1948) and R. Feynman (1948), in different ways, then constructed relativistically covariant, and finite, theories of quantum electrodynamics. Schwinger deepened the existing theory, while Feynman invented a completely novel technique which at the same time simplified the technique of doing actual calculations. S. Tomonaga had earlier (1943) found a formulation similar to Schwinger's. F. J. Dyson (1949) showed the equivalence of Schwinger's and Feynman's approaches and then showed that the results of the theory are finite in any order of α = e²/ℏc. Nevertheless the perturbation series diverges, and infinities will appear in order exp(−ℏc/e²). An excellent account of the development of quantum electrodynamics has been given by Schweber (1994).

It was very fortunate that, just before Schwinger and Feynman, experiments were performed that showed the intricate effects of the self-interaction of the electron. One was the discovery, by P. Kusch and H. M. Foley (1948), that the magnetic moment of the electron is slightly (by about 1 part in 1000) greater than predicted by Dirac's theory. The other was the observation by W. Lamb and R. Retherford (1947) that the 2s and the 2p₁/₂ states of the H atom do not coincide, 2s having an energy higher by the very small amount of about 1000 megahertz (the total binding energy being of the order of 10⁹ megahertz).

All these matters were discussed at the famous Shelter Island Conference in 1947 (Schweber, 1994). Lamb, Kusch, and I. I. Rabi presented experimental results, Kramers his interpretation of the self-energy, and Feynman and Schwinger were greatly stimulated by the conference. So was I, and I was able within a week to calculate an approximate value of the Lamb shift. After extensive calculations, the Lamb shift could be reproduced within the accuracy of theory and experiment. The Lamb shift was also observed in He⁺, and calculated for the 1s electron in Pb; in the latter atom, its contribution is several Rydberg units.

The "anomalous" magnetic moment of the electron was measured in ingenious experiments by H. Dehmelt and collaborators. They achieved fabulous accuracy, viz., for the ratio of the anomalous to the Dirac moments

a = 1 159 652 188 (4) × 10⁻¹²,  (12)

where the 4 in parentheses gives the probable error of the last quoted figure. T. Kinoshita and his students have evaluated the quantum electrodynamic (QED) theory with equal accuracy, and deduced from Eq. (12) the fine-structure constant

α⁻¹ = ℏc/e² = 137.036 000.  (13)

At least three other, independent methods confirm this value of the fine-structure constant, albeit with less precision. See also Hughes and Kinoshita (1999).

III. INTERPRETATION

Schrödinger believed at first that his wave function gives directly the continuous distribution of the electron charge at a given time. Bohr opposed this idea vigorously. Guided by his thinking about quantum-mechanical collision theory (see Sec. II.D),
Born proposed that the absolute square of the wave function gives the probability of finding the electron, or other particle or particles, at a given position. This interpretation has been generally accepted.

For a free particle, a wave function (wave packet) may be constructed that puts the main probability near a position x and near a momentum p. But there is the uncertainty principle: position and momentum cannot be simultaneously determined accurately; their uncertainties are related by

Δx Δp ≥ ℏ/2.  (14)

The uncertainty principle says only this: that the concepts of classical mechanics cannot be directly applied in the atomic realm. This should not be surprising, because the classical concepts were derived by studying the motion of objects weighing grams or kilograms, moving over distances of meters. There is no reason why they should still be valid for objects weighing 10⁻²⁴ g or less, moving over distances of 10⁻⁸ cm or less.

The uncertainty principle has profoundly misled the lay public: they believe that everything in quantum theory is fuzzy and uncertain. Exactly the reverse is true. Only quantum theory can explain why atoms exist at all. In a classical description, the electrons would hopelessly fall into the nucleus, emitting radiation in the process. With quantum theory, and only with quantum theory, we can understand and explain why chemistry exists—and, due to chemistry, biology. (A small detail: in the old quantum theory, we had to speak of the electron "jumping" from one quantum state to another when the atom emits light. In quantum mechanics, the orbit is sufficiently fuzzy that no jump is needed: the electron can move continuously in space; at worst it may change its velocity.)
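The bound in Eq. (14) can be checked with a short numerical sketch (not part of Bethe's essay; the width parameter and grid are arbitrary illustrative choices). A Gaussian wave packet saturates the bound, so computing Δx directly from the packet, and Δp from its Fourier transform, should give a product of exactly ℏ/2:

```python
# Numerical check of the uncertainty relation, Eq. (14), for a
# Gaussian wave packet psi(x) ~ exp(-x^2 / (4 sigma^2)), which
# saturates the bound: Delta-x * Delta-p = hbar/2.
# Units with hbar = 1; sigma and the grid are arbitrary choices.
import numpy as np

hbar = 1.0
sigma = 0.7                                   # width parameter of the packet

x = np.linspace(-40.0, 40.0, 2**14)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4.0 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize on the grid

# Delta-x from the position-space probability density |psi|^2
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# Momentum-space wave function via FFT; |phi(p)|^2 is the momentum
# probability density (grid-origin phase factors drop out of |phi|^2)
p = 2.0 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)
dp = 2.0 * np.pi * hbar / (x.size * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2.0 * np.pi * hbar)
prob_p = np.abs(phi)**2
mean_p = np.sum(p * prob_p) * dp
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(delta_x, delta_p, delta_x * delta_p)    # product = 0.5 = hbar/2
```

Narrowing sigma shrinks Δx but widens Δp in exact proportion, so the product never drops below ℏ/2; any non-Gaussian packet gives a strictly larger product.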
Perhaps more radical than the uncertainty principle is the fact that you cannot predict the result of a collision, but merely the probability of various possible results. From a practical point of view, this is not very different from statistical mechanics, where we also consider only probabilities. But of course, in quantum mechanics the result is unpredictable in principle.

Several prominent physicists found it difficult to accept the uncertainty principle and related probability predictions, among them de Broglie, Einstein, and Schrödinger. De Broglie tried to argue that there should be a deterministic theory behind quantum mechanics. Einstein forever thought up new examples that might contradict the uncertainty principle and confronted Bohr with them; Bohr often had to think for hours before he could prove Einstein wrong.

Consider a composite object that disintegrates into A + B. The total momentum P_A + P_B and the coordinate separation x_A − x_B can be measured and specified simultaneously. For simplicity let us assume that P_A + P_B is zero, and that x_A − x_B is a large distance. If in this state the momentum of A is measured and found to be P_A, we know that the momentum of B is definitely −P_A. We may then measure x_B, and it seems that we know both P_B and x_B, in apparent conflict with the uncertainty principle. The resolution is this: the measurement of x_B imparts a momentum to B (as in a γ-ray microscope) and thus destroys the previous knowledge of P_B, so the two measurements have no predictive value.

Nowadays these peculiar quantum correlations are often discussed in terms of an "entangled" spin-zero state of a composite object AB, composed of two spin-one-half particles, or two oppositely polarized photons (Bohm and Aharonov). Bell showed that the quantum-mechanical correlations between two such separable systems, A and B, cannot be explained by any mechanism involving hidden variables. Quantum correlations between
separated parts A and B of a composite system have been demonstrated by some beautiful experiments (e.g., Aspect et al.). The current status of these issues is further discussed by Mandel (1999) and Zeilinger (1999), in this volume.

REFERENCES

Born, M., and J. R. Oppenheimer, 1927, Ann. Phys. (Leipzig) 84, 457.
Datz, S., G. W. F. Drake, T. F. Gallagher, H. Kleinpoppen, and G. zu Putlitz, 1999, Rev. Mod. Phys. 71 (this issue).
Dirac, P. A. M., 1926, Ph.D. thesis (Cambridge University).
Dyson, F. J., 1949, Phys. Rev. 75, 486.
Fermi, E., 1932, Rev. Mod. Phys. 4, 87.
Feynman, R. P., 1948, Phys. Rev. 76, 769.
Heisenberg, W., 1971, Physics and Beyond (Harper and Row, New York).
Heisenberg, W., and W. Pauli, 1929, Z. Phys. 56, 1.
Heisenberg, W., and W. Pauli, 1930, Z. Phys. 59, 168.
Herschbach, D., 1999, Rev. Mod. Phys. 71 (this issue).
Hughes, V., and T. Kinoshita, 1999, Rev. Mod. Phys. 71 (this issue).
Kusch, P., and H. M. Foley, 1948, Phys. Rev. 73, 412; 74, 250.
Lamb, W. E., and R. C. Retherford, 1947, Phys. Rev. 72, 241.
Mandel, L., 1999, Rev. Mod. Phys. 71 (this issue).
Pauli, W., 1933, Handbuch der Physik, 2nd Ed. (Springer, Berlin).
Schrieffer, J. R., and M. Tinkham, 1999, Rev. Mod. Phys. 71 (this issue).
Schweber, S. S., 1994, QED and the Men Who Made It (Princeton University Press, Princeton, NJ), pp. 157–193.
Schwinger, J., 1948, Phys. Rev. 73, 416.
Tomonaga, S., 1943, Bull. IPCR (Rikenko) 22, 545 [Engl. translation 1946].
Zeilinger, A., 1999, Rev. Mod. Phys. 71 (this issue).

S474 R. E. Slusher: Laser technology

the computer in the 1000 Mb/sec range will be required. Note the coincidence of these rates and that both are increasing exponentially. It is clear that there will continue to be an exponentially increasing demand for information transmission capacity. In response to this demand, the information capacity on a single optical fiber during the past four years, between 1994 and 1998, has increased 160-fold in commercial systems, from 2.5 Gbits/sec to 400 Gbits/sec. This amazing increase has been achieved by using up to 100 different laser
wavelengths (dense wavelength division multiplexing, DWDM) on each fiber. The data rates at a single wavelength have increased from tens of Mbits/sec in the 1970s to 10 Gbits/sec at present, and 40 Gbits/sec will probably be in use before the turn of the century. This information revolution is reshaping the global community just as strongly as the printing press revolution and the industrial revolution reshaped their worlds.

Two of the basic technologies that support the information revolution are the semiconductor diode laser and the erbium-doped fiber optical amplifier. The low noise, high intensity, and narrow linewidths associated with laser oscillators and amplifiers are absolutely essential to optical fiber communications systems. Wider-bandwidth incoherent sources like light-emitting diodes or thermal sources fall short of the needed intensities and spectral linewidths by many orders of magnitude.

Semiconductor laser diodes were first demonstrated in 1962 at GE, IBM, and Lincoln Laboratories as homojunction devices based on III-V materials. A history of these early diode lasers and references can be found in Agrawal and Dutta (1993). When the first heterojunction GaAs/AlGaAs room-temperature, continuous-wave diode lasers were operated in 1970 by Hayashi and Panish (Hayashi et al., 1970) at Bell Labs and Alferov (Alferov et al., 1970) in Russia, their lifetimes were measured in minutes. Diode laser reliabilities have increased dramatically since that time. Diode laser lifetimes at present are estimated to be hundreds of years, and the wavelength stabilities are greater than 0.1 nm over a period of 25 years. These amazing stabilities are necessary for the new DWDM systems with over 100 wavelength channels spanning 100 nm wavelength ranges. As the optimum wavelength for low loss in silica fiber increased from 800 nm to 1500 nm during the 1970s, diode laser wavelengths followed by evolving from GaAs to the InGaAsP system. During the late 1980s and early 1990s,
quantum wells replaced the bulk semiconductor in the active optical gain region in order to enhance the laser operating characteristics. A schematic diagram of a present-day telecommunications diode laser integrated with an electro-absorption modulator is shown in the accompanying figure; the overall dimensions are less than 1 mm. An elevated-refractive-index region and a buried distributed feedback (DFB) grating below the active quantum wells define the laser optical cavity and the laser wavelength, respectively. Fiber optic communication systems also rely strongly on the erbium-doped fiber amplifier developed in the late 1980s (Urquhart, 1988).

Rev. Mod. Phys., Vol. 71, No. 2, Centenary 1999

FIG. A schematic diagram of a semiconductor laser diode with an electro-absorption modulator used in optical communications systems. (Courtesy of R. L. Hartman, Lucent Technologies.)

These amplifiers have high gain, typically near 25 dB, and low noise figures near the 3 dB quantum noise limit for a linear phase-insensitive amplifier. The gain in these amplifiers can be equalized over bandwidths of up to 100 nm, covering nearly a quarter of the low-loss silica fiber window between 1.2 and 1.6 µm wavelengths. Optical fiber systems can be made "transparent" over thousands of kilometers using erbium-doped fiber amplifiers spaced at distances of approximately 80 km, where fiber losses approach 20 dB. As the century closes we are rapidly approaching fundamental physical limits for lasers, optical amplifiers, and silica fibers. Laser linewidths are in the 10 MHz range, limited by fundamental spontaneous-emission fluctuations and gain-index coupling in semiconductor materials. The number of photons in a detected bit of information is approaching the fundamental limit of approximately 60 photons required when using coherent-state laser light fields in order to maintain an error rate of less than 1 part in 10⁹. A bandwidth utilization efficiency of 1 bit/sec/Hz has recently been demonstrated. Optical amplifier bandwidths do not yet span
the 400 nm width of the low-loss fiber window, but they are expanding rapidly. Fundamental limits imposed by nonlinear and dispersive distortions in silica fibers make transmission at data rates over 40 Gbits/sec very difficult over long distances. Optical solitons can be used to balance these distortions, but even with solitons fundamental limits remain for high-bit-rate, multiwavelength systems. The channel capacity limits imposed by information theory are on the horizon. It is clearly a challenge for the next century to find even more information transmission capacity for the ever-expanding desire to communicate.

V. MATERIALS PROCESSING AND LITHOGRAPHY

High-power CO2 and Nd:YAG lasers are used for a wide variety of engraving, cutting, welding, soldering, and 3D prototyping applications. RF-excited, sealed-off CO2 lasers are commercially available that have output powers in the 10 to 600 W range and lifetimes of over 10 000 hours.

R. E. Slusher: Laser technology

Laser cutting applications include sailcloth, parachutes, textiles, airbags, and lace. The cutting is quick and accurate, there is no edge discoloration, and a clean fused edge is obtained that eliminates fraying of the material. Complex designs are engraved in wood, glass, acrylic, rubber stamps, printing plates, plexiglass, signs, gaskets, and paper. Three-dimensional models are quickly made from plastic or wood using a CAD (computer-aided design) computer file. Fiber lasers (Rossi, 1997) are a recent addition to the materials processing field. The first fiber lasers were demonstrated at Bell Laboratories using crystal fibers in an effort to develop lasers for undersea lightwave communications. Doped fused-silica fiber lasers were soon developed. During the late 1980s researchers at Polaroid Corp. and at the University of Southampton invented cladding-pumped fiber lasers. The glass surrounding the guiding core in these lasers serves both to guide the light in the single-mode core and as a multimode conduit for
pump light, whose propagation is confined to the inner cladding by a low-refractive-index outer polymer cladding. Typical operation schemes at present use a multimode 20 W diode laser bar that couples efficiently into the large-diameter inner cladding region and is absorbed by the doped core region over its entire length (typically 50 m). The dopants in the core of the fiber that provide the gain can be erbium for the 1.5 µm wavelength region or ytterbium for the 1.1 µm region. High-quality cavity mirrors are deposited directly on the ends of the fiber. These fiber lasers are extremely efficient, with overall efficiencies as high as 60%. The beam quality and delivery efficiency are excellent, since the output is formed as the single-mode output of the fiber. These lasers now have output powers in the 10 to 40 W range and lifetimes of nearly 5000 hours. Current applications of these lasers include annealing micromechanical components, cutting of 25 to 50 µm thick stainless steel parts, selective soldering and welding of intricate mechanical parts, marking plastic and metal components, and printing applications. Excimer lasers are beginning to play a key role in the photolithography used to fabricate VLSI (very large scale integrated circuit) chips. As the IC (integrated circuit) design rules decrease from 0.35 µm (1995) to 0.13 µm (2002), the wavelength of the light source used for photolithographic patterning must correspondingly decrease from 400 nm to below 200 nm. During the early 1990s mercury arc radiation produced enough power at sufficiently short wavelengths of 436 nm and 365 nm for high production rates of IC devices patterned to 0.5 µm and 0.35 µm design rules, respectively. As the century closes, excimer laser sources with average output powers in the 200 W range are replacing the mercury arcs. The excimer laser linewidths are broad enough to prevent speckle pattern formation, yet narrow enough, less than 1 nm wavelength width, to avoid major problems with dispersion in
optical imaging. The krypton fluoride (KrF) excimer laser radiation at 248 nm wavelength supports 0.25 µm design rules, and the ArF laser transition at 193 nm will probably be used beginning with 0.18 µm design rules. At even smaller design rules, down to 0.1 µm by 2008, the F2 excimer laser wavelength at 157 nm is a possible candidate, although there are no photoresists developed for this wavelength at present. Higher harmonics of solid-state lasers are also possibilities as high-power UV sources. At even shorter wavelengths it is very difficult for optical elements and photoresists to meet the requirements of lithographic systems. Electron beams, x rays, and synchrotron radiation are still being considered for the 70 nm design rules anticipated for 2010 and beyond.

VI. LASERS IN MEDICINE

Lasers with wavelengths from the infrared through the UV are being used in medicine for both diagnostic and therapeutic applications (Deutsch, 1997). Lasers interact with inhomogeneous tissues through absorption and scattering. Absorbers include melanin skin pigment, hemoglobin in the blood, and proteins; at longer infrared wavelengths the primary absorber is water. Dyes can also be introduced into tissue for selective absorption. For example, in photodynamic therapy hematoporphyrin dye photosensitizers that absorb in the 630 nm to 650 nm wavelength range can be introduced into the system and used to treat cancer tumors by local laser irradiation in the urinary tract or esophagus. Scattering in tissue limits the penetration of radiation; at near-infrared wavelengths, for example, scattering limits the penetration depth to a few millimeters. Scattering processes are being studied in the hope of obtaining high-resolution images for breast cancer screening. Laser interaction with tissue depends on whether the laser is pulsed or CW. Short laser pulses, in which no thermal diffusion occurs during the pulse, can be used to confine the depth of laser effects. This
phenomenon, along with selective tuning of the laser wavelength, is used in dermatology for treatment of skin lesions and in the removal of spider veins, tattoos, and hair. Nonlinear interactions also play an important role. For example, laser-induced breakdown is used for fragmentation of kidney and gallbladder stones. Since the interior of the eye is easily accessible with light, ophthalmic applications were the first widespread uses of lasers in medicine. Argon lasers have now been used for many years to treat retinal detachment and bleeding from retinal vessels. The widespread availability of the CO2 and Nd:YAG lasers that cut tissue while simultaneously coagulating the blood vessels led to their early use in general surgery. The Er:YAG laser has recently been introduced for dental applications with the promise of a dramatic reduction in pain, certainly a welcome contribution from laser technology. Diagnostic procedures using the laser are proliferating rapidly. Some techniques are widely used in clinical practice. For example, the flow cytometer uses two focused laser beams to sequentially excite fluorescence of cellular particles or molecules flowing in a liquid through a nozzle. The measured fluorescent signals can be used for cell sorting or analysis. Routine clinical applications of flow cytometry include immunophenotyping and DNA content measurement. Flow cytometers are used to physically separate large numbers of human chromosomes. The sorted chromosomes provide DNA templates for the construction of recombinant DNA libraries for each of the human chromosomes; these libraries are an important component of genetic engineering. A new medical imaging technique based on laser technology, called optical coherence tomography (OCT), is achieving spatial resolution of tissues in the 10 µm range (Tearney et al., 1997). Ultrasound and magnetic resonance imaging (MRI) resolutions are limited to the 100 µm to 1 mm range. The new high-resolution
OCT technique is sensitive enough to detect abnormalities associated with cancer and atherosclerosis at early stages. The OCT technique is similar to ultrasound, but it makes use of a bright, broad-spectral-bandwidth infrared light source with a coherence length near 10 µm, resulting in at least an order of magnitude improvement in resolution over acoustic and MRI techniques. The source can be a superluminescent diode, a Cr:forsterite laser, or a mode-locked Ti:sapphire laser. OCT performs optical ranging in tissue by using a fiber optic Michelson interferometer. Since interference is observed only when the optical path lengths of the sample and reference arms of the interferometer match to within the coherence length of the source, precision distance measurements are obtained. The amplitude of the reflected/scattered signal as a function of depth is obtained by varying the length of the reference arm of the interferometer. A cross-sectional image is produced when sequential axial reflection/scattering profiles are recorded while the beam position is scanned across the sample. Recent studies have shown that OCT can image architectural morphology in highly scattering tissues such as the retina, skin, the vascular system, the gastrointestinal tract, and developing embryos. An image of a rabbit trachea obtained using this technique coupled with a catheter-endoscope is shown in the accompanying figure. OCT is already being used clinically for diagnosis of a wide range of retinal macular diseases. An elegant and novel optical technique using spin-polarized gases (Mittleman et al., 1995) is being explored to enhance MRI images of the lungs and brain. Nuclear spins in Xe and 3He gases are aligned using circularly polarized laser radiation. These aligned nuclei have magnetizations nearly 10⁵ times that of the protons normally used for MRI imaging. Xenon is used as a brain probe since it is soluble in lipids. In regions like the lungs, which do not contain sufficient water for high-contrast MRI images, 3He
provides the high-contrast images. One can even watch 3He flow in the lungs for functional diagnostics.

VII. LASERS IN BIOLOGY

Laser applications in biology can be illustrated with two examples, laser tweezers and two-photon microscopy.

FIG. Optical coherence tomography images of a rabbit trachea in vivo. (a) This image allows visualization of distinct architectural layers, including the epithelium (e), the mucosal stroma (m), cartilage (c), and adipose tissue (a). The trachealis muscle (tm) can be easily identified. (b) Corresponding histology. Bar, 500 µm.

When collimated laser light is focused near or inside a small dielectric body like a biological cell, refraction of the light in the cell causes a lensing effect. A force is imparted to the cell by transfer of momentum from the bending light beam. Arthur Ashkin at Bell Laboratories (Ashkin, 1997) found that by varying the shape and position of the focal volume in a microscopic arrangement, a cell can be easily moved or trapped by these "laser tweezer" forces using light intensities near 10 W/cm². At these light levels and wavelengths in the near infrared, there is no significant damage or heating of cell constituents. Laser tweezers are now being used to move subcellular bodies like mitochondria within a cell (Sheetz, 1998). Tweezer techniques can also be used to stretch DNA strands into linear configurations for detailed studies. Two laser beams can be used to stabilize a cell, and then a third laser beam at a different wavelength can be used for spectroscopic or dynamic studies. Pulsed lasers are being used as "scissors" to make specific modifications in cell structures or to make small holes in cell membranes so that molecules or genetic materials can be selectively introduced into the cell. Scanning confocal and two-photon optical microscopy are excellent examples of the contribution of laser technology to biology. Three-dimensional imaging of nerve cells nearly 200 µm into
functioning brains and developing embryos is now a reality. Practical confocal microscopes came into wide use in the late 1980s as a result of reliable laser light sources. The resolution of the lens in a confocal microscope is used both to focus the light to a diffraction-limited spot and then again to image primarily the signal photons, i.e., those that are not strongly scattered by the sample, onto an aperture. Even though high-resolution 3D images are obtained, this single-photon scheme is a wasteful use of the illuminating light, since a major fraction is scattered away from the aperture or is absorbed by the sample. In fluorescent microscopy, photodamage to the fluorophore is an especially limiting factor for single-photon confocal microscopy. Multiphoton scanning confocal microscopy was introduced in 1990 and solves many of the problems of single-photon techniques. A typical two-photon microscope uses short 100 fs pulses from a Ti:sapphire mode-locked laser at average power levels near 10 mW. The high intensity at the peak of each pulse causes strong two-photon absorption and fluorescence only within the small focal volume, and all the fluorescent radiation can be collected for high efficiency. The exciting light is chosen for minimal single-photon absorption and damage, so that the two-photon technique has very high resolution, low damage, and deep penetration. A beautiful two-photon fluorescent image of a living Purkinje cell in a brain slice is shown in the accompanying figure (Denk and Svoboda, 1997). Neocortical pyramidal neurons in layers of the rat somatosensory cortex have been imaged at depths of 200 µm below the brain surface. Even more impressive are motion pictures of embryo development. Embryo microscopy is particularly sensitive to photodamage, and the two-photon technique is opening new vistas in this field.

FIG. (Color) Two-photon confocal microscope fluorescent image of a living Purkinje cell in a brain slice. The cell dimensions are of the order of 100 µm.

VIII. LASERS IN PHYSICS

Laser technology has stimulated a renaissance in spectroscopies throughout the electromagnetic spectrum. The narrow laser linewidth, large powers, short pulses, and broad range of wavelengths have allowed new dynamic and spectral studies of gases, plasmas, glasses, crystals, and liquids. For example, Raman scattering studies of phonons, magnons, plasmons, rotons, and excitations in 2D electron gases have flourished since the invention of the laser. Nonlinear laser spectroscopies have resulted in great increases in precision measurement, as described in an article in this volume (Hänsch and Walther, 1999). Frequency-stabilized dye lasers and diode lasers precisely tuned to atomic transitions have resulted in ultracold atoms and Bose-Einstein condensates, also described in this volume (Wieman et al., 1999). Atomic-state control and measurements of atomic parity nonconservation have reached a precision that allows tests of the standard model in particle physics as well as crucial searches for new physics beyond the standard model. In recent parity nonconservation experiments (Wood et al., 1997) Cs atoms are prepared in specific electronic states as they pass through two red diode laser beams. These prepared atoms then enter an optical cavity resonator where the atoms are excited to a higher energy level by high-intensity green light injected into the cavity from a frequency-stabilized dye laser. Applied electric and magnetic fields in this excitation region can be reversed to create a mirrored environment for the atoms. After the atom exits the excitation region, the atom excitation rate is measured by a third red diode laser. Very small changes in this excitation rate with a mirroring of the applied electric and magnetic fields indicate parity nonconservation. The accuracy of the parity nonconservation measurement has evolved over several decades to a level of 0.35%. This measurement accuracy
corresponds to the first definitive isolation of nuclear-spin-dependent atomic parity violation. At this accuracy level it is clear that a component of the electron-nuclear interaction is due to a nuclear anapole moment, a magnetic moment that can be visualized as being produced by toroidal current distributions in the nucleus. Lasers are also contributing to the field of astrophysics. A Nd:YAG laser at 1.06 µm wavelength will be used in the first experiments to attempt detecting gravitational waves from sources like supernovas and orbiting neutron stars. These experiments use interferometers that should be capable of measuring a change in length between the two interferometer arms to a precision of one part in 10²². A space warp of this magnitude is predicted for gravitational radiation from astrophysical sources. The terrestrial experiments are called LIGO (Laser Interferometer Gravitational-Wave Observatory) in the U.S. and GEO in Europe. A space-based experiment called LISA (Laser Interferometer Space Antenna) is also in progress. The LIGO interferometer arms are each 4 km long. A frequency-stable, low-noise, high-spatial-beam-quality laser at a power level of 10 W is required for the light source. Cavity mirrors form resonators in each interferometer arm that increase the power in the cavities to the kilowatt level. Four Nd:YAG rods, each side-pumped by two 20 W diode bars, amplify the single-frequency output of a nonplanar ring oscillator from 700 mW to at least 10 W. Achieving the required sensitivity for detecting gravitational waves means resolving each interferometer fringe to one part in 10¹¹, a formidable, but hopefully achievable, goal.

IX. FUTURE LASER TECHNOLOGIES

The free-electron laser and laser accelerators are examples of developing laser technologies that may have a large impact in the next century. The free-electron laser (FEL) is based on optical gain from a relativistic electron beam undulating in a periodic magnetic field (Sessler
and Vaughan, 1987). Electron-beam accelerators based on superconducting microwave cavities are being developed at a new FEL center at Jefferson Laboratory. These accelerating cavities generate high fields in the 10 to 20 MeV/m range and allow very efficient generation of FEL light that can be tuned from the infrared to the deep ultraviolet with average power levels in the kilowatt range (Kelley et al., 1996). At present a kilowatt-average-power infrared FEL is near completion, and an upgrade to a powerful, deep-UV FEL is being planned. At these immense powers, a number of new technologies may be commercially interesting. Short, intense FEL pulses may allow rapid thermal annealing and cleaning of metal surfaces. Pulsed laser annealing may result in nearly an order of magnitude increase in hardness for machine tools. The high average FEL powers may be sufficient to make commercial production of laser-enhanced tools a reality. Another large market that requires high powers for processing of large volumes is polymer wraps and cloth. In this case intense FEL pulses can induce a wide range of modified polymer properties, including antibacterial polymer surfaces that could be used for food wrappings, and clothing with pleasing textures and improved durability. High average powers and wavelength tunability are also important for patterning of the large-area micromachining tools used to imprint patterns in plastic sheets. Petawatt-class lasers may provide the basis for a new generation of particle accelerators. The frequency of the microwave-field accelerators being used at present will probably be limited by self-generated wakes to less than 100 GHz, where the accelerating fields reach the 100 MeV/m range. Intense laser beams are being used to generate much higher fields, in the 100 GeV/m range (Modena et al., 1995). For example, one technique uses two laser beams whose difference frequency is tuned to the plasma frequency of a gas ionized by the laser.
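The scale of these plasma-wave fields can be checked against the standard cold wave-breaking estimate E ≈ m_e c ω_p / e, a textbook plasma-accelerator formula rather than a number taken from this article; the electron density used below (10^18 cm^-3) is an assumed, typical value for laser wakefield experiments, and for electrons 1 GV/m of field corresponds to 1 GeV/m of energy gain:

```python
import math

# SI physical constants
e    = 1.602176634e-19   # elementary charge [C]
m_e  = 9.1093837015e-31  # electron mass [kg]
c    = 2.99792458e8      # speed of light [m/s]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def plasma_frequency(n_e):
    """Electron plasma frequency omega_p [rad/s] for density n_e [m^-3]."""
    return math.sqrt(n_e * e**2 / (eps0 * m_e))

def wave_breaking_field(n_e):
    """Cold nonrelativistic wave-breaking field E = m_e c omega_p / e [V/m]."""
    return m_e * c * plasma_frequency(n_e) / e

# Assumed density: 1e18 electrons/cm^3 = 1e24 m^-3
E = wave_breaking_field(1e24)
print(f"E ~ {E / 1e9:.0f} GV/m")  # prints: E ~ 96 GV/m
```

Since the field scales as the square root of the density, densities a few times higher reach the >100 GV/m regime discussed here.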
Accelerating fields as high as 160 GeV/m can be generated between the periodic space-charge regions of the plasma wave. The propagation velocities of these gigantic fields can be engineered to match the relativistic velocities of the accelerated particles. Much work remains in order to achieve practical accelerators, but proof of principle has already been achieved. Developing laser technologies and their contributions to science are too numerous to cover adequately in this brief review. Laser communications between satellite networks, laser-propelled spacecraft, and laser fusion are additional examples of developing laser technologies. In the basic sciences there are many new experiments that are being enabled by laser technology, including correction for atmospheric distortions in astronomy using laser reflections from the sodium layer in the upper atmosphere and studies of quantum electrodynamics using ultra-intense laser beams. Just as it was hard to envision the potential of laser technologies in the 1960s and 1970s, it seems clear that we cannot now envision the many new developments in lasers and their applications that the next century will see. Our new laser light source is sure to touch us all, both in our ordinary lives and in the world of science.

REFERENCES

Agrawal, G. P., and N. K. Dutta, 1993, Semiconductor Lasers (Van Nostrand Reinhold, New York).
Alferov, Zh. I., V. M. Andreev, D. Z. Garbuzov, Yu. V. Zhilyaev, E. P. Morozov, E. L. Portnoi, and V. G. Trofim, 1971, Sov. Phys. Semicond. 4, 1573.
Anderson, S. G., 1998, Laser Focus World 34, 78.
Ashkin, A., 1997, Proc. Natl. Acad. Sci. USA 94, 4853.
Ausubel, J. H., and H. D. Langford, 1987, Eds., Lasers: Invention to Application (National Academy, Washington, DC).
Basov, N. G., and A. M. Prokhorov, 1954, J. Exp. Theor. Phys. 27, 431.
Bloembergen, N., 1999, Rev. Mod. Phys. 71 (this issue).
Bromberg, J. L., 1988, Phys. Today 41, 26.
Choquette, K. D., and H. Q. Hou, 1997, Proc. IEEE 85, 1730.
Deutsch, T. F., 1997, Proc. IEEE 85, 1797.
DenBaars, S. P., 1997, Proc. IEEE 85, 1740.
Denk, W., and K. Svoboda, 1997, Neuron 18, 351.
Einstein, A., 1917, Phys. Z. 18, 121.
Forden, G. E., 1997, IEEE Spectr. 34(9), 40.
Gibbs, R., 1998, Laser Focus World 34, 135.
Gordon, J. P., H. J. Zeiger, and C. H. Townes, 1954, Phys. Rev. 95, 282.
Gordon, J. P., H. J. Zeiger, and C. H. Townes, 1955, Phys. Rev. 99, 1264.
Hänsch, T. W., and H. Walther, 1999, Rev. Mod. Phys. 71 (this issue).
Hayashi, I., M. B. Panish, P. W. Foy, and S. Sumski, 1970, Appl. Phys. Lett. 17, 109.
Javan, A., W. R. Bennett, Jr., and D. R. Herriott, 1961, Phys. Rev. Lett. 6, 106.
Kelley, M. J., H. F. Dylla, G. R. Neil, L. J. Brillson, D. P. Henkel, and H. Helvajian, 1996, Proc. SPIE 2703, 15.
Maiman, T. H., 1960, Nature (London) 187, 493.
Mittleman, H., R. D. Black, B. Saam, G. D. Cates, G. P. Cofer, R. Guenther, W. Happer, L. W. Hedlund, G. A. Johnson, K. Juvan, and J. Schwartz, 1995, Magn. Reson. Med. 33, 271.
Modena, A., Z. Najmudin, A. E. Dangor, C. E. Clayton, K. A. Marsh, C. Joshi, V. Malka, C. B. Darrow, C. Danson, D. Neely, and F. N. Walsh, 1995, Nature (London) 377, 606.
Ready, J., 1997, Industrial Applications of Lasers (Academic, San Diego).
Rossi, B., 1997, Laser Focus World 33, 78.
Schawlow, A. L., and C. H. Townes, 1958, Phys. Rev. 112, 1940.
Sessler, A. M., and D. Vaughan, 1987, Am. Sci. 75, 34.
Sheetz, M. P., 1998, Ed., Laser Tweezers in Cell Biology, in Methods in Cell Biology (Academic, San Diego), Vol. 55.
Siegman, A. E., 1986, Lasers (University Science Books, Mill Valley, CA).
Steele, R., 1998, Laser Focus World 34, 72.
Tearney, G. J., M. E. Brezinski, B. E. Bouma, S. A. Boppart, C. Pitris, J. F. Southern, and J. G. Fujimoto, 1997, Science 276, 2037.
Townes, C. H., 1995, Making Waves (AIP, New York).
Urquhart, P., 1988, IEE Proc. J 135, 385.
Weber, J., 1953, IRE Trans. Prof. Group on Electron Devices 3, 1.
Wieman, C. E., D. E. Pritchard, and D. J. Wineland, 1999, Rev. Mod. Phys. 71 (this issue).
Wood, C. S., S. C. Bennett, D. Cho, B. P. Masterson, J. L. Roberts, C. E. Tanner, and C. E. Wieman, 1997, Science 275, 1759.

Physics and the communications industry
W. F. Brinkman and D. V. Lang

Bell Laboratories, Lucent Technologies, Murray Hill, New Jersey 07974

This review explores the relationship between physics and communications over the past 125 years. The authors find that four eras of major change in communications technology can be traced to the corresponding major discoveries of physics that directly influenced the communications industry. The four areas of physics that define these periods are electromagnetism, the electron, quantum mechanics, and quantum optics. [S0034-6861(99)02602-1]

I. INTRODUCTION

Today's communications industry is a leading force in the world's economy. Our lives would be vastly different without the telephone, fax, cell phone, and the Internet. The commonplace and ubiquitous nature of this technology, which has been evolving over a period of 125 years,¹ tends to overshadow the dominant role that physics and physicists have played in its development.² The purpose of this review is to explore this coupling and to show that the communications industry has not only made use of the results of academic physics research but has also contributed significantly to our present understanding of fundamental physics. Due to limitations of space, we shall primarily use examples from the Bell System and Bell Laboratories. The foundations of communications technology lay in the discoveries of the great physicists of the early 19th century: Oersted, Ampère, Faraday, and Henry. The telegraph was invented only seventeen years after the discovery of electromagnetism by H. C. Oersted in 1820. In spite of this connection, however, much of the early work on communications was done by inventors, such as Morse, Bell, and Edison, who had no formal scientific background. The telegraph was a fairly simple electromechanical system which did not require the development of new scientific principles to flourish commercially in the mid-19th century. Closer coupling between physics and communications occurred shortly after the invention of
the telephone by Alexander Graham Bell in 1876. The telephone concept immediately captured the imagination of the scientific community, where the "hot physics" of the period was electromagnetism and wave propagation. Over the next century, industrial physics research on communications improved the technology, as well as spawning fundamental results of interest to the broad physics community. The types of physics specifically devoted to communications have varied continuously with the evolution of the technology and discontinuously with major physics discoveries over the past 125 years. We can identify four broad eras of physics that have impacted communications: (1) the era of electromagnetism (starting in 1820); (2) the era of the electron (starting in 1897); (3) the era of quantum mechanics (starting in the 1920s); and (4) the era of quantum optics (starting in 1958).

¹Much of this review is based on the telecommunications histories compiled by Fagen (1975) and Millman (1983, 1984). The solid-state physics history is based on Hoddeson et al. (1992) and Riordan and Hoddeson (1997). The laser and optical communications history is based on Whinnery (1987), Agrawal and Dutta (1993), and Kaminow and Koch (1997).

II. THE ERA OF ELECTROMAGNETISM

This era dates from 1820, when Oersted discovered that an electric current generates a magnetic field. The first electromagnet was built in 1825, and in 1831 Faraday and Henry independently discovered that electric currents can be induced in wires moving in a magnetic field. The concept of the electromagnet was exploited in two independently invented telegraph systems in 1837: an analog system by Cooke and Wheatstone in Britain and a digital system by Samuel Morse in the U.S., using a dot-dash code that he also invented for the purpose. Because of its simple and robust design, the latter system achieved widespread commercial use in less than fifteen years. In 1861 Western Union had
completed the first transcontinental telegraph line across the U.S. The British physicist William Thomson (later Lord Kelvin) was largely responsible for the construction of the first successful transatlantic telegraph cable in 1866 and developed the mirror galvanometer needed to detect the extremely weak signals. One of the many young inventors excited by telegraph technology was Alexander Graham Bell, who was also interested in teaching the deaf to speak. Bell was working on devices that would enable deaf people to visualize the sounds they could make but not hear. He constructed a mechanical strip-chart recorder, known as a phonautograph, which used human ear bones to couple the vibrations of a diaphragm to a stylus that traced the voice oscillations on a moving glass slide covered with lampblack. He was interested in making an electromagnetic analog for "electric speech," and as early as 1874 had the concept of vibrating a small magnet with sound waves and inducing a speech current in an electromagnet. In 1875, during his concurrent work on the multiplexed "harmonic telegraph," he accidentally discovered that useful audio-frequency currents could indeed be induced by a vibrating magnetic reed over an electromagnet and could be transmitted over such a system. By coupling the vibrating magnet to a diaphragm, as in his phonautograph, he could transmit speech sounds. The figure shows this first crude telephone, which Bell patented in 1876.

FIG. Alexander Graham Bell's first telephone. Voice sounds were transmitted for the first time on June 3, 1875, over this gallows-shaped instrument. From Fagen, 1975.

For the next several years he empirically optimized the telephone while he and his financial backers started various corporations to exploit his invention. Bell was not the only person exploring speech transmission. The idea of transmitting voice
using "harmonic telegraph" technology also occurred to Elisha Gray of the Western Union Company. Working with Thomas Edison, who invented the carbon microphone in 1877, Gray developed and patented a telephone design that was technically superior to Bell's. The sound transmission efficiency of the transmitter-receiver pair in Bell's original telephone was roughly −60 dB; Edison's granular carbon microphone transmitter increased this efficiency by 30 dB. As a result, Western Union had 50 000 of Gray's telephones in service by 1881, when various patent lawsuits were settled, giving the American Bell Telephone Company complete control of the technology. Note that the rapid commercialization of the telephone occurred within only five years of its invention, a time scale usually associated with contemporary computer technology. While the telephone entrepreneurs were busy deploying the new technology, academic physicists were laying the groundwork that would be necessary to create a long-distance telephone network. James Clerk Maxwell developed the unified equations governing electromagnetism in 1864. Electromagnetic wave propagation, predicted by Maxwell, was observed in the 1880s by Heinrich Hertz. Also in the 1880s, the British physicist Oliver Heaviside applied Maxwell's theory to show that the propagation of speech currents over wires in telephone systems needed to be understood on the basis of wave propagation, not simple currents. In 1884, Lord Rayleigh showed that such speech currents would be exponentially attenuated in a telegraph cable, calculating that a 600-Hz signal would be reduced by a factor of 0.135 over 20 miles in the transatlantic telegraph cable (0.27 dB/km). His paper was very pessimistic about the prospects for use of telephone technology for long-distance communication, compared to the well-established telegraph. Results such as Rayleigh's, as well as the poor sound quality of the early telephone system, created
the first real opportunity for physics in the infant telephone industry. In 1885, the managers of the engineering department of the American Bell Telephone Company, which Bell had left in 1881 and which later became AT&T, realized that it would not be easy to improve the transmission distance and quality on long-distance lines by trial and error. Therefore they formed a research department in Boston specifically focused on the physics of electromagnetic propagation on long-distance telephone lines. Hammond Hayes, one of Harvard's first physics Ph.D.s, organized the department and over the next twenty years hired other physicists from Harvard, MIT, Yale, Chicago, and Johns Hopkins to explore this new area of applied physics. This marked the beginning of industrial research in applied physics for communications.

The problem faced by early telephone researchers can be described rather simply from the vantage point of today's understanding. The original telephone transmission system was based on telegraph technology and used a single iron wire for each circuit with a return path through the ground. The attenuation A (in dB per unit length) of such a line at telephone frequencies (~kHz) is given approximately by

    A ≈ R √(C/L) + G √(L/C),    (1)

where R is the series resistance, L the series inductance, C the shunt capacitance, and G the shunt conductance, all per unit length. At telegraph frequencies (~10 Hz) such a line is almost purely resistive. For a multiple-wire telephone cable, which has much higher capacitance and lower inductance, the attenuation is 10 to 25 times greater than that given by Eq. (1) and can be approximated by

    A ≈ √(RCf),    (2)

where f is the frequency.

The first solution to the telephone attenuation problem was to reduce R by an order of magnitude by replacing the iron wire with copper. A second problem was that the interference from outside sources picked up by the single, unshielded wire was an order of magnitude greater at telephone frequencies than for the telegraph. This was solved by
adding a second wire to make a so-called "metallic circuit." However, such solutions dramatically increased costs. First, the copper wire had to be of large enough gauge to be self-supporting, and second, twice as much copper was needed in a metallic circuit. The situation in long-distance cables was even worse, where the higher intrinsic attenuation of Eq. (2) could only be overcome by using even heavier-gauge copper (the longest cables used one-tenth-inch-diameter wire). Thus cables were only cost effective within cities to solve the congestion and weather problems illustrated in Fig.

W. F. Brinkman and D. V. Lang: Physics and the communications industry

FIG. Boston central telephone station at 40 Pearl Street after the blizzard of 1881. Inset shows the installation of underground cables, which solved the weather and congestion problems of thousands of open wires. From Fagen, 1975.

By the turn of the century, an understanding of the physics of transmission lines had produced a dramatic solution to these problems. In 1887, Heaviside developed the transmission line theory which we now understand as Eqs. (1) and (2). He pointed out that the attenuation could be reduced by increasing the series inductance per unit length, which is obvious from Eq. (1) when the first term dominates, as is usually the case. In 1899, George Campbell, at AT&T's Boston laboratory, and Michael Pupin, at Columbia University, almost simultaneously concluded that discrete inductors could simulate the continuous inductance of Eq. (1) as long as the spacing was not larger than one-tenth of a wavelength. For telephone frequencies this corresponded to a spacing of eight miles in open lines and one mile in cables. The effect of these so-called "loading coils" was dramatic. The maximum transmission distance of open lines nearly doubled, thus allowing the long-distance network to extend from New York to Denver by 1911. The effect on telephone cables was even more dramatic: the attenuation was decreased by a factor of four, and the frequency distortion of Eq. (2) was greatly reduced. Since cables were primarily used for relatively short distances, these gains were traded off against the series resistance of the conductor and allowed smaller-gauge wires to be used, thus saving an order of magnitude in the cost of copper for the same cable length.

In addition to the transmission enhancements of loaded lines, there was a strong desire to develop some kind of "repeater" that would strengthen the weakened signals and retransmit them. Electromechanical repeaters had been commonly used in the telegraph system for many years and were the reason that the telegraph was quickly extended coast to coast. The early attempts to invent telephone repeaters were electromechanical analogs of the telegraph systems. A number of inventors patented telephone repeaters that were essentially a telephone receiver placed next to a carbon microphone. The carbon microphone modulated a large current by means of speech-induced vibrations of the weakly touching particles, and the resulting gain formed the basis of an amplifier. An improved version was developed at AT&T's Boston laboratory and used commercially on the New York to Chicago route in 1905. Such electromechanical amplifiers had considerable distortion and narrow dynamic range, but their limited success served to focus research energies on finding a more useful type of amplifier. This leads us to the next era in the relationship between physics and the communications industry, when such an amplifier was indeed invented.

III. THE ERA OF THE ELECTRON

At about the same time that the telephone was invented, various electrical experiments in gas-discharge tubes were laying the physics groundwork for the next phase of communications technology. In about 1878, Sir William Crookes developed a specially designed tube to study the mysterious phenomenon of cathode rays, which caused gas-discharge tubes to glow. In 1897, the
cathode-ray studies of J. J. Thomson, Cavendish Professor of Physics at Cambridge, led to the discovery of the electron. The first thermionic vacuum-tube diode was invented by Sir John Fleming in 1904, following Edison's 1883 observation of current flow between the filament of a light bulb and a nearby electrode. Fleming's device was an excellent detector of wireless telegraph signals; wireless telegraphy had been invented by Marconi in 1896. In 1907, Lee de Forest invented the vacuum-tube triode, which for five years remained almost the exclusive province of wireless entrepreneurs. It was a much better rf detector than Fleming's diode but was never used as a power amplifier.

Meanwhile, pressure to develop a telephone amplifier intensified. In 1909, Theodore Vail, the new president of AT&T, who was rapidly acquiring the small telephone companies that had sprung up after Bell's patents expired, set forth the vision that would define the Bell System for many years to come: "One Policy, One System, Universal Service." He probably did not know that his grand vision of coast-to-coast service was not possible without an effective telephone amplifier. In about 1910, Frank B. Jewett, who was in charge of transmission engineering at AT&T's Western Electric subsidiary, guessed that an improved amplifier might be possible with the "inertialess moving parts" of the electron beams that his friend Robert Millikan at the University of Chicago had been studying. Jewett asked Millikan to recommend one of his top students trained in the "new physics" whom he could hire to do amplifier research. The new hire, H. D. Arnold, started in 1911 and within two years achieved the goal of the useful vacuum-tube amplifier that Vail's vision required. In 1913, Arnold's vacuum-tube amplifier, shown in Fig. 3, was being used in commercial service. As a result, the Panama-Pacific International Exposition in San Francisco opened to great fanfare in 1915 with the dedication of the first transcontinental telephone circuit and a
conversation between President Woodrow Wilson in the White House, Alexander Graham Bell in New York, and Bell's original assistant Thomas Watson in San Francisco. This unqualified success of physics convinced AT&T officials that putting topnotch Ph.D. physicists to work on communications research was good business.

FIG. 3. Arnold's high-vacuum tube, first used as a telephone repeater at Philadelphia on a New York to Washington cable circuit in October 1913. Other, later models were used on the transcontinental circuit opened for service in 1915. From Fagen, 1975.

Arnold's vacuum-tube triumph was facilitated by de Forest's triode demonstration to Bell officials in 1912. Even though the demonstration failed at the output power levels required for telephone applications, Arnold immediately saw how to improve the tube. He made three improvements that were critical: a higher vacuum, an oxide-coated cathode, and a better grid position. Irving Langmuir of General Electric had independently pursued a high-vacuum design for improving the triode in 1913.

These vacuum-tube amplifiers not only dramatically increased the transmission distance of telephone lines but also made long-distance wireless telephony possible. In 1915, using high-power vacuum-tube amplifiers, Bell System scientists transmitted speech for the first time by wireless telephony from Arlington, Virginia, to both Paris and Honolulu. Wireless telephony was not widely used for two-way communications, however, until the development of microwaves during World War II and of cellular wireless technology in the latter part of the 20th century.

Vacuum tubes also made possible dramatic increases in the capacity of the long-distance telephone system, which lowered the cost per call. This was done by multiplexing a number of calls in parallel over the same pair of wires using independently modulated carriers of different frequencies, a technique sometimes called "wired wireless."
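The carrier-multiplexing principle just described can be sketched numerically. The following is a minimal illustration, not a model of the historical apparatus: two band-limited "voice" tones stand in for speech, each is amplitude-modulated onto its own carrier, the modulated carriers share one "wire," and the receiver recovers each channel by coherent demodulation followed by low-pass filtering. All frequencies here are arbitrary illustrative choices.

```python
import numpy as np

fs = 200_000                      # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal (10 000 samples)

# Two independent "voice" signals (simple tones standing in for speech)
voice1 = np.sin(2 * np.pi * 400 * t)
voice2 = np.sin(2 * np.pi * 700 * t)

# Amplitude-modulate each onto its own carrier and sum onto one line
f1, f2 = 10_000, 20_000
line = voice1 * np.cos(2 * np.pi * f1 * t) + voice2 * np.cos(2 * np.pi * f2 * t)

def demodulate(signal, fc, fs, cutoff=2_000):
    """Coherent demodulation: mix down with the carrier, then apply a
    crude brick-wall low-pass filter in the frequency domain."""
    mixed = signal * np.cos(2 * np.pi * fc * np.arange(len(signal)) / fs)
    spectrum = np.fft.rfft(mixed)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spectrum[freqs > cutoff] = 0.0
    # Factor of 2 undoes the 1/2 from cos^2 in the mixing product
    return 2 * np.fft.irfft(spectrum, len(signal))

recovered1 = demodulate(line, f1, fs)
recovered2 = demodulate(line, f2, fs)

# Each recovered channel should closely match its original
err1 = np.max(np.abs(recovered1 - voice1))
err2 = np.max(np.abs(recovered2 - voice2))
print(err1, err2)
```

Because the channels occupy disjoint frequency bands around their carriers, the cross terms produced by mixing land well above the low-pass cutoff and are rejected, which is the same reason the "wired wireless" systems needed Campbell's bandpass "wave filters" at the receiver.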
The idea of multiplexing was originally developed for the telegraph, using electromechanical resonators to generate and separate the various carrier tones. Indeed, Bell had been experimenting with just such a "harmonic telegraph" when he invented the telephone. However, applying the same principle to voice-frequency signals required carriers of much higher frequency, ~5–25 kHz. This was not possible without vacuum tubes for oscillators, modulators, and amplifiers. In addition, technology from the era of electromagnetism was needed for the bandpass filters to demultiplex the signals at the receiver. In about 1910, the Bell System theoretician George Campbell, who also introduced loading coils, invented the so-called "wave filter." The value of this invention was not immediately appreciated, however, and the concept was not patented until 1917. It is of interest to note that today's technology for increasing transmission capacity on a glass fiber is wavelength-division multiplexing (WDM), which, although digital, is nevertheless almost an exact optical analog of the electronic multiplexing developed in the 1920s.

The fundamental physics underlying telephone transmission was fairly well established by 1920. Thus communications-related physics research turned to an exploration of the underlying materials. The primary areas were thermionic emission, noise in vacuum tubes, and the magnetic materials used in transformers and loading coils. During the 1920s, physicists at the Western Electric laboratory, which became Bell Telephone Laboratories in 1925, made a number of important physics contributions in these areas. In 1925, J. B. Johnson made the first observation of the thermal noise predicted in 1918 by W. Schottky. Thermionic emission was first described theoretically by O. W. Richardson in Britain in 1901, and the benefit of alkaline-earth oxides in enhancing thermionic emission was observed by A. Wehnelt in Germany in 1904. About 10
years later, C. D. Child at Colgate and Langmuir at GE independently described the physics of space-charge-limited current, which is essential to the operation of a vacuum tube. Nevertheless, by 1920 there was still a long-standing controversy as to whether the enhanced performance of oxide-coated cathodes was due to chemical or physical effects. This was settled by Arnold, who showed in 1920 that the enhanced emission was indeed due to thermionic emission. In 1917, Clinton J. Davisson, a student of Richardson's at Princeton, came to Western Electric in order to understand the fundamental physics of why alkaline-earth oxides had a lower work function than tungsten. As a result of the invention of the ionization vacuum gauge by Oliver E. Buckley in 1916, it was possible to characterize the vacuum conditions necessary for this early surface-physics research, thus allowing Davisson and Lester Germer to do some of the key physics on thermionic emission from oxides and tungsten in the early 1920s.

Physics research on magnetic materials was also important at Bell Labs during the 1920s. Permalloy had been developed at Western Electric for use in the loading coils discussed earlier. The first scientific paper on permalloy was given by Arnold and G. W. Elmen at the 1923 Spring Meeting of the American Physical Society. Another highlight of this period was Richard Bozorth's internationally acclaimed work on the Barkhausen effect.

Perhaps the most famous physics experiment at Bell Labs during the 1920s was the observation of the wave nature of the electron, for which Davisson received the 1937 Nobel Prize in Physics. This is indicative of the systematic business practice of recruiting the very best physicists to applied research in the communications industry. In order to attract and retain top physics talent, it was necessary to allow such researchers the freedom to pursue the fundamental questions uncovered in their applied
research. The history of Davisson's Nobel Prize is instructive. At the same time that Davisson was working on thermionic emission, he was also pursuing experiments to study the atomic structure of magnetic metals using inelastic electron scattering. This was relevant to both magnetic materials and the problem of secondary electron emission in vacuum tubes. During these experiments, Davisson noticed that some of the electrons were elastically scattered. While pursuing this unexpected result, he noticed some angular structure in the scattering pattern. In 1925, W. Elsasser suggested that Davisson's data could be evidence of "de Broglie waves," which had been proposed in 1923. During a 1926 trip to Europe, Davisson obtained a copy of Schrödinger's paper on wave mechanics, which had been published earlier that year. Upon returning to Bell Labs, he and Germer repeated the experiments with a single crystal of nickel, looking at the specific angles where electrons would be diffracted according to Schrödinger's equation. The results led to their famous 1927 paper, which also introduced low-energy electron diffraction (LEED) as an important tool for surface physics. Similar stories hold for Karl Jansky, who discovered radio astronomy in 1933 while studying noise in transatlantic radio telephony, and for Arno Penzias and Robert Wilson, who discovered the 3 K microwave background radiation of the big bang in 1964 while studying noise in telecommunications satellites.

IV. THE ERA OF QUANTUM MECHANICS

The fundamental-physics roots of this era began with the explosive growth of quantum mechanics in Europe in the 1920s. The first application of this new theory to solids was Felix Bloch's 1928 quantum theory of metals. The foundations of semiconductor physics quickly followed with Rudolf Peierls's 1929 theory of the positive Hall effect due to holes, Brillouin's 1930 concept that band gaps are related to the Bragg scattering conditions, and Alan Wilson's 1931 band theory of semiconductors,
including the effects of doping. A major development in the physics of real materials was E. Wigner and F. Seitz's 1933 approximate method for calculating band structure. This marked the beginning of a shift from the fundamental studies of the 1920s to the practical solid-state physics which would dominate the second half of the 20th century.

The experimental roots of semiconductor physics date from the 19th century. In the 1870s, at almost exactly the same time that Bell was inventing the telephone, physicists working on selenium, copper oxide, and various metallic sulfides (all materials we know today to be semiconductors) were discovering diode rectification behavior, the Hall effect, photoconductivity, and the photovoltaic effect. In fact, even the idea of inventing a solid-state analog of the vacuum tube had occurred to a number of people during the 1920s and 1930s: J. E. Lilienfeld patented the field-effect concept in 1926, and Walter Brattain and Joseph Becker at Bell Labs contemplated putting a grid into copper oxide rectifiers during the 1930s.

By the late 1930s, solid-state physics was well established and had the potential for major applications. In a move reminiscent of earlier eras, Mervin Kelly, Bell Labs' Director of Research, sought out the best of the new breed of solid-state physicists to explore the potential of semiconductors for communications; in 1936 he hired William Shockley from John Slater's group at MIT. However, the effort to make devices of possible use in communications, e.g., solid-state switches or amplifiers, did not start seriously until 1946, when nonmilitary research resumed at Bell Labs after World War II. Shockley was put in charge of a new solid-state research group specifically chartered to obtain a fundamental understanding of the device potential of silicon and germanium, which had been developed into excellent microwave detectors during the war. One of his first moves was to hire John Bardeen. The
subsequent path to success was as rapid as Arnold's development of the vacuum-tube amplifier in 1912. The point-contact transistor was demonstrated within two years, by the end of 1947. The birth of the transistor is covered in a number of 50th anniversary reviews (Brinkman et al., 1997; Riordan and Hoddeson, 1997; Ross, 1998), including one in this volume (Riordan et al., 1999, which includes a photograph of the first transistor). Therefore our focus will be to review the relationship of the transistor to the technology changes that have revolutionized communications over the past 50 years.³

The application of the transistor to communications occurred in two phases. The first, during the 1950s, was simply the replacement of vacuum tubes in various circuits. The first commercial use of the transistor in the Bell System was in 1954; the first fully "transistorized" product (the E6 repeater) was in 1959. There were some benefits of size and power reduction, but the functionality and design of the telephone system were not changed. In the second phase, the transistor made possible digital transmission and switching, an entirely new communications technology that revolutionized the industry. The concept of digital voice communications, known as pulse code modulation (PCM), was first demonstrated in 1947 at Bell Labs. This early demonstration was based on voice coding ideas developed in the 1930s and telephone encryption devices used by the military during the war. Commercial use of PCM, however, was not possible without transistors to make the complex circuits practical.

³We should note in passing, however, that Bell Labs' broader focus on physics research in areas relevant to communications (magnetism, semiconductors, and surfaces) grew substantially during the postwar period and is still a major effort today. Even though much of this work is directly relevant to communications, it is well beyond the scope of this review to cover it extensively. One of the highlights is P. W. Anderson's theoretical work on magnetism and disordered solids during the 1950s, for which he received the 1977 Nobel Prize in Physics.

The first digital transmission system, the so-called T1 carrier, was introduced in 1962 and carried 24 digital voice channels with an overall bit rate of 1.5 Mbit/sec. Even though a combined digital switching and transmission system was demonstrated in 1959 at Bell Labs, the first commercial use of fully digital switching and transmission was not until the introduction of the 4ESS switch for long-distance traffic in 1976.

FIG. Size reduction of cellular telephones as a result of progress in increasing the number of transistors in an integrated circuit chip, as expressed by Moore's Law.

Digital technology has profoundly affected the communications industry. It could be argued that the 1984 breakup of the Bell System was due to the ease with which various competitors could develop digital telephone systems, as opposed to the complex electromechanical switching systems that required the substantial resources of the Bell System to develop and maintain. An additional factor was the close relationship between the digital technology for communications and for computers. Thus the 1958 invention of the integrated circuit by J. S. Kilby at TI, along with major improvements by R. N. Noyce at Fairchild and the 1960 invention of the MOSFET (metal-oxide-semiconductor field-effect transistor) by D. Kahng and M. M. Atalla at Bell Labs, affected both industries in fundamental ways. As a result of the exponential growth in the number of transistors per chip (Moore's Law), the cost and size of electronic devices have changed by orders of magnitude since the 1960s. In communications this made possible the wireless revolution that we are seeing today. The figure shows the reductions in size of cellular telephone equipment since the invention of the concept by Bell Labs in 1960; dramatic reductions have also
occurred in the cost. In a similar way, digital electronics and integrated circuits have made the "old" technology of facsimile transmission, introduced by AT&T in 1925, into the practical communication medium of today's ubiquitous fax machine.

At this moment we stand on the threshold of yet another revolution brought on by the transistor and integrated circuit: the Internet. The widespread use of personal computers and the Internet has made data networking one of the hottest growth industries. Because of digital technology, telecommunications is being redefined to include data and video, as well as voice. It is ironic that after the telephone made the telegraph obsolete, the latest technology is essentially reverting back to an ultrafast version of the telegraph, in which coded digital messages, not analog voice, dominate the system. However, this revolution is not based solely on the transistor and integrated circuit. We require the ultrahigh transmission bandwidth of fiber optics to complete our story. This brings us to the fourth era in the relationship between physics and communications: learning how to communicate with light.

V. THE ERA OF QUANTUM OPTICS

Alexander Graham Bell invented the photophone in 1880, just a few years after the telephone. This device used a mirrored diaphragm to modulate a beam of sunlight with speech vibrations, analogous to the modulation of an electric current in Bell's telephone. The receiver was a selenium photocell, which had been discovered in 1876. The fiber-optic communications systems of today differ in only three major respects: (1) a glass waveguide, invented by the British physicist John Tyndall in 1870, replaces free-space propagation; (2) high-speed, multichannel digital modulation replaces the single analog voice channel; and (3) a coherent light source replaces the sun. The coherence of the source is critical for
two reasons. First, coherent light has much less noise than incoherent light, and second, coherent light can be focused into a high-power beam without much loss in intensity. In fact, the Bell Labs communications engineer W. A. Tyrrell pointed out in 1951 the advantages of using optical frequencies instead of microwaves for communications, but noted the lack of the required coherent source. The major impact of physics has been to provide the coherent light source: the laser.

The date commonly cited for the invention of the laser is the theory of A. Schawlow and C. H. Townes in 1958 (see the articles in this issue by Lamb et al. and by Slusher). However, the fundamental physics of stimulated emission was first recognized in 1917 by Einstein. Stimulated emission was first observed at microwave frequencies (24 GHz) in the ammonia beam maser by Townes in 1955. N. Bloembergen demonstrated in 1956 the importance of a three-level system in obtaining the population inversion for maser amplification. In 1957, Bell Labs developed the first solid-state maser, which was used as a low-noise microwave amplifier in the Telstar communications satellite in 1962. With such a background, Schawlow and Townes's 1958 theory for extending stimulated emission to optical frequencies was not terribly surprising. However, it was two years before the first laser (the pulsed ruby laser) was demonstrated by Theodore Maiman at Hughes Research Labs in 1960. Subsequent lasers of various types came at a rapid pace in the early 1960s.

The laser most directly relevant to communications is the semiconductor laser, demonstrated independently in 1962 by GE, IBM, and Lincoln Laboratory. This type of laser can be fabricated from a variety of direct-band-gap III-V semiconductors, which were introduced by H. Welker in 1952. However, it was not until 1970 that laser diodes could be made to operate continuously at room temperature, a prerequisite for optical communications. In order to explain this and other improvements in laser diodes
for communications, we must return to the previous era of semiconductor physics and the concept of the heterojunction (see Riordan et al., 1999, in this issue). A heterojunction is formed when two semiconductors of different band gap are joined together. William Shockley pointed out in 1951 that the performance of a bipolar transistor would be enhanced if the emitter had a wider band gap than the base, i.e., if the emitter-base junction were also a heterojunction. In 1963, it was independently suggested by Herbert Kroemer and by Zh. I. Alferov and R. F. Kazarinov that a laser having such heterojunctions would be superior to the homojunction diodes first demonstrated. Materials of different band gaps, however, generally also have different lattice constants, making good crystal growth difficult, if not impossible. Therefore heterojunctions remained a theoretical curiosity until 1967, when J. M. Woodall, H. Rupprecht, and G. D. Pettit at IBM produced good-quality GaAs/AlxGa1−xAs heterojunctions by liquid-phase epitaxy. In 1970, I. Hayashi and M. B. Panish at Bell Labs and Alferov in Russia obtained continuous operation at room temperature using double-heterojunction lasers consisting of a thin layer of GaAs sandwiched between two layers of AlxGa1−xAs. This design achieved better performance by confining both the injected carriers (by the band-gap discontinuity) and the emitted photons (by the refractive-index discontinuity). The double-heterojunction concept has been modified and improved over the years, but the central idea of confining both the carriers and the photons by heterojunctions is the fundamental approach used in all semiconductor lasers.

The second essential ingredient for optical communications is low-loss silica fiber. Fiber-optic illuminators were developed in the mid 1950s, but this type of glass has an attenuation of about 100 dB/km and would only work for communications systems a few hundred meters in length. A major breakthrough
occurred in 1970, when F. P. Kapron and co-workers at Corning produced the first fiber with a loss less than 20 dB/km at 850 nm, the GaAs/AlxGa1−xAs laser wavelength. This marked the beginning of optical fiber communications technology. By 1976, fiber loss was reduced to 1.0 dB/km at 1300 nm, increasing the distance between repeaters to tens of miles. The first commercial trial of optical communications was carried out in Chicago in 1978; by 1982, fifty systems were installed in the Bell System, corresponding to 25 000 total miles of fiber. In 1985, the loss in silica fiber reached a low of 0.15 dB/km at a wavelength of 1550 nm. By this time, the systems had migrated to the lower-loss long-wavelength region of 1300 to 1550 nm, and the laser diodes were being fabricated in the InP/InGaAsP materials system. The first transatlantic optical fiber system (TAT-8), operating at 1300 nm, was installed in 1988.

The capacity of commercial optical fiber communications systems has increased about as fast as Moore's Law for integrated circuits, doubling every two years, as shown in the figure. The recent rate of increase for experimental systems has been even faster. In the latest of a sequence of major advances, Bell Labs reported in 1998 the transmission of 1.0 Terabit/sec over a distance of 400 km on a single fiber using 100 different wavelengths, each modulated at 10 Gb/sec, the so-called dense wavelength-division-multiplexing (DWDM) technology.

FIG. Progress in optical fiber transmission capacity. The growth in commercial capacity is due to the increasing bit rate of electronic time-division multiplexing (ETDM) and the introduction of multichannel wavelength-division multiplexing (WDM). Experimental systems used ETDM, WDM, and optical time-division multiplexing (OTDM) to achieve record-setting results.

Three additional elements of physics are necessary, however, to complete our story and bring us to the technology of the 1990s: quantum wells, optical amplifiers, and nonlinear optics.

A quantum well is a double-heterojunction sandwich of semiconductors of different band gap, discussed above, with the central, lower-band-gap layer so thin (typically less than 100 Å) that the quantum states of the confined carriers dominate the properties of the material. This was first demonstrated by R. Dingle, W. Wiegmann, and C. H. Henry at Bell Labs in 1974. The first quantum-well laser was made at Bell Labs in 1975, and a patent on the quantum-well laser was granted to Dingle and Henry in 1976. Such structures were made possible by the crystal-growth technique of molecular-beam epitaxy, developed into a high-quality epitaxy method by A. Y. Cho in the early 1970s. By using multiple quantum wells in the active layer of a laser diode, the properties needed for communications systems are dramatically improved. In fact, most multiple-quantum-well lasers today also use intentionally strained quantum-well layers, a method first proposed by G. C. Osbourn at Sandia in 1982 to modify the band structure and further enhance device performance. The most recent device, which is essential for wavelength-division multiplexing systems, is a laser and electroabsorption modulator integrated on the same chip. In this case both the laser and the modulator are based on quantum wells, with the modulator using the quantum-confined Stark effect discovered at Bell Labs by D. A. B. Miller and co-workers in 1985.

The physics of quantum wells is another example in which communications research has impacted the broad physics community. The 1978 invention of modulation doping by Stormer and co-workers at Bell Labs made possible ultrahigh-mobility two-dimensional electron systems, which have dominated the research on mesoscopic and quantum-confined systems over the past twenty years. In 1982, D. C. Tsui, H. L. Stormer, and A. C. Gossard at Bell Labs discovered the fractional quantum Hall effect using
high-mobility quantum-well structures. Research on the fractional quantum Hall effect is still a hot topic in fundamental physics today (see Stormer and Tsui, 1999, in this issue). Another recent example of fundamental quantum-well research is the quantum cascade laser, invented by J. Faist, F. Capasso, and Cho at Bell Labs in 1994. This is the first semiconductor laser to use the quantum-well states, rather than the band gap, to define the wavelength of laser emission. Such lasers currently operate at wavelengths longer than those needed for optical communications; however, the insights gained will most likely impact communications systems in the future.

The optical amplifier is a major mid-1980s advance in communication systems. Such amplifiers are, of course, based on the same physics of stimulated emission as the laser and maser, but it was not at all clear whether such an amplifier would have the required low-noise and low-cross-talk properties essential for communications systems. Several amplifier designs were explored, including laser diodes with antireflective coatings and Raman amplifiers, but the best amplifier for communications systems proved to be optically pumped erbium-doped silica fiber. The considerable body of physics knowledge developed in connection with the rare-earth-ion lasers (such as Nd:YAG) invented in the mid 1960s greatly accelerated the development of the erbium-doped fiber amplifier (EDFA). Basic research at Bell Labs and elsewhere on the spectroscopy of rare-earth ions in various matrices, including silica glass, was a necessary precursor for the rapid development of EDFAs. By the late 1980s, EDFAs were widely used in experimental systems. Optical amplifiers make WDM systems possible, since the parallel wavelength channels can be simultaneously boosted without each having to be separated and routed through an expensive optoelectronic repeater typical of older fiber-optic systems.

Nonlinear optics was discovered in 1961 with the observation of
two-photon absorption, almost immediately after the first laser was constructed in 1960. Even though the optical nonlinearities of silica are very small, such effects become important in communications systems because of the long distances involved. In 1975, Roger Stolen observed four-photon mixing in silica fibers. This third-order nonlinear effect, analogous to intermodulation distortion in electrical systems, emerged as a serious problem for WDM systems in the 1990s. The problem arose from the typical practice of designing optically amplified transmission systems with the chromatic dispersion of the fiber carefully adjusted to zero, to prevent pulse spreading as a result of the finite spectral width of the modulated source. Such a system creates good phase matching between the WDM channels and, hence, generates four-wave mixing products that seriously degrade the performance of very long systems. The solution was to introduce a new fiber design, the so-called True Wave fiber invented by Bell Labs in 1993, which introduces a small, controlled amount of dispersion to break up the phase matching and prevent four-wave mixing. This makes WDM technology practical.

S488 W. F. Brinkman and D. V. Lang: Physics and the communications industry

The extremely high transmission capacity possible with today's optical fiber systems (a terabit/sec corresponds to over 15 million simultaneous phone calls), along with the extremely large number of transistors on a single chip (over one billion expected in 2001), will undoubtedly lead us into future eras of communications technology, which will be poised to benefit from future discoveries of physics.

VI. CONCLUSIONS

We have shown the critical impact of the four major eras of physics on the communications industry over the past 125 years. The industry rapidly applied the major physics discoveries during this period and thus made dramatic improvements in communications technology, with demonstrable benefits to society. We note that in all four major
eras of physics—electromagnetism, the electron, quantum mechanics, and quantum optics—the fundamental discoveries were applied by the communications industry within 15–20 years. Further, it is evident that the communications industry's practice of employing the best physicists in both basic and applied research resulted in the successes noted in this review. Indeed, the development of marketable communications technology would most certainly not have occurred so rapidly had this not been the case. This reality was well understood by such visionary communications research leaders as Hayes, Jewett, Buckley, and Kelly. They recognized the potential of applying the latest physics discoveries to enhance communications technology. The current leaders of the communications industry continue in this tradition.

REFERENCES

Agrawal, G. P., and N. K. Dutta, 1993, Semiconductor Lasers (Van Nostrand Reinhold, New York).
Brinkman, W. F., D. E. Haggan, and W. W. Troutman, 1997, IEEE J. Solid-State Circuits 32, 1858.
Fagen, M. D., 1975, A History of Engineering and Science in the Bell System: The Early Years (1875–1925) (Bell Telephone Laboratories, New York).
Hoddeson, L., E. Braun, J. Teichmann, and S. Weart, 1992, Out of the Crystal Maze (Oxford, New York).
Kaminow, I. P., and T. L. Koch, 1997, Optical Fiber Telecommunications IIIA (Academic, San Diego).
Millman, S., 1983, A History of Engineering and Science in the Bell System: Physical Sciences (1925–1980) (Bell Telephone Laboratories, Murray Hill, NJ).
Millman, S., 1984, A History of Engineering and Science in the Bell System: Communications Sciences (1925–1980) (AT&T Bell Laboratories, Murray Hill, NJ).
Riordan, M., and L. Hoddeson, 1997, Crystal Fire: The Birth of the Information Age (Norton, New York).
Riordan, M., L. Hoddeson, and C. L. Herring, 1999, Rev. Mod. Phys. 71 (this issue).
Ross, I. M., 1998, Proc. IEEE 86.
Stormer, H. L., and D. C. Tsui, 1999, Rev. Mod. Phys. 71 (this issue).
Whinnery, J. R., 1987, Lasers:
Invention to Application (National Academy, Washington, D.C.).
