Universe: A Grand Tour of Modern Science, Part 2 (PDF)


By the time that report was published, in 2000, Sellers was in training as a NASA astronaut, so as to observe the biosphere from the International Space Station. The systematic monitoring of the land's vegetation by unmanned spacecraft already spanned two decades. Tucker collaborated with a team at Boston University that quarried the vast amounts of data accumulated daily over that period, to investigate long-term changes. Between 1981 and 1999 the plainest trend in vegetation seen from space was towards longer growing seasons and more vigorous growth.

The most dramatic effects were in Eurasia at latitudes above 40 degrees north, meaning roughly the line from Naples to Beijing. The vegetation increased not in area, but in density. The greening was most evident in the forests and woodland that cover a broad swath of land at mid-latitudes from central Europe and across the entire width of Russia to the Far East. On average, the first leaves of spring were appearing a week earlier at the end of the period, and autumn was delayed by ten days.

At the same mid-latitudes in North America, the satellite data showed extra growth in New England's forests and in the grasslands of the upper Midwest. Otherwise the changes were scrappier than in Eurasia, and the extension of the growing season was somewhat shorter. 'We saw that year to year changes in growth and duration of the growing season of northern vegetation are tightly linked to year to year changes in temperature,' said Liming Zhou of Boston.

The colour of the sea

Life on land is about twice as productive as life in the sea, hectare for hectare, but the oceans are about twice as big. Being useful only on terra firma, the satellite vegetation index therefore covered barely half of the biosphere. For the rest, you have to gauge from space the productivity of the 'grass' of the sea, the microscopic green algae of the phytoplankton, drifting in the surface waters lit by the Sun. Research ships can sample the algae only locally and occasionally, so satellite measurements were needed even more badly than on land. Estimates of ocean productivity differed not by percentage points but by a factor of six from the lowest to the highest.

The infrared glow of plants on land is not seen in the marine plants that float beneath the sea surface. Instead the space scientists had to look at the visible colour of the sea. 'In flying from Plymouth to the western mackerel grounds we passed over a sharp line separating the green water of the Channel from the deep blue of the Atlantic,' Alister Hardy of Oxford recorded in 1956. With the benefit of an aircraft's altitude, this noted marine biologist saw phenomena known to fishermen down the ages—namely that the most fertile water is green and murky, and that the transition can be sudden. The boundary near the mouth of the English Channel marks the onset of fertilization by nutrients brought to the surface by the churning action of tidal currents.

In 1978 the US satellite Nimbus-7 went into orbit carrying a variety of experimental instruments for remote sensing of the Earth. Among them was a Coastal Zone Color Scanner, which looked for the green chlorophyll of marine plants. Despite its name, its measurements in the open ocean were more reliable than inshore, where the waters are literally muddied. In eight years of intermittent operation, the Color Scanner gave wonderful impressions of springtime blooms in the North Atlantic and North Pacific, like those seen on land by the vegetation index.
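The land 'vegetation index' referred to above is, in all likelihood, the standard normalized difference vegetation index (NDVI) built from red and near-infrared reflectance; the book does not spell the formula out, so the minimal sketch below is an assumption about which index is meant.

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized difference vegetation index from near-infrared and red reflectance."""
    return (nir - red) / (nir + red)

# Healthy, dense vegetation reflects strongly in the near-infrared and absorbs red light,
# so its NDVI approaches +1; sparse cover, bare soil or water sits near zero or below.
print(ndvi(nir=0.50, red=0.08))   # vigorous canopy  -> ~0.72
print(ndvi(nir=0.12, red=0.10))   # sparse dry cover -> ~0.09
```

Denser canopies push the index higher without any increase in vegetated area, which is the sense in which the Eurasian greening described above shows up as greater density rather than greater extent.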
New Color Scanner images for the textbooks showed high fertility in regions where nutrient-rich water wells up to the surface from below. The Equator turned out to be no imaginary line but a plainly visible green belt of chlorophyll separating the bluer, much less fertile regions in the tropical oceans to the north and south. But, for would-be bookkeepers of the biosphere, the Nimbus-7 observations were frustratingly unsystematic and incomplete.

A fuller accounting began with the launch by NASA in 1997 of OrbView-2, the first satellite capable of gauging the entire biosphere, by both sea and land. An oddly named instrument, SeaWiFS, combined the red and infrared sensors needed for the vegetation index on land with an improved sea-colour scanner. SeaWiFS surveyed the whole world every two days. After three years the scientists were ready to announce the net primary productivity of all the world's plants, marine and terrestrial, deduced from the satellite data. The answer was 111 to 117 billion tonnes of carbon downloaded from the air and fixed by photosynthesis, in the course of a year, after subtracting the carbon that the plants' respiration returned promptly to the air.

The satellite's launch coincided with a period of strong warming in the Eastern Pacific, in the El Niño event of 1997–98. During an El Niño, the tropical ocean is depleted in mineral nutrients needed for life, hence the lower global figure in the SeaWiFS results. The higher figure was from the subsequent period of Pacific cooling: a La Niña. Between 1997 and 2000, ocean productivity increased by almost ten per cent, from 54 to 59 billion tonnes per year. In the same period the total productivity on land increased only slightly, from 57 to 58 billion tonnes of fixed carbon, although the El Niño to La Niña transition brought more drastic changes from region to region.

North–south differences were already known from space observations of vegetation ashore. The sheer extent of the northern lands explains the strong seasonal drawdown of carbon dioxide from the air by plants growing there in the northern summer. But the SeaWiFS results showed that summer productivity is higher also in the northern Atlantic and Pacific than in the more spacious Southern Ocean. The blooms are more intense. 'The summer blooms in the southern hemisphere are limited by light and by a chronic shortage of essential nutrients, especially iron,' noted Michael Behrenfeld of NASA's Laboratory of Hydrospheric Sciences, lead author of the first report on the SeaWiFS data. 'If the northern and southern hemispheres exhibited equivalent seasonal blooms, ocean productivity would be higher by some 9 billion tonnes of carbon.' In that case, ocean productivity would exceed the land's. Although uncertainties remained about the calculations for both parts of the biosphere, there was no denying the remarkable similarity in plant growth by land and by sea. Previous estimates of ocean productivity had been too low.

New slants to come

The study of the biosphere as a whole is in its infancy. Before the Space Age it could not seriously begin, because you would have needed huge armies and navies of scientists, on the ground and at sea, to make the observations. By the early 21st century the political focus had shifted from Soviet grain production to the role of living systems in mopping up man-made emissions of carbon dioxide.
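A quick arithmetic check, inferred from the figures quoted above rather than stated in the original report: the global range of 111 to 117 billion tonnes appears to be simply the sum of the ocean and land figures for the El Niño and La Niña years respectively.

```python
# Inferred bookkeeping: global net primary productivity = ocean + land (billion tonnes C per year).
ocean = {"El Nino year (1997-98)": 54, "La Nina year (1999-2000)": 59}
land  = {"El Nino year (1997-98)": 57, "La Nina year (1999-2000)": 58}

for period in ocean:
    print(f"{period}: {ocean[period]} + {land[period]} = {ocean[period] + land[period]}")
# -> 111 and 117, matching the global range quoted above
```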
The possible uses of augmented forests or fertilization of the oceans, for controlling carbon dioxide levels, were already of interest to treaty negotiators. In parallel with the developments in space observations of the biosphere, ecologists have developed computer models of plant productivity. Discrepancies between their results show how far there is to go. For example, in a study reported in 2000, different calculations of how much carbon dioxide was taken in by plants and soil in the south-east USA, between 1980 and 1993, disagreed not by some percentage points but by a factor of more than three. Such uncertainties undermine the attempts to make global ecology a more exact science.

Improvements will come from better data, especially from observations from space of the year-to-year variability in plant growth by land and sea. These will help to pin down the effects of different factors and events. The lucky coincidence of the SeaWiFS launch and a dramatic El Niño event was a case in point. A growing number of satellites in orbit measure the vegetation index and the sea colour. Future space missions will distinguish many more wavelengths of visible and infrared light, and use slanting angles of view to amplify the data. The space scientists won't leave unfinished the job they have started well.

See also Carbon cycle. For views on the Earth's vegetation at ground level, see Biodiversity. For components of the biosphere hidden from cameras in space, see Extremophiles.

Bits and qubits

On a visit to Bell Labs in New Jersey, if you met a man coming down the corridor on a unicycle it would probably be Claude Shannon, especially if he were juggling at the same time. According to his wife: 'He had been a gymnast in college, so he was better at it than you might have thought.' His after-hours capers were tolerated because he had come up single-handedly with two of the most consequential ideas in the history of technology, each of them roughly comparable to inventing the wheel on which he was performing.

In 1937, when a 21-year-old graduate student of electrical engineering at the Massachusetts Institute of Technology, Shannon saw in simple relays—electric switches under electric control—the potential to make logical decisions. Suppose two relays represent propositions X and Y. If the switch is open, the proposition is false, and if connected it is true. Put the relays in a line, in series, and a current can flow only if X AND Y are true. But branch the circuit so that the switches operate in parallel, and a current flows if either X OR Y is true. And as Shannon pointed out in his eventual dissertation, the false/true dichotomy could equally well represent the digits 0 or 1. He wrote: 'It is possible to perform complex mathematical operations by means of relay circuits.'

In the history of computers, Alan Turing in England and John von Neumann in the USA are rightly famous for their notions about programmable machinery, in the 1930s and 1940s when code-breaking and other military needs gave an urgency to innovation. Electric relays soon made way for thermionic valves in early computers, and then for transistors fashioned from semiconductors. The fact remains that the boy Shannon's AND and OR gates are still the principle of the design and operation of the microchips of every digital computer, whilst the binary arithmetic of 0s and 1s now runs the working world.

Shannon's second gigantic contribution to modern life came at Bell Labs.
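Shannon's relay argument above maps directly onto Boolean logic: treat a closed switch as true and an open one as false, and a series connection behaves as AND while a parallel connection behaves as OR. A minimal sketch in modern notation (my illustration, not Shannon's own):

```python
def series(x: bool, y: bool) -> bool:
    """Relays in series: current flows only if both switches are closed (X AND Y)."""
    return x and y

def parallel(x: bool, y: bool) -> bool:
    """Relays in parallel: current flows if either switch is closed (X OR Y)."""
    return x or y

# Truth tables, writing False/True as the digits 0 and 1, as Shannon's dissertation suggested.
for x in (False, True):
    for y in (False, True):
        print(int(x), int(y), "| series (AND):", int(series(x, y)), "| parallel (OR):", int(parallel(x, y)))
```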
By 1943 Shannon realized that his 0s and 1s could represent information of kinds going far wider than logic or arithmetic. Many questions like 'Do you love me?' invite a simple yes or no answer, which might be communicated very economically by a single 1 or 0, a binary digit. Shannon called it a bit for short. More complicated communications—strings of text for example—require more bits. Just how many is easily calculable, and this is a measure of the information content of a message.

So you have a message of so many bits. How quickly can you send it? That depends on how many bits per second the channel of communication can handle. Thus you can rate the capacity of the channel using the same binary units, and the reckoning of messages and communication power can apply to any kind of system: printed words in a telegraph, voices on the radio, pictures on television, or even a carrier pigeon, limited by the weight it can carry and the sharpness of vision of the reader of the message. In an electromagnetic channel, the theoretical capacity in bits per second depends on the frequency range. Radio with music requires tens of kilocycles per second, whilst television pictures need megacycles. Real communications channels fall short of their theoretical capacity because of interference from outside sources and internally generated noise, but you can improve the fidelity of transmission by widening the bandwidth or sending the message more slowly.

Shannon went on polishing his ideas quietly, not discussing them even with close colleagues. He was having fun, but he found writing up the work for publication quite painful. Not until 1948 did his classic paper called 'A mathematical theory of communication' appear. It won instant acceptance. Shannon had invented his own branch of science and was treading on nobody else's toes. His propositions, though wholly new and surprising, were quickly digestible and then almost self-evident.

The most sensational result from Shannon's mathematics was that near-perfect communication is possible in principle if you convert the information to be sent into digital form. For example, the light wanted in a picture element of an image can be specified, not as a relative intensity, but as a number, expressed in binary digits. Instead of being roughly right, as expected in an analogue system, the intensity will be precisely right.

Scientific and military systems were the first to make intensive use of Shannon's principles. The general public became increasingly aware of the digital world through personal computers and digitized music on compact discs. By the end of the 20th century, digital radio, television and video recording were becoming widespread. Further spectacular innovations began with the marriage of computing and digital communication, to bring all the world's information resources into your office or living room.

From a requirement for survivable communications, in the aftermath of a nuclear war, came the Internet, developed as Arpanet by the US Advanced Research Projects Agency. It provided a means of finding routes through a shattered telephone system where many links were unavailable. That was the origin of emails. By the mid-1980s, many computer scientists and physicists were using the net, and in 1990 responsibility for the system passed from the military to the US National Science Foundation.
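The statement above, that a channel's theoretical capacity depends on its frequency range and its noise, is usually written as the Shannon–Hartley formula C = B log2(1 + S/N) bits per second, for bandwidth B and signal-to-noise ratio S/N. A small sketch with illustrative numbers (the formula is standard; the example values are not from the book):

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: maximum bits per second through a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz telephone-style channel with a 30 dB signal-to-noise ratio (a factor of 1000):
capacity = channel_capacity(3_000, 1_000)
print(f"Capacity: about {capacity / 1000:.0f} kbit/s")

# A 1,000-character message at 8 bits per character then needs at least:
print(f"Minimum transmission time: {1_000 * 8 / capacity:.2f} seconds")
```

Widening the bandwidth or lifting the signal further above the noise raises the limit, which is the trade-off the passage describes.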
Meanwhile at CERN, Europe's particle physics lab in Geneva, the growing complexity of experiments brought a need for advanced digital links between scientists in widely scattered labs. It prompted Tim Berners-Lee and his colleagues to invent the World Wide Web in 1990, and within a few years everyone was joining in. The World Wide Web's impact on human affairs was comparable with the invention of steam trains in the 19th century, but more sudden.

Just because the systems of modern information technology are so familiar, it can be hard to grasp how innovative and fundamental Shannon's ideas were. A couple of scientific pointers may help. In relation to the laws of heat, his quantifiable information is the exact opposite of entropy, which means the degradation of high forms of energy into mere heat and disorder. Life itself is a non-stop battle of hereditary information against deadly disorder, and Mother Nature went digital long ago. Shannon's mathematical theory of communication applies to the genetic code and to the on–off binary pulses operating in your brain as you read these words.

Towards quantum computers

For a second revolution in information technology, the experts looked to the spooky behaviour of electrons and atoms known in quantum theory. By 2002 physicists in Australia had made the equivalent of Shannon's relays of 65 years earlier, but now the switches offered not binary bits, but qubits, pronounced cue-bits. They raised hopes that the first quantum computers might be operating before the first decade of the new century was out.

Whereas electric relays, and their electronic successors in microchips, provide the simple on/off, true/false, 1/0 options expressed as bits of information, the qubits in the corresponding quantum devices will have many possible states. In theory it is possible to make an extremely fast computer by exploiting ambiguities that are present all the time in quantum theory. If you're not sure whether an electron in an atom is in one possible energy state, or in the next higher energy state permitted by the physical laws, then it can be considered to be in both states at once. In computing terms it represents both 1 and 0 at the same time. Two such ambiguities give you four numbers, 00, 01, 10 and 11, which are the binary-number equivalents of good old 0, 1, 2 and 3. Three ambiguities give eight numbers, and so on, until with 50 you have a million billion numbers represented simultaneously in the quantum computer. In theory the machine can compute with all of them at the same time.

Such quantum spookiness spooks the spooks. The world's secret services are still engaged in the centuries-old contest between code-makers and code-breakers. There are new concepts called quantum one-time pads for a supposedly unbreakable cipher, using existing technology, and future quantum computers are expected to be able to crack many of the best codes of pre-existing kinds. Who knows what developments may be going on behind the scenes, like the secret work on digital computing by Alan Turing at Bletchley Park in England during the Second World War? A widespread opinion at the start of the 21st century held that quantum computing was beyond practical reach for the time being.
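The counting behind the 'million billion numbers' claim above is simply powers of two: n qubits span 2^n basis states. A one-line check:

```python
# n qubits span 2**n simultaneous binary numbers; 2**50 is about 1.1e15, a million billion.
for n in (1, 2, 3, 50):
    print(f"{n:>2} qubits -> {2**n:,} numbers")
```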
Quantum computing was seen as requiring exquisite delicacy in construction and operation, with the ever-present danger that the slightest external interference, or a premature leakage of information from the system, could cause the whole multiply parallel computation to cave in, like a mistimed soufflé.

Colorado and Austria were the settings for early steps towards a practical quantum computer, announced in 2003. At the US National Institute of Standards and Technology, finely tuned laser beams played on a pair of beryllium ions (charged atoms) trapped in a vacuum. If both ions were spinning the same way, the laser beams had no effect, but if they had contrary spins the beams made them prance briefly away from each other and change their spins according to subtle but predictable quantum rules. Simultaneously a team at Universität Innsbruck reported the use of a pair of calcium ions. In this case, laser beams controlled the ions individually. All possible combinations of parallel and anti-parallel spins could be created and read out. Commenting on the progress, Andrew Steane at Oxford's Centre for Quantum Computation declared, 'The experiments ... represent, for me, the first hint that there is a serious possibility of making logic gates, precise to one part in a thousand or even ten thousand, that could be scaled up to many qubits.'

Quantum computing is not just a new technology. For David Deutsch at Oxford, who developed the seminal concept of a quantum computer from 1977 onwards, it opened a road for exploring the nature of the Universe in its quantum aspects. In particular it illustrated the theory of the quantum multiverse, also promulgated by Deutsch. The many ambiguities of quantum mechanics represent, in his theory, multiple universes like our own that co-exist in parallel with what we know and experience. Deutsch's idea should not be confused with the multiple universes offered in some Big Bang theories. Those would have space and time separate from our own, whilst the universes of the quantum multiverse supposedly operate within our own cosmic framework, and provide a complexity and richness unseen by mortal eyes.

'In quantum computation the complexity of what is happening is very high so that, philosophically, it becomes an unavoidable obligation to try to explain it,' Deutsch said. 'This will have philosophical implications in the long run, just in the way that the existence of Newton's laws profoundly affected the debate on things like determinism. It is not that people actually used Newton's laws in that debate, but the fact that they existed at all coloured a great deal of philosophical discussions subsequently. That will happen with quantum computers I am sure.'

For the background on quantum mechanics, and on cryptic long-distance communication in the quantum manner, see Quantum tangles.

Black holes

'The virginity of sense,' the writer and traveller Robert Louis Stevenson called it. Only once in a lifetime can you first experience the magic of a South Sea island as your schooner draws near. With scientific discoveries, too, there are unrepeatable moments for the individuals who make them, or for the many who first thrill to the news. Then the magic fades into commonplace facts that students mug up for their exams. Even about quasars, the lords of the sky.

In 1962 a British radio astronomer, Cyril Hazard, was in Australia with a bright idea for pinpointing a mysteriously small but powerful radio star.
He would watch it disappear behind the Moon, and then reappear, using a new radio telescope at Parkes in New South Wales. Only by having the engineers remove bolts from the huge structure would it tilt far enough to point in the right direction. The station's director, John Bolton, authorized that, and even made the observations for him when Hazard took the wrong train from Sydney.

Until then, object No. 273 in the 3rd Cambridge Catalogue of Radio Sources, or 3C 273 for short, had no obvious visible counterpart at the place in the sky from which the radio waves were coming. But its position was known only roughly, until the lunar occultation at Parkes showed that it corresponded with a faint star in the Virgo constellation. A Dutch-born astronomer, Maarten Schmidt, examined 3C 273 with what was then the world's biggest telescope for visible light, the five-metre Palomar instrument in California. He smeared the object's light into a spectrum showing the different wavelengths. The pattern of lines was very unusual and Schmidt puzzled over a photograph of the spectrum for six weeks. In February 1963, the penny dropped. He recognized three features due to hydrogen, called Lyman lines, normally seen as ultraviolet light. Their wavelengths were so greatly stretched, or red-shifted, by the expansion of the Universe that 3C 273 had to be very remote—billions of light-years away. The object was far more luminous than a galaxy and too long-lived to be an exploding star. The star-like appearance meant it produced its light from a very small volume, and no conventional astrophysical theory could explain it. 'I went home in a state of disbelief,' Schmidt recalled. 'I said to my wife, "It's horrible. Something incredible happened today."'

Horrible or not, a name was needed for this new class of objects—3C 273 was the brightest but by no means the only quasi-stellar radio source. Astrophysicists at NASA's Goddard Space Flight Center who were native speakers of German and Chinese coined the name early in 1964. Wolfgang Priester suggested quastar, but Hong-yee Chiu objected that Questar was the name of a telescope. 'It will have to be quasar,' he said. The New York Times adopted the term, and that was that.

The nuclear power that lights the Sun and other ordinary stars could not convincingly account for the output of energy. Over the years that followed the pinpointing of 3C 273, astronomers came reluctantly to the conclusion that only a gravitational engine could explain the quasars. They reinvented the Minotaur, the creature that lived in a Cretan maze and demanded a diet of young people. Now the maze is a galaxy, and at the core of that vast congregation of stars lurks a black hole that feeds on gas or dismembered stars. By 1971 Donald Lynden-Bell and Martin Rees at Cambridge could sketch the theory. They reasoned that doomed matter would swirl around the black hole in a flat disk, called an accretion disk, and gradually spiral inwards like water running into a plughole, releasing energy. The idea was then developed to explain jets of particles and other features seen in quasars and in disturbed objects called active galaxies.

Apart from the most obvious quasars, a wide variety of galaxies display violent activity. Some are strangely bright centrally or have great jets spouting from their nuclei. The same active galaxies tend to show up conspicuously by radio, ultraviolet, X-rays and gamma rays, and some have jet-generated lobes of radio emission like enormous wings.
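The stretching Schmidt measured is conventionally expressed as a redshift z, the fractional increase in wavelength; the published value for 3C 273 is about z = 0.158, a 16 per cent stretch (the number comes from the standard literature, not from this excerpt). A minimal sketch:

```python
def observed_wavelength(rest_nm: float, z: float) -> float:
    """Redshift stretches every wavelength by a factor of (1 + z)."""
    return rest_nm * (1.0 + z)

z_3c273 = 0.158    # the shift Schmidt derived for 3C 273
rest = 500.0       # an illustrative rest wavelength in nanometres, not one of Schmidt's lines
print(f"{rest:.0f} nm arrives at {observed_wavelength(rest, z_3c273):.0f} nm")
```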
All these active galaxies are presumed to harbour quasars, although dust often hides them from direct view. In 1990 Rees noted the general acceptance of his ideas. 'There is a growing consensus,' he wrote, 'that every quasar, or other active galactic nucleus, is powered by a giant black hole, a million or a billion times more massive than the Sun. Such an awesome monster could be formed by a runaway catastrophe in the very heart of the galaxy. If the black hole is subsequently fuelled by capturing gas and stars from its surroundings, or if it interacts with the galaxy's magnetic fields, it can liberate the copious energy needed to explain the violent events.'

A ready-made idea

Since the American theorist John Wheeler coined the term in 1967, for a place in the sky where gravity can trap even light, the black hole has entered everyday speech as the ultimate waste bin. Familiarity should not diminish this invention of the human mind, made doubly amazing by Mother Nature's anticipation and employment of it. Strange effects on space and time distinguish modern black holes from those imagined in the Newtonian era.

In 1784 John Michell, a Yorkshire clergyman who moonlighted as a scientific genius, forestalled Einstein by suggesting that light was subject to the force of gravity. A very large star might therefore be invisible, he reasoned, if its gravity were too strong for light to escape. Since early in the 20th century, Michell's gigantic star has been replaced by matter compacted by gravity into an extremely small volume—perhaps even to a geometric point, though we can't see that far in. Surrounding the mass, at some distance from the centre, is the surface of the black hole where matter and light can pass inwards but not outwards.

This picture came first from Karl Schwarzschild who, on his premature deathbed in Potsdam in 1916, applied Albert Einstein's new theory of gravity to a single massive object like the Earth or the Sun. The easiest way to calculate the object's effects on space and time around it is to imagine all of its mass concentrated in the middle. And a magic membrane, where escaping light and time itself are brought to a halt, appears in Schwarzschild's maths. If the Earth were really squeezed to make a black hole, the distance of its surface from the massy centre would be just nine millimetres. This distance, proportional to the mass, is called the Schwarzschild radius and is still used for sizing up black holes.

Mathematical convenience was one thing, but the reality of black holes—called dark stars or collapsed stars until Wheeler coined the popular term—was something else entirely. While admiring Schwarzschild's ingenuity, Einstein himself disliked the idea. It languished until the 1960s, when astrophysicists were thinking about the fate of very massive stars. They realized that when the stars exploded at the end of their lives, their cores might collapse under a pressure that even the nuclei of atoms could not resist. Matter would disappear, leaving behind only its intense gravity, like the grin of Lewis Carroll's Cheshire Cat. Roger Penrose in Oxford, Stephen Hawking in Cambridge, Yakov Zel'dovich in Moscow and Edwin Salpeter at Cornell were among those who developed the theory of such stellar black holes.
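The nine millimetres quoted for a squeezed Earth follows from the Schwarzschild radius formula, r_s = 2GM/c^2; the formula is standard physics, though the book does not write it out. A quick check:

```python
# Schwarzschild radius r_s = 2 G M / c**2: the radius of the surface of no return for mass M.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s

def schwarzschild_radius_m(mass_kg: float) -> float:
    return 2 * G * mass_kg / c**2

earth_mass = 5.972e24    # kg
sun_mass   = 1.989e30    # kg
print(f"Earth: {schwarzschild_radius_m(earth_mass) * 1000:.1f} mm")   # roughly 9 mm, as stated above
print(f"Sun:   {schwarzschild_radius_m(sun_mass) / 1000:.1f} km")     # roughly 3 km
```

The radius scales linearly with mass, which is why the same formula serves for sizing up the million- and billion-solar-mass holes invoked for quasars.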
That theory helped to explain some of the cosmic sources of intense X-rays in our own Galaxy then being discovered by satellites. They have masses a few times greater than the Sun's, and nowadays they are called microquasars. The black hole idea was thus available, ready made, for explaining the quasars and active galaxies with far more massive pits of gravity...

[...] Astrofisica de Canarias, Alister Graham and his colleagues realized that you could judge the mass just by looking at a galaxy's overall appearance. The concentration of visible matter towards the centre depends on the black hole's mass. But whilst this provided a quick and easy way of making the estimate, it also raised questions about how the concentration of matter arose. 'We now know that any viable theory of...

[...] millimetre-wave radio telescopes at Kirkkonummi in Finland and La Silla in Chile. After observing upheavals in more than 100 galaxies for more than 20 years, Esko Valtaoja at Turku suspected that the most intensely active galaxies have more than one giant black hole in their nuclei. 'If many galaxies contain central black holes and many galaxies have merged, then it's only reasonable to expect plenty of cases...

[...] neuropsychology on which brain imagers would later build. Sadly, Luria had an unlimited caseload of brain damage left over from the Second World War. One patient was Lev Zassetsky, a Red Army officer who had part of his head shot away, on the left and towards the back. His personality was unimpaired but his vision was partly affected and he lost his ability to read and write. When Luria found that Zassetsky could...

[...] was born in north-east Spain in 1852. Cajal's real love was drawing but he had to earn a living. Failing to shine as either a shoemaker or a barber, he qualified as a physician in Zaragoza. After military service in Cuba, the young doctor had saved just enough pesetas to buy an old-fashioned microscope, with which he made elegant drawings of muscle fibres. But then Cajal married the beautiful Silveria, ... babies to keep him permanently short of cash. In particular, he couldn't afford a decent Zeiss microscope. Ten years passed before he won one as a civic reward for heroic services during a cholera outbreak. Meanwhile, Cajal's micro-anatomical drawings had earned him a professorship, first at Valencia and then at Barcelona. All this was just the preamble to the day in 1887 when, on a trip to Madrid, Cajal...

[...] of the black surface that gives a black hole its name is the prime aim of a rival American scheme for around 2020, called Maxim. It would use a technique called X-ray interferometry, demonstrated in laboratory tests by Webster Cash of Colorado and his colleagues, to achieve a sharpness of vision a million times better than Chandra's. The idea is to gather X-ray beams from the black hole and its surroundings...

[...] similar spacetime carousel around a much smaller object, a suspected stellar black hole in the Ara constellation called XTE J1650-500. After more than 30 years of controversy, calculation, speculation and investigation, the black hole theory was at last secure.

The adventure continues

Giant black holes exist in many normal galaxies, including our own Milky Way. So quasars and associated activity may be...
[...] Direct confirmation of two giant black holes in one galaxy came first from the Chandra satellite, observing NGC 6240 in the Ophiuchus constellation. This is a starburst galaxy, where the merger of two galaxies has provoked a frenzy of star formation. The idea of Gaskell and Valtaoja was beautifully confirmed.

For more on Einstein's general relativity, see Gravity. For the use of a black hole as a power...

[...] more was known about wayfinding, both from further studies of effects of brain damage and from the new brain imaging. A false trail came from animal experiments. These suggested that an internal part of the brain called the hippocampus was heavily involved in wayfinding. By brain imaging in human beings confronted with mazes, Mark D'Esposito and colleagues at the University of Pennsylvania were able to...

[...] reality of black holes was always the number one aim of X-ray astronomers,' said Yasuo Tanaka, of Japan's Institute for Space and Astronautical Science. 'Our satellite was not large, but rather...
