
INFORMATION, RANDOMNESS & INCOMPLETENESS: Papers on Algorithmic Information Theory, 2nd ed., G. J. Chaitin


INFORMATION, RANDOMNESS & INCOMPLETENESS
Papers on Algorithmic Information Theory
Second Edition

G. J. Chaitin
IBM, P.O. Box 704
Yorktown Heights, NY 10598
chaitin@watson.ibm.com

September 30, 1997

This collection of reprints was published by World Scientific in Singapore. The first edition appeared in 1987, and the second edition appeared in 1990. This is the second edition with an updated bibliography.

Acknowledgments

The author and the publisher are grateful to the following for permission to reprint the papers included in this volume:

Academic Press, Inc. (Adv. Appl. Math.)
American Mathematical Society (AMS Notices)
Association for Computing Machinery (J. ACM, SICACT News, SIGACT News)
Cambridge University Press (Algorithmic Information Theory)
Elsevier Science Publishers (Theor. Comput. Sci.)
IBM (IBM J. Res. Dev.)
IEEE (IEEE Trans. Info. Theory, IEEE Symposia Abstracts)
N. Ikeda, Osaka University (Osaka J. Math.)
John Wiley & Sons, Inc. (Encyclopedia of Statistical Sci., Commun. Pure Appl. Math.)
I. Kalantari, Western Illinois University (Recursive Function Theory: Newsletter)
MIT Press (The Maximum Entropy Formalism)
Pergamon Journals Ltd. (Comp. Math. Applic.)
Plenum Publishing Corp. (Int. J. Theor. Phys.)
I. Prigogine, Université Libre de Bruxelles (Mondes en Développement)
Springer-Verlag (Open Problems in Communication and Computation)
Verlag Kammerer & Unverzagt (The Universal Turing Machine: A Half-Century Survey)
W. H. Freeman and Company (Sci. Amer.)

Preface

God not only plays dice in quantum mechanics, but even with the whole numbers! The discovery of randomness in arithmetic is presented in my book Algorithmic Information Theory, published by Cambridge University Press. There I show that to decide whether an algebraic equation in integers has finitely or infinitely many solutions is in some cases absolutely intractable. I exhibit an infinite series of such arithmetical assertions that are random arithmetical facts, and for which it is essentially the case that the only way to prove them is to assume them as axioms. This extreme form of Gödel's incompleteness theorem shows that some arithmetical truths are totally impervious to reasoning.

The papers leading to this result were published over a period of more than twenty years in widely scattered journals, but because of their unity of purpose they fall together naturally into the present book, intended as a companion volume to my Cambridge University Press monograph. I hope that it will serve as a stimulus for work on complexity, randomness and unpredictability, in physics and biology as well as in metamathematics.

For the second edition, I have added the article "Randomness in arithmetic" (Part I), a collection of abstracts (Part VII), a bibliography (Part VIII), and, as an Epilogue, two essays which have not been published elsewhere that assess the impact of algorithmic information theory on mathematics and biology, respectively. I should also like to point out that it is straightforward to apply to LISP the techniques used in Part VI to study bounded-transfer Turing machines. A few footnotes have been added to Part VI, but the subject richly deserves book-length treatment, and I intend to write a book about LISP in the near future.[1]

Gregory Chaitin

[1] LISP program-size complexity is discussed at length in my book Information-Theoretic Incompleteness, published by World Scientific in 1992.
Contents

I. Introductory/Tutorial/Survey Papers
   Randomness and mathematical proof  11
   Randomness in arithmetic  29
   On the difficulty of computations  41
   Information-theoretic computational complexity  57
   Algorithmic information theory  75
   Algorithmic information theory  83

II. Applications to Metamathematics  109
   Gödel's theorem and information  111
   Randomness and Gödel's theorem  131
   An algebraic equation for the halting probability  137
   Computing the busy beaver function  145

III. Applications to Biology  151
   To a mathematical definition of "life"  153
   Toward a mathematical definition of "life"  165

IV. Technical Papers on Self-Delimiting Programs  195
   A theory of program size formally identical to information theory  197
   Incompleteness theorems for random reals  225
   Algorithmic entropy of sets  261

V. Technical Papers on Blank-Endmarker Programs  289
   Information-theoretic limitations of formal systems  291
   A note on Monte Carlo primality tests and algorithmic information theory  335
   Information-theoretic characterizations of recursive infinite strings  345
   Program size, oracles, and the jump operation  351

VI. Technical Papers on Turing Machines & LISP  367
   On the length of programs for computing finite binary sequences  369
   On the length of programs for computing finite binary sequences: statistical considerations  411
   On the simplicity and speed of programs for computing infinite sets of natural numbers  435

VII. Abstracts  461
   On the length of programs for computing finite binary sequences by bounded-transfer Turing machines  463
   On the length of programs for computing finite binary sequences by bounded-transfer Turing machines II  465
   Computational complexity and Gödel's incompleteness theorem  467
   Computational complexity and Gödel's incompleteness theorem  469
   Information-theoretic aspects of the Turing degrees  473
   Information-theoretic aspects of Post's construction of a simple set  475
   On the difficulty of generating all binary strings of complexity less than n  477
   On the greatest natural number of definitional or information complexity n  479
   A necessary and sufficient condition for an infinite binary string to be recursive  481
   There are few minimal descriptions  483
   Information-theoretic computational complexity  485
   A theory of program size formally identical to information theory  487
   Recent work on algorithmic information theory  489

VIII. Bibliography  491
   Publications of G. J. Chaitin  493
   Discussions of Chaitin's work  499

Epilogue  503
   Undecidability & randomness in pure mathematics  503
   Algorithmic information & evolution  517

About the author  529

Epilogue

Algorithmic Information & Evolution

The halting probability Ω of a universal Turing machine plays a fundamental role. Ω is an abstract example of evolution: it is of infinite complexity and the limit of a computable sequence of rational numbers.

Algorithmic information theory

Algorithmic information theory [11-16] is a branch of computational complexity theory concerned with the size of computer programs rather than with their running time. In other words, it deals with the difficulty of describing or specifying algorithms, rather than with the resources needed to execute them. This theory combines features of probability theory, information theory, statistical mechanics and thermodynamics, and recursive function or computability theory.

It has so far had two principal applications. The first is to provide a new conceptual foundation for probability theory based on the notion of an individual random or unpredictable sequence, instead of the usual measure-theoretic formulation in which the key notion is the distribution of measure among an ensemble of possibilities. The second major application of algorithmic information theory has been the dramatic new light it throws on Gödel's famous incompleteness theorem and on the limitations of the axiomatic method.
The main concept of algorithmic information theory is that of the program-size complexity or algorithmic information content of an object (usually just called its "complexity"). This is defined to be the size in bits of the shortest computer program that calculates the object, i.e., the size of its minimal algorithmic description. Note that we consider computer programs to be bit strings and we measure their size in bits.

If the object being calculated is itself a finite string of bits, and its minimal description is no smaller than the string itself, then the bit string is said to be algorithmically incompressible, algorithmically irreducible, or algorithmically random. Such strings have the statistical properties that one would expect. For example, 0's and 1's must occur with nearly equal relative frequency; otherwise the bit string could be compressed. An infinite bit string is said to be algorithmically incompressible, algorithmically irreducible, or algorithmically random if all its initial segments are algorithmically random finite bit strings.

A related concept is the mutual algorithmic information content of two objects. This is the extent to which it is simpler to calculate them together than to calculate them separately, i.e., the extent to which their joint algorithmic information content is less than the sum of their individual algorithmic information contents. Two objects are algorithmically independent if their mutual algorithmic information content is zero, i.e., if calculating them together doesn't help.
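Stated symbolically, and only as a restatement of the definitions just given (U denotes the standard universal computer and H the program-size complexity used later in this essay, e.g. in H(s_t); the randomness threshold and the mutual-information identity hold up to additive constants that depend on the choice of U):

```latex
% Program-size complexity (algorithmic information content) of an object x:
% the length in bits of the shortest program for U that calculates x.
H(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}

% An N-bit string x is algorithmically random (incompressible) when its minimal
% description is no shorter than the string itself:
H(x) \;\geq\; N

% Mutual algorithmic information: how much cheaper it is to compute x and y jointly
% than separately; x and y are algorithmically independent when it is zero.
H(x\!:\!y) \;=\; H(x) + H(y) - H(x,y)
```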
These concepts provide a new conceptual foundation for probability theory based on the notion of an individual random string of bits, rather than the usual measure-theoretic approach. They also shed new light on Gödel's incompleteness theorem, for in some circumstances it is possible to argue that the unprovability of certain true assertions follows naturally from the fact that their algorithmic information content is greater than the algorithmic information content of the axioms and rules of inference being employed.

For example, the N-bit string of outcomes of N successive independent tosses of a fair coin almost certainly has algorithmic information content greater than N and is algorithmically incompressible or random. But to prove this in the case of a particular N-bit string turns out to require at least N bits of axioms, even though it is almost always true. In other words, most finite bit strings are random, but individual bit strings cannot be proved to be random [3].

Here is an even more dramatic example of this information-theoretic approach to the incompleteness of formal systems of axioms. I have shown that there is sometimes complete randomness in elementary number theory [11, 13, 15-16]. I have constructed [11] a two-hundred-page exponential diophantine equation with the property that the number of solutions jumps from finite to infinite at random as a parameter is varied. In other words, whether the number of solutions is finite or infinite in each case cannot be distinguished from independent tosses of a fair coin. This is an infinite amount of independent, irreducible mathematical information that cannot be compressed into any finite number of axioms. I.e., essentially the only way to prove these assertions is to assume them as axioms!

This completes our sketch of algorithmic information theory. Now let's turn to biology.

Evolution

The origin of life and its evolution from simpler to more complex forms, the origin of biological complexity and diversity, and more generally the reason for the essential difference in character between biology and physics, are of course extremely fundamental scientific questions. While Darwinian evolution, Mendelian genetics, and modern molecular biology have immensely enriched our understanding of these questions, it is surprising to me that such fundamental scientific ideas should not be reflected in any substantive way in the world of mathematical ideas. In spite of the persuasiveness of the informal considerations that adorn biological discussions, it has not yet been possible to extract any nuggets of rigorous mathematical reasoning, to distill any fundamental new rigorous mathematical concepts.

In particular, by historical coincidence the extraordinary recent progress in molecular biology has coincided with parallel progress in the emergent field of computational complexity, a branch of theoretical computer science. But in spite of the fact that the word "complexity" springs naturally to mind in both fields, there is at present little contact between these two worlds of ideas! The ultimate goal, in fact, would be to set up a toy world, to define mathematically what is an organism and how to measure its complexity, and to prove that life will spontaneously arise and increase in complexity with time.

Does algorithmic information theory apply to biology? Can the concepts of algorithmic information theory help us to define mathematically the notion of biological complexity? One possibility is to ask what is the algorithmic information content of the sequence of bases in a particular strand of DNA. Another possibility is to ask what is the algorithmic information content of the organism as a whole (it must be in discrete symbolic form, e.g., embedded in a cellular-automata model).

Mutual algorithmic information might also be useful in biology. For example, it could be used for pattern recognition, to determine the physical boundaries of an organism. This approach to a task which is sort of like defining the extent of a cloud defines an organism to be a region whose parts have high mutual algorithmic information content, i.e., to be a highly correlated, in an information-theoretic sense, region of space. Another application of the notion of mutual algorithmic information content might be to measure how closely related are two strands of DNA, two cells, or two organisms: the higher the mutual algorithmic information content, the more closely related they are.

These would be one's initial hopes. But, as we shall see in reviewing previous work, it is not that easy!
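One reason it is not easy is that H itself is uncomputable. There is, however, a crude computable imitation of the relatedness idea just described (an illustration added here, not part of the essay): replace the ideal minimal description by what an ordinary compressor achieves. The sketch below uses Python's zlib on made-up "DNA" strings and computes the normalized compression distance, a standard heuristic stand-in for mutual algorithmic information.

```python
import random
import zlib

def c(data: bytes) -> int:
    """Compressed size in bytes: a crude, computable stand-in for program-size complexity H."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for closely related strings, nearer 1 for unrelated ones."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

if __name__ == "__main__":
    # Toy 'DNA' strands: b is a mutated copy of a, while u is unrelated.
    random.seed(0)
    a = bytes(random.choice(b"ACGT") for _ in range(2000))
    b = bytearray(a)
    for i in random.sample(range(len(b)), 50):   # 50 point mutations
        b[i] = random.choice(b"ACGT")
    u = bytes(random.choice(b"ACGT") for _ in range(2000))

    print("NCD(a, mutated copy) =", round(ncd(a, bytes(b)), 3))  # smaller: much shared information
    print("NCD(a, unrelated)    =", round(ncd(a, u), 3))         # larger: little shared information
```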
Previous work

I have been concerned with these extremely difficult questions for the past twenty years, and have a series of publications [1-2, 7-13] devoted in whole or in part to searching for ties between the concepts of algorithmic information theory and the notion of biological information and complexity. In spite of the fact that a satisfactory definition of randomness or lack of structure has been achieved in algorithmic information theory, the first thing that one notices is that it is not ipso facto useful in biology. For applying this notion to physical structures, one sees that a gas is the most random, and a crystal the least random, but neither has any significant biological organization.

My first thought was therefore that the notion of mutual or common information, which measures the degree of correlation between two structures, might be more appropriate in a biological context. I developed these ideas in a 1970 paper [1], and again in a 1979 paper [8] using the more-correct self-delimiting program-size complexity measures.

In the concluding chapter of my Cambridge University Press book [11] I turned to these questions again, with a number of new thoughts, among them to determine where biological questions fall in what logicians call the "arithmetical hierarchy." The concluding remarks of my 1988 Scientific American article [13] emphasize what I think is probably the main contribution of the chapter at the end of my book [11]. This is the fact that in a sense there is a kind of evolution of complexity taking place in algorithmic information theory, and indeed in a very natural context. The remaining publications [2, 7, 9-10, 12] emphasize the importance of the problem, but do not make new suggestions.

The halting probability Ω as a model of evolution

What is this natural and previously unappreciated example of the evolution of complexity in algorithmic information theory? In this theory the halting probability Ω of a universal Turing machine plays a fundamental role. Ω is used to construct the two-hundred-page equation mentioned above. If the value of its parameter is K, this equation has finitely or infinitely many solutions depending on whether the Kth bit of the base-two expansion of Ω is a 0 or a 1. Indeed, to Turing's fundamental theorem in computability theory that the halting problem is unsolvable, there corresponds in algorithmic information theory my theorem [4] that the halting probability Ω is a random real number. In other words, any program that calculates N bits of the binary expansion of Ω is no better than a table look-up, because it must itself be at least N bits long. I.e., Ω is incompressible, irreducible information.
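In the same notation as before (a restatement, with the usual technical caveat that the programs are self-delimiting so that the sum converges; the essay's informal "at least N bits long" corresponds to the additive constant c below):

```latex
% The halting probability of the standard universal machine U: every (self-delimiting)
% program p that halts contributes measure 2^(-|p|).
\Omega \;=\; \sum_{U(p)\ \text{halts}} 2^{-|p|}

% Randomness of \Omega (the theorem cited as [4]): no program much shorter than N bits
% can print the first N bits of its binary expansion, i.e., for some constant c,
H(\Omega_1 \Omega_2 \cdots \Omega_N) \;\geq\; N - c \quad \text{for all } N .
```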
And it is Ω itself that is our abstract example of evolution! For even though Ω is of infinite complexity, it is the limit of a computable sequence of rational numbers, each of which is of finite but eventually increasing complexity. Here of course I am using the word "complexity" in the technical sense of algorithmic information theory, in which the complexity of something is measured by the size in bits of the smallest program for calculating it. However this computable sequence of rational numbers converges to Ω very, very slowly. In precisely what sense are we getting infinite complexity in the limit of infinite time?

Well, it is trivial that in any infinite set of objects, almost all of them are arbitrarily complex, because there are only finitely many objects of bounded complexity. (In fact, there are fewer than 2^N objects of complexity less than N.) So we should not look at the complexity of each of the rational numbers in the computable sequence that gives Ω in the limit. The right way to see the complexity increase is to focus on the first K bits of each of the rational numbers in the computable sequence. The complexity of this sequence of K bits initially jumps about but will eventually stay above K.

What precisely is the origin of this metaphor for evolution? Where does this computable sequence of approximations to Ω come from? It arises quite naturally, as I explain in my 1988 Scientific American article [13]. The Nth approximation to Ω, that is to say, the Nth stage in the computable evolution leading in the infinite limit to the violently uncomputable, infinitely complex number Ω, is determined as follows. One merely considers all programs up to N bits in size and runs each member of this finite set of programs for N seconds on the standard universal Turing machine. Each program K bits long that halts before its time runs out contributes measure 2^{-K} to the halting probability Ω. Indeed, this is a computable monotone increasing sequence of lower bounds on the value of Ω that converges to Ω, but very, very slowly indeed.
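Because the preceding paragraph describes an explicit procedure, a small sketch of its structure may help. Everything specific below is a stand-in: halts_within is an arbitrary made-up halting rule, not a universal Turing machine, and the toy "programs" are not self-delimiting, so the totals can exceed 1, unlike the real Ω. Only the shape of the computation matches the text: enumerate every program of at most N bits, run each for N steps, and credit 2^{-K} for each K-bit program that halts.

```python
from fractions import Fraction
from itertools import product

def halts_within(program: tuple, steps: int) -> bool:
    """Toy stand-in for 'program halts within `steps` steps on the universal machine'.
    Arbitrary rule for illustration only: read the bits as a binary number n and
    pretend the program needs n steps to halt."""
    n = int("".join(map(str, program)), 2) if program else 0
    return n < steps

def omega_lower_bound(N: int) -> Fraction:
    """Nth stage of the 'evolution' described in the text: consider all programs of at
    most N bits, run each for N steps, and add 2^(-K) for every K-bit program that halts."""
    total = Fraction(0)
    for K in range(1, N + 1):
        for program in product((0, 1), repeat=K):
            if halts_within(program, N):
                total += Fraction(1, 2 ** K)
    return total

if __name__ == "__main__":
    # A computable, monotone increasing sequence of lower bounds (for the toy machine).
    # Note: with this non-prefix-free toy program set the totals can exceed 1; the real
    # construction uses self-delimiting programs, so the true Omega stays below 1.
    for N in range(1, 8):
        print(N, float(omega_lower_bound(N)))
```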
This "evolutionary" model for computing Ω shows that one way to produce algorithmic information or complexity is by doing immense amounts of computation. Indeed, biology has been "computing" using molecular-size components in parallel across the entire surface of the earth for several billion years, which is an awful lot of computing. On the other hand, an easier way to produce algorithmic information or complexity is, as we have seen, to simply toss a coin. This would seem to be the predominant biological source of algorithmic information: the frozen accidents of the evolutionary trail of mutations that are preserved in DNA. So two different sources of algorithmic information seem biologically plausible, and would seem to give rise to different kinds of algorithmic information.

Technical note: A finite version of this model

There is also a "finite" version of this abstract model of evolution. In it one fixes N and constructs a computable infinite sequence s_t = s(t) of N-bit strings, with the property that for all sufficiently large times t, s_t = s_{t+1} is a fixed random N-bit string, i.e., one for which its program-size complexity H(s_t) is not less than its size in bits N. In fact, we can take s_t to be the first N-bit string that cannot be produced by any program less than N bits in size in less than t seconds. In a sense, the N bits of information in s_t for t large are coming from t itself. So one way to state this is that knowing a sufficiently large natural number t is "equivalent to having an oracle for the halting problem" (as a logician would put it). That is to say, it provides as much information as one wants.
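The same construction in symbols (a restatement under the assumptions above: U is the standard universal machine, "seconds" are read as computation steps, and "first" is taken to mean numerical order, which the text does not specify):

```latex
% s_t is the first N-bit string that no program shorter than N bits produces within t steps.
s_t \;=\; \min \bigl\{\, s \in \{0,1\}^N \;:\; \text{no } p \text{ with } |p| < N
\text{ has } U(p) = s \text{ within } t \text{ steps} \,\bigr\}

% For all sufficiently large t the sequence freezes, s_t = s_{t+1} = \cdots,
% and the limiting string is random: H(s_t) \geq N.
```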
By the way, computations in the limit are extensively discussed in my two papers [5-6], but in connection with questions of interest in algorithmic information theory rather than in biology.

Conclusion

To conclude, I must emphasize a number of disclaimers. First of all, Ω is a metaphor for evolution only in an extremely abstract mathematical sense. The measures of complexity that I use, while very pretty mathematically, pay for this prettiness by having limited contact with the real world. In particular, I postulate no limit on the amount of time that may be taken to compute an object from its minimal-size description, as long as the amount of time is finite. Nine months is already a long time to ask a woman to devote to producing a working human infant from its DNA description. A pregnancy of a billion years, while okay in algorithmic information theory, is ridiculous in a biological context.

Yet I think it would also be a mistake to underestimate the significance of these steps in the direction of a fundamental mathematical theory of evolution. For it is important to start bringing rigorous concepts and mathematical proofs into the discussion of these absolutely fundamental biological questions, and this, although to a very limited extent, has been achieved.

References

Items 1 to 10 are reprinted in item 12.

[1] G. J. Chaitin, "To a mathematical definition of 'life'," ACM SICACT News, January 1970, pp. 12-18.
[2] G. J. Chaitin, "Information-theoretic computational complexity," IEEE Transactions on Information Theory IT-20 (1974), pp. 10-15.
[3] G. J. Chaitin, "Randomness and mathematical proof," Scientific American, May 1975, pp. 47-52.
[4] G. J. Chaitin, "A theory of program size formally identical to information theory," Journal of the ACM 22 (1975), pp. 329-340.
[5] G. J. Chaitin, "Algorithmic entropy of sets," Computers & Mathematics with Applications (1976), pp. 233-245.
[6] G. J. Chaitin, "Program size, oracles, and the jump operation," Osaka Journal of Mathematics 14 (1977), pp. 139-149.
[7] G. J. Chaitin, "Algorithmic information theory," IBM Journal of Research and Development 21 (1977), pp. 350-359, 496.
[8] G. J. Chaitin, "Toward a mathematical definition of 'life'," in R. D. Levine and M. Tribus, The Maximum Entropy Formalism, MIT Press, 1979, pp. 477-498.
[9] G. J. Chaitin, "Algorithmic information theory," in Encyclopedia of Statistical Sciences, Volume 1, Wiley, 1982, pp. 38-41.
[10] G. J. Chaitin, "Gödel's theorem and information," International Journal of Theoretical Physics 22 (1982), pp. 941-954.
[11] G. J. Chaitin, Algorithmic Information Theory, Cambridge University Press, 1987.
[12] G. J. Chaitin, Information, Randomness & Incompleteness: Papers on Algorithmic Information Theory, World Scientific, 1987.
[13] G. J. Chaitin, "Randomness in arithmetic," Scientific American, July 1988, pp. 80-85.
[14] P. Davies, "A new science of complexity," New Scientist, 26 November 1988, pp. 48-50.
[15] J. P. Delahaye, "Une extension spectaculaire du théorème de Gödel: l'équation de Chaitin," La Recherche, juin 1988, pp. 860-862. English translation, AMS Notices, October 1989, pp. 984-987.
[16] I. Stewart, "The ultimate in undecidability," Nature, 10 March 1988, pp. 115-116.

About the author

Gregory J. Chaitin is a member of the theoretical physics group at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York. He created algorithmic information theory in the mid 1960's when he was a teenager. In the two decades since he has been the principal architect of the theory. His contributions include: the definition of a random sequence via algorithmic incompressibility, the reformulation of program-size complexity in terms of self-delimiting programs, the definition of the relative complexity of one object given a minimal-size program for another, the discovery of the halting probability Omega and its significance, the information-theoretic approach to Gödel's incompleteness theorem, the discovery that the question of whether an exponential diophantine equation has finitely or infinitely many solutions is in some cases absolutely random, and the theory of program size for Turing machines and for LISP. He is the author of the monograph "Algorithmic Information Theory" published by Cambridge University Press in 1987.

INFORMATION, RANDOMNESS & INCOMPLETENESS
Papers on Algorithmic Information Theory, Second Edition
World Scientific Series in Computer Science, Vol.
by Gregory J. Chaitin (IBM)

This book is an essential companion to Chaitin's monograph ALGORITHMIC INFORMATION THEORY and includes in easily accessible form all the main ideas of the creator and principal architect of algorithmic information theory. This expanded second edition has added thirteen abstracts, a 1988 SCIENTIFIC AMERICAN article, a transcript of a EUROPALIA 89 lecture, an essay on biology, and an extensive bibliography. Its larger format makes it easier to read. Chaitin's ideas are a fundamental extension of those of Gödel and Turing and have exploded some basic assumptions of mathematics and thrown new light on the scientific method, epistemology, probability theory, and of course computer science and information theory.

Back Cover

Readership: Computer scientists, mathematicians, physicists, philosophers and biologists.

Gregory J. Chaitin is on the staff of the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y. He is the principal architect of algorithmic information theory and has just...
