
Manzi, Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society (2012)




DOCUMENT INFORMATION

Pages: 202
Size: 1.52 MB

CONTENT

UNCONTROLLED
The Surprising Payoff of Trial-and-Error for Business, Politics, and Society

JIM MANZI

BASIC BOOKS
A Member of the Perseus Books Group
New York

Copyright © 2012 by Jim Manzi
Published by Basic Books, A Member of the Perseus Books Group

All rights reserved. No part of this book may be reproduced in any manner whatsoever without written permission except in the case of brief quotations embodied in critical articles and reviews. For information, address the Perseus Books Group, 387 Park Avenue South, New York, NY 10016-8810.

Books published by Basic Books are available at special discounts for bulk purchases in the United States by corporations, institutions, and other organizations. For more information, please contact the Special Markets Department at the Perseus Books Group, 2300 Chestnut Street, Suite 200, Philadelphia, PA 19103, or call (800) 810-4145, ext. 5000, or e-mail special.markets@perseusbooks.com.

Designed by Brent Wilcox

Library of Congress Cataloging-in-Publication Data
Manzi, Jim.
Uncontrolled : the surprising payoff of trial-and-error for business, politics, and society / Jim Manzi.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-465-02931-0 (e-book)
1. Social sciences—Experiments. 2. Social sciences—Research. 3. Experimental design—Social aspects. I. Title.
H62.M264 2012
001.4'34—dc23
2012004409

For Margaret Jennings Manzi

CONTENTS

INTRODUCTION

PART I: SCIENCE
1. Induction and the Problem of Induction
2. Falsification and Paradigms
3. Implicit and Explicit Knowledge
4. Science as a Social Enterprise
5. Science Without Experiments
6. Some Observations Concerning Probability
7. The Invention and Application of the Randomized Trial
8. Limitations of Randomized Trials

PART II: SOCIAL SCIENCE
9. Nonexperimental Social Science
10. Business Strategy as Applied Social Science
11. The Experimental Revolution in Business
12. Experimental Social Science

PART III: POLITICAL ACTION
13. Liberty as Means
14. Innovation and Cohesion
15. Sustainable Innovation

ACKNOWLEDGMENTS
NOTES
INDEX

INTRODUCTION

As a young corporate strategy consultant, I once was on a team tasked with analyzing a proposed business program for a major retail chain. This company was considering a very large investment to improve its stores through a combination of a brighter layout, a different mix of merchandise, and more in-store employees to assist shoppers. The company believed consumers would positively receive this program, but the open question was whether it would lead to enough new sales to justify the substantial extra costs it would require. I developed a complicated analytical process to predict the size of the sales gain, including qualitative and quantitative consumer research, competitive benchmarking, and internal capability modeling. With great pride I described this plan to a partner in our consulting firm, who responded by saying, “Okay, but why wouldn’t you just do it to a few stores and see how it works?”

This seemed so simple that I thought it couldn’t be right. But as I began a series of objections to his question, I kept stopping myself midsentence. I realized that each of my potential responses was incorrect: an experiment really would provide the most definitive available answer to the question.
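A minimal sketch, with entirely invented sales figures, of the comparison the partner had in mind: upgrade some stores, leave comparable stores alone, and see whether the difference in average sales rises above store-to-store noise.

```python
import random
import statistics

random.seed(0)

# Hypothetical weekly sales ($000s): 10 upgraded "test" stores and
# 10 untouched "control" stores, with an assumed ~4-point true lift.
control = [random.gauss(100, 10) for _ in range(10)]
test = [random.gauss(104, 10) for _ in range(10)]

lift = statistics.mean(test) - statistics.mean(control)

# Standard error of the difference in means, to judge whether the
# observed lift could plausibly be noise.
se = (statistics.variance(test) / len(test)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"estimated lift: {lift:+.1f} thousand dollars/week "
      f"(rough 95% interval: +/- {2 * se:.1f})")
```

If the interval excludes zero, the program's effect is visible above the noise; if not, either the test was too small or the effect too weak to justify the investment.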
Over the next twenty years I became increasingly aware that real experiments were required for adjudicating among competing theories for the effects of business interventions intended to change consumer behavior. Cost changes often could be predicted reliably through engineering studies. But when it came to predicting how people would respond to interventions, I discovered that I could almost always use historical data, surveys, and other information to build competing analyses that would “prove” that almost any realistically proposed business program would succeed or fail, just by making tiny adjustments to analytical assumptions. And the more sophisticated the analysis, the more unavoidable this kind of subterranean model-tuning became. Even after executing some business program, debates about how much it really changed profit often would continue, because so many other things changed at the same time. Only controlled experiments could cut through the complexity and create a reliable foundation for predicting consumer response to proposed interventions.

This fundamental problem, albeit at vastly greater scale and severity, applies whenever we listen to impressive-sounding arguments that predict the society-wide effects of proposed major economic, welfare, educational, and other policy interventions. As an example, consider the deliberations around how to respond to the 2008 economic crisis. The country was facing a terrifying situation, and there was a widespread belief that emergency measures of some kind were called for as a matter of prudence. The incoming Obama administration proposed a large stimulus program, which led to an intense public debate in January and February 2009. Setting aside for a moment ideological predispositions and value judgments, this presented a specific technical issue: What would be the effects of any given stimulus proposal on general economic welfare? This was a practical question worth trillions of dollars that got to the reliability of our predictions about government programs.

The role of government spending and deficits in a major economic downturn has been the subject of extensive academic study for decades, and many leading economists actively participated in the public discussion in early 2009. Paul Krugman and Joseph Stiglitz, both Nobel laureates in economics, argued that stimulus would improve economic performance. In fact, they both argued that it should be bigger. On the other hand, James Buchanan, Edward Prescott, and Vernon Smith—all Nobel laureates in economics—argued that the stimulus would not improve economic performance enough to justify the investment, saying that “notwithstanding reports that all economists are now Keynesians, it is a triumph of hope over experience to believe that more government spending will help the US today.” This was not an argument about precise quantities, but a disagreement about the policy’s basic effects. Although fierce debates can be found in frontier areas of all sciences, this one would be as if, on the night before the Apollo moon launch, numerous Nobel laureates in physics were asserting that rockets couldn’t get as far as the moon, almost as many were saying they could get there in theory but we need much more fuel, and some were arguing that the moon did not exist. The only thing an observer could say with high confidence before the stimulus program launched was that at least several Nobel laureates in economics would be directionally incorrect about its effects.
But the stimulus situation was even worse. It was clear at the time that we would not know which of them were right or wrong even after the fact. Suppose Professor Famous Economist X predicted on February 1, 2009, that “unemployment will be about 10 percent in two years without the bill, and about 8 percent with the bill.” What do you think would happen when 2011 rolled around and unemployment was 10 percent? It’s a very, very safe bet that Professor X would say something like, “Yes, but other conditions deteriorated faster than anticipated, so if we hadn’t passed the stimulus bill, unemployment would have been more like 12 percent. So you see, I was right after all; it reduced unemployment by about 2 percentage points.” The key problem is that we have no reliable way to measure the counterfactual—that is, to know what would have happened had we not executed the policy—because so many other factors influence the outcome. This seemingly narrow and technical issue of counterfactuals turns out to be central to our continuing inability to use social sciences to adjudicate most policy debates rationally. This statement is not to make the trivial point that social sciences are not like physics in some ineffable sense, but rather that the social sciences have not produced a substantial body of useful, nonobvious, and reliable rules that would allow us to predict the effect of such proposed government programs.
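The counterfactual problem can be made concrete with a toy simulation (all numbers invented): when background conditions drift between periods, a before-versus-after comparison confounds the policy's effect with the drift, while a randomized control group experiences the same drift and so recovers the effect.

```python
import random

random.seed(1)
TRUE_EFFECT = -2.0   # assume the program really cuts the outcome by 2 points
DRIFT = 3.0          # background conditions worsen by 3 points regardless

# 1,000 hypothetical units, each with its own baseline outcome level.
baseline = [random.gauss(10, 2) for _ in range(1000)]

# Naive before/after: treat everyone, compare periods. The drift is
# indistinguishable from the program's effect.
after_all_treated = [b + DRIFT + TRUE_EFFECT for b in baseline]
naive = sum(after_all_treated) / 1000 - sum(baseline) / 1000
print(f"before/after estimate: {naive:+.1f}")   # ~ +1.0, wrong sign!

# Randomized trial: half treated, half control, both exposed to the drift.
treated = [b + DRIFT + TRUE_EFFECT for b in baseline[:500]]
control = [b + DRIFT for b in baseline[500:]]
rct = sum(treated) / 500 - sum(control) / 500
print(f"randomized estimate:   {rct:+.1f}")     # ~ -2.0, recovers the effect
```

Professor X's retort is exactly the naive estimate's weakness: without a control group, nothing distinguishes "the bill added a point of unemployment" from "conditions deteriorated by three points and the bill clawed two back."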
I believe that recognizing this deep uncertainty should influence how we organize our political and economic institutions. In the most direct terms, it should lead us to value the freedom to experiment and discover workable arrangements through an open-ended process of trial and error. This is not a new insight, but is the central theme of an Anglo-American tradition of liberty that runs from Locke and Milton through Adam Smith and on to the twentieth-century libertarian thinkers, preeminently Sir Karl Popper and F. A. Hayek. In this tradition, markets, democracy, and other related institutions are seen as instruments for discovering practical methods for improving our material position in an uncertain environment.

The resulting system of democratic capitalism experiences (and perhaps creates) periodic crises. We are living through one today. And as with all such crises, this has produced a loss of confidence in economic and political liberty. Examining the debates that took place in prior crises of democratic capitalism can help us to navigate this one. The Great Depression understandably led to an enormous increase in government activity to try to tame markets to work for the common good. But a small group of loosely affiliated thinkers were careful to point out the trade-offs involved. The most important were Popper and Hayek, who argued that this degree of government control—or in Hayek’s language, “planning”—would necessarily limit growth because human society is far more complex than the understanding of the planners. Hayek termed this the “knowledge problem.” By this line of thinking, we need the trial-and-error process created by the free play of markets, social tolerance, and experiments in living—what Popper called the “open society”—to determine what permits the society to thrive materially, and then to propagate this information. In short, we need freedom because we are ignorant.

It is a subtle but crucial distinction that Popper and Hayek argued not for some kind of absolute freedom, but for social adaptability. They were not (nor were Smith and some of their antecedents) arguing against all market regulations, government investments, lifestyle restrictions, and so forth. Rather, they were arguing against an unwarranted assumption of knowledge by those who would attempt to control society’s evolution.

In our current crisis, sales of Hayek’s 1944 popular classic, The Road to Serfdom, have skyrocketed. If we are now living through a more moderated version of the Great Depression, then why isn’t the proper response to the current fashion for government control simply to dust off our copies of Hayek and Popper? The short answer is: because of science.

Science and technology have made astounding advances over the past half-century. The most significant relevant developments have been in biology and information technology. The tradition of liberty has always had a strong “evolutionist” bent, in that it has seen order in society as emerging from a process that cannot be predicted or planned, rather than as the product of human design. But as I’ll describe in detail, the mechanics of genetic evolution provide a clear and compelling picture of how a system can capture and exploit implicit insight without creating explicit knowledge, and this naturally becomes the model for the mechanism by which trial and error advances society’s material interests without conscious knowledge or planning. A further technical development enabled by information technology—the explosion in randomized clinical trials that first achieved scale in clinical biology, and has started to move tentatively into social program evaluation—provides a crucial tool that could be much more widely applied to testing claims for many political and economic policies.
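That evolutionary mechanism can be sketched as a toy genetic algorithm (a minimal illustration under assumed settings, not code from the book): candidate practices are bit strings, better-scoring ones reproduce with crossover and occasional mutation, and the population improves without anyone holding an explicit theory of why the winners win.

```python
import random

random.seed(2)
BITS, POP, GENERATIONS = 20, 30, 40

def fitness(genome):
    # Stand-in for "how well the practice works"; the searchers never
    # see this formula, only the scores it returns.
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]

for _ in range(GENERATIONS):
    # Selection: fitter genomes are more likely to reproduce.
    weights = [fitness(g) + 1 for g in population]
    parents = random.choices(population, weights=weights, k=POP * 2)
    nxt = []
    for mom, dad in zip(parents[::2], parents[1::2]):
        cut = random.randrange(1, BITS)        # crossover point
        child = mom[:cut] + dad[cut:]
        if random.random() < 0.05:             # occasional mutation
            i = random.randrange(BITS)
            child[i] ^= 1
        nxt.append(child)
    population = nxt

print("best practice found:", max(map(fitness, population)), "of", BITS)
```

The implicit knowledge lives in the surviving genomes themselves; no participant ever writes down a rule about which bits matter.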
Combining these ideas of evolution and randomized trials led Donald T. Campbell, a twentieth-century social scientist at Northwestern University, to create a theory of knowledge, which he termed “evolutionary epistemology.” It has a practical implication that can be summarized as the idea that any complex system, such as our society, evolves beliefs about what practices work by layering one kind of trial-and-error learning upon another. The foundation is unstructured trial and error, in which real people try out almost random practices, and those that work better are more likely to be retained. Layered on top of this is structured trial and error, in which human minds consciously develop ideas for improved practices, and then use rigorous experiments to identify those that work. This is a modernized and practical version of what Popper called “piecemeal social engineering”: the idea of testing targeted reforms designed to meet immediate challenges, rather than reforming society by working backward from a vision of the ideal. “Engineering” is a well-chosen term. This is a much humbler view of social science than what was entertained by the eighteenth-century founders of the discipline, such as Auguste Comte and Henri de Saint-Simon, whose ideology continues to animate large areas of social science. These early pioneers expected that social science eventually would resemble Newtonian physics, with powerful theories expressed as compact mathematical laws describing a vast array of phenomena. Campbell’s vision looked a lot more like therapeutic biology: extremely incomplete theory, combined with clinical trials designed to sort out which interventions really worked. His approach is more like searching for a polio vaccine than it is like discovering the laws of motion and putting a man on the moon.

But I will argue that we should be humbler still. The reason we have increasing trouble building compact and comprehensive predictive theories as we go from physics to biology to social science is the increasing complexity of the phenomena under investigation. But this same increasing complexity has another pernicious effect: it becomes far harder to generalize the results of experiments. We can run a clinical trial in Norfolk, Virginia, and conclude with tolerable reliability that “Vaccine X prevents disease Y.” We can’t conclude that if literacy program X works in Norfolk, then it will work everywhere. The real predictive rule is usually closer to something like “Literacy program X is effective for children in urban areas, who have the following range of incomes and prior test scores, when the following alternatives are not available in the school district, and the teachers have the following qualifications, and overall economic conditions in the district are within the following range.” And by the way, even this predictive rule stops working ten years from now, when different background conditions obtain in the society.

The problem of generalization would not be news to Campbell—he invented the terminology still used to discuss it. But it is deadly to the practical workability of the idea that we can identify a range of broadly effective policies via experiment. This is because the vast majority of reasonable-sounding interventions will work under at least some conditions, and not under others. For the hypothetical literacy program described above, an experiment to test the program is not really a test of the program; it is a test of how well the program applies to a specific situation.

A brute-force approach to this problem would be to run not one experiment to evaluate whether this program works, but to run hundreds or thousands of experiments to evaluate the conditions under which it works. If it could be tested in a very large number of school districts, we might very well discover some useful approximation to the highly conditional rule that predicts its success. This is the opposite of elegant theory-building, and is even more limited than either Popper’s or Campbell’s version of social engineering. But it might provide practically useful information.

Of course, this would require that each experiment be cheap enough to make this many tests feasible. Over the past couple of decades, this has been accomplished for certain kinds of tests. The capability has emerged not within formal social science, but in commercial enterprises. The motivation has been the desire to more reliably predict the causal effects of business interventions like the example of the retail-store upgrade program that opened this book. The enabling technological development has been the radical decreases in the costs of storing, processing, and transmitting information created by Moore’s Law. The method has been to use information technology to routinize, and ultimately automate, many aspects of testing.

This division of labor should not be surprising. Biological and social science researchers developed the randomized trial, and then the conceptual apparatus for thinking rigorously about the problem of generalization. Commercial enterprises have figured out how, in specific contexts, to convert this kind of experimentation from a customized craft to a high-volume, low-cost, and partially automated process.

I found myself in the middle of this experimental revolution in business when some friends and I started what eventually became a global software company that produces the tools to apply …
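A sketch of the brute-force approach described above, with entirely hypothetical context variables and effect sizes: run one cheap experiment per district, then tabulate the measured effects by context to approximate the highly conditional rule.

```python
import random
from collections import defaultdict

random.seed(3)

# Hypothetical ground truth: the literacy program helps only in urban,
# low-income districts; the experimenters do not know this.
def true_effect(urban, low_income):
    return 5.0 if (urban and low_income) else 0.0

# One cheap randomized experiment per district, each yielding a noisy
# estimate of the program's effect in that district.
experiments = []
for _ in range(500):
    urban = random.random() < 0.5
    low_income = random.random() < 0.5
    measured = true_effect(urban, low_income) + random.gauss(0, 2)
    experiments.append((urban, low_income, measured))

# Recover the conditional rule by averaging within each context cell.
cells = defaultdict(list)
for urban, low_income, eff in experiments:
    cells[(urban, low_income)].append(eff)

for (u, li), effs in sorted(cells.items()):
    print(f"urban={u!s:<5} low_income={li!s:<5} "
          f"avg effect={sum(effs) / len(effs):+.2f} (n={len(effs)})")
```

With enough districts, the cell averages converge on the true conditional rule (here, a large effect only where both flags are true) without any overarching theory of why it holds.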
Recognition of this uncertainty calls for a heavy reliance on unstructured trial-and-error progress. The limits to the use of trial and error are established predominantly by the need for strategy and …
