11 At the Margins of Orthodoxy

11.1. Games, Evolution and Growth

11.1.1. Game theory

Game theory was formulated as a logical instrument for investigating situations in which the results of the choices of some agents are at least partially determined by the choices of other agents with conflicting interests. This theory has been relevant, above all, in tackling the problems posed by situations of conflict and co-operation between rational and intelligent decision-makers. 'Rational' means that each agent makes his own choices with the aim of maximizing his subjective expected utility, whenever the decisions of the other agents are specified. 'Intelligent' means that each agent is in a position to know everything about the structure of the situations in which he finds himself, just like the theorist who studies them. In particular, each decision-maker knows that the other decision-makers are intelligent and rational.

The definitive link between game theory and economic theory was only established in 1944, with the publication of the Theory of Games and Economic Behavior by von Neumann and Morgenstern. A series of new notions and research directions originated from the union of the two approaches which are still alive today: the notion of co-operative games, in which players are able to make agreements and threats which are rationally fulfilled; the analysis of coalitions, which has resumed the pioneering studies of Edgeworth and has led to modern core analysis; the axiomatic definition of expected utility and the demonstration of its importance as a criterion of choice in conditions of uncertainty; and the application of the formalism of game theory to a wide series of economic problems.

Some circumstances, however, such as the novelty of the concepts and of their mathematical demonstrations, initially limited the diffusion of game theory, especially in the field of the social sciences, to which it was mainly directed. It was not until the end of the 1950s, with the publication of Games and Decisions (1957) by R. Luce and H. Raiffa and The Strategy of Conflict (1960) by T. Schelling, that game theory became widely known. It was only at that time that the first interesting economic applications appeared.

Today the theory can deal with many classes of 'games', even if the attention of scholars has focused on certain interesting cases. Two-person zero-sum games were among the first to receive an exhaustive general treatment. Beyond their own importance, which certainly cannot be overlooked, zero-sum games have been of vital importance for game theory in that the conceptual apparatus developed to analyse them has turned out to be applicable to more general cases.

Particularly important was the notion of 'safety level', the minimum pay-off which a player is able to guarantee himself independently of the strategies of the other. A game has a rational outcome from the individual point of view if the pay-off received by each player is not lower than his safety level; if an individual is rational, he will always act in such a way as to ensure that at least that level of pay-off can be obtained with certainty. The safety level can be calculated both for pure strategies (which are sequences of well-determined actions) and for mixed strategies (in which, in one or more stages of the game, the choice of action is made by means of a stochastic experiment, such as tossing a coin).
As early as the beginning of the century, E. Zermelo, in 'Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels' (1913), had already succeeded in proving that the game of chess is strictly determined. Obviously, this does not mean that the 'optimal' strategy for this game is easy to find, as every chess player knows well. In any case, Zermelo's theorem had an extraordinary importance, in that it was the prototype for the increasingly general theorems to emerge in the following years.

In practice, in order to 'export' the theorem from the field of zero-sum games, the concept of rational outcome from the individual point of view has been replaced by that of strategic equilibrium. A strategy brings equilibrium if it maximizes the level of pay-off obtained by a player, given the strategies chosen by all the others. This basic concept was introduced by John Nash in 'Non-cooperative Games' (1951), and it is still known as the 'Nash equilibrium'. It is based on the idea that, in equilibrium, the strategies must consist of replies which are 'mutually best', in the sense that no player is able to do better than he does, given the actions of the other players. Nash's work permitted H. W. Kuhn to prove that every n-person perfect-information game (in which all players know the whole structure of the game, from the pay-offs to the possible moves of the others) has an equilibrium in pure strategies.

Despite its interest, this theorem is not particularly potent, because there is nothing to exclude the case where a game has a high number of equilibria, so that it is not clear which outcome would prevail if all the players were rational. This is what actually happens in the great majority of games. Thus a recent branch of research has tried to refine the set of strategic equilibria with the use of the most varied auxiliary criteria. The results of such research are still controversial, as there is, in fact, no 'objectively valid' criterion on which to base such a refinement process. The most important refinements have been made by John Harsanyi in 'Games of Incomplete Information Played by Bayesian Players' (1967–8) and in 'The Tracing Procedure' (1975). Harsanyi introduced a more general class of games, defined as 'Bayesian games', in which the players may not know with certainty the structure of the game. R. Selten, on the other hand, introduced the notion of 'perfect equilibrium' in 'Re-examination of the Perfectness Concept for Equilibrium in Extensive Games' (1975). He started from the observation that quite a number of Nash equilibria are imperfect, in the sense that they are based on threats of action depending on circumstances which never occur in the equilibrium situation, and which the players would never take into consideration if they could choose. The notion of 'perfect equilibrium' eliminates this kind of imperfection. An important recent synthesis of these two lines of research has been made by D. Kreps and R. Wilson with the notion of 'sequential equilibrium'.

In the case of two-person zero-sum games, things are much clearer: von Neumann's famous minimax theorem states, in fact, that, if the number of feasible pure strategies is finite, such games are determinate, or, rather, that they admit a single rational result in terms of mixed strategies.
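The notions of safety level and minimax value are easy to make concrete. Below is a minimal sketch in Python (the language, the NumPy/SciPy dependencies, the function names, and the matching-pennies pay-off matrix are all illustrative assumptions, not taken from the text): it computes each player's pure-strategy safety level and then the row player's mixed-strategy safety level by linear programming, which, by von Neumann's theorem, is the value of the game.

```python
import numpy as np
from scipy.optimize import linprog

def safety_levels(A):
    """Pure-strategy safety levels of a zero-sum game whose row-player
    pay-off matrix is A (the column player receives -A)."""
    row = A.min(axis=1).max()        # row player's guaranteed pay-off
    col = (-A).min(axis=0).max()     # column player's guaranteed pay-off
    return row, col

def mixed_value(A):
    """Row player's mixed-strategy safety level via linear programming.
    By the minimax theorem this is the value of the game."""
    m, n = A.shape
    # Variables: x_1..x_m (a mixed strategy) and v (the guaranteed pay-off).
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - x.A[:, j] <= 0, each j
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                             # probabilities sum to 1
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: no pure-strategy solution exists.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
print(safety_levels(A))           # (-1.0, -1.0): pure strategies guarantee only -1
x, v = mixed_value(A)
print(x.round(3), round(v, 3))    # [0.5 0.5] 0.0: randomizing secures the value
```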
Von Neumann's theorem has had an enormous impact on the development of the subject (the demonstration of the existence of a competitive equilibrium was itself obtained from a generalization of one of the demonstrations of the minimax theorem), and for a long period zero-sum games were considered the field of application of the theory.

The modern theory, although it has pushed the original ideas of von Neumann and Morgenstern far forward, has encountered formidable problems. For example, even in the field of co-operative games there is usually a multiplicity of possible equilibria. The number and the nature of the equilibria associated with a certain game will thus be determined by the particular interpretation of the game, the set of strategies available to the players, and the 'rationality criteria' to which they adhere. There is no universally valid criterion of choice. Each proposed criterion selects 'reasonable' equilibria for certain games; but for other games it excludes some equally 'reasonable' equilibria in order to choose other, less 'plausible' ones.

Another difficulty, which emerged in the 1950s and 1960s, concerned the theory's restriction to complete- and perfect-information games. In complete-information games, the players understand the nature of the game; in perfect-information ones, on the other hand, the players know both the nature of the game and all the preceding moves of the other players. This limited the field of phenomena which the theory was able to tackle, and therefore restricted its possible applications in economics. The theoretical developments of the 1970s and 1980s, especially the work of Harsanyi and Selten, have partially remedied this deficiency.

A recent approach has considered repeated games, also called supergames. Strategic behaviour can change if, instead of playing 'once and for all', the individuals know that the game may be repeated a certain, perhaps indefinite, number of times. A typical example is the well-known 'prisoner's dilemma', a non-zero-sum game which gives rise to a non-co-operative solution if it is played only once, but which can generate co-operative behaviour if it is repeated an indefinite number of times. R. Luce and H. Raiffa were among the first to highlight the dilemma. It consists in the fact that an egoistic choice is rational but does not lead to the best possible solution, while the co-operative choice is irrational for the person who makes it unless the reply of the other player is also co-operative. What is best for the individual is not necessarily best for the individuals taken together.

The interest in repeated games is due to a theorem, known as the 'Folk Theorem', which establishes a basic analogy between repeated games and non-repeated co-operative games by pointing out that the emergence of factors such as 'reputation' or 'credibility', which are typical of repeated games, can naturally lead the players to explore the possibilities of co-operative solutions. This is because these factors can give efficacy to the agreements and threats that make co-operation possible. An experimental verification of the theorem was given by R. Axelrod (The Evolution of Cooperation, 1984), who demonstrated how co-operative results tend to prevail in a game repeated an indefinite number of times. In view of the close link between 'oligopolistic indetermination' and game theory, it is not surprising that the conceptual apparatus of the latter has found wide application in industrial economics.
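The dilemma and the effect of repetition can be illustrated in a few lines of code. The sketch below uses the conventional pay-off numbers (3, 0, 5, 1), an assumption of convenience rather than anything in the text: defection is the dominant choice in a single round, yet two tit-for-tat players sustain co-operation over a long repeated match.

```python
# Conventional prisoner's dilemma pay-offs (an illustrative assumption):
# each entry is the mover's pay-off for (own move, opponent's move).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    """Co-operate first, then copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_b)        # each player sees the other's past moves
        b = strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a / rounds, score_b / rounds

print(play(always_defect, always_defect))  # (1.0, 1.0): one-shot logic, repeated
print(play(tit_for_tat, tit_for_tat))      # (3.0, 3.0): co-operation sustained
print(play(tit_for_tat, always_defect))    # (~1.0, ~1.0): tit-for-tat is exploited once only
```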
In his 1947 article 'Price Theory and Oligopoly', K. W. Rothschild complained that, when dealing with oligopoly, economists let themselves be too influenced by analogies from mechanics and biology. Rothschild believed that in the study of oligopolistic situations it is preferable to refer to those fields of research that study moves and counter-moves, the fight for power, and strategic behaviour. In fact, the use of game theory has led, in recent times, to the revaluation of concepts such as entry barriers and the relationship between the structure of the market and technical change. As M. Shubik indicated in Strategy and Market Structure (1959), the most important result in this context has been the following: the causal link proceeding from the structure of the market to behaviour and performance has been replaced by the idea that the structure of the market, understood as the number and size of the firms operating in it, is endogenously determined by the strategic interactions of the firms.

Another fruitful area of application of game theory has been that of bargaining (theories of contracts, auctions, and collective bargaining), in which two or more individuals must come to an agreement on the division of a given stake, with the constraint that, in the case of a failure to reach agreement, nobody receives anything. This problem has allowed economists to give a precise definition of a key economic notion: that of 'contractual power'. The more one party is 'anxious' to conclude an agreement, the more he will be disposed to give way; a numerical sketch of this idea follows at the end of this section. Ken Binmore's 'Modeling Rational Players' (1987) is an important work in this context. In it, the traditional notion of 'substantive rationality' has been replaced by that of 'algorithmic rationality', which resumes and generalizes Herbert Simon's famous notion of 'procedural rationality'.

Finally, a very recent field of application of game theory is that of the theory of economic policy (monetary and fiscal policy and international economic co-operation), where Selten's 'perfection criterion' and the notion of 'sequential equilibrium' have been widely used in relation to the concepts of 'credibility' and 'reputation' of players such as governments and unions. Among the most important works on this subject are R. J. Aumann and M. Kurz, 'Power and Taxes' (1977), and P. Dubey and M. Shubik, 'A Theory of Money and Financial Institutions' (1978).

The greater fertility of the economic applications of game theory by comparison with those of other mathematical instruments also depends, perhaps, on the fact that this theory was not borrowed from another discipline but was developed within economic research, which has favoured the formulation of concepts and formal procedures well suited to the representation of social and economic interactions. However, it is important not to forget that there are still severe limitations in game-theoretic modelling. For example, the choices of the most appropriate notions of 'individual rationality' and 'game equilibrium' are multifarious and partially arbitrary. And even where a well-defined notion of equilibrium has been accepted, the problem often still remains, especially in supergames, of the multiplicity of outcomes. In any case, it remains true that game theory, because of its rigorous logical structure, enables economists to classify different types of rationality and equilibrium, and is becoming a viable alternative research approach to the neo-Walrasian one.
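As a sketch of the bargaining point above (the division of a stake and 'contractual power'), the following fragment computes the Nash bargaining solution, the standard formalization of this problem (not named in the text): the division that maximizes the product of the two players' gains over the disagreement point, found here by simple grid search. The utility functions and grid size are illustrative assumptions.

```python
import numpy as np

def nash_split(u1, u2, grid=10001):
    """Nash bargaining solution for dividing a stake of size 1 with
    disagreement pay-offs (0, 0): the share x maximizing u1(x) * u2(1 - x)."""
    x = np.linspace(0.0, 1.0, grid)
    return x[np.argmax(u1(x) * u2(1.0 - x))]

print(nash_split(lambda s: s, lambda s: s))           # 0.5: symmetric players
print(nash_split(lambda s: s, lambda s: np.sqrt(s)))  # ~0.667: the more risk-averse
# ('anxious') player 2 gives way and accepts the smaller share
```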
11.1.2. Evolutionary games and institutions

The theory of evolutionary games was developed within the area of biological studies, where it provided a simple and elegant formalization of Darwin's theory of natural selection. It is based on the idea that evolution in a biological context depends on the differential reproduction of the 'most suitable' individuals or elements; this concept will be clarified later. John Maynard Smith summed up the first stage of research in his book Evolution and the Theory of Games (1982). In condensing the results of over ten years of research, the great English biologist made known outside the circle of specialists the intriguing connection between the concept of game, suitably reinterpreted for the context of 'animal conflict', and the more basic notions of strategic rationality developed in economic research and decision theory, primarily the notion of Nash equilibrium.

After an initial period of cautious interest, from the early 1990s economists began to pay increasing attention to this new area of the discipline, to which they devoted more and more of their research efforts. Their interest can be traced back to the impasse that characterized the literature on the so-called refinements of Nash equilibria in the early part of that decade, as recalled in the previous section. For the notion of Nash equilibrium to be considered a really useful instrument, in the presence of a plurality of equilibria it was necessary to come up with a convincing and practical criterion for deciding which of them to select in relation to the structure and characteristics of the game. Several refinement criteria had been proposed during the 1980s, with much use of inventiveness and technical skill, but all appeared to have substantial limits. In particular, each refinement criterion appeared to have been tailor-made for a certain type of game and was quite inadequate in contexts other than those for which it had been conceived. The monumental work of John Harsanyi and Reinhard Selten, A General Theory of Equilibrium Selection in Games, published in 1988 after a long period of gestation, purported to have the last word on the argument by putting forward a general theory that was valid for every possible type of game. But, despite the enormous effort and the important results achieved, the 'general theory' was so complex and intricate that it was soon clear that the research programme had substantially failed.

The advent of the evolutionary theory of games was thus hailed with much satisfaction and a certain amount of relief: thanks to this theory it was at last possible to construct a 'social dynamics' through which players ended by converging on the choice of a unique Nash equilibrium, under well-defined conditions. This kind of choice, which seemed impossible in the aprioristic type of approach inherent in the theory of refinements, could be realized ex post, as the result of a process of interaction between a large number of agents. However, when it came to the point, the evolutionary approach too left problems to be solved. For instance, in the case of games with a sufficiently high number of strategies, evolutionary dynamics could easily give rise to complex behaviours that did not contemplate convergence towards a final Nash equilibrium. This disappointment was soon overcome by another possible interpretation of evolutionary games, one that saw it in terms of bounded rationality.
As Ken Binmore observed, the problem of the 'eductive' approach to game theory lay precisely in the difficulty of building a priori a theory of strategic rationality valid in every circumstance, while through the alternative 'evolutionary' approach it was possible to demonstrate how and under what conditions certain a priori rational behaviours became socially diffused. They could be spread through social learning mechanisms even in the presence of players with modest abilities of calculation and with inflexible, substantially adaptive behavioural patterns, based on a satisficing logic of the kind proposed by Simon. Even with very simple models, the theory showed that rational behaviour, in the broadest sense, might not ensure maximum pay-offs for players. There was consequently a direct and unequivocal challenge to the Darwinian theory on which various utilitarian economists had tried to base the hypothesis of Homo oeconomicus rationality. Contrary to a longstanding belief, a superior rationality did not necessarily imply greater possibilities of profit or survival in the presence of competitive interaction. These results opened up new and important prospects, particularly for those economists who were most dissatisfied with the ultimate implications of the traditional neoclassical approach.

The theory of evolutionary games puts forward an interesting static notion of equilibrium and an even more interesting specification of the dynamic selection process. The new notion of equilibrium is expressed in terms of an evolutionarily stable state, a condition that requires a strategy profile to be robust not only against single individual deviations (as in the case of Nash equilibrium) but also against deviations chosen simultaneously by an albeit small fraction of players. This significant innovation has, however, one important fault: in a given game there is no guarantee that a Nash equilibrium exists which satisfies that condition. The most widely studied dynamic process in evolutionary game theory is the so-called replicator dynamics, according to which the weight of a strategy in a population of players grows faster the higher the pay-off of that strategy relative to the population mean. In other words, strategies that do better than the mean are rewarded at the expense of those that do worse, and the more so the better they do. When replicator dynamics converge, they do so towards Nash equilibria, and they possess a certain number of interesting properties, such as the tendency to eliminate any strategy that is strictly dominated in the game.

In more recent years, economists have begun to use the notions of evolutionarily stable state and replicator dynamics to build models that can be applied to a wide variety of contexts, with particular emphasis on phenomena which were barely considered in the past, such as the formation of social conventions, the impact and evolution of social norms, and cultural evolution. Another fertile field of application concerns the classic problems of institutional change and the evolution of individual preferences. These problems can now be tackled in a new way. Today the theory of evolutionary games is becoming an alternative paradigm to the neoclassical approach, thanks also to its ability to explain endogenously crucial phenomena like the predominance of a certain level of rationality among economic agents and the survival conditions of non-self-interested individual motives.
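A minimal numerical sketch of replicator dynamics follows, assuming Python/NumPy and the Hawk-Dove game from Maynard Smith's animal-conflict setting mentioned above (the pay-off numbers V = 4 and C = 6 are invented for illustration). The population share of Hawks converges to the mixed evolutionarily stable state V/C, which is also a Nash equilibrium, illustrating the convergence property just described.

```python
import numpy as np

# Hawk-Dove pay-off matrix with resource V = 4 and fight cost C = 6
# (illustrative numbers): the mixed evolutionarily stable state puts
# weight V/C = 2/3 on Hawk.
V, C = 4.0, 6.0
A = np.array([[(V - C) / 2, V],        # Hawk vs Hawk, Hawk vs Dove
              [0.0,         V / 2]])   # Dove vs Hawk, Dove vs Dove

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator equation dx_i/dt = x_i (f_i - f_mean)."""
    f = A @ x               # fitness of each strategy against the population
    return x + dt * x * (f - x @ f)

x = np.array([0.1, 0.9])    # start with 10 per cent Hawks
for _ in range(20000):
    x = replicator_step(x, A)
print(x.round(3))           # -> [0.667 0.333], the evolutionarily stable state
```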
Models of evolutionary games are particularly suitable for the study of cultural evolution. In Evolution of the Social Contract (1996), Brian Skyrms showed how rules of justice and co-operative types of rules evolve in time and in specific environmental contexts, even between self-interested subjects. Another fertile field of application deals with explaining the emergence of social rules such as those of reciprocity, sympathy, and altruism. Here the reference is The Complexity of Cooperation (1997) by R. Axelrod, in which, for the first time, a genetic algorithm was applied to the theory of evolutionary games. Genetic algorithms were initially developed in studies on artificial intelligence; they reproduce the mechanisms that drive biological evolution in order to search for more efficient methods of ensuring adaptation to a plurality of environmental contexts. Referring to a 'prisoner's dilemma' type of situation, Axelrod demonstrated that the tit-for-tat strategy, which he himself had 'invented' and expounded in his contribution of 1984, is very robust, and that co-operative and reciprocating strategies tend to prevail in a wide variety of situations.

To conclude, in the theoretical scenario opened up by the theory of evolutionary games there is growing convergence among research undertaken by biologists, economists, and sociologists. The foundations are being laid for a science of social behaviour which goes far beyond the old disciplinary boundaries of positivist ascendancy.

11.1.3. The theory of endogenous growth

To understand the reasons behind the great success of endogenous growth theory in macroeconomics over the last decade, it is necessary to begin from the so-called 'Solow residual'. This expression indicates all those determinants of the growth process that cannot be reduced to the contributions of labour and capital. With hindsight it can be said that one of the chief merits of Solow's 1956 model (see section 9.2.4) was the idea that economic growth cannot be completely explained by increases in the stock of productive factors. Nonetheless, his theory left various questions unanswered: given that technical progress lies at the base of increases in the productivity of labour and capital, how can this phenomenon be modelled? How can an endogenous explanation be given for the long-run growth rate?

Despite the objective importance of these questions and the innovative contributions made by economists such as Kenneth Arrow and Nicholas Kaldor, in the 1960s neoclassical scholars' concern with growth problems waned, preference being given to business cycle theory. However, this period of oblivion was not to last for long. As N. Foss showed, beginning in the early 1980s, economists gradually shifted their interest from the business cycle to the study of real growth factors, to the detriment of monetary factors. The work of F. Kydland and E. Prescott is an example of this shift; the trend heralded a change of climate which soon proved to be extremely favourable to the advent of the 'new growth theory'.

The theory of endogenous growth officially came to light in 1986, the year of publication of P. Romer's fundamental article 'Increasing Returns and Long-Run Growth'. It describes a model of competitive equilibrium in which the growth process is driven by Marshall-type increasing returns.
The aggregate production function has the following characteristics: output depends on the capital stock, on labour, and on the costs of R&D activities; in addition, the spillover from private research brings improvements in the stock of public knowledge. Knowledge is treated as an input with increasing marginal productivity. The model therefore combines two basic hypotheses: competitive equilibrium and endogenous technological change. Romer pointed out that there are different forms of knowledge: at one extreme we have basic scientific knowledge; at the other, knowledge understood as a set of specific instructions relating to a determined operation. Romer observed that 'there is no reason to expect the accumulation determinants of these different types of knowledge to be the same [...] There is, therefore, no reason to expect a unified theory of knowledge growth' (p. 1009).

One fundamental aspect shared by the numerous endogenous growth models is their characterization of knowledge as a public good: knowledge has in fact, at least to a certain extent, precisely the characteristics of non-excludability and non-rivalry of such goods. The idea is that the non-excludability from use of the new knowledge created by an individual firm generates positive effects on the production possibilities of other firms. The hypotheses of increasing returns and non-excludability of knowledge were already present in the work on learning by doing published by Arrow over two decades earlier. Arrow had put forward the idea that increasing returns appear as a direct effect of the discovery of new knowledge. This was the basic idea later developed by Romer.

One of the most important implications of Romer's model is the following: the per capita outputs of different countries do not necessarily converge; less developed countries may well grow at a lower rate than more advanced countries, or indeed may not grow at all. Later, in a model elaborated in 1990, Romer considered an economy divided into three sectors (research, intermediate goods, and final goods) and characterized by monopolistically competitive markets. He furthermore assumed that while the stock of technological knowledge was non-rival, human capital was rival. The model predicts that economies endowed with a higher stock of human capital grow at a faster rate than the others. Growth will also accelerate as a consequence of opening up to international trade.

In 1988 Robert Lucas presented a model similar to Romer's 1986 version, but assumed that two different sectors and two different types of investment (in physical capital and in human capital) are to be taken into account. The only magnitude hypothesized as exogenous was the population growth rate. While physical capital accumulates through the usual neoclassical mechanism, the growth of human capital is regulated by the following 'law': independently of the stock already achieved, a steady level of effort corresponds to a steady rate of growth of the stock. Lucas held that one of the chief merits of the 'new growth theory' lies in its capacity to elaborate formal models that explain the growth of both advanced and developing countries. By contrast, in the 1960s the prevailing idea was that distinct theories were needed.
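Lucas's 'law' can be sketched in a few lines. In the fragment below (the parameter values delta = 0.05 and u = 0.8 are invented for illustration), a constant share of effort devoted to learning yields a constant growth rate of human capital regardless of the level already reached, whereas the neoclassical accumulation of physical capital, with its diminishing returns, sees its growth rate fall over time.

```python
# Lucas-style human-capital 'law' (delta and u are illustrative parameters):
# h grows at the constant rate delta * (1 - u), whatever the level of h.
delta, u = 0.05, 0.8        # learning productivity; share of time spent working
h = 1.0
for t in range(1, 6):
    h *= 1 + delta * (1 - u)
    print('h', t, round(h, 4))          # steady 1 per cent growth per period

# By contrast, the neoclassical mechanism for physical capital has
# diminishing returns (alpha < 1), so the growth rate of k falls steadily
# in the absence of exogenous technical progress.
s, alpha, dep, k = 0.2, 0.3, 0.05, 1.0  # saving rate, elasticity, depreciation
for t in range(1, 6):
    k += s * k ** alpha - dep * k
    print('k', t, round(k, 4))          # the growth rate of k declines as k rises
```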
There are, furthermore, significant political implications, since public authorities appear to have greater 'room for manoeuvre' in situations of endogenous growth. Solow's model, where the long-run growth rate depends exclusively on exogenous technological change, ended by not assigning any role to institutional subjects outside the market; conversely, endogenous growth models admit, for example, that changes in the tax regime can influence the growth rate.

Another important contribution to the theory of endogenous growth was made by P. Aghion and P. Howitt, who attempted to demonstrate that firms may accumulate knowledge through numerous channels, from formal education to product innovations. Great importance is attached to industrial innovations which improve product quality. The Schumpeterian view underlying Aghion and Howitt's endogenous growth model can be summarized as follows: growth is the effect of technological progress which, in turn, depends on competition between firms that undertake research and create innovations. Every innovation gives rise to the production of a new type of intermediate good, by means of which a final product can be produced in conditions of greater efficiency. From the individual firm's point of view, the incentive to invest in research comes from the prospect of building up monopoly rents through the legal protection granted to innovations. On the other hand, in a dynamic context, those very rents are made fruitless by subsequent innovations, which render the old products obsolete almost as soon as they are produced. This analytical scheme also contemplates the existence of a strong relationship between the innovative firm's market power and the degree of excludability of knowledge. This excludability in turn depends critically on the presence of legal institutions responsible for protecting ownership rights, as well as on the very nature of knowledge.

11.2. The Theory of Production as a Circular Process

11.2.1. Activity analysis and the non-substitution theorem

In Chapter 8 we spoke of a tradition in input–output analysis that originated at the beginning of the twentieth century in Russia with Dmitriev, and then emigrated to Germany with von Charasoff and von Bortkiewicz. There, in the second half of the 1920s, this tradition inspired the work of Leontief and Remak. In the same chapter we also spoke of Menger's Viennese Kolloquium, and of the work of Schlesinger and Wald on the problem of the existence of solutions in the general-equilibrium model, and we also mentioned von Neumann's movements between Berlin and Vienna. This line of thought was transplanted to America in the 1930s and there, after the Second World War, bore diverse and notable fruits. We have already mentioned von Neumann's contribution to the birth of game theory and of the balanced-growth models. We have also presented Leontief's research in input–output analysis. Finally, in Chapter 10 we spoke of the influence exerted by these lines of thought on the development of the neo-Walrasian approach. Now we should like to examine another two important theoretical developments which also began after the Second World War, and which can be interpreted as developments and extensions of Leontief's and von Neumann's models: activity analysis and the non-substitution theorem. The classic [...]

[...] in the composition of demand and the quantities produced, and prices depend solely on technical conditions of production. There are two different interpretations of the relevance of this theorem. The first to be advanced interprets the term 'substitution' as a synonym for 'change of techniques'. In this case the theorem serves to demonstrate the robustness of Leontief's and similar models. The hypothesis [...]

[...] introduction of an interest rate, and with a special case of joint production. The introduction of the interest rate modifies the results of the theorem, in the sense that there is a different price system for each different value of the interest rate. The 'dynamic' character of the theorem would consist of the possibility of applying it to an economy which is growing in steady state.
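The price-system point in the fragment above can be illustrated numerically. The sketch below (the input coefficients and labour vector are made-up numbers, chosen only to give a productive two-commodity system) solves the standard production-price equations of a circular system for several interest rates, showing that each value of r yields a different price system and that prices are proportional to labour values only when r = 0.

```python
import numpy as np

# Production prices for a two-commodity circular system (made-up numbers):
#   p_j = (1 + r) * sum_i p_i * a_ij + w * l_j
# where a_ij is the amount of commodity i used to produce one unit of j,
# r is the interest (profit) rate and the wage w serves as numeraire.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
l = np.array([0.5, 1.0])

def prices(A, l, r, w=1.0):
    n = len(l)
    return np.linalg.solve(np.eye(n) - (1 + r) * A.T, w * l)

for r in (0.0, 0.05, 0.10):
    print(r, prices(A, l, r).round(4))
# Each value of r yields a different price vector, as the 'dynamic'
# non-substitution theorem discussion above indicates; at r = 0 the
# prices coincide with labour values.
```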
11.2.2. The debate on the theory [...]

[...] is the same as the first chapter of Capital, which is entitled 'The Commodity', and which is the real prelude to the Critique of Political Economy. In that chapter, Marx tackled the analysis of the commodity and its value, and laid down the theoretical bases of all his successive work; he attacked the 'vulgar economists', whom he accused of looking for the explanation of the value of goods in exchange [...]

[...] with Baran's theory of economic surplus, and produced the thesis of the tendency of the potential surplus to grow, a thesis which was to replace, according to the authors, the Marxian law of the falling profit rate as a fundamental explanation of the march of capitalism towards self-destruction. Capitalist accumulation causes, besides an increasing concentration of capital, [...]

[...] the development of commerce and the relationships between town and countryside, as was argued by Sweezy and others. The importance of this debate was not limited to the field of economic history, but also touched upon a central problem of Marxist economic theory, that of 'primitive accumulation'. One of the main interests of Dobb, after the Second World War, was the economic theory of socialism, to [...] theoretical and applied. Dobb was a critic of the theory of 'market socialism' of the Lange–Lerner type, and pointed out its essentially static and therefore unrealistic nature. Against it, he argued that, given the burdensome inheritance of productive backwardness and of inequality in the distribution of income and resources, an economy in the phase of transition to socialism must put the problems of [...]

[...] class conflict on the transformations of the State itself. Harry Braverman, in Labour and Monopoly Capital (1974), dealt with the problem of the effects of mechanization and managerial control of companies over the transformation of the labour process and class composition in modern capitalism. In The Modern World System (1974–80), Immanuel Wallerstein developed the Marxian analysis of 'primitive accumulation' [...]
[...] presence of joint production and economically significant price systems, there can be negative labour values, and even negative rates of exploitation in the presence of a positive profit rate. But the most important acquisition of recent research is that of having understood that the theory of labour value contradicts the theory of exploitation. In fact, since the rate of exploitation is not an invariant in the transformation [...]
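The possibility of negative labour values under joint production is easy to verify numerically. The following sketch (the two-process system is invented for illustration, in the spirit of Steedman's well-known examples rather than reproducing his actual numbers) exhibits a productive joint-production economy in which one commodity nonetheless has a negative labour value.

```python
import numpy as np

# Two joint-production processes (invented numbers). Row j of A and B
# gives the inputs and outputs of process j; l is direct labour.
A = np.array([[5.0,  0.0],     # process 1 uses 5 units of good 1
              [0.0, 10.0]])    # process 2 uses 10 units of good 2
B = np.array([[6.0,  1.0],     # process 1 makes 6 of good 1 and 1 of good 2
              [3.0, 12.0]])    # process 2 makes 3 of good 1 and 12 of good 2
l = np.array([1.0, 1.0])

# Labour values solve, process by process: value of outputs = value of
# inputs + direct labour, i.e. (B - A) v = l.
v = np.linalg.solve(B - A, l)
print(v)                       # [-1.  2.]: good 1 has a negative labour value
print((B - A).sum(axis=0))     # [4. 3.]: yet the net product is positive
```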