The Adaptive Expectations (“AE”) Versions


In the AE model, agents are assumed not to know the underlying stochastic model. In particular, it is assumed that agents do not know the value of Π and, furthermore, do not know the values of the past and current stochastic productivity levels of other agents. The only information that an agent in the AE model is allowed to retain is information about the output produced in the previous period.

Two versions of the AE model are considered, based loosely on the long-standing literature that distinguishes between individual and social learning; for instance, [23] argues that there are fundamental distinctions between the two modeling approaches, with greatly differing outcomes depending on whether individual or social learning is assumed. Some summary comparative results for individual and social adaptive learners are provided in Table 3.

1. Adaptive Expectations—Individual (“AE–I”): In this version, an agent naïvely predicts that the individual she will be paired with in the current period will be someone who has produced the same quantity as the individual she encountered in the previous period. In other words, agents in this version form their expectations solely on the basis of their most recent individual experience.

2. Adaptive Expectations—Population (“AE–P”): In this version, an agent who produces a particular variety is aware of the economy-wide distribution of the other variety produced in the previous period and predicts that distribution will remain unchanged in the current period. In other words, agents in this version form their expectations on the basis of the most recent population-wide outcome.

It must be noted that the agents in both these versions are goal-directed expected utility maximizers who use information about their observable idiosyncratic shocks and their naïve expectations about others' actions to determine their optimal level of effort. It is also assumed that all adaptive agents start with the high effort strategy (in the first period, each agent expends high effort if her own stochastic productivity level is high). A few interesting results emerge that highlight the contrast with the SREE version. Although some of these differences can be found in Table 3, it is easier to visualize them through graphs rather than summary statistics.
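To make the two expectation rules concrete, here is a minimal sketch in Python. The function names, the binary high/low representation, and the 0.5 probability threshold are illustrative assumptions rather than specifications from the chapter; the actual cutoff would come from the expected-utility comparison in the underlying model.

```python
def choose_effort(own_productivity_high: bool,
                  prob_partner_high: float,
                  threshold: float = 0.5) -> str:
    """Effort rule shared by both AE versions (sketch).

    High effort is worthwhile only when the agent's own stochastic
    productivity is high AND a high-output trading partner is judged
    sufficiently likely; otherwise low effort maximizes expected utility.
    """
    if own_productivity_high and prob_partner_high >= threshold:
        return "high"
    return "low"


def ae_individual_expectation(last_partner_output_high: bool) -> float:
    """AE-I: expect to meet someone like last period's partner."""
    return 1.0 if last_partner_output_high else 0.0


def ae_population_expectation(last_period_high_outputs: list) -> float:
    """AE-P: expect last period's economy-wide share of high output to persist."""
    return sum(last_period_high_outputs) / len(last_period_high_outputs)
```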

¹² I would like to thank an anonymous referee for highlighting this point.


[Figure: time series of output per capita (y-axis roughly 0.8–2) over periods 1–25 for the series Homogeneous, SREE (Low), SREE (32489), SREE (High), AE Individual, and AE Population.]

Fig. 1 Ensemble averages of output per capita over 1000 simulations in a four-agent model

Figure 1 depicts ensemble averages of per capita output over time for AE–I, AE–P, the high SREE, the low SREE, and the randomly chosen SREE. As can be seen, the high SREE and the randomly chosen SREE bounce around throughout, while AE–I and AE–P bounce around for a while before settling at 1, which equals the low SREE. In other words, it appears that the low SREE is the eventual outcome of the adaptive expectations process. Notice that none of the SREEs have any perceptible trend, while both AE–I and AE–P have a perceptible trend that eventually settles down at the low SREE.

In the AE–I case, the convergence to the low equilibrium occurs because each time an agent trades with another agent who has produced a low level of output, she decides to produce low output herself in the next period regardless of her productivity level. This, in turn, causes the agent she interacts with in the next period to produce low output in the period after that. On the other hand, if an agent encounters another agent who has produced a high level of output, she will only produce a high level herself in the next period if her own stochastic productivity outcome is also high. With adaptive expectations, the outcome is therefore inexorably drawn towards the low equilibrium because of the asymmetry embedded in effort choice: high effort is optimal only when one's own productivity level is high and one reasonably expects to meet someone else who has produced a high level of output, whereas low effort is optimal whenever one's own productivity is low or one expects to have a trading partner who has produced low output.
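The absorbing nature of low output under AE–I can be illustrated with a small sketch. The pairing scheme below ignores the two-variety structure of the model, assumes an even number of agents, and uses a hypothetical probability `p_high` for a high productivity draw, so it is a simplification rather than the chapter's exact algorithm.

```python
import random

def ae_i_step(expect_partner_high, p_high=0.5):
    """Simulate one AE-I period for n agents (sketch).

    expect_partner_high[i] is True if agent i's partner last period produced
    high output; that naive expectation, combined with her own draw, fixes
    her effort. Returns this period's outputs and next period's expectations.
    """
    n = len(expect_partner_high)                 # assumed even
    draws = [random.random() < p_high for _ in range(n)]
    # Asymmetry: high output needs BOTH a high own draw and a high expectation;
    # a low expectation forces low output regardless of the draw.
    outputs_high = [draws[i] and expect_partner_high[i] for i in range(n)]
    # Random pairing: each agent observes her partner's output and carries it
    # forward as next period's expectation, so low output propagates.
    order = random.sample(range(n), n)
    nxt = [None] * n
    for i in range(0, n, 2):
        a, b = order[i], order[i + 1]
        nxt[a], nxt[b] = outputs_high[b], outputs_high[a]
    return outputs_high, nxt
```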

The reason for the convergence of the AE–P case to the low equilibrium is similar. As long as the population-wide distribution of outputs is above a certain threshold, agents continue to adopt the high effort strategy (in which they expend high effort whenever their own stochastic productivity level is high) because they reasonably expect to meet a high output agent the next period. However, as soon as the population-wide distribution of outputs falls below that threshold, each agent assumes that the probability of meeting a high output agent is too low to justify expending high effort, even when her own stochastic productivity outcome is high the next period. As soon as this threshold is breached, all agents expend low effort in the next period, and low effort remains the adopted strategy for all periods thereafter. When the population size is large, the likelihood of the population-wide distribution of productivity levels being extreme enough to push the distribution of outputs below this threshold is low. Consequently, as the population size grows larger, the results of AE–P converge towards the continuum case.
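The AE–P mechanism can be sketched in the same style. The threshold value below is again a stand-in for the critical population-wide share implied by the expected-utility calculation, which is not reproduced here.

```python
def ae_p_step(draws_high, last_high_share, threshold=0.5):
    """One AE-P period (sketch): every agent uses the same population-wide belief.

    If last period's share of high output is at or above the threshold, each
    agent expends high effort when her own draw is high; once the share slips
    below the threshold, all agents switch to low effort at once and the
    economy remains at the low SREE thereafter.
    """
    if last_high_share < threshold:
        outputs_high = [False] * len(draws_high)
    else:
        outputs_high = list(draws_high)
    return outputs_high, sum(outputs_high) / len(outputs_high)
```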

Various papers in the multiple-equilibria literature report similar results, in which one or more specific equilibria tend to be reached through best-response dynamics [6], genetic algorithms [1, 3, 15], various forms of learning [8, 10, 16], and so on.

As can be seen from the discussion of AE–P above, the effect of scale depends crucially on the modeling assumptions. While issues of dimensionality preclude the analysis of all SREE in models with more than four agents, as discussed earlier, the low SREE and the high SREE can be shown to exist regardless of the number of agents. In order to examine the effects of scale more explicitly, consider Fig. 2, which compares one sample run each (with different random evolutions of the stochastic processes across the different models) for 4 agents, 12 agents, and 100 agents. From the single-run graphs in Fig. 2, it appears that scale matters a great deal for AE–P, whereas scale does not matter for SREE or AE–I, except in the sense that the variance of outputs is predictably lower with more agents. This effect of scale is more clearly visible in the ensemble averages presented in Fig. 3.¹³ Scale matters to a very limited extent for AE–I (being visible only when zoomed into the short run), and a great deal for AE–P as we go from 4 agents to 100 agents. AE–P most closely resembles AE–I in a model with 4 agents, but by the time the economy comprises 100 agents, AE–P starts to resemble the high SREE. For the models with 4 agents and 8 agents, AE–P starts with a high output per capita that eventually settles at the low SREE.

Based on the above results as well as the underlying theory, it can be seen that, unlike the SREE versions, the AE models are path dependent: it is not just the productivity level in the last period that matters but also the specific sequence of past stochastic productivity levels and past random pairings. However, continuing with the terminology employed by Page [19], the AE models are equilibrium independent in the sense that the long-run outcome converges to the low SREE. This happens fairly rapidly for AE–I in all situations and for AE–P with a small number of agents. For AE–P with a large number of agents, convergence could take a very long time indeed, but it will eventually occur.¹⁴

¹³ For all figures, the initial (zeroth) period is dropped from consideration.


[Figure: three panels of single-run time series of output per capita (y-axis roughly 0.8–2.4) over periods 1–100, one panel each for 4, 12, and 100 agents; series in each panel: SREE (Low), SREE (High), AE Individual, AE Population.]

Fig. 2 Scale effects—single runs of models with 4 agents, 12 agents, and 100 agents

Assuming SREE as a simplification is equivalent to ignoring short- and intermediate-run dynamics, and those might matter in the real world, where the long-run equilibrium could indeed be a long way away, as in the AE–P case with a large number of agents.

¹⁴ It can be shown, for instance, that if all agents were to receive a low productivity level in any period (a scenario with an extremely low but still nonzero probability in a model with many agents), then everyone would switch to the low strategy in the next period.
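A back-of-the-envelope calculation makes this footnote concrete; it assumes, purely for illustration, that low draws are independent across agents and that each agent draws low with probability $1-p$ in a given period (neither value is taken from the chapter).

```latex
% Illustrative values: p and N are assumptions, not parameters from the chapter.
\Pr(\text{all } N \text{ agents draw low in a given period}) = (1-p)^{N},
\qquad p = 0.5,\; N = 100 \;\Rightarrow\; (0.5)^{100} \approx 7.9 \times 10^{-31}.
```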


[Figure: ensemble-mean time series of output per capita (y-axis roughly 0.9–1.8) over periods 1–100; series: AE - Population (4), AE - Population (8), AE - Population (100), AE - Individual (4), AE - Individual (8), AE - Individual (100), SREE - High (4), SREE - High (8), SREE - High (100), SREE - Low (4, 8, 100).]

Fig. 3 Scale effects—ensemble means of models with 4 agents, 12 agents, and 100 agents

4 Further Discussion

In keeping with much of the literature surveyed in the introduction, the results from this paper suggest that the assumptions of homogeneity, REH, and Walrasian equilibria are far from innocuous. Even within the REH framework, assuming continuums of agents and imposing SREE restrictions could lead to focusing on a small subset of interesting results, even though those assumptions and restrictions are extremely helpful in terms of providing tractability. Furthermore, results obtained through REH are not path dependent and could feature multiple equilibria without a robust theoretical basis for choosing among them, although various alternatives have been proposed in the literature. AE–I and AE–P, on the other hand, are path dependent and still eventually converge to certain SREE, thereby allowing more precise and testable analyses of trajectories in the economy through simulation-based event studies and Monte Carlo approaches. It should be noted that the AE models employed here have a limitation: they assume that memory lasts only one period; how results change as memory increases could be addressed in further work.¹⁵ Finally, SREE is scale free (although not in terms of the variance of output), whereas AE–I and AE–P are not (AE–P, in particular, shows much greater sensitivity to scale), thereby allowing for richer models.

Overall, the exercises undertaken in this paper suggest that REE can indeed be implemented in agent-based models as a benchmark. However, even a small increase in the state space or the number of agents renders a fully specified REE computationally intractable. ABM researchers interested in realism may therefore find REE benchmarks of limited applicability, since modifying models to allow for tractable REE could involve imposing severe and debilitating limitations in terms of the number of agents, their heterogeneity, the types of equilibria allowed, and other forms of complexity.

Acknowledgements I would like to thank anonymous reviewers for their invaluable suggestions.

I would also like to acknowledge my appreciation for the helpful comments I received from the participants of the 20th Annual Workshop on the Economic Science with Heterogeneous Interacting Agents (WEHIA) and the 21st Computing in Economics and Finance Conference.

References

1. Arifovic, J. (1994). Genetic algorithm learning and the cobweb model. Journal of Economic Dynamics and Control, 18(1), 3–28.

2. Bryant, J. (1983). A simple rational expectations Keynes-type model. Quarterly Journal of Economics, 98(3), 525–528.

3. Chen, S.-H., Duffy, J., & Yeh, C.-H. (2005). Equilibrium selection via adaptation: Using genetic programming to model learning in a coordination game. In A. Nowak & K. Szajowski (Eds.), Advances in dynamic games. Annals of the international society of dynamic games (Vol. 7, pp. 571–598). Boston: Birkhäuser.

4. Colander, D. (2006). Post Walrasian economics: Beyond the dynamic stochastic general equilibrium model. New York: Cambridge University Press.

5. Cooper, R. (1994). Equilibrium selection in imperfectly competitive economies with multiple equilibria. Economic Journal, 104(426), 1106–1122.

¹⁵ Intuitively, with more memory, AE–I and AE–P may sustain the high effort equilibrium for longer because individuals would take more periods with high outputs into consideration when making decisions in subsequent periods. For instance, even in the period right after an economy-wide recession, agents with longer memories could revert to expending high effort the next period if their own individual stochastic productivity level was high.


6. Cooper, R., & John, A. (1988). Coordinating coordination failures in Keynesian models. Quarterly Journal of Economics, 103(3), 441–463.

7. Diamond, P. (1982). Aggregate demand management in search equilibrium. Journal of Political Economy, 90(5), 881–894.

8. Evans, G., & Honkapohja, S. (2013). Learning as a rational foundation for macroeconomics and finance. In R. Frydman & E. Phelps (Eds.), Rethinking expectations: The way forward for macroeconomics (pp. 68–111). Princeton and Oxford: Princeton University Press.

9. Fagiolo, G., & Roventini, A. (2012). Macroeconomic policy in DSGE and agent-based models. Revue de l'OFCE, 124(5), 67–116.

10. Frydman, R. (1982). Towards an understanding of market processes: Individual expectations, learning, and convergence to rational expectations equilibrium. The American Economic Review, 72(4), 652–668.

11. Frydman, R., & Phelps, E. (2013). Which way forward for macroeconomics and policy analysis? In R. Frydman & E. Phelps (Eds.), Rethinking expectations: The way forward for macroeconomics (pp. 1–46). Princeton and Oxford: Princeton University Press.

12. Howitt, P., & Clower, R. (2000). The emergence of economic organization. Journal of Economic Behavior and Organization, 41(1), 55–84.

13. Jackson, J., & Kollman, K. (2012). Modeling, measuring, and distinguishing path dependence, outcome dependence, and outcome independence. Political Analysis, 20(2), 157–174.

14. Lengnick, M. (2013). Agent-based macroeconomics: A baseline model. Journal of Economic Behavior and Organization, 86, 102–120.

15. Marimon, R., McGrattan, E., & Sargent, T. (1990). Money as a medium of exchange in an economy with artificially intelligent agents. Journal of Economic Dynamics and Control, 14(2), 329–373.

16. Milgrom, P., & Roberts, J. (1990). Rationalizability, learning, and equilibrium in games with strategic complementarities. Econometrica, 58(6), 1255–1277.

17. Napoletano, M., Gaffard, J.-L., & Babutsidze, Z. (2012). Agent based models: A new tool for economic and policy analysis. Briefing Paper 3, OFCE Sciences Po, Paris.

18. Oeffner, M. (2008). Agent-based Keynesian macroeconomics: An evolutionary model embedded in an agent-based computer simulation. PhD thesis, Julius-Maximilians-Universität, Würzburg.

19. Page, S. (2006). Path dependence. Quarterly Journal of Political Science, 1(1), 87–115.

20. Plosser, C. (1989). Understanding real business cycles. The Journal of Economic Perspectives, 3(3), 51–77.

21. Stiglitz, J., & Gallegati, M. (2011). Heterogeneous interacting agent models for understanding monetary economies. Eastern Economic Journal, 37(1), 6–12.

22. Tesfatsion, L. (2006). Agent-based computational modeling and macroeconomics. In D. Colander (Ed.), Post Walrasian economics: Beyond the dynamic stochastic general equilibrium model (pp. 175–202). New York: Cambridge University Press.

23. Vriend, N. (2000). An illustration of the essential difference between individual and social learning, and its consequences for computational analyses. Journal of Economic Dynamics and Control, 24(1), 1–19.

Does Persistent Learning or Limited Information Matter in Forward Premium Puzzle?

Ya-Chi Lin

Abstract Some of the literature explains the forward premium puzzle through the learning process governing agents' behavioral parameters, and this work argues that those conclusions may not be convincing. This study extends their model to the limited information case, yielding several interesting findings. First, the puzzle arises when the proportion of full information agents is small, even when people form expectations nearly rationally. Second, when the proportion of full information agents is made endogenous and depends heavily on forecasting performance, agents become fully informed almost immediately and the puzzle disappears. These results are similar across different learning gain parameters. Our findings show that limited information can be more important than learning in explaining the forward premium puzzle. Third, the multi-period version of the Fama equation is also examined using exchange rates simulated under learning in the limited information case. The Fama coefficients are positive, and the puzzle does not remain. This is consistent with the stylized facts for the multi-period version of the Fama regression found in McCallum (J Monet Econ 33(1):105–132, 1994). Finally, we also find that if agents rely too much on recent data when forecasting, they tend to overreact in their arbitrage behavior, and the Fama coefficient deviates further from unity. People might not benefit from having more information.

Keywords Limited information · Persistent learning · Market efficiency · Forward premium puzzle · Dual learning

Y.-C. Lin (✉)
Hubei University of Economics, Wuhan, China
e-mail: yachi.lin@hbue.edu.cn

© Springer Nature Switzerland AG 2018
S.-H. Chen et al. (eds.), Complex Systems Modeling and Simulation in Economics and Finance, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-319-99624-0_8



1 Introduction

The forward premium puzzle is a long-standing paradox in international finance. Foreign exchange market efficiency means that exchange rates fully reflect all available information, so no unexploited excess profit opportunities exist when the market is efficient. Therefore, the forward exchange rate should be the best predictor of the future spot exchange rate. However, most empirical research finds the opposite: the slope coefficient in a regression of the future spot rate change on the forward premium is significantly negative, whereas it should be unity if the market is efficient. Let $s_t$ be the natural log of the current spot exchange rate (defined as the domestic price of foreign exchange), let $\Delta s_{t+1} = s_{t+1} - s_t$ denote the depreciation of the log spot exchange rate from period $t$ to $t+1$, and let $f_t$ be the natural log of the one-period forward rate at period $t$. The forward premium puzzle is examined with the following Fama equation:

$$\Delta s_{t+1} = \hat{\beta}\,(f_t - s_t) + \hat{u}_{t+1} \qquad (1)$$

The Fama coefficient $\hat{\beta}$ is unity if the efficient market hypothesis holds. However, in the majority of studies $\hat{\beta}$ is negative; this is what is called the “forward premium puzzle.”
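As a minimal sketch of how Eq. (1) is typically estimated, the following ordinary least squares example uses synthetic series; the variable names and the data-generating process are illustrative assumptions, not the author's simulation.

```python
import numpy as np

def fama_beta(s, f):
    """Estimate the Fama slope in delta_s[t+1] = beta * (f[t] - s[t]) + u[t+1] by OLS."""
    s, f = np.asarray(s), np.asarray(f)
    delta_s = s[1:] - s[:-1]                      # realized depreciation, t -> t+1
    premium = f[:-1] - s[:-1]                     # forward premium at t
    X = np.column_stack([np.ones_like(premium), premium])
    coef, *_ = np.linalg.lstsq(X, delta_s, rcond=None)
    return float(coef[1])                         # unity under the efficient market hypothesis

# Synthetic data in which the forward rate equals the conditional expectation of
# the next spot rate, so the estimated slope should come out close to one.
rng = np.random.default_rng(0)
T = 500
premium = rng.normal(0.0, 0.01, T)                # exogenous forward premium
u = rng.normal(0.0, 0.02, T)                      # unforecastable shocks
s = np.zeros(T)
f = np.zeros(T)
for t in range(T - 1):
    f[t] = s[t] + premium[t]                      # f_t = E_t[s_{t+1}]
    s[t + 1] = f[t] + u[t + 1]
f[-1] = s[-1] + premium[-1]

print(round(fama_beta(s, f), 2))                  # approximately 1.0
```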

Examining market efficiency by regressing the future spot rate change on the forward premium rests on the assumption that agents are rational and risk neutral. The rejection of market efficiency has the following explanations. First, if agents are risk averse, the forward exchange rate contains a risk premium. Hodrick [13] and Engel [9] apply the Lucas asset pricing model to price the forward foreign exchange risk premium, which allows the future spot rate to deviate from the forward rate. Second, agents may have incomplete knowledge about the underlying economic environment.

During the transitional period, they can only guess the forward exchange rates by averaging the several spot exchange rates that might occur. This generates systematic prediction errors even though agents behave rationally. This is the so-called “peso problem,” originally studied by Krasker [15]. Lewis [16] assumes that agents update their beliefs about regime shifts in fundamentals by Bayesian learning; during the learning period, the forecast errors are systematic and serially correlated. Motivated by the fact that even today only a tiny fraction of foreign currency holdings are actively managed, [1] assume that agents may not incorporate all information in their portfolio decisions. Most investors do not find it in their interest to actively manage their foreign exchange positions, since the resulting welfare gain does not outweigh the information processing cost. This leads to a negative correlation between the exchange rate change and the forward premium for five to ten quarters, and the puzzle disappears over longer horizons. Scholl and Uhlig [23] consider Bayesian investors who trade contingent on a monetary policy shock and use a Bayesian VAR to assess the posterior uncertainty regarding the resulting forward discount premium. The forward discount premium diverges for several countries even without delayed overshooting, so the forward discount puzzle seems to be robust.
