Conclusions and Development Prospects


The main advantage of the developed approach to parallelizing models designed in the AnyLogic environment is the automated creation of their supercomputer versions. This considerably simplifies development because, in most cases, after minor modification of the source model, no further tuning of the rules that transform it into an executable module for a supercomputer is required.

This approach is extensible with respect to both the source language and the software and hardware platform. Besides the already successfully tested execution platforms Avian and ADEVS, other, lower-level means of accelerating the recalculation of agents' states can be developed, and in the longer term the use of hardware accelerators such as Xeon Phi and NVIDIA CUDA can be considered.

The internodal communication technology employed, based on active messaging, makes it possible, whenever required, to implement both interactive simulation and interactive visualization of the simulation process within the estimated time. However, this is possible only if a supercomputer is available in burst mode, for example, when a compact personal supercomputer is used.

The main remaining research question is that of the maximum achievable efficiency of parallelization when agents located in different computational nodes of a supercomputer communicate on a massive scale.

It is quite evident that if every agent communicates actively with all other agents, performance will be low owing to the extensive internodal traffic.

Nevertheless, even in such an unfavorable case, a supercomputer makes it possible to accelerate simulation considerably when a model is launched either repeatedly (to gather statistics) or with different parameters. In the other extreme case, when almost all communication is localized according to the geographic location of agents, the efficiency of parallelization will be high.

Therefore, the efficiency of the parallel version depends directly on the share of interagent communication that requires transferring large volumes of data between computational nodes.
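To make this dependence concrete, the toy cost model below is our own illustration: the formula and the constants (for example, a remote message costing 20 times a local one) are assumptions, not measurements from the experiments described above. It estimates speedup as a function of the fraction of interagent messages that cross node boundaries.

```python
# Toy Amdahl-style cost model (illustrative assumptions, not measured data):
# a local message costs 1 unit; a message crossing node boundaries costs
# `comm_cost` units. Work is assumed to spread evenly across nodes, so
# efficiency drops by the factor (1 + remote_fraction * comm_cost).

def speedup(nodes: int, remote_fraction: float, comm_cost: float = 20.0) -> float:
    """Estimated speedup over a single-node run under the toy model."""
    parallel_time = (1.0 + remote_fraction * comm_cost) / nodes
    return 1.0 / parallel_time

for f in (0.0, 0.01, 0.1, 1.0):
    print(f"remote fraction {f:4.2f} -> speedup on 64 nodes: {speedup(64, f):5.1f}")
```

Under this crude model, communication localized within nodes (remote fraction near zero) yields near-linear speedup, while all-to-all communication erodes most of the gain, matching the two extreme cases described above.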

Acknowledgements This work was supported by the Russian Science Foundation (grant 14-18-01968).


Is Risk Quantifiable?

Sami Al-Suwailem, Francisco A. Doria, and Mahmoud Kamel

Abstract The work of Gödel and Turing, among others, shows that there are fundamental limits to the possibility of formal quantification of natural and social phenomena. Both our knowledge and our ignorance are, to a large extent, not amenable to quantification. Disregard of these limits in the economic sphere might lead to underestimation of risk and, consequently, to excessive risk-taking. If so, this would expose markets to undue instability and turbulence. One major lesson of the Global Financial Crisis, therefore, is to reform economic methodology to expand beyond formal reasoning.

Keywords Financial instability · Gödel's incompleteness theorem · Irreducible uncertainty · Lucas critique · Mispricing risk · Quantifiability of risk · Reflexivity · Rice's theorem · Self-reference

1 Introduction

It is dangerous to think of risk as a number – William Sharpe [10]

Can we estimate uncertainty? Can we quantify risk? The late finance expert Peter Bernstein wrote in the introduction to his book, Against the Gods: The Remarkable Story of Risk [9, pp. 6–7]:

S. Al-Suwailem (✉)
Islamic Development Bank Group, Jeddah, Saudi Arabia
e-mail: sami@isdb.org

F. A. Doria

Advanced Studies Research Group, PEP/COPPE, Federal University at Rio de Janeiro, Rio de Janeiro, Brazil

M. Kamel

College of Computer Science, King Abdul-Aziz University, Jeddah, Saudi Arabia

© Springer Nature Switzerland AG 2018
S.-H. Chen et al. (eds.), Complex Systems Modeling and Simulation in Economics and Finance, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-319-99624-0_14


The story that I have to tell is marked all the way through by a persistent tension between those who assert that the best decisions are based on quantification and numbers, determined by the patterns of the past, and those who base their decisions on more subjective degrees of belief about the uncertain future. This is a controversy that has never been resolved . . .

The mathematically driven apparatus of modern risk management contains the seeds of a dehumanizing and self-destructive technology. . . . Our lives teem with numbers, but we sometimes forget that numbers are only tools. They have no soul; they may indeed become fetishes.

We argue in this chapter that developments in science and mathematics in the past century bring valuable insights into this controversy.

Philosopher and mathematician William Byers, in his book The Blind Spot: Science and the Crisis of Uncertainty [25], makes an interesting case for why we need to embrace uncertainty. He builds on discoveries in mathematics and science in the last 100 years to argue that:

• Uncertainty is an inevitable fact of life. It is an irreducible feature of the universe.

• Uncertainty and incompleteness are the price we pay for creativity and freedom.

• No amount of scientific progress will succeed in removing uncertainty from the world.

• Pretending that scientific progress will eliminate uncertainty is a pernicious delusion that will have the paradoxical effect of hastening the advent of further crises.

These aspects have probably not received enough attention in the mainstream literature, despite the pioneering efforts of many economists (see, e.g., Velupillai et al. [111]).

We deal here with the question of the roots of uncertainty in the sciences that use mathematics as their main tool. To be specific, by "risk" we mean indeterminacy related to future economic loss or failure. By "quantifiability" we mean the ability to model it using formal mathematical systems rich in arithmetic, so as to be able to derive the quantities sought after. Of course, there are many ways to measure and quantify historical risk.1 More precisely, therefore, the question is: Can we systematically quantify uncertainty regarding future economic losses?

We argue that this is not possible.

This is not to say that we can never predict the future, or can never make quantitative estimates of future uncertainty. What we argue is that, when predicting the future, we will never be able to escape uncertainty in our predictions. More important, we will never be able to quantify such uncertainty. We will see that not even the strictest kind of mathematical rigor can evade uncertainty.
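To fix ideas about what is routinely quantifiable, namely the backward-looking historical risk mentioned above, here is a minimal sketch. It is our illustration, not the authors': the data are hypothetical and the function is ours. It computes one standard historical measure, one-day 99% Value-at-Risk, from a past sample of returns.

```python
# Minimal sketch of a backward-looking risk measure: historical Value-at-Risk.
# The returns below are hypothetical; real use would feed in a market series.

def historical_var(returns: list[float], confidence: float = 0.99) -> float:
    """Loss threshold exceeded in roughly (1 - confidence) of past observations."""
    losses = sorted(-r for r in returns)         # losses as positive numbers
    index = int(confidence * (len(losses) - 1))  # empirical quantile position
    return losses[index]

sample = [0.002, -0.011, 0.004, -0.030, 0.007, -0.002, 0.015, -0.008]
print(f"One-day 99% historical VaR: {historical_var(sample):.3f}")
```

The computation is entirely determined by the past sample; the question posed above is whether any such formal procedure can be extended to quantify uncertainty about future losses, and the remainder of the chapter argues that it cannot.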
