2.3 Issues in the development of autonomous systems
2.3.3 Computational and implementation challenges
In addition to the practical considerations in the development of autonomous systems, a number of computational and implementation issues arise when the notion of autonomy is addressed. A selection of these is discussed here, using a comparison between a human driver and an autonomously guided vehicle as a motivating example. It is important to note that some of these issues remain outstanding problems in artificial intelligence.
Discretization of a continuous world
Discretization is the process of converting continuous models into discrete counterparts, so that they become suitable for numerical evaluation or implementation. This is necessary from a computational perspective. While human drivers operate within a continuous and changing world, a digital computer has to interact with the world via a model or representation constructed with finite bits. This results in a loss of information, which limits the ability of the artificial system to consider the continuum of possible paths to the goal (Miller et al., 2006). In grid-based world representations, this loss can be compensated partially by reducing the grid cell size. The drawback, however, is the substantial increase in the computational expense of processing the larger number of cells, which means that the system is unable either to plan as far ahead into the future or to consider as many possible paths as before.
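As a minimal illustrative sketch (not drawn from the cited work; the world size and cell sizes are hypothetical), the following Python fragment shows how the cell count of a uniform two-dimensional occupancy grid grows as the resolution is refined:

```python
import math

def grid_cell_count(world_size_m, cell_size_m):
    """Number of cells needed to cover a square world of the given
    side length at the given cell size."""
    cells_per_side = math.ceil(world_size_m / cell_size_m)
    return cells_per_side ** 2  # two-dimensional grid

# Halving the cell size quadruples the number of cells to process,
# trading planning horizon for representational fidelity.
for cell_size in (1.0, 0.5, 0.25, 0.125):
    print(f"cell size {cell_size:>6} m -> "
          f"{grid_cell_count(100.0, cell_size):>8} cells")
```

Each halving of the cell size recovers some of the lost information, but quadruples the number of cells the planner must process, which is the computational trade-off described above.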
Curse of Dimensionality
The exponential growth in the volume of computation required as the number of dimensions of a model increases is known as the “curse of dimensionality.” This problem is related to discretization, and is a major obstacle for pattern classification and machine learning algorithms such as k-nearest neighbour (Mitchell, 1997). Such is not the case in human beings, who possess neural complexity sufficient to learn unstructured data and correlations at a scale currently unmatched by any artificial system, despite the dimensionality of the information being processed (Moravec, 1998).
For instance, a human driver can adapt his steering characteristics to his interpretation of the surrounding environment and assess the driveability of the terrain in real time, while a machine may require many epochs to learn the same knowledge, and only for certain types of terrain determined a priori; to do so for all conceivable environmental conditions would be intractable, and no machine learning algorithm known to date is capable of learning at such a scale (Blackburn and Bailey, 2004).
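A brief sketch of the exponential growth described above, assuming a hypothetical discretization of ten intervals per dimension:

```python
# Cells required by a grid with 10 intervals per dimension: 10**d.
# The exponential growth with the dimension d is the
# "curse of dimensionality".
intervals_per_dim = 10
for d in (1, 2, 3, 6, 10):
    print(f"{d:>2} dimensions -> {intervals_per_dim ** d:,} cells")
```

At ten dimensions the grid already requires ten billion cells, which suggests why exhaustive coverage of a high-dimensional input space, whether for planning or for training data, quickly becomes intractable.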
Uncertainty
Uncertainty refers to a state of knowledge in which one or more alternatives may result in a set of possible outcomes, but where the probabilities of the outcomes are neither known in advance nor meaningful. In statistics, uncertainty is the estimated deviation of a measurement from its true value. It extends to the inability to acquire, or the lack of, relevant percepts necessary for a decision to be made. Uncertainty can be present in sensory data in the form of noise and disturbances, and in the imprecision of the model or representation of the world. It is inherent in most systems; it limits situational awareness, given imperfections in sensing and perception, which in turn constrains the system's understanding of the environment.
The level of uncertainty present in the components of an autonomous system limits the degree of accuracy that can be attained. Often, this results in imprecise or non-optimal solutions. In overcoming such problems, statistical methods, especially in perception and knowledge understanding, have prevailed (Mumford, 2002).
Computational approaches that take uncertainty into account include Bayesian probability, as in the work of Thrun (2005), and the Bayesian Robot Programming paradigm of Lebeltel et al. (2004).
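As a hedged illustration of the Bayesian approach (a generic discrete Bayes update, not the specific formulations of the works cited above; the states and likelihoods are hypothetical), the following sketch revises a belief over candidate world states from a noisy measurement:

```python
def bayes_update(prior, likelihood):
    """Posterior over discrete states: P(x|z) is proportional
    to P(z|x) * P(x), normalised over all states."""
    unnormalised = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

# Hypothetical example: three candidate positions, and a sensor
# that reports "wall" with the given likelihood at each position.
prior = [1 / 3, 1 / 3, 1 / 3]          # no prior knowledge
likelihood_wall = [0.8, 0.1, 0.3]      # P(z = wall | x)
posterior = bayes_update(prior, likelihood_wall)
print(posterior)  # belief concentrates on the first position
```

Rather than committing to a single exact state, the system maintains a probability distribution, which is how uncertainty in sensing is made explicit and manageable.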
Domain knowledge
This refers to the set of knowledge contained within a specific area of interest. In the context of an autonomous system, domain knowledge refers to the body of information that allows the system to survive and achieve the tasks it has been designed to perform. The difficulty with domain knowledge lies in its acquisition, representation, and usage. Not only does a driver possess sufficient knowledge about operating a vehicle; he is also aware of traffic rules and of the route he needs to follow to reach his destination.
In comparison, an artificial system requires such knowledge to be provided to it in some form of representation, and this representation of domain knowledge is specified by the system's designer.
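A minimal, hypothetical sketch of such designer-provided domain knowledge, here a few traffic rules encoded as explicit data rather than learned (the situations and actions are illustrative only):

```python
# Hypothetical, designer-specified domain knowledge: each rule maps
# a perceived situation to the action the vehicle is expected to take.
TRAFFIC_RULES = {
    "red_light": "stop",
    "green_light": "proceed",
    "stop_sign": "stop_then_yield",
    "pedestrian_crossing": "yield",
}

def consult_rules(situation):
    """Look up the mandated action; unknown situations need a fallback."""
    return TRAFFIC_RULES.get(situation, "slow_down_and_reassess")

print(consult_rules("stop_sign"))    # stop_then_yield
print(consult_rules("fallen_tree"))  # slow_down_and_reassess
```

The fallback case hints at the acquisition difficulty noted above: any situation the designer did not anticipate is simply absent from the encoded knowledge.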
The Representation Problem
This concerns the difficulty of representing domain and acquired knowledge, world models, and plausible solutions in forms that support understanding and reasoning. Section 2.3.2 explains this difficulty in terms of discretization and the high dimensionality of physical reality. For an autonomous system to understand or reason about the world, a representation of reality in the form of a world model is necessary. Pertinent to representation is the need to determine the subset and quantity of information to be stored (i.e. selection and filtering), the manner in which such information is preserved (i.e. storage), and how often this is to be done (i.e. sampling).
In lieu of exact sensor measurements, an internal representation that uses relative measures of distance is intuitive. For instance, while driving at high speed on an expressway and negotiating turns, a driver does not measure the exact distance from his instantaneous position to the boundaries of the road, nor calculate the distances to the closest obstacles (e.g. other cars, pedestrians, and lane markings). Instead, his internal representation of the world is relative. In contrast, many existing robots navigate by determining exact distances to their immediate surroundings and obstacles, and by planning paths based on such measurements. Hence, it is necessary to identify alternative representations of the world that may allow an autonomous system to navigate and perform its tasks successfully.
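One possible sketch of such a relative representation (the category names and bin boundaries are hypothetical), in which exact ranges are collapsed into coarse, qualitative distances:

```python
# Hypothetical qualitative bins replacing exact range measurements.
def relative_distance(range_m):
    """Map an exact range to a coarse, relative category."""
    if range_m < 2.0:
        return "adjacent"
    if range_m < 10.0:
        return "near"
    if range_m < 50.0:
        return "far"
    return "clear"

# The planner reasons over categories, not centimetres:
readings = {"car_ahead": 8.3, "lane_marking": 0.4, "pedestrian": 35.0}
world = {obj: relative_distance(r) for obj, r in readings.items()}
print(world)  # {'car_ahead': 'near', 'lane_marking': 'adjacent', ...}
```

Such a representation discards precision deliberately, on the view that qualitative relations are often sufficient for the decisions the system actually has to make.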
The Perceptual Aliasing Problem
This describes the situation in which an autonomous system facing two similar states is in fact required to make very different decisions, resulting in the problem that autonomous systems are unable to change their actions when no changes in their sensory inputs are observed (Hakura et al., 1999). The downside is that at some point such a change may in fact be necessary, and in its absence the system may fail.
For instance, an autonomously guided vehicle following a straight road may continue to do so even as it depletes its source of power. This problem depends on the degree of observability of the internal states of the system. Self-monitoring systems following the Model-Based Programming paradigm (Ingham and Williams, 2003; Williams et al., 2003) are able to avoid these issues by monitoring changes to the system even in the absence of perceptual differences between two similar states. This makes it possible for changes in behaviour to occur despite there being no changes in sensory inputs (Hakura et al., 1999).
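A hedged sketch of this point, assuming a hypothetical battery monitor as the internal state: two situations with identical sensory input are disambiguated internally, allowing the behaviour to change although the percept does not:

```python
class RoadFollower:
    """Toy agent whose percept alone cannot trigger a change of action."""

    def __init__(self, battery=1.0):
        self.battery = battery  # internal state, absent from the percept

    def act(self, percept):
        self.battery -= 0.1  # driving consumes power
        # Without this internal check, identical percepts would always
        # yield the same action (perceptual aliasing); with it, the
        # agent can break the tie between perceptually similar states.
        if self.battery < 0.2:
            return "seek_recharge"
        return "follow_road" if percept == "straight_road" else "replan"

agent = RoadFollower()
for _ in range(10):
    print(agent.act("straight_road"))  # eventually switches action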
Anchoring, Embodiment and the Framing Problem
These are important aspects of the internal representations of the state of a system and its observation of the world, as they determine the manner in which such representations can be used for understanding, reasoning, problem solving, internal simulation, and decision making. Anchoring is the process of connecting the symbols and sensor data represented within an artificial system that refer to the same physical objects or phenomena in the external world (Coradeschi and Saffiotti, 2003). For a system that has to manipulate symbolic percepts, this problem needs to be addressed; otherwise the symbolic knowledge representation becomes meaningless, since its symbols are not grounded to entities in physical reality.
Anchoring makes it possible for autonomous systems to acquire and verify beliefs about the world, and possibly to communicate these beliefs to other entities (Roy, 2005).
One of the challenges in anchoring is the presence of uncertainty and ambiguity in sensor data. This can be solved in part by maintaining multiple hypotheses, provided the problem is not due to vagueness in the symbolic representation. A human driver may perform anchoring effortlessly, for instance by correlating visual images of road signs with their meanings, while for an artificial system, algorithms to perform such matching remain immature.
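A minimal, hypothetical sketch of the multiple-hypothesis idea (the percepts, labels, and confidences are illustrative only): each candidate anchor ties a symbol to a percept with an associated confidence, and ambiguity is kept open rather than resolved prematurely:

```python
# Hypothetical percepts from a sign detector:
# (percept_id, label, confidence).
percepts = [
    ("p1", "stop_sign", 0.55),
    ("p2", "stop_sign", 0.40),
    ("p3", "speed_limit", 0.90),
]

def anchor(symbol, percepts, threshold=0.3):
    """Keep every sufficiently plausible grounding of the symbol,
    rather than committing to a single match under uncertainty."""
    return [(pid, conf) for pid, label, conf in percepts
            if label == symbol and conf >= threshold]

hypotheses = anchor("stop_sign", percepts)
print(hypotheses)  # [('p1', 0.55), ('p2', 0.40)] -- both retained
```

Note that this only helps with ambiguity in the sensor data; if the symbol itself is vague, no amount of hypothesis tracking resolves the grounding.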
The Embodiment Problem refers to the difficulty of anchoring sensor data to symbolic representations of the self-model of the artificial system. This relates to the philosophical issue of self-awareness, which is implied when an artificial system is capable of reasoning about itself and its relationship to the external environment. The Framing Problem in artificial systems is a consequence of the above two problems. It refers to the difficulty of representing the causal effects of the actions of an artificial system without having to represent the large number of non-effects, as doing so would be intractable.
The Action-Selection Problem
The action-selection problem, a fundamental problem in intelligent systems, is the difficulty of answering the question: what should the system do next, given what it can perceive and understand, its goals, and its available capabilities and resources? It is discussed within the contexts of intelligent agents and biological entities, artificial intelligence, cognitive science, and ethology. The difficulty lies in the high demands placed on agents, as described by Brom and Bryson (2006): the agent is required to select its action within a dynamic and unpredictable environment, and often it has to act in real time, making timely decisions to carry out its tasks. Furthermore, an agent may be required to perform multiple, sometimes conflicting tasks that compete for resources.
The fundamental problem is one of combinatorial complexity: agents cannot possibly consider every option available at every instant of time; at each juncture, candidate actions have to be constrained and selected depending on the context. This accounts for the reliance on heuristics in prevailing approaches (Russell and Norvig, 1995). While such heuristics are feasible for many well-structured problems, the difficulty is exacerbated when the agent is required to interact with human beings and other agents.
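A hedged sketch of one common heuristic scheme, not any specific published algorithm: candidate actions are first constrained by the current context and then scored by a designer-supplied utility, avoiding exhaustive search over all options. The actions, contexts, and utility values here are hypothetical:

```python
# Hypothetical candidate actions with the context conditions under
# which they apply and a designer-supplied utility estimate
# (a heuristic, not an exhaustive search).
ACTIONS = [
    ("overtake",  {"clear_left_lane"}, 0.7),
    ("follow",    set(),               0.5),  # always applicable
    ("brake",     {"obstacle_ahead"},  0.9),
    ("pull_over", {"low_fuel"},        0.8),
]

def select_action(context):
    """Restrict to applicable actions, then pick the highest utility."""
    applicable = [(name, u) for name, pre, u in ACTIONS if pre <= context]
    return max(applicable, key=lambda a: a[1])[0]

print(select_action({"clear_left_lane"}))             # overtake
print(select_action({"obstacle_ahead", "low_fuel"}))  # brake
```

The context check prunes the candidate set before any scoring takes place, which is precisely the constraining step the combinatorial argument above calls for.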
For these reasons, action selection is a non-trivial problem, yet it is central to the achievement of autonomy in artificial systems. In effect, the action-selection problem is a superset of the problem of autonomy in intelligent systems – artificial systems are required to select actions and goals, and the means to achieve them.