12 Neural Networks for the Optimization of Runtime Adaptable Communication Protocols

Robert S. Fish and Roger J. Loader

12.1 Introduction

The explosive growth of distributed computing has been fuelled by many factors. Applications such as video conferencing, teleoperation and, most notably, the World Wide Web are placing ever more demanding requirements on their underlying communication systems. Never having been designed to support such diverse communication patterns, these systems fail to provide appropriate services to individual applications. Artificial neural networks have been used in areas of communication systems including signal processing and call management (Kartalopolus, 1994). This chapter suggests a further use of neural networks: the maintenance of application-tailored communication systems. In this context, a neural network minimises the difference between an application's required Quality of Service (QoS) and that provided by the end-to-end connection.

12.1.1 Problem Area

Communication systems based on the ISO Open System Interconnection (OSI) model historically suffered inefficiencies such as function duplication and excessive data copying. However, a combination of modern protocol implementation techniques and an increase in the power and resources of modern computers has largely eliminated these overheads. Zitterbart (1993) defines the characteristics of various distributed applications and identifies four possible classes based on their communication requirements. Table 12.1 illustrates these classifications and highlights the broad range of transport services required by modern distributed applications.

In the face of such diversity, the challenge of optimizing performance shifts from the efficiency of individual mechanisms to the provision of a service that best satisfies the broad range of application requirements. Providing such a service is further complicated by external factors such as end-to-end connection characteristics, host heterogeneity and fluctuations in network utilization. Traditional protocols, such as TCP/IP, do not contain the broad functionality necessary to satisfy all application requirements in every operating environment. In addition, the QoS required by an application may change over the lifetime of a connection. If a protocol provides a greater QoS than is required, processor time and network bandwidth may be wasted. For these reasons, applications that use existing protocols do not necessarily receive the communication services they require.

Table 12.1 Diversity of application transport requirements.

Transport service class | Example applications | Average throughput | Burst factor | Delay sens. | Jitter sens. | Order sens. | Loss tol. | Priority delivery
Interactive time critical | Voice | Low | Low | High | High | Low | High | No
Interactive time critical | Teleconf | Mod | Mod | High | High | Low | Mod | Yes
Distributed time critical | Motion video, compressed | High | High | High | Mod | Low | Mod | Yes
Distributed time critical | Motion video, raw | Very high | Low | High | High | Low | Mod | Yes
Real-time time critical | Manufacture control | Mod | Mod | High | Var | High | Low | Yes
Non-real time | File transfer | Mod | Low | Low | N/D | High | None | No
Non-real time | TELNET | Very low | High | High | Low | High | None | Yes
Non-time critical | Transaction processing | Low | High | High | Low | Var | None | No
Non-time critical | File service | Low | High | High | Low | Var | None | No

Configurable protocols offer customised communication services that are tailored to a particular set of application requirements and end-to-end connection characteristics. They may be generated manually, through formal languages or graphical tools, or automatically, with code-scanning parsers that determine application communication patterns.

12.1.2 Adaptable Communication Systems

Whilst configurable communication systems provide a customised service, they are unable to adapt should the parameters on which they were based change. Adaptable protocols support continuously varying application requirements by actively selecting internal protocol processing mechanisms at runtime. There are several advantages to this:

1. Application QoS: it is not uncommon for an application to transmit data with variable QoS requirements. For example, a video conferencing application may require different levels of service depending upon the content of the session. Consider a video sequence that consists of a highly dynamic set of action scenes followed by a relatively static close-up sequence. The first part, due to rapid camera movement, is reasonably tolerant of data loss and corruption, but intolerant of high jitter. In contrast, the static close-up scenes are tolerant of jitter but require minimal data loss and corruption.

2. Connection QoS: adaptable protocols are able to maintain a defined QoS over varying network conditions. Whilst certain architectures offer guaranteed or statistical services, the heterogeneous mix of interconnection devices that forms the modern internet does little to cater for end-to-end QoS. The adverse effects of variables such as throughput, delay and jitter can be minimised by using appropriate protocol mechanisms.

3. Lightweight: certain environments are able to support service guarantees such as defined latency and transfer rates. Once these are ascertained, an adaptable protocol may remove unnecessary functions to achieve higher transfer rates.

The Dynamic Reconfigurable Protocol Stack (DRoPS) (Fish et al., 1998) defines an architecture supporting the implementation and operation of multiple runtime adaptable communication protocols. Fundamental protocol processing mechanisms, termed microprotocols, are used to compose fully operational communication systems. Each microprotocol implements an arbitrary protocol processing operation, whose complexity may range from a simple function, such as a checksum, to a complex layer of a protocol stack, such as TCP. The runtime framework is embedded within an operating system and investigates the benefits that runtime adaptable protocols offer in this environment. Mechanisms are provided to initialize a protocol, configure an instantiation for every connection, manipulate the configuration during communication and maintain consistent configurations at all end points.
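The chapter does not reproduce the DRoPS programming interface, so the sketch below illustrates the idea with hypothetical C types and names: each microprotocol exposes a common hook, a connection holds an ordered list of microprotocols, and adaptation exchanges entries in that list while the connection is live.

```c
/* Minimal sketch of microprotocol composition; names are hypothetical,
 * not the DRoPS kernel interface. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct packet {
    uint8_t *data;
    size_t   len;
};

/* Every microprotocol exposes the same hook, so alternative implementations
 * of one operation (here, the checksum) are interchangeable at runtime. */
struct microprotocol {
    const char *name;
    int (*on_send)(struct packet *p);   /* 0 = packet accepted */
};

static int checksum_none(struct packet *p) { (void)p; return 0; }

static int checksum_xor(struct packet *p)
{
    /* toy stand-in for a real checksum: XOR the payload bytes */
    uint8_t acc = 0;
    for (size_t i = 0; i < p->len; i++)
        acc ^= p->data[i];
    printf("  checksum stage: digest 0x%02x\n", acc);
    return 0;
}

static struct microprotocol mp_no_check  = { "no checksum",  checksum_none };
static struct microprotocol mp_xor_check = { "XOR checksum", checksum_xor  };

/* A per-connection configuration is an ordered list of microprotocols. */
#define MAX_STAGES 8
struct connection {
    struct microprotocol *stage[MAX_STAGES];
    int nstages;
};

static int protocol_send(struct connection *c, struct packet *p)
{
    for (int i = 0; i < c->nstages; i++)
        if (c->stage[i]->on_send(p) != 0)
            return -1;                  /* a stage rejected the packet */
    return 0;
}

int main(void)
{
    uint8_t buf[4] = { 1, 2, 3, 4 };
    struct packet p = { buf, sizeof buf };
    struct connection conn = { { &mp_xor_check }, 1 };

    puts("sending with checksum enabled:");
    protocol_send(&conn, &p);

    /* runtime adaptation: exchange the checksum microprotocol for the
     * null implementation without tearing the connection down */
    conn.stage[0] = &mp_no_check;
    puts("sending after adaptation:");
    protocol_send(&conn, &p);
    printf("active stage is now: %s\n", conn.stage[0]->name);
    return 0;
}
```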
Support is also provided for runtime adaptation agents that automatically reconfigure a protocol on behalf of an application. These agents execute control mechanisms that optimize the configuration of the associated protocol. The remainder of this chapter addresses the optimization of protocol configuration. Other aspects of the DRoPS project are outside the scope of this chapter, but may be found in Fish et al. (1998; 1999) and Megson et al. (1998).

12.2 Optimising Protocol Configuration

The selection of an optimal protocol configuration for a specific, but potentially variable, set of application requirements is a complex task. The evaluation of an appropriate configuration should at least consider the processing overheads of all available microprotocols and their combined effect on protocol performance. Additional consideration should be paid to the characteristics of the end-to-end connection, owing to the diversity of modern LANs and WANs, which are largely unable to provide guaranteed services on an end-to-end basis. An application using an adaptable protocol may manually modify its connections to achieve an appropriate service (work on ReSource reserVation Protocols (RSVP) addresses this issue).

Whilst providing complete control over the functionality of a communication system, the additional mechanisms and extra knowledge required for manual control may deter developers from using an adaptable system. History has repeatedly shown that the simplest solution is often favoured over the more complex, technically superior one; the success of BSD Sockets, for example, may be attributed to its simple interface and abstraction of protocol complexities. Manual adaptation relies on the application being aware of protocol-specific functionality, the API calls to manipulate that functionality and the implications of reconfiguration. The semantics of individual microprotocols are likely to be meaningless to the average application developer, especially in the case of highly granular protocols such as those advocated by the DRoPS framework. As previously stated, protocol configuration depends as much on end-to-end connection characteristics as on application requirements. Manual adaptation therefore requires network performance to be monitored by the application, or extracted from the protocol through additional protocol-specific interfaces; both approaches increase the complexity of an application and reduce its efficiency. Finally, it is unlikely that the implications of adaptation are fully understood by anyone but the protocol developers themselves. These factors place additional burdens on a developer, who may subsequently decide that an adaptable protocol is just not worth the effort.

If it is considered that the 'application knows best' then manual control is perhaps more appropriate; in the more general case, however, it is likely to be a deterrent. It would be more convenient for an application to specify its requirements in more abstract QoS terms (such as tolerated levels of delay, jitter, throughput, loss and error rate) and allow some automated process to optimize the protocol configuration on its behalf. A process wishing to automate protocol optimization must evaluate the most appropriate protocol configuration with respect to the current application requirements as well as end-to-end connection conditions.
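To make the idea of an abstract specification concrete, the sketch below shows one way an application might state its tolerances on the 0-10 priority scale used by DRoPS (0 meaning 'don't care'). The structure, its field names and the commented-out call are illustrative assumptions, not the DRoPS API.

```c
#include <stdio.h>

/* Hypothetical abstract requirement interface: the application states what
 * it needs and leaves the choice of mechanisms to the adaptation agent. */
struct qos_requirements {
    int throughput;   /* importance of sustained throughput    */
    int delay;        /* sensitivity to end-to-end delay       */
    int jitter;       /* sensitivity to delay variation        */
    int loss;         /* intolerance of packet loss            */
    int error;        /* intolerance of undetected bit errors  */
};

int main(void)
{
    /* e.g. a live video stream: keep data flowing smoothly, accept losses */
    struct qos_requirements video = {
        .throughput = 9, .delay = 7, .jitter = 8, .loss = 2, .error = 0
    };
    printf("jitter priority: %d\n", video.jitter);
    /* drops_set_requirements(conn, &video);   -- hypothetical agent call */
    return 0;
}
```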
These conditions refer to network characteristics (such as error rates), host resources (such as memory and CPU time) and scheduling constraints for real-time requirements. The complexity of evaluating an appropriate protocol configuration is determined by the number of conditions and requirements, the number of states that each may assume, and the total number of unique protocol configurations.

Within DRoPS, a protocol graph defines the default protocol structure, basic function dependencies and alternative microprotocol implementations. In practice, a protocol developer specifies this in a custom Adaptable Protocol Specification Language (APSL). Defining such a graph reduces the number of possible protocol configurations to a function of the number of objects in the protocol graph and the number of alternative mechanisms provided by each. This may be expressed as:

$\prod_{k=1}^{K} F_k$    (12.1)

where $F_k$ is the number of states of function $k$ and $K$ is the total number of functions in the protocol graph. The automated process must therefore consider $N$ combinations of requirements, conditions and configurations, defined as:

$N = \prod_{i=1}^{I} C_i \cdot \prod_{j=1}^{J} R_j \cdot \prod_{k=1}^{K} F_k$    (12.2)

where $C_i$ is the number of states of condition $i$, $R_j$ the number of states of requirement $j$, and $I$ and $J$ are the total numbers of conditions and requirements. This represents the total number of evaluations necessary to determine the most appropriate configuration for each combination of requirements and conditions. The complexity of this task increases relentlessly with small increases in the values of $I$, $J$ and $K$, as illustrated in Figure 12.1: part (a) shows the effect of adding extra protocol layers and functions, and part (b) the effect of increasing the condition and requirement granularity.

12.2.1 Protocol Control Model

The runtime framework supports mechanisms for the execution of protocol-specific adaptation policies. These lie at the heart of a modular control system that automatically optimises the configuration of a protocol. The methods used to implement these policies are arbitrary and of little concern to the architecture itself. However, the integration of DRoPS within an operating system places several restrictions on their characteristics. An adaptation policy must possess broad enough knowledge to provide a good solution for all possible inputs, yet in executing this task it must not degrade performance by squandering system-level resources. Any implementation must therefore be small, to prevent excessive kernel code size, and lightweight, so as not to degrade system performance.

Adaptation policies are embedded within a control system, as depicted in Figure 12.2. Inputs consist of QoS requirements from the user and performance characteristics from the functions of the communication system. Before being passed to the adaptation policy, both sets of inputs are shaped. This ensures that values passed to the policy are within known bounds and are appropriately scaled to the expectations of the policy. User requirements are passed to the control system through DRoPS in an arbitrary range of 0 to 10: a value of 0 represents a 'don't care' state, 1 a low priority and 10 a high priority. These values may not map 1:1 to the policy, which may, for instance, only expect values from 0 to 3; the shaping function normalizes control system inputs to account for an individual policy's interpretation. End-to-end performance characteristics are collected by the individual protocol functions. Before they are used by the policy, the shaping function scales these values according to the capability of the reporting function. For example, an error detected by a weak checksum function should carry proportionally more weight than one detected by a strong function.
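A minimal sketch of this shaping step is given below, assuming a simple clamp-and-rescale; the function name and the ranges are illustrative rather than taken from DRoPS. The same routine can scale a reported condition by the strength of the function that produced it, simply by choosing the output range accordingly.

```c
#include <stdio.h>

/* Clamp a raw value to its known bounds, then rescale it linearly to the
 * range this particular policy expects. The result is deliberately left
 * non-discrete so that a policy able to interpolate (such as a neural
 * network) need not round to the nearest state. */
double shape_input(double raw, double in_lo, double in_hi,
                   double out_lo, double out_hi)
{
    if (raw < in_lo) raw = in_lo;   /* never extrapolate: inputs stay */
    if (raw > in_hi) raw = in_hi;   /* within known bounds            */
    return out_lo + (raw - in_lo) * (out_hi - out_lo) / (in_hi - in_lo);
}

int main(void)
{
    /* an application priority of 7 on the 0-10 scale, presented to a
     * policy that expects inputs in the range 0-3 */
    printf("%.2f\n", shape_input(7.0, 0.0, 10.0, 0.0, 3.0));   /* 2.10 */
    return 0;
}
```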
The shaped requirements and conditions are passed to the adaptation policy for evaluation. Based on the policy's heuristic, an appropriate protocol configuration is suggested. The existing and suggested configurations are compared, and appropriate adaptation commands are issued to convert the former into the latter. Protocol functions, drawn from a library of protocol mechanisms, are added, removed and exchanged, and the updated protocol configuration is used for subsequent communication. The DRoPS runtime framework ensures that changes in protocol configuration are propagated and implemented at all end points of communication. The new configuration should provide a connection whose characteristics match the required performance more closely than the old configuration did. Statistics on the new configuration are compiled over time and, if it fails to perform adequately, it is adapted again.
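The cycle just described can be summarised in code. The sketch below uses hypothetical names and a trivial stand-in policy; the point is the compare-and-adapt structure, not the DRoPS implementation.

```c
#include <stdio.h>

#define N_STAGES 4      /* adaptable stages in this toy protocol */

struct connection {
    int current[N_STAGES];   /* implementation index in use per stage */
};

/* Stand-in for the loaded adaptation policy: map shaped inputs to a
 * suggested implementation index per stage. A per-stage threshold is used
 * purely so the example runs; a real policy (such as the neural network
 * described later) considers the whole input vector for every stage. */
static void policy_evaluate(const double in[N_STAGES], int suggested[N_STAGES])
{
    for (int s = 0; s < N_STAGES; s++)
        suggested[s] = (in[s] > 0.5) ? 1 : 0;
}

/* Stand-in for the kernel mechanism that exchanges a microprotocol and
 * propagates the change to every end point of the connection. */
static void issue_adaptation(struct connection *c, int stage, int impl)
{
    printf("stage %d: switch to implementation %d\n", stage, impl);
    c->current[stage] = impl;
}

/* One pass of the control cycle: evaluate the policy, compare the suggested
 * configuration with the current one, and adapt only what differs. */
static void adaptation_cycle(struct connection *c, const double shaped[N_STAGES])
{
    int suggested[N_STAGES];
    policy_evaluate(shaped, suggested);
    for (int s = 0; s < N_STAGES; s++)
        if (suggested[s] != c->current[s])
            issue_adaptation(c, s, suggested[s]);
}

int main(void)
{
    struct connection conn = { .current = { 0, 0, 1, 0 } };
    double shaped[N_STAGES] = { 0.9, 0.1, 0.2, 0.8 };
    adaptation_cycle(&conn, shaped);   /* adapts stages 0, 2 and 3 */
    return 0;
}
```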
Figure 12.1 Increasing complexity of the configuration task.

Figure 12.2 Model of automatic adaptation control system.

12.2.2 Neural Networks as Adaptation Policies

Various projects have attempted to simplify the process of reconfiguration by mapping application-specified QoS requirements to protocol configurations. Work by Box et al. (1992) and Zitterbart (1993) classified applications into service classes according to Table 12.1 and mapped each class to a predefined protocol configuration. The DaCaPo project uses a search-based heuristic, CoRA (Plagemann et al., 1994), for the evaluation and subsequent renegotiation of protocol configuration. The classification of building blocks and the measurement of resource usage are combined in a structured search approach, enabling CoRA to find suitable configurations. The properties of component functions, described in a proprietary language L, are based on tuples of attribute types such as throughput, delay and loss probability. CoRA configures protocols for new connections at runtime with respect to an application's requirements, the characteristics of the offered transport service and the availability of end-system resources. The second approach provides a greater degree of customisation, but the time permitted to locate a new configuration determines the quality of the solution found. Beyond these investigations there is little work on heuristics for the runtime optimisation of protocol configuration.

In the search for a more efficient method of performing this mapping, an approach similar to that used by Bhatti and Knight (1998) for processing QoS information about media flows was considered. However, the volume of data required to represent and reason about QoS rendered this solution intractable for fine-grained protocol configuration in an operating system environment. Although impractical, it served to highlight the highly consistent relationships between conditions, requirements and the actual performance of individual configurations. For example, consider two requirements, bit error tolerance and required throughput, and a protocol with variable error checking schemes. The more comprehensive the error checking, the greater its impact on throughput; this holds both for processing overhead (raw CPU usage) and for knock-on effects from the detection of errors (packet retransmission). As emphasis is shifted from correctness to throughput, the selection of error function should move from complete to non-existent, depending on the level of error in the end-to-end connection.

12.2.3 Motivation

If requirements and conditions are quantized and represented as a vector, the process of mapping to protocol configurations may be reduced to a pattern matching exercise. Initial interest in the use of neural networks was motivated by this fact, as pattern matching is an application at which neural networks are particularly adept. The case for neural network adaptation policies is strengthened by the following factors:

1. Problem data: the problem data is well suited to representation by a neural network. Firstly, extrapolation is never performed, due to shaping and bounding in the control mechanism. Secondly, following shaping, the values presented at the input nodes are not necessarily discrete. Rather than rounding, as one would in a classic state table, the network's ability to interpolate allows protocol configurations to be suggested for combinations of characteristics and requirements not explicitly trained.

2. Distribution of overheads: the largest overhead in the implementation and operation of a neural network is the training process. For this application, the overheads of off-line activities, such as the time taken to code a new protocol function or adaptation policy, do not adversely affect the more important runtime performance of the protocol. The overheads are thus moved from performance-sensitive online processing to off-line activities, where the cost of generating an adaptation policy is minimal compared with the time required to develop and test a new protocol.

3. Execution predictability: the execution overheads of a neural network are constant and predictable. The quality of the solution found does not depend upon an allotted search time, and the best configuration is always returned (quality of solution is naturally dependent on the training data).

12.2.4 The Neural Network Model

The aim of using a neural network is to capitalise on knowledge representation and generalisation to produce small, fast, knowledgeable and flexible adaptation heuristics. In its most abstract form, the proposed model employs a neural network to map an input vector, composed of quantized requirements and conditions, to an output vector representing desired protocol functionality. A simple example is illustrated in Figure 12.3. Nodes in the input layer receive requirements from the application and connection characteristics from the protocol. The value presented to an input node represents the quantized state (for example low, medium or high) of that QoS characteristic. No restrictions are placed on the granularity of these states, and as more are introduced the ability of an application to express its requirements increases. Before being passed to a network input node, values are shaped to ensure they stay within the range expected by the policy. It should be noted that this process does not round values to the closest state, as would be required in a state table. The network's ability to generalise allows appropriate output to be generated for input values not explicitly trained.

Figure 12.3 Mapping QoS parameters to protocol configuration.

When the network is executed, the values written to the nodes of the output layer represent the set of functions that should appear in a new protocol configuration. To achieve this, output nodes are logically grouped according to the class of operation they perform; individual nodes represent a single function within that class. Output nodes may also represent non-existent functions, such as the node representing no error checking in the example. This forms a simple YES/NO pattern on the output nodes, represented by 1 and 0 respectively. For example, if error checking is not required, the node representing no error checking will exhibit a YES whilst the other nodes in this logical class exhibit NO. In many cases the values presented at the output nodes will not be black and white, 1 or 0, due to non-discrete input values and the effect of generalisation. The value in each node therefore represents a degree of confidence that the function it represents should appear in the new configuration. When more than one node in a logical group assumes a non-zero value, the function represented by the highest confidence value is selected. To reduce processing overhead, only protocol functions that have alternative microprotocols are represented in the output layer.
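Decoding the output layer therefore reduces to selecting, within each logical group, the node with the highest confidence. A minimal sketch, reusing the checksum group from the example above (the node layout and names are illustrative):

```c
#include <stdio.h>

/* Output nodes are grouped by the class of operation they represent; within
 * one group, each node is the network's confidence that one particular
 * implementation (including "none") should be used. */
#define CHECKSUM_NODES 3
static const char *checksum_names[CHECKSUM_NODES] = {
    "none", "block checking", "full CRC"
};

/* Select the implementation with the highest confidence in a logical group. */
static int select_from_group(const double confidence[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (confidence[i] > confidence[best])
            best = i;
    return best;
}

int main(void)
{
    /* values as they might appear on the output nodes after execution:
     * not crisp 0/1, because of interpolation and generalisation */
    double checksum_out[CHECKSUM_NODES] = { 0.12, 0.71, 0.34 };
    int choice = select_from_group(checksum_out, CHECKSUM_NODES);
    printf("checksum microprotocol: %s\n", checksum_names[choice]);
    return 0;
}
```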
12.3 Example Neural Controller

This section outlines the steps taken to implement a neural network based adaptation policy for the Reading Adaptable Protocol (RAP). RAP is a complete communication system composed of multiple microprotocols; it contains a number of adaptable functions, summarised in Table 12.2, a subset of which are used in the example adaptation policy.

Table 12.2 Adaptable functionality of the Reading Adaptable Protocol.

Protocol mechanism | Alternative implementations
Buffer allocation | preallocated cache, dynamic
Fragmentation and reassembly | stream based, message based
Sequence control | none, complete
Flow control | none, window based
Acknowledgement scheme | IRQ, PM-ARQ
Checksums | none, block checking, full CRC

12.3.1 Adaptation Policy Training Data

A neural network gains knowledge through the process of learning. In this application, the training data should represent the most appropriate protocol configuration for each combination of application requirements and operating conditions. The development of a neural network adaptation controller is a three stage process:

1. Evaluate protocol performance: this process determines the performance of each protocol configuration in each operating environment. Network QoS parameters are varied and the response of individual configurations is logged.

2. Evaluate appropriate configurations: the results of the performance evaluation are used to determine the most appropriate configuration for each set of requirements in each operating environment. This requires the development of an appropriate fitness function (one possible form is sketched below).

3. Generate a policy: having derived an ideal set of protocol configurations, a neural network must be trained and embedded within an adaptation policy.

The result of these three stages is an adaptation policy that may be loaded into the DRoPS runtime framework and used to control the configuration of a RAP-based system.
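One possible shape for stage 2 is sketched below: each entry of the performance report is scored against a requirement vector, and the best-scoring configuration is recorded as the training target for that input. The record layout and the weighted-sum fitness are assumptions made for illustration; the chapter leaves the exact objective open.

```c
#include <stdio.h>

#define N_QOS 5   /* throughput, delay, jitter, loss, error */

/* One entry of the performance report: how a configuration behaved under a
 * particular set of simulated connection conditions (values normalised to
 * 0..1, higher meaning better for the application). */
struct perf_record {
    int    config_id;
    double delivered[N_QOS];
};

/* Assumed objective: weight each delivered QoS dimension by how much the
 * application cares about it on the 0-10 scale. */
static double fitness(const double requirement[N_QOS],
                      const double delivered[N_QOS])
{
    double score = 0.0;
    for (int q = 0; q < N_QOS; q++)
        score += requirement[q] * delivered[q];
    return score;
}

/* For one requirement vector, pick the measured configuration with the
 * highest fitness; the pair (requirements, best configuration) becomes one
 * training example for the network. */
static int best_configuration(const double requirement[N_QOS],
                              const struct perf_record *records, int nrecords)
{
    int best = -1;
    double best_score = -1.0;
    for (int r = 0; r < nrecords; r++) {
        double s = fitness(requirement, records[r].delivered);
        if (s > best_score) {
            best_score = s;
            best = records[r].config_id;
        }
    }
    return best;
}

int main(void)
{
    struct perf_record records[2] = {
        { 0, { 0.9, 0.6, 0.6, 0.3, 0.1 } },   /* lightweight but lossy */
        { 1, { 0.5, 0.4, 0.4, 0.9, 0.9 } },   /* fully checked, slower */
    };
    double req[N_QOS] = { 9, 2, 2, 1, 0 };    /* throughput-hungry, loss-tolerant */
    printf("best configuration: %d\n", best_configuration(req, records, 2));
    return 0;
}
```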
12.3.2 Evaluating Protocol Performance

The evaluation of protocol performance is performed by brute-force experimentation. During protocol specification, a configuration file is used to identify microprotocol resources and default protocol configurations. Using this file, the APSL parser can automatically generate client and server applications that evaluate the performance characteristics of all valid protocol configurations. Evaluating every protocol configuration in a static environment, where connection characteristics remain fixed, does not account for a protocol's performance over real-world connections, in which connection characteristics are potentially variable. To function correctly in such circumstances, an adaptation policy requires knowledge of how different configurations perform under different conditions. To simulate precisely defined network characteristics, a traffic shaper is introduced. This intercepts packets traversing a host's [...]
