REAL-TIME SYSTEMS DESIGN AND ANALYSIS, Part 3


3 REAL-TIME OPERATING SYSTEMS

    void int3(void)        /* interrupt handler 3 */
    {
        save(context);     /* save context on stack */
        task3();           /* execute task 3 */
        restore(context);  /* restore context from stack */
    }

Procedure save saves certain registers to a stack area, whereas restore restores those registers from the stack. In practice, save and restore would each take two arguments: a pointer to a data structure representing the context information and a pointer to the stack data structure, which will be discussed later. In the case of the context data structure, the programming-language compiler must provide a mechanism to extract the current contents of the general registers, the program counter, and so forth.[2] Finally, both save and restore must adjust the stack pointer, which is illustrated later.

3.1.3 Preemptive-Priority Systems

A higher-priority task is said to preempt a lower-priority task if it interrupts the lower-priority task. Systems that use preemption schemes instead of round-robin or first-come-first-served scheduling are called preemptive-priority systems. The priority assigned to each interrupt is based on the urgency of the task associated with that interrupt. For example, the nuclear power station monitoring system is best designed as a preemptive-priority system: while the handling of intruder events is critical, nothing is more important than processing the core over-temperature alert.

Prioritized interrupts can be either fixed priority or dynamic priority. Fixed-priority systems are less flexible, since the task priorities cannot be changed. Dynamic-priority systems allow the priority of tasks to be adjusted at run time to meet changing process demands.

Preemptive-priority schemes can suffer from resource hogging by higher-priority tasks, which can leave lower-priority tasks short of resources. In this case, the lower-priority tasks are said to suffer from starvation.
A special class of fixed-rate, preemptive-priority, interrupt-driven systems, called rate-monotonic systems, comprises those real-time systems in which priorities are assigned so that the higher the execution frequency, the higher the priority. This scheme is common in embedded applications, particularly avionics systems, and has been studied extensively. For example, in the aircraft navigation system, the task that gathers accelerometer data every 10 milliseconds has the highest priority. The task that collects gyro data, and compensates these data and the accelerometer data, every 40 milliseconds has the second-highest priority. Finally, the task that updates the pilot's display every second has the lowest priority. The theoretical aspects of rate-monotonic systems will be studied shortly.

[2] This is not trivial because the PC and registers are needed to effect the call.

3.1.4 Hybrid Systems

Hybrid systems include interrupts that occur at both fixed rates and sporadically. The sporadic interrupts can be used to handle a critical error that requires immediate attention, and thus have the highest priority. This type of system is common in embedded applications.

Another type of hybrid system found in commercial operating systems is a combination of round-robin and preemptive systems. In these systems, tasks of higher priority can always preempt those of lower priority. However, if two or more tasks of the same priority are ready to run simultaneously, then they run in round-robin fashion, which will be described shortly.

To summarize, interrupt-only systems are easy to write and typically have fast response times because process scheduling can be done via hardware. Interrupt-only systems are a special case of foreground/background systems, which are widely used in embedded systems. One weakness of interrupt-only systems, however, is the time wasted in the jump-to-self loop and the difficulty in providing advanced services.
These services include device drivers and interfaces to multiple layered networks. Another weakness is vulnerability to malfunctions owing to timing variations, unanticipated race conditions, hardware failure, and so on. Some companies avoid designs based on interrupts for these reasons.

3.1.4.1 Foreground/Background Systems

Foreground/background systems are an improvement over interrupt-only systems in that the polled loop is replaced by code that performs useful processing. Foreground/background systems are the most common architecture for embedded applications. They involve a set of interrupt-driven or real-time processes called the foreground and a collection of noninterrupt-driven processes called the background (Figure 3.3). The foreground tasks run in round-robin, preemptive-priority, or hybrid fashion. The background task is fully preemptable by any foreground task and, in a sense, represents the lowest-priority task in the system.

Figure 3.3 A foreground/background system: a main program performs initialization and then loops forever over the background process, while interrupts dispatch foreground processes 1 through n.

All real-time solutions are just special cases of the foreground/background system. For example, the polled loop is simply a foreground/background system with no foreground, and a polled loop as the background. Adding interrupts for synchronization yields a full foreground/background system. State-driven code is a foreground/background system with no foreground and phase-driven code for the background. Coroutine systems are just a complicated background process. Finally, interrupt-only systems are foreground/background systems without background processing.

3.1.4.2 Background Processing

As a noninterrupt-driven task, the background processing should include anything that is not time critical.
While the background process is the process with the lowest priority, it should always execute to completion provided the system utilization is less than 100% and no deadlocking occurs.

It is common, for instance, to increment a counter in the background in order to provide a measure of time loading or to detect whether any foreground process has hung up. It might also be desirable to provide individual counters for each of the foreground processes, which are reset in those processes. If the background process detects that one of the counters is not being reset often enough, it can be assumed that the corresponding task is not being executed and that some kind of failure has occurred. This is a form of software watchdog timer.

Certain types of low-priority self-testing can also be performed in the background. For example, in many systems a complete test of the CPU instruction set could be performed. This kind of test should never be performed in the foreground, but it should be part of a robust system design. The design and coding of these CPU instruction tests require careful planning. Finally, low-priority display updates, logging to printers, or other interfaces to slow devices can be performed in the background.

3.1.4.3 Initialization

Initialization of the foreground/background system consists of the following steps:

1. Disable interrupts.
2. Set up interrupt vectors and stacks.
3. Perform self-test.
4. Perform system initialization.
5. Enable interrupts.

Initialization is actually the first part of the background process. It is important to disable interrupts because many systems start up with interrupts enabled, while time is still needed to set things up. This setup consists of initializing the appropriate interrupt vector addresses, setting up stacks if it is a multiple-level interrupt system, and initializing any data, counters, arrays, and so on. In addition, it is necessary to perform any self-diagnostic tests before enabling any interrupts.
Finally, real-time processing can begin.

3.1.4.4 Real-Time Operation

The real-time or foreground operation of the foreground/background system is the same as that of the interrupt-only system. For example, suppose it is desired to implement an interrupt handler for a 2-address computer architecture with a single interrupt, that is, one real-time task and the background process. The EPI and DPI instructions can be used to enable and disable the interrupt explicitly, and it is assumed that upon receiving an interrupt, the CPU will hold off all other interrupts until explicitly reenabled with an EPI instruction.

For context-switching purposes, it is necessary to save the eight general registers, R0-R7, on the stack. Note that context switching involves saving the status of the machine as it is used by the background process. The foreground process will run to completion, so its context is never saved. Further, assume that the CPU will save the PC in memory location 6 at the time of interruption, and that the address of the interrupt-handler routine (the interrupt vector) is stored in memory location 5. The following assembly code could be used to initialize the simple foreground/background system:

    DPI                ; disable interrupts
    STORE &handler,5   ; put interrupt-handler address in location 5
    EPI                ; enable interrupts

Of course, other initialization, such as initializing flags and other data, should be performed before enabling interrupts.
If symbolic memory locations reg0 through reg7 are used to save the registers, then the interrupt handler, coded in 2-address code, might look as follows:

    DPI                ; redundantly disable interrupts
    STORE R0,&reg0     ; save register 0
    STORE R1,&reg1     ; save register 1
    STORE R2,&reg2     ; save register 2
    STORE R3,&reg3     ; save register 3
    STORE R4,&reg4     ; save register 4
    STORE R5,&reg5     ; save register 5
    STORE R6,&reg6     ; save register 6
    STORE R7,&reg7     ; save register 7
    JU @APP            ; execute real-time application program
    LOAD R7,&reg7      ; restore register 7
    LOAD R6,&reg6      ; restore register 6
    LOAD R5,&reg5      ; restore register 5
    LOAD R4,&reg4      ; restore register 4
    LOAD R3,&reg3      ; restore register 3
    LOAD R2,&reg2      ; restore register 2
    LOAD R1,&reg1      ; restore register 1
    LOAD R0,&reg0      ; restore register 0
    EPI                ; re-enable interrupts
    RI                 ; return from interrupt

In many computers, block save and restore instructions are available to save and restore a set of registers to consecutive memory locations. Also note that this interrupt handler does not permit interruption of itself. If that were to be allowed, or if more than one interrupt routine existed, a stack rather than just static memory would be needed to save the context.

The background program would include the initialization procedure and any processing that is not time critical, and would be written in a high-order language. If the program were written in C, it might appear as:

    /* allocate space for context variables */
    int reg0, reg1, reg2, reg3, reg4, reg5, reg6, reg7;
    /* declare other global variables here */

    void main(void)
    {
        init();            /* initialize system */
        while (TRUE)       /* background loop */
            background();  /* non-real-time processing here */
    }

Foreground/background systems typically have good response times, since they rely on hardware to perform scheduling. They are the solution of choice for many embedded real-time systems.
But “home-grown” foreground/background systems have at least one major drawback: interfaces to complicated devices and networks must be written. This procedure can be tedious and error-prone. In addition, these types of systems are best implemented when the number of foreground tasks is fixed and known a priori. Although languages that support dynamic allocation of memory could handle a variable number of tasks, this can be tricky. Finally, as with the interrupt-only system, the foreground/background system is vulnerable to timing variations, unanticipated race conditions, hardware failures, and so on.

3.1.4.5 Full-Featured Real-Time Operating Systems

The foreground/background solution can be extended into an operating system by adding additional functions such as network interfaces, device drivers, and complex debugging tools. These types of systems are readily available as commercial products. Such systems rely on a complex operating system using round-robin, preemptive-priority, or a combination of both schemes to provide scheduling; the operating system represents the highest-priority task, kernel, or supervisor.

3.1.5 The Task-Control Block Model

The task-control block model is the most popular method for implementing commercial, full-featured, real-time operating systems because the number of real-time tasks can vary. This architecture is used in interactive on-line systems where tasks (associated with users) come and go. The technique can be used in round-robin, preemptive-priority, or combination systems, although it is generally associated with round-robin systems with a single clock. In preemptive systems, however, it can be used to facilitate dynamic task prioritization. The main drawback of the task-control block model is that when a large number of tasks are created, the overhead of the scheduler can become significant.
In the task-control block (TCB) model, each task is associated with a data structure called a task control block. This data structure contains at least a PC, register contents, an identification string or number, a status, and a priority, if applicable. The system stores these TCBs in one or more data structures, such as a linked list.

3.1.5.1 Task States

The operating system manages the TCBs by keeping track of the status or state of each task. A task typically can be in any one of the following four states:

1. Executing
2. Ready
3. Suspended (or blocked)
4. Dormant (or sleeping)

The executing task is the one that is running, and in a single-processor system there can be only one. A task can enter the executing state when it is created (if no other tasks are ready) or from the ready state (if it is eligible to run based on its priority or its position in the round-robin ready list). When a task is completed, it returns to the dormant state.

Tasks in the ready state are those that are ready to run but are not running. A task enters the ready state if it was executing and its time slice ran out, or if it was preempted. If it was in the suspended state, then it can enter the ready state if an event that initiates it occurs. If the task was in the dormant state, then it enters the ready state upon its creation.

Tasks that are waiting on a particular resource, and thus are not ready, are said to be suspended or blocked.

The dormant state is used only in systems where the number of TCBs is fixed. This state allows memory requirements to be determined beforehand, but it limits available system memory. The state is best described as one in which a task exists but is unavailable to the operating system. Once a task has been created, it can become dormant by deleting it.

3.1.5.2 Task Management

The operating system is in essence the highest-priority task.
Every hardware interrupt and every system-level call (such as a request on a resource) invokes the real-time operating system. The operating system is responsible for maintaining a linked list containing the TCBs of all the ready tasks, and a second linked list of those in the suspended state. It also keeps a table of resources and a table of resource requests. Each TCB contains the essential information normally tracked by the interrupt service routine (Figure 3.4).

Figure 3.4 A typical task-control block: a pointer to the next TCB, status register(s), the program counter, registers 1 through n, status, priority, and task ID.

The difference between the TCB model and the interrupt-service-routine model is that in the TCB model the resources are managed by the operating system, whereas in the interrupt-service-routine model tasks track their own resources. The TCB model is useful when the number of tasks is indeterminate at design time or can change while the system is in operation; that is, the TCB model is very flexible.

When it is invoked, the operating system checks the ready list to see if the next task is eligible for execution. If it is eligible, then the TCB of the currently executing task is moved to the end of the ready list, the eligible task is removed from the ready list, and its execution begins.

Task management can be achieved simply by manipulating the status word. For example, if all of the TCBs are set up in the list with the status word initially set to “dormant,” then tasks can be added by changing the status to “ready” once the TCB has been initialized. During run time the status words of tasks are set accordingly, either to “executing” in the case of the next eligible task or back to “ready” in the case of the interrupted task. Blocked tasks have their status word changed to “suspended.” Completed tasks can be “removed” from the task list by resetting the status word to “dormant.”
This approach reduces overhead because it eliminates the need for dynamic memory management of the TCBs. It also provides deterministic performance because the TCB list is of constant size.

3.1.5.3 Resource Management

In addition to scheduling, the operating system checks the status of all resources in the suspended list. If a task is suspended due to a wait for a resource, then that task can enter the ready state only upon availability of the resource. The list structure is used to arbitrate between two tasks that are suspended on the same resource. If a resource becomes available to a suspended task, then the resource tables are updated and the eligible task is moved from the suspended list to the ready list.

3.2 THEORETICAL FOUNDATIONS OF REAL-TIME OPERATING SYSTEMS

In order to take advantage of some of the more theoretical results in real-time operating systems (RTOS), a fairly rigorous formulation is necessary. Most real-time systems are inherently concurrent; that is, their natural interaction with external events typically requires multiple simultaneous tasks to cope with multiple threads of control. A process is the active object of a system and is the basic unit of work handled by the scheduler. As a process executes, it changes its state, and it may be in one, and only one, of the following states at any instant:

• Dormant (or sleeping). The task has been created and initialized but is not yet ready to execute; in this state, the process is not eligible to execute.
• Ready. Processes in this state are those that are released and eligible for execution, but are not executing. A process enters the ready state if it was executing and its time slice ran out, or if it was preempted. If a process was in the suspended or blocked state, then it enters the ready state if an event that initiates it occurs.
• Executing. When a process is executing, its instructions are being executed.
• Suspended (or blocked). Processes that are waiting for a particular resource, and thus are not ready, are said to be in the suspended or blocked state.
• Terminated. The process has finished execution, has self-terminated or aborted, or is no longer needed.

Similar to processes, threads can be in only one of these states at any instant. A partial state diagram corresponding to process or thread states is depicted in Figure 3.5. It should be noted that different operating systems have different naming conventions, but the states represented in this arbitrary nomenclature exist in one form or another in all RTOS. Many modern operating systems allow processes created within the same program to have unrestricted access to shared memory through a thread facility.

Figure 3.5 A process state diagram as a partially defined finite state machine, showing transitions among the sleeping, ready, executing, suspended, and terminated states. The monitor may cause virtually any transition.

3.2.1 Process Scheduling

Scheduling is a fundamental operating system function. In order to meet a program's temporal requirements in real-time systems, a strategy is needed for ordering the use of system resources, and a mechanism is needed for predicting the worst-case performance (or response time) when a particular scheduling policy is applied.

There are two general classes of scheduling policies: pre-run-time and run-time scheduling. The goal of both types of scheduling is to satisfy time constraints. In pre-run-time scheduling, the objective is to create a feasible schedule offline, which guarantees the execution order of processes and prevents simultaneous access to shared resources.
Pre-run-time scheduling also takes into account and reduces the cost of context-switching overhead, increasing the chance that a feasible schedule can be found.

In run-time scheduling, static priorities are assigned and resources are allocated on a priority basis. Run-time scheduling relies on a complex run-time mechanism for process synchronization and communication. This approach allows events to interrupt processes and demand resources randomly. In terms of performance analysis, engineers must rely on stochastic simulations to verify these types of system designs.

3.2.1.1 Task Characteristics of a Real Workload

The workload on processors consists of tasks, each of which is a unit of work to be allocated CPU time and other resources. Every processor is assigned to at most one task at any time, and every task is assigned to at most one processor at any time. No job is scheduled before its release time. Each task τ_i is typically characterized by the following temporal parameters:

• Precedence constraints: specify whether any task(s) must precede other tasks.
• Release or arrival time r_{i,j}: the release time of the jth instance of task τ_i.
• Phase φ_i: the release time of the first instance of task τ_i.
• Response time: the time span between the task's activation and its completion.
• Absolute deadline d_i: the instant by which the task must complete.
• Relative deadline D_i: the maximum allowable response time of the task.
• Laxity type: the notion of urgency or leeway in a task's execution.
• Period p_i: the minimum length of the intervals between the release times of consecutive instances of the task.
• Execution time e_i: the (maximum) amount of time required to complete the execution of task τ_i when it executes alone and has all the resources it requires.
Mathematically, some of the parameters just listed are related as follows:

    φ_i = r_{i,1}    and    r_{i,k} = φ_i + (k − 1) p_i        (3.1)

The absolute deadline of the jth instance of task τ_i is

    d_{i,j} = φ_i + (j − 1) p_i + D_i                          (3.2)

If the relative deadline of a periodic task is equal to its period p_i, then

    d_{i,k} = r_{i,k} + p_i = φ_i + k p_i                      (3.3)

where k is some positive integer greater than or equal to one, corresponding to the kth instance of that task.

3.2.1.2 Typical Task Model

A simple task model is presented in order to describe some standard scheduling policies used in real-time systems. The model makes the following simplifying assumptions:

• All tasks in the task set are strictly periodic.
• The relative deadline of a task is equal to its period (frame).
• All tasks are independent; there are no precedence constraints.
• No task has any nonpreemptible section, and the cost of preemption is negligible.
• Only processing requirements are significant; memory and I/O requirements are negligible.

For real-time systems it is of the utmost importance that the scheduling algorithm produce a predictable schedule; that is, at all times it is known which task is going to execute next. Many RTOS use a round-robin scheduling policy because it is simple and predictable, so it is natural to describe that algorithm more rigorously.

3.2.2 Round-Robin Scheduling

In a round-robin system, several processes are executed sequentially to completion, often in conjunction with a cyclic executive. In round-robin systems with time slicing, each executable task is assigned a fixed-time quantum called a time slice in which to execute. A fixed-rate clock is used to initiate an interrupt at a rate corresponding to the time slice. The task executes until it completes or its execution time expires, as indicated by the clock interrupt.
If the task does not execute to completion, its context must be saved and the task is placed at the end of the executable list; the context of the next executable task in the list is restored, and it resumes execution. Essentially, round-robin scheduling achieves fair allocation of the CPU to tasks of the same priority by time multiplexing.
