CHAPTER 3
REAL-TIME SCHEDULING AND
SCHEDULABILITY ANALYSIS
As in preparing a schedule of to-do tasks in everyday life, scheduling a set of com-
puter tasks (also known as processes) is to determine when to execute which task,
thus determining the execution order of these tasks; and in the case of a multipro-
cessor or distributed system, to also determine an assignment of these tasks to spe-
cific processors. This task assignment is analogous to assigning tasks to a specific
person in a team of people. Scheduling is a central activity of a computer system,
usually performed by the operating system. Scheduling is also necessary in many
non-computer systems such as assembly lines.
In non-real-time systems, the typical goal of scheduling is to maximize average
throughput (number of tasks completed per unit time) and/or to minimize average
waiting time of the tasks. In the case of real-time scheduling, the goal is to meet
the deadline of every task by ensuring that each task can complete execution by its
specified deadline. This deadline is derived from environmental constraints imposed
by the application.
Schedulability analysis determines whether a specific set of tasks, or a set of tasks
satisfying certain constraints, can be successfully scheduled (completing execution
of every task by its specified deadline) using a specific scheduler.
Schedulability Test: A schedulability test is used to validate that a given application
can satisfy its specified deadlines when scheduled according to a specific scheduling
algorithm.
This schedulability test is often done at compile time, before the computer system
and its tasks start their execution. If the test can be performed efficiently, then it can
be done at run-time as an on-line test.
Real-Time Systems: Scheduling, Analysis, and Verification. Albert M. K. Cheng
Copyright © 2002 John Wiley & Sons, Inc.
ISBN: 0-471-18406-3
Schedulable Utilization: A schedulable utilization is the maximum utilization al-
lowed for a set of tasks that will guarantee a feasible scheduling of this task set.
A hard real-time system requires that every task or task instance completes its
execution by its specified deadline; failure to do so even for a single task or task
instance may lead to catastrophic consequences. A soft real-time system allows some
tasks or task instances to miss their deadlines, but a task or task instance that misses
a deadline may be less useful or valuable to the system.
There are basically two types of schedulers: compile-time (static) and run-time
(on-line or dynamic).
Optimal Scheduler: An optimal scheduler is one which may fail to meet a deadline
of a task only if no other scheduler can.
Note that “optimal” in real-time scheduling does not necessarily mean “fastest average
response time” or “shortest average waiting time.” A task T_i is characterized by
the following parameters:
S: start, release, ready, or arrival time
c: (maximum) computation time
d: relative deadline (deadline relative to the task’s start time)
D: absolute deadline (wall clock time deadline).
There are three main types of tasks. A single-instance task executes only once. A
periodic task has many instances or iterations, and there is a fixed period between
two consecutive releases of the same task. For example, a periodic task may perform
signal processing of a radar scan once every 2 seconds, so the period of this task is
2 seconds. A sporadic task has zero or more instances, and there is a minimum sep-
aration between two consecutive releases of the same task. For example, a sporadic
task may perform emergency maneuvers of an airplane when the emergency button
is pressed, but there is a minimum separation of 20 seconds between two emergency
requests. An aperiodic task is a sporadic task with either a soft deadline or no dead-
line. Therefore, if the task has more than one instance (sometimes called a job), we
also have the following parameter:
p: period (for periodic tasks); minimum separation (for sporadic tasks).
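As a small illustration (not from the text), these parameters can be grouped in a record type; the class and field names below are our own, mirroring the notation above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    S: float                   # start / release / arrival time
    c: float                   # (maximum) computation time
    d: Optional[float] = None  # relative deadline
    D: Optional[float] = None  # absolute deadline (wall clock)
    p: Optional[float] = None  # period, or minimum separation for sporadic tasks

    def absolute_deadline(self) -> float:
        # When only the relative deadline is given, D = S + d.
        return self.D if self.D is not None else self.S + self.d

j = Task(S=2, c=5, D=9)        # a single-instance task with an absolute deadline
print(j.absolute_deadline())   # → 9
print(Task(S=1, c=1, d=4, p=4).absolute_deadline())  # → 5 (S + d)
```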
The following are additional constraints that may complicate scheduling of tasks
with deadlines:
1. frequency of tasks requesting service periodically,
2. precedence relations among tasks and subtasks,
3. resources shared by tasks, and
4. whether task preemption is allowed or not.
If tasks are preemptable, we assume that a task can be interrupted only at discrete
(integer) time instants unless we indicate otherwise.
3.1 DETERMINING COMPUTATION TIME
The application and the environment in which the application is embedded are main
factors determining the start time, deadline, and period of a task. The computation
(or execution) times of a task are dependent on its source code, object code, execu-
tion architecture, memory management policies, and actual number of page faults
and I/O.
For real-time scheduling purposes, we use the worst-case execution (or computa-
tion) time (WCET) as c. This time is not simply an upper bound on the execution of
the task code without interruption. This computation time has to include the time the
central processing unit (CPU) is executing non-task code such as code for handling
page faults caused by this task as well as the time an I/O request spends in the disk
queue for bringing in a missing page for this task.
Determining the computation time of a process is crucial to successfully schedul-
ing it in a real-time system. An overly pessimistic estimate of the computation time
would result in wasted CPU cycles, whereas an under-approximation would result in
missed deadlines.
One way of approximating the WCETs is to perform testing of the system of tasks
and use the largest value of computation time seen during these tests. The problem
with this is that the largest value seen during testing may not be the largest observed
in the working system.
Another typical approach to determining a process’s computation time is by an-
alyzing the source code [Harmon, Baker, and Whalley, 1994; Park, 1992; Park,
1993; Park and Shaw, 1990; Shaw, 1989; Puschner and Koza, 1989; Nielsen, 1987;
Chapman, Burns, and Wellings, 1996; Lundqvist and Stenström, 1999; Sun and
Liu, 1996]. Analysis techniques are safe, but use an overly simplified model of the
CPU that results in over-approximating the computation time [Healy and Whalley,
1999b; Healy et al., 1999]. Modern processors are superscalar and pipelined. They
can execute instructions out of order and even in parallel. This greatly reduces the
computation time of a process. Analysis techniques that do not take this fact into
consideration would result in pessimistic predicted WCETs.
Recently, there have been attempts to characterize the response time of programs run-
ning in systems with several levels of memory components such as cache and main
memory [Ferdinand and Wilhelm, 1999; Healy and Whalley, 1999a; White et al.,
1999]. Whereas the studies make it possible to analyze the behavior of certain page
replacements and write strategies, there are restrictions in their models and thus the
proposed analysis techniques cannot be applied in systems not satisfying their con-
straints. More work needs to be done before we can apply similar analysis strategies
to complex computer systems.
An alternative to the above methods is to use a probability model to model the
WCET of a process as suggested in [Burns and Edgar, 2000; Edgar and Burns,
2001]. The idea here is to model the distribution of the computation time and use
it to compute a confidence level for any given computation time. For instance, in a
soft real-time system, if the designer wants a confidence of 99% on the estimate for
WCET, he or she can determine which WCET to use from the probability model. If
the designer wants a 99.9% probability, he or she can raise the WCET even higher. In
chapters 10 and 11, we describe techniques for determining the WCET of rule-based
systems.
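As a rough sketch of this confidence-level idea (a plain empirical quantile over measured runs, not the fitted distribution of Burns and Edgar; the data and function here are hypothetical):

```python
import math

def wcet_estimate(samples, confidence):
    """Return the smallest measured time that at least a `confidence`
    fraction of the samples do not exceed (an empirical quantile)."""
    ordered = sorted(samples)
    k = max(0, math.ceil(confidence * len(ordered)) - 1)
    return ordered[k]

# Hypothetical measured execution times of one task, in milliseconds
times = [9.2, 9.5, 9.1, 10.4, 9.8, 9.3, 12.0, 9.6, 9.9, 10.1]
print(wcet_estimate(times, 0.99))  # → 12.0, a conservative estimate near the max
print(wcet_estimate(times, 0.50))  # → 9.6, a median-level estimate
```

Raising the requested confidence pushes the chosen WCET toward (and, with a fitted tail model, beyond) the largest observed value.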
3.2 UNIPROCESSOR SCHEDULING
This section considers the problem of scheduling tasks on a uniprocessor system.
We begin by describing schedulers for preemptable and independent tasks with no
precedence or resource-sharing constraints. Following the discussion on these basic
schedulers, we will study the scheduling of tasks with constraints and show how
these basic schedulers can be extended to handle these tasks.
3.2.1 Scheduling Preemptable and Independent Tasks
To simplify our discussion of the basic schedulers, we assume that the tasks to be
scheduled are preemptable and independent. A preemptable task can be interrupted
at any time during its execution, and resumed later. We also assume that there is no
context-switching time. In practice, we can include an upper bound on the context-
switching time in the computation time of the task. An independent task can be
scheduled for execution as soon as it becomes ready or released. It does not need to wait
for other tasks to finish first or to wait for shared resources. We also assume here that
the execution of the scheduler does not require the processor, that is, the scheduler
runs on another specialized processor. If there is no specialized scheduling processor,
then the execution time of the scheduler must also be included in the total execution
time of the task set. Later, after understanding the basic scheduling strategies, we
will extend these techniques to handle tasks with more realistic constraints.
Fixed-Priority Schedulers: Rate-Monotonic and Deadline-Monotonic Algorithms

A popular real-time scheduling strategy is the rate-monotonic (RM) scheduler (RMS),
which is a fixed- (static-) priority scheduler using the task’s (fixed)
period as the task’s priority. RMS executes at any time instant the instance of the
ready task with the shortest period first. If two or more tasks have the same period,
then RMS randomly selects one for execution next.
Example. Consider three periodic tasks with the following arrival times, computation
times, and periods (which are equal to their respective relative deadlines):

J_1: S_1 = 0, c_1 = 2, p_1 = d_1 = 5,
J_2: S_2 = 1, c_2 = 1, p_2 = d_2 = 4, and
J_3: S_3 = 2, c_3 = 2, p_3 = d_3 = 20.

The RM scheduler produces a feasible schedule as follows. At time 0, J_1 is the
only ready task so it is scheduled to run. At time 1, J_2 arrives. Since p_2 < p_1, J_2 has
Figure 3.1 RM schedule.
a higher priority, so J_1 is preempted and J_2 starts to execute. At time 2, J_2 finishes
execution and J_3 arrives. Since p_3 > p_1, J_1 now has a higher priority, so it resumes
execution. At time 3, J_1 finishes execution. At this time, J_3 is the only ready task so
it starts to run. At time 4, J_3 is still the only task so it continues to run and finishes
execution at time 5. At this time, the second instances of J_1 and J_2 are ready. Since
p_2 < p_1, J_2 has a higher priority, so J_2 starts to execute. At time 6, the second
instance of J_2 finishes execution. At this time, the second instance of J_1 is the only
ready task so it starts execution, finishing at time 8. The timing diagram of the RM
schedule for this task set is shown in Figure 3.1.
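The schedule above can be reproduced with a small discrete-time simulation. This is an illustrative sketch only (unit-time ticks, integer parameters, no context-switch cost); the function and the encoding of tasks as (S, c, p) triples are our own:

```python
def rm_schedule(tasks, horizon):
    """Simulate rate-monotonic scheduling in unit time steps.

    tasks: list of (S, c, p) with start time, computation time, period.
    Returns, for each tick, the index of the task that ran (None = idle).
    """
    remaining = [0] * len(tasks)
    timeline = []
    for t in range(horizon):
        # Release a new instance of each periodic task at its period boundary.
        for i, (S, c, p) in enumerate(tasks):
            if t >= S and (t - S) % p == 0:
                remaining[i] = c
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            run = min(ready, key=lambda i: tasks[i][2])  # shortest period wins
            remaining[run] -= 1
            timeline.append(run)
        else:
            timeline.append(None)
    return timeline

# The three tasks of the example (indices 0, 1, 2 stand for J_1, J_2, J_3):
print(rm_schedule([(0, 2, 5), (1, 1, 4), (2, 2, 20)], 8))
# → [0, 1, 0, 2, 2, 1, 0, 0], matching the narrative above
```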
The RM scheduling algorithm is not optimal in general since there exist schedu-
lable task sets that are not RM-schedulable. However, there is a special class of peri-
odic task sets for which the RM scheduler is optimal.
Schedulability Test 1: Given a set of n independent, preemptable, and periodic
tasks on a uniprocessor such that their relative deadlines are equal to or larger
than their respective periods and that their periods are exact (integer) multiples of
each other, let U be the total utilization of this task set. A necessary and sufficient
condition for feasible scheduling of this task set is
U = Σ_{i=1}^{n} c_i/p_i ≤ 1.
Example. There are three periodic tasks with the following arrival times, computation
times, and periods (which are equal to their respective relative deadlines):

J_1: S_1 = 0, c_1 = 1, p_1 = 4,
J_2: S_2 = 0, c_2 = 1, p_2 = 2, and
J_3: S_3 = 0, c_3 = 2, p_3 = 8.

Because the task periods are exact multiples of each other (p_2 < p_1 < p_3, p_1 = 2p_2,
p_3 = 4p_2 = 2p_1), this task set is in the special class of tasks given in Schedulability
Test 1. Since U = 1/4 + 1/2 + 2/8 = 1 ≤ 1, this task set is RM-schedulable.
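Schedulability Test 1 reduces to a one-line utilization check. A minimal sketch (the helper is our own, using exact rational arithmetic):

```python
from fractions import Fraction

def utilization(tasks):
    """Total utilization U = sum of c_i/p_i for tasks given as (c, p) pairs."""
    return sum(Fraction(c, p) for c, p in tasks)

# The example task set: 1/4 + 1/2 + 2/8
U = utilization([(1, 4), (1, 2), (2, 8)])
print(U, U <= 1)  # → 1 True
```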
For a set of tasks with arbitrary periods, a simple schedulability test exists with
a sufficient but not necessary condition for scheduling with the RM scheduler [Liu
and Layland, 1973].
Schedulability Test 2: Given a set of n independent, preemptable, and periodic
tasks on a uniprocessor, let U be the total utilization of this task set. A sufficient
condition for feasible scheduling of this task set is U ≤ n(2^{1/n} − 1).
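For intuition about how tight this bound is, the following sketch (ours) evaluates n(2^{1/n} − 1) for a few values of n; the bound falls toward ln 2 ≈ 0.693 as n grows:

```python
def ll_bound(n):
    """Liu-Layland sufficient RM utilization bound: n(2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

for n in (1, 2, 3, 10):
    print(n, round(ll_bound(n), 4))
# n=1 gives 1.0; n=2 gives about 0.8284; the values decrease toward ln 2
```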
However, using this simple schedulability test may under-utilize a computer sys-
tem since a task set whose utilization exceeds the above bound may still be RM-
schedulable. Therefore, we proceed to derive a sufficient and necessary condition
for scheduling using the RM algorithm. Suppose we have three tasks, all with start
times 0. Task J_1 has the smallest period, followed by J_2, and then J_3. It is intuitive
to see that for J_1 to be feasibly scheduled, its computation time must be less than or
equal to its period, so the following necessary and sufficient condition must hold:

c_1 ≤ p_1.
For J_2 to be feasibly scheduled, we need to find enough available time in the
interval [0, p_2] that is not used by J_1. Suppose J_2 completes execution at time t.
Then the total number of iterations of J_1 in the interval [0, t] is ⌈t/p_1⌉.
To ensure that J_2 can complete execution at time t, every iteration of J_1 in [0, t] must
be completed and there must be enough available time left for J_2. This available time
is c_2. Therefore,

t = ⌈t/p_1⌉ c_1 + c_2.
Similarly, for J_3 to be feasibly scheduled, there must be enough processor time left
for executing J_3 after scheduling J_1 and J_2:

t = ⌈t/p_1⌉ c_1 + ⌈t/p_2⌉ c_2 + c_3.
The next question is how to determine if such a time t exists so that a feasible
schedule for a set of tasks can be constructed. Note that there is an infinite number
of points in every interval if no discrete time is assumed. However, the value of the
ceiling such as ⌈t/p_1⌉ only changes at multiples of p_1, with an increase of c_1. Thus
we need to show only that a k exists such that

kp_1 ≥ kc_1 + c_2 and kp_1 ≤ p_2.
Therefore, we need to check that

t ≥ ⌈t/p_1⌉ c_1 + c_2

for some t that is a multiple of p_1 such that t ≤ p_2. If this is found, then we have the
necessary and sufficient condition for feasibly scheduling J_2 using the RM algorithm.
This check is finite since there is a finite number of multiples of p_1 that are less than
or equal to p_2. Similarly for J_3, we check if the following inequality holds:

t ≥ ⌈t/p_1⌉ c_1 + ⌈t/p_2⌉ c_2 + c_3.
We are ready to present the necessary and sufficient condition for feasible scheduling
of a periodic task.

Schedulability Test 3: Let

w_i(t) = Σ_{k=1}^{i} c_k ⌈t/p_k⌉, 0 < t ≤ p_i.

Task J_i is RM-schedulable iff the inequality

w_i(t) ≤ t

holds for some time instant t chosen as follows:

t = kp_j, j = 1, ..., i, k = 1, ..., ⌊p_i/p_j⌋.

If d_i ≠ p_i, we replace p_i by min(d_i, p_i) in the above expression.

The following example applies this sufficient and necessary condition to check the
schedulability of four tasks using the RM algorithm.
Example. Consider the following periodic tasks, all arriving at time 0, with every
task’s period equal to its relative deadline:

J_1: c_1 = 10, p_1 = 50,
J_2: c_2 = 15, p_2 = 80,
J_3: c_3 = 40, p_3 = 110, and
J_4: c_4 = 50, p_4 = 190.

Using the above schedulability test, we proceed to check whether each task is schedulable
using the RM algorithm, beginning with the task having the smallest period.

For J_1, i = 1, j = 1, ..., i = 1, so

k = 1, ..., ⌊p_i/p_j⌋ = 1, ..., ⌊50/50⌋ = 1.

Thus, t = kp_j = 1(50) = 50. Task J_1 is RM-schedulable iff

c_1 ≤ 50.

Since c_1 = 10 ≤ 50, J_1 is RM-schedulable.
For J_2, i = 2, j = 1, ..., i = 1, 2, so

k = 1, ..., ⌊p_i/p_j⌋ = 1, ..., ⌊80/50⌋ = 1.

Thus, t = 1p_1 = 1(50) = 50, or t = 1p_2 = 1(80) = 80. Task J_2 is RM-schedulable
iff

c_1 + c_2 ≤ 50 or
2c_1 + c_2 ≤ 80.

Since c_1 = 10 and c_2 = 15, 10 + 15 ≤ 50 (or 2(10) + 15 ≤ 80), thus J_2 is
RM-schedulable together with J_1.
For J_3, i = 3, j = 1, ..., i = 1, 2, 3, so

k = 1, ..., ⌊p_i/p_j⌋ = 1, ..., ⌊110/50⌋ = 1, 2.

Thus, t = 1p_1 = 1(50) = 50, or t = 1p_2 = 1(80) = 80, or t = 1p_3 = 1(110) = 110,
or t = 2p_1 = 2(50) = 100. Task J_3 is RM-schedulable iff

c_1 + c_2 + c_3 ≤ 50 or
2c_1 + c_2 + c_3 ≤ 80 or
2c_1 + 2c_2 + c_3 ≤ 100 or
3c_1 + 2c_2 + c_3 ≤ 110.

Since c_1 = 10, c_2 = 15, and c_3 = 40, 2(10) + 15 + 40 ≤ 80 (or 2(10) + 2(15) +
40 ≤ 100, or 3(10) + 2(15) + 40 ≤ 110), thus J_3 is RM-schedulable together with
J_1 and J_2.
Figure 3.2 RM schedule.
For J_4, i = 4, j = 1, ..., i = 1, 2, 3, 4, so

k = 1, ..., ⌊p_i/p_j⌋ = 1, ..., ⌊190/50⌋ = 1, 2, 3.

Thus, t = 1p_1 = 1(50) = 50, or t = 1p_2 = 1(80) = 80, or t = 1p_3 = 1(110) = 110,
or t = 1p_4 = 1(190) = 190, or t = 2p_1 = 2(50) = 100, or t = 2p_2 = 2(80) = 160,
or t = 3p_1 = 3(50) = 150. Task J_4 is RM-schedulable iff

c_1 + c_2 + c_3 + c_4 ≤ 50 or
2c_1 + c_2 + c_3 + c_4 ≤ 80 or
2c_1 + 2c_2 + c_3 + c_4 ≤ 100 or
3c_1 + 2c_2 + c_3 + c_4 ≤ 110 or
3c_1 + 2c_2 + 2c_3 + c_4 ≤ 150 or
4c_1 + 2c_2 + 2c_3 + c_4 ≤ 160 or
4c_1 + 3c_2 + 2c_3 + c_4 ≤ 190.
Since none of the inequalities can be satisfied, J_4 is not RM-schedulable together
with J_1, J_2, and J_3. In fact,

U = 10/50 + 15/80 + 40/110 + 50/190 = 1.014 > 1.

Therefore, no scheduler can feasibly schedule these tasks. Ignoring task J_4, the
utilization is U = 0.75, which also satisfies the simple schedulable utilization of
Schedulability Test 2. The RM schedule for the first three tasks is shown in Figure 3.2.
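Schedulability Test 3 mechanizes naturally: enumerate the scheduling points t = kp_j and check whether the demand w_i(t) fits at any of them. A sketch under the example’s assumptions (tasks sorted by increasing period, d_i = p_i); the function is our own:

```python
def rm_exact_test(tasks):
    """Apply Schedulability Test 3 to tasks given as (c, p) pairs,
    sorted by increasing period. Returns one bool per task."""
    results = []
    for i, (ci, pi) in enumerate(tasks):
        # Scheduling points: t = k * p_j for j <= i, k = 1 .. floor(p_i / p_j)
        points = sorted({k * pj
                         for (_, pj) in tasks[:i + 1]
                         for k in range(1, pi // pj + 1)})
        # w_i(t) = sum of c_k * ceil(t / p_k); schedulable iff w_i(t) <= t somewhere
        ok = any(sum(c * ((t + p - 1) // p) for (c, p) in tasks[:i + 1]) <= t
                 for t in points)
        results.append(ok)
    return results

# The four tasks of the example: (c, p)
print(rm_exact_test([(10, 50), (15, 80), (40, 110), (50, 190)]))
# → [True, True, True, False], matching the hand calculation
```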
Another fixed-priority scheduler is the deadline-monotonic (DM) scheduling al-
gorithm, which assigns higher priorities to tasks with shorter relative deadlines. It is
intuitive to see that if every task’s period is the same as its deadline, then the RM
and DM scheduling algorithms are equivalent. In general, these two algorithms are
equivalent if every task’s deadline is the product of a constant k and this task’s period,
that is, d_i = kp_i.
Note that some authors [Krishna and Shin, 1997] consider deadline monotonic
as another name for the earliest-deadline-first scheduler, which is a dynamic-priority
scheduler described in the next section.
Dynamic-Priority Schedulers: Earliest Deadline First and Least Laxity First

An optimal, run-time scheduler is the earliest-deadline-first (EDF or ED) algorithm,
which executes at every instant the ready task with the earliest (closest or
nearest) absolute deadline first. The absolute deadline of a task is its relative deadline
plus its arrival time. If more than one task has the same deadline, EDF randomly
selects one for execution next. EDF is a dynamic-priority scheduler since task priorities
may change at run-time depending on the nearness of their absolute deadlines. Some
authors [Krishna and Shin, 1997] call EDF a deadline-monotonic (DM) scheduling
algorithm whereas others [Liu, 2000] define the DM algorithm as a fixed-priority
scheduler that assigns higher priorities to tasks with shorter relative deadlines. Here,
we use the terms EDF or DM to refer to this dynamic-priority scheduling algorithm.
We now describe an example.
Example. There are four single-instance tasks with the following arrival times, computation
times, and absolute deadlines:

J_1: S_1 = 0, c_1 = 4, D_1 = 15,
J_2: S_2 = 0, c_2 = 3, D_2 = 12,
J_3: S_3 = 2, c_3 = 5, D_3 = 9, and
J_4: S_4 = 5, c_4 = 2, D_4 = 8.

A first-in-first-out (FIFO or FCFS) scheduler (often used in non-real-time operating
systems) gives an infeasible schedule, shown in Figure 3.3. Tasks are executed in
the order they arrive and deadlines are not considered. As a result, task J_3 misses its
deadline after time 9, and task J_4 misses its deadline after time 8, before it is even
scheduled to run.
However, the EDF scheduler produces a feasible schedule, shown in Figure 3.4.
At time 0, tasks J_1 and J_2 arrive. Since D_1 > D_2 (J_2’s absolute deadline is earlier
than J_1’s absolute deadline), J_2 has higher priority and begins to run. At time 2, task
J_3 arrives. Since D_3 < D_2, J_2 is preempted and J_3 begins execution. At time 5, task
J_4 arrives. Since D_4 < D_3, J_3 is preempted and J_4 begins execution.

At time 7, J_4 completes its execution one time unit before its deadline of 8. At
this time, D_3 < D_2 < D_1, so J_3 has the highest priority and resumes execution. At
time 9, J_3 completes its execution, meeting its deadline of 9. At this time, J_2 has the
highest priority and resumes execution. At time 10, J_2 completes its execution two
time units before its deadline of 12.
Figure 3.3 FIFO schedule.
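The feasible EDF schedule above can likewise be checked with a small unit-time simulation (our own sketch; jobs are encoded as (S, c, D) triples, with indices 0–3 standing for J_1–J_4):

```python
def edf_schedule(jobs, horizon):
    """Simulate EDF for single-instance jobs (S, c, D) in unit time steps.

    Returns, for each tick, the index of the job that ran (None = idle).
    """
    remaining = [c for (_, c, _) in jobs]
    timeline = []
    for t in range(horizon):
        ready = [i for i, (S, _, _) in enumerate(jobs)
                 if S <= t and remaining[i] > 0]
        if ready:
            run = min(ready, key=lambda i: jobs[i][2])  # earliest absolute deadline
            remaining[run] -= 1
            timeline.append(run)
        else:
            timeline.append(None)
    return timeline

jobs = [(0, 4, 15), (0, 3, 12), (2, 5, 9), (5, 2, 8)]
print(edf_schedule(jobs, 14))
# → [1, 1, 2, 2, 2, 3, 3, 2, 2, 1, 0, 0, 0, 0]: J_2, then J_3, J_4, J_3, J_2, J_1
```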
Another optimal, run-time scheduler is the least-laxity-first (LL or LLF) algorithm,
also known as the minimum-laxity-first (MLF) or least-slack-time-first (LST) algorithm.