<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<!--Converted with LaTeX2HTML 2K.1beta (1.48)
original version by: Nikos Drakos, CBLU, University of Leeds
* revised and updated by: Marcus Hennecke, Ross Moore, Herb Swan
* with significant contributions from:
Jens Lippmann, Marek Rouchal, Martin Wilck and others -->
<HTML>
<HEAD>
<TITLE>Scheduling Theory</TITLE>
<META NAME="description" CONTENT="Scheduling Theory">
<META NAME="keywords" CONTENT="rtic">
<META NAME="resource-type" CONTENT="document">
<META NAME="distribution" CONTENT="global">
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<META NAME="Generator" CONTENT="LaTeX2HTML v2K.1beta">
<META HTTP-EQUIV="Content-Style-Type" CONTENT="text/css">
<LINK REL="STYLESHEET" HREF="rtic.css">
<LINK REL="next" HREF="node5.html">
<LINK REL="previous" HREF="node3.html">
<LINK REL="up" HREF="node3.html">
<LINK REL="next" HREF="node5.html">
</HEAD>
<BODY bgcolor="white">
<!--Navigation Panel-->
<A NAME="tex2html163"
HREF="node5.html">
<IMG WIDTH="37" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="next"
SRC="/usr/share/latex2html/icons/next.png"></A>
<A NAME="tex2html161"
HREF="node3.html">
<IMG WIDTH="26" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="up"
SRC="/usr/share/latex2html/icons/up.png"></A>
<A NAME="tex2html155"
HREF="node3.html">
<IMG WIDTH="63" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="previous"
SRC="/usr/share/latex2html/icons/prev.png"></A>
<BR>
<B> Next:</B> <A NAME="tex2html164"
HREF="node5.html">Schedulability Analysis</A>
<B> Up:</B> <A NAME="tex2html162"
HREF="node3.html">Real Time Systems</A>
<B> Previous:</B> <A NAME="tex2html156"
HREF="node3.html">Real Time Systems</A>
<BR>
<BR>
<!--End of Navigation Panel-->
<H2><A NAME="SECTION00121000000000000000">
Scheduling Theory</A>
</H2>
<P>
One of the first works on scheduling theory for computer systems was
published in 1967 by Conway, Maxwell, and Miller
[<A
 HREF="node64.html#conway_maxwell_miller">CML67</A>]. In it, they survey
scheduling results that up to that point had been applied primarily in
job shops. They present results applicable both to what later became
known as classical scheduling theory and to real time systems.
<P>
Classical scheduling theory - the kind normally found in a
general-purpose multitasking operating system such as Unix - typically
uses metrics such as minimizing the sum of completion times, minimizing
the weighted sum of completion times, minimizing schedule length, or
minimizing the number of processors required. Most of these metrics
evolved in job shops during and immediately after World War II, where
the main concern was maximizing the throughput of a given machine, such
as a lathe or a milling machine.
<P>
In 1973, Coffman and Denning [<A
 HREF="node64.html#coffman_and_denning">CD73</A>] showed that
these metrics are not useful for real time systems. For example, the
sum of completion times is useless for time critical systems because
the metric makes no direct assessment of timing properties (deadlines
or periods). Such a metric may be appropriate for Unix, where the goal
is to maximize throughput - the number of tasks executed in a given
amount of time - but it is not appropriate for a hard timing
environment such as AMB control.
<P>
Real time researchers thus turned to the second class of results
presented by Conway <I>et al.</I>, who cite a 1955 job-shop scheduling
result by Jackson, which states:
<P>
<BLOCKQUOTE>
<I>The maximum lateness and maximum job tardiness are minimized by
sequencing the jobs in order of non-decreasing due dates</I>
</BLOCKQUOTE>
<P>
This algorithm, usually referred to as the ``Earliest Deadline First''
or simply EDF algorithm, has been shown to be optimal for uniprocessor
systems, where optimality of a real time scheduling algorithm is
defined as follows:
<P>
<BLOCKQUOTE>
<I>An optimal scheduling algorithm is one that may fail to meet a
deadline only if no other scheduling algorithm can meet it.</I>
</BLOCKQUOTE>
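<P>
As a concrete illustration of the EDF rule, the following C sketch
selects, among the tasks that are currently ready, the one with the
earliest absolute deadline. The task structure, field names, and time
units are illustrative assumptions, not taken from any particular
system.
<PRE>
/* Minimal EDF decision sketch (illustrative data structure, not a
   particular kernel's API). */
struct task {
    long deadline_us;   /* absolute deadline, in microseconds */
    int  ready;         /* nonzero if the task is ready to run */
};

/* Return the index of the ready task with the earliest absolute
   deadline, or -1 if no task is ready. */
int edf_pick(const struct task *tasks, int n)
{
    int best = -1;
    for (int i = 0; i &lt; n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best &lt; 0 || tasks[i].deadline_us &lt; tasks[best].deadline_us)
            best = i;
    }
    return best;
}
</PRE>
<P>
At every scheduling point the task returned by <TT>edf_pick</TT> would
be dispatched; by the optimality result just stated, no other
uniprocessor scheduling algorithm can meet a deadline that this rule
misses.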
<P>
The Jackson result sparked much interest and spawned two areas of real
time scheduling research: <I>dynamic</I> and <I>static</I> scheduling,
the latter of which is more applicable to AMB control. A scheduling
algorithm is said to be ``dynamic'' when it has full knowledge of the
task properties<A NAME="tex2html9"
 HREF="footnode.html#foot200"><SUP>1.5</SUP></A> of both previous and
present tasks, but no knowledge of future tasks - neither their number
nor their start times.
<P>
A static scheduler, on the other hand, knows the task properties of
all past, present, and future tasks, and therefore the key scheduling
parameters can be set up <I>a priori</I>. As a simple example, suppose
that an AMB system uses a simple PID suspension controller and a
plotting package for real time monitoring of plant states. We know
<I>a priori</I> that the suspension controller must execute
every <IMG
 WIDTH="35" HEIGHT="20" ALIGN="BOTTOM" BORDER="0"
 SRC="img4.png"
 ALT="$100$"> microseconds without fail, and that the plotting package
will execute once every <IMG
 WIDTH="62" HEIGHT="36" ALIGN="MIDDLE" BORDER="0"
 SRC="img15.png"
 ALT="$14,000$"> microseconds, although we are not too concerned if it
occasionally runs after 14,000 or even 50,000 microseconds.
Consequently, we can assign <I>a priori</I> some scheduling parameters,
such as the next execution time (now + 100 microseconds and now +
14,000 microseconds) and a relative priority (it is more important
that the suspension controller executes at its specified time).
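<P>
The <I>a priori</I> parameters from this example can be written
directly into a static task table. The sketch below is purely
illustrative: the structure, the field names, and the convention that
a larger number means a more important task are assumptions, not part
of any particular scheduler.
<PRE>
/* Illustrative static task table for the example above: every
   scheduling parameter is fixed before the system starts. */
struct periodic_task {
    const char *name;
    long period_us;        /* period, in microseconds */
    long next_release_us;  /* absolute time of the next execution */
    int  priority;         /* larger value = more important (assumed convention) */
};

/* Fill in the table given the current time 'now_us', obtained from
   whatever clock the system provides. */
void init_static_schedule(long now_us, struct periodic_task t[2])
{
    /* Suspension controller: must run every 100 microseconds without fail. */
    t[0].name            = "suspension";
    t[0].period_us       = 100;
    t[0].next_release_us = now_us + 100;
    t[0].priority        = 2;

    /* Plotting package: nominally every 14,000 microseconds,
       but tolerant of occasional lateness. */
    t[1].name            = "plotter";
    t[1].period_us       = 14000;
    t[1].next_release_us = now_us + 14000;
    t[1].priority        = 1;
}
</PRE>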
<P>
A ``fixed priority scheduler'' (FPS) is a type of static scheduling
algorithm in which each task is assigned - <I>a priori</I> - a
priority, or relative importance, as shown in Figure <A HREF="node4.html#FPS_scheme">1.2</A>. Each task
has an associated start time at which it is expected to begin
execution, an execution time, and a completion time. At both the start
and completion times of each task, the scheduler must make a decision:
which task should be allowed to execute on the CPU at that point in
time so that all deadlines are met. Higher priority tasks take
precedence over lower priority tasks, tasks at the same priority level
are dispatched to the CPU in First In First Out (FIFO) order, and a
task may <I>preempt</I> only lower priority tasks; a sketch of such a
decision rule is given below the figure. Tasks are, in general,
assumed to be completely independent of each other. Research in FPS
algorithms concentrates on optimal priority assignment, since how
priorities are assigned determines whether all deadlines are met.
<P>
<P></P>
<DIV ALIGN="CENTER"><A NAME="FPS_scheme"></A><A NAME="321"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 1.2:</STRONG>
Fixed priority scheduling algorithm
example. For illustration purposes, let <IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img1.png"
ALT="$T_1$"> denote a suspension
controller, <IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img2.png"
ALT="$T_2$"> an anti-imbalance controller, and <IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img3.png"
ALT="$T_3$"> a plotting
package.</CAPTION>
<TR><TD>
<DIV ALIGN="CENTER">
<IMG
WIDTH="386" HEIGHT="184" ALIGN="BOTTOM" BORDER="0"
SRC="img16.png"
ALT="\includegraphics[width=0.7\textwidth]{Figures/sched_pts.eps}">
</DIV></TD></TR>
</TABLE>
</DIV><P></P>
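<P>
The scheduling decision just described can be sketched as follows. As
with the earlier EDF sketch, the structure, field names, and the
arrival stamp used for FIFO tie-breaking are illustrative assumptions;
at each start or completion time the scheduler simply re-evaluates
this choice.
<PRE>
/* Minimal fixed priority decision sketch: the highest priority ready
   task runs; ties are broken First In First Out by arrival order
   (illustrative fields, not a particular kernel's API). */
struct fps_task {
    int  priority;   /* fixed, assigned a priori; larger = higher priority */
    long arrival;    /* increasing stamp recorded when the task became ready */
    int  ready;      /* nonzero if the task is ready to run */
};

/* Return the index of the task to run next, or -1 if none is ready. */
int fps_pick(const struct fps_task *tasks, int n)
{
    int best = -1;
    for (int i = 0; i &lt; n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best &lt; 0 ||
            tasks[i].priority &gt; tasks[best].priority ||
            (tasks[i].priority == tasks[best].priority &amp;&amp;
             tasks[i].arrival &lt; tasks[best].arrival))
            best = i;
    }
    return best;
}
</PRE>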
<P>
In 1973, Liu and Layland [<A
HREF="node64.html#liu_and_layland">LL73</A>] developed the ``Rate
Monotonic Algorithm'' (RMA). This algorithm assigns priorities to each
of the tasks in the following fashion:
<P>
<BLOCKQUOTE>
<I>The priority of the task is inversely proportional to its period.</I>
</BLOCKQUOTE>
<P>
In other words, under this assignment policy a task having the
smallest period (highest frequency) - as would be the case for a
suspension controller in an AMB - receives a higher priority than a
task that executes with a larger period. Most importantly, Liu and
Layland showed this algorithm to be <I>optimal</I> among FPS
algorithms for real time systems, and to be applicable to <IMG
 WIDTH="18" HEIGHT="20" ALIGN="BOTTOM" BORDER="0"
 SRC="img17.png"
 ALT="$n$"> tasks (<!-- MATH
 $\forall n \in [1,\infty)$
 -->
<IMG
 WIDTH="101" HEIGHT="40" ALIGN="MIDDLE" BORDER="0"
 SRC="img18.png"
 ALT="$\forall n \in [1,\infty)$">).
<P>
Unfortunately, Liu and Layland showed the RMA algorithm to be optimal
only for independent tasks. Since there is a large need for
communication between controller tasks in an AMB (for example, the
controller effort from the suspension task is communicated to a second
task, which in turn transmits this information over the network), this
independence assumption makes RMA limiting for real time
applications. Consequently, research ensued that tried to relax the
independence assumption required by RMA.
<P>
The direct application of a simple <I>semaphore</I><A NAME="tex2html11"
 HREF="footnode.html#foot218"><SUP>1.6</SUP></A>, or synchronization variable,
for sharing critical data between tasks may result in a high priority
task being blocked by a lower priority task for an unbounded amount of
time, which in turn may lead to missed deadlines. This type of
unbounded blocking is usually referred to as <I>priority
inversion</I> and is illustrated in Figure
<A HREF="node4.html#priority_inversion">1.3</A>.
<P>
<P></P>
<DIV ALIGN="CENTER"><A NAME="priority_inversion"></A><A NAME="322"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 1.3:</STRONG>
Priority inversion example. For illustration purposes, let <IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img1.png"
ALT="$T_1$"> denote a suspension
controller running at a period of <IMG
WIDTH="35" HEIGHT="20" ALIGN="BOTTOM" BORDER="0"
SRC="img4.png"
ALT="$100$"> microseconds, <IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img2.png"
ALT="$T_2$"> an
anti-imbalance control algorithm, and <IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img3.png"
ALT="$T_3$"> a plotting algorithm. Tasks
<IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img1.png"
ALT="$T_1$"> and <IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img3.png"
ALT="$T_3$"> share data, of which <IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img1.png"
ALT="$T_1$"> produces the data and <IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img3.png"
ALT="$T_3$">
plots it to screen. <IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img1.png"
ALT="$T_1$"> and <IMG
WIDTH="25" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img3.png"
ALT="$T_3$"> share semaphore <IMG
WIDTH="19" HEIGHT="21" ALIGN="BOTTOM" BORDER="0"
SRC="img5.png"
ALT="$S$"> which is used to
protect the integrity of the data.</CAPTION>
<TR><TD>
<DIV ALIGN="CENTER">
<IMG
WIDTH="387" HEIGHT="167" ALIGN="BOTTOM" BORDER="0"
SRC="img19.png"
ALT="\includegraphics[width=0.7\textwidth]{Figures/priority_inversion.eps}">
</DIV></TD></TR>
</TABLE>
</DIV><P></P>
<P>
Priority inversion usually occurs when there are three or more tasks
present and data is shared between the highest and lowest priority
tasks. Typically, the lowest priority task acquires the semaphore and
begins accessing the data. Some time later, the highest priority task
attempts to acquire the semaphore, finds it already locked by the
lowest priority task, and is therefore blocked by that task; the
lowest priority task then resumes execution. Later still, the medium
priority task begins executing and preempts the lowest priority
task. Consequently, the highest priority task must wait not only for
the medium priority task to finish, but also for the lowest priority
task to release the semaphore. The problem becomes more acute when
there is more than one medium priority task: the highest priority task
then has to wait an undetermined amount of time until all intermediate
priority tasks complete and the lowest priority task releases its
semaphore.
<P>
One solution is to carefully design each of the real time tasks so
that priority inversion cannot arise. However, this becomes
inordinately difficult as the number of tasks increases.
<P>
A powerful solution to the priority inversion problem was introduced
by Sha <I>et al.</I> in 1990 [<A
 HREF="node64.html#sha_rajkumar_lehoczky">SRL90</A>] under the
name of ``priority inheritance protocols'' (PIPs). Under a PIP, a
lower priority task that is blocking a higher priority task
automatically inherits, for as long as it blocks, the priority of the
highest priority task it is blocking. Two protocols were presented: a
basic priority inheritance protocol and a priority ceiling
protocol. Both solve priority inversion by effectively
<I>bounding</I> the maximum amount of time that the highest priority
task can be blocked by lower priority tasks. Under the former, the
higher priority task is blocked by <I>at most</I> the duration of one
critical section from <I>each</I> of the lower priority tasks that
share semaphores with it, while under the latter it is blocked by
<I>at most</I> the duration of <I>one</I> such critical section from
any of the lower priority tasks. Thus, by applying either of these
protocols, one can know <I>a priori</I> the worst case blocking time,
and hence the worst case completion time, of any task.
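<P>
As one concrete point of reference, POSIX threads expose both of these
protocols through mutex attributes. The sketch below requests the
basic inheritance protocol (PTHREAD_PRIO_INHERIT; PTHREAD_PRIO_PROTECT
selects a ceiling protocol instead). It is only an illustration of how
such a protocol is typically requested, not the specific mechanism
used by the systems discussed in this chapter.
<PRE>
/* Illustrative use of the POSIX mutex protocol attribute to obtain
   priority inheritance on a lock protecting shared data. */
#include &lt;pthread.h&gt;

pthread_mutex_t data_lock;

int init_data_lock(void)
{
    pthread_mutexattr_t attr;
    int err;

    err = pthread_mutexattr_init(&amp;attr);
    if (err != 0)
        return err;

    /* With PTHREAD_PRIO_INHERIT, a low priority task holding the lock
       temporarily runs at the priority of the highest priority task it
       blocks, bounding the blocking time as described above.
       PTHREAD_PRIO_PROTECT would select a priority ceiling instead. */
    err = pthread_mutexattr_setprotocol(&amp;attr, PTHREAD_PRIO_INHERIT);
    if (err == 0)
        err = pthread_mutex_init(&amp;data_lock, &amp;attr);

    pthread_mutexattr_destroy(&amp;attr);
    return err;
}
</PRE>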
<P>
<HR>
<!--Navigation Panel-->
<A NAME="tex2html163"
HREF="node5.html">
<IMG WIDTH="37" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="next"
SRC="/usr/share/latex2html/icons/next.png"></A>
<A NAME="tex2html161"
HREF="node3.html">
<IMG WIDTH="26" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="up"
SRC="/usr/share/latex2html/icons/up.png"></A>
<A NAME="tex2html155"
HREF="node3.html">
<IMG WIDTH="63" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="previous"
SRC="/usr/share/latex2html/icons/prev.png"></A>
<BR>
<B> Next:</B> <A NAME="tex2html164"
HREF="node5.html">Schedulability Analysis</A>
<B> Up:</B> <A NAME="tex2html162"
HREF="node3.html">Real Time Systems</A>
<B> Previous:</B> <A NAME="tex2html156"
HREF="node3.html">Real Time Systems</A>
<!--End of Navigation Panel-->
<ADDRESS>
Michael Barabanov
2001-06-19
</ADDRESS>
</BODY>
</HTML>