
Definition of the Problem

Suppose one is given a set of $n$ simple independent periodic tasks $\tau_1, \ldots, \tau_n$, where each task $\tau _i$ has minimum interrelease time (called period for short) $T_i$, worst case compute time $c_i$, and relative deadline $d_i$, where $c_i \leq d_i \leq T_i$. Each task generates a sequence of jobs; the release time of each job is separated from that of its predecessor by at least $T_i$. (No special assumptions are made about the first release time of each task.) Time is represented by rational numbers. A time interval $[t_1, t_2)$, $t_1 \neq t_2$, is said to be of length $t_2 - t_1$ and contains the time values greater than or equal to $t_1$ and less than $t_2$.
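The task model above can be captured in a few lines. The following is a minimal sketch (the class name `PeriodicTask` and the field names are illustrative, not from the paper); it records $T_i$, $c_i$, and $d_i$ and enforces the constraint $c_i \leq d_i \leq T_i$:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PeriodicTask:
    """A periodic (sporadic) task: jobs are released at least T apart,
    each needing at most c units of computation within d of its release."""
    T: float  # minimum interrelease time (period)
    c: float  # worst-case compute time
    d: float  # relative deadline

    def __post_init__(self):
        # The model requires 0 < c <= d <= T.
        if not (0 < self.c <= self.d <= self.T):
            raise ValueError("task must satisfy 0 < c <= d <= T")
```

Constructing `PeriodicTask(T=10.0, c=2.0, d=5.0)` succeeds, while a task violating $c_i \leq d_i \leq T_i$ is rejected at construction time.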

What we call a periodic task here is sometimes called a sporadic task. In this regard we follow Jane Liu [14], who observed that defining periodic tasks to have interrelease times exactly equal to the period ``has led to the common misconception that scheduling and validation algorithms based on the periodic task model are applicable only when every periodic task is truly periodic ... in fact most existing results remain correct as long as interrelease times of jobs in each task are bounded from below by the period of the task''.

Assume that the jobs of a set of periodic tasks are scheduled on $m$ processors preemptively according to an EDF or DM scheduling policy, with dynamic processor assignment. That is, whenever there are $m$ or fewer jobs ready they will all be executing, and whenever there are more than $m$ jobs ready there will be $m$ jobs executing, all with deadlines (absolute job deadlines for EDF, and relative task deadlines for DM) earlier than or equal to those of the jobs that are not executing.
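The dispatching rule above amounts to selecting, at every instant, the (at most $m$) ready jobs with the earliest deadlines. A minimal sketch of that selection step (the function name `select_running` and the job representation are assumptions for illustration):

```python
def select_running(ready_jobs, m):
    """Pick the jobs that execute on m processors under a global
    deadline-based policy.

    ready_jobs is a list of (deadline, job_id) pairs, where deadline is
    the absolute job deadline under EDF, or the relative task deadline
    under DM.  Returns the ids of the (at most m) jobs with the
    earliest deadlines; every running job's deadline is <= every
    waiting job's deadline, as the dispatching rule requires.
    """
    return [job_id for _, job_id in sorted(ready_jobs)[:m]]
```

For example, with three ready jobs and $m = 2$ processors, `select_running([(10, 'a'), (5, 'b'), (7, 'c')], 2)` runs jobs `'b'` and `'c'` and leaves `'a'` waiting.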

Our objective is to formulate a simple test for schedulability expressed in terms of the periods, deadlines, and worst-case compute times of the tasks, such that if the test is passed one can rest assured that no deadlines will be missed.

Our approach is to analyze what happens when a deadline is missed. Consider a first failure of scheduling for a given task set, i.e., a sequence of job release times and compute times consistent with the interrelease and worst-case compute time constraints that produces a schedule with the earliest possible missed deadline. Find the first point in this schedule at which a deadline is missed. Let $\tau _k$ be the task of a job that misses its deadline at this first point. Let $t$ be the release time of this job of $\tau _k$. We call $\tau _k$ a problem task, the job of $\tau _k$ released at time $t$ a problem job, and the time interval $[t, t+d_k)$ a problem window.

Definition 1 (demand)   The demand of a time interval is the total amount of computation that would need to be completed within the interval for all the deadlines within the interval to be met.

Definition 2 (load)   The load of an interval $[t, t + \Delta)$ is $W/\Delta$, where $W$ is the demand of the interval.
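Definitions 1 and 2 can be illustrated with a small computation. The sketch below is a simplification (function names and the job representation are assumptions): it charges to the interval the full compute time of every job whose absolute deadline lies inside it, which matches Definition 1 when each such job is released at or after the start of the interval.

```python
def demand(jobs, t, delta):
    """Demand of the interval [t, t + delta): total computation that
    must complete in the interval for all deadlines in it to be met.
    jobs is a list of (release, compute, deadline) triples; this sketch
    assumes each job with a deadline in the interval is also released
    within it, so all of its compute time counts."""
    return sum(c for (release, c, deadline) in jobs
               if t <= deadline < t + delta)

def load(jobs, t, delta):
    """Load of [t, t + delta) = W / delta, where W is the demand."""
    return demand(jobs, t, delta) / delta
```

For instance, with jobs `[(0, 2, 5), (1, 3, 8), (2, 1, 20)]`, only the first two have deadlines in $[0, 10)$, so the demand of that interval is $2 + 3 = 5$ and its load is $5/10 = 0.5$.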

If we can find a lower bound on the load of a problem window that is necessary for a job to miss its deadline, and we can show that a given set of tasks could not possibly generate so much load in the problem window, that would be sufficient to serve as a schedulability condition.


Ted Baker 2003-09-11