
Real-Time Systems Practice

York University
Computer Science
1 Introduction
1.2 Overview

• Most of today’s cars, gadgets, and electromechanical systems use embedded
software, performing what was previously done by dedicated mechanical and
electrical components.
• Embedded software appears in everything from telephones to medical systems and
manufacturing.
• Its main task is to engage the physical world, interacting directly with sensors and
actuators.
• As hardware capabilities improve, techniques from mainstream systems engineering,
such as automatic memory management, seem within reach for embedded
systems.
• Researchers in systems and computer science are beginning to work on the very real
and different problems of embedded systems.
• Because of the need to respond to “real-time” events, these embedded systems are
called real-time systems.

1.3 Definitions

© 2001-02 I. Mantegh Lecture Notes 1


Real-time system: A software system whose correct functioning depends both on the
results produced and on the time at which they are produced.
Soft RT system: A system whose operation is degraded if results are not produced
according to specified timing requirements.
Hard RT system: A system whose operation is incorrect if results are not produced
according to the timing constraints; catastrophic results may then follow.

1.4 Stimulus/Response System

The behavior of a RT system can be defined by listing stimuli, associated responses and
the time at which the response must be made. There are, in general, two types of stimuli:
Periodic and aperiodic.
The RT software architecture should guarantee a transfer of control to the appropriate
modules as soon as a stimulus is received. This is normally achieved by designing the
system as a set of concurrent, cooperative processes. Part of the RT system is dedicated
to managing these processes.

[Diagram: the environment feeds several sensors whose readings enter the RT control
system, which drives actuators acting back on the environment.]
Design process (Stimulus/Response)
∗ Identify stimuli and associated responses
∗ Identify timing constraints for each pair
∗ Aggregate the stimuli and response processing into a number of concurrent
processes
∗ Associate a process with each class of stimulus and response:



∗ For each pair, design algorithms. Evaluate the amount of processing and time
required to complete each process.
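The stimulus-to-process association above can be sketched as a dispatch table. This is only an illustrative sketch: the names (StimulusClass, Handler, dispatch) and the three stimulus classes are assumptions, not part of the notes.

```c
#include <assert.h>

/* Illustrative stimulus classes: one periodic, two aperiodic. */
typedef enum { STIM_PERIODIC_TICK, STIM_SENSOR_ALARM, STIM_OPERATOR_CMD } StimulusClass;

typedef int (*Handler)(void);          /* a handler process returns a status code */

static int handle_tick(void)  { return 0; }   /* periodic sampling process */
static int handle_alarm(void) { return 1; }   /* aperiodic alarm process */
static int handle_cmd(void)   { return 2; }   /* aperiodic operator-command process */

/* one entry per stimulus class, mirroring "associate a process with
   each class of stimulus and response" */
static const Handler table[] = { handle_tick, handle_alarm, handle_cmd };

int dispatch(StimulusClass s)
{
    return table[s]();                 /* transfer control as soon as a stimulus arrives */
}
```

In a real executive the handlers would be concurrent processes rather than plain function calls; the table only illustrates the mapping.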

1.5 RT Executive
∗ Responsible for process management & resource allocation
∗ Starts the appropriate process after receiving a stimulus
∗ Allocates processor and memory
∗ Analogous to an OS
∗ Components:
1. RT clock
2. Interrupt handler: manages aperiodic requests
3. Scheduler
4. Resource manager: given a process scheduled for execution, allocates memory
and processor
5. Dispatcher: starts the execution
6. Configuration manager: dynamic reconfiguration of the system
7. Fault manager: detects hardware and software faults and takes appropriate
actions.
Design challenges
∗ Software often encapsulates domain expertise, such as control engineering, sensors,
etc. It requires understanding of the domain and supporting technology, such as
signal processing.
∗ Network capability, which introduces significant complications such as
downloadable modules.
∗ Existing software design techniques aren’t suitable. Embedded s/w is often designed
by engineers who are classically trained in the domain.



∗ Computation theories have their roots in data transformation, not in interaction
with sensors, actuators, and humans.

RT Systems development stages


∗ Design a scheduling system that ensures processes are started at the appropriate
times to carry out the required processing within the time constraints.
∗ Integrate the system under the control of a RT executive.
∗ Coordinate communication and networking
∗ Prototyping
∗ Human-Machine Interface
∗ Simulation & testing

1.6 Characteristics of RT System


∗ Size and complexity
∗ Manipulation of real numbers: modeling, solving equations, math intensive
∗ Reliability and safety: fault tolerance, robustness, redundancy
∗ Concurrency and communication between various components
∗ Time constraint facilities
∗ Interaction with hardware
∗ Efficient implementation; the choice of programming language is important



2 Performance Measures for RT systems
2.1 Overview
∗ RT systems are normally used in critical applications and must be carefully designed
and validated.
∗ Validation:
o Design check, Simulation, Prototyping
o Characterizing performance and reliability
o Measuring performance and reliability
∗ Performance measures must:
1. Be concise and represent efficient encoding of relevant info.
2. Give an objective basis for comparison.
3. Give objective criteria for design optimization.
4. Represent verifiable facts and/or quantifiable entities.
∗ The measures must depend on the application: the application dictates the
criteria for asserting performance.
∗ They should quantify the goodness of the system and relate it to measurable or
estimable factors.

2.2 Traditional Performance Measures


Reliability: the probability that the system will not undergo failure over a prescribed time
interval.
Throughput: The average number of instructions per unit of time.
A set of failure modes, i.e. failed system states, has to be defined with respect to the
application and environment. The system is up if its state ∉ the set of failed states.
The probability of the system being outside the failed states throughout some interval is
the capacity reliability over that interval.
Expected capacity survival time: The expected time for the capacity reliability to drop to a
specific value.
Mean computation before failure: The mean amount of computation done before failure.
Computation reliability R(s, t, T): The probability that the system can start a task T at
time t in state s, and successfully execute it to completion.
Example:
Consider a reconfigurable redundant system of processor triads (i.e. a group of 3 CPUs
running the same instructions and doing the same task in parallel). The output of a triad is
voted on.



Assume a system of 8 functional processors: 2 triads and 2 spare processors. If any
triad processor fails, the system switches in a spare one.
For system analysis, a probability distribution is normally used to predict processor
failures. The Poisson distribution is suitable for independent events like natural
processor failure in a network.
Define:
∗ πi(ξ): the probability that the system is in state i at time ξ.

∗ Reliability = 1 − Pfail = 1 − Σ_{i ∈ Fail Set} πi(ξ)

∗ Fail Set = {s | s is a system state defined as failed}

If each triad has a throughput of x instructions per unit time, the throughput of the
system at time ξ is x·Σ_{i=3}^{5} πi(ξ) + 2x·Σ_{i=6}^{8} πi(ξ).

Plain English: 8 to 6 CPUs alive → 2 triads; 5 to 3 CPUs alive → 1 triad; 2 to 0 CPUs
alive → system is down. To calculate total throughput, sum up the throughputs of all
cases, each multiplied by the probability of its case.
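The throughput formula above can be evaluated numerically. In this sketch, pi[i] is assumed to hold πi(ξ), the probability that exactly i processors are alive; the distribution in the demo function is made up for illustration.

```c
#include <assert.h>

/* Expected throughput x·Σ_{i=3..5} πi + 2x·Σ_{i=6..8} πi, where pi[i] is
   the probability that exactly i of the 8 processors are alive. */
double expected_throughput(double x, const double pi[9])
{
    double one_triad = 0.0, two_triads = 0.0;
    for (int i = 3; i <= 5; i++) one_triad  += pi[i];  /* 3..5 alive -> 1 triad */
    for (int i = 6; i <= 8; i++) two_triads += pi[i];  /* 6..8 alive -> 2 triads */
    return x * one_triad + 2.0 * x * two_triads;       /* 0..2 alive contributes 0 */
}

/* Worked example with a made-up distribution: all 8 alive with prob 0.9,
   one triad lost (5 alive) with prob 0.1, triad throughput x = 100. */
double demo_throughput(void)
{
    double pi[9] = {0};
    pi[8] = 0.9;
    pi[5] = 0.1;
    return expected_throughput(100.0, pi);  /* 200·0.9 + 100·0.1 = 190 */
}
```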
Assume a Poisson distribution for the failure mechanism, with rate λ. Failures are
permanent. With a Markov diagram we can show the different states of the system,
starting with 9 processors.

Side notes
• Markov chains: a sequence of random variables X1…Xn such that
Prob(Xn+1 = xn+1 | X1 = x1, X2 = x2, … Xn = xn)
= Prob(Xn+1 = xn+1 | Xn = xn)
Xi is the state of the Markov chain at discrete time i. Plain English: the future state
of the chain depends only on the present state, not on the past.
∗ The process is a discrete-time process, i.e. there are countably many random
variables which define the process.
∗ The state space is countable or finite.
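One step of such a chain can be sketched as a vector-matrix product; next[j] depends only on the current distribution, never on the history. The 3-state chain and transition matrix in the demo are assumptions for illustration, not the triad system's actual Markov diagram.

```c
#include <assert.h>

#define NSTATES 3

/* One discrete-time step: next[j] = Σ_i cur[i] · P[i][j]. */
void markov_step(const double cur[NSTATES],
                 const double P[NSTATES][NSTATES],
                 double next[NSTATES])
{
    for (int j = 0; j < NSTATES; j++) {
        next[j] = 0.0;
        for (int i = 0; i < NSTATES; i++)
            next[j] += cur[i] * P[i][j];
    }
}

/* Worked example: start surely in state 0; from each state, stay with
   probability 0.5 or decay to the next state with probability 0.5;
   state 2 is absorbing (a permanent failure state). */
double demo_prob_state1(void)
{
    double cur[NSTATES] = {1.0, 0.0, 0.0};
    double P[NSTATES][NSTATES] = {
        {0.5, 0.5, 0.0},
        {0.0, 0.5, 0.5},
        {0.0, 0.0, 1.0},
    };
    double next[NSTATES];
    markov_step(cur, P, next);
    return next[1];     /* probability of having decayed to state 1: 0.5 */
}
```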



2.3 Drawbacks of traditional measures
They do not consider the h/w & s/w interaction:
∗ Reliability only focuses on either h/w or s/w.
∗ Availability does not tell anything about individual downtimes.
∗ Throughput only addresses the average computing capacity.

2.4 Performability
∗ Ties computer performance to controlled-process performance.
∗ Includes the relations between different subsystems.
∗ Controlled-process performance is defined with several Accomplishment Levels (ALs).
∗ Each AL is associated with the execution of a certain set of control tasks.
∗ The performability of the RT system is defined as the probability that the system will
allow each AL to be met. If there are n ALs A1, A2, … An, the performability of the RT
system is:
(P(A1), P(A2), … P(An)),
P(Ai): the probability that the system allows the controlled process to reach AL Ai.
∗ With performability, the performance quality of the controlled process, as seen by
the user, is linked to the technical performance of the RT system.

Hierarchical review of performance [ref. 1]


View 0 User’s view of the process ALs.
In terms of state variables, specify the conditions that enable the user to
distinguish one grade of performance from another.
View 1 Accomplishment of tasks as a function of the operating environment.
Specify the tasks that must be run, together with the imposed constraints.
View 2 Capacity of RT system resources to execute algorithms. Specify the
algorithms and the required resources.
View 3 H/w structure, OS, application software.
The components and resource attributes needed to meet the requirements of
view 2.

2.5 Cost functions and Hard Deadlines


Hard deadline: A controlled process is meant to operate within a desired state space; if it
leaves this space, failure occurs.



The hard deadline of any task is the Max system response time that will still allow the
process to be within the desired state space.
Cost Function: P(ξ ) − P(0) , where P(ξ ) is the performability associated with a response
time ξ .
Example:
A particle with mass m is constrained to move along the x-direction and is subject to
bi-directional thrusts.

SA = [−b, +b]
State vector: Σ = (x, V, a): coordinate, velocity, acceleration.
Controller: produce enough thrust to keep the particle within SA.
The hard deadline and cost function are functions of the process state (x, V, a):
ς(0, 0, 0) → deadline: ∞
ς(x, 0, 0) → deadline: ∞
ς(+b, V, 0) → deadline: 0



2.6 Estimated Program Run Times
Factors involved:
∗ Source code: how many lines? Loads & stores, jumps & gotos, ifs, data accuracy…
∗ Compiler: compiled or interpreted code, optimization…
∗ Machine architecture: memory access time, number of registers, cache…
∗ O/S: task scheduling, memory management…



3 Designing RT Systems
3.1 Major stages
• Functional requirements
• Review analysis
• Top-level design
o h/w components
o h/w interface
o s/w subsystems
o s/w interface
o startup and shutdown processes
o error handling
o verification
• Detailed design
• Prototyping
• Human-machine interface design
• Refinement, if needed (back to functional requirements)
• Implementation
• Simulation and testing

3.2 Design activities


• Decomposition into smaller components that can be handled individually.
• Abstraction: Hiding information at each level of component and defining interfaces.
o ADT: abstract data types
o Separate compilation

3.3 Encapsulation
• Specification and development of software subcomponents, with well defined
function inter-connections and interfaces.
• If the specification of the entire software system can be verified based on the specs
of its immediate subcomponents, the decomposition is compositional, e.g. in
sequential programs.
• Class construct: used in OO languages for object abstraction.
o Object abstraction: harder to design and analyze.
o Process abstraction: more applicable to RT systems.

3.4 Cohesion and coupling


How should a large system be decomposed into modules? Cohesion and coupling are
two metrics used for evaluating a good encapsulation.
Cohesion: the module’s integrity.
1. Coincidental: only a superficial link
2. Logical: elements grouped by their role in the whole system, not related within the
module
3. Temporal: time-bound
4. Procedural: used in the same procedure
5. Communicational: data exchange
6. Functional: perform a single function
Coupling: a measure of the interdependence of modules.
• High: if modules pass control information
• Low: if only data is exchanged
A good design decomposition: strong cohesion, loose coupling.

3.5 Forms of system representation (notation)


• Informal (natural language)
• Structured (graphic methods, flowcharts)
• Formal (Petri Nets, Communicating Sequential Processes a.k.a. CSP)
Knowledge representation
1. Informal: natural language, simple sketches and illustrations
2. Structured: well-defined illustrations, standard graphic representation, e.g.
Precedence Graph
3. Formal: including methods of analysis and verification, Timed Petri Net.
Requirement specification includes modeling the system as well as its immediate
environment.

3.6 Human-Computer Interface (HCI)


• Most RT Systems require direct communication between software and human
operator(s).
• Design of HCI is one of the most critical stages in RT architecture design, since
operator(s) are the greatest source of uncertainty in command structure.
• HCI has to be a separate module in software architecture, with well-defined
interfaces.
Issues to consider:
• What the communicating objects are
• How these objects are instantiated
• Predicates about the allowable range of the operator’s actions in any given system
state, while objects are running
• Who is in control at each state (normally mixed initiative)
• The interface control software must be able to catch operator slips
Major factors in HCI design:
• Predictability – Sufficient information is provided to the operator so that he can
predict the effect of a command.
• Commutability – The nature of a command should not be affected by the order in
which its parameters are entered.



• Sensitivity – Changes of modes and configurations should not go unnoticed.



3.7 Case study
Build a simple IR TV remote control
Problem statement
Develop a simple handheld remote with a three-button controller that can remotely
operate a specific TV set. It will be battery powered and use a built-in IR transmitter.
• One button → Power
• Two buttons → Cycle through channels
Project constraints
• The remote must use a customer-supplied ROM. It stores the unique protocol that the
TV understands, such as the manufacturer’s signature code commands and timing
specification.
• The remote must conform to the TV’s timing specs. The timing of the transmission of
each bit of the signature code is important. The signal has two parts: signature and
command.
Required:
1. A diagram of the masked ROM for the h/w design engineer.
2. Detailed description of the signature signal’s timing for the programmer.
Customer Requirements
• Power
• Weight
• Color
• User Interface issues
• IR Constraints to work with the TV set
H/W and S/W Specifications
• Processor (e.g. 4-bit, 8-bit microcontroller chip)
• O/S: No kernel necessary
• Programming language: C if compiler and debugger are available for the choice of
microcontroller, Assembly otherwise.
• Third party S/W: Not likely, but check.
• Third party H/W: Push-buttons, IR LED
Hardware Components:
• Construct block diagrams



• Show individual circuit boards, peripherals, etc.
• Identify which components can be off-the-shelf or adapted from a previous or
similar project.
• Show how to communicate with the h/w and the interconnection of components.
• Give a visual overview of the project.
• Some features could be implemented in either h/w or s/w, depending on the
project constraints and costs.
Hardware Interfaces
• I/O Ports: A list of I/O ports used by h/w, the port addresses, and all the commands
that can be written to each port.
• H/W registers: for each register, the bit assignments and a description of how it is
read or written are needed.
• Memory addresses for shared memory or memory-mapped I/O.
• H/W interrupts, if any exist.
• Example: pushbutton, LED, timer, ROM. Assigned I/O ports based on the processor.
Memory addresses for the ROM.
Software subsystems
• Identify self-contained subsystems that can be worked on semi-independently:
user interfaces, data collection, data processing, etc.
• Break each subsystem down further until you reach manageable subsystems.
Detail the functionality of each subsystem.
• Example:
o Button monitoring: transfer a code identifying the button state.
o Command lookup: lookup table of global data in ROM.
o Command transmission: responsible for firing the LED and using the timer to
ensure that timing requirements are met.
• Command lookup subsystem has one interface function, which takes a button code
as input and returns a command to transmit to TV.
• Need to define a data structure for a TV command. A variable-size array of data
structures identifying LED on or off, and a duration in ms.
typedef struct
{
    int FIELDon;      // true if LED on, false if LED off
    int nmicrosec;    // # of microseconds
} LED_COMMAND;
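As a sketch of how this data structure might be used, the helper below (a hypothetical function, not part of the required design) sums the durations of an LED_COMMAND array, e.g. to check a burst against the TV's timing spec. The example burst values are made up.

```c
#include <assert.h>

typedef struct
{
    int FIELDon;      /* true if LED on, false if LED off */
    int nmicrosec;    /* # of microseconds */
} LED_COMMAND;

/* Total on-air time of a command: on and off slots both take time. */
long total_duration_us(const LED_COMMAND *cmd, int n)
{
    long total = 0;
    for (int i = 0; i < n; i++)
        total += cmd[i].nmicrosec;
    return total;
}

/* Worked example with an illustrative (made-up) signature burst. */
long demo_duration(void)
{
    LED_COMMAND sig[] = { {1, 9000}, {0, 4500}, {1, 560} };
    return total_duration_us(sig, 3);   /* 9000 + 4500 + 560 = 14060 us */
}
```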

Software Interfaces
• Specify interfaces that each subsystem furnishes.
• Detail the application programming interface (API) by specifying function calls,
data structures, and global data used in each subsystem’s interface.
• Create header files with function prototypes, data structure declarations, class
declarations, so on.
• Example: button monitoring subsystem needs one interface function, which waits till
a button is pressed and then returns a value identifying the button.
#define Button_up 0
#define Button_down 1
#define Button_power 2
int GetButtonPress(void);
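The top-level flow can then be sketched with stubbed hardware. GetButtonPress and LookupCommand below are stand-ins (the real lookup data comes from the customer-supplied ROM), and the command codes are made up for illustration.

```c
#include <assert.h>

#define Button_up    0
#define Button_down  1
#define Button_power 2

/* Stub: pretend the POWER button was pressed. The real function blocks
   until a button is pressed, then returns its code. */
static int GetButtonPress(void)
{
    return Button_power;
}

/* Stub for the ROM lookup table; the codes are invented, not the TV's. */
static int LookupCommand(int button)
{
    static const int command[] = { 0x10, 0x11, 0x1F };
    return command[button];
}

/* One pass of the main loop: button -> command to hand to transmission. */
int remote_poll_once(void)
{
    int button = GetButtonPress();
    return LookupCommand(button);
}
```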

Startup and shutdown process


Sequence of events that occur during these processes. Specify the details of h/w and s/w
subsystem initialization and in what order the subsystems should be initialized. For
shutdown specifically, operations such as flushing or closing connections should be
described.
Example:
• Button monitoring subsystem performs any h/w reset necessary for the button h/w.
• Command transmission initializes the h/w timer and installs an interrupt handler to
process timer interrupts.



4 Task Assignment and Scheduling

The allocation/scheduling problem:


Given a set of tasks, task precedence constraints, resource requirements, task
characteristics and deadlines, we are asked to devise a feasible allocation/schedule on a
given computer.

Task:
An activity that consumes resources and produces one or more results.

Precedence Graph:
Represents the precedence relations between tasks. Notation:
α(T): the set of tasks that must precede T.
i α j: Ti must precede Tj.
Rule (transitivity): i α j & j α k ⇒ i α k.

• Each task has resource requirements and requires some execution time.
• A resource may have to be held exclusively by one task at a time.

Release time: The time at which all the data required to begin the task are available.
Deadline: The time by which the task must complete its execution.
Relative deadline: The absolute deadline minus the release time (the total time
available for the operation).



ra0: release time of task a0
da0: absolute deadline of a0
Ca0: computation / execution time of a0
Schedule:
A mapping from a possibly infinite set of process execution units to a possibly infinite
set of processor time units on one or more processor time axes.
S: set of processor time units → set of tasks.
S(i, t): the task scheduled to be running on processor i at time t.
• We assume that each processor is associated with a processor time axis; each
time axis starts from 0 and is divided into a sequence of processor time units.

Feasible schedule:
A schedule in which the start time of every process execution is greater than or equal to
that process’s release time or request time, and its completion time is less than or equal
to that process execution’s deadline.

Ex: A feasible preemptive schedule for the periodic process executions p0, p1, p2 and the
asynchronous process executions a0, a1, a2 within the finite time interval [0, 56]. a1
preempts p1 at time 27, and a2 preempts p2 at time 50.



4.2 Pre-run-time and run-time scheduling
Run-time scheduling, AKA on-line scheduling:
• The schedule is computed progressively on-line as processes arrive.
• The scheduler does not assume any knowledge about major characteristics of
processes.
Advantage:
• Flexible
• Easily adapts to changes in the environment
Disadvantage:
• High run-time cost

Pre-run-time scheduling, AKA static or off-line scheduling


• The schedule is computed off-line.
• Requires the major characteristics of the processes to be known in advance.
• Significantly reduces the amount of run-time resources required for scheduling and
context switching.
• For satisfying timing constraints in a large, complex real-time system with many
hard-deadline processes, predictability of the system’s behavior is the most
important concern; pre-run-time scheduling is often the only practical means of
providing predictability in a complex system.
• Even though the exact timing characteristics of processes and events in the computer
system and environment may not be predictable, it is possible to use worst-case
estimates of the timing characteristics with a pre-run-time scheduling strategy to
guarantee that timing constraints will be satisfied, and thus guarantee predictable
behavior.



4.3 Using pre-run-time scheduling to schedule periodic processes
Compute off-line a schedule for the entire set of periodic processes occurring within a
time period equal to the least common multiple (LCM) of the periods of the given set
of processes; then, at run-time, execute the periodic processes in accordance with the
previously computed schedule.
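The length of the pre-computed schedule (the hyperperiod) can be sketched as follows; the function names are illustrative, and the periods in the demo are made up.

```c
#include <assert.h>

/* Greatest common divisor via Euclid's algorithm. */
long gcd(long a, long b)
{
    while (b != 0) { long t = a % b; a = b; b = t; }
    return a;
}

long lcm(long a, long b)
{
    return a / gcd(a, b) * b;   /* divide first to limit overflow */
}

/* LCM of all process periods: the off-line schedule repeats every h units. */
long hyperperiod(const long *period, int n)
{
    long h = 1;
    for (int i = 0; i < n; i++)
        h = lcm(h, period[i]);
    return h;
}

/* Worked example: periods 4, 6, 10 give a hyperperiod of 60. */
long demo_hyperperiod(void)
{
    long periods[] = {4, 6, 10};
    return hyperperiod(periods, 3);
}
```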



4.4 Static priority scheduling
• Each process is given a priority value.
• The scheduler runs the process with the highest priority.
Problems:
Often, priorities are assigned to processes based on how “important” each process is
subjectively perceived to be, without a thorough analysis of how the assignment
will affect the timing of the whole system.
Even when assigned objectively, priorities alone are inadequate for controlling the
execution order of processes:
• At run-time in a large, complex RT system there may be many different orders in
which processes arrive and request use of system resources, and there may be
relationships between processes and resource-sharing constraints that must be
simultaneously satisfied. It is generally impossible to map the many different
execution orderings that processes may need onto a rigid hierarchy of priorities.
• Static priorities are capable of producing only a very limited subset of the possible
schedules. In general, the smaller the subset of schedules that a scheduler can
produce, the lower the level of processor utilization that can be achieved.
Processor utilization of a schedule =
(total number of process execution units) / (total number of processor time units)

4.5 Rate-Monotonic Scheduling Algorithm (or RM)


Assumptions:
• All processes are periodic and completely preemptable.
• The release time of each process p is rp = 0, i.e. the beginning of its period.
• The deadline of each process p is dp = prdp, i.e. the end of its period prdp.
• Rather than assigning a fixed priority to each process according to its importance,
this algorithm assigns a fixed priority to each process based on its period: the shorter
the period, the higher the priority.
• At each time slice, run the process that has the highest priority.
• Schedulability test: a feasible schedule can be found for a set of n periodic processes
if the overall processor utilization is less than n(2^(1/n) − 1). For large n this bound
asymptotically approaches ln 2 ≈ 0.693.



Importance:
While rate-monotonic scheduling is a static priority scheme and has all the
disadvantages of static priority schemes, it has the following property: if any static
priority scheduling scheme can find a feasible schedule for a set of periodic,
completely preemptable processes, then the rate-monotonic scheduling algorithm can
also find a feasible schedule for that set of processes.

4.6 Time analysis for RM


Example for 3 tasks
Consider a case of 3 tasks T1, T2 and T3 where P1 < P2 < P3, with given execution
times e1, e2, e3, and task phasings I1 = I2 = I3 = 0 (context switch and other overheads
are ignored).
The first iterations of the tasks are released at t = 0. We only consider the first iteration
of the longest of the three periods for now.
First task:
T1 has the highest priority and preempts everything else; e1 ≤ P1 must hold for T1 to be
schedulable.
Second task:
Let t be the time at which task T2 finishes. Then T2 executes successfully if
∃t | t ∈ [0, P2] • t = ⌈t/P1⌉·e1 + e2
Plain English: t is a moment in T2’s period. The period of the first task may be smaller
than the period of the second task, so T1 may have to run several times before moment t
arrives. In fact, even if it isn’t smaller, T1 may still run once in that interval. So, if after
running the first task the necessary number of times there is still enough time in the
second task’s period for the second task to execute, then the second task is schedulable.
Note that an incomplete run counts as a full one, hence the ceiling function. In other
words:
∃k | k ∈ Z • P2 ≥ kP1 ≥ ke1 + e2
This is a necessary and sufficient condition for T2 to be schedulable under RM.
Third task:
Same as the t above, but with the first and second tasks’ combined execution time over
the third task’s period added:
∃t | t ∈ [0, P3] • t = ⌈t/P1⌉·e1 + ⌈t/P2⌉·e2 + e3
Note that t is in the interval from 0 to the end of the THIRD period. Equivalently:
∃k1, k2 | k1, k2 ∈ Z •
  P3 ≥ k1P1
  k1P1 ≥ k1e1 + k2e2 + e3
  k2P2 ≥ k1e1 + k2e2 + e3

General Necessary and Sufficient Condition


The total amount of work demanded by the i tasks T1..Ti in the time interval [0, t] is
the sum of their execution times, each multiplied by the number of (started) periods of
its task:
Wi(t) = Σ_{j=1..i} ej·⌈t/Pj⌉
It is measured in “busy time slices”. Then the necessary and sufficient condition for all
these tasks to be schedulable, if the phasing times are negligible, is
∃t | t ∈ Z • Wi(t) ≤ t
or, in plain English, that there are enough time slices for every task to do everything it
wanted to do. Formally:
Li(t) = Wi(t) / t
Li = min Li(t), 0 < t ≤ Pi
L = max{Li}
This defines the same thing via the fraction of the total number of time slices used.
Then the necessary and sufficient condition asserts that, given a set of n periodic tasks
(P1 ≤ P2 ≤ … ≤ Pn), task Ti can be feasibly scheduled under RM iff
Li ≤ 1
or
min_{0 < t ≤ Pi} (Wi(t)/t) ≤ 1

Wi only needs to be computed at the scheduling points


τi = { l·Pj | j = 1…i; l = 1…⌊Pi/Pj⌋ }
Then we can produce two scheduling conditions:
1. If min_{t∈τi} (Wi(t)/t) ≤ 1, task Ti is RM-schedulable.
2. If max_{i∈{1..n}} { min_{t∈τi} (Wi(t)/t) } ≤ 1, then the entire set {Ti | i = 1…n}
is RM-schedulable.
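The scheduling-point test can be sketched as code. This is an illustrative implementation, assuming tasks are indexed in increasing period order (i.e. RM priority order); the example task sets are made up.

```c
#include <assert.h>

/* Exact RM test: for each task T_{i+1}, check whether W_i(t) ≤ t at some
   scheduling point t = l·P[j] ≤ P[i], where W is the time demand
   W(t) = Σ_{k≤i} e_k·⌈t/P_k⌉. Returns 1 if the whole set is schedulable. */
int rm_exact_test(const long *e, const long *P, int n)
{
    for (int i = 0; i < n; i++) {
        int ok = 0;
        for (int j = 0; j <= i && !ok; j++) {
            for (long l = 1; l * P[j] <= P[i] && !ok; l++) {
                long t = l * P[j];
                long W = 0;
                for (int k = 0; k <= i; k++)
                    W += e[k] * ((t + P[k] - 1) / P[k]); /* e_k·⌈t/P_k⌉ */
                if (W <= t) ok = 1;      /* demand met by time t */
            }
        }
        if (!ok) return 0;               /* this task misses its deadline */
    }
    return 1;
}

/* e = {1,2,3}, P = {4,6,12}: U ≈ 0.833 fails the utilization bound, but
   W(12) = 3·1 + 2·2 + 3 = 10 ≤ 12, so the set is in fact schedulable. */
int demo_exact_test(void)
{
    long e[] = {1, 2, 3}, P[] = {4, 6, 12};
    return rm_exact_test(e, P, 3);
}

/* e = {2,2,3}, P = {4,6,12}: U > 1, so no schedule can exist. */
int demo_exact_test_fail(void)
{
    long e[] = {2, 2, 3}, P[] = {4, 6, 12};
    return rm_exact_test(e, P, 3);
}
```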

