
Embedded Systems

Definition
An embedded system is any computer that is a component in a
larger system and that relies on its own microprocessor.
An embedded system is a computer system with a dedicated
function within a larger mechanical or electrical system, often
with real-time computing constraints. It is embedded as part of
a complete device, often including hardware and mechanical parts.
An embedded system is a system that has embedded hardware,
which makes it dedicated to an application, a specific part of an
application, a product, or a part of a larger system.

General Purpose Computing System vs Embedded System

1. GPCS: A system consisting of HW and a general purpose operating
   system for executing a variety of applications.
   ES: A system consisting of special purpose HW and SW (OS)
   embedded for executing a specific set of applications.

2. GPCS: Contains a general purpose OS.
   ES: Contains an RTOS.

3. GPCS: Applications are alterable (programmable) by the user;
   the user can re-install the OS or user applications.
   ES: The SW is pre-programmed and non-alterable by the end-user.

4. GPCS: Performance is the key deciding factor in the selection
   of the system; faster is always better.
   ES: Application specific requirements (performance, power,
   memory etc.) are the key deciding factors.

5. GPCS: Not at all tailored towards reduced operating power
   requirements or options for different levels of power management.
   ES: Highly tailored to take advantage of the power saving modes
   supported by the HW and the OS.

6. GPCS: Response requirements are not time-critical.
   ES: In certain mission critical systems, the response time
   requirement is critical.

7. GPCS: Need not be deterministic in execution behaviour.
   ES: Execution behaviour is deterministic for certain Hard Real
   Time systems.

An ES is a
REACTIVE system:
Accepts input
Performs calculations
Generates output

and a REAL TIME system:
Specifies an upper bound on the time required to perform the
input/calculation/output cycle in reaction to external events
Interacts with the physical environment
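
The reactive input/calculation/output cycle with an upper time bound can be sketched in C as a polled control loop. This is a minimal sketch assuming a POSIX clock; read_sensor, compute, write_actuator and the 10 ms bound are hypothetical placeholders, not from the source.

```c
#include <time.h>

#define DEADLINE_MS 10  /* hypothetical upper bound on the reaction time */

/* Hypothetical device-specific I/O routines (assumed, not from the source). */
extern int  read_sensor(void);
extern int  compute(int input);
extern void write_actuator(int output);

static long elapsed_ms(struct timespec start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start.tv_sec) * 1000L
         + (now.tv_nsec - start.tv_nsec) / 1000000L;
}

void reactive_loop(void)
{
    for (;;) {
        struct timespec start;
        clock_gettime(CLOCK_MONOTONIC, &start);

        int input  = read_sensor();     /* accept input          */
        int output = compute(input);    /* perform calculations  */
        write_actuator(output);         /* generate output       */

        /* A real-time system bounds this reaction time; in a hard
           real-time system exceeding the bound is a failure. */
        if (elapsed_ms(start) > DEADLINE_MS) {
            /* deadline miss: log it, degrade, or enter a safe state */
        }
    }
}
```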

Features
1. Reliability: the probability that the system works correctly,
   without failure, over a given period
2. Maintainability: can be easily repaired or replaced
3. Availability: the fraction of time for which the system is
   operational and ready for use
4. Safety: in the event of system failure no harm should come to
   people or property; should be fail-proof
5. Security: resilience of the system against unauthorized use
6. High speed operation
7. Low power consumption: ES are constrained for power
8. Small size and low weight
9. Adaptability
10. Accuracy

Features (contd)
11. ES do a very specific task; they cannot be programmed to do
    different things
12. They have limited memory resources, i.e. usually no secondary
    storage devices
13. They should operate in extreme environmental conditions
14. Cost effective
15. Dedicated user interface (no mouse, keyboard or screen)
16. Many ES have real-time constraints
17. Run time efficiency: minimum resources for the dedicated function
18. Multirate operation: different operations with different time
    scales of operation are to be handled

Classification of ES
I. Based on generation
II. Based on complexity and performance
requirements
III. Based on functionality
IV. Based on deterministic behavior
V. Based on triggering

I. Based on generation
Based on the order in which the ES evolved
1. First Generation:
   Built around 8 bit processors (8085, Z80) and 4 bit
   microcontrollers
   Simple hardware circuits
   Firmware developed in assembly code
   Eg. stepper motor control units, digital telephone keypads
2. Second Generation:
   Built around 16 bit microprocessors and 8 or 16 bit
   microcontrollers
   The instruction set was much more complex and powerful
   than 1st generation processors
   Some had embedded operating systems
   Eg. Data Acquisition Systems (DAQ), SCADA

Based on generation (contd)
3. Third Generation:
   Built around 32 bit processors and 16 bit microcontrollers
   Concept of domain specific processors like DSPs (Digital
   Signal Processors) and ASICs (Application Specific Integrated
   Circuits) was introduced
   More complex and powerful instruction set, with the concept
   of instruction pipelining
   The basic instruction cycle is broken up into a series of stages
   called a pipeline. Rather than processing each instruction
   sequentially (finishing one instruction before starting the next),
   each instruction is split into a sequence of steps so that
   different steps can be executed in parallel and instructions can
   be processed concurrently (starting one instruction before
   finishing the previous one).

Based on generation (contd)
4. Fourth Generation:
   Advent of System on Chip (SoC), reconfigurable processors
   and multi-core processors resulted in
   high performance
   tight integration
   miniaturization
   Eg. mobile internet devices, smart phones etc.

II. Based on complexity and performance
1. Small Scale
   ES with simple application needs
   Performance requirements are not time critical
   Usually built around an 8-16 bit microprocessor or microcontroller
   May not contain an embedded operating system
   On chip RAM and ROM
   Eg. electronic toys
2. Medium Scale
   Slightly complex in hardware and firmware requirements
   Usually built around a 16-32 bit microprocessor, microcontroller
   or DSP
   Usually contains an embedded operating system
   External RAM and ROM
   Eg. washing machine, microwave oven

Based on complexity and performance (contd)
3. Large Scale
   Highly complex hardware and firmware/software requirements
   Employed in mission critical applications demanding high
   performance
   Built around 32 or 64 bit reconfigurable SoCs, multicore
   processors and programmable logic devices
   Performs operations like task scheduling, prioritization and
   management
   Eg. nuclear reactors, anti-missile systems

III. Based on functionality
1. Stand Alone ES:
   Works by itself
   Self contained device which does not require any host system
   like a computer
   Takes digital or analog inputs; calibrates, converts and
   processes the data
   Outputs the results to attached output devices, which either
   display them or control other devices
   Eg. mp3 players, digital cameras, video game consoles,
   microwave ovens

Based on functionality (contd)
2. Real time ES
   Gives the required output in a specified time, i.e. strictly
   follows the time deadlines for completion of a task
   Performs its function under time constraints
   Types: Soft and Hard Real time systems

   Soft Real time systems:
   A real time system in which the violation of time constraints
   will cause only degraded quality, but the system can continue
   to operate
   Design focus: to offer the desired bandwidth to each real time
   task and to distribute the resources among the tasks
   Eg. washing machine, coffee maker

   Hard Real time systems:
   Violation of time constraints will cause critical failure and
   loss of life or damage to property
   The hardware and software of a hard real time ES must allow a
   worst case execution time analysis which guarantees that
   execution is completed within a strict deadline
   Eg. pacemaker, nuclear reactor systems, gas leakage alarm,
   missile control systems

Based on functionality (contd)
3. Networked ES:
   ES having network interfaces to access the resources
   The connected network can be a LAN, a WAN or the Internet
   The connection can be wired or wireless
   Eg. home security system
4. Mobile ES
   Portable embedded devices
   Eg. mobile phones, digital cameras, etc.

IV. Based on deterministic behaviour
Applicable to Real Time systems
Types: Hard and Soft

V. Based on triggering
Applicable to ES which are reactive in nature
Based on the trigger:
Event triggered: the system responds to an event
Time triggered: the system responds to time constraints (at
predefined instants of time)
A sketch contrasting the two styles follows below.
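
The contrast can be made concrete with an MCU-style sketch in C. The handler names and the ISR scheme are hypothetical placeholders (interrupt registration is toolchain-specific), not from the source.

```c
#include <stdint.h>

/* Hypothetical handlers; the names are placeholders. */
void handle_button(void);
void sample_sensors(void);

static volatile uint8_t button_event = 0;

/* Event triggered: an external interrupt fires only when the event occurs. */
void button_isr(void)           /* runs when the button is pressed */
{
    button_event = 1;           /* record the event for the main loop */
}

/* Time triggered: a periodic timer interrupt fires at fixed intervals. */
void timer_isr(void)            /* runs, e.g., every 1 ms */
{
    sample_sensors();           /* work is driven by time, not by events */
}

int main(void)
{
    for (;;) {
        if (button_event) {     /* react to the recorded event */
            button_event = 0;
            handle_button();
        }
        /* otherwise idle or sleep until the next interrupt */
    }
}
```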

Need/Application
1. Data collection/storage/representation
2. Data communication
3. Data processing
4. Monitoring
5. Control
6. Application specific user interface

Challenges

Hardware requirements
Deadlines or time constraints
Power consumption
Upgradability or flexibility
Reliability
Complex testing
Limited observability and controllability
Restricted development environments

Challenges (contd)
The three Ps
Performance
Power consumption
Price


Product life cycle phases

Components of ES
1. Core/Controller
2. Memory: ROM, RAM, Cache
3. Peripherals: serial port, parallel port, network port,
   keyboard and mouse ports, memory unit port, monitor port
4. Other components:
   Reset circuit
   Brown-out protection circuit
   Timers/Clocks/Oscillator circuits
   Real Time Clock (RTC)
   Watch-dog timer (see the sketch below)
   Interrupt controller

Embedded System Design

Requirement
Specification
Architecture
Components
System Integration
Testing-Verification-Validation


Software Development Life Cycle


SDLC provides a series of steps to be followed to design and develop a
software product efficiently. The SDLC framework includes the following steps:

Communication
This is the first step, where the user initiates the request for a desired
software product. The user contacts the service provider, tries to negotiate
the terms, and submits the request to the service providing organization in
writing.

Requirement Gathering
From this step onwards the software development team works to carry on the
project. The team holds discussions with various stakeholders from the
problem domain and tries to bring out as much information as possible on
their requirements. The requirements are contemplated and segregated into
user requirements, system requirements and functional requirements.
The requirements are collected using a number of practices:
studying the existing or obsolete system and software,
conducting interviews of users and developers,
referring to the database, or
collecting answers from questionnaires.

Feasibility Study
After requirement gathering, the team comes up with a rough plan of the
software process. At this step the team analyzes whether software can be
made to fulfill all the requirements of the user, and whether there is any
possibility of the software being no longer useful. It is found out whether
the project is financially, practically and technologically feasible for the
organization to take up. There are many algorithms available which help the
developers conclude the feasibility of a software project.

System Analysis
At this step the developers decide a roadmap of their plan and try to
bring up the best software model suitable for the project. System
analysis includes understanding the software product's limitations,
learning system related problems or changes to be done in existing
systems beforehand, and identifying and addressing the impact of the
project on the organization and personnel. The project team analyzes the
scope of the project and plans the schedule and resources accordingly.

Software Design
The next step is to bring down the whole knowledge of requirements
and analysis onto the desk and design the software product. The
inputs from users and the information gathered in the requirement
gathering phase are the inputs of this step. The output of this step
comes in the form of two designs: logical design and physical
design. Engineers produce meta-data and data dictionaries,
logical diagrams, data-flow diagrams and, in some cases, pseudo
code.

Coding
This step is also known as the programming phase. The
implementation of the software design starts in terms of writing
program code in a suitable programming language and
developing error-free executable programs efficiently.

Testing
An estimate says that 50% of the whole software development
effort should be spent on testing. Errors may ruin the software at
anything from the critical level up to its complete removal from use.
Software testing is done while coding by the developers, and thorough
testing is conducted by testing experts at various levels of code such
as module testing, program testing, product testing, in-house testing
and testing the product at the user's end. Early discovery of errors
and their remedy is the key to reliable software.

Integration
Software may need to be integrated with libraries, databases
and other program(s). This stage of the SDLC is involved in the
integration of the software with outer world entities.

Implementation
This means installing the software on user machines. At times, software needs
post-installation configuration at the user end. Software is tested for
portability and adaptability, and integration related issues are solved during
implementation.

Operation and Maintenance
This phase confirms the software's operation in terms of more efficiency and
fewer errors. If required, the users are trained on, or aided with,
documentation on how to operate the software and how to keep it operational.
The software is maintained in a timely manner by updating the code according
to the changes taking place in the user's environment or technology. This
phase may face challenges from hidden bugs and unidentified real-world
problems.

Disposition
As time elapses, the software may decline on the performance front. It may go
completely obsolete or may need intense upgrading. Hence a pressing need to
eliminate a major portion of the system arises. This phase includes archiving
data and required software components, closing down the system, planning
disposition activity and terminating the system at the appropriate
end-of-system time.

Software Development Paradigm
The software development paradigm helps the developer select a
strategy to develop the software. A software development
paradigm has its own set of tools, methods and procedures,
which are expressed clearly and define the software development
life cycle.
A few of the software development paradigms, or process models,
are defined as follows:

1. Waterfall Model


The waterfall model is the simplest model of software development
paradigms. It says that all the phases of the SDLC will function one
after another in a linear manner: only when the first phase is finished
will the second phase start, and so on.
This model assumes that everything is carried out and takes place
perfectly as planned in the previous stage, so there is no need to
think about past issues that may arise in the next phase. This
model does not work smoothly if some issues are left over from a
previous step. The sequential nature of the model does not allow us
to go back and undo or redo our actions.
This model is best suited when developers have already designed
and developed similar software in the past and are aware of all its
domains.

2. Iterative Model


This model leads the software development process in iterations. It
projects the process of development in a cyclic manner, repeating
every step after every cycle of the SDLC process.
The software is first developed on a very small scale, following all
the steps that are taken into consideration. Then, in every subsequent
iteration, more features and modules are designed, coded, tested and
added to the software. Every cycle produces software which is
complete in itself and has more features and capabilities than the
previous one.
After each iteration, the management team can work on risk
management and prepare for the next iteration. Because a cycle
includes a small portion of the whole software process, it is easier to
manage the development process, but it consumes more resources.

3. Spiral Model
The spiral model is a combination of the iterative model and one
of the other SDLC models: you choose one SDLC model and
combine it with a cyclic process (the iterative model).
This model considers risk, which often goes unnoticed by most
other models. The model starts with determining the objectives
and constraints of the software at the start of an iteration. The
next phase is prototyping the software; this includes risk
analysis. Then one standard SDLC model is used to build the
software. In the fourth phase, the plan for the next iteration is
prepared.


4. V model
The major drawback of the waterfall model is that we move to the next
stage only when the previous one is finished, with no chance to go
back if something is found wrong in later stages. The V-Model
provides a means of testing the software at each stage in a reverse
manner.
At every stage, test plans and test cases are created to verify and
validate the product according to the requirements of that stage. For
example, in the requirement gathering stage the test team prepares all
the test cases corresponding to the requirements. Later, when the
product is developed and ready for testing, the test cases of this
stage verify the software against its validity towards the
requirements of this stage.
This makes both verification and validation go in parallel. This
model is also known as the verification and validation model.


5. Big Bang model


This model is the simplest model in its form. It requires little
planning, lots of programming and lots of funds. This model is
conceptualized around the big bang of the universe: as scientists say,
after the big bang lots of galaxies, planets and stars evolved, just
as an event. Likewise, if we put together lots of programming and
funds, we may achieve the best software product.
For this model, very little planning is required. It does not follow
any process, or at times the customer is not sure about the
requirements and future needs, so the input requirements are
arbitrary.
This model is not suitable for large software projects, but it is a
good one for learning and experimenting.

Assembler:
A computer will not understand any program written in a language other
than its machine language.
Programs written in other languages must be translated into machine
language (using software).
A program which translates an assembly language program into a machine
language program is called an assembler.
Self assembler or resident assembler: an assembler which runs on a
computer and produces the machine code for the same computer.
Cross assembler: an assembler that runs on one computer and produces
the machine code for another computer.
Two types: One Pass Assembler and Two Pass Assembler.
A one pass assembler assigns the memory addresses to the variables and
translates the source code into machine code in a single pass.
A two pass assembler reads the source code twice. In the first pass, it
reads all the variables and labels and assigns them memory addresses.
In the second pass, it reads the source code and translates it into
object code. (See the sketch below.)
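
A minimal sketch of the two-pass idea in C, for a toy assembly language with labels and single-word instructions; the instruction format, symbol table size and helper names are invented for illustration.

```c
#include <stdio.h>
#include <string.h>

/* Toy two-pass assembler: pass 1 records label addresses, pass 2
   emits "object code" with labels resolved. Formats are invented. */

#define MAXSYMS 64

static struct sym { char name[16]; int addr; } symtab[MAXSYMS];
static int nsyms = 0;

static int lookup(const char *name)
{
    for (int i = 0; i < nsyms; i++)
        if (strcmp(symtab[i].name, name) == 0)
            return symtab[i].addr;
    return -1;                       /* undefined symbol */
}

static void assemble(const char *lines[], int n)
{
    /* Pass 1: assign an address to every label (lines ending in ':'). */
    int addr = 0;
    for (int i = 0; i < n; i++) {
        size_t len = strlen(lines[i]);
        if (lines[i][len - 1] == ':') {
            strncpy(symtab[nsyms].name, lines[i], len - 1);
            symtab[nsyms].name[len - 1] = '\0';
            symtab[nsyms++].addr = addr;
        } else {
            addr++;                  /* each instruction takes one word */
        }
    }

    /* Pass 2: translate, resolving "JMP label" via the symbol table. */
    addr = 0;
    for (int i = 0; i < n; i++) {
        if (lines[i][strlen(lines[i]) - 1] == ':')
            continue;                /* labels emit no code */
        if (strncmp(lines[i], "JMP ", 4) == 0)
            printf("%03d: JMP -> address %d\n", addr, lookup(lines[i] + 4));
        else
            printf("%03d: %s\n", addr, lines[i]);
        addr++;
    }
}

int main(void)
{
    const char *prog[] = { "loop:", "ADD", "JMP loop" };
    assemble(prog, 3);               /* forward/backward refs resolve */
    return 0;
}
```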

Compiler:
A compiler is a program which translates a high level language program
into a machine language program.
A compiler is more intelligent than an assembler: it checks all kinds of
limits, ranges, errors etc.
Its own running time is longer and it occupies a larger part of memory.
It has slow speed: a compiler goes through the entire program and then
translates the entire program into machine code.
If a compiler runs on a computer and produces the machine code for the
same computer, it is known as a self compiler or resident compiler.
On the other hand, if a compiler runs on one computer and produces the
machine code for another computer, it is known as a cross compiler.

Interpreter:
An interpreter is a program which translates the statements of a program
into machine code, one statement at a time.
It reads one statement of the program, translates it and executes it;
then it reads the next statement, translates it and executes it, and so
on.
A compiler, on the other hand, goes through the entire program and then
translates the entire program into machine code at once.
Compiled code runs 5 to 25 times faster than interpreted code.
With a compiler, the machine code is saved permanently for future
reference; the machine code produced by an interpreter is not saved.
An interpreter is a small program compared to a compiler, so it occupies
less memory space and is used in smaller systems which have limited
memory space. A sketch follows below.
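
To make the one-statement-at-a-time behaviour concrete, here is a minimal sketch in C of an interpreter for a hypothetical toy language; the language and its two commands (PRINT n, ADD a b) are invented for illustration.

```c
#include <stdio.h>

/* Toy interpreter: reads one statement, translates and executes it
   immediately, then moves to the next. No machine code is saved. */
static void execute(const char *stmt)
{
    int a, b;
    if (sscanf(stmt, "PRINT %d", &a) == 1)
        printf("%d\n", a);
    else if (sscanf(stmt, "ADD %d %d", &a, &b) == 2)
        printf("%d\n", a + b);
    else
        printf("error: unknown statement '%s'\n", stmt);
}

int main(void)
{
    const char *program[] = { "PRINT 5", "ADD 2 3", "PRINT 7" };

    /* No separate translation phase: each statement is processed
       and executed in turn. */
    for (size_t i = 0; i < sizeof program / sizeof program[0]; i++)
        execute(program[i]);
    return 0;
}
```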

Linker:
High level languages provide built-in header files or libraries.
These libraries are predefined and contain basic functions which
are essential for executing the program.
Calls to these functions are linked to the libraries by a program
called a linker.
If the linker does not find the library for a function, it reports
this and an error is generated.
The compiler automatically invokes the linker as the last step in
compiling a program.
The linker links not only built-in libraries but also user defined
functions from user defined libraries.
Usually a longer program is divided into smaller subprograms called
modules, and these modules must be combined to execute the program.
The process of combining the modules is done by the linker. (See the
sketch below.)
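
A minimal sketch of module linking in C, assuming a Unix-like system with gcc; the file names and the add() function are invented for illustration. Each file compiles to an object module, and the linker combines the modules and the C library (which provides printf) into one executable.

```c
/* add.c -- one module: defines a user function (hypothetical example) */
int add(int a, int b)
{
    return a + b;
}

/* main.c -- another module: uses add() and the library function printf() */
#include <stdio.h>

int add(int a, int b);           /* resolved by the linker, not the compiler */

int main(void)
{
    printf("%d\n", add(2, 3));   /* printf comes from the C library */
    return 0;
}

/* Build (the compiler driver invokes the linker as the last step):
 *   gcc -c add.c             -> add.o   (object module)
 *   gcc -c main.c            -> main.o  (object module)
 *   gcc main.o add.o -o app  -> linker combines modules + C library
 * Omitting add.o from the last command makes the linker report an
 * "undefined reference to `add'" error.
 */
```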

Loader:
A loader is a program that loads the machine code of a program into the
system memory.
In computing, a loader is the part of an operating system that is
responsible for loading programs.
It is one of the essential stages in the process of starting a program:
it places programs into memory and prepares them for execution.
Loading a program involves reading the contents of the executable file
into memory.
Once loading is complete, the operating system starts the program by
passing control to the loaded program code.
All operating systems that support program loading have loaders. In
many operating systems the loader is permanently resident in memory.
(See the sketch below.)
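
On a Unix-like system the loader is invoked through the exec family of system calls. A minimal sketch, assuming a POSIX environment; /bin/ls is just a convenient example program to load.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* create a child process */
    if (pid == 0) {
        /* execv asks the OS loader to read the executable file into
           memory, set up its segments, and pass control to its entry
           point; on success it never returns. */
        char *argv[] = { "ls", "-l", NULL };
        execv("/bin/ls", argv);
        perror("execv");             /* reached only if loading failed */
        return 1;
    }
    waitpid(pid, NULL, 0);           /* parent waits for the loaded program */
    return 0;
}
```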

Debuggers
A debugger or debugging tool is a computer program that is used
to test and debug other programs (the "target" program).
The code to be examined might alternatively be running on
an instruction set simulator (ISS), a technique that allows great power
in its ability to halt when specific conditions are encountered,
but which will typically be somewhat slower than executing the code
directly on the appropriate (or the same) processor.
Some debuggers offer two modes of operation, full or partial
simulation, to limit this impact.

Profiling ("program profiling", "software profiling")


is a form of dynamic program analysis that measures, for example,
the space (memory) or time complexity of a program
the usage of particular instructions or the frequency and duration of
function calls.
Most commonly, profiling information serves to aid program
optimization.
Profiling is achieved by instrumenting either the program source
code or its binary executable form using a tool called a profiler (or code
profiler).
Profilers may use a number of different techniques, such as eventbased, statistical, instrumented, and simulation methods.

53
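
A minimal sketch of source-level instrumentation in C: timing calls to a function and counting how often it runs. The function work() is a hypothetical stand-in; real profilers such as gprof insert this kind of bookkeeping automatically.

```c
#include <stdio.h>
#include <time.h>

static unsigned long calls;          /* how many times work() ran   */
static double total_secs;            /* cumulative time spent in it */

static void work(int n)              /* hypothetical function under study */
{
    volatile long s = 0;
    for (int i = 0; i < n; i++) s += i;
}

/* Hand-inserted instrumentation around each call site. */
static void work_profiled(int n)
{
    clock_t t0 = clock();
    work(n);
    total_secs += (double)(clock() - t0) / CLOCKS_PER_SEC;
    calls++;
}

int main(void)
{
    for (int i = 0; i < 1000; i++)
        work_profiled(100000);
    printf("work(): %lu calls, %.4f s total, %.6f s avg\n",
           calls, total_secs, total_secs / calls);
    return 0;
}
```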

Test Coverage Tools
Coverage tools help in checking how thoroughly the testing has been
done.
A coverage tool first identifies the elements, or coverage items, that
can be counted.
At the component testing level, the coverage items could be lines of
code, code statements, or decision outcomes (e.g. the True or False exit
from an IF statement).
At the component integration level, the coverage item may be a call to a
function or module.
The process of identifying the coverage items at component test level is
called instrumenting the code (see the sketch below).
A set of tests is then run through the instrumented code, either
automatically using a test execution tool or manually.
The coverage tool then counts the number of coverage items that have
been executed by the test suite, reports the percentage of coverage
items that have been tested, and may also identify the items that have
not yet been tested.
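
A minimal sketch of statement-level instrumentation in C: a hypothetical COV macro marks each instrumented statement and records whether it executed. Real coverage tools (e.g. gcov) inject equivalent probes automatically.

```c
#include <stdio.h>

/* One flag per instrumented statement; 3 probes in this toy example. */
#define NPROBES 3
static int hit[NPROBES];
#define COV(id) (hit[(id)] = 1)      /* probe inserted by instrumentation */

int classify(int x)                  /* hypothetical code under test */
{
    COV(0);
    if (x > 0) {
        COV(1);                      /* True outcome of the decision  */
        return 1;
    }
    COV(2);                          /* False outcome of the decision */
    return 0;
}

int main(void)
{
    classify(5);                     /* a test suite with only one test */

    int covered = 0;
    for (int i = 0; i < NPROBES; i++)
        covered += hit[i];
    printf("coverage: %d/%d items (%d%%)\n",
           covered, NPROBES, 100 * covered / NPROBES);
    /* Prints 2/3: the x <= 0 branch was never exercised. */
    return 0;
}
```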

Features or characteristics of coverage measurement tools:
To identify coverage items (instrumenting the code)
To calculate the percentage of coverage items that were tested by a set
of tests
To report coverage items that have not been tested yet
To generate stubs and drivers (if part of a unit test framework)

The concepts of stubs and drivers are mostly used in component testing.
Component testing may be done in isolation from the rest of the system,
depending on the context of the development cycle. Stubs and drivers are
used to replace the missing software and simulate the interface between
the software components in a simple manner.
It is very important to know that coverage tools only measure the
coverage of the items that they can identify. Just because your tests
have achieved 100% statement coverage, this does not mean that your
software is 100% tested!

Stubs and drivers in software testing

Suppose you have a function (Function A) that calculates the total marks
obtained by a student in a particular academic year, and this function
derives its values from another function (Function B) which calculates
the marks obtained in a particular subject.
You have finished working on Function A and want to test it. But the
problem is that you can't run Function A without input from Function B,
which is still under development. In this case, you create a dummy
function to act in place of Function B. Such a dummy function, which
gets called by the function under test, is called a stub.
To understand what a driver is, suppose instead that you have finished
Function B and are waiting for Function A to be developed. In this case
you create a dummy to call Function B. This dummy is called the driver.
(See the sketch below.)
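
A minimal sketch of this scenario in C; the function names and canned values are invented for illustration. total_marks() plays the role of Function A and subject_marks() the role of Function B.

```c
#include <stdio.h>

/* STUB: Function B is still under development, so the test build links
   this dummy in its place. It is called by the code under test. */
int subject_marks(int subject_id)
{
    (void)subject_id;
    return 75;                       /* canned, predictable value */
}

/* Function A (code under test): sums the marks over all subjects. */
int total_marks(int nsubjects)
{
    int total = 0;
    for (int i = 0; i < nsubjects; i++)
        total += subject_marks(i);   /* relies on Function B */
    return total;
}

/* DRIVER: a dummy caller used when Function B is finished but
   Function A is not; it exercises subject_marks() directly. */
int main(void)
{
    printf("total over 5 subjects: %d\n", total_marks(5));   /* expect 375 */
    printf("driver call, subject 0: %d\n", subject_marks(0));
    return 0;
}
```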

In software testing, test coverage measures the amount of testing
performed by a set of tests. It includes gathering information about
which parts of a program are actually executed when running the test
suite, in order to determine, for example, which branches of
conditional statements have been taken.
In simple terms, it is a technique to ensure that your tests are
actually testing your code, or how much of your code you exercised by
running the tests.

Tool chain
A tool chain is a set of programming tools that are used to perform a
complex software development task or to create a software product.
It is a set of distinct software development tools linked, or chained,
together by specific stages, such as:
GNU Compiler Collection (GCC)
binutils
glibc
Optionally, a tool chain may contain debuggers and compilers for a
specific programming language.

Tool chain (contd)
Eg. a video game: needs tools for preparing sound effects, music,
3D models (video), textures and animations, plus tools for combining
these resources.

Eg. the GNU toolchain (used in the development of GNU/Linux):
1. GNU make: automation tool for compilation and build
2. GNU Compiler Collection (GCC): a suite of tools including linker,
   assembler and other tools
3. GNU Bison: a parser generator; a programming tool that creates a
   parser/compiler for some language and machine
4. GNU m4: a macro processor; a program that reads a file and scans
   for certain keywords; when a keyword is found it is replaced by
   some text. The keyword/text combination is called a macro
5. GNU Debugger (GDB): code debugging tool
6. GNU Build System (Autotools): a suite of programming tools designed
   to assist in making source code packages portable, i.e. usable on
   another system
A cross-compilation sketch follows below.
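
The toolchain pieces come together when building for an embedded target. A minimal sketch, assuming the GNU Arm cross toolchain (arm-none-eabi-gcc) is installed; the file name, the hypothetical GPIO register and the Cortex-M0 target are illustrative choices, and a real bare-metal build would also need a linker script and startup code.

```c
/* blink.c -- trivial program to be built with a GNU cross toolchain.
   The register address is invented for illustration. */
#include <stdint.h>

#define LED_REG (*(volatile uint32_t *)0x48000014u)  /* hypothetical GPIO */

int main(void)
{
    for (;;)
        LED_REG ^= 1u;               /* toggle an LED bit forever */
}

/* Typical toolchain steps (host compiler vs cross compiler):
 *   gcc blink.c -o blink                          # self/resident compile
 *   arm-none-eabi-gcc -mcpu=cortex-m0 -c blink.c  # cross compile -> blink.o
 *   arm-none-eabi-gcc ... blink.o -o blink.elf    # binutils ld links the image
 *   arm-none-eabi-objcopy -O binary blink.elf blink.bin  # raw image for flash
 * GNU make automates these steps; GDB debugs the result on the target.
 */
```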

Need for a tool chain
1. The degree of control required by the ES developer is much greater,
   in order to utilize the resources effectively
2. The approach to verification and debugging is different, as
   external connections/devices are involved
3. The execution environment is different, so the tools used for ES
   firmware must be flexible and adaptable

Design issues


Appendix


The GNU Binutils are a collection of binary tools. The main ones are:
ld - the GNU linker.
as - the GNU assembler.

But they also include:


addr2line - Converts addresses into filenames and line numbers.
ar - A utility for creating, modifying and extracting from archives.
c++filt - Filter to demangle encoded C++ symbols.
dlltool - Creates files for building and using DLLs.
gold - A new, faster, ELF only linker, still in beta test.
gprof - Displays profiling information.
nlmconv - Converts object code into an NLM.
nm - Lists symbols from object files.
objcopy - Copies and translates object files.
objdump - Displays information from object files.
ranlib - Generates an index to the contents of an archive.
readelf - Displays information from any ELF format object file.
size - Lists the section sizes of an object or archive file.
strings - Lists printable strings from files.
strip - Discards symbols.
windmc - A Windows compatible message compiler.
windres - A compiler for Windows resource files.

glibc:

Any Unix-like operating system needs a C library: the library which defines the
"system calls" and other basic facilities such as open, malloc (memory allocation),
printf, exit...
The GNU C Library is used as the C library in the GNU system and in GNU/Linux
systems, as well as in many other systems that use Linux as the kernel.

Hardware/Software/Firmware
HARDWARE is the physical arrangement of electronic parts that
can only be changed with a screwdriver or soldering iron. It is
purely physical.
SOFTWARE is the arrangement of digital instructions that guide
the operation of computer hardware. Software is loaded from
storage (flash, disk, network, etc.) into the computer's operating
memory (RAM) on demand, and is designed to be easy to change.
FIRMWARE is a special class of software that is not intended to
change once shipped. An update requires either a swap of chips or a
special process to reload the flash memory containing the
software. This kind of software powers things like your TV, your
microwave, and your home router, as well as the BIOS (the boot
code) of your PC.

