
UNIT-1

Q. 1 How will you classify embedded systems?


Ans.: An embedded system is an electronic system that has software embedded in
computer hardware. It is programmable or non-programmable depending on the
application. An embedded system is defined as a way of working, organizing, and
performing single or multiple tasks according to a set of rules. In an embedded
system, all the units assemble and work together according to the program.

Types of Embedded Systems

Embedded systems can be classified in two ways: by performance and functional
requirements, and by the performance of the microcontroller.

Embedded systems are classified into four categories based on their performance
and functional requirements:

 Stand-alone embedded systems
 Real-time embedded systems
 Networked embedded systems
 Mobile embedded systems

Embedded Systems are classified into three types based on the performance of the
microcontroller such as

 Small-scale embedded systems
 Medium-scale embedded systems
 Sophisticated embedded systems

Stand Alone Embedded Systems

Stand-alone embedded systems do not require a host system like a computer; they
work by themselves. A stand-alone system takes input from its input ports, either
analog or digital; processes, calculates, and converts the data; and delivers the
result through a connected device, which controls, drives, or displays it.
Examples of stand-alone embedded systems are MP3 players, digital cameras,
video game consoles, microwave ovens and temperature measurement systems.

Real Time Embedded Systems

A real-time embedded system is defined as a system which gives the required
output within a specified time. These types of embedded systems follow time
deadlines for completion of a task. Real-time embedded systems are classified
into two types: soft and hard real-time systems.

Networked Embedded Systems

These types of embedded systems are connected to a network to access resources.
The connected network can be a LAN, a WAN or the internet, and the connection
can be wired or wireless. This type of embedded system is the fastest growing area
in embedded system applications. The embedded web server is a type of system
wherein all embedded devices are connected to a web server and accessed and
controlled by a web browser. An example of a LAN-networked embedded system
is a home security system wherein all sensors are connected and run on the
TCP/IP protocol.

Mobile Embedded Systems

Mobile embedded systems are used in portable embedded devices like cell phones,
digital cameras, MP3 players, personal digital assistants, etc. The basic limitation
of these devices is the limited availability of memory and other resources.

Small Scale Embedded Systems

These types of embedded systems are designed with a single 8- or 16-bit
microcontroller, and may even be battery powered. For developing embedded
software for small-scale embedded systems, the main programming tools are an
editor, assembler, cross assembler and integrated development environment (IDE).

Medium Scale Embedded Systems

These types of embedded systems are designed with a single 16- or 32-bit
microcontroller, RISCs or DSPs. These types of embedded systems have both
hardware and software complexities. For developing embedded software for
medium-scale embedded systems, the main programming tools are C, C++, Java,
Visual C++, an RTOS, a debugger, a source code engineering tool, a simulator
and an IDE.

Sophisticated Embedded Systems

These types of embedded systems have enormous hardware and software
complexities, and may need ASIPs, IPs, PLAs, and scalable or configurable
processors. They are used for cutting-edge applications that need hardware and
software co-design, with components that have to be assembled in the final system.

Q. 2 Explain the function of different components of embedded systems.


Ans.: The functions of different components of embedded systems are as follows:
1. Processor:
The main part of an embedded system is the processor, which can be a
generic microprocessor or a microcontroller, programmed to perform the
specific tasks for which the integrated system has been designed.

2. Memory:
 Electronic memory is an important part of embedded systems, and
three essential types of memory can be described: RAM (Random
Access Memory), ROM (Read Only Memory) and cache.
 RAM is the hardware component where data is temporarily
stored during execution of the system.
 ROM contains the I/O routines that are needed by the system at boot time.
 Cache is used by the processor as temporary storage during
the processing and transferring of data.
3. System Clock:
 The system clock is used by all processes running on an
embedded system that require precise timing information.
 This clock is generally composed of an oscillator and some
associated digital circuitry.

4. Peripherals:
 Peripheral devices are provided on embedded system boards
for easy integration.
 Typical devices include serial ports, parallel ports, network ports,
keyboard and mouse ports, a memory unit and a monitor port.
 Some specialised embedded systems also have other interfaces such as
the CAN bus.

Q. 3 Differentiate between embedded system and general computing system


Ans.:
1. Contents: An embedded system is a combination of special-purpose hardware
and an embedded OS for executing a specific set of applications; a general
computing system is a combination of generic hardware and a general-purpose
OS for executing a variety of applications.
2. Operating System: An embedded system may or may not contain an OS; a
general computing system contains a general-purpose OS.
3. Alterations: Applications are non-alterable by the user in an embedded system;
applications are alterable by the user in a general computing system.
4. Key Factor: Application-specific requirements drive an embedded system;
performance drives a general computing system.
5. Power Consumption: Less for an embedded system; more for a general
computing system.
6. Response Time: Response-time requirements are highly critical for an
embedded system; they are not time-critical for a general computing system.
7. Behavior: Execution behavior is deterministic for an embedded system; a
general computing system need not be deterministic in execution behavior.

Q. 4 Explain Von-Neumann and Harvard architecture.


Ans.: Von Neumann architecture:
1. The computer has a single, common memory space in which both program
instructions and data are stored.
2. There is a single internal data bus that fetches both instructions and data.
They cannot be fetched at the same time.
3. Lower performance compared to the Harvard architecture, but cheaper.
4. It allows self-modifying code.
5. Since data memory and program memory are stored physically in a single
chip, there are chances of accidental corruption of program memory.
6. A single block of memory may be mapped to act as both data and program
memory. This is referred to as the Von Neumann architecture.

Harvard architecture:
1. Computers have separate memory areas for program instructions and data.
2. There are two or more internal data buses, which allow simultaneous
access to both instructions and data.
3. The CPU fetches program instructions on the program memory bus.
4. Easier to pipeline. So, High Performance can be achieved.
5. Comparatively high cost
6. The 8051 microcontrollers (MCS-51) have an 8-bit data bus. They can
address 64K of external data memory and 64K of external program
memory.
7. These may be separate blocks of memory, so that up to 128K of memory
can be attached to the microcontroller.
8. Separate blocks of code and data memory are referred to as the Harvard
architecture.
Q. 5 Differentiate between RISC and CISC processors.
Ans.:
1. RISC stands for Reduced Instruction Set Computer; CISC stands for
Complex Instruction Set Computer.
2. RISC processors have simple instructions taking about one clock cycle;
the average Clock cycles Per Instruction (CPI) of a RISC processor is 1.5.
CISC processors have complex instructions that take multiple clock cycles
for execution; the average CPI of a CISC processor is between 2 and 15.
3. In RISC, there are hardly any instructions that refer to memory; in CISC,
most of the instructions refer to memory.
4. RISC processors have a fixed instruction format; CISC processors have a
variable instruction format.
5. The RISC instruction set is reduced, i.e. it has only a few instructions,
many of which are very primitive; the CISC instruction set has a variety of
different instructions that can be used for complex operations.
6. RISC has fewer addressing modes, and most of the instructions in the
instruction set use register-to-register addressing; CISC has many different
addressing modes and can thus represent higher-level programming
language statements more efficiently.
7. In RISC, complex addressing modes are synthesized using software;
CISC already supports complex addressing modes.
8. RISC processors usually have multiple register sets; CISC processors
usually have only a single register set.
9. RISC processors are highly pipelined; CISC processors are normally not
pipelined or are less pipelined.

Q. 6 Explain Little-endian and Big-endian systems.


Ans.: Big and Little Endian

 Basic Memory Concepts


In order to understand the concept of big and little endian, you need to understand
memory. Fortunately, we only need a very high level abstraction for memory. You
don't need to know all the little details of how memory works.
All you need to know about memory is that it's one large array. But one large array
containing what? The array contains bytes. In computer organization, people don't
use the term "index" to refer to the array locations. Instead, we use the term
"address". "address" and "index" mean the same, so if you're getting confused, just
think of "address" as "index".
Each address stores one element of the memory "array". Each element is typically
one byte. There are some memory configurations where each address stores
something besides a byte. For example, you might store a nibble or a bit. However,
those are exceedingly rare, so for now, we make the broad assumption that all
memory addresses store bytes.
I will sometimes say that memory is byte-addressable. This is just a fancy way of
saying that each address stores one byte. If I say memory is nibble-addressable, that
means each memory address stores one nibble.
Storing Words in Memory
We've defined a word to mean 32 bits. This is the same as 4 bytes. Integers, single-
precision floating point numbers, and MIPS instructions are all 32 bits long. How
can we store these values into memory? After all, each memory address can store a
single byte, not 4 bytes.
The answer is simple. We split the 32-bit quantity into 4 bytes. For example,
suppose we have a 32-bit quantity, written as 90AB12CD (base 16, i.e.
hexadecimal). Since each hex digit is 4 bits, we need 8 hex digits to represent the
32-bit value. So, the 4 bytes are: 90, AB, 12, CD, where each byte requires 2 hex
digits.
It turns out there are two ways to store this in memory.
Big Endian
In big endian, you store the most significant byte in the smallest address. Here's
how it would look:
Address Value
1000 90
1001 AB
1002 12
1003 CD

Little Endian
In little endian, you store the least significant byte in the smallest address. Here's
how it would look:
Address Value
1000 CD
1001 12
1002 AB
1003 90
Notice that this is in the reverse order compared to big endian. To remember which
is which, recall whether the least significant byte is stored first (thus, little endian)
or the most significant byte is stored first (thus, big endian).
Notice that endianness concerns bytes, not bits: it is the least significant byte,
not bit, that matters. The abbreviations LSB and MSB here use a capital 'B' to
refer to byte; a lowercase 'b' would refer to bit. We only speak of the most and
least significant byte when it comes to endianness.
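The two layouts in the tables above can be reproduced in a few lines of C. This is an illustrative sketch; `to_big_endian`, `to_little_endian` and `host_is_little_endian` are made-up helper names, not part of any standard API:

```c
#include <stdint.h>
#include <string.h>

/* Split a 32-bit word into 4 bytes in big-endian order
   (most significant byte at the smallest address/index). */
void to_big_endian(uint32_t word, uint8_t out[4]) {
    out[0] = (uint8_t)(word >> 24);
    out[1] = (uint8_t)(word >> 16);
    out[2] = (uint8_t)(word >> 8);
    out[3] = (uint8_t)(word);
}

/* Split a 32-bit word into 4 bytes in little-endian order
   (least significant byte at the smallest address/index). */
void to_little_endian(uint32_t word, uint8_t out[4]) {
    out[0] = (uint8_t)(word);
    out[1] = (uint8_t)(word >> 8);
    out[2] = (uint8_t)(word >> 16);
    out[3] = (uint8_t)(word >> 24);
}

/* Report which layout the machine running this code uses. */
int host_is_little_endian(void) {
    uint32_t probe = 1;
    uint8_t first_byte;
    memcpy(&first_byte, &probe, 1);  /* read the byte at the lowest address */
    return first_byte == 1;          /* LSB stored first means little endian */
}
```

Applied to the running example 90AB12CD, `to_big_endian` produces 90 AB 12 CD and `to_little_endian` produces CD 12 AB 90, matching the two tables.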
Which Way Makes Sense?
Different ISAs use different endianness. While one way may seem more natural to
you (most people think big-endian is more natural), there is justification for either
one.
For example, Intel x86 and DEC processors are little endian, while Motorola 68k
and Sun SPARC processors are big endian. MIPS processors allowed you to select
a configuration where it would be big or little endian.
Why is endianness so important? Suppose you are storing int values to a file, then
you send the file to a machine which uses the opposite endianness and read in the
value. You'll run into problems because of endianness. You'll read in reversed
values that won't make sense.
Endianness is also a big issue when sending numbers over the network. Again, if
you send a value from a machine of one endianness to a machine of the opposite
endianness, you'll have problems. This is even worse over the network, because
you might not be able to determine the endianness of the machine that sent you the
data.
The solution is to send 4-byte quantities using network byte order, which is
defined to be big endian. If your machine has the same endianness as network
byte order, then great, no change is needed. If not, then you must reverse the
bytes.
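On POSIX systems the byte-order conversion described above is provided by `htonl()` and `ntohl()` from the sockets API; the `to_wire`/`from_wire` wrappers below are just illustrative names for the two directions:

```c
#include <arpa/inet.h>  /* htonl()/ntohl(), POSIX sockets API */
#include <stdint.h>

/* Convert a 32-bit value to network byte order (big endian) before
   sending, and back to host order after receiving. On a big-endian
   host these are no-ops; on a little-endian host they reverse the
   byte order. */
uint32_t to_wire(uint32_t host_value)   { return htonl(host_value); }
uint32_t from_wire(uint32_t wire_value) { return ntohl(wire_value); }
```

Whatever the host's endianness, a value always survives the round trip unchanged, which is exactly why both peers agree to use network byte order on the wire.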

Q. 7 What is an embedded system? Explain in brief the different areas of
embedded system applications.
Ans.: 1. An embedded system is a computer system with a dedicated function within
a larger mechanical or electrical system, often with real-time computing
constraints. It is embedded as part of a complete device, often including
hardware and mechanical parts. Embedded systems control many devices in
common use today. Ninety-eight percent of all microprocessors are
manufactured as components of embedded systems.
2. Examples of properties of typically embedded computers when compared
with general-purpose counterparts are low power consumption, small size,
rugged operating ranges, and low per-unit cost.
3. Embedded systems range from portable devices such as digital watches and
MP3 players, to large stationary installations like traffic lights and factory
controllers, and to highly complex systems like hybrid vehicles, MRI machines,
and avionics. Complexity varies from low, with a single microcontroller chip, to
very high, with multiple units, peripherals and networks mounted inside a
large chassis or enclosure.
4. Applications:-
 Home Appliances – Washing Machine, Microwave, HVAC System,
DVD, etc.
 Academia – Smart Board, Smart Room, OCR, Calculator, etc.
 Instrumentations – Signal Generator, Signal Processor, Power
Supplier, Process Instrumentations, etc.
 Telecommunications – Router, Hub, Cellular phone, Web Camera,
etc.
 Automobile – Fuel Injection Controller, Air-bag Systems, GPS,
Cruise Control, etc.
 Aerospace – Navigation systems, Automatic Landing Systems, Space
Explorer, Space Robotics, etc.
 Office Automation – Fax, Copy Machine, Smartphone systems,
Printers, Scanners, etc.
 Security – Face Recognition, Finger Recognition, Eye Recognition,
Building Security Systems, Alarm Systems, etc.
 Industrial Automation – Data Collection Systems, Monitoring
Systems, Voltage, Current, Temperature, Industrial Robotics, etc.
 Medical – CT Scanner, ECG, EEG, EMG, MRI, Blood Pressure Monitor,
Medical Diagnostic Devices, etc.

Q. 8 Explain internal and external communication interface.


Ans.: A. ONBOARD COMMUNICATION INTERFACE

1. I²C (INTER-INTEGRATED CIRCUIT)


It is a multi-master, multi-slave, packet-switched, single-ended, serial computer
bus invented by Philips Semiconductor (now NXP Semiconductors).
 It is typically used for attaching lower-speed peripheral ICs to processors
and microcontrollers in short-distance, intra-board communication.

 I²C uses only two bidirectional open-drain lines, Serial Data Line (SDA) and
Serial Clock Line (SCL), pulled up with resistors. Typical voltages used are +5
V or +3.3 V, although systems with other voltages are permitted.
 The I²C reference design has a 7-bit or a 10-bit (depending on the device
used) address space.
The reference design is a bus with a clock (SCL) and data (SDA) lines with 7-
bit addressing. The bus has two roles for nodes: master and slave:

 Master node – node that generates the clock and initiates communication with
slaves.
 Slave node – node that receives the clock and responds when addressed by the
master.

The bus is a multi-master bus, which means that any number of master nodes can
be present. Additionally, master and slave roles may be changed between messages
(after a STOP is sent).
There are four potential modes of operation for a given bus device:
 master transmit – master node is sending data to a slave,
 master receive – master node is receiving data from a slave,
 slave transmit – slave node is sending data to the master,
 slave receive – slave node is receiving data from the master.

Message protocols:
I²C defines basic types of messages, each of which begins with a START and ends
with a STOP:

 Single message where a master writes data to a slave.


 Single message where a master reads data from a slave.
 Combined messages, where a master issues at least two reads or writes to one
or more slaves.
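As a small illustration of the 7-bit addressing just described: the first byte of every I²C message packs the slave address into bits 7..1 and the read/write flag into bit 0. The helper name `i2c_address_byte` and the example sensor address 0x48 are illustrative assumptions, not from the text:

```c
#include <stdint.h>

/* Build the first byte sent after START on an I2C bus: the 7-bit
   slave address occupies bits 7..1 and the R/W flag occupies bit 0
   (0 = master writes to the slave, 1 = master reads from it). */
uint8_t i2c_address_byte(uint8_t addr7, int read) {
    return (uint8_t)(((addr7 & 0x7Fu) << 1) | (read ? 1u : 0u));
}
```

For example, addressing a hypothetical sensor at 0x48 for a write produces the byte 0x90, and for a read produces 0x91.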

2. SERIAL PERIPHERAL INTERFACE BUS (SPI)


 The SPI is a synchronous serial communication interface specification used
for short distance communication, primarily in embedded systems.
 The interface was developed by Motorola in the late 1980s and has become
a de facto standard. Typical applications include Secure Digital cards
and liquid crystal displays.
 SPI devices communicate in full duplex mode using a master-
slave architecture with a single master. The master device originates
the frame for reading and writing. Multiple slave devices are supported
through selection with individual slave select (SS) lines.
The SPI bus specifies four logic signals (a bidirectional SDIO line, Serial Data
I/O, replaces MOSI and MISO in half-duplex, three-wire variants):

 SCLK: Serial Clock (output from master).
 MOSI: Master Output Slave Input, or Master Out Slave In (data output from
master).
 MISO: Master Input Slave Output, or Master In Slave Out (data output from
slave).
 SS: Slave Select (often active low, output from master).

Data transmission:
 To begin communication, the bus master configures the clock, using a
frequency supported by the slave device, typically up to a few MHz.
 The master then selects the slave device with a logic level 0 on the select
line. If a waiting period is required, such as for an analog-to-digital
conversion, the master must wait for at least that period of time before
issuing clock cycles.
 During each SPI clock cycle, a full duplex data transmission occurs. The
master sends a bit on the MOSI line and the slave reads it, while the slave
sends a bit on the MISO line and the master reads it.

 This sequence is maintained even when only one-directional data transfer is
intended.
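The full-duplex shift-register exchange described above can be modelled in plain C. This is a software simulation of the behaviour for illustration only, not a driver for real SPI hardware:

```c
#include <stdint.h>

/* Software model of one SPI byte exchange: on every clock cycle the
   master shifts its MSB out on MOSI while the slave shifts its MSB
   out on MISO, and each appends the other's bit at its LSB. After 8
   clocks the two shift registers have swapped contents (MSB first,
   as is typical for SPI). */
void spi_exchange(uint8_t *master_reg, uint8_t *slave_reg) {
    for (int i = 0; i < 8; i++) {
        uint8_t mosi = (uint8_t)((*master_reg >> 7) & 1u); /* master's bit out */
        uint8_t miso = (uint8_t)((*slave_reg  >> 7) & 1u); /* slave's bit out  */
        *master_reg = (uint8_t)((*master_reg << 1) | miso);
        *slave_reg  = (uint8_t)((*slave_reg  << 1) | mosi);
    }
}
```

This makes the "full duplex" property concrete: a transfer always moves a byte in each direction, even if one side's data is just padding.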
3. UNIVERSAL ASYNCHRONOUS RECEIVER/TRANSMITTER (UART)
 A UART is a computer hardware device for asynchronous serial
communication in which the data format and transmission speeds are
configurable.
 The electric signaling levels and methods are handled by a driver circuit
external to the UART.
 A UART is usually an individual (or part of an) integrated circuit (IC) used
for serial communications over a computer or peripheral device serial port.
Transmitting and receiving serial data:
 The universal asynchronous receiver/transmitter (UART) takes bytes of data
and transmits the individual bits in a sequential fashion.
 At the destination, a second UART re-assembles the bits into complete
bytes. Each UART contains a shift register, which is the fundamental
method of conversion between serial and parallel forms.
 Serial transmission of digital information (bits) through a single wire or
other medium is less costly than parallel transmission through multiple
wires.
The UART usually does not directly generate or receive the external signals used
between different items of equipment. Separate interface devices are used to
convert the logic level signals of the UART to and from the external signalling
levels, which may be standardized voltage levels, current levels, or other signals.
Communication may be simplex, half duplex or full duplex.

Data framing and operation:

 Transmitting and receiving UARTs must be set for the same bit speed,
character length, parity, and stop bits for proper operation.
 The receiving UART may detect some mismatched settings and set a
"framing error" flag bit for the host system; in exceptional cases the
receiving UART will produce an erratic stream of mutilated characters and
transfer them to the host system.
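As a sketch of the framing just described, the following builds one 8-E-1 frame (start bit, 8 data bits LSB first, even parity, 1 stop bit) as an array of line levels. The function name and the 8-E-1 choice are illustrative assumptions; real UARTs support many framing configurations:

```c
#include <stdint.h>

/* Build one 8-E-1 UART frame as an array of line levels:
   start bit (0), 8 data bits LSB first, an even-parity bit,
   and a stop bit (1). Returns the number of bits written (11). */
int uart_frame_8e1(uint8_t byte, uint8_t bits[11]) {
    int n = 0, ones = 0;
    bits[n++] = 0;                        /* start bit: line driven low   */
    for (int i = 0; i < 8; i++) {         /* data bits, LSB first         */
        uint8_t b = (uint8_t)((byte >> i) & 1u);
        ones += b;
        bits[n++] = b;
    }
    bits[n++] = (uint8_t)(ones & 1);      /* even parity: total 1s even   */
    bits[n++] = 1;                        /* stop bit: line returns high  */
    return n;
}
```

If the receiver samples at a mismatched bit speed it will see these levels at the wrong instants, which is exactly how the framing errors mentioned above arise.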
4. 1-WIRE INTERFACE
 1-Wire is a device communications bus system designed by Dallas
Semiconductor Corp. that provides low-speed data, signaling, and power
over a single conductor.
 1-Wire is similar in concept to I²C, but with lower data rates and longer
range.
 A network of 1-Wire devices with an associated master device is called
a MicroLAN.
 When developing and/or troubleshooting the 1-Wire bus, examination of
hardware signals can be very important. Logic analyzers and bus
analyzers are tools which collect, analyze, decode, and store signals to
simplify viewing the high-speed waveforms.

B. EXTERNAL COMMUNICATION INTERFACE


1. RS-232 & RS-485
 In telecommunications, RS-232 is a standard for serial
communication transmission of data. It formally defines the signals
connecting a DTE (data terminal equipment, such as a computer terminal)
and a DCE (data circuit-terminating equipment, such as a modem).
 The RS-232 standard is commonly used in computer serial ports. The
standard defines the electrical characteristics and timing of signals, the
meaning of signals, and the physical size and pinout of connectors.
 In RS-232, user data is sent as a time-series of bits. Both synchronous and
asynchronous transmissions are supported by the standard.
 In addition to the data circuits, the standard defines a number of control
circuits used to manage the connection between the DTE and DCE.
 Each data or control circuit only operates in one direction, that is, signaling
from a DTE to the attached DCE or the reverse. Because transmit data and
receive data are separate circuits, the interface can operate in a full
duplex manner, supporting concurrent data flow in both directions.
 The standard does not define character framing within the data stream, or
character encoding.
 RS-485 is a standard defining the electrical characteristics of drivers and
receivers for use in serial communications systems.
 Electrical signaling is balanced, and multipoint systems are supported. RS-
485 supports inexpensive local networks and multidrop communications
links.
 RS-485 can be used with data rates up to 10 Mbit/s and distances up to
1,200 m (4,000 ft), but not at the same time.
 RS-485 signals are used in a wide range of computer and automation
systems. In a computer system, SCSI-2 and SCSI-3 may use this
specification to implement the physical layer for data transmission between
a controller and a disk drive.
 RS-485 is used for low-speed data communications in commercial aircraft
cabins' vehicle bus. It requires minimal wiring, and can share the wiring
among several seats, reducing weight.
2. UNIVERSAL SERIAL BUS

USB, short for Universal Serial Bus, is an industry standard that defines cables,
connectors and communications protocols for connection, communication, and
power supply between computers and devices.
 USB was designed to standardize the connection of computer
peripherals (including keyboards, pointing devices, digital cameras,
printers, portable media players, disk drives and network adapters)
to personal computers, both to communicate and to supply electric power.
 It has largely replaced a variety of earlier interfaces, such as serial
ports and parallel ports, as well as separate power chargers for portable
devices, and has become commonplace on a wide range of devices.

In general, there are three basic formats of USB connectors:

a. the default or standard format, intended for desktop or portable equipment
(for example, on USB flash drives),
b. the mini, intended for mobile equipment (which was used on many cameras),
c. and the thinner micro size, for low-profile mobile equipment (most modern
mobile phones).

Also, there are 5 modes of USB data transfer, in order of increasing bandwidth:

a) Low Speed (from 1.0),


b) Full Speed (from 1.0),
c) High Speed (from 2.0),
d) SuperSpeed (from 3.0),
e) and SuperSpeed+ (from 3.1)

3. FIREWIRE

IEEE 1394 is an interface standard for a serial bus for high-speed communications
and isochronous real-time data transfer. It was developed in the late 1980s and
early 1990s by Apple, which called it FireWire.
 The copper cable it uses in its most common implementation can be up to
4.5 metres (15 ft) long. Power is also carried over this cable allowing
devices with moderate power requirements to operate without a separate
power supply.
 FireWire is also available in wireless, Cat 5, fiber optic,
and coaxial versions.
The 1394 interface is comparable to USB though USB requires a master controller
and has greater market share.
Technical specifications
 FireWire can connect up to 63 peripherals in a tree or daisy-chain topology
 It allows peer-to-peer device communication - such as communication
between a scanner and a printer - to take place without using system
memory or the CPU.
 FireWire also supports multiple hosts per bus.
 It is designed to support plug and play and hot swapping. The copper cable
it uses in its most common implementation can be up to 4.5 metres (15 ft)
long and is more flexible than most parallel SCSI cables. In its six-
conductor or nine-conductor variations, it can supply up to 45 watts of
power per port at up to 30 volts, allowing moderate-consumption devices to
operate without a separate power supply.

4. BLUETOOTH
Bluetooth is a wireless technology standard for exchanging data over short
distances from fixed and mobile devices, and building personal area
networks (PANs).
Invented by telecom vendor Ericsson in 1994, it was originally conceived as a
wireless alternative to RS-232 data cables.

Communication and connection:

 A master BR/EDR Bluetooth device can communicate with a maximum of


seven devices in a piconet though not all devices reach this maximum.
 The devices can switch roles, by agreement, and the slave can become the
master (for example, a headset initiating a connection to a phone necessarily
begins as master—as initiator of the connection—but may subsequently
operate as slave).
5. WI-FI
Wi-Fi is a technology for wireless local area networking with devices based on
the IEEE 802.11 standards.
 Devices that can use Wi-Fi technology include personal computers, video-
game consoles, smartphones, digital cameras, tablet computers, smart TVs,
digital audio players and modern printers.
 Wi-Fi compatible devices can connect to the Internet via a WLAN and
a wireless access point. Such an access point (or hotspot) has a range of
about 20 meters (66 feet) indoors and a greater range outdoors.
Wi-Fi most commonly uses the 2.4 GHz (12 cm) UHF and 5 GHz
(6 cm) SHF ISM radio bands. Having no physical connections, it is more
vulnerable to attack than wired connections, such as Ethernet.
Range:
 The Wi-Fi signal range depends on the frequency band, radio power output,
antenna gain and antenna type as well as the modulation technique.
 Line-of-sight range is the rule of thumb, but reflection and refraction can
have a significant impact.
 An access point compliant with either 802.11b or 802.11g, using the stock
antenna might have a range of 100 m (0.062 mi).
6. ZIG-BEE
Zigbee is an IEEE 802.15.4-based specification for a suite of high-level
communication protocols used to create personal area networks with small, low-
power digital radios and other low-power low-bandwidth needs, designed for small
scale projects which need wireless connection.
 Its low power consumption limits transmission distances to 10–100
meters line-of-sight, depending on power output and environmental
characteristics.
 Zigbee devices can transmit data over long distances by passing data
through a mesh network of intermediate devices to reach more distant ones.
 Zigbee is typically used in low data rate applications that require long
battery life and secure networking.
 Zigbee has a defined rate of 250 kbit/s, best suited for intermittent data
transmissions from a sensor or input device.
The zigbee network layer natively supports both star and tree networks, and
generic mesh networking.
 Every network must have one coordinator device, tasked with its creation,
the control of its parameters and basic maintenance.
 Within star networks, the coordinator must be the central node. Both trees
and meshes allow the use of zigbee routers to extend communication at the
network level.
Zigbee provides the ability to run for years on inexpensive batteries for a host of
monitoring and control applications.
The zigbee network layer ensures that networks remain operable in the conditions
of a constantly changing quality between communication nodes.
Advantages:
 The zigbee protocol is designed to communicate data through the hostile
RF environments that are common in commercial and industrial
applications.
 Its protocol features include support for multiple network topologies such
as point-to-point and mesh networks, collision avoidance and retries, and
low latency.
Another defining feature of zigbee is its facilities for carrying out secure
communications, protecting the establishment and transport of cryptographic keys,
ciphering frames, and controlling devices. It builds on the basic security framework
defined in IEEE 802.15.4.

Q. 9 Explain Reset Circuit and Watchdog Timer for embedded systems.


Ans.: Reset Circuit
 The reset circuit is essential to ensure that, during system power ON, the
device does not start operating at a voltage level at which it is not
guaranteed to work.
 The reset signal brings the internal registers and the different hardware
systems of the processor/controller to a known state and starts the firmware
execution from the reset vector (normally from vector address 0x0000 for
conventional processors/controllers.
 The reset vector can be relocated to another address for
processors/controllers supporting a bootloader).
 The reset signal can be either active high (The processor undergoes reset
when the reset pin of the processor is at logic high) or active low (The
processor undergoes reset when the reset pin of the processor is at logic
low).
 Since the processor operation is synchronised to a clock signal, the reset
pulse should be wide enough to give time for the clock oscillator to stabilise
before the internal reset state starts.
 The reset signal to the processor can be applied at power ON through an
external passive reset circuit comprising a capacitor and a resistor, or through
a standard reset IC like the MAX810 from Maxim (Dallas Semiconductor).
 Select the reset IC based on the type of reset signal and the logic level
(CMOS/TTL) supported by the processor/controller in use. Some
microprocessors/controllers contain built-in internal reset circuitry and
don't require external reset circuitry.
 Figure illustrates a resistor capacitor based passive reset circuit for active
high and low configurations. The reset pulse width can be adjusted by
changing the resistance value R and capacitance value C.

Watchdog Timer
 In desktop Windows systems. if we feel our application is behaving in an
abnormal way or if the system hangs up. we have the 'Ctrl + Mt 4- Der to
conic out of the situation.
 What if it happens to our embedded system? Do we really have a 'Ctrl + Alt
+ Del' tee take control of the situation? Of course not, but we have a
watchdog to monitor the firmware execution and reset the system
processor/microcontroller when the program execution hangs up.
 A watchdog timer, or simply a watchdog, is a hardware timer for
monitoring the firmware execution. Depending on the internal
implementation, the watchdog timer increments or decrements a free
running counter with each clock pulse and generates a reset signal to reset
the processor if the count reaches zero for a down counting watchdog. or the
highest count value for an up counting watchdog.
 If the watchdog counter is in the enabled state, the firmware can write a
zero (for up counting watchdog implementation) to it before starting the
execution of a piece of code (subroutine or portion of code which is
susceptible to execution hang up) and the watchdog will start counting.
 If the firmware execution doesn't complete due to malfunctioning, within
the time required by the watchdog to reach the maximum count, the counter
will generate a reset pulse and this will reset the processor (if it is connected
to the reset line of the processor).
 If the firmware execution completes before the expiration of the watchdog
timer, you can reset the count by writing a 0 (for an up counting watchdog
timer) to the watchdog timer register.
 Most of the processors implement the watchdog as a built-in component and
provide a status register to control the watchdog timer (like enabling and
disabling watchdog functioning), and a watchdog timer register for writing
the count value.
 If the processor/controller doesn't contain a built in watchdog timer, the
same can be implemented using an external watchdog timer IC circuit.
 The external watchdog timer uses hardware logic for enabling/disabling,
resetting the watchdog count, etc. instead of the firmware based 'writing' to
the status and watchdog timer register.
 The Microprocessor supervisor IC DS1232 integrates a hardware watchdog
timer in it. In modern systems running on embedded operating systems, the
watchdog can be implemented in such a way that when a watchdog timeout
occurs, an interrupt is generated instead of resetting the processor. The
interrupt handler for this handles the situation in an appropriate fashion.
 Figure illustrates the implementation of an external watchdog timer based
microcontroller supervisor circuit for a small scale embedded system.

Q. 10 What is the use of sensors and actuators in an embedded system? Give
examples of different types of sensors and actuators.
Ans.: Sensors and actuators are two critical components of every closed loop control
system. Such a system is also called a mechatronics system. A typical mechatronics
system consists of a sensing unit, a controller, and an actuating unit. A sensing unit
can be as simple as a single sensor or can consist of additional components such as
filters, amplifiers, modulators, and other signal conditioners. The controller accepts
the information from the sensing unit, makes decisions based on the control
algorithm, and outputs commands to the actuating unit. The actuating unit consists
of an actuator and optionally a power supply and a coupling mechanism.
 Sensors: Sensor is a device that when exposed to a physical phenomenon
(temperature, displacement, force, etc.) produces a proportional output
signal (electrical, mechanical, magnetic, etc.). The term transducer is often
used synonymously with sensors. However, ideally, a sensor is a device that
responds to a change in the physical phenomenon. On the other hand, a
transducer is a device that converts one form of energy into another form of
energy. Sensors are transducers when they sense one form of energy input
and output in a different form of energy. For example, a thermocouple
responds to a temperature change (thermal energy) and outputs a
proportional change in electromotive force (electrical energy). Therefore, a
thermocouple can be called a sensor and/or a transducer.
Listed below are various types of sensors, classified by their measurement
objectives:
1. Linear/Rotational sensors
2. Acceleration sensors
3. Force, torque and pressure sensors
4. Flow sensors
5. Temperature sensors
6. Proximity sensors
7. Light sensors
8. Smart material sensors
9. Micro and nano sensors

 Actuators: Actuators are basically the muscle behind a mechatronics
system that accepts a control command (mostly in the form of an electrical
signal) and produces a change in the physical system by generating force,
motion, heat, flow, etc. Normally, the actuators are used in conjunction with
the power supply and a coupling mechanism. The power unit provides
either AC or DC power at the rated voltage and current. The coupling
mechanism acts as the interface between the actuator and the physical
system. Typical mechanisms include rack and pinion, gear drive, belt drive,
lead screw and nut, piston, and linkages.
Actuators can be classified based on the type of energy:
1. Electrical Actuators
2. Electromechanical Actuators
3. Electromagnetic Actuators
4. Hydraulic and Pneumatic Actuators
5. Smart Material Actuators
6. Micro- and Nano actuators.

Q. 11 Explain oscillator circuit and Timing devices for embedded systems.


Ans.: Oscillator Circuit :
 It is analogous to the heartbeat of a living being
 The heart is responsible for the generation of the beat whereas the oscillator
unit of the embedded system is responsible for generating the precise clock
for the processor
 An oscillator provides a source of repetitive AC signal across its output
terminals without needing any input (except a DC supply)
 The signal generated by the oscillator is usually of constant amplitude
 The wave shape and amplitude are determined by the design of the
oscillator circuit and choice of component values
 The frequency of the output wave may be fixed or variable depending on
the oscillator design
 The oscillatory circuit, also called the LC circuit or tank circuit, consists of
an inductive coil of inductance L connected in parallel with a capacitor of
capacitance C. The values of L and C determine the frequency of
oscillations produced by the circuit
 The most important point is that both the capacitor and the inductor are
capable of storing energy

Timing Devices :
 Real-Time clock (RTC) is a system component responsible for keeping
track of time
 RTC holds information like the current time (in hours, minutes and seconds)
in 12 hour/24 hour format, date, month, year, day of the week, etc. and
supplies a timing reference to the system
 The RTC is intended to function even in the absence of power
 The RTC chip contains a microchip for holding the time and date related
information and a backup battery cell for functioning in the absence of
power, in a single IC package
 For an operating system based embedded device, a timing reference is
essential for synchronizing the operations of the OS kernel
 The RTC can interrupt the OS kernel by asserting the interrupt line of the
processor/controller to which the RTC interrupt line is connected
 The OS kernel identifies the interrupt in terms of the Interrupt Request
(IRQ) number generated by an interrupt controller.

Q. 12 What is the requirement of the firmware for embedded systems? Explain the
Process of embedded firmware development.
Ans.: Embedded firmware refers to the control algorithm (program instructions) and/or
the configuration settings that an embedded system developer dumps into the
code (program) memory of the embedded system. It is an unavoidable part of an
embedded system. There are various methods available for developing the
embedded firmware. They are listed below:
1) Write the program in high level languages like Embedded C/C++ using an
Integrated Development Environment (The IDE will contain an editor, compiler,
linker, debugger, simulator, etc. IDEs are different for different families of
processors/controllers. For example, the Keil µVision3 IDE is used for all family
members of the 8051 microcontroller, since it contains the generic 8051 compiler
C51).
2) Write the program in Assembly language using the instructions supported by
your application's target processor/controller.
The instruction set for each family of processor/controller is different and the
program written in either of the methods given above should be converted into a
processor understandable machine code before loading it into the program
memory.
The process of converting the program written in either a high level language or
processor/controller specific Assembly code to machine readable binary code is
called 'HEX File Creation'.
The method used for 'HEX File Creation' is different depending on the
programming technique used. If the program is written in Embedded C/C++
using an IDE, the cross compiler included in the IDE converts it into the
corresponding processor/controller understandable 'HEX File'.
If you are following the Assembly language based programming technique
(method 2), you can use the utilities supplied by the processor/controller vendors
to convert the source code into a 'HEX File'. Third party tools, which may be
free of cost, are also available for this conversion.
For a beginner in the embedded software field, it is strongly recommended to use
the high level language based development technique.
The reasons for this being: writing code in a high level language is easy, and the
code written in a high level language is highly portable, which means you can use
the same code to run on different processors/controllers with little or no
modification.
The only thing you need to do is re-compile the program with the required
processor's IDE, after replacing the include files for that particular processor.
Also the programs written in high level languages are not developer dependent.
Any skilled programmer can trace out the functionality of the program by just
having a look at the program.
It will be much easier if the source code contains necessary comments and
documentation lines. It is very easy to debug and the overall system development
time will be reduced to a greater extent.
The embedded software development process in assembly language is tedious
and time consuming.
The developer needs to know about all the instruction sets of the
processor/controller or at least s/he should carry an instruction set reference
manual with her/him.
A programmer using assembly language technique writes the program according
to his/her view and taste. Often he/she may be writing a method or functionality
which can be achieved through a single instruction as an experienced person's
point of view, by two or three instructions in his/her own style.
So the program will be highly dependent on the developer. It is very difficult for
a second person to understand the code written in Assembly even if it is well
documented.
We will discuss both approaches of embedded software development in a later
chapter dealing with design of embedded firmware, in detail. Two types of
control algorithm design exist in embedded firmware development. The first
type of control algorithm development is known as the infinite loop or 'super
loop' based approach, where the control flow runs from top to bottom and then
jumps back to the top of the program in a conventional procedure.
It is similar to the while(1) { } based technique in C. The second method deals
with splitting the functions to be executed into tasks and running these tasks
using a scheduler which is part of a General Purpose or Real Time Embedded
Operating System (GPOS/RTOS).

Q. 13 Explain characteristics of embedded system with example.


Ans.: Unlike general purpose computing systems, embedded systems possess certain
specific characteristics and these characteristics are unique to each embedded system.
Some of the characteristics of an embedded system are:
o Embedded systems are generally used to perform a specific task
and provide real-time output on the basis of various
characteristics of the embedded system.
o Contains a processing engine, such as a general-purpose
microprocessor

o Typically designed for a specific application or purpose
o Includes a simple (or no) user interface, such as an automotive
engine ignition controller
o Often is resource-limited. For example, it might have a small
memory foot-print and no hard drive
o Might have power limitations, such as a requirement to operate
from batteries.
o Not typically used as a general-purpose computing platform.
o Generally has application software built in, not user-selected.
o Ships with all intended application hardware and software
pre-integrated
o Often is intended for applications without human intervention.

Q. 14 Explain operational quality attributes of embedded systems.


Ans.: Quality attributes are the non-functional requirements that need to be documented
properly in any system design.
The various quality attributes that need to be addressed in any embedded system
development are broadly classified into two, Operational Quality Attributes and
Non-Operational Quality Attributes.
The important quality attributes coming under the operational category are:
1. Response
-Response is a measure of quickness of the system. It gives you an idea about how
fast your system is tracking the changes in input variables.
-Most embedded systems demand fast response which should be almost
Real Time.
-Ex. Embedded system deployed in flight control application should respond in real
time manner.
2. Throughput
-Throughput deals with the efficiency of a system. In general it can be defined as
the rate of production or operation of a defined process over a stated period of time.
-Throughput is generally measured in terms of 'Benchmarks'. A benchmark is a
reference point by which something can be measured.
3. Reliability
-Reliability is a measure of how much percent you can rely upon the proper
functioning of the system or what is the percentage susceptibility of the system to
failure.
-Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR) are the
terms used in defining system reliability.
-MTBF gives the frequency of failure in hours/weeks/months.
-MTTR specifies how long the system is allowed to be out of order following a
failure.
4. Maintainability
-Maintainability deals with support and maintenance to the end user or client in
case of technical issues & product failures.
-Maintainability is closely related to system availability.
-Maintainability can be broadly classified into two categories namely Scheduled or
Periodic Maintenance and Maintenance to unexpected failures.
5. Security
-Confidentiality, Integrity and Availability are three measures of information
security.
-Confidentiality deals with the protection of data and applications from
unauthorized disclosure.
-Integrity deals with the protection of data and applications from unauthorized
modification.
-Availability deals with ensuring that data and applications are available to
authorized users whenever required.
6. Safety
-Safety deals with the possible damages that can happen to the operators, public
and environment due to the breakdown of an embedded system or due to the
emission of radioactive or hazardous materials from the embedded products.
-The breakdown of an embedded system may occur due to a hardware failure or a
firmware failure.
-Safety analysis is a must in product engineering to evaluate the anticipated
damages and determine the best course of action to bring down the consequences of
the damages to an acceptable level.

Q. 15 Explain Non-Operational quality attributes of embedded system.


Ans.: The Non-Operational quality attributes of embedded system are as follows :
1. Testability and Debug-ability
2. Evolvability
3. Portability
4. Time to prototype and market
5. Per unit cost and revenue
1. Testability and Debug-ability :
 Testability deals with how easily one can test his /her design application and
by which means he/she can test it.
 For an embedded product testability is applicable to both embedded
hardware and firmware.
 Debug-ability is a means of debugging the product for figuring out the
probable sources that create unexpected behaviour in the total system.
 It has two aspects in embedded system development, namely hardware
level debugging and firmware level debugging.
 Hardware debugging is used for figuring out the issues created by hardware
problems whereas firmware debugging is employed to figure out the
probable errors that appear as a result of flaws in the firmware.
2.Evolvability :
 For a biological system, evolvability refers to the non-heritable variation.
 For an embedded system , it refers to the ease with which the embedded
product can be modified to take advantage of new firmware or hardware
technologies.
3. Portability :
 Portability is a means of system independence.
 An embedded product is said to be portable if the product is capable of
functioning as such in various environments , target processors , controllers
and embedded system.
 A standard embedded product should always be flexible and portable.
4. Time to Prototype and Market :
 It is the time elapsed between the conceptualization of a product and the
time at which the product is ready for selling.
 Product prototyping helps a lot in reducing the time to market.
 Whenever you have a product idea, you may not be certain about the
feasibility of the idea.
 Prototyping is an informal kind of rapid product development in which the
important features of the product under consideration are developed.
 The time to prototype is also another critical factor. If the prototype is
developed faster, the actual estimated development time can be brought
down significantly.
5. Per Unit Cost and Revenue :
 Cost is a factor which is closely monitored by both end user and
manufacturer.
 Any failure to position the cost of a commercial product at a nominal rate
may lead to the failure of the product in the market.
 Every embedded system product has a product life cycle which starts with
the design and the development phase
 Total revenue increases from the product introduction stage to the product
maturity stage.
 The unit cost is very high during the introductory stage.

Q. 16 What is EDLC? What are the Objectives of the EDLC?


Ans.: EMBEDDED PRODUCT DEVELOPMENT LIFE CYCLE (EDLC)
EDLC is Embedded Product Development Life Cycle. It is an Analysis – Design –
Implementation based problem solving approach for embedded systems
development.
There are three phases to Product development:
Analysis involves understanding what product needs to be developed.
Design involves what approach to be used to build the product.
Implementation is developing the product by realizing the design.
Objectives of EDLC
The ultimate aim of any embedded product in a commercial production setup is to
produce marginal benefit.
Marginal benefit is usually expressed in terms of Return On Investment.
The investment for product development includes initial investment, manpower,
infrastructure investment, etc.
EDLC has three primary objectives:
1. Ensure that high quality products are delivered to the user.
2. Risk minimization and defect prevention in product development through
project management.
3. Maximize the productivity.
1. Ensure that high quality products are delivered to the user
Quality in any product development is assessed in terms of the Return On
Investment achieved by the product. The expenses incurred for developing
the product are:
 Initial investment
 Developer recruiting
 Training
 Infrastructure requirement related
Qualitative attributes depend on the budget allocated for the product, so
budget allocation is very important.
2. Risk minimization and defect prevention in product development through
project management
Project management, whether 'loose' or 'tight', is required for product
development. Project management is essential for predictability,
co-ordination and risk minimization. Resource allocation is critical and it
has a direct impact on investment.
Project management is required for:
1. Predictability
 Analyze the time to finish the product.
2. Co-ordination
 Resources(developers) needed to do the job.
3. Risk management
 Backup of resources to overcome critical situation.
 Ensuring defective product is not developed.

3. Maximize the productivity
 Productivity is a measure of efficiency as well as Return On Investment.
 This productivity measurement is based on total manpower efficiency.
 When productivity increases, the investment per product falls, saving
manpower.

Different ways to improve productivity:
 Use of automated tools wherever required.
 Use of resources with a specific set of skills which exactly matches the
requirements of the product, which reduces the time spent in training the
resource.

Q. 18 Explain Analysis and Support phases in embedded system development life


cycle.
Ans.: 1) Analysis and Requirement:
a) This phase is where businesses will work on the source of their problem or
the need for a change.
b) In the event of a problem, possible solutions are submitted and analyzed to
identify the best fit for the ultimate goal(s) of the project.
c) This is where teams consider the functional requirements of the project or
solution.
d) It is also where system analysis takes place—or analyzing the needs of the
end users to ensure the new system can meet their expectations.
e) Systems analysis is vital in determining what a business's needs are, as well
as how they can be met, who will be responsible for individual pieces of the
project, and what sort of timeline should be expected.
f) There are several tools businesses can use that are specific to the second
phase.
g) They include:
 CASE (Computer Aided Systems/Software Engineering)
 Requirements gathering
 Structured analysis
h) Business requirements are gathered in this phase.
i) This phase is the main focus of the project managers and stake holders.
j) Meetings with managers, stake holders and users are held in order to
determine the requirements like: Who is going to use the system? How will
they use the system? What data should be input into the system? What data
should be output by the system?
k) These are general questions that get answered during a requirements
gathering phase.
l) After requirement gathering, these requirements are analyzed for their
validity, and the possibility of incorporating the requirements in the system
to be developed is also studied.
m) Finally, a Requirement Specification document is created which serves the
purpose of guideline for the next phase of the model.
n) The testing team follows the Software Testing Life Cycle and starts the Test
Planning phase after the requirements analysis is completed.

2)Support:

a) Final phase involves maintenance and regular required updates. This step is
when end users can fine-tune the system, if they wish, to boost
performance, add new capabilities or meet additional user requirements.
b) The maintenance phase of the SDLC occurs after the product is in full
operation. Maintenance of software can include software upgrades, repairs,
and fixes of the software if it breaks.
c) Software applications often need to be upgraded or integrated with new
systems the customer deploys. It is often necessary to provide additional
testing of the software or version upgrades. During the maintenance phase,
errors or defects may exist, which would require repairs during additional
testing of the software. Monitoring the performance of the software is also
included during the maintenance phase.
d) The Support stage of the SDLC deals with the on-going support and
maintenance of the business solution. The long-term support branch owners
(may be AIMS, CS&P or O&S depending on support type) are responsible
for the maintenance and up keep of all project delivered documentation
used to facilitate ongoing support and maintenance.

Q. 19 Explain Conceptualization and Deployment phases in embedded system


development life cycle.
Ans.: Conceptualization
Conceptualization can be viewed as a phase shaping the need of an end user or
convincing the end user, whether it is a feasible product and how this product can
be realised.
Conceptualization phase involves following activities
I. Analysis and study activity-
1. Feasibility study- It examines the need for the product carefully and suggests
one or more possible solutions to build the need as a product along with
alternatives.
2. Cost benefit analysis- it is a means of identifying, revealing and assessing the
total development cost and the profit expected from the product.
3. Product scope- it deals with what is in scope for the product and what is not in
scope for the product.
II. Planning activities-it covers various plans required for the product
development, like the plan and schedule for executing the next phases following
conceptualisation, resource planning, risk management plans, etc.
At the end of the conceptualization phase, the reports on analysis and study
activities and planning activities are submitted to the client for review and approval
along with any one of the following recommendation
1. The product is feasible; proceed to the next phase of the product life cycle.
2. Product is not feasible; scrap the project.
Deployment
Deployment is the process of launching the first fully functional model of the
product in the market or handing over the fully functional initial model to the end
user or client. This phase is also known as First Customer Shipping (FCS).
Important task performed during this phase is as follows-
1. Notification of product deployment-whenever the product is ready to
launch in the market, the launching ceremony details should be
communicated to the stake holders and to the public if it is a commercial
product.
2. Execution of training plan-proper training should be given to the end user
to get them acquainted with the new product.
3. Product installation-install the product as per the installation document to
ensure that it is fully functional.
4. Product post-implementation review-once product is launched in the
market conduct a post-implementation review to determine the success and
to document the problems faced during installation and the solutions
adopted to overcome them.

Q. 20 Explain the relation between Upgrade, Retirement and Need phases of EDLC?
Ans.: The following figure depicts the different phases in EDLC:

Need:
The need may come from an individual or from the public or from a company.
'Need' should be articulated to initiate the Development Life Cycle; a 'Concept
Proposal' is prepared which is reviewed by the senior management for approval.
Need can be visualized in any one of the following three ways:
New or Custom Product Development.
Product Re-engineering.
Product Maintenance.

Upgrades:
Deals with the development of upgrades (new versions) for the product which is
already present in the market.
Product upgrade results as an output of major bug fixes.
During the upgrade phase, the system is subject to design modification to fix the
major bugs reported.

Retirement/Disposal
The retirement/disposal of the product is a gradual process.
This phase is the final phase in a product development life cycle where the product
is declared as discontinued from the market.
The disposal of a product is essential due to the following reasons:
Rapid technology advancement
Increased user needs.
UNIT-2
Q. 21 Explain the Sequential Program Model for Seat Belt Warning System.
Ans.: SEQUENTIAL PROGRAM MODEL
 In the sequential programming model, the functions or processing
requirements are executed in sequence .
 It is same as the conventional procedural programming.
 Here the program instructions are iterated and executed conditionally and
the data gets transformed through a series of operations.
 Finite State Machines(FSMs) are good choice for sequential program
modeling.
 Another important tool used for modeling sequential program is Flow
Charts.
 The FSM approach represents the states, events, transitions and actions,
whereas the Flow Charts models the execution flow.
 The execution of functions in a sequential program model for the 'Seat Belt
Warning' system is as follows:

#define ON 1
#define OFF 0
#define YES 1
#define NO 0
void seat_belt_warn()
{
wait_10sec();
if(check_ignition_key()==ON)
{
if(check_seat_belt()==OFF)
{
set_timer(5);
start_alarm();
while((check_seat_belt()==OFF) && (check_ignition_key()==ON) &&
(timer_expire()==NO));
stop_alarm();
}
}
}

Q. 22 Explain the fundamental issues in hardware and software co-design.


Ans.: The fundamental issues in hardware and software co-design as follows:

1. Selecting the model:


-The h/w – s/w co-design models are used for capturing & describing the system
characteristics.
-A model is a formal system consisting of objects and composition rules.
-When the design moves to the implementation aspect, the information about the
system components is revealed and the designer has to switch to a model capable
of capturing the system's architecture.
2. Selecting Architecture :
-The architecture specifies how the system is going to be implemented in terms
of the number and types of different components and the interconnection among them.

 Controller Architecture:
-Controller Architecture implements the finite state machine model using
state register and 2 combinational circuits.
-The state register holds the present state and the combinational circuits
implements the logic for next state and output.

 Data Path Architecture:


-The data path architecture is best suited for implementing the data flow
graph model where the output is generated as a result of a set of predefined
computations on the input data.

 Finite State Machine Data Path(FSMD):


-The FSMD architecture combines the controller architecture with data path
architecture.
-The controller generates the control input whereas the data path processes
the data.
-It implements a controller with data path.

 Complex Instruction Set Computing (CISC):


-The CISC architecture uses an instruction set representing complex
operations.
-The CISC instruction set is able to perform a large complex operation with a
single instruction.

 Reduced Instruction Set Computing (RISC):


-The RISC architecture uses an instruction set representing simple operations
and it requires the execution of multiple RISC instructions to perform a
complex operation.
-The data path of RISC architecture contains a large register file for storing
the operand and output.

 Very Long Instruction Word(VLIW):


-The VLIW architecture implements multiple functional units in the data
path.

Q. 23 Write a note on electronic components used in embedded system.


Ans.: Electronic component in embedded system are of two types,
1. Analog electronic components.
2. Digital electronic components.

1. Analog electronic components:-


a. A resistor limits the current flowing through a circuit.
b. Interfacing of LEDs, buzzers, etc. with the port pins of a microcontroller through
current limiting resistors is a typical example of the usage of resistors in
embedded system applications.
c. Capacitors and inductors are used in signal filtering and resonating circuits.
d. Electrolytic capacitors, ceramic capacitors, tantalum capacitors, etc. are the
commonly used capacitors in embedded hardware design.
e. Inductors are widely used for filtering the power supply from ripple and noise
signals.
f. PN junction diodes, Schottky diodes, zener diodes, etc. are widely used in
embedded system hardware design.
g. A zener diode acts as a normal PN junction diode when forward biased.
h. It also permits current flow in the reverse direction if the voltage is greater than
the junction breakdown voltage.
i. Transistors in embedded applications are used for either switching or
amplification purposes.
j. In switching applications the transistor is in either the ON or OFF state.
k. In amplification operation the transistor is always in the ON state (partially ON).
l. The current is below the saturation current value and the current through the
transistor is variable.
2. Digital electronic components :-
a. Digital electronics deals with digital or discrete signals.
b. Microprocessors, microcontrollers and systems on chip work on digital principles.
c. They interact with the rest of the world through digital I/O interfaces and process
digital data.
d. Embedded systems employ various digital electronic circuits for 'glue logic'
implementation.
e. 'Glue logic' is the custom digital electronic circuitry required to achieve a
compatible interface between two different integrated circuit chips.
f. Address decoders, latches, encoders/decoders, etc. are examples of glue circuits.
g. Transistor Transistor Logic (TTL), Complementary Metal Oxide Semiconductor
(CMOS) logic, etc. are some of the standards describing the electrical
characteristics of the digital signals in a digital system.
1. Open collector and tri state output.
2. Logic gate
3. Buffer
4. Latch
5. Decoder
6. Encoder
7. Multiplexer
8. De-multiplexer

Q. 25 Write note on Device Drivers.


Ans.: In computing, a device driver is a computer program that operates or controls a
particular type of device that is attached to a computer. A driver provides
a software interface to hardware devices, enabling operating systems and other
computer programs to access hardware functions without needing to know precise
details of the hardware being used. A driver communicates with the device through
the computer bus or communications subsystem to which the hardware connects.
When a calling program invokes a routine in the driver, the driver issues commands
to the device. Once the device sends data back to the driver, the driver may invoke
routines in the original calling program. Drivers are hardware dependent
and operating-system-specific. They usually provide the interrupt handling required
for any necessary asynchronous time-dependent hardware interface.
The main purpose of device drivers is to provide abstraction by acting as a
translator between a hardware device and the applications or operating systems that
use it. Programmers can write the higher-level application code independently of
whatever specific hardware the end-user is using.
For example, a high-level application for interacting with a serial port may simply
have two functions for "send data" and "receive data". At a lower level, a device
driver implementing these functions would communicate to the particular serial
port controller installed on a user's computer. The commands needed to control
a 16550 UART are much different from the commands needed to control
an FTDI serial port converter, but each hardware-specific device
driver abstracts these details into the same (or similar) software interface.
Common levels of abstraction for device drivers include:
For hardware:
 Interfacing directly
 Writing to or reading from a device control register
 Using some higher-level interface (e.g. Video BIOS)
 Using another lower-level device driver (e.g. file system drivers using disk
drivers)
 Simulating work with hardware, while doing something entirely different

For software:
 Allowing the operating system direct access to hardware resources
 Implementing only primitives
 Implementing an interface for non-driver software (e.g., TWAIN)
 Implementing a language, sometimes quite high-level (e.g., PostScript)
So choosing and installing the correct device drivers for given hardware is often a
key component of computer system configuration.
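The send/receive abstraction described above can be sketched in C. This is a minimal illustration, not a real driver: the structure, function names and the stubbed 16550-style backend are all invented for the example.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical uniform driver interface: every serial driver exposes
   the same two operations, whatever the underlying controller is. */
typedef struct {
    int (*send)(const uint8_t *buf, size_t len);
    int (*recv)(uint8_t *buf, size_t len);
} serial_driver_t;

/* A 16550-style backend would poke the UART's registers here; the
   bodies are stubbed so the sketch stays self-contained. */
static int uart16550_send(const uint8_t *buf, size_t len) {
    (void)buf;                 /* real code: write to TX register */
    return (int)len;           /* pretend every byte was queued   */
}
static int uart16550_recv(uint8_t *buf, size_t len) {
    for (size_t i = 0; i < len; i++)
        buf[i] = 0x55;         /* real code: read from RX register */
    return (int)len;
}

static const serial_driver_t uart16550_driver = {
    uart16550_send, uart16550_recv
};

/* Application code depends only on the abstract interface, so a
   different backend (e.g. an FTDI converter) could be swapped in
   without touching this function. */
int app_send(const serial_driver_t *drv, const char *msg, size_t n) {
    return drv->send((const uint8_t *)msg, n);
}
```

Registering one `serial_driver_t` per controller is one common way to get the "same (or similar) software interface" the text describes.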

Q. 26 Explain high level language to machine language conversion process?


Ans.:  A compiler translates the code written in one language to some other
language without changing the meaning of the program. It is also expected
that a compiler should make the target code efficient and optimized in
terms of time and space.

 Compiler design principles provide an in-depth view of translation and


optimization process. Compiler design covers basic translation mechanism
and error detection & recovery. It includes lexical, syntax, and semantic
analysis as front end, and code generation and optimization as back-end.

 The process of converting the source code representation of your


embedded software into an executable binary image involves three distinct
steps:

1. Each of the source files must be compiled or assembled into an object file.

2. All of the object files that result from the first step must be linked together
to produce a single object file, called the relocatable program.
3. Physical memory addresses must be assigned to the relative offsets within
the relocatable program in a process called relocation.

 The result of the final step is a file containing an executable binary image
that is ready to run on the embedded system.
 The embedded software development process has three steps. Each
of these development tools takes one or more files as input and produces a
single output file.
 Each of the steps of the embedded software build process is a
transformation performed by software running on a general-purpose
computer. To distinguish this development computer (usually a PC or Unix
workstation) from the target embedded system, it is referred to as the host
computer. The compiler, assembler, linker, and locator run on a host
computer rather than on the embedded system itself. Yet, these tools
combine their efforts to produce an executable binary image that will
execute properly only on the target embedded system.
 The job of a compiler is mainly to translate programs written in some
human-readable language into an equivalent set of opcodes for a particular
processor. In that sense, an assembler is also a compiler (you might call it
an "assembly language compiler"), but one that performs a much simpler
one-to-one translation from one line of human-readable mnemonics to the
equivalent opcode. Everything in this section applies equally to compilers
and assemblers.
 Together these tools make up the first step of the embedded software build
process. Of course, each processor has its own unique machine language, so
you need to choose a compiler that produces programs for your specific
target processor. In the embedded systems case, this compiler almost always
runs on the host computer. It simply doesn't make sense to execute the
compiler on the embedded system itself. A compiler such as this—that runs
on one computer platform and produces code for another—is called a cross-
compiler. The use of a cross-compiler is one of the defining features of
embedded software development.
 The GNU C compiler (gcc) and assembler (as) can be configured as either
native compilers or cross-compilers. These tools support an impressive set
of host-target combinations. The gcc compiler will run on all common PC
and Mac operating systems. The target processor support is extensive,
including AVR, Intel x86, MIPS, PowerPC, ARM, and SPARC. Additional
information about gcc can be found online at http://gcc.gnu.org.
 Regardless of the input language (C, C++, assembly, or any other), the
output of the cross-compiler will be an object file. This is a specially
formatted binary file that contains the set of instructions and data resulting
from the language translation process. Although parts of this file contain
executable code, the object file cannot be executed directly. In fact, the
internal structure of an object file emphasizes the incompleteness of the
larger program.
 The contents of an object file can be thought of as a very large, flexible data
structure. The structure of the file is often defined by a standard format such
as the Common Object File Format (COFF) or Executable and Linkable
Format (ELF). If you'll be using more than one compiler (i.e., you'll be
writing parts of your program in different source languages), you need to
make sure that each compiler is capable of producing object files in the
same format; gcc supports both of the file formats previously mentioned.
Although many compilers (particularly those that run on Unix platforms)
support standard object file formats such as COFF and ELF, some others
produce object files only in proprietary formats. If you're using one of the
compilers in the latter group, you might find that you need to get all of your
other development tools from the same vendor.
 Most object files begin with a header that describes the sections that follow.
Each of these sections contains one or more blocks of code or data that
originated within the source file you created. However, the compiler has
regrouped these blocks into related sections. For example, in gcc all of the
code blocks are collected into a section called text, initialized global
variables (and their initial values) into a section called data, and
uninitialized global variables into a section called bss.
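The section grouping can be made concrete with a small C file. The variables are invented examples; the comments mark where gcc would typically place each definition (read-only constants usually land in a rodata section):

```c
/* Section placement with gcc defaults (invented example symbols). */
int initialized_global = 42;   /* .data: initialized global + value  */
int uninitialized_global;      /* .bss: zero-filled at startup       */
const int rom_constant = 7;    /* .rodata: read-only constant data   */

int compute(int x) {           /* machine code lands in .text        */
    return x + initialized_global + rom_constant;
}
```

Running `objdump -h` or `nm` on the resulting object file would show these sections and the symbols placed in them.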
 There is also usually a symbol table somewhere in the object file that
contains the names and locations of all the variables and functions
referenced within the source file. Parts of this table may be incomplete,
however, because not all of the variables and functions are always defined
in the same file. These are the symbols that refer to variables and functions
defined in other source files. And it is up to the linker to resolve such
unresolved references.

Q. 27 Explain Finite state machine Model for automatic Tea/Coffee vending


machine.
Ans.:  The tea/coffee vending is initiated by the user inserting a 5-rupee coin.
 After inserting the coin, the user can either select 'Coffee' or 'Tea', or press
'Cancel' to cancel the order and take back the coin.
 The model contains 4 states, namely: 'Wait for Coin', 'Wait for User Input',
'Dispense Tea' and 'Dispense Coffee'.
 The event 'Insert Coin' transitions the state to 'Wait for User Input'.
 The system stays in this state until a user input is received from the button
'Cancel', 'Tea' or 'Coffee'.
 If the event triggered in the wait state is a 'Cancel' button press, the coin is
pushed out and the state transitions to 'Wait for Coin'.
 If the event received in the wait state is either a 'Tea' button press or a
'Coffee' button press, the state changes to 'Dispense Tea' or 'Dispense
Coffee' respectively.
 Once the coffee/tea vending is over, the respective state transitions back to
the 'Wait for Coin' state.
E:- Event
A:- Action
State A:- Wait for Coin
State B:- Wait for User Input
State C:- Dispense Tea
State D:- Dispense Coffee
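The transition table above can be sketched as a C switch-based state machine. The enum and function names are invented for illustration, and the actions (dispensing, pushing the coin out) are reduced to comments:

```c
/* States and events of the tea/coffee vending machine model. */
typedef enum { WAIT_FOR_COIN, WAIT_FOR_INPUT,
               DISPENSE_TEA, DISPENSE_COFFEE } state_t;
typedef enum { EV_INSERT_COIN, EV_CANCEL, EV_TEA,
               EV_COFFEE, EV_VEND_DONE } event_t;

/* Pure transition function: returns the next state for (state, event);
   unexpected events leave the state unchanged. */
state_t next_state(state_t s, event_t e) {
    switch (s) {
    case WAIT_FOR_COIN:
        return (e == EV_INSERT_COIN) ? WAIT_FOR_INPUT : s;
    case WAIT_FOR_INPUT:
        if (e == EV_CANCEL) return WAIT_FOR_COIN;  /* push coin out */
        if (e == EV_TEA)    return DISPENSE_TEA;
        if (e == EV_COFFEE) return DISPENSE_COFFEE;
        return s;
    case DISPENSE_TEA:
    case DISPENSE_COFFEE:
        return (e == EV_VEND_DONE) ? WAIT_FOR_COIN : s;
    }
    return s;
}
```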

Q. 30 Elaborate use of UML in embedded system development


Ans.: Software in embedded systems design needs to be considered carefully during
software specification and analysis. The Unified Modeling Language and extension proposals in
the real-time domain can be used for the development of new design flows. UML
can be used for specification, design and implementation of modern embedded
systems. UML can also be used for modelling the system from functional
requirements through executable specifications and for that purpose it is important
to be able to model the context for an embedded system – both environmental and
user-driven.[6]
Some key concepts of UML related to embedded systems:

 UML is not a single language, but a set of notations, syntax and semantics to
allow the creation of families of languages for particular applications.
 Extension mechanisms in UML like profiles, stereotypes, tags, and constraints
can be used for particular applications.
 Use-case modelling to describe system environments, user scenarios, and test
cases.
 UML has support for object-oriented system specification, design and
modelling.
 Growing interest in UML from the embedded systems and real-time
community.
 Support for state-machine semantics which can be used for modelling and
synthesis.
 UML supports object-based structural decomposition and refinement.
Q. 31 Explain embedded firmware design approaches.
Ans.:  The firmware design approaches for embedded product is purely dependent
on the complexity of the functions to be performed, the speed of operation
required etc.
 Two basic approaches are used for embedded firmware design.
 They are conventional procedural based firmware design and embedded
operating system based design.
1. The super loop based approach/conventional procedural based
firmware design.
 Conventional procedural programming is executed task by
task.
 The task listed at the top of the program code is executed
first and the tasks just below the top are executed after
completing the first task.
 In a multiple task based system, each task is executed in
serial in this approach.
 Non-ending repetition is achieved by using an infinite loop.
this approach is also referred as super loop based approach.
 Since the tasks run inside an infinite loop, the only way to
come out of the loop is either a hardware reset or an interrupt
assertion.
 A hardware reset brings the program execution back to the
main loop.
 A super loop based design doesn't require an operating system,
since there is no need to schedule which task is to be
executed or to assign a priority to each task.
 In a super loop based design, the priorities are fixed and the
order in which the tasks to be executed are also fixed.
 Design is deployed in low-cost product and product where
response time is not time critical.
 Example of super loop based is an electronic video game toy
containing keypad and display unit.
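A minimal C sketch of the super loop approach, with the task bodies left as placeholders (the function names are invented for the example):

```c
static int iterations;    /* visible only to make the sketch testable */

void read_keypad(void)    { /* poll the key matrix  */ }
void update_display(void) { /* refresh the display  */ }
void play_sound(void)     { /* drive the buzzer     */ }

/* One serial pass through the task list, in fixed order. */
void run_tasks_once(void) {
    read_keypad();
    update_display();
    play_sound();
    iterations++;
}

void super_loop(void) {
    /* initialization code runs once after reset */
    for (;;)              /* only a reset or an interrupt leaves this */
        run_tasks_once(); /* loop; priorities are fixed and implicit  */
}
```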
2. The embedded operating system based approach.
 The general purpose OS (GPOS) based design is very similar
to conventional PC based application development, where an
operating system is already in place and you create and run
user applications on top of it.
 An example of an embedded product using the Microsoft
Windows XP OS is a PDA.
 Real time operating system (RTOS) based design approach is
employed in embedded products demanding real time
response.
 An RTOS responds in a timely and predictable manner to events.
 An RTOS contains a real-time kernel responsible for
scheduling tasks, multiple threads etc.
 An RTOS allows flexible scheduling of system resources like
the CPU and memory and offers some way to communicate
between tasks.

Q. 33 Explain Interrupt Handling and Time Management Performed by RTOS?


Ans.: Interrupt Handling
1. Deals with the handling of various types of interrupts.
2. Interrupts provide real-time behaviour to systems.
3. Interrupts informs the processors that an external device or an associated
task require immediate attention of the CPU.
4. An interrupt can be either synchronous or asynchronous.
5. Interrupts which occur in sync with the currently executing task are known
as synchronous interrupts.
6. Interrupts which are not in sync with the currently executing task are known
as asynchronous interrupts.
7. For synchronous interrupts, the interrupt handler runs in the same context
as the interrupting task.
8. For asynchronous interrupts, the interrupt handler is usually written as a
separate task and runs in a different context.

Time Management
1. Accurate time management is essential for providing a precise time reference
for all applications.
2. The time reference to the kernel is provided by a high-resolution real time
clock (RTC) hardware chip.
3. The hardware timer is programmed to interrupt the processor/controller at a
fixed rate. This timer interrupt is referred to as the timer tick.
4. The timer tick is taken as the timing reference by the kernel.
The timer tick interval may vary depending on the hardware timer.
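The timer tick can be sketched in C as follows. The tick rate and all names are assumptions; on real hardware the handler would be registered in the interrupt vector table rather than called directly:

```c
#include <stdint.h>

#define TICK_HZ 100u                 /* assumed 10 ms tick rate */

static volatile uint32_t tick_count; /* advanced only by the ISR */

/* Timer interrupt handler: the kernel's sole timing reference.
   A real kernel would also decrement task delay counters here. */
void timer_tick_isr(void) {
    tick_count++;
}

/* Convert elapsed ticks to milliseconds for application code. */
uint32_t ticks_to_ms(uint32_t ticks) {
    return ticks * (1000u / TICK_HZ);
}
```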

Q.34 What points should be considered to select RTOS?


Ans.:  A lot of factors need to be analyzed carefully before making a decision on the
selection of a RTOS.
 The factors can be either functional or non-functional.
Functional Requirements:
Functional requirements of RTOS are as follows:
 Processor Support
 Memory Requirements
 Real-Time Capabilities
 Kernel and Interrupt Latency
 Inter Process Communication and Task Synchronization
 Modularization Support
 Support for Networking and Communication
 Development Language Support
Processor Support: It is essential to ensure that the RTOS supports the target processor.
Memory Requirement:
 The OS requires ROM memory for holding OS files, and these are normally stored
in a non-volatile memory like flash.
 OS also requires working memory RAM for loading OS services.
 Since embedded systems are memory constrained, it is essential to evaluate the
minimal ROM and RAM requirements for OS under consideration.

Real Time Capabilities: The task/process scheduling policies play an important


role in Real time behaviour of an OS.

Kernel and Interrupt Latency:


 The kernel of the OS may disable interrupts while executing certain services, and
this may lead to interrupt latency.
 For an embedded system whose response requirements are high, this latency
should be minimal.
Inter Process Communication and Task Synchronization:
 The implementation of inter process communication and synchronization is OS
kernel dependent.
 Certain kernels may provide a bunch of options whereas others provide very
limited options.
Modularization Support: It is very useful if the OS supports modularization, wherein
the developer can choose the essential modules and recompile the OS image for
functioning.

Support for Networking and Communication:


 OS kernel may provide stack implementation and driver support for a bunch of
communication interfaces and networking.
 Ensure that OS under consideration provides support for all interfaces required
by embedded product.
Development Language Support:
 Certain OSes include the run-time libraries required for running applications written
in languages like JAVA and C#.
 A JVM customized for the OS is essential for running Java applications.

Non-Functional Requirements:
Non-functional requirements of RTOS are as follows:
o Custom Developed or Off the shelf
o Cost
o Development and Debugging tools Availability
o Ease of Use
o After sales

Custom Developed or Off the shelf:


o Depending on OS requirement, it is possible to go for the complete development
of OS suiting ES needs or use an off the shelf, readily available OS which is
either a commercial product or open source product which is in close match
with the system requirements.
o The decision on which to select is purely dependent on the development cost,
licensing fees for OS, development time and availability of skilled resources.

Cost: The total cost for developing or buying OS and maintaining it in terms of
commercial product and custom build needs to be evaluated before taking a
decision on the selection.

Development and Debugging tools Availability:


 The availability of development and debugging tools is a critical decision
making factor in the selection of OS for embedded design.
 Certain OS may be superior in performance but the availability of tools for
supporting the development may be limited.
Ease of Use: How easy it is to use a commercial RTOS is another important feature
that needs to be considered in the RTOS selection

After sales: For a commercial embedded RTOS, after sales in the form of email,
on call services, for bug fixes, critical patch updates and supported for production
issues, etc. should be analysed thoroughly.

Q.35 Explain different functions of RTOS.


Ans.: An RTOS typically provides the following functions:
a. Integrated development environment, in embedded C or embedded C++.
b. Real-time clock based hardware & software timers.
c. Scheduler.
d. Device drivers & device manager.
e. Functions for IPCs using signals, event flag groups, semaphores and
message-handling functions.
f. Testing & system debugging.
g. Additional functions, for example a TCP/IP stack.
FEATURES:
 Priority definitions for the tasks & ISTs.
 Limit on the number of tasks.
 Device imaging tool & device drivers.
 Basic kernel functions & scheduling.
 Host-target tools.
 Support for clock time & timer functions.
 Support for a number of processor architectures.

TYPES OF RTOS:
 In-House developed RTOS.
 Broad-based commercial RTOS.
 General purpose OS with RTOS.
 Special focus RTOS.

IN HOUSE DEVELOPED RTOS:


 For small-scale developers.
 The development company uses code built by its in-house
group of engineers & system integrators.

BROAD-BASED COMMERCIAL RTOS:


 Helps in building a product fast.
 Support to development of GUIs in the systems.
 Provides several development tools.
 Support to DSO.

GENERAL PURPOSE OS WITH RTOS:


 Embedded Linux or Windows XP.
 Used in combination with RTOS functions, e.g. RTLinux.
 Windows XP Embedded for the x86 architecture.

SPECIAL FOCUS RTOS:


Used with specific processors like ARM, e.g. OSEK for automotive
applications or Symbian OS for mobile phones.
Q. 36 Explain Hardware and Software trade-offs for embedded systems.
Ans.: There are two approaches for embedded system design:-
(1) The software development cycle ends, and then the cycle begins for the
process of integrating the software into the hardware, at the time when a system is
designed.
(2) Both cycles proceed concurrently when co-designing a time-critical
sophisticated system.
 Hardware Software Trade-off:

It is possible that certain subsystems in hardware (microcontroller), such as IO memory
accesses, real-time clock, system clock, pulse width modulation, timer and serial
communication, are also implemented by the software.
 A microcontroller featuring serial communication, a real-time clock
and timers may cost more than a microprocessor with external
memory and a software implementation. Hardware implementations
provide the advantage of processing speed.
 Hardware implementation advantages:

i) Reduced memory for the program.


ii) Reduced number of chips but at an increased cost.
iii) Simple coding for the device drivers.
iv) Internally embedded codes, which are more secure than at the external ROM.
v) Energy dissipation can be controlled by controlling the clock rate and voltage.
 Software implementation advantages:-

i) Easier to change when new hardware versions become available.


ii) Programmability for complex operations.
iii) Faster development time.
iv) Modularity and portability.
v) Use of standard software engineering, modeling and RTOS tools.
vi) Faster speed of operation of complex functions with high-speed
microprocessors.
vii) Less cost for simple systems.
Choosing the right platform:-
 Processor ASIP or ASSP
 Multiple processors
 System-on-Chip
 Memory
 Other Hardware Units of System
 Buses
 Software Language
 RTOS (real-time programming OS)
 Code generation tools
 Tools for finally embedding the software into binary image.
 Embedded System Processor Choices
 Processor Less System
 System with Microprocessor or Microcontroller or DSP
 System with Single purpose processor or ASSP in VLSI or FPGA.
Factors and Needed Features Taken into Consideration:-
 When the 32-bit system, 16kB+ on chip memory and need of cache,
memory management unit or SIMD or MIMD or DSP instructions arise, we
use a microprocessor or DSP.
 Video game, voice recognition and image-filtering systems − need a DSP.
 Microcontroller provides the advantage of on-chip memories and
subsystems like the timers.

Factors for On-Chip Features


1. 8 bit or 16 bit or 32 bit ALU
2. Cache,
3. Memory Management Unit or
4. DSP calculations
5. Intensive computations at fast rate
6. Total external and internal Memory up to or more than 64 kB
7. Internal RAM Internal ROM/EPROM/EEPROM
8. Flash
9. Timer 1, 2 or 3
10. Watchdog Timer
11. Serial Peripheral Interface Full duplex
12. Serial Synchronous Communication Interface (SI) Half Duplex.

Hardware Sensitive Programming:-

A system can have memory-mapped IOs or IO-mapped IOs; IO instructions are processor
sensitive. The processor may provide a fixed-point ALU only or floating-point
operations, and may make provision for the execution of SIMD (single instruction
multiple data) and VLIW (very long instruction word) instructions. Programming of
the modules needing SIMD and VLIW instructions is handled differently in different
processors. Assembly language sometimes facilitates an optimal use of the
processor's special features and instructions. Advanced processors usually provide
the compiler or an optimizing compiler sub-unit to obviate the need for programming
in assembly.
Memory Sensitive:-
The real-time programming model and algorithm used by a programmer depend on the
memory available and the processor performance. The memory addresses of IO device
registers, buffers, control registers and vector addresses for the interrupt sources or
source groups are prefixed in a microcontroller. Programming for these takes the
addresses into account. The same addresses must be allotted for these by the RTOS.
Memory-sensitive programs need to be optimized for the memory use by skillful
programming for example, ARM Thumb® instruction set use.
UNIT-3
Q. 38 Explain different terminology used in memory system designing?
Ans.: Terminology used in the detailed design and application of memory systems is as
follows:
1)Access time:
 It is the time to access a word in memory.
 Access time specifies to perform a read or a write operation.
 For a read operation, that time will be when the data appears at the
output port of the device.
 For a write operation, that time will be when the data is successfully
written into the memory.

2)Cycle time:
 It is the time interval between one read/write operation and the next
read/write operation.
 The time interval from the start of one read or write operation until
the start of the next.
 It is a measure of how quickly the memory can be repeatedly
accessed.

3)Block size:
 It is the size of a collection of words in memory.
 When quantities of data are transferred within the system, the
units of transfer are called blocks.
 The block size specifies the number of words in such a collection.

4)Band width:
 Memory bandwidth is a measure of the word transmission rate to/from
memory via the memory I/O bus.
 When data is read from the memory, the pattern on each data
line will be a square wave.
 The highest frequency of that square wave is the memory
bandwidth.

5)Latency:
 It is the amount of time required to access the first of a sequence of
words.
 It measures the time necessary to compute the address of that
sequence and to locate its first block of words in memory.
6)Block access time:
 It is the time required to access the entire block, measured from the
start of the read.
 It includes the time to find the 0th word of the block and then transfer
the remaining words.

7)Page:
 It is a logical view placed on a larger collection of words in
memory.
 A page generally consists of blocks; the size of a page can be given in
words or blocks.
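The latency, block access time and bandwidth terms above relate by simple arithmetic, sketched here with invented figures:

```c
#include <stdint.h>

/* Block access time = latency (time to locate word 0 of the block)
   plus the transfer time of the remaining words. */
uint32_t block_access_ns(uint32_t latency_ns, uint32_t words,
                         uint32_t word_time_ns) {
    return latency_ns + (words - 1u) * word_time_ns;
}

/* Peak bandwidth = bus width (bytes per transfer) * transfer rate. */
uint64_t bandwidth_bytes_per_s(uint32_t bus_bytes, uint32_t rate_mhz) {
    return (uint64_t)bus_bytes * rate_mhz * 1000000ull;
}
```

For example, a 60 ns latency and eight 10 ns word transfers give a 130 ns block access time; a 4-byte bus at 100 MHz gives a 400 MB/s peak bandwidth.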

Q. 39 What is memory map? Why is memory map necessary in the design of


embedded systems?
Ans.:  Formulating a memory map is a useful early step in the design of the core
system .
 The map specifies the allocation and use of each location in the physical
memory address space.
 At a bare minimum the memory map should identify the data and code
space.
 The memory map lists the addresses in memory allocated to each portion
of the application; it refers to primary physical memory.
 From a high-level perspective the memory subsystem is comprised of two
basic types: RAM and ROM.
 RAM is used to hold words that may change at runtime.
 This will be the space available to hold data among other things.
 A portion of the RAM memory may be allocated for non-volatile RAM that
is be used for data that needs to be retained if power is removed from the
system.
 If the design is using memory mapped I/O, then all the physical memory
will not be available for data or code.
 ROM is used to hold words that are not expected to change at runtime.
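A fragment of a hypothetical memory map, expressed the way embedded C code commonly encodes one; every address and size below is invented for illustration:

```c
#include <stdint.h>

/* Hypothetical memory map for a small MCU. */
#define ROM_BASE   0x08000000u   /* code and constants          */
#define ROM_SIZE   0x00100000u
#define RAM_BASE   0x20000000u   /* data, stack, heap           */
#define RAM_SIZE   0x00020000u
#define UART_BASE  0x40001000u   /* memory-mapped I/O registers */
#define UART_SIZE  0x00000400u

/* With memory-mapped I/O, a device register is just a volatile
   pointer carved out of the address space (so its accesses are
   never optimized away): */
#define UART_DATA  (*(volatile uint32_t *)(UART_BASE + 0x0u))

/* Helper: does an address fall inside a mapped region? */
int in_region(uint32_t addr, uint32_t base, uint32_t size) {
    return addr >= base && addr - base < size;
}
```

The point of the sketch is the last one made in the answer: the UART region is part of the physical address space, so those addresses are not available for code or data.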

Q. 40 What are some of the factors that should be considered when designing a
memory map for an embedded system?
Ans.: In computer science, a memory map is a structure of data (which usually resides in
memory itself) that indicates how memory is laid out. Memory maps can have a
different meaning in different parts of the operating system. It is the fastest and
most flexible cache organization which uses an associative memory. The
associative memory stores both the address and content of the memory word.
In the boot process, a memory map is passed on from the firmware in order to
instruct an operating system kernel about memory layout. It contains the
information regarding the size of total memory, any reserved regions and may also
provide other details specific to the architecture.
In virtual memory implementations and memory management units, a memory map
refers to page tables, which store the mapping between a certain process's virtual
memory layout and how that space relates to physical memory addresses.
In native debugger programs, a memory map refers to the mapping between loaded
executable/library files and memory regions. These memory maps are used to
resolve memory addresses (such as function pointers) to actual symbols.

BIOS Memory map


The PC BIOS provides a set of routines that can be used by the operating system to
get the memory layout. Some of the available routines are:
BIOS Function: INT 0x15, AX=0xE801 [1]:
This BIOS interrupt call is used by the running OS to get the memory size for
64MB+ configurations. It is supported by AMI BIOSes dated August 23, 1994 or
later. The operating system just sets AX to 0xE801 then calls int 0x15.

Q. 41 Classify memories based on their strength and weaknesses.


Ans.: Introduction to Memory Types
Many types of memory devices are available for use in modern computer systems.
As an embedded software engineer, you must be aware of the differences between
them and understand how to use each type effectively. In our discussion, we will
approach these devices from the software developer's perspective. Keep in mind
that the development of these devices took several decades and that their
underlying hardware differs significantly. The names of the memory types
frequently reflect the historical nature of the development process and are often
more confusing than insightful. Figure 1 classifies the memory devices we'll
discuss as RAM, ROM, or a hybrid of the two.

Types of RAM
The RAM family includes two important memory devices: static RAM (SRAM)
and dynamic RAM (DRAM). The primary difference between them is the lifetime
of the data they store. SRAM retains its contents as long as electrical power is
applied to the chip. If the power is turned off or lost temporarily, its contents will
be lost forever. DRAM, on the other hand, has an extremely short data lifetime,
typically about four milliseconds. This is true even when power is applied
constantly.
In short, SRAM has all the properties of the memory you think of when you hear
the word RAM. Compared to that, DRAM seems kind of useless. By itself, it is.
However, a simple piece of hardware called a DRAM controller can be used to
make DRAM behave more like SRAM. The job of the DRAM controller is to
periodically refresh the data stored in the DRAM. By refreshing the data before it
expires, the contents of memory can be kept alive for as long as they are needed. So
DRAM is as useful as SRAM after all.

Types of ROM
Memories in the ROM family are distinguished by the methods used to write new
data to them (usually called programming), and the number of times they can be
rewritten. This classification reflects the evolution of ROM devices from hardwired
to programmable to erasable-and-programmable. A common feature of all these
devices is their ability to retain data and programs forever, even during a power
failure.
The very first ROMs were hardwired devices that contained a preprogrammed set
of data or instructions. The contents of the ROM had to be specified before chip
production, so the actual data could be used to arrange the transistors inside the
chip. Hardwired memories are still used, though they are now called "masked
ROMs" to distinguish them from other types of ROM. The primary advantage of a
masked ROM is its low production cost. Unfortunately, the cost is low only when
large quantities of the same ROM are required.
An EPROM (erasable-and-programmable ROM) is programmed in exactly the
same manner as a PROM. However, EPROMs can be erased and reprogrammed
repeatedly. To erase an EPROM, you simply expose the device to a strong source
of ultraviolet light. (A window in the top of the device allows the light to reach the
silicon.) By doing this, you essentially reset the entire chip to its initial,
unprogrammed state. Though more expensive than PROMs, their ability to be
reprogrammed makes EPROMs an essential part of the software development and
testing process.

Hybrid types
As memory technology has matured in recent years, the line between RAM and
ROM has blurred. Now, several types of memory combine features of both. These
devices do not belong to either group and can be collectively referred to as hybrid
memory devices. Hybrid memories can be read and written as desired, like RAM,
but maintain their contents without electrical power, just like ROM. Two of the
hybrid devices, EEPROM and flash, are descendants of ROM devices. These are
typically used to store code. The third hybrid, NVRAM, is a modified version of
SRAM. NVRAM usually holds persistent data.
EEPROMs are electrically-erasable-and-programmable. Internally, they are similar
to EPROMs, but the erase operation is accomplished electrically, rather than by
exposure to ultraviolet light. Any byte within an EEPROM may be erased and
rewritten. Once written, the new data will remain in the device forever, or at least
until it is electrically erased. The primary tradeoff for this improved functionality is
higher cost, though write cycles are also significantly longer than writes to a RAM.
So you wouldn't want to use an EEPROM for your main system memory.
Flash memory combines the best features of the memory devices described thus far.
Flash memory devices are high density, low cost, nonvolatile, fast (to read, but not
to write), and electrically reprogrammable. These advantages are overwhelming
and, as a direct result, the use of flash memory has increased dramatically in
embedded systems. From a software viewpoint, flash and EEPROM technologies
are very similar. The major difference is that flash devices can only be erased one
sector at a time, not byte-by-byte. Typical sector sizes are in the range 256 bytes to
16KB. Despite this disadvantage, flash is much more popular than EEPROM and is
rapidly displacing many of the ROM devices as well.


Q.42 Explain internal architecture of typical memory chip.


Ans.: Primary storage (or main memory or internal memory), often referred to simply
as memory, is the only one directly accessible to the CPU. The CPU continuously
reads instructions stored there and executes them as required. Any data actively
operated on is also stored there in a uniform manner.
In computing, memory refers to the physical devices used to store programs
(sequences of instructions) or data (e.g. program state information) on a temporary
or permanent basis for use in a computer or other digital electronic device. The
term primary memory is used for the information in physical systems which
function at high-speed (RAM), as a distinction from secondary memory, which are
physical devices for program and data storage which are slow to access but offer
higher memory capacity. Primary memory contents held on secondary memory are
called "virtual memory". Memory is a solid-state digital device that provides
storage for data values; it is built from memory cells, each of which exhibits two
stable states, 1 and 0. There are two memory types: volatile and non-volatile memory. Volatile
memory is memory that loses its contents when the computer or hardware device
loses power. Computer RAM is a good example of a volatile memory. Non-volatile
memory, sometimes abbreviated as NVRAM, is memory that keeps its contents
even if the power is lost. CMOS is a good example of a non-volatile memory.
Cache is generally divided into several types, such as L1 cache, L2 cache and L3
cache. Cache built into the CPU itself is referred to as Level 1 (L1) cache. Cache
in a separate chip next to the CPU is called Level 2 (L2) cache. Some CPUs have
both L1 and L2 cache built in and assign a separate chip as Level 3 (L3) cache.
Cache built into the CPU is faster than a separate cache; however, even a separate
cache is still about twice as fast as random access memory (RAM). Cache is more
expensive than RAM, but a motherboard with built-in cache uses it well to
maximize system performance.
Cache serves as temporary storage for data or instructions needed by the
processor. In addition, the cache speeds up data access on the computer because it
buffers data and information that have already been accessed, easing the
processor's workload.

Another benefit of cache memory is that the CPU does not have to use the
motherboard's system bus for data transfer. Each time data must pass through the
system bus, the transfer rate is limited by the motherboard. The CPU can process
data much faster by avoiding the bottleneck created by the system bus.
Memory Trace
◦A temporal sequence of memory references (addresses) from a real program.
Temporal Locality
◦If an item is referenced, it will tend to be referenced again soon
Spatial Locality
◦If an item is referenced, nearby items will tend to be referenced soon.

Q. 43 Explain static and dynamic memory allocation.


Ans.: Static memory allocation:-
1. The first type of memory allocation is known as a static memory allocation,
which corresponds to file scope variables and local static variables. The
addresses and sizes of these allocations are fixed at the time of compilation,
and so they can be placed in a fixed-sized data area which then corresponds
to a section within the final linked executable file. Such memory allocations
are called static because they do not vary in location or size during the
lifetime of the program.
2. There can be many types of data sections within an executable file; the three
most common are normal data, BSS data and read-only data. BSS data
contains variables and arrays which are to be initialised to zero at run-time
and so is treated as a special case, since the actual contents of the section
need not be stored in the executable file. Read-only data consists of constant
variables and arrays whose contents are guaranteed not to change when a
program is being run.

int a; /* BSS data */


int b = 1; /* normal data */
const int c = 2; /* read-only data */
3. As all static memory allocations have sizes and address offsets that are
known at compile-time and are explicitly initialised, there is very little that
can go wrong with them. Data can be read or written past the end of such
variables, but that is a common problem with all memory allocations and is
generally easy to locate in that case. On systems that separate read-only data
from normal data, writing to a read-only variable can be quickly diagnosed
at run-time.

Dynamic memory allocation :-


1. The last type of memory allocation is known as a dynamic memory
allocation, which corresponds to memory allocated via malloc() or operator
new[]. The sizes, addresses and contents of such memory vary at run-time
and so can cause a lot of problems when trying to diagnose a fault in a
program. These memory allocations are called dynamic memory allocations
because their location and size can vary throughout the lifetime of a
program.
2. Such memory allocations are placed in a system memory area called the
heap, which is allocated per process on some systems, but on others may be
allocated directly from the system in scattered blocks. Unlike memory
allocated on the stack, memory allocated on the heap is not freed when a
function or scope is exited and so must be explicitly freed by the
programmer. The pattern of allocations and deallocations is not guaranteed
to be (and is not really expected to be) linear and so the functions that
allocate memory from the heap must be able to efficiently reuse freed
memory and resize existing allocated memory on request. In some
programming languages there is support for a garbage collector, which
attempts to automatically free memory that has had all references to it
removed, but this has traditionally not been very popular for programming
languages such as C and C++, and has been more widely used in functional
languages like ML.
3. Because dynamic memory allocations are performed at run-time rather than
compile-time, they are outside the domain of the compiler and must be
implemented in a run-time package, usually as a set of functions within a
linker library. Such a package manages the heap in such a way as to abstract
its underlying structure from the programmer, providing a common
interface to heap management on different systems. However, this malloc
library must decide whether to implement a fast memory allocator, a space-
conserving memory allocator, or a bit of both. It must also try to keep its
own internal tables to a minimum so as to conserve memory, but this means
that it has very little capability to diagnose errors if any occur.
4. Dynamic memory allocations are the type of memory allocation that can
cause the most problems in a program, since the compiler can determine
almost nothing about them and so cannot warn the programmer about
using uninitialized variables, using freed memory, running off the end of a
dynamically allocated array, etc.
Q. 44 Explain Read and Write operation of DRAM with timing diagram.
Ans.: DRAM is a type of random-access memory that stores each bit of data. Data is
stored in a separate capacitor within an integrated circuit. The capacitor can be
either charged or discharged. Charged and discharged states are normally
represented by 1 or 0.
DRAM READ OPERATION
To read the data from a memory cell, the cell must be selected by its row and
column coordinates, the charge on the cell must be sensed, amplified, and sent to
the support circuitry, and the data must be sent to the data output. In terms of
timing, the following steps must occur:
1. The row address must be applied to the address input pins on the memory
device for the prescribed amount of time before RAS goes low (tASR) and
held (tRAH) after RAS goes low.
2. RAS must go from high to low and remain low (tRAS).
3. A column address must be applied to the address input pins on the memory
device for the prescribed amount of time (tASC) and held (tCAH) after
CAS goes low.
4. WE must be set high for a read operation to occur prior (tRCS) to the
transition of CAS, and remain high (tRCH) after the transition of CAS.
5. CAS must switch from high to low and remain low (tCAS).
6. OE goes low within the prescribed window of time. Cycling OE is optional;
it may be tied low, if desired.
7. Data appears at the data output pins of the memory device. The time at
which the data appears depends on when RAS (tRAC), CAS (tCAC), and
OE (tOEA) went low, and when the address is supplied (tAA).
8. Before the read cycle can be considered complete, CAS and RAS must
return to their inactive states (tCRP, tRP).
DRAM WRITE OPERATION :
To write to a memory cell, the row and column address for the cell must be selected
and data must be presented at the data input pins. The chip's onboard logic either
charges the memory cell's capacitor or discharges it, depending on whether a 1 or 0
is to be stored. In terms of timing, the following steps must occur:
1. The row address must be applied to the address input pins on the memory
device for the prescribed amount of time before RAS goes low and be held
for a period of time.
2. RAS must go from high to low.
3. A column address must be applied to the address input pins on the memory
device for the prescribed amount of time after RAS goes low and before
CAS goes low and held for the prescribed time.
4. WE must be set low for a certain time for a write operation to occur (tWP).
The timing of the transitions are determined by CAS going low (tWCS,
tWCH).
5. Data must be applied to the data input pins the prescribed amount of time
before CAS goes low (tDS) and held (tDH).
6. CAS must switch from high to low.
7. Before the write cycle can be considered complete, CAS and RAS must
return to their inactive states.
(Figure: DRAM read cycle timing diagram)

Q. 45 Explain Refresh Arbitration for DRAM.


Ans.: Internally refreshing one or more DRAM arrays without requiring additional
external command signals. Scheduling of either refresh cycles and/or read/write
access cycles uses an arbitration and selection circuit that receives a refresh request
input signal from an independent oscillator and a row access select RAS input
signal. A word line address multiplexer provides either internally-provided refresh
or externally-provided row-line address signals to a word line decoder.
A refresh row counter uses a token status signal for activating only one refresh row
counter at a time. Instantaneous refresh power is controlled by controlling the
number of cells in each DRAM block. An arbitration and control system includes
an address transition block with a delay for resolving metastability, a refresh control
block, a RAS control block, and an arbitration circuit that temporarily stores
unselected requests.
A DRAM system with internal refreshing of dynamic memory cells, comprising:
an arbitration and selection control circuit that receives the RFSH signal and the
input RAS signal and that arbitrates between those two signals to provide two
alternative output signals: one of which is an internal-refresh selection SEL_RFSH
command output signal when the RFSH input signal is given priority by the
arbitration and selection circuit and the other of which is an external-address
selection SEL_RAS command output signal when the RAS input signal is given
priority by the arbitration and selection circuit.
The arbitration and selection circuit includes a RAS_FF row address select flip-flop
circuit that has a clock input terminal connected to the RAS input terminal and that
has a data input terminal connected to a HIGH (VCC) voltage level; and wherein
the arbitration and selection circuit includes a REF_FF reference select flip-flop
circuit that has a clock input terminal connected to the RFSH input terminal and
that has a data input terminal connected to a HIGH (VCC) voltage level. The
arbitration and selection circuit internally schedules refresh cycles for completion
within a predetermined period of time while performing arbitrary write or read
access cycles.

Q. 46 Explain refresh timing & refresh address of DRAM memory interface.


Ans.: Refresh Timing
• For a refresh, only the row address is needed, so a column address doesn't
have to be applied to the chip address circuits.
• Data read from the cells does not need to be fed into the output buffers or
the data bus to send to the CPU.
• Decoding of the two most significant bits of such a counter will produce a
signal that occurs at a count of 384, after 15.36 sec.
• Executing a refresh 16 counts early provides some timing margin.
• The refresh circuitry must perform a refresh cycle on each of the rows on
the chip within the refresh time interval, to make sure that each cell gets
refreshed.
• While the memory is operating, each memory cell must be refreshed
repetitively, within the maximum interval between refreshes specified by
the manufacturer, which is usually in the millisecond region.
• Refreshing does not employ the normal memory operations (read and write
cycles) used to access data, but specialized cycles called refresh cycles
which are generated by separate counter circuits in the memory circuitry
and interspersed between normal memory accesses.
Refresh Address
• The refresh address is not the same address as is used by normal read or
write operations
• To provide the address, a 12-bit binary counter is used.
• The counter should be incremented following the completion of each row
refresh operation.
• During a normal read or write operation, the row addresses to the DRAM
are provided by the source executing the operation.
• During a refresh, they are given by the refresh address counter.
• The DRAM address lines must provide row, then column address during
normal operation & refresh row addresses during the refresh cycle.

Q. 48 Explain the basic concepts of cache memory.


Ans.: Basic Concepts Of Cache Memory:-
 Cache is small, fast memory that temporarily holds copies of block data and
program instructions from the main memory.
 The increased speed of cache memory over that of main memory components
offers the prospect for programs to execute much more rapidly if the
instructions and data can be held in cache.
 Many of today's higher performance processors implemented around the
Harvard architecture will internally support both an Icache (instruction cache)
and a Dcache (data Cache).

1. Locality of reference -
 Execution generally occurs either sequentially or in small loop with a small
number of instructions.
 Such behaviour means that the overall forward progress through a program is
proceeding at a much lower rate than the access time of the fastest memory.
 Put another way with respect to the entire program actual execution takes
place within a small window that moves forward through the program.
 Formally, such a phenomenon is called sequential locality of reference.
 Because the program is executing only a few instructions within a small
window, if those few instructions can be kept in fast memory, the program
will appear to be executing out of that fast memory whenever the area in
which the application is currently executing lies within the local window.
 Two other types of locality of reference are spatial and temporal.
 Spatial locality suggests that a future access of a resource (a memory address
in this case) is going to be physically near one previously accessed.
 Temporal locality suggests that a future access of a resource (again, a memory
address) is going to be temporally near one recently accessed. Using locality
of reference knowledge can significantly improve memory access time
performance.

2. Cache system architecture –


 When using a caching system, the goal is to operate out of cache memory
(typically SRAM) to the greatest extent possible.
 When program execution needs an instruction or data that is not in the
cache, it must be brought in from main memory (typically DRAM).

Q. 49 Explain the Locality of Reference used in cache memory.


Ans.: Locality of Reference used in cache memory:-
 Any computer system that has a strong locality of reference is likely to perform
well when it comes to the cache memory, and other vital elements. Locality of
reference is when specific locations of storage are accessed on a regular basis.
 Temporal locality refers to the reuse of specific data, and/or resources, within
relatively small time duration.
 Spatial locality refers to the use of data elements within relatively close storage
locations.
 Sequential locality, a special case of spatial locality, occurs when data elements
are arranged and accessed linearly, such as, traversing the elements in a one-
dimensional array.
 There are a number of different forms of locality of reference including:
 Temporal locality
 Spatial locality
 Branch locality
 Equidistant locality

 Temporal locality:-
Temporal locality is when the cache memory is referenced once, and then again
shortly afterwards. The data accessed is stored in memory, and when it is
accessed again it can be done so much quicker, as a reference point is created.

 Spatial locality:-
Spatial locality is when a specific location of memory is accessed. The knock-on
effect is that nearby points of memory will most likely be accessed in the near
future; the amount of memory needed can be predicted, which allows for faster
access in the short term and over a longer period of time.

 Branch locality:-
Branch locality occurs when there are not many options for the path in the co-
ordinate space. The instruction most likely to result in this type of locality of
reference is one that is structured simply and has the ability for different reference
points to be situated a distance away from each other.

 Equidistant locality:-
Equidistant locality is when a linear function is used to determine which location of
the cache memory will be needed in certain situations. The equidistant locality is so
called, as it is halfway between the spatial locality and the branch locality.
Locality of reference is important as it predicts behaviour in computers and can
avoid the computer having future problems with the memory.

Q. 51 Explain Direct Mapping Cache implementation.


Ans.: Direct Mapping Cache implementation:-
 The first cache management strategy is called direct mapping.
 The address is provided and the read or write operation is executed.
 When the target address is not found in the cache , a cache miss occurs.
 The required data or instruction must be copied into the cache from the main
memory.
 When such a transfer is needed, the complete block containing the required
word is brought in.
 The destination to which the new block goes is determined by the replacement
algorithm designed into the cache.
 The main memory page size is set equal to the cache size; therefore, each page
will contain a corresponding number of blocks.
 Main memory will contain (main memory size) / (cache size) pages.
Consequently, each main memory page will contain a block 0, block 1 and so on.
 Example:
 When a block is brought in, it is placed into the correspondingly numbered
block in the cache. Thus, main memory block 0 will always be placed into the
cache block 0 slot, main memory block 1 will always be placed into the cache
block 1 slot, and so forth.
Q. 52 Explain Set-associative Mapping Cache implementation.
Ans.: Set-associative Mapping Cache implementation:-
 Algorithm for storing and retrieving data and instruction into and out of the
cache, called block-set associative, combines some of the simplicity of the direct
mapping algorithm with some of the flexibility of the associative algorithm.
 Under the block-set associative scheme, the entry at a specific index is expanded
from a single block to multiple blocks. Such a collection is called a set.
 The number of blocks in each set is determined by the specific implementation.
 The design to be implemented will be a two-way set-associative scheme; thus each
set will have two blocks.
 Similarly, a four-way implementation would support sets containing four blocks.
 Main memory address space is first organized as a collection of M blocks.
 The M blocks are then organized as a collection of N groups.
 The group number to which each block is assigned is computed as
Group Number = (block number) mod N
 Example:
The set number in the cache corresponds to the main memory group number: any
block from main memory group j can be placed into cache set j. A set is now
searched associatively; the search is far less complex because we are dealing
with a much smaller search space.
UNIT-4

Q. 54 How is C/C++ useful in embedded system programming? Also mention the
advantages of high level programming for embedded systems.
Ans.:  The programming language 'C' is considered the most popular choice for
embedded development for various reasons, such as rich compiler/cross-compiler
support, rich support for system-level programming, code efficiency,
performance, etc.
 When it comes to desktop application development, C++ is considered a
good candidate for object oriented development. Though C++ is a popular
choice for object oriented programming in the desktop environment, in
embedded application development C++ has not gained as much attention,
due to reasons such as the large memory footprint of compiled code,
performance bottlenecks, etc.
 However it is very interesting to note that certain language specific features
offered by C++, like operator overloading, Object Oriented concepts, exception
handling mechanism etc. can be effectively utilized in the embedded
programming arena.
 C++ is an object oriented programming (OOP) language which, in addition,
supports the procedure oriented codes of C. Coding in C++ provides the
advantages of object oriented programming as well as the advantages of C and
in-line assembly. Programming concepts for embedded programming in C++
are as follows:
1. A class binds all the member functions together for creating objects. The
objects will have memory allocation as well as default assignments to its
variables that are not declared static. Let us assume that each software timer
that gets the count input from a real time clock is an object. Now consider
the codes for a C++ class RTCSWT. A number of software timer objects can
be created as the instances of RTCSWT.
2. A class can derive (inherit) from another class also. Creating a child class
from RTCSWT as a parent class creates a new application of the RTCSWT.
3. Methods (C functions) can have same name in the inherited class. This is
called method overloading. Methods can have the same name as well as the
same number and type of arguments in the inherited class. This is called
method overriding. These are the two significant features that are extremely
useful in a large program.
4. Operators in C++ can be overloaded like in method overloading. Recall the
following statements and expressions. The operators ++ and ! are overloaded
to perform a set of operations.
5. In C, a struct binds data members together, but it has no object features.
A C++ class does: it can be extended, and child classes can be derived
from it.
6. A number of child classes can be derived from a common class. This feature
is called polymorphism. A class can be declared as public or private. The
data and methods access is restricted when a class is declared private. Struct
does not have these features.
Advantages of High Level Programming For Embedded System:-
 The development cycle is short for complex systems due to the use of functions
(procedures), standard library functions, modular programming approach and
top down design. Application programs are structured to ensure that the
software is based on sound software engineering principles.
a. The use of the standard library function, square root ( ), saves the
programmer time for coding. New sets of library functions exist in an
embedded system specific C or C++ compiler. Exemplary functions are the
delay ( ), wait ( ) and sleep ( ).
b. Modular programming approach is an approach in which the building blocks
are reusable software components.
c. Bottom up design is a design approach in which programming is first done
for the submodules of the specific and distinct sets of actions.
d. Top-Down design is another programming approach in which the main
program is first designed, then its modules, sub-modules, and finally, the
functions.
 Data type declarations provide programming ease. For example, there are four
types of integers, int, unsigned int, short and long. When dealing with positive
only values, we declare a variable as unsigned int.
Each data type is an abstraction for the methods to use, to manipulate, to
represent, and for a set of permissible operations.
 Type checking makes the program less prone to error. For example, type
checking does not permit subtraction, multiplication and division on the char
data types. Further, it lets + be used for concatenation.
 Control Structures (for examples, while, do - while, break and for) and
Conditional Statements (for examples, if, if- else, else - if and switch - case)
make the program-flow path design tasks simple.
 Portability of non-processor specific codes exists. Therefore, when the hardware
changes, only the modules for the device drivers and device management,
initialization and locator modules and initial boot up record data need
modifications.

Q. 55 What type of files can be included using Include Pre-processor directive?


Ans.:  Include is a preprocessor directive to include the contents (codes or data) of a
file. The files that can be included are given below. Inclusion of all files and
specific header files has to be as per requirements.
1. Including Code Files: These are the files for the codes already available.
For example, # include "prctlHandlers.c".

2. Including Constant Data Files: These are the files for the constant data and
may have the extension '.const'.

3. Including String Data Files: These are the files for the strings and may have
the extension '.strings', '.str' or '.txt'. For example, # include
"netDrvConfig.txt".
4. Including Initial Data Files: These are the files for the initial or default data
for the shadow ROM of the embedded system. The boot-up program is copied
later into the RAM and may have the extension '.init'. On the other hand,
RAM data files have the extension '.data'.

5. Including Basic Variables Files: These are the files for the local or global
static variables that are stored in the RAM because they do not possess
initial (default) values. Static means that there is not more than one
instance of that variable address and that it has a static memory
allocation. There is only one real-time clock, and therefore only one instance
of that variable address. These basic variables are stored in the files with the
extension '.bss'.

6. Including Header Files: This is a preprocessor directive which includes the
contents (codes or data) of a set of source files. These are the files of a
specific module. A header file has the extension '.h'. Examples are as
follows. The string manipulation functions are needed in a program using
strings; these become available once a header file called "string.h" is
included. The mathematical functions square root, sin, cos, tan, atan and so
on are needed in programs using mathematical expressions; these become
available by including a header file called "math.h". The pre-processor
directives will be '# include <string.h>' and '# include <math.h>'.

 Also included are the header files for the codes in assembly, for the I/O
operations (conio.h), and for the OS functions and RTOS functions. '# include
"vxWorks.h"' in Example 5.1 is a directive to the compiler, which includes the
VxWorks RTOS functions.

Q. 56 What are the disadvantages of standard C++? Explain optimization of codes
in embedded C++ programs to eliminate the disadvantages.
Ans.: Disadvantages Of Standard C++:-
 Program codes become lengthy, particularly when certain features of
standard C++ are used. Examples of these features are as follows:
(a) Templates.
(b) Multiple inheritance (deriving a class from many parents).
(c) Exception handling.
(d) Virtual base classes.
(e) Classes for IO streams.

Optimization of codes in embedded C++ programs to eliminate the
disadvantages:-
 Embedded system codes can be optimised when using an OOP language by the
following:
a. Declare private as many classes as possible. It helps in optimising the
generated codes.
b. Use char, int and boolean (scalar data types) in place of the objects
(reference data types) as arguments, and use local variables as much as
feasible.
c. Recover memory already used once by changing the reference to an object to
NULL.

 A special compiler for an embedded system can facilitate the disabling of
specific features provided in C++. Embedded C++ is a version of C++ that
provides for a selective disabling of the above features so that there is less
runtime overhead and a smaller runtime library. The solutions for the library
functions are available and ported in C directly.

 The IO stream library functions in an embedded C++ compiler are also
reentrant. So using embedded C++ compilers or the special compilers makes
C++ a significantly more powerful coding language than C for embedded
systems.

 GNU C/C++ compilers (called gcc) find extensive use in the C++ environment
in embedded software development. Embedded C++ is a new programming tool
with a compiler that provides a small runtime library.

 It satisfies small runtime RAM needs by selectively de-configuring features
like templates, multiple inheritance, virtual base classes, etc., when there is less
runtime overhead and when solutions using a smaller runtime library are
available. Selectively removed (de-configured) features can be templates,
runtime type identification, multiple inheritance, exception handling, virtual
base classes, IO streams and foundation classes.

 An embedded system C++ compiler (other than gcc) is Diab compiler from
Diab Data. It also provides the target (embedded system processor) specific
optimisation of the codes. The runtime analysis tools check the expected run
time error and give a profile that is visually interactive.

Q. 57 Illustrate the use of Infinite loops with example in embedded system design.
Ans.: Use Of Infinite Loops In Embedded System Design:-
 Infinite loops are never desired in usual programming. The program will never
end and never exit or proceed further to the codes after the loop. Infinite loop is
a feature in embedded system programming!

 The hardware equivalent of an infinite loop is a ticking system clock (real time
clock) or a free running counter.

 The following example gives a 'C' program design in which the program starts
executing from the main ( ) function. There are calls to the functions and calls on
the interrupts in between. It has to return to the start. The system main program
is never in a halt state. Therefore, the main ( ) is in an infinite loop within the
start and end.
Example
# define false 0
# define true 1
void main (void)
{
/* The declarations and initialization here */
..
/* Infinite while loop follows. Since the condition set for the while loop is
always true, the statements within the curly braces continue to execute */
while (true)
{
/* Codes that repeatedly execute */
..
}
}

 Assume that the function main does not have a waiting loop and simply passes
the control to an RTOS. Consider a multitasking program. The OS can create a
task. The OS can insert a task into the list. It can delete from the list. Let an OS
kernel pre-emptively schedule the running of the various listed tasks. Each task
will then have the codes in an infinite loop.
 How do more than one infinite loops co-exist?
The code inside a loop waits for a signal or event or a set of events that the
kernel transfers to it to run the waiting task. The code inside the loop generates a
message that transfers to the kernel. It is detected by the OS kernel, which passes
another task's message, generates another signal for that task, and pre-empts
the previously running task.
Let an event be the setting of a flag, and let the flag setting trigger the running of
a task whenever the kernel passes it to the waiting task. The instruction 'if (flag1)
{...};' is to execute the task function for a service if flag1 is true.
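The flag-triggered dispatch described above can be sketched as follows. The flags, tasks and names are hypothetical; the dispatcher body is written as a plain function so it can be exercised a bounded number of times, whereas in a real system it would sit inside while (true) { ... }:

```c
#include <stdbool.h>

/* Hypothetical event flags, normally set by ISRs or the kernel. */
static volatile bool flag1 = false;
static volatile bool flag2 = false;
static int task1_runs = 0;
static int task2_runs = 0;

static void task1(void) { task1_runs++; }   /* stand-in task bodies */
static void task2(void) { task2_runs++; }

/* One pass of the dispatcher: run each task whose flag is set,
   clearing the flag so the task runs once per event. */
void dispatch_once(void)
{
    if (flag1) { flag1 = false; task1(); }
    if (flag2) { flag2 = false; task2(); }
}
```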

Q. 58 Explain the use of modifiers.


Ans.: The actions of modifiers are as follows:
Case (i): Modifier 'auto' or no modifier, if outside a function block, means
that there is ROM allocation for the variable by the locator if it is
initialised in the program. RAM is allocated by the locator if it is
not initialised in the program.

Case (ii): Modifier 'auto' or no modifier, if inside the function block, means
there is ROM allocation for the variable by the locator if it is
initialised in the program. There is no RAM allocation by the
locator.

Case (iii): Modifier 'unsigned' is a modifier for a short, int or long data type.
It is a directive to permit only the positive values of 16, 32 or 64
bits, respectively.
Case (iv): Modifier 'static' declaration is inside a function block. Static
declaration is a directive to the compiler that the variable should be
accessible outside the function block also and there is to be a
reserved memory space for it. It then does not save on a stack on
context switching to another task. When several tasks are executed
in cooperation, the declaration static helps.
There is ROM allocation by the locator if it is initialised in the
program. There is RAM allocation by the locator if it is not
initialised in the program.

Case (v): Modifier static declaration is outside a function block. It is not
usable outside the class in which it is declared or outside the module
in which it is declared. There is ROM allocation by the locator for
the function codes.

Case (vi): Modifier const declaration is outside a function block. It must be
initialised by a program. For example, 'const char
welcome_message[ ] = "There is a mail for you";'. There is ROM
allocation by the locator.

Case (vii): Modifier register declaration is inside a function block. It must be
initialised by a program. For example, 'register CX'. A CPU
register is temporarily allocated when needed. There is no ROM or
RAM allocation.

Case (viii): Modifier interrupt. It directs the compiler to save all processor
registers on entry to the function codes and restore them on return
from that function.

Case (ix): Modifier extern. It directs the compiler to look for the data type
declaration or the function in a module other than the one currently
in use.

Case (x): Modifier volatile outside a function block is a warning to the
compiler that an event can change its value or that its change
represents an event. Event examples are an interrupt event, a
hardware event or an inter-task communication event.

Case (xi): Modifier volatile static declaration is inside a function block.
Examples are (a) 'volatile static boolean RTIEnable = true;' (b)
'volatile static boolean RTISWTEnable;' and (c) 'volatile static
boolean RTCSWT_F;'.
The static declaration is the directive to the compiler that the
variable should be accessible outside the function block also, and
there is to be a reserved memory space for it; and volatile means a
directive not to optimise, as an event can modify it. It then does not
save on the stack on context switching to another task. When
several tasks are executed in cooperation, the declaration static
helps. The compiler does not optimise the code due to the declaration
volatile. There is no ROM or RAM allocation by the locator.
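The lifetime differences among const, static and auto can be illustrated on any host; a minimal sketch (the names are illustrative only, and the ROM/RAM placement itself depends on the target's locator):

```c
/* const data: typically placed in ROM by the locator on an embedded target. */
const char welcome_message[] = "There is a mail for you";

/* static inside a function: one instance only, retains its value
   across calls (reserved memory space, not the stack). */
int call_count(void)
{
    static int count = 0;   /* initialised once, survives between calls */
    return ++count;
}

/* auto (the default) inside a function: a fresh instance on every call. */
int auto_count(void)
{
    int count = 0;          /* stack allocation, re-initialised each call */
    return ++count;
}
```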

Q. 59 What are main features of source code engineering tools for embedded
C/C++?
Ans.: Source Code Engineering Tools For Embedded C/C++:-
 A source code engineering tool is of great help for source-code
development, compiling and cross compiling. The tools are commercially
available for embedded C/C++ code engineering, testing and debugging.
 The features of a typical tool are comprehension, navigation and browsing,
editing, debugging, configuring (disabling and enabling the C++ features)
and compiling. A tool for C and C++ is SNiFF+. It is from WindRiver®
Systems. A version, SNiFF+ PRO has full SNiFF+ code as well as debug
module.
 Main features of the tool are as follows:
1. It searches and lists the definitions, symbols, hierarchy of the classes,
and class inheritance trees.
2. It searches and lists the dependencies of symbols and defined symbols,
variables, functions (methods) and other symbols.
3. It monitors, enables and disables the implementation of virtual functions.
4. It finds the full effect of any code change on the source code.
5. It searches and lists the dependencies and hierarchy of included header
files.
6. It navigates to and fro between the implementation and symbol
declaration.
7. It navigates to and fro between the overridden and overriding methods.
8. It browses through information regarding instantiation (object creation)
of a class.
9. It browses through the encapsulation of variables among the members
and browses through the public, private and protected visibility of the
members.
10. It browses through object component relationships.
11. It automatically removes error-prone and unused tasks.
12. It provides easy and automated search and replacement.

Q. 60 Explain the use of pointers with example.


Ans.: The Use Of Pointers:-
 In computer science, a pointer is a programming language object, whose value
refers to (or "points to") another value stored elsewhere in the computer
memory using its memory address. A pointer references a location in memory,
and obtaining the value stored at that location is known as dereferencing the
pointer.
 Syntax:
type *variable_name;
Here, type is the pointer's base type; it must be a valid C data type, and
variable_name is the name of the pointer variable. The asterisk * used to
declare a pointer is the same asterisk used for multiplication; however, in this
statement the asterisk designates a variable as a pointer.
 Pointers are powerful tools when used correctly and according to certain basic
principles. Exemplary uses are as follows. Let a byte each be stored at a
memory address.
1. Let a port A in a system have a buffer register that stores a byte. Now a
program using a pointer declares the byte at port A as follows: 'unsigned
byte *portA'. [or Pbyte *portA.] The * means 'the contents at'. This
declaration means that there is a pointer and an unsigned byte for portA. The
compiler will reserve one memory address for that byte. Consider 'unsigned
short *timer1'. A pointer timer1 will point to two bytes, and the compiler
will reserve two memory addresses for the contents of timer1.

2. Consider declarations as follows: 'void *portAdata;'. The void means an
undefined data type for portAdata. The compiler will allocate for the
*portAdata without any type check.

3. A pointer can be assigned a constant fixed address. Recall two preprocessor
directives: '# define portA (volatile unsigned byte *) 0x1000' and '# define
PIOC (volatile unsigned byte *) 0x1001'. Alternatively, the addresses in a
function can be assigned as follows: 'volatile unsigned byte *portA =
(unsigned byte *) 0x1000' and 'volatile unsigned byte *PIOC = (unsigned
byte *) 0x1001'. An instruction, 'portA ++;' will make the portA pointer
point to the next address, which is the PIOC.

4. Consider 'unsigned byte portAdata;' and 'unsigned byte *portA =
&portAdata;'. The first statement directs the compiler to allocate one
memory address for portAdata because there is a byte each at an address.
The & (ampersand sign) means 'at the address of'. The second declaration
means that the pointer portA points to the positive number of 8 bits (byte)
at the address of portAdata. Since the right-side variable portAdata is not a
declared pointer, the ampersand sign is placed before it so that the left-side
pointer gets that address, and dereferencing *portA then gets the contents
(bits) from that address.

5. A NULL pointer declares as follows: '#define NULL (void*) 0x0000'. The
NULL pointer is very useful. Consider a statement: 'while (* RTCSWT_List.
ListNow -> state != NULL) { numRunning ++;'. When a pointer to ListNow
in a list of software timers that are running at present is not NULL, only then
execute the set of statements in the given pair of opening and closing curly
braces. One of the important uses of the NULL pointer is in a list: the last
element points to the end of the list, or to no more contents in a queue, or to an
empty stack, queue or list.
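Points 1, 3 and 4 above can be sketched in runnable form. Since a host cannot write to a fixed address like 0x1000, the device register is replaced here by an ordinary variable; the names are hypothetical:

```c
typedef unsigned char byte;

/* Simulated device register; on real hardware this would be a fixed
   address, e.g. #define portA ((volatile byte *) 0x1000) */
static volatile byte portA_reg;
static volatile byte *const portA = &portA_reg;   /* pointer to the "port" */

/* Writing a byte to the port is dereferencing the pointer ('the contents at'). */
void write_port(byte value) { *portA = value; }

/* Reading the port dereferences the same fixed pointer. */
byte read_port(void) { return *portA; }
```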
Q. 61 Explain the steps to use a function in an embedded program.
Ans.: The Steps To Use A Function In An Embedded Program:-
 There are functions and a special function for starting the program execution,
‗void main (void)‘.
 Given below are the steps to be followed when using a function in the program.
1. Declaring a function:
 Just as each variable has to have a declaration, each function must be
declared.
 Consider an example. Declare a function as follows: 'int run (int
indexRTCSWT, unsigned int maxLength, unsigned int numTicks,
SWT_Type swtType, SWT_Action swtAction, boolean loadEnable);'.
Here int specifies the returned data type.
 run is the function name. There are arguments inside the brackets.
The data type of each argument is also declared. A modifier is needed to
specify the data type of the returned element (variable or object) from any
function. Here, the data type is specified as an integer. [The modifier for
specifying the returned element may also be static, volatile, interrupt or
extern.]

2. Defining the statements in the function:
 Just as each variable has to be given the contents or value, each function
must have its statements.
 Consider the statements of the function 'run'. These are within a pair of
curly braces as follows: 'int RTCSWT:: run (int indexRTCSWT, unsigned
int maxLength, unsigned int numTicks, SWT_Type swtType,
SWT_Action swtAction, boolean loadEnable) {...};'.
 The last statement in a function is for the return and may also be for
returning an element.

3. Call to a function:
 Consider an example: 'if (delay_F == true && SWTDelayIEnable ==
true) ISR_Delay ( );'.
 There is a call on fulfilling a condition. The call can occur several times
and can be repeatedly made. On each call, the values of the arguments
given within the pair of brackets pass for use in the function statements.

(i) Passing the Values (elements):-
The values are copied into the arguments of the functions. When the function is
executed in this way, it does not change a variable's value in the calling program.
A function can only use the copied values in its own variables through the
arguments.

(ii) Reentrant Function:-
A reentrant function is usable by several tasks and routines synchronously (at
the same time). This is because all its argument values are retrievable from the
stack.
(iii) Passing the References:-
When an argument value to a function passes through a pointer, the function
can change this value. On returning from this function, the new value will be
available in the calling program or another function called by this function.
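The difference between passing values and passing references can be sketched as follows; the function names are illustrative:

```c
/* Pass by value: the function works on a copy of the argument;
   the caller's variable is unchanged on return. */
void inc_by_value(int x)
{
    x = x + 1;      /* modifies only the local copy */
}

/* Pass by reference (through a pointer): the function changes the
   value at the caller's address, so the new value is available
   in the calling program on return. */
void inc_by_reference(int *x)
{
    *x = *x + 1;    /* modifies the caller's variable */
}
```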

Q. 62 Explain reentrant function.


Ans.: Reentrant Function:-
 A reentrant function is usable by several tasks and routines synchronously (at
the same time). This is because all its argument values are retrievable from the
stack.
 A function is called reentrant function when the following three conditions are
satisfied.
1. All the arguments pass the values and none of the argument is a pointer
(address) whenever a calling function calls that function. There is no pointer
as an argument in the above example of function ‗run‘.

2. When an operation is not atomic, that function should not operate on any
variable which is declared outside the function, or which an interrupt service
routine uses, or which is a global variable passed by reference and not
passed by value as an argument into the function.
The following is an example that clarifies it further. Assume that at a server
(software), there is a 32-bit variable count to count the number of clients
(software) needing service. There is no option except to declare the count as
a global variable that is shared with all clients. Each client on a connection to
the server sends a call to increment the count. The implementation by the
assembly code for the increment at that memory location is non-atomic when
(i) the processor is of eight bits, and (ii) the server compiler design is such
that it does not account for the possibility of an interrupt in between the four
instructions that implement the increment of the 32-bit count on the 8-bit
processor. There will be a wrong value with the server after an instance when
the interrupt occurs midway during implementing an increment of count.

3. That function does not call any other function that is not itself reentrant. Let
RTI_Count be a global declaration. Consider an ISR, ISR_RTI. Let an
'RTI_Count ++;' instruction be there, where RTI_Count is the variable for
counts on a real-time clock interrupt. Here ISR_RTI is not a reentrant routine
because the second condition may not be fulfilled in the given processor
hardware. No precaution may be needed here by the programmer against
shared-data problems at the address of RTI_Count as long as there is no
operation that modifies RTI_Count in any routine or function other than
ISR_RTI. But if there is another operation that modifies RTI_Count, the
shared-data problem will arise.
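The first condition can be contrasted in a minimal sketch (names hypothetical): the counter below operates on a shared global and is therefore not reentrant, while a function that keeps all its state in arguments and locals on the stack is:

```c
/* Non-reentrant: operates on a shared global, so two tasks calling it
   concurrently could interleave the read-modify-write and lose a count
   (the non-atomic increment problem described above). */
static long client_count = 0;

long count_client(void)
{
    return ++client_count;      /* read, modify, write the shared variable */
}

/* Reentrant: all state lives in the argument and the return value on the
   stack, so concurrent calls cannot disturb one another. */
long add_one(long n)
{
    return n + 1;
}
```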
Q. 63 Explain program elements: Macros and Functions used in embedded system
programming
Ans.: Macros:
 A macro is a collection of codes that is defined in a program by a name. It
differs from a function in the sense that once a macro is defined by a name, the
compiler puts the corresponding codes for it at every place where that macro
name appears.
 Whenever the name of the macro appears, the compiler places the codes
designed for it. Macros, called test macros or test vectors are also designed and
used for debugging a system.
 Macros are used for short codes only. This is because, if a function call is used
instead of a macro, the overheads (context saving and other actions on function
call and return) will take additional time.

Functions:
 The functions execute a named set of codes with values passed by the calling
program through its arguments. A function also returns a data object when it is
not declared as void. It has the context saving and retrieving overheads.
 Main Function
Declarations of functions and data types, typedef and either
(i) executes a named set of codes, calls a set of functions, and calls on the
interrupts the ISRs, or
(ii) starts an OS kernel.
 Interrupt Service Routine or Device Driver
Declarations of functions and data types, typedef and either
(i) executes a named set of codes. It must be short so that other sources of
interrupts are also serviced within the deadlines.
(ii) must be either a reentrant routine or must have a solution to the shared-data
problem.
 Recursive Function
A function that calls itself. It must be a reentrant function also. Most often its
use is avoided in embedded systems due to memory constraints.
 Reentrant Function
A reentrant function is usable by several tasks and routines synchronously.

How does a macro differ from a function?
 The codes for a function are compiled once only. On calling that function, the
system has to save the context, and on return restore the context. Further, a
function may return nothing (void declaration case) or return a Boolean value,
or an integer or any primitive or reference type of data.
 The codes for a macro are compiled into every function wherever that macro
name is used; before compilation, the compiler puts the codes at the places
wherever the macro is used. On using the macro, the processor does not have to
save the context, and does not have to restore the context, as there is no return.
 Macros are used for short codes only.
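The difference can be sketched with a squaring macro and a squaring function (a classic illustration, not from the text). Note that the macro is pure text substitution, so an unparenthesised version mis-evaluates an expression argument:

```c
/* Macro: the compiler pastes the code at every use; no call/return
   overhead, but the text substitution can surprise. */
#define SQUARE_BAD(x)  (x * x)        /* unsafe: no parentheses around x */
#define SQUARE(x)      ((x) * (x))    /* safe form */

/* Function: compiled once; every call has context save/restore overhead. */
int square(int x)
{
    return x * x;
}
```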
Q. 64 What is Stack? Which type of Stack Structures may be created during
execution of the embedded software?
Ans.: Stack:-
 A data structure called a stack is a special program element. A stack means an
allocated memory block of data from which a data element is read in a LIFO (Last
In First Out) way; an element is popped or pushed at an address pointed to by a
pointer, called SP (Stack Pointer), and SP changes on each push or pop such
that it points to the top of the stack.
 Various stack structures may be created during processing. For handling each
stack, one pointer, which points to the stack top is needed.
 Figure shows the various stack structures that are created during execution of
the embedded software.
1. A call can be made for another routine during running of routine. In order
that on completion of the called routine, the processor returns only to the
one calling, the instruction address for return must be pushed on the stack.
Pushing means saving on the stack top and incrementing stack to point to
the next top. Popping means retrieving the saved address from the stack top
and decrementing the stack to point to the previous top. There can also be
nested calls and returns. Nesting means one routine calls another, which
calls another and return from the called routine is always to the calling
routine. Therefore, at the memory a block of memory address is allocated to
the stack that saves the pushed return addresses of the nested calls. It is
shown in figure (a). Two bytes of address are acquired in the PC from stack
on return from a call to a routine (function).

2. There may be, at the beginning, input data, for example received call
numbers in a phone, which is saved onto a stack at RAM in order to be
retrieved later in LIFO mode. It is shown in figure (b).
Consider, for example, that on each push the following are saved on a stack:
(i) four pointers (addresses, each of 4 bytes); (ii) four integers (each of 4 bytes);
(iii) four floating point numbers (each of 4 bytes).
Memory allocation required for a stack structure for pushing the function
parameters = 4 × 4 + 4 × 4 + 4 × 4 = 48 B.

3. An application may also create the run-time stack structures. There can be
multiple data stacks at the different memory blocks, each having a separate
pointer address. There can be multiple stacks shown as Stack 1, …, Stack N
in Figure (c).

4. Each task or thread in a multi-tasking or multi-threading software design
should have its own stack where its context is saved. The context is saved by
the processor on switching to another task or thread. The context includes
the return address for the PC for retrieval on switching back to the task.
There can be multiple stacks shown as saved contexts at the different
memory blocks, each having a separate pointer address. Threads of
application programs and supervisory (OS) programs have separate stacks at
separate memory blocks.
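The push and pop behaviour described above can be sketched with an array-based stack (a host-side illustration; the size and names are hypothetical):

```c
#include <stdbool.h>

#define STACK_SIZE 16

static int stack_mem[STACK_SIZE];
static int sp = 0;     /* stack pointer: index of the next free slot */

/* Pushing: save on the stack top, then increment SP to the next top. */
bool push(int value)
{
    if (sp >= STACK_SIZE) return false;   /* stack overflow */
    stack_mem[sp++] = value;
    return true;
}

/* Popping: decrement SP to the previous top and retrieve the element. */
bool pop(int *value)
{
    if (sp == 0) return false;            /* stack underflow */
    *value = stack_mem[--sp];
    return true;
}
```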
Q. 65 What are the different features of queue implementation helpful in embedded
networking system?
Ans.:  Queue:
 A structure with a series of data elements with the first element waiting for
an operation.
 Used when an element is not to be accessed by index with a pointer directly,
as in an array, but only through FIFO (first in first out) mode through a
queue-head pointer.
 Features of queue implementation:
 When a byte stream is sent on a network, the bytes for the header for the
length of the stream and for the source and destination addresses of the
stream are a must.
[Note: There may be other header bits, for example, in the IP protocol. There
may be trailing bytes, for example, in the HDLC and Ethernet protocols.]
 Priority Queue:
 When there is an urgent message to be placed in a queue, we can program
such that a priority data element inserts at the head instead of at the tail.
 That element retrieves as if there is last-in first-out.
 Networking applications need the specialized formations of a queue.
 On a network, the bits are transmitted in a sequence and retrieved at the other
end in a sequence.
 To separate the bits at the different blocks or frames or packets, there are
header bytes.
 Queues of bytes in a stream play a vital role in a network communication.
 Network queues have headers for length, source and destination addresses
and other bytes as per the protocol used.
 The header with the queue data elements (forming a byte stream) follows a
protocol.
 A protocol may also provide for appending the bytes at the queue tail.
 These may be the CRC (Cyclic Redundancy Check) bytes at the tail.
 Data streams flow control:
 Use a special construct FIPO (First In Provisionally Out).
 FIPO is a special queue construct in which deletion is provisional, and the
head pointer moves backward as per the last acknowledged (successfully
read) byte from the source of deletion in the network.
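The FIFO insertion at the tail and the priority insertion at the head can be sketched with a ring buffer (names and size are hypothetical):

```c
#include <stdbool.h>

#define QSIZE 8

static unsigned char q[QSIZE];
static int qhead = 0, qtail = 0, qcount = 0;

/* Normal insertion at the tail (FIFO). */
bool enqueue(unsigned char b)
{
    if (qcount == QSIZE) return false;        /* queue full */
    q[qtail] = b;
    qtail = (qtail + 1) % QSIZE;
    qcount++;
    return true;
}

/* Priority insertion at the head: the urgent byte is read out first. */
bool enqueue_urgent(unsigned char b)
{
    if (qcount == QSIZE) return false;
    qhead = (qhead + QSIZE - 1) % QSIZE;      /* step the head backwards */
    q[qhead] = b;
    qcount++;
    return true;
}

/* Deletion from the head (FIFO). */
bool dequeue(unsigned char *b)
{
    if (qcount == 0) return false;            /* queue empty */
    *b = q[qhead];
    qhead = (qhead + 1) % QSIZE;
    qcount--;
    return true;
}
```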

Q. 66 Explain the use of queue In network protocol implementation.


Ans.: Use Of Queue In Network Protocol Implementation:-
 Networking applications need the specialised formations of a queue.
 A queue may have its current size (length) as its heading element before the
data in the queue. A queue that has the length, source address and
destination address as its heading elements before the data in the queue
follows a protocol.
 On a network, the bits are transmitted in a sequence and retrieved at the other
end in a sequence. To separate the bits at the different blocks or frames or
packets, there are header bytes.
 The header with the queue elements follows a protocol. A protocol may also
provide for appending the bytes at the queue tail. These may be the CRC
(Cyclic Redundancy Check) bytes at the tail. Figure (a) shows a pipe from a
queue. Figure (b) shows a queue between the sockets. Figure (c) shows three
queues of the packets on a network.

Figure: (a) A pipe from a queue (b) A queue between two sockets
(c) The queues of the packets on the network
Q. 67 Explain the use of queue in interrupt handling.
Ans.: FreeRTOS Queues:-
 Queues are the primary form of intertask communications. They can be used to
send messages between tasks, and between interrupts and tasks. In most cases
they are used as thread safe FIFO (First In First Out) buffers with new data
being sent to the back of the queue, although data can also be sent to the front.

User Model: Maximum Simplicity, Maximum Flexibility:-


 The FreeRTOS queue usage model manages to combine simplicity with
flexibility - attributes that are normally mutually exclusive. Messages are sent
through queues by copy, meaning the data (which can be a pointer to larger
buffers) is itself copied into the queue rather than the queue always storing just a
reference to the data. This is the best approach because:
1. Small messages that are already contained in C variables (integers, small
structures, etc.) can be sent into a queue directly. There is no need to allocate
a buffer for the message and then copy the variable into the allocated buffer.
Likewise, messages can be read from queues directly into C variables.
Further, sending to a queue in this way allows the sending task to
immediately overwrite the variable or buffer that was sent to the queue, even
when the sent message remains in the queue. Because the data contained in
the variable was copied into the queue the variable itself is available for re-
use. There is no requirement for the task that sends the message and the task
that receives the message to agree which task owns the message, and which
task is responsible for freeing the message when it is no longer required.

2. Using queues that pass data by copy does not prevent queues from being used
to pass data by reference. When the size of a message reaches a point where it
is not practical to copy the entire message into the queue byte for byte, define
the queue to hold pointers and copy just a pointer to the message into the
queue instead. This is exactly how the FreeRTOS+UDP implementation
passes large network buffers around the FreeRTOS IP stack.

3. The kernel takes complete responsibility for allocating the memory used as
the queue storage area.

4. Variable sized messages can be sent by defining queues to hold structures


that contain a member that points to the queued message, and another
member that holds the size of the queued message.
5. A single queue can be used to receive different message types, and
messages from multiple locations, by defining the queue to hold a structure
that has a member that holds the message type, and another member that
holds the message data (or a pointer to the message data). How the data is
interpreted depends on the message type. This is exactly how the task that
manages the FreeRTOS+UDP IP stack is able to use a single queue to
receive notifications of ARP timer events, packets being received from the
Ethernet hardware, packets being received from the application, network
down events, etc.

6. The implementation is naturally suited for use in a memory protected


environment. A task that is restricted to a protected memory area can pass
data to a task that is restricted to a different protected memory area because
invoking the RTOS by calling the queue send function will raise the
microcontroller privilege level. The queue storage area is only accessed by
the RTOS (with full privileges).

7. A separate API is provided for use inside of an interrupt. Separating the API
used from an RTOS task from that used from an interrupt service routine
means the implementation of the RTOS API functions do not carry the
overhead of checking their call context each time they execute. Using a
separate interrupt API also means, in most cases, creating RTOS aware
interrupt service routines is simpler for end users than when compared to
alternative RTOS products.

8. In every way, the API is simpler.

Blocking on Queues:-
 Queue API functions permit a block time to be specified.
 When a task attempts to read from an empty queue the task will be placed into
the Blocked state (so it is not consuming any CPU time and other tasks can run)
until either data becomes available on the queue, or the block time expires.
 When a task attempts to write to a full queue the task will be placed into the
Blocked state (so it is not consuming any CPU time and other tasks can run)
until either space becomes available in the queue, or the block time expires.
 If more than one task blocks on the same queue, then the task with the highest
priority will be the task that is unblocked first.

Q. 68 What is FIPO? Explain its application.


Ans.: FIPO:-
 A commonly used network transport protocol is 'Go back to N'. It is used in
case of a point-to-point network. Receiver acknowledgement occurs at successive
but irregular intervals of time. Bytes transmit from the network driver
(transmitter) and queue up to a certain limiting number or up to the occurrence
of a time-out, whichever is earlier.
 Flow control of 'bytes', 'packets' or 'frames' in many network protocols is
done by taking into account the acknowledgements from the receiving entity.
1. If there is no acknowledgement within the limit or time-out, there is
complete retransmission of the bytes from the queue.
2. If there is an acknowledgement for any byte that was sent in a sequence,
there is retransmission of the bytes that remained unacknowledged at that
instance of N-th sequence.

 There have to be three pointers: one for the front (*QHEAD), a second for the
back (*QTAIL) and a third, tempfront (*QACK). Two pointers are the
same as in every queue. The third pointer defines the point up to which an
acknowledgement has been received.

 The acknowledgement is for a byte inserted (placed) at the queue back. The
insertion into the queue is at the back (*QTAIL). There is a predefined limiting
difference between front and back (*QTAIL). There is a predefined time-
interval up to which insertions can occur at the back (*QTAIL). There is a
predefined limiting maximum permitted difference between tempfront
(*QACK) and front (*QHEAD).

 This design gives a necessary feature. There can be a variable amount of delay
in transmitting a byte as well as in receiving its or its successor's
acknowledgement. The receiver does not acknowledge every byte. There is
acknowledgement only at successive predefined time-intervals. The design can
be called FIPO (First-In Provisionally-Out).

 Figure shows a FIPO queue for accounting the acknowledgements on the


networks. It also shows at the bottom, the pointer addresses at three instances, at
the beginning of transmission, on acknowledgement and after
acknowledgement.

 Note that the window between the N-th sequence pointed to by QACK and the
waiting sequence pointed to by QTAIL is shown, as a function of time, sliding
after receipt of the acknowledgement from the receiver for the N-th sequence.
[Refer left to right changes in Figure.]
1. front (*QHEAD) equals back (*QTAIL) as well as tempfront (*QACK) at
the beginning of the transmission.
2. When there is an acknowledgement, front (*QHEAD) resets and equals
tempfront (*QACK).
3. The transmission starts from the tempfront (*QACK) again.
4. There is a limiting maximum time interval difference between transmission
from tempfront (*QACK) and after that time if tempfront (*QACK) is not
equal to front (*QHEAD) then front (*QHEAD) resets and equals tempfront
(*QACK) again. It means that after the limit, the tempfront (*QACK) will
be forced to be equal to front (*QHEAD). This is because the receiver did
not acknowledge within the stipulated time interval.
Figure: FIPO queue for accounting for the acknowledgements on the networks
with Go back to N (sliding window protocol) for transmission flow control.
Note the three pointer addresses at three instances: at the beginning of
transmission, on acknowledgement and after acknowledgement.
Q. 69 Explain with example multiple function calls in the main program.
Ans.: Multiple Function Calls In The Main Program:-
One of the most common methods is for the multiple function-calls to be made in a
cyclic order in an infinite loop of the main. Recall the 64 kbps network problem of
Example 4.1. Let us design the C codes given in Example 5.3 for an infinite loop
for this problem. Example 5.4 shows how the multiple function calls are defined in
the main for execution in the cyclic orders. Figure 5.1 shows the model adopted
here.

Figure: Programming model for multiple function calls in ‘main()’ function

Q. 70 Explain the uses of a list of tasks in a ready list.


Ans.: Ready List:
 The scheduler uses a data structure called the ready list to track the tasks that are
in the ready state.
 In ADEOS, the ready list is implemented as an ordinary linked list, ordered by
priority. So the head of this list is always the highest priority task that is ready to
run. Following a call to the scheduler, this will be the same as the currently
running task. In fact, the only time that won't be the case is during a reschedule.
 Figure 8-2 shows what the ready list might look like while the operating system
is in use.

Figure 8-2. The ready list in action


 The main use of an ordered linked list like this one is the ease with which the
scheduler can select the next task to be run. (It's always at the top.)
 Unfortunately, there is a tradeoff between lookup time and insertion time. The
lookup time is minimized because the data member readyList always points
directly to the highest priority ready task.
 However, each time a new task changes to the ready state, the code within
the insert method must walk down the ready list until it finds a task that has a
lower priority than the one being inserted. The newly ready task is inserted in
front of that task. As a result, the insertion time is proportional to the average
number of tasks in the ready list.
 Whether it's a system task or an application task, at any time each task exists in
one of a small number of states, including ready, running, or blocked. As the
real-time embedded system runs, each task moves from one state to another,
according to the logic of a simple finite state machine (FSM). Figure
5.2 illustrates a typical FSM for task execution states, with brief descriptions of
state transitions.

Figure 5.2: A typical finite state machine for task execution states.

 Although kernels can define task-state groupings differently, generally three


main states are used in most typical pre-emptive-scheduling kernels, including:
 Ready State - The task is ready to run but cannot because a higher
priority task is executing.
 Blocked State - The task has requested a resource that is not available,
has requested to wait until some event occurs, or has delayed itself for
some duration.
 Running State - The task is the highest priority task and is running.

Q. 71 What are the advantages and disadvantages of Java for embedded system
programming?
Ans.: Java has advantages for embedded programming as follows:
1. Java is a completely object-oriented (OOP) language.
2. Java has in-built support for creating multiple threads. It obviates the need for
an operating system (OS) based scheduler for handling the tasks.
3. Java is the language for most Web applications and allows machines of different
types to communicate on the Web.
4. There is a huge class library on the network that makes program development
quick.
5. Platform independence in hosting the compiled codes on the network is because
Java generates the byte codes. These are executed on an installed JVM (Java
Virtual Machine) on a machine. [Virtual machines (VM) in embedded systems
are stored at the ROM.] Platform independence gives portability with respect to
the processor used.
6. Java does not permit pointer manipulation instructions. So it is robust in the
sense that memory leaks and memory-related errors do not occur. A memory-
related error occurs, for example, when attempting to write beyond the end of
a bounded array.
7. Java byte codes that are generated need a larger memory when a method has
more than 3 or 4 local variables.
8. Java being platform independent is expected to run on a machine with an RISC
like instruction execution with few addressing modes only.

Disadvantages Of Java For Embedded System Programming As Follows:-


1. As Java codes are first interpreted by the JVM, they run comparatively slowly.
This disadvantage can be overcome as follows: Java byte codes can be
converted to native machine codes for fast running using Just-In-Time (JIT)
compilation. A Java accelerator (co-processor) can be used in the system for fast
code-run.
2. Java byte codes that are generated need a larger memory. An embedded Java
system may need a minimum of 512KB ROM & 512KB RAM because of the
need to first install JVM & run the application.

Q. 72 Elaborate the use of J2ME in embedded system software development.


Ans.: Use of J2ME in embedded system software development:-
 An embedded Java system may need a minimum of 512 kB ROM and 512 kB
RAM because of the need to first install JVM and run the application.
 Use of J2ME (Java 2 Micro Edition) or Java Card or EmbeddedJava helps in
reducing the code size to 8 kB for the usual applications like smart card. The
following are the methods:
1. Use core classes only. Classes for the basic run time environment form the VM
internal format and only the programmer's new Java classes are not in
internal format.
2. Provide for configuring the run time environment. Examples of configuring
are deleting the exception handling classes, user defined class loaders, file
classes, AWT classes, synchronized threads, thread groups, multidimensional
arrays, and long and floating data types. Other configuring examples are
adding the specific classes for connections when needed, datagrams, input
output and streams.
3. Create one object at a time when running the multiple threads.
4. Reuse the objects instead of using a larger number of objects.
5. Use scalar types only as long as feasible.
 A smart card is an electronic circuit with a memory and CPU or a synthesised
VLSI circuit. It is packed like an ATM card. For smart cards, there is Java card
technology. Internal formats for the run time environments are available mainly
for the few classes in Java card technology. Java classes used are the
connections, datagrams, input output and streams, security and cryptography.
 JavaCard, EmbeddedJava and J2ME (Java 2 Micro Edition) are three versions
of Java that generate a reduced code size.
 J2ME provides the optimised run-time environment. Instead of the use of
packages, J2ME provides for the codes for the core classes only. These codes
are stored at the ROM of the embedded system. It provides for two alternative
configurations, Connected Device Configuration (CDC) and Connected Limited
Device Configurations (CLDC). CDC inherits a few classes from packages for
net, security, io, reflect, security.cert, text, text.resources, util, jar and zip.
CLDC does not provide for the applets, awt, beans, math, net, rmi, security and
sql and text packages in java.lang. There is a separate javax.microedition.io
package in CLDC configuration. A PDA (personal digital assistant) uses CDC
or CLDC.
 There is a scaleable OS feature in J2ME. There is a new virtual machine, KVM
as an alternative to JVM. When using the KVM, the system needs a 64 kB
instead of 512 kB run time environment.
 J2ME need not be restricted to configuring the JVM to limit the classes. The
configuration can be augmented by profile classes, for example, MIDP
(Mobile Information Device Profile). A profile defines the support of Java to a
device family. The profile is a layer between the application and the
configuration. For example, MIDP is between CLDC and the application.
 Between the device and configuration, there is an OS, which is specific to the
device needs. A mobile information device has the following.
1. A touch screen or keypad.
2. A minimum of 96 x 54 pixel color or monochrome display.
3. Wireless networking.
4. A minimum of 32 kB RAM, 8 kB EEPROM or flash for data and 128 kB
ROM.
5. MIDP is used in PDAs, mobile phones and pagers.
 MIDP classes describe displaying of text and network connectivity, for
example, HTTP [Hyper Text Transfer Protocol]. MIDP provides
support for small databases stored in EEPROM or flash memory. It schedules
the applications and supports the timers. [Recall RTCSWTs.] An RMI profile
is an exemplary profile for use in distributed environments.

Q. 73 What are the functions of compiler and cross-compiler?


Ans.: The Functions Of Compiler And Cross-Compiler:-
 Two compilers are needed. One compiler is for the host computer which does
the development and design and also the testing and debugging. The second
compiler is a cross-compiler.
 The cross-compiler runs on a host, but develops the machine codes for a
targeted system (processor of the embedded system). There is a popular
freeware called GNU C/C++ compiler and free AS11M assembler for 68HC11.
 A GNU compiler is configurable both as a host compiler as well as a cross-
compiler. It supports 80x86 Windows 95/NT, 80x86 Red Hat Linux and several
other host platforms. It supports 80x86, 68HC11, 80960, PowerPC and several
other target system processors.
 A compiler generates an object file. For compilation for the host alone, the
compiler can be turbo C, turbo C++ or Borland C and Borland C++. The target
system-specific or multiple-choice cross-compilers that are available
commercially may be used.
 These are available for most embedded system microprocessors and
microcontrollers. The IAR System, Sweden, offers cross-compilers for many
targets.
 The targets can be of the PIC family or 8051 family, 68HC11 family or 80196
family. These compilers can be configured to switch back from the embedded
system-specific cross-compiler to the ANSI C standard compiler.
 PCM is another cross-compiler for the PIC (Peripheral Interface Controller)
microcontroller family, PIC 16F84 or 16C76, 16F876.
 The host also runs the cross-compiler within an integrated development
environment. This means that the application for a target system can be
emulated and simulated on the host.
 In an embedded system design, as the final step, the bytes must be placed at the
ROM after compilation. A 'C' program helps to achieve that goal. The object
file is generated on compilation of a program, while an executable file, which
has the codes at absolute addresses, is required.
 The executable file is the file that a device programmer uses to put (store or
burn in) the initial data, constants, vectors, tables, and strings, and the source
codes in the ROM.
 A locator file has the final information of memory allocation to the codes, data,
and initialization data and so on. The locator then uses the allocation map file,
and generates the source code within the allocated addresses.
 The ROM has the following sections:
(i) Machine (Executable) codes for the bootstrap (reset) program.
(ii) Initialization (default) data at shadow ROM for copying into the RAM
during execution.
(iii) Codes for the application and interrupt service routines.
(iv) System configuration data needed for the execution of the codes.
(v) Standard data or vectors and tables.
(vi) Machine codes for the device manager and device drivers.

Q. 74 What do you understand by memory optimization? How will you optimise the
use of memory in an embedded system?
Ans.: Memory Optimization:-
 When codes are made compact and fitted in small memory areas without
affecting the code performance, it is called memory optimization.
 It also reduces the total number of CPU cycles, and thus, the total energy
requirements.
 Certain steps can be taken to reduce the need for memory and obtain a
compact code.
 The following rules should be kept in mind while optimizing memory needs:-
1. Use declaration as unsigned byte if there is a variable, which always has a
value between 0 and 255. When using data structures, limit the maximum
size of queues, lists and stacks size to 256. Byte arithmetic takes less time
than integer arithmetic.
Follow a rule that uses an unsigned byte in place of a short or an integer if
possible, to optimize use of the RAM and ROM available in the system.
Avoid if possible the use of ‘long’ integers and ‘double’ precision floating
point values.

2. Avoid use of library functions if a simpler coding is possible. Library
functions are general functions. Use of a general function needs more
memory in several cases.
Follow a rule that avoids use of library functions in case a generalised
function is expected to take more memory when its coding is simple.

3. When the software designer knows fully the instruction set of the target
processor, assembly codes must be used. This also allows the efficient use
of memory. The device driver programs in assembly especially provide
efficiency due to the need to use the bit set-reset instructions for the control
and status registers. Only the few assembly codes for using the device I/O
port addresses, control and status registers are needed. The best use is made
of available features for the given applications. Assembly coding also helps
in coding for atomic operations. A modifier register can be used in the C
program for fast access to a frequently used variable.
As a rule, use the assembly codes for simple functions like configuring the
device control register, port addresses and bit manipulations if the
instruction set is clearly understood. Use assembly codes for the atomic
operations for increment and addition. Use modifier ‘register’ for a
frequently used variable.

4. Calling a function causes context saving on a memory stack, and on return
the context is retrieved. This involves time and can increase the worst-case
interrupt latency. There is a modifier inline. When the inline modifier is
used, the compiler inserts the actual codes at all the places where these
operators are used. This reduces the time and stack overheads in the
function call and return. But, this is at the cost of more ROM being needed
for the codes. If used, it increases the size of the program but gives a faster
speed.
As a rule, use inline modifiers for all frequently used small sets of codes in
the functions or the operator overloading functions if the ROM is available
in the system.

5. As long as shared data problem does not arise, the use of global variables
can be optimised. These are not used as the arguments for passing the
values. A good function is one that has no arguments to be passed. The
passed values are saved on the stacks in case of interrupt service calls and
other function calls. Besides obviating the need for repeated declarations,
the use of global variables will thus reduce the worst-case interrupt-latency
and the time and stack overheads in the function call and return. But this is
at the cost of the codes for eliminating the shared data problem. When a
variable is declared static, the processor accesses it with fewer instructions
than a variable on the stack.
As a rule, use global variables if shared data problems are tackled and use
static variables in case it needs saving frequently on the stack.
6. Combine two functions if possible. For example, LElSearch (boolean
present, const LElType & item) is a combined function. The search
functions for finding pointers to a list item and pointers of previous list
items combine into one. If present is false the pointer of the previous list
item retrieves the one that has the item.
As a rule, whenever feasible combine two functions of more or less similar
codes.

7. Recall the use of a list of running timers and a list of initiated tasks. A scan of
all the timers, with a conditional statement that updates the count of a running
timer and leaves an idle-state timer unchanged, could also have been used.
More calls would however be needed, not once but repeatedly on each real-
time clock interrupt tick, and the RAM memory needed would be more.
Therefore, creating a list of running counters is a more efficient way.
Similarly, bringing the tasks first into an initiated task list will reduce the
frequent interactions with the OS, the context savings and retrievals on the
stack, and the time overheads. Optimise the RAM use for the stacks.
It is done by reducing the number of tasks that interact with the OS. One
function calling another function and that calling the third and so on means
nested calls. Reduce the number of nested calls and call at best one more
function from a function. This optimises the use of the stack.
As a rule, reduce use of frequent function calls and nested calls and thus
reduce the time and RAM memory needed for the stacks, respectively.

8. Use, if feasible, a table of pointers to functions as an alternative to switch
statements. This saves processor time in deciding which set of statements to
execute, instead of performing the conditional tests all down a
chain.

9. Use the delete function when there is no longer a need for an object or data
set after the statements that use it execute.
As a rule, to free the RAM used by an object or data set, use the delete
function and destructor functions.

10. When using C++, configure the compiler for not permitting multiple
inheritance, templates, exception handling, new-style casts, virtual base
classes, and namespaces.
As a rule, for using C++, use the classes without multiple inheritance,
without templates, with runtime identification and with throwable
exceptions.
UNIT-5

Q. 75 What are the open standards, frameworks and alliances presents in the
market?
Ans.:  An open standard is a standard that is publicly available and has various rights to
use associated with it, and may also have various properties of how it was
designed (e.g. open process). There is no single definition and interpretations
vary with usage.
 The terms open and standard have a wide range of meanings associated with their
usage. There are a number of definitions of open standards which emphasize
different aspects of openness, including the openness of the resulting
specification, the openness of the drafting process, and the ownership of rights in
the standard.
 The term "standard" is sometimes restricted to technologies approved by
formalized committees that are open to participation by all interested parties and
operate on a consensus basis.

1. The Open Mobile Alliance (OMA):-


 The Open Mobile Alliance was formed in June 2002 by nearly 200 companies
representing the world's leading mobile operators, device & network
suppliers, information technology companies and content providers.
 Most significantly, the Open Mobile Alliance is designed to be the center of
mobile service enabler specification work, helping the creation of
interoperable services across countries, operators and mobile terminals that
will meet the needs of the user.
 To grow the mobile market, the companies supporting the Open Mobile
Alliance will work towards stimulating the fast and wide adoption of a variety
of new, enhanced mobile information, communication and entertainment
services.

2. The Open Handset Alliance (OHA) :-


 Android is developed by the Open Handset Alliance (OHA) led by Google.
 The Open Handset Alliance (OHA) is a business alliance of 84 firms formed
to develop open standards for mobile devices.
 OHA develops technologies that will significantly lower the cost of
developing and distributing mobile devices and services.
 OHA devoted to advancing open standards for mobile devices.

3. Android:-
 Android is a Linux-based operating system for mobile devices such as
smartphones and tablet computers.
 Android is an open source software platform for mobile, embedded and
wearable devices. The First Android Phone is HTC G1.
 Android is specially developed for applications. Android allows writing
managed code in the Java language. Android includes a Java API for
developing applications.
 There are more than 2.6 million apps in Android market. Android is not a
device or product.
 Android has its own virtual machine i.e. DVM (Dalvik Virtual Machine),
which is used for executing the android applications.
 Android gives a rich application system that permits you to construct inventive
applications and games for mobile devices in a Java language environment.

4. Openmoko:-
 Openmoko Linux is an operating system for smartphones developed by the
Openmoko project. It is based on the Ångström distribution, comprising
various pieces of free software.
 The main targets of Openmoko Linux were the Openmoko Neo 1973 and the
Neo FreeRunner. Furthermore, there were efforts to port the system to other
mobile phones.
 Openmoko Linux was developed from 2007 to 2009 by Openmoko Inc. The
development was discontinued because of financial problems. Afterwards the
development of software for the Openmoko phones was taken over by the
community and continued in various projects, including SHR, QtMoko and
Hackable1.

Q. 76 Elaborate recent processor trends in embedded system.


Ans.: Processor Trends:-
• There have been tremendous advancements in the area of processor design.
• Following are some of the points of difference between the first generation of
processor/controller and today‘s processor/ controller.
1. Number of ICs per chip: Early processors had a small number of ICs/gates per
chip. Today's processors with Very Large Scale Integration (VLSI) technology
can pack tens of thousands of ICs/gates per processor.
2. Need for individual components: Early processors needed different components
like brown-out circuits, timers, and DAC/ADC to be separately interfaced if
required in the circuit. Today's processors have all these components on the
same chip as the processor.
3. Speed of Execution: Early processors were slow in terms of the number of
instructions executed per second. Today's processors with advanced
architectures support features like instruction pipelining, improving the
execution speed.
4. Clock frequency: Early processors could execute at a frequency of a few MHz
only. Today's processors are capable of achieving execution frequencies in the
range of GHz.
5. Application specific processor: Early systems were designed using the
processors available at that time. Today it is possible to custom create a
processor according to a product requirement.

 Following are the major trends in processor architecture in embedded
development.
A. System on Chip (SoC)
 This concept makes it possible to integrate almost all functional systems
required to build an embedded product into a single chip.
 SoC are now available for a wide variety of diverse applications like Set
Top boxes, Media Players, PDA, etc.
 SoC integrate multiple functional components on the same chip thereby
saving board space which helps to miniaturize the overall design.
B. Multicore Processors/ Chiplevel Multi Processor
 This concept employs multiple cores on the same processor chip, operating
at the same clock frequency and sharing the same power source.
 Based on the number of cores, these processors are known as:
o Dual Core – 2 cores
o Tri Core – 3 cores
o Quad Core – 4 cores
 These processors implement multiprocessing concept where each core
implements pipelining and multithreading.

C. Reconfigurable Processors
 It is a processor with reconfigurable hardware features.
 Depending on the requirement, reconfigurable processors can change their
functionality to adapt to the new requirement. Example: A reconfigurable
processor chip can be configured as the heart of a camera or that of a
media player.
 These processors contain an Array of Programming Elements (PE) along
with a microprocessor. The PE can be used as a computational engine like
ALU or a memory element.

Q. 77 Elaborate the use of multi-core processor for embedded systems.


Ans.: Multicore Processors/Chiplevel Multi Processor(CMP):-
 One way of achieving increased performance is to increase the operating clock
frequency. Indeed it will increase the speed of execution with the cost of high
power consumption.
 In today‘s world most embedded devices are demanding battery power source for
their operation. Here we don‘t have the luxury to offer high performance with the
cost of reduced battery life.
 Here comes the role of multicore processors. Multicore processors incorporate
multiple processor cores on the same chip and work on the same clock
frequency supplied to the chip.
 Based on the number of cores, the processors are known as dual core (2 cores),
tri core (3 cores), quad core (4 cores), etc. Multicore processors implement
multiprocessing (simultaneous execution; don't confuse it with multitasking).
 Each core of the CMP implements pipelining, multithreading and superscalar
execution. Current implementations of Intel multicore processors support up to
4 cores, whereas Freescale multicore processors support up to 2 cores.
 It is notable that the multicore processor OCTEON™ CN3860, developed by
Cavium Networks, supports 16 MIPS processor cores capable of operating at a
clock frequency of 1 GHz.
Q. 78 Write a note on embedded OS trends.
Ans.: Different trends in the embedded industry related to:
 Processor Trends
 Operating System Trends
 Development Language Trends
 Bottlenecks faced by Embedded Industry
 Open Standards, Frameworks and alliances

1. PROCESSOR TRENDS :
 Following are some of the points of difference between the first generation of
processor/controller and today‘s processor/ controller.
 Number of ICs per chip: Early processors had a small number of ICs/gates per
chip. Today's processors with Very Large Scale Integration (VLSI)
technology can pack tens of thousands of ICs/gates per processor.
 Need for individual components: Early processors needed different
components like brown-out circuits, timers, and DAC/ADC to be separately
interfaced if required in the circuit. Today's processors have all these
components on the same chip as the processor.
 Speed of Execution: Early processors were slow in terms of the number of
instructions executed per second. Today's processors with advanced
architectures support features like instruction pipelining, improving the
execution speed.

2. OPERATING SYSTEM TRENDS


 The advancements in processor technology have caused a major change in the
Embedded Operating System Industry.
 There are lots of options for embedded operating system to select from which
can be both commercial and proprietary or Open Source.
 The virtualization concept has been brought into the embedded OS industry,
replacing the monolithic architecture with the microkernel architecture.
 This enables only essential services to be contained in the kernel, while the rest
are installed as services in the user space, as is done in mobile phones.

3. DEVELOPMENT LANGUAGE TRENDS


 There are two aspects to Development Languages with respect to Embedded
Systems Development.
 Embedded Firmware
It is the application that is responsible for execution of embedded system. It
is the software that performs low level hardware interaction, memory
management etc. on the embedded system.
 Embedded Software
It is the software that runs on the host computer and is responsible for
interfacing with the embedded system. It is the user application that
executes on top of the embedded system on a host computer. Early
languages available for embedded systems development were limited to C
& C++ only. Now languages like Microsoft C#, ASP.NET, VB, Java, etc.
are available.
4. BOTTLENECKS FACED BY EMBEDDED INDUSTRY :
 Following are some of the problems faced by the embedded devices industry:
 Memory Performance:
The rate at which processors can process data has increased considerably,
but the rate at which memory speed is increasing is slower.
 Lack of Standards/ Conformance to standards
Standards in the embedded industry are followed only in certain handful
areas like Mobile handsets. There is growing trend of proprietary
architecture and design in other areas.
 Lack of Skilled Resource
Most important aspect in the development of embedded system is
availability of skilled labor. There may be thousands of developers who
know how to code in C, C++, Java or .NET but very few in embedded
software

Q. 79 What are the different development language trends available for embedded
system programming?
Ans.: Different Development Language Trends For Embedded System
 There are two aspects to Development Languages with respect to Embedded
Systems Development
A. Embedded Firmware:-
 It is the application that is responsible for execution of embedded system.
 It is the software that performs low level hardware interaction, memory
management etc. on the embedded system.

B. Embedded Software
 It is the software that runs on the host computer and is responsible for
interfacing with the embedded system
 It is the user application that executes on top of the embedded system on a
host computer.

 Early languages available for embedded systems development were limited to C
& C++ only. Now languages like Microsoft C#, ASP.NET, VB, Java, etc. are
available.
A. Java
 Java is not a popular language for embedded systems development due to its
nature of execution.
 Java programs are compiled by a compiler into bytecode. This bytecode is
then converted by the JVM into processor specific object code.
 During runtime, this interpretation of the bytecode by the JVM makes Java
applications slower than other cross compiled applications.
 This disadvantage is overcome by providing in built hardware support for
java bytecode execution.
 Another technique used to speed up execution of java bytecode is using Just
In Time (JIT) compiler. It speeds up the program execution by caching all
previously executed instructions.
 Following are some of the disadvantages of Java in Embedded Systems
development:
o For real time applications java is slow
o Garbage collector of Java is non-deterministic in behaviour which makes it
not suitable for hard real time systems.
o Processors need to have a built-in version of the JVM
o Processors that don't have a JVM require it to be ported for the
specific processor architecture.
o Java is limited in terms of low level hardware handling compared to C and
C++
o The runtime memory requirement of Java is high, which most
embedded systems cannot afford.

B. .NET CF
 It stands for .NET Compact Framework.
 .NET CF is a scaled-down version of the original .NET Framework meant
for use on embedded systems.
 The CF version is customized to contain all the necessary components for
application development.
 The Original version of .NET Framework is very large and hence not a
good choice for embedded development.
 The .NET Framework is a collection of precompiled libraries.
 Common Language Runtime (CLR) is the runtime environment of .NET. It
provides functions like memory management, exception handling, etc.
 Applications written in .NET are compiled to a platform neutral language
called Common Intermediate Language (CIL).
 For execution, the CIL is converted to target specific machine instructions
by CLR.

Q. 80 Draw the architecture of PIC processor.
Or
Q. 81 What families of PIC microcontrollers are available?
Or
Q. 82 Draw the block diagram of PIC microcontroller and explain each unit.
Or
Q. 84 Write a note on PIC microcontroller.
Ans.: The Architecture of PIC Processor:
[Block diagram of the PIC microcontroller architecture not reproduced here.]
Q. 86 Explain the function of each bit of Status register of PIC microcontroller.
Ans.: 1. General purpose registers are the memory locations available to users for data
storage.
2. They can be accessed directly or indirectly through the File Select Register
(FSR).
3. Special Function registers are used by CPU and peripherals for configuration,
status indication, data association, etc.
4. They are located in the lower memory areas of a memory bank.
5. The size of the SFR memory varies across device families and the typical size for
16F877 is 96 bytes.
6. The status of arithmetic operations and reset along with the data memory bank
selector bits RP0 and RP1 are held in the STATUS register.
7. The bit details of the Status register are explained below :
B7 B6 B5 B4 B3 B2 B1 B0
IRP RP1 RP0 TO\ PD\ Z DC C

8. The table given below explains the meaning and use of each bit :

Bit Name Explanation


C Carry/Borrow Flag C=1 for addition operation indicates
a carry generated by the addition
operation and
C=0 for subtraction operation
indicates borrow generated by the
subtraction operation executed by
ALU.
DC Digit Carry/Borrow Flag DC=1 for addition operation indicates
a carry generated out of the fourth
bit (bit 3) by the addition operation
and
DC=0 for subtraction operation
indicates borrow generated out of
the fourth bit by the subtraction
operation executed by ALU.
Z Zero Flag Z=1 when the result of an arithmetic
or logic operation executed by the
ALU is zero.
PD\ Power Down Flag PD\=1 After power up or on
execution of the CLRWDT
instruction.
PD\=0 On executing a SLEEP
instruction.
TO\ Time out Flag TO\=1 After power up or on
execution of the CLRWDT or
SLEEP instruction.
TO\=0 On the expiration of
Watchdog Timer (WDT).
Bit Name Explanation
RP0 Memory Bank select bit 0 Memory bank selector bit.
RP1 Memory Bank select bit 1
IRP Register Bank Select Memory bank selector for indirect
addressing
IRP=0 : Bank 0,1 (Indirect Address
000H to 0FFH)
IRP=1: Bank 2, 3 (Indirect Address
100H to 1FFH).

Q. 89 Explain the function of each bit of Status Register SREG of AVR


microcontroller.
Ans.: The status register SREG holds the status of the most recently executed arithmetic
instruction in the ALU.
The bit details of the status register are explained below:
B7 B6 B5 B4 B3 B2 B1 B0
I T H S V N Z C

Bit Name Explanation

C Carry Flag C=1 indicates a carry generated by the arithmetic or
logic operation executed by the ALU.

Z Zero Flag Sets when the result of an arithmetic or logic
operation is zero.

N Negative Flag Sets when the result of an arithmetic or logic
operation is negative.

V Overflow Flag Two's complement overflow flag. Sets when overflow
occurs in a two's complement operation.

S Sign Flag Exclusive OR (XOR) between the negative flag (N)
and the overflow flag (V).

H Half Carry Flag Sets when a carry is generated out of bit 3 (bit index
starts from 0) in an arithmetic operation. Useful in
BCD arithmetic.

T Bit Copy Storage Acts as source and destination for the bit copy
instructions Bit Load (BLD) and Bit Store (BST)
respectively. BLD loads the specified bit in the
register with the value of T. BST loads T with the
value of the specified bit in the specified register.

I Global Interrupt Activates or deactivates the interrupts. If set to 1, all
Enable the interrupts configured through their corresponding
interrupt enable registers are serviced. If set to 0,
none of the interrupts are serviced. The I bit is
cleared by hardware when an interrupt occurs and is
set automatically on executing a RETI instruction.

Q. 92 Explain different processing modes of ARM.


Ans.:  ARM processors support different processor modes, depending on the
architecture version (see Table).
 Note
ARMv6-M and ARMv7-M do not support the same modes as other ARM
processors. This section does not apply to ARMv6-M and ARMv7-M.

Table. ARM processor modes


Processor mode Architectures Mode number
User All 0b10000
FIQ - Fast Interrupt Request All 0b10001
IRQ - Interrupt Request All 0b10010
Supervisor All 0b10011
Abort All 0b10111
Undefined All 0b11011
System ARMv4 and above 0b11111
Monitor Security Extensions only 0b10110
 All modes except User mode are referred to as privileged modes. They have full
access to system resources and can change mode freely. User mode is an
unprivileged mode.
 Applications that require task protection usually execute in User mode. Some
embedded applications might run entirely in Supervisor or System modes.
 Modes other than User mode are entered to service exceptions, or to access
privileged resources.
 On architectures that implement the Security Extensions, code can run in either a
secure state or in a non-secure state. See the ARM Architecture Reference
Manual for details.

Q. 93 Explain different type of instructions of ARM controller.


Ans.: The instruction set architecture (ISA) of ARM supports three different types of
Instruction sets, namely;
 ARM Instruction Set:
1. The original ARM instruction set.
2. Here, all instructions are 32 bits wide and are word aligned.
3. Since, all instructions are word aligned, one single fetch reads four 8-bit
memory locations. Hence there is no significance for the lower 2-bits of the
program counter.

 Thumb Instruction Set:


1. Thumb is the 16-bit subset for the 32-bit ARM instructions.
2. These instructions can be considered as a 16-bit compressed form of the
original 32-bit ARM instructions.
3. These instructions can be executed either by decompressing the instruction to
the original 32-bit ARM instruction or by using dedicated 16-bit instruction
decoding unit.
4. Here, all instructions are half word aligned, meaning one single fetch reads
two 8-bit memory locations.
5. Here there is no significance for the lower bit of the program counter.
6. Provides higher code density than the 32-bit original ARM instruction.

 Jazelle Instruction Set:


1. Jazelle is the h/w implementation of the Java Virtual Machine (JVM) for
executing Java bytecodes.
2. 140 Java instructions are implemented directly in the Jazelle h/w unit; the
remaining 94 Java instructions are implemented by emulating them with
multiple ARM instructions.
3. All instructions related to Jazelle are 8 bits wide.
4. Here the processor reads 4 instructions at a time by word access.

The ARM instruction set can be broadly classified into:


1. Data Processing Instruction:
 The data processing instructions include the arithmetic and logical
operation instructions, and the comparison and data movement instructions.
 The following table summarizes the data processing operations supported by
ARM,

Operation Category Instructions

Arithmetic ADD, ADC, SUB, RSB
Logical AND, ORR, EOR, BIC
Comparisons CMP, CMN, TST, TEQ
Data Movement MOV, MVN

2. Single Register and Multiple Register Data Transfer Instructions:


 The instructions related to register data transfer come under this category.
 These instructions involve either a single register data transfer or multiple
register data transfer.

3. Program Control Instruction:


 Diverts the program flow.
 It can be either conditional execution or unconditional branching.
 BX, B, BL etc., are examples for branching instructions.

4. Multiply Instructions:
 Multiply (MUL), Multiply Accumulate (MLA), Multiply Long (UMULL,
SMULL) and Multiply Long Accumulate (UMLAL, SMLAL) are the
multiply instructions supported by the ARM instruction set.

5. Barrel Shift Operations:
 ARM ISA doesn't implement dedicated shift instructions.
 However, shift operations like Logical Shift Left (LSL), Logical Shift Right
(LSR), Arithmetic Shift Right (ASR), Rotate Right (ROR) and Rotate Right
Extended (RRX) are implemented as part of other instructions, with the
help of the barrel shifter.

6. Branching Instruction:
 For changing the program execution flow.
 It can be either conditional branching or unconditional branching.

7. Co-processor Specific Instruction:
 Instructions used to communicate with co-processors attached to the ARM
core.
 MCR and MRC (co-processor register transfers), LDC and STC
(co-processor memory transfers) and CDP (co-processor data processing)
are examples of this.
