
CONTENTS

CHAPTER 1: INTRODUCTION

1.1 Introduction -------------------------------------------------01


1.2 Abstract -------------------------------------------------------03
1.3 Example of web server--------------------------------------05
1.4 Characteristics of embedded systems -------------------06

CHAPTER 2: THE OSI REFERENCE MODEL

2.1 Summary of OSI layers----------------------------------12


2.2 Encapsulation -----------------------------------------------14
2.3 Some abbreviations ----------------------------------------15
2.4 Example of OSI system------------------------------------15
2.5 Relation between TCP/IP Suite and OSI model------21

CHAPTER 3: TCP/IP SUITE

3.1 TCP Backgrounder-----------------------------------------25


3.2 TCP window management--------------------------------28
3.3 IP Backgrounder--------------------------------------------30
3.4 IP Addressing-----------------------------------------------30
3.5 IP Fragmentation-------------------------------------------38
3.6 Ethernet/TCP/IP Applications---------------------------39

CHAPTER 4: APPLICATION LAYER PROTOCOLS

4.1 Introduction --------------------------------------------41


4.2 What is Application layer---------------------------41
4.3 Application layer protocol Architecture---------42
4.4 Peer to peer architecture----------------------------44
4.5 Application layer--------------------------------------46
4.6 Embedded HTTP server-----------------------------47
4.7 Protocol description-----------------------------------48

CHAPTER 5: HARDWARE DESCRIPTION

5.1 Introduction &overview---------------------------------54


5.2 RCM2200 Description-----------------------------------54
5.3 Hardware setup-------------------------------------------59
5.4 Development hardware connections------------------64
5.5 ADC Description-----------------------------------------69
5.6 Temperature sensor--------------------------------------72

CHAPTER 6: SOFTWARE IMPLEMENTATION-----75

CONCLUSION------------------------------------------------------------82
BIBLIOGRAPHY--------------------------------------------------------83

1. INTRODUCTION

1.1 Introduction:

The Internet has revolutionized many aspects of our daily lives. It has affected
the way we do business as well as the way we spend our leisure time. Count the ways
you have used the Internet recently. Perhaps you have sent email to a friend, paid a
utility bill, read a newspaper from a distant city, or looked up a local movie schedule
– all by using the Internet. Or maybe you researched a medical topic, booked a hotel
reservation, and chatted with a fellow trekkie. The Internet is a communication system
that has brought a wealth of information to our fingertips and organized it for our use.

These days, it seems, everybody is talking about the need to become
“Internet aware” – a new catch phrase sounding eerily similar to something
said in Eden some time ago. The explosive growth and appeal of the Internet has
everyone scrambling to get on board, or be thought of as somehow “20th century”.
Today, Internet accessibility in one form or another, if not an a priori requirement, is
at least a highly desirable option in many embedded applications. Previously the sole
domain of mainframes, PCs, and workstations, TCP/IP stacks and other networking
applications are now being written by the dozens for embedded microprocessors and
microcontrollers, providing them the smarts to hook into the “matrix”. We’ll also seek
to convey some understanding of the basic issues one needs to consider when
deciding which approach is best suited for a given embedded Internet application.

While we wonder about the practicality of Embedded Systems in refrigerators
that have the capability to communicate over the Internet, there are many examples of
embedded systems in which communication technologies create very useful
products. Embedded Systems are now transmitting electric meter readings over low-
bandwidth wireless links to alleviate the need to read them visually. Embedded
Systems with Global Positioning System technology and wireless links are used to
pinpoint the exact location, speed, oil pressure, and other parameters of fleets of trucks
anywhere in the country. These example applications are not only useful but also
financially viable.

Communication is fast becoming a general requirement even for Embedded
Systems that previously included no form of external communication. This
requirement is nothing to fear, as networking technologies can meet the challenge of
the embedded domain and make developing these systems much simpler.

A networked application is nothing more than a program that has the
capability to communicate over a network with another networked application. With
the TCP/IP protocol suite and the standard Berkeley sockets API, creating networked
applications is simple.

Embedded Systems commonly utilize one of two interfaces for connectivity:
Ethernet or a serial link. An Ethernet connection requires the availability of a local
network to which the device may connect. Connecting via a serial link opens many
other avenues, since communication can take place through a device acting as a
gateway or through a modem (wireless or landline).

1.2 What is an “Embedded” system?

An important semantic point to settle before we really get into this
chapter is the definition of an embedded system. As the saying goes, there are
as many definitions as there are embedded system developers, but a good
common one exists:

An embedded system is a specialized computer system that provides a
dedicated function such as control, monitoring, or other services within a larger
system.

Note that from this definition, a desktop personal computer does not
qualify. Nor do any other general-purpose devices such as Personal Digital
Assistants (PDA). The embedded computer in the microwave fits this
definition, as does the computer within our car's radio. Even cellular phones still fit
this definition, although that definition is changing with the higher integration
possible today (cell phone/PDA integration).

It's important to note that an embedded system in no way implies a real-
time system. A real-time system responds to unpredictable external stimuli in a
timely and predictable way. All of the protocols discussed in this text have very
soft requirements with respect to timing and therefore embedded aspects will be
the secondary focus.

One other interesting distinction of most embedded systems is that the
software that is developed for them is built (compiled and linked) on another
computer. When we build software on a standard desktop PC, we compile the
source and run it on the same machine. In an embedded system, we commonly
"cross-compile" our code on one system (the host) and then execute it on our
embedded system (the target). The host and target commonly differ by
architecture, so the cross-compiler generates code that is not executable on the
host.

ABSTRACT
1.3 Embedded Web Server:

An embedded web server is an implementation that connects Embedded Systems
to the Internet, WANs, and LANs. This is achieved by implementing a TCP/IP protocol
stack and an Ethernet connection in an Embedded System. Once the system is able to
communicate through the TCP/IP protocol, it can be placed in any TCP/IP network,
such as the Internet or a LAN, and the device can be programmed for the desired
functionality. Typical functions are a web server, e-mail client, FTP server, POP3 client, etc.

A major advantage of this kind of implementation is remote monitoring and
remote control. In this model, TCP/IP-enabled Embedded Systems can be
scattered around a large network, or even throughout the world! Still, all of the
systems can be controlled from a remote central station.

In this project we are developing an “Embedded Web Server for Temperature
Monitor and Controller” based on the 8-bit Rabbit microprocessor. This device can be
used in any LAN, WAN, or the Internet. Using this application, we can get an idea of
how to control more complex equipment in a large network from a centralized
station.

Increasing demand for network connectivity in Embedded Systems can be


addressed by this technology.

Figure: System block diagram – Temperature Sensor → ADC → Rabbit
Microprocessor → Ethernet Port → TCP/IP → Internet / LAN / WAN →
Remote PC (TCP/IP-enabled client).

Embedded solutions:
Among the myriad of embedded Internet solutions being touted today, all fall neatly
into one of five fundamental groups:

1. Embedding a fully functional (or nearly so), third party TCP/IP stack into your
application, enabling direct Internet access…

2. Using a third party’s external TCP/IP gateway device, such as NetSilicon’s


Net+ARM™ solutions…

3. Writing your own TCP/IP stack, or some functional subset thereof…

4. Using your own, or a third party’s, “lightweight”, proprietary communication


protocol to talk with an external Gateway device, which is itself connected to the
Internet (e.g. EmWare)…

5. Everything else – for those which don’t fall that neatly.

1.4 Example of a Web Server:

Suppose the embedded web server is embedded in several units in a
house. Every server is connected to the network. A computer located at home,
as shown in Figure 2, controls all devices and can receive requests from other
computers on the Internet. The web server is identified by its unique IP
address and can be controlled remotely from anywhere in the world, as long
as the authorization is in order.

1.5 Characteristics of Embedded Systems

Let's now look at some of the characteristics of embedded systems regarding
their capability of connecting to the Internet.

1.5.1 RTOS or no RTOS

One of the classic arguments in embedded systems software design is
whether to use an operating system kernel (or Real-Time Operating System,
RTOS). The operating system (OS) provides a basic layer of services to simplify
the job of the embedded application developer. From the other end of the
spectrum, the OS can create resource constraints that drive the cost of the end
device higher. It's very hard these days not to justify the use of an operating
system or kernel, especially when the device must communicate using Internet
protocols. A TCP/IP stack (and the accompanying upper-layer socket interface and
lower-layer device driver) puts additional constraints upon the embedded system
design.

For example, the TCP/IP stack must have a relatively accurate time
source for timer management (to handle the various timeouts and timing
activities that take place in the stack). The stack must also have a resource
management system for packets. This could be a standard dynamic memory
management system (malloc/free) or a custom system that pre-allocates packet
buffers for speed.

It is possible to use a TCP/IP stack with no kernel as long as you provide
the services that it needs (timer, memory manager, etc.). In systems with minimal
requirements, this can yield a very small and fast implementation. When
possible, the best arrangement is an OS that is pre-configured with a network
stack. For example, all embedded Linux distributions automatically include a
standard TCP/IP stack. Commercial RTOS vendors such as QNX Software and
Wind River Systems optionally include networking stacks. Other than
simplicity of development (you concentrate on your application instead of the
OS/stack), there are other benefits to this model. The largest benefit is that of
available software.

Consider an embedded Linux distribution. Since the Linux distribution on
the desktop and in the embedded device are virtually the same, networking
protocols and applications can be quickly and easily ported. This means that the
mass of code available for standard desktop systems can be easily tailored for
the embedded environment, saving time and effort.

1.5.2 Blocking or non-blocking calls:

The standard socket layer defines blocking semantics such that a call to
read from a socket with no data available will cause the call to block until data is
available. The write call behaves in a similar manner. If the number of octets
being written is greater than the available space in the buffer, the write call
blocks until sufficient space in the write buffer is available. If an OS is not
available, these blocking semantics are also not available. The socket layer can
work in a non-blocking manner, though in many cases this can be inefficient.
Some commercial stacks provide a callback mechanism such that when some
type of event occurs for a given socket, a user-defined application function is called.
This type of architecture can be extremely efficient.

1.5.3 Single or multi-threaded designs:

With an RTOS, the stack may be defined as a single task or be split among the
protocol layers. In most cases, this distinction exists between the stack and the
data link layer. The Media Access Control (MAC) or serial port (for PPP/SLIP)
is commonly split from the TCP/IP stack software due to its asynchronous
nature. Recall from Chapter 1 that this split is due to the asynchrony of the layers'
processing requirements.

1.5.4 Memory Management Issues

Embedded systems are commonly designed for minimal cost. With cost
reduction comes reduction in the available resources, such as Flash/non-volatile
memory and RAM. Having a minimal amount of RAM means that extreme care
must be taken regarding the method that the TCP/IP stack uses for the allocation
of packets.

The method of packet allocation can typically be configured in
embedded stacks, particularly those that are OS/kernel agnostic. The two primary
methods are dynamic allocation and pre-allocation.

Dynamic allocation means that whenever a packet is
needed, it is dynamically allocated from the heap subsystem. The heap is nothing
more than a large block of memory that can be broken down into smaller pieces
depending upon application needs. The heap is also used for generalized resource
management, so it's a common base for system-wide memory management. The
general problem with dynamic memory management is garbage collection. When
a packet is returned to the heap, the heap must fold the packet back in, joining
adjacent blocks whenever possible to preserve large allocable segments. If the
returned blocks were simply left in their original size, you'd ultimately end up
with the inability to allocate blocks over the largest available block size, and an
inefficient search algorithm to find the block closest to your requested size.

Pre-allocation is the allocation of packets (commonly of a fixed size) up front,
followed by immediate queuing for quick allocation and release. With fixed-size
packets there is no garbage collection, which makes the method both simple and
efficient. Pre-allocation can be based upon a dynamic memory management
model (for initial allocation of packets) or can simply be defined as blocks of
memory reserved for the application.

2.THE OSI REFERENCE MODEL:

The OSI reference model specifies standards for describing "Open Systems
Interconnection" with the term 'open' chosen to emphasize the fact that by using these
international standards, a system may be defined which is open to all other systems
obeying the same standards throughout the world. The definition of a common
technical language has been a major catalyst to the standardization of communications
protocols and the functions of a protocol layer.

The seven layers of the OSI reference model showing a connection between
two end systems communicating using one intermediate system.

The structure of the OSI architecture is given in the figure above, which
indicates the protocols used to exchange data between two users A and B. The figure
shows bidirectional (duplex) information flow; information in either direction passes
through all seven layers at the end points. When the communication is via a network
of intermediate systems, only the lower three layers of the OSI protocols are used in
the intermediate systems.

2.1 The summary of the OSI layers:

Physical layer: Provides electrical, functional, and procedural characteristics to


activate, maintain, and deactivate physical links that transparently send the bit stream;
only recognises individual bits, not characters or multicharacter frames.

Data link layer: Provides functional and procedural means to transfer data between
network entities and (possibly) correct transmission errors; provides for activation,
maintenance, and deactivation of data link connections, grouping of bits into
characters and message frames, character and frame synchronisation, error control,
media access control, and flow control (examples include HDLC and Ethernet).

Network layer: Provides independence from data transfer technology and relaying
and routing considerations; masks peculiarities of data transfer medium from higher
layers and provides switching and routing functions to establish, maintain, and
terminate network layer connections and transfer data between users.

Transport layer: Provides transparent transfer of data between systems, relieving


upper layers from concern with providing reliable and cost effective data transfer;

provides end-to-end control and information interchange with quality of service
needed by the application program; first true end-to-end layer.

Session layer: Provides mechanisms for organizing and structuring dialogues


between application processes; mechanisms allow for two-way simultaneous or two-
way alternate operation, establishment of major and minor synchronisation points, and
techniques for structuring data exchanges.

Presentation layer: Provides independence to application processes from
differences in data representation, that is, in syntax; syntax selection and conversion
are provided by allowing the user to select a "presentation context", with conversion
between alternative contexts.

Application layer: Concerned with the requirements of the application. All application
processes use the service elements provided by the application layer. The elements
include library routines which perform interprocess communication and provide
common procedures for constructing application protocols and for accessing the
services provided by servers which reside on the network.

The communications engineer is concerned mainly with the protocols


operating at the bottom four layers (physical, data link, network, and transport) in the
OSI reference model. These layers provide the basic communications service. The
layers above are primarily the concern of computer scientists who wish to build
distributed applications programs using the services provided by the network.
"Hop-by-Hop" "Network-wide" and "End-to-End" Communication

The two lowest layers operate between adjacent systems connected via the
physical link and are said to work "hop by hop". The protocol control information is

removed after each "hop" across a link (i.e. by each System) and a suitable new
header added each time the information is sent on a subsequent hop.

The network layer (layer 3) operates "network-wide" and is present in all


systems and responsible for overall co-ordination of all systems along the
communications path.

The layers above layer 3 operate "end to end" and are only used in the End
Systems (ES) which are communicating. The layer 4–7 protocol control information
is therefore unchanged by the Intermediate Systems (IS) in the network and is
delivered to the corresponding ES in its original form. Layers 4–7 (if present) in
Intermediate Systems play no part in the end-to-end communication.

2.2 Encapsulation:

When an application sends data using TCP, the data is sent down the
protocol stack, through each layer, until it is sent as a stream of bits across
the network. Each layer adds information to the data by prepending headers
and adding trailers to the data it receives. Figure 5 shows this process.

2.3 Some abbreviations:

• TCP segment: The unit of data that TCP sends to IP.


• IP datagram: The unit of data that IP sends to the network interface.
• Frame: The stream of bits that flows across the Ethernet.

IP (Internet Protocol) adds an identifier to the IP header it generates to
indicate which layer the data belongs to. IP handles this by storing an 8-bit
value in its header called the protocol field. Similarly, many different
applications can be using TCP or UDP at any time. The transport layer
protocols store an identifier in the headers they generate to identify the
application. Both TCP and UDP use 16-bit port numbers to identify
applications; each stores the source port number and the destination port
number in its respective header. The network interface sends and receives
frames on behalf of IP, ARP, and RARP. There must be some form of
identification in the Ethernet header indicating which network layer protocol
generated the data. To handle this, there is a 16-bit frame type field
in the Ethernet header.

2.4 Example of OSI Communication:

A typical OSI communications scenario can now be sketched. Assume that
one end system (local computer) has a client program which requires the services of a
remote server program located at a remote computer connected by a communications
network. A simplified diagram is shown in the figure below. This shows the
peer-to-peer communication.

Client / Server interaction across a packet data network
Detailed Description

The diagram above provides only a very simplified view of what happens. In
fact, a number of primitives are required to perform even such simple communication.
The text below gives a detailed analysis of the primitives and PDUs which are
exchanged.

The user executes the client program and attempts to send one piece of data to
the server (for example, a request to look up the address corresponding to a name).
The actual data need not be of concern here. Before the client may send the data, it
must first establish a connection to the application layer (i.e. the communications
software library) on the local computer.

The client program uses an A-Associate request function call to start the
communication session. Once the application layer has initialized, it contacts the
presentation layer, sending a P-Connect request primitive (see below). This
establishes the format of the data to be used and the data formats which are to be
supported by the system A on the network. This information is sent to the session
layer as an SDU with an S-Connect request primitive. The session layer allocates a
session identifier, and selects appropriate protocol options to support the services
requested by the application layer. The session layer also identifies the intended
recipient of the communication (i.e. the remote computer).

Establishment of a connection between client and server (PART 1):

The session layer proceeds to request a transport layer connection to the


remote system using a T-Connect request. At this stage, the session layer may choose
to not send the S-Connect SDU, and instead to wait to see whether the transport
connection request succeeds. The transport request identifies the remote service
required (i.e. the port identifier for the server) and the type of transport protocol to be
used. The basic types of transport service are reliable and best-effort.

At this stage the transport layer requests the network layer to establish a
connection to the remote system. The network layer service will normally have
established a link layer connection (and a physical layer connection) to the nearest
intermediate system. In this case however, we assume that all the layers must be
connected.

Finally, the network layer connect packet is sent using the link layer service to
the remote system. The system responds by invoking the requested transport layer,
and establishing a transport layer connection. If the remote system is able, it will then
confirm the establishment of the transport connection, and pass the confirmation back
to the local client computer. At this time the two systems have a communications path
using the transport layer service.

The transport layer may then use this service to request a connection to the
server process on the remote computer. It does this by forwarding the original session
layer SDU (which the session layer of the local computer had previously stored)
as a transport layer packet. This passes across the network layer service, and at the
remote system is passed to the session layer protocol.

The received SDU identifies the session ID and the application process with
which the client wishes to communicate. The presentation layer is sent an S-Connect
indication, containing the presentation options requested by the client and the A-
Associate request sent by the client. The application layer now attempts to connect to
the server process. (In some cases, a new server program will be activated to handle
this new request).

Establishment of a connection between client and server (PART 2):

Once this has succeeded a response is generated, carrying the final details of
the connection. This information is passed as an A-Associate response primitive to the
application layer. At each layer additional PCI is added by the remote system, and
finally an N-Data primitive is sent across the network layer service. At the receiver
each layer verifies the information it receives, and confirms the successful connection.
The application layer sends an A-Associate confirm message to the client process,
which is then ready to transmit data.

Transmission of data is accomplished similarly. The client application sends a


Data Request primitive, along with the data, to the application layer, which adds its
header and passes a Data Request primitive and Data Unit to the presentation layer.
This process continues, with addition of appropriate headers, until it reaches the data
link layer, which adds its header and passes the individual bits to the physical layer
for transmission. The bits are received at the remote physical layer, and passed to the
remote system data link layer. When the data link layer recognises the end of the data
unit, it strips off the header and sends the remaining PDU to the network layer (by
including it in a Data Indication primitive).

Data Indication primitives, with reduced data units, cascade up the layers until the
application data reaches the server application. The client application only learns of
successful receipt if the server application returns an Acknowledgment (in this case,
possibly with the requested data). The Acknowledgment returns across the network.
The "association" has now been made and data may be sent across the network
between the client and server.

After data transfer is complete, a disconnect phase similar to the connect phase will
occur.

2.5 Relationship between TCP/IP suite and OSI reference model:

Figure 1-1 shows the TCP/IP protocol suite in relationship to the OSI
reference model.[1] The network interface layer, which corresponds to the OSI
physical and data link layers, is not actually part of the specification. However, it has
become a de facto layer either as shown in Figure 1-1 or as separate physical and data
link layers. It is described in this section in terms of the OSI physical and data link
layers.
The OSI protocol suite itself has become, with some rare exceptions, a relic of
early Internet history. Its current contribution to networking technology seems to be
mainly limited to the usefulness of its reference model in illustrating modular protocol
suites to networking students, and, of course, to the IS-IS routing protocol, which is
still widely used in large service provider and carrier networks.

Figure 1-1. TCP/IP protocol suite.

The physical layer contains the protocols relating to the physical medium on
which TCP/IP will be communicating. Officially, the protocols of this layer fall
within four categories that together describe all aspects of physical media:

Electrical/optical protocols describe signal characteristics such as voltage or


photonic levels, bit timing, encoding, and signal shape.

Mechanical protocols are specifications such as the dimensions of a connector


or the metallic makeup of a wire.

Functional protocols describe what something does. For example, "Request to


Send" is the functional description of pin 4 of an EIA-232-D connector.

Procedural protocols describe how something is done. For example, a binary 1
is represented on an EIA-232-D lead as a voltage more negative than −3 volts.

The data link layer contains the protocols that control the physical layer: how
the medium is accessed and shared, how devices on the medium are identified, and
how data is framed before being transmitted on the medium. Examples of data-link
protocols are IEEE 802.3/Ethernet, Frame Relay, ATM, and SONET.

The internet layer, corresponding to the OSI network layer, is primarily


responsible for enabling the routing of data across logical network paths by defining a
packet format and an addressing format. This layer is, of course, the one with which
this book is most concerned.

The host-to-host layer, corresponding to the OSI transport layer, specifies the
protocols that control the internet layer, much as the data link layer controls the
physical layer. Both the host-to-host and data link layers can define such mechanisms
as flow and error control. The difference is that while data-link protocols control
traffic on the data link (the physical medium connecting two devices), the transport
layer controls traffic on the logical link (the end-to-end connection of two devices
whose logical connection traverses a series of data links).

The application layer corresponds to the OSI session, presentation, and
application layers. Although some routing protocols such as Border Gateway Protocol
(BGP) and Routing Information Protocol (RIP) reside at this layer,[2] the most
common services of the application layer provide the interfaces by which user
applications access the network.


BGP is an application layer protocol because it uses TCP to transport its
messages, and RIP is as well because it uses UDP for the same purpose. Other routing
protocols such as OSPF are said to operate at the internet layer because they
encapsulate their messages directly into IP packets.

A function common to the protocol suite of Figure 1-1 and any other protocol
suite is multiplexing between layers. Many applications might use a service at the
host-to-host layer, and many services at the host-to-host layer might use the internet
layer. Multiple protocol suites (IP, IPX, and AppleTalk, for example) can share a
physical link via common data-link protocols.

3. TCP/IP SUITE

3.1 TCP Backgrounder:

The primary purpose of any transport protocol is to provide a “…reliable, securable, logical (i.e. virtual) connection between pairs of processes”. As per RFC 1122, “TCP is the primary virtual-circuit transport protocol for the Internet suite.” By “virtual-circuit”, what is meant is that, although TCP establishes what appears to be an actual circuit-switched, or physical, connection (just like the one you make when you phone in your pizza order), TCP is actually a packet-switched protocol. Unlike the direct point-to-point circuit established between a pair of telephones when placing a call, each packet in a packet-switched protocol may be routed through different circuits, or paths, in reaching its destination. In a packet-switched protocol, every packet contains a source and a destination address. This enables the dynamic routing of packets. As circuits become available or unavailable, the several packets of any single message may be routed through the Internet using many different paths before reaching their final destination. TCP is used by applications requiring a reliable, connection-oriented transport service, such as Web browsers (HTTP), electronic mail (SMTP/POP), and file transfer programs (FTP). What does all that mean? Well, as per the venerable RFC, providing this Quality of Service (QoS) over an unreliable network requires facilities in the following areas:

Basic Data Transfer:

TCP manages the transfer of data between peers by encapsulating the data into
segments, which are then carried in IP datagrams through the Internet. TCP attaches a
header as shown in Figure 1 to each segment, carrying parameters necessary for
addressing, flow-control, and other important functions.

Figure 1. TCP Header Format

Reliability:
TCP includes mechanisms to recover data that has been damaged, lost, duplicated,
or received out of order. These mechanisms include:

1. Assigning a number to each byte transmitted (the sequence number), and requiring an acknowledgement (or "ACK") from the receiving TCP for all bytes sent. If such an acknowledgment is not received within a predefined timeout interval, the data is retransmitted. At the receiver, these sequence numbers are used to reconstruct the original data. It is possible for segments to be received out of order, should they be routed through paths having unequal transit times. In addition, because of the varying and potentially unequal delays incurred by different segments, a transmitting TCP may not receive a timely acknowledgement should a segment be unduly delayed. In that case, the transmitter would resend this segment, incorrectly inferring that it had been either lost or damaged, resulting in the reception of duplicated segments by the receiving TCP. In both of these cases, the segments' sequence numbers help ensure that the data reconstructed by the receiving TCP exactly matches that originally sent. Segments received out of order are correctly reordered, and duplicate segments are discarded.
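The reordering-and-deduplication role of sequence numbers can be sketched in a few lines. This is a hypothetical illustration, not how a real TCP stack is written (real implementations track byte ranges in a receive buffer, not whole segments):

```python
def reassemble(segments):
    """segments: list of (sequence_number, data) pairs in arrival order."""
    buffered = {}
    for seq, data in segments:
        if seq not in buffered:          # a duplicate segment is simply discarded
            buffered[seq] = data
    # deliver in sequence-number order, regardless of arrival order
    return b"".join(buffered[seq] for seq in sorted(buffered))

# Segments arrive out of order, with the segment at sequence 10 duplicated:
stream = reassemble([(20, b"world"), (10, b"hello "), (10, b"hello ")])
```

Despite the scrambled, duplicated arrivals, the reconstructed stream exactly matches what was originally sent.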

2. Including a checksum for each segment transmitted. This checksum must be confirmed by the receiving TCP. Should a segment's checksum fail, it is not acknowledged; in this case, the sending TCP will eventually resend the segment.

Flow Control:
TCP utilizes a method of flow control called a window. The window is a 16-
bit value transmitted in every segment header indicating the maximum number of
bytes that the sender may transmit before receiving further permission. More on this
later…

Multiplexing:
In a typical host (all systems using TCP/IP attached to the Internet - except
routers), multiple resident applications (e.g. a Web browser, and an e-mail client) may
simultaneously utilize TCP’s services. Within each host, each application is assigned
a port number, thereafter used by TCP as a “handle” to identify the application. Using
this port number, TCP is able to determine which segment goes to which application.
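Port-based demultiplexing amounts to a lookup from destination port to application. A minimal sketch, with illustrative port numbers and handler functions that stand in for resident applications:

```python
# Hypothetical registry mapping port numbers to resident applications.
handlers = {
    80: lambda payload: "web server got " + payload,
    25: lambda payload: "mail server got " + payload,
}

def demultiplex(dest_port, payload):
    """Hand an incoming segment's payload to the application bound to the port."""
    handler = handlers.get(dest_port)
    if handler is None:
        return None   # a real TCP would answer an unbound port with a RST
    return handler(payload)
```

Each incoming segment carries its destination port in the header, so TCP never has to inspect the payload to decide which application receives it.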

Connections:

Not to be confused with the physical, point-to-point connection mentioned


earlier, TCP is a connection-oriented protocol. That is, in order to achieve the
reliability and implement the flow-control mechanism mentioned above, TCP must
establish, manage, and maintain a connection between the two peers exchanging data.
A connection is defined as a pair of sockets, and a socket is defined as the
concatenation of the application’s port number with its host’s IP address. These data,
plus each TCP’s sequence numbers, window sizes, etc., specifies the connection.
When two hosts wish to communicate, their respective TCP's first establish a
connection (initialize the status information on each side). When their
communications are complete, the connection is terminated, or closed, to free the
resources for other uses.
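The socket-pair definition can be written out directly. This sketch just models the data; the addresses and ports are made up for illustration:

```python
from collections import namedtuple

# A socket is the concatenation of a host's IP address and an application's port.
Socket = namedtuple("Socket", ["ip", "port"])

# A connection is defined as a pair of sockets; the four values
# (local IP, local port, remote IP, remote port) uniquely identify it.
Connection = namedtuple("Connection", ["local", "remote"])

conn = Connection(local=Socket("192.168.1.10", 49152),
                  remote=Socket("10.0.0.5", 80))
```

Because the connection is identified by the full four-tuple, one host can hold many simultaneous connections to the same remote server port, as long as the local ports differ.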

Precedence and Security:

TCP includes features that allow applications to indicate certain levels of security and precedence for their communications. Default precedence and security values are required to be used when these features are not explicitly indicated (which is most of the time).

3.2 TCP and Window Management

Though initially perhaps somewhat confusing, TCP's use of "the Window", and just how this Window is able to slide, is one of TCP's fundamental concepts. Essentially, TCP's Window is a means of flow control, somewhat analogous to the XON/XOFF mechanism used on asynchronous serial links. However, TCP's Window augments this basic flow-control mechanism with means to maximize the efficiency of the communication channel. In the TCP context, efficiency is defined as the maximum potential data flow between peers in the shortest possible time; that is, the transmission of data in a manner that utilizes the least amount of network traffic. Every TCP segment sent out over a network contains a dynamic Window value in the header whose purpose is to inform the other end of the connection just how much data it is currently prepared to accept. At first glance, this may seem a little redundant, since during the SYN process each TCP explicitly or implicitly advertises its Maximum Segment Size (MSS). Once the other end knows the maximum number of bytes it can transmit in a segment, why does it require yet another parameter called a Window? The answer is quite simple: the MSS value advertised during SYN is usually totally unrelated to the buffer capacity of the receiving application. That is, the MSS value stated during SYN is governed by the underlying Link layer's maximum frame size. An Ethernet frame's payload, for example, is limited to 1500 bytes. Consequently, a TCP sitting on top of an Ethernet would likely advertise an MSS of no greater than about 1460 bytes (to account for lower layer header overhead). Nevertheless, the application's receive buffers may be larger than Ethernet's maximum frame size and, as a consequence, may be capable of receiving more than
one frame at a time. This is desirable in that it reduces the number of ACK packets the
receiver must send, again improving network efficiency. In this case, the sender may
send several segments without waiting for a confirmatory “ACK” after each segment.
Since the delays encountered by datagrams traversing the internet are highly variable,
requiring a transmitter to wait for the peer to ACK every segment before sending
another would result in a great deal of wasted time. The judicious use of the Window
helps minimize such waste by allowing the transmitter to send as much data as the
peer is capable of accepting, without having to wait for an ACK of the individual
segments. Certainly, the receiver must still “ACK” every segment, but this can be
done in an aggregate manner instead of one at a time.
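The arithmetic behind the sliding window is simple: the sender may have at most one window's worth of unacknowledged bytes in flight at any moment. A hedged sketch of that rule (variable names are illustrative, not from any real implementation):

```python
def sendable(last_byte_acked, last_byte_sent, window):
    """How many more bytes the sender may transmit before the next ACK arrives."""
    in_flight = last_byte_sent - last_byte_acked   # sent but not yet ACKed
    return max(0, window - in_flight)

# With a 4096-byte advertised window, the sender can fire off several
# segments back to back; with 3000 unACKed bytes in flight, only 1096
# more may go out until an ACK slides the window forward.
```

When an ACK arrives, `last_byte_acked` advances, the in-flight count drops, and the sender may immediately transmit again; this is the "slide" in sliding window.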

One important consideration for any TCP's Window management scheme is the nefarious Silly Window Syndrome, or SWS. Since it was first described, SWS has generated a great deal of press and seems to be a favorite buzzword of many armchair Internet experts. SWS is an unforeseen weakness in a literal, straightforward implementation of the window management scheme suggested in RFC 793, first exposed by the original Telnet application. Subsequent studies led to the development and standardization of both sender and receiver algorithms to preclude it (for those interested, see RFC 1122, sections 4.2.3.4 and 4.2.3.3).

Simply defined, Silly Window Syndrome is a "…stable pattern of small incremental window movements resulting in extremely poor TCP performance." It occurs when a sending TCP is fooled into sending only tiny data segments, even though both sender and receiver have a large total buffer space available. SWS can only occur during the transmission of large amounts of data, and will disappear once the connection goes "quiescent".
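The receiver-side avoidance rule from RFC 1122 (4.2.3.3) is, roughly, to withhold window updates until the window can be opened by at least a full MSS or half the receive buffer, whichever is smaller. The sketch below is a simplification of that rule, not the full standardized algorithm:

```python
def advertised_window(free_space, currently_advertised, mss, buffer_size):
    """Only advertise a larger window when the increase is 'worth' a segment."""
    increase = free_space - currently_advertised
    if increase >= min(mss, buffer_size // 2):
        return free_space          # open the window in a meaningful step
    return currently_advertised    # otherwise hold steady, avoiding SWS
```

By refusing to dribble out tiny window increments, the receiver prevents the sender from ever being tricked into a stable pattern of tiny segments.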

3.3 IP Backgrounder:

The Internet Protocol puts the "IP" in TCP/IP. It is TCP/IP's Network layer protocol. IP comprises two basic functions: addressing and fragmentation. Just like TCP, IP encapsulates its data by prepending a header, as illustrated in Figure 2. It's easy to get confused as to just why we need an IP address in the first place. If your PC sits on an Ethernet or other LAN (Local Area Network), isn't its MAC address unique? Why not simply use this address instead of requiring yet another one?

Figure 2. IP Header Format

3.4 IP addressing

The answer is straightforward. Remember that the Internet is not simply one
big LAN; rather the Internet is defined as a network of networks, or perhaps better
stated, a network of LANs.

If everybody were a node on one great big homogeneous network, all running the same Link layer protocol (Ethernet, for example), there would be no need for a separate addressing scheme. The fact is, however, that many disparate networks exist, all operating incompatible Link layer protocols. Every host on a LAN is uniquely identified at the Data Link layer by its Link layer, or MAC (Media Access Control), address. Neighboring nodes on any given LAN communicate with each other based on this physical address. However, a node on an Ethernet cannot directly communicate with a node on a Token Ring network, and vice versa. Likewise, nodes speaking ATM, FDDI, etc. are all unintelligible to an Ethernet node. The purpose and design of the Internet Protocol is to allow nodes sitting on these dissimilar LANs to internetwork. It does so by abstracting away their conflicting Link layer protocols, providing a uniform communication interface for all hosts. This permits hosts residing on disparate networks to communicate, even though they may speak different Link layer languages.

This is where the IP address comes in. Whereas every node on a LAN is uniquely identified at the Data Link layer by its MAC address, each host on the Internet is uniquely identified by its IP address. IP addresses (i.e. IPv4) are 32-bit numbers comprising two subfields: a network identifier and a host identifier (also referred to as the netid and hostid). Figure 3 illustrates this hierarchical addressing scheme. The netid field of the address uniquely identifies a specific LAN, WAN, or other group of linked computers, such as one of the networks shown in Figure 2. The hostid field of the address uniquely identifies a host on the addressed network. (Actually, the hostid specifies a unique NIC, or Network Interface Card. An individual computer usually, but not necessarily, has only one such NIC.)

Version 4 of the Internet Protocol (IPv4) has been in use since 1981 and is slowly being supplanted by IPv6. Version 6 improves upon IPv4 in several areas, not the least of which is the extension of IP addresses to 128 bits.

Figure 3 – IPv4 Hierarchical Addressing Scheme

The IP Address:

The Internet Protocol moves data between hosts in the form of datagrams.
Each datagram is delivered to the address contained in the Destination Address (word
5) of the datagram's header. The Destination Address is a standard 32-bit IP address
that contains sufficient information to uniquely identify a network and a specific host
on that network.

An IP address contains a network part and a host part, but the format of these
parts is not the same in every IP address. The number of address bits used to identify
the network, and the number used to identify the host, vary according to the prefix
length of the address. There are two ways the prefix length is determined: by address
class or by a CIDR address mask. We begin with a discussion of traditional IP address
classes.

Address Classes

Originally, the IP address space was divided into a few fixed-length structures
called address classes. The three main address classes are class A, class B, and class
C. By examining the first few bits of an address, IP software can quickly determine
the class, and therefore the structure, of an address. IP follows these rules to
determine the address class:

If the first bit of an IP address is 0, it is the address of a class A network. The first bit of a class A address identifies the address class. The next 7 bits identify the network, and the last 24 bits identify the host. There are fewer than 128 class A network numbers, but each class A network can be composed of millions of hosts.

If the first 2 bits of the address are 1 0, it is a class B network address. The first 2 bits identify the class; the next 14 bits identify the network, and the last 16 bits identify the host. There are thousands of class B network numbers, and each class B network can contain thousands of hosts.

If the first 3 bits of the address are 1 1 0, it is a class C network address. In a class C address, the first 3 bits are class identifiers; the next 21 bits are the network address, and the last 8 bits identify the host. There are millions of class C network numbers, but each class C network is composed of no more than 254 hosts.

If the first 4 bits of the address are 1 1 1 0, it is a multicast address. These addresses are sometimes called class D addresses, but they don't really refer to specific networks. Multicast addresses are used to address groups of computers all at one time. Multicast addresses identify a group of computers that share a common application, such as a video conference, as opposed to a group of computers that share a common network.

If the first four bits of the address are 1 1 1 1, it is a special reserved address.
These addresses are sometimes called class E addresses, but they don't really refer to
specific networks. No numbers are currently assigned in this range.

Luckily, this is not as complicated as it sounds. IP addresses are usually written as four decimal numbers separated by dots (periods). [1] Each of the four numbers is in the range 0-255 (the decimal values possible for a single byte). Because the bits that identify the class are contiguous with the network bits of the address, we can lump them together and look at the address as composed of full bytes of network address and full bytes of host address. If the value of the first byte is:

[1] Addresses are occasionally written in other formats, e.g., as hexadecimal numbers.
However, the "dot" notation form is the most widely used. Whatever the notation, the
structure of the address is the same.

Less than 128, the address is class A; the first byte is the network number, and the
next three bytes are the host address.

From 128 to 191, the address is class B; the first two bytes identify the network, and
the last two bytes identify the host.

From 192 to 223, the address is class C; the first three bytes are the network address,
and the last byte is the host number.

From 224 to 239, the address is multicast. There is no network part. The entire
address identifies a specific multicast group.

Greater than 239, the address is reserved. We can ignore reserved addresses.
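These first-byte rules translate directly into code. A small sketch of the traditional classification:

```python
def address_class(address):
    """Classify a dotted-decimal IPv4 address under the original class rules."""
    first_byte = int(address.split(".")[0])
    if first_byte < 128:
        return "A"          # first bit 0: 1 byte of network, 3 of host
    if first_byte < 192:
        return "B"          # first bits 10: 2 bytes of network, 2 of host
    if first_byte < 224:
        return "C"          # first bits 110: 3 bytes of network, 1 of host
    if first_byte < 240:
        return "multicast"  # first bits 1110: class D group address
    return "reserved"       # first bits 1111: class E, unassigned
```

This quick-to-compute test is precisely why the classes were laid out on bit boundaries: early routers could determine the network part of an address from its first few bits alone.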

The IP address, which provides universal addressing across all of the networks
of the Internet, is one of the great strengths of the TCP/IP protocol suite. However, the
original class structure of the IP address has weaknesses. The TCP/IP designers did
not envision the enormous scale of today's network. When TCP/IP was being
designed, networking was limited to large organizations that could afford substantial
computer systems. The idea of a powerful UNIX system on every desktop did not
exist. At that time, a 32-bit address seemed so large that it was divided into classes to
reduce the processing load on routers, even though dividing the address into classes
sharply reduced the number of host addresses actually available for use. For example,
assigning a large network a single class B address, instead of six class C addresses,
reduced the load on the router because the router needed to keep only one route for
that entire organization. However, an organization that was given the class B address
probably did not have 64,000 computers, so most of the host addresses available to
the organization were never assigned.

The class-structured address design was critically strained by the rapid growth
of the Internet. At one point it appeared that all class B addresses might be rapidly
exhausted. [2] To prevent this, a new way of looking at IP addresses without a class
structure was developed.

[2] The source for this prediction is the draft of Supernetting: an Address Assignment and Aggregation Strategy, by V. Fuller, T. Li, J. Yu, and K. Varadhan, March 1992.

Classless IP Addresses

The rapid depletion of the class B addresses showed that three primary address
classes were not enough: class A was much too large and class C was much too small.
Even a class B address was too large for many networks but was used because it was
better than the alternatives.

The obvious solution to the class B address crisis was to force organizations to
use multiple class C addresses. There were millions of these addresses available and
they were in no immediate danger of depletion. As is often the case, the obvious
solution is not as simple as it may seem. Each class C address requires its own entry
within the routing table. Assigning thousands or millions of class C addresses would
cause the routing table to grow so rapidly that the routers would soon be
overwhelmed. The solution required a new way of assigning addresses and a new way
of looking at addresses.

Originally network addresses were assigned in more or less sequential order as
they were requested. This worked fine when the network was small and centralized.
However, it did not take network topology into account. Thus only random chance
would determine if the same intermediate routers would be used to reach network
195.4.12.0 and network 195.4.13.0, which makes it difficult to reduce the size of the
routing table. Addresses can only be aggregated if they are contiguous numbers and
are reachable through the same route. For example, if addresses are contiguous for
one service provider, a single route can be created for that aggregation because that
service provider will have a limited number of routes to the Internet. But if one
network address is in France and the next contiguous address is in Australia, creating
a consolidated route for these addresses does not work.

Today, large, contiguous blocks of addresses are assigned to large network service providers in a manner that better reflects the topology of the network. The service providers then allocate chunks of these address blocks to the organizations to which they provide network services. This alleviates the short-term shortage of class B addresses and, because the assignment of addresses reflects the topology of the network, it permits route aggregation. Under this new scheme, we know that network 195.4.12.0 and network 195.4.13.0 are reachable through the same intermediate routers. In fact, both of these addresses are in the range of the addresses assigned to Europe, 194.0.0.0 to 195.255.255.255. Assigning addresses that reflect the topology of the network enables route aggregation, but does not implement it. As long as network 195.4.12.0 and network 195.4.13.0 are interpreted as separate class C addresses, they will require separate entries in the routing table. A new, flexible way of defining addresses is needed.

Evaluating addresses according to the class rules discussed above limits the length of network numbers to 8, 16, or 24 bits (1, 2, or 3 bytes). The IP address, however, is not really byte-oriented. It is 32 contiguous bits. A more flexible way to interpret the network and host portions of an address is with a bit mask. An address bit mask works in this way: if a bit is on in the mask, the equivalent bit in the address is interpreted as a network bit; if a bit in the mask is off, the bit belongs to the host part of the address. For example, if address 195.4.12.0 is interpreted as a class C
address, the first 24 bits are the network number and the last 8 bits are the host
address. The network mask that represents this is 255.255.255.0, 24 bits on and 8 bits
off. The bit mask that is derived from the traditional class structure is called the
default mask or the natural mask. However, with bit masks we are no longer limited
by the address class structure. A mask of 255.255.0.0 can be applied to network
address 195.4.0.0. This mask includes all addresses from 195.4.0.0 to 195.4.255.255
in a single network number. In effect, it creates a network number as large as a class
B network in the class C address space. Using bit masks to create networks larger than
the natural mask is called supernetting, and the use of a mask instead of the address
class to determine the destination network is called Classless Inter-Domain Routing
(CIDR). [3]
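The bit-mask interpretation is easy to demonstrate: ANDing the address with the mask yields the network number. Under the supernet mask 255.255.0.0, both 195.4.12.0 and 195.4.13.0 fall into the single network 195.4.0.0:

```python
def apply_mask(address, mask):
    """AND a dotted-decimal IPv4 address with a dotted-decimal bit mask."""
    addr_octets = (int(o) for o in address.split("."))
    mask_octets = (int(o) for o in mask.split("."))
    return ".".join(str(a & m) for a, m in zip(addr_octets, mask_octets))

# With the natural class C mask, the two addresses are separate networks;
# with the supernet mask 255.255.0.0, both collapse into one routing entry.
```

A router configured with the supernet mask therefore needs only the single entry 195.4.0.0 to reach every host in either block.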

3.5 IP fragmentation

Don’t confuse an IP fragment with a TCP segment; an IP fragment is a piece of a TCP segment whose size precludes it from being transmitted over a network in one piece. The Internet Protocol was designed to be independent of both the underlying Data Link protocol and the overlying Transport protocol. This flexibility is critical because of the large number of incompatible Transport and Link layer protocols. However, this independence carries with it certain difficulties, one of which is how to transmit a datagram whose size exceeds the underlying Link layer’s frame size, also referred to as the Maximum Transmission Unit (MTU). To accommodate this eventuality, an IP should be capable of fragmenting a segment (received from the overlying transport layer) whose size exceeds this MTU into multiple datagrams whose sizes allow them to fit into the frame below… simple. Not really; in fact, most IP implementations avoid the nastiness of fragmentation by determining the underlying frame size limitation and reporting it to the transport layer ahead of time, by means of what is termed a path MTU discovery mechanism. In fact, this is the recommended procedure. However, ALL IP implementations are required to be capable of accepting and reassembling incoming fragmented datagrams. IP manages this process by assigning an identification number and fragment offset to every datagram.
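Fragmentation can be sketched as slicing the payload at a multiple of 8 bytes (IP expresses fragment offsets in 8-byte units) and tagging each piece with the datagram's identification number and a more-fragments flag. This simplified model ignores the per-fragment IP header a real implementation must copy:

```python
def fragment(payload, mtu, ident):
    """Split a payload into IP-style fragments that each fit within mtu bytes."""
    chunk = (mtu // 8) * 8            # all but the last fragment: multiple of 8
    fragments = []
    for start in range(0, len(payload), chunk):
        fragments.append({
            "id": ident,                        # same for every fragment
            "offset": start // 8,               # position, in 8-byte units
            "more": start + chunk < len(payload),
            "data": payload[start:start + chunk],
        })
    return fragments
```

Reassembly is the inverse: collect fragments sharing an identification number, order them by offset, and concatenate until the fragment with the more-fragments flag cleared completes the datagram.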

3.6 Ethernet/TCP/IP Applications
Every receive event is triggered by an interrupt from the Ethernet controller.
This interrupt has the highest priority, so all other activities are stopped immediately.
Responses to the packet received are sent within this interrupt.
Data from the Transmission Control Protocol is sent periodically, driven by a timer interrupt. Every time the timer interrupt occurs, a counter is incremented; this counter controls when a packet has to be retransmitted. Applications run whenever no packet transmission is in progress. This means that when there are several applications, each application has to be activated in a round-robin manner. Time and memory sharing have to be taken care of by the programmer.

Limitations
Since the software for this web server has been optimized with regard to both size and speed, there are some limitations to the web server. In the lower layer protocols (Ethernet - IP), only the functionality required to respond to normal headers is implemented. TCP is simplified, but almost fully implemented.

TCP provides reliability by doing the following:

• The application data is broken into what TCP considers the best-sized chunks to send. This is totally different from UDP, where each write by the application generates a UDP datagram of that size.
• When TCP sends a segment, it maintains a timer, waiting for the other end to acknowledge reception of the segment. When TCP receives data from the other end of the connection, it sends an acknowledgment.
• TCP maintains a checksum on its header and data. This is an end-to-end checksum whose purpose is to detect any modification of the data in transit. If a segment arrives with an invalid checksum, TCP discards it and does not acknowledge receiving it.
• TCP also provides flow control. Each end of a TCP connection has a finite amount of buffer space.
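The end-to-end checksum mentioned above is the standard Internet checksum: the one's-complement of the one's-complement sum of the data taken as 16-bit words. A compact sketch:

```python
def internet_checksum(data):
    """RFC 1071-style checksum over a byte string."""
    if len(data) % 2:                      # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return (~total) & 0xFFFF
```

A receiver verifies by summing the data together with the transmitted checksum: the result is zero when nothing was modified in transit.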

TCP/IP Capabilities

• Socket-level TCP (Transmission Control Protocol)--provides reliable full-duplex data transmission.
• Socket-level UDP (User Datagram Protocol)--simple protocol that exchanges datagrams without acknowledgements or guaranteed delivery.
• ICMP (Internet Control Message Protocol)--network-layer Internet protocol
that reports errors and provides other information relevant to IP packet
processing.
• DNS (Domain Name System) client--a distributed Internet directory service
that is used mostly to translate between domain names and IP addresses, and
to control Internet e-mail delivery.
• DHCP (Dynamic Host Configuration Protocol) client--provides a framework
for passing configuration information to hosts on a TCP/IP network. DHCP is
based on the Bootstrap Protocol (BOOTP), adding the capability of automatic
allocation of reusable network addresses and additional configuration options.
• HTTP (Hypertext Transfer Protocol) server--the protocol used by Web
browsers and Web servers to transfer files, such as text and graphic files.
Includes facilities for Server Side Includes (SSI) and CGI routines.
• SMTP (Simple Mail Transfer Protocol) client--Internet protocol providing e-
mail services.
• FTP (File Transfer Protocol) server and client--application protocol, part of the TCP/IP protocol stack, used for transferring files between network nodes. A server with password support for file transfers is available on the Rabbit 2000.
• TFTP (Trivial File Transfer Protocol) server and client--simplified version of
FTP that allows files to be transferred from one computer to another over a
network.
• POP3 (Post Office Protocol) client.
• Serial-to-Telnet gateway.

4. APPLICATION LAYER PROTOCOLS

4.1 Introduction:

In the previous chapters, we've covered the basics of networking as well as a brief introduction to embedded networking. In this chapter, we'll look at a variety of architectures and design characteristics of application layer protocols. The purpose of this is to understand the variety of ways that application layer protocols are designed, developed and used. As we'll soon discover, although the purposes for the protocols may differ, they share more similarities than differences. Finally, we'll look at some of the design characteristics of application layer protocols and how to decide which is right for a custom protocol design.

4.2 What is an application layer?

An application layer protocol, simply defined, is a protocol that exists at the application layer. From an API perspective, application layer protocols are built on the BSD Sockets API. Protocols at this layer are also commonly referred to as "applications," although in many cases this is not precisely true. For example, in an embedded system we may utilize an SMTP client to send e-mail from our device to an external host. If we visualize the protocol layering, our application then sits on SMTP, which in turn sits on the TCP/IP stack. The SMTP client protocol provides us with a very simple API to communicate data through SMTP to an end user. Although these protocols may be integrated together in a single application, we honor the layering to aid in our understanding of the component relationships (see Figure for an example).

FIGURE Application using application layer protocols.

While the TCP/IP suite provides the capability to move data between hosts
on the Internet, the application layer protocols provide services that are
meaningful to higher level applications (transporting e-mail, finding resources on
a network, synchronizing time, etc.).

4.3 Application layer protocol architectures:

Application layer protocols can be constructed in a variety of ways and use a number of different communication topologies. For example, HTTP is a traditional client-server protocol in which a browser (client) connects to a Web server and retrieves content. HTTP is also a stream-oriented protocol built on TCP. The server may provide content to a variety of clients distributed throughout the Internet.

The Simple Network Management Protocol (SNMP) is an interesting reverse of the HTTP architecture. For example, the SNMP agent (server) provides the data that is collected by the client (network manager system), but the client in this case connects to many servers for monitoring. SNMP is also built on UDP, since data can simply be requested again if a request or response packet is lost. In these two examples, a clear relationship exists between a client and a server (although their roles are reversed).

A final example is the Service Location Protocol (SLP). In this protocol, a User Agent may request a particular service, but instead of communicating directly with a server, the agent multicasts its request on the network to anyone who is listening. If a Service Agent hears the request and provides the service being requested, a reply is generated directly to the User Agent. This is a peer-to-peer architecture in which no central control exists; control is instead distributed throughout the network.

In the following sections, the organization of application layer protocols, including the varying means by which they communicate, will be discussed.

Standard Client/Server

The client/server architecture is by far the most common, as the pattern fits the requirements of most application layer protocols. In this model, a server exists that acts as the repository for some type of data. Clients connect to the server to request and then receive data for processing or presentation. From our prior examples of HTTP and SNMP, the topologies differ but the same relationships exist (see Figure).

FIGURE: Client/Server architecture in two different topologies.

In the HTTP example, there exists a one-to-many relationship to the server. In contrast, in the SNMP example there is a one-to-many relationship to the client. Each example represents the standard client/server model but serves a different purpose.

Another point to note about client/server is that it is a request/response architecture. In both cases, a request is generated from the client and the server responds accordingly. This is also known as the "pull" model, as data is conceptually pulled from the server to the client. The opposite approach is to have the server send data to the client when it's necessary to do so. This is called the "push" model, as data is pushed to the client without a preceding request. The push model can be more efficient, since data is transmitted only when necessary (such as when it changes) and the client need not periodically poll the server.

An example of an application layer protocol that provides both models is SNMP. Recall from Figure 3.2 that the client can request data from the servers (SNMP agents). SNMP agents also provide the capability to transmit what are known as traps. A trap is a message that is sent asynchronously from the server to the client upon detection of an anomalous condition that may need immediate attention. This type of communication is known as asynchronous because it is a "reply" with no associated request. Synchronous communication implies a request for every reply (see Figure 3).

4.4 Peer-to-Peer Architecture:

In a peer-to-peer architecture, the roles of the entities that communicate
are less defined. For example, a host can act as both a client and a server
depending upon where communication is initiated.

FIG: Synchronous vs. asynchronous communication. (The figure contrasts the
synchronous "pull" model, in which the client requests data from the server, with
the asynchronous "push" model, in which the server sends data to the client
without a preceding request.)

In most peer-to-peer applications, a central server exists to identify who is
available for communication (the so-called metadata in this model). A client first
connects to the server to identify who is in the group and therefore open for
communication. More architecturally sound systems spread the metadata around
the network to increase reliability. The Napster architecture used the single
meta-server approach, while the newer Gnutella file-sharing system uses a
distributed model for metadata.

Another model exists which is used primarily on local area networks and
makes use of broadcast or multicast communication. The Service Location
Protocol can operate with a Directory Agent, which will provide the metadata as
described before (who is in the group). In the absence of a Directory Agent, the
clients and servers can use multicast communication to target a specific group of
hosts. Another example is the Dynamic Host Configuration Protocol. DHCP uses
broadcast communication to identify the host on the local network that can
provide configuration data.

4.5 Application Layer:

At the top of the TCP/IP protocol architecture is the Application Layer. This
layer includes all processes that use the Transport Layer protocols to deliver data.
There are many application protocols. Most provide user services, and new services
are always being added to this layer.

The most widely known and implemented applications protocols are:


• telnet: The Network Terminal Protocol, which provides remote login over the network.
• FTP: The File Transfer Protocol, which is used for interactive file transfer.
• SMTP: The Simple Mail Transfer Protocol, which delivers electronic mail.
• HTTP: The Hypertext Transfer Protocol, which delivers Web pages over the network.

While HTTP, FTP, SMTP, and telnet are the most widely implemented
TCP/IP applications, you will work with many others as both a user and a system
administrator. Some other commonly used TCP/IP applications are:

• Domain Name Service (DNS): Also called name service, this application maps IP
addresses to the names assigned to network devices.
• Open Shortest Path First (OSPF): Routing is central to the way TCP/IP works.
OSPF is used by network devices to exchange routing information.
• Network File System (NFS): This protocol allows files to be shared by various
hosts on the network.
Some protocols, such as telnet and FTP, can only be used if the user has some
knowledge of the network. Other protocols, like OSPF, run without the user even
knowing that they exist. As a system administrator, you should be aware of all these
applications and all the protocols in the other TCP/IP layers, and you are responsible
for configuring them.

4.6 Embedded HTTP server

The use of Web browsers has become the standard method for communicating
with and managing remote embedded devices. The Web browser is a common
appliance on networked desktops and provides a rich set of functionality for
communication and presentation of data from remote devices.

These days it's commonplace to find HTTP servers on a variety of small embedded
devices. Like e-mail, the Web client is a ubiquitous tool and is used by all Internet users. The
HTTP server provides the means to export information from the device for remote monitoring,
as well as to permit modification of device parameters via Common Gateway Interface (CGI)
forms. With a little work, the HTTP server can provide the means to display dynamic data
(more on this topic later in this chapter).

The greatest attribute of the HTTP server is its peer, the HTTP client. Using a very
simple tag language called HTML (Hypertext Markup Language), a rich presentation can be
realized. Serving simple files to the client results in a straightforward presentation of
possibly complex data.

Why HTTP Server in Embedded Systems?

The HTTP server is a perfect vehicle for the presentation of data from an embedded
system. The HTTP server can be built simply and can reside in very small code space. HTTP
is very simple and therefore supports simplicity of implementation. Since the Web
browser (HTTP client) provides the rendering of the HTML page to the user, the server need
only serve content through a simple socket server to provide the HTTP server functionality.

HTTP servers can also be disadvantageous in embedded systems design. The
traditional HTTP server requires a file system in which the content is stored and then
served. A file system, with an API, can be expensive in terms of non-volatile (or
volatile) memory usage. For example, a flash-based file system can increase the parts
count on a device and therefore its cost. The design that is presented here constructs a
simple static file system that obviates this need through an internal "application" file
system.
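One way to realize such an internal "application" file system is to compile the content
directly into the firmware as a table of name/data pairs that the HTTP server searches
when a request arrives. The sketch below is only illustrative; the structure and names
are assumptions, not the actual implementation used in this design.

```c
#include <string.h>
#include <stddef.h>

/* One entry of the static "application" file system: the resource name
 * and its content are compiled directly into the firmware image. */
struct app_file {
    const char *name;     /* resource name as requested, e.g. "/index.html" */
    const char *data;     /* file content (HTML, text, ...) */
    size_t      length;   /* content length in octets */
};

/* Hypothetical content table; a real device would list its pages here. */
static const char index_html[] =
    "<html><body><h1>Device status</h1></body></html>";

static const struct app_file file_table[] = {
    { "/index.html", index_html, sizeof(index_html) - 1 },
};

#define NUM_FILES (sizeof(file_table) / sizeof(file_table[0]))

/* Look a resource up by name; returns NULL when the file is absent,
 * which the server would translate into a 404 response. */
const struct app_file *app_fs_lookup(const char *name)
{
    size_t i;
    for (i = 0; i < NUM_FILES; i++) {
        if (strcmp(file_table[i].name, name) == 0)
            return &file_table[i];
    }
    return NULL;
}
```

Because the table lives in flash alongside the code, no separate file system chip or
file system API is needed; adding a page is just adding a table entry.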

4.7 Protocol Discussion:

HTTP, as described in RFC 2068, is a very simple ASCII-based protocol.
HTTP uses a standard synchronous request/response design over the TCP/IP
protocol, identical to the classical client/server architecture (see Figure). When a
client makes a request to an HTTP server, it sends an HTTP request message.
The HTTP request message includes the client request as well as information
about the client's capabilities. A single blank line at the end of the request
terminates the request message and signals the HTTP server to go to work.

Fig.: HTTP protocol architecture. The HTTP client (Web browser) sends a request
to the HTTP server (Web server), which returns a response. The request message
consists of a request line, general/request/entity headers, a CRLF, and a message
body; the response message consists of a status line, general/response/entity
headers, a CRLF, and a message body.

The HTTP request message is made up of a number of fields, with the
request line setting the stage for the activities that follow. The first element is
the method token (in this case, GET), which indicates the method to be performed
on the resource. The resource follows; for the GET request, /index.html
indicates the file to be returned. Finally, the HTTP version string is provided to
indicate which version of HTTP the requester understands.
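For example, a request line such as "GET /index.html HTTP/1.0" splits into exactly
these three fields. A minimal sketch of such a parse (an illustration, not the code of
any particular server) might look like this:

```c
#include <stdio.h>
#include <string.h>

/* Split an HTTP request line into its method, resource, and version
 * fields. The width specifiers bound each field so the caller's
 * buffers (16, 64, and 16 bytes here) cannot overflow.
 * Returns 1 on success, 0 if the line does not contain three fields. */
int parse_request_line(const char *line,
                       char method[16], char resource[64], char version[16])
{
    return sscanf(line, "%15s %63s %15s", method, resource, version) == 3;
}
```

Given the line above, the parse yields method "GET", resource "/index.html", and
version "HTTP/1.0".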

The optional headers indicate protocol requests by the client as well as
information about the client that the server may need to understand before
providing a response. Let's look at the individual headers and figure out what they
mean.

Connection: Keep-Alive

While a request can be satisfied by a single response, what happens when the
client requests another resource that is managed by this server? What commonly
happens is that a new TCP socket is created to handle the request. This means that
the socket must be brought up, which involves a number of packet transfers
between the client and server to connect the socket. This results in some latency
between the request and the response. The Keep-Alive header specifies that the
client wants the server to keep the current socket open for future requests (in
essence, pipelining requests to the server). This results in better bandwidth
utilization since the socket creation process is not required for each request.
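A server decides whether to hold the socket open by inspecting the value of the
Connection header. The helper below is a simplified, case-insensitive check and is an
assumption for illustration; a real server would also consider the default behavior of
the negotiated HTTP version.

```c
#include <ctype.h>

/* Case-insensitive comparison of two header values. */
static int value_equals(const char *a, const char *b)
{
    while (*a && *b) {
        if (tolower((unsigned char)*a) != tolower((unsigned char)*b))
            return 0;
        a++; b++;
    }
    return *a == *b;
}

/* Return 1 when the Connection header value asks for a persistent socket. */
int wants_keep_alive(const char *connection_value)
{
    return value_equals(connection_value, "Keep-Alive");
}
```

When this returns 1, the server loops back to read the next request on the same
socket instead of closing it after the response.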

User-Agent: Mozilla/4.6.1 [en] (X11; U; Linux 2.2.12-20 i686)

The User-Agent request header field contains information about the User
Agent originating the request. Its most common use is in the statistical collection of
data to identify the most common browsers, which browsers present protocol
violations, etc. Unfortunately, the field is also used by malicious companies to
mangle or refuse to serve content to client browsers that come from different
developers.

Host: 192.168.1.1:80

The Host request header is the host URL as specified by the original
request. Combining this with our resource results in 192.168.1.1:80/index.html,
which is (minus the http:// protocol header) the original URL of the request.

The Host header can be used by hosts that can be referenced by more than one
fully qualified domain name (such as www.a.com and www.b.com). By looking at
the Host header, the server can identify which host was originally requested and
serve content appropriately.

Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, image/png

The Accept request header specifies which media types are acceptable in
the response. In this case, the client browser informs the server, among other
things, that it can render images of type GIF.

Accept-Encoding: gzip

The Accept-Encoding request header indicates which encoding methods
are acceptable. In this case, the client indicates that gzip is the only acceptable
encoding. If the Accept-Encoding request header were followed by an empty
field, this would indicate that no encoding method was acceptable.

Accept-Language: en

Accept-Language, like the prior Accept request headers, is used to define the
acceptable (natural, in this case) languages. In the case above, en is used to denote
that the client browser is interested only in English. A comma-delimited list could
be provided to identify language preferences.

Accept-Charset: iso-8859-1, *, utf-8

Finally, the Accept-Charset request header indicates acceptable character
sets to the server.

The HTTP request message contains quite a bit of information to inform the
server of its capabilities and preferences. The HTTP response follows a similar
structure. All HTTP responses follow the basic structure shown in the figure. The
status line identifies the status of the HTTP request; in this case we see a 200 OK,
which indicates that the request is satisfied by the message body that follows.

The response headers then follow and provide some generic and some
detailed information about the response. The Date: and Server: headers are
self-explanatory.

The ETag: header is an entity tag that can be used (as one service) to identify
whether a cached version of a page on a client is the same as that on the server. If
the tags match, the page is deemed unchanged and the cached version is used.
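In practice, the client returns a previously received entity tag in an If-None-Match
request header; when it matches the server's current tag, the server can answer 304
Not Modified and omit the body. A minimal sketch of that decision follows (the
function name is an assumption; status codes are per RFC 2068):

```c
#include <string.h>

/* Decide the status code for a conditional GET: 304 when the client's
 * cached entity tag matches the server's current one, else 200. */
int conditional_get_status(const char *if_none_match, const char *current_etag)
{
    if (if_none_match != NULL && strcmp(if_none_match, current_etag) == 0)
        return 304;   /* Not Modified: client may use its cached copy */
    return 200;       /* OK: a full response body follows */
}
```

Skipping the body on a match saves bandwidth, which matters on a small embedded
device.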

The Content-Length: header simply represents the number of octets in the
message body. This is useful in systems in which memory is dynamically
allocated for the body of the message.

The Connection: close header indicates that the server, upon emitting the
response to the client, will close this socket and that no further requests can be
made through it. Recall the discussion of Keep-Alive in the request section.

The final header, Content-Type:, indicates the type of data that follows in
the message body. The text/html type identifies our response as text with HTML
encoding. Finally, the message body is emitted, with a single blank line
separating it from the response headers.
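Putting the response pieces together, a minimal 200 OK reply consists of a status
line, the headers discussed above, a blank line, and the body. The sketch below
computes Content-Length from the body; it is an illustration under those
assumptions, not the server code used in this project:

```c
#include <stdio.h>
#include <string.h>

/* Build a minimal HTTP response into buf; returns the number of
 * characters written (excluding the terminating NUL). */
int build_response(char *buf, size_t bufsize, const char *body)
{
    return snprintf(buf, bufsize,
                    "HTTP/1.0 200 OK\r\n"
                    "Content-Type: text/html\r\n"
                    "Content-Length: %u\r\n"
                    "Connection: close\r\n"
                    "\r\n"              /* blank line ends the headers */
                    "%s",
                    (unsigned)strlen(body), body);
}
```

The buffer-size argument makes the formatting safe even when the body is larger
than expected, a sensible precaution in RAM-constrained firmware.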

We've covered only a few of the many possible headers and variations that can
occur in HTTP, but this sampling should indicate the complexity and flexibility of
HTTP. Using a simple text-based structure for both requests and responses, data
transfer is accommodated as well as feature negotiation.

Advanced Uses

While the use of HTTP for retrieval of multimedia files from Web servers is the
most common application, HTTP is finding new uses in modern application-layer
protocols. HTTP has evolved into an application transport layer protocol for other
application-layer protocols, such as SOAP (Simple Object Access Protocol) and XML-RPC
(Extensible Markup Language Remote Procedure Call). In these applications, the
request includes the object of interest along with any parameters, and the response is the
object or the result of computation from that remote object. As an extensible transport
protocol, HTTP is finding new uses outside of the hypertext and content distribution
domain.

Origin and Evolution

The Hypertext Transfer Protocol was derived from an earlier idea by Ted Nelson
in his book Literary Machines (in which the term "hypertext" was coined). A hypertext link
provided the ability to tie together ideas or related pieces of knowledge. Therefore, ideas
could be linked, allowing a user to identify relations and connections to other pieces of
information. Nelson developed this idea in a project called "Xanadu" through the 1960s
and early 1970s, but due to a lack of product release, his project was finally dropped.

Using the ideas of Nelson, Tim Berners-Lee of the European Organization for Nuclear
Research (CERN) conceived of hypertext links that spanned the network, allowing
computers to be linked not only by networks but also by information. In 1989, Berners-Lee
submitted a proposal that would allow researchers at CERN to gain access to large
amounts of stored information. This information included reports, documentation, online
databases, and other information that was useful to a geographically dispersed group that
desired to collaborate on a particular subject.

Although useful, the early browsers were limited. In 1994, Marc Andreessen built the
first widely used browser, called Mosaic, at the National Center for Supercomputing
Applications (at the University of Illinois). This work led to the first browser company,
Netscape.

Since that time, many other browsers have been introduced, including Internet
Explorer from Microsoft, Communicator from Netscape, Opera from Opera Software,
and the open source Galeon (available at sourceforge.net).

5. HARDWARE DEVELOPMENT

5.1 Introduction & Overview

The RCM2200 is an advanced module that incorporates the powerful Rabbit
2000® microprocessor, flash memory, static RAM, digital I/O ports, and a 10Base-T
Ethernet port, all on a PCB just half the size of a business card.

5.2 RCM2200 Description


The RCM2200 is a small-footprint module designed for use on a motherboard
that supplies power and interface to real-world I/O devices. Its two 26-pin connection
headers provide 26 parallel user I/O lines, shared with three serial ports, along with
data, address and control lines. A fourth serial port and three additional I/O lines are
available on the programming header. A fully-enabled slave port permits glueless
master-slave interface with another Rabbit based system. The slave port may also be
used with non-Rabbit systems, although additional logic may be required. The
RCM2200 is equipped with a 10Base-T Ethernet port, 256K flash memory, and 128K
static RAM. There are three production models in the RCM2200 series. If the
standard models do not serve your needs, other variations can be specified and
ordered in production quantities. Contact your Z-World or Rabbit Semiconductor
sales representative for details. Table 1 provides a summary of all the models in the
RCM2200 family.

In addition, a variant of the RCM2200 is available. The RCM2300 omits the
Ethernet connectivity but offers a much smaller footprint, one-half the size of the
RCM2200. Another Rabbit Core module can be used to reprogram an RCM2200.
This reprogramming (and debugging) can be done via the Internet using Z-World’s
Rabbit Link network programming gateway or with Ethernet-equipped Rabbit Core
RCM2100 and RCM2200 models using Dynamic C’s Device Mate features. The
RCM2200 is particularly suitable for use as an inexpensive hardware platform for
Dynamic C’s Device Mate features.

The RCM2200 RabbitCore module is designed to be the heart of embedded
control systems. The RCM2200 features an integrated Ethernet port and provides
for LAN and Internet-enabled systems to be built as easily as serial-communication
systems.

The RCM2200 has a Rabbit 2000 microprocessor operating at 22.1 MHz,
static RAM, flash memory, two clocks (main oscillator and timekeeping), and the
circuitry necessary for reset and management of battery backup of the Rabbit
2000's internal real-time clock and the static RAM. Two 26-pin headers bring out
the Rabbit 2000 I/O bus lines, address lines, data lines, parallel ports, and serial
ports.

The RCM2200 receives its +5 V power from the user board on which it is
mounted. The RabbitCore RCM2200 can interface with all kinds of
CMOS-compatible digital devices through the user board.

5.2.1 RCM2200 Features

• Small size: 1.60" × 2.30" × 0.86" (41 mm × 58 mm × 22 mm)
• Microprocessor: Rabbit 2000 running at 22.1 MHz
• 26 parallel I/O lines: 16 configurable for input or output, 7
fixed inputs, 3 fixed outputs
• 8 data lines (D0-D7)
• 4 address lines (A0-A3)
• Memory I/O read, write
• External reset input
• Five 8-bit timers (cascadable in pairs) and two 10-bit timers
• 256K-512K flash memory, 128K-512K SRAM
• Real-time clock
• Watchdog supervisor
• Provision for customer-supplied backup battery via
connections on header J5
• 10Base-T RJ-45 Ethernet port
• Raw Ethernet and two associated LED control signals
available on 26-pin header
• Three CMOS-compatible serial ports: maximum asynchronous
baud rate of 691,200 bps, maximum synchronous baud rate
of 5,529,600 bps. One port is configurable as a clocked port.
• Six additional I/O lines are located on the programming port and can be
used as I/O lines when the programming port is not being used for
programming or in-circuit debugging: one synchronous serial port can
also be used as two general CMOS inputs and one general CMOS output,
and there are two additional inputs and one additional output.

5.2.2 Advantages of the RCM2200

• Fast time to market using a fully engineered, "ready to run"
microprocessor core.
• Competitive pricing when compared with the alternative of
purchasing and assembling individual components.
• Easy C-language program development and debugging,
including rapid production loading of programs.
• Generous memory size allows large programs with tens of
thousands of lines of code, and substantial data storage.
• Integrated Ethernet port for network connectivity, royalty-free TCP/IP
software.

5.3. HARDWARE SETUP

This chapter describes the RCM2200 hardware in more detail, and
explains how to set up and use the accompanying Prototyping Board.

NOTE: This chapter (and this manual) assume that you have the RCM2200
Development Kit. If you purchased an RCM2200 module by itself, you will have to
adapt the information in this chapter and elsewhere to your test and development
setup.

5.3.1 Development Kit Contents

The RCM2200 Development Kit contains the following items:

• RCM2200 module with Ethernet port, 256K flash memory, and 128K SRAM.
• RCM2200 Prototyping Board.
• Wall transformer power supply, 12 V DC, 500 mA. (Included only with
Development Kits sold for the North American market. Overseas users will have to
substitute a power supply compatible with local mains power.)
• 10-pin header to DE9 programming cable with integrated level-matching circuitry.
• Dynamic C CD-ROM, with complete product documentation on disk.
• This Getting Started manual.
• Rabbit 2000 Processor Easy Reference poster.
• Registration card.

5.3.2 Prototyping Board

The Prototyping Board included in the Development Kit makes it easy to
connect an RCM2200 module to a power supply and a PC workstation for
development. It also provides some basic I/O peripherals (switches and LEDs), as
well as a prototyping area for more advanced hardware development.
For the most basic level of evaluation and development, the Prototyping Board can be
used without modification. As you progress to more sophisticated experimentation
and hardware development, modifications and additions can be made to the board
without modifying or damaging the RCM2200 module itself.

The Prototyping Board is shown below in Figure 2, with its main features
identified.

Prototyping Board Features

• Power Connection—A 3-pin header is provided for connection to the power
supply. Note that it is symmetrical, with both outer pins connected to ground and
the center pin connected to the raw V+ input. The cable of the wall transformer
provided with the North American version of the development kit ends in a
connector that is correctly connected in either orientation.
Users providing their own power supply should ensure that it delivers 8–24 V
DC at not less than 500 mA. The voltage regulator will get warm while in use. (Lower
supply voltages will reduce thermal dissipation from the device.)

• Regulated Power Supply—The raw DC voltage provided at the POWER IN
jack is routed to a 5 V linear voltage regulator, which provides stable power to the
RCM2200 module and the Prototyping Board. A Schottky diode protects the power
supply against damage from reversed raw power connections.
• Power LED—The power LED lights whenever power is connected to the
Prototyping Board.

• Reset Switch—A momentary-contact, normally open switch is connected
directly to the RCM2200's /RES pin. Pressing the switch forces a hardware reset of
the system.

• I/O Switches and LEDs—Two momentary-contact, normally open switches are
connected to the PB2 and PB3 pins of the master RCM2200 module and may be read
as inputs by sample applications.
Two LEDs are connected to the PE1 and PE7 pins of the master module, and
may be driven as output indicators by sample applications. The LEDs and switches
are connected through JP1, which has traces shorting adjacent pads together. These
traces may be cut to disconnect the LEDs, and an 8-pin header soldered into JP1 to
permit their selective reconnection with jumpers. See Figure 3 for details.

• Expansion Areas—The Prototyping Board is provided with several unpopulated
areas for expansion of I/O and interfacing capabilities. See the next section for details.

• Prototyping Area—A generous prototyping area has been provided for the
installation of through-hole components. Vcc (5 V DC) and Ground buses run around
the edge of this area. An area for surface-mount devices is provided to the right of the
through-hole area. (Note that there are SMT device pads on both top and bottom of
the Prototyping Board.) Each SMT pad is connected to a hole designed to accept a 30
AWG solid wire.

• Slave Module Connectors—A second set of connectors is pre-wired to permit
installation of a second, slave RCM2200 or RCM2300 module. This capability is
reserved for future use, although the schematics in this manual contain all of the
details an experienced developer will need to implement a master-slave system.
Prototyping Board Expansion

The Prototyping Board comes with several unpopulated areas, which may be
filled with components to suit the user’s development needs. After you have
experimented with the sample programs in Section 3.4, you may wish to expand the
board’s capabilities for further experimentation and development. Refer to the
Prototyping Board schematic (090–0122) for details as necessary.

• Module Extension Headers—The complete pin sets of both the Master and
Slave RabbitCore modules are duplicated at these two sets of headers. Developers can
solder wires directly into the appropriate holes, or, for more flexible development, 26-
pin header strips can be soldered into place. See Figure 1 for the header pinouts.

• RS-232—Two 2-wire or one 4-wire RS-232 serial port can be added to the
Prototyping Board by installing a driver IC and four capacitors. The Maxim
MAX232CPE driver chip or a similar device is recommended for U2. Refer to the
Prototyping Board schematic for additional details.
A 10-pin 0.1-inch spacing header strip can be installed at J6 to permit
connection of a ribbon cable leading to a standard DE-9 serial connector.
All RS-232 port components mount to the underside of the Prototyping Board,
between the Master module connectors.
NOTE: The RS-232 chip, capacitors and header strip are available from electronics
distributors such as Digi-Key.

• Prototyping Board Component Header—Four I/O pins from the module are
hardwired to the Prototyping Board LEDs and switches.

5.4 Development Hardware Connections:


There are four steps to connecting the Prototyping Board for use with
Dynamic C and the sample programs:

1. Attach the RCM2200 module to the Prototyping Board.
2. Connect the programming cable between the RCM2200 module and the
workstation PC.
3. Connect the module’s Ethernet port to a PC’s Ethernet port, or to an Ethernet
network.
4. Connect the power supply to the Prototyping Board.

5.4.1 Attach Module to Prototyping Board:

Turn the RCM2200 module so that the Ethernet connector end of the module
extends off the Prototyping Board, as shown in Figure 4 below. Align the module
headers J4 and J5 with sockets J1 and J2 on the Prototyping Board.

Figure 4. Installing the RCM2200 on the Prototyping Board

Connect Ethernet Network Cable
Programming and development can be done with the RCM2200 without
connecting the Ethernet port to a network. However, if you will be running the sample
programs that use the Ethernet capability or will be doing Ethernet-enabled
development, you should connect the RCM2200 module’s Ethernet port at this time.
There are four options for connecting the RCM2200 to a network for development
and runtime purposes. The first two options permit total freedom of action in selecting
network addresses and use of the “network,” as no action can interfere with other
users. We recommend one of these options for initial development.

• No LAN — The simplest alternative for desktop development. Connect the
RCM2200’s Ethernet port directly to the PC’s network interface card using an RJ-45
crossover cable. A crossover cable is a special cable that flips some connections
between the two connectors and permits direct connection of two client systems. A
standard RJ-45 network cable will not work for this purpose.

• Micro-LAN — Another simple alternative for desktop development. Use a small
Ethernet 10Base-T hub and connect both the PC’s network interface card and the
RCM2200’s Ethernet port to it, using standard network cables.
The following options require more care in address selection and testing actions, as
conflicts with other users, servers and systems can occur:

• LAN — Connect the RCM2200’s Ethernet port to an existing LAN, preferably one
to which the development PC is already connected. You will need to obtain IP
addressing information from your network administrator.

• WAN — The RCM2200 is capable of direct connection to the Internet and other
Wide Area Networks, but exceptional care should be used with IP address settings
and all network-related programming and development. We recommend that
development and debugging be done on a local network before connecting a
RabbitCore system to the Internet.

TIP: Checking and debugging the initial setup on a micro-LAN is recommended
before connecting the system to a LAN or WAN.

5.4.2 Connect Power

When all other connections have been made, you can connect
power to the RCM2200 Prototyping Board.

Hook the connector from the wall transformer to header J5 on the
Prototyping Board as shown in Figure 6 below. The connector may be attached
either way as long as it is not offset to one side.

Figure 6. Power Supply Connections

Plug in the wall transformer. The power LED on the Prototyping Board
should light up. The RCM2200 and the Prototyping Board are now ready to be
used.

5.5 ADC DESCRIPTION

General Description

The ADC0801, ADC0802, ADC0803, ADC0804 and ADC0805 are
CMOS 8-bit successive approximation A/D converters that use a differential
potentiometric ladder—similar to the 256R products. These converters are
designed to allow operation with the NSC800 and INS8080A derivative control
bus with TRI-STATE output latches directly driving the data bus. These A/Ds
appear like memory locations or I/O ports to the microprocessor and no
interfacing logic is needed. Differential analog voltage inputs allow increasing
the common-mode rejection and offsetting the analog zero input voltage
value. In addition, the voltage reference input can be adjusted to allow
encoding any smaller analog voltage span to the full 8 bits of resolution.
Features

1. Compatible with 8080 µP derivatives—no interfacing logic needed; access
time 135 ns
2. Easy interface to all microprocessors, or operates “stand alone”
3. Differential analog voltage inputs
4. Logic inputs and outputs meet both MOS and TTL voltage level
specifications
5. Works with 2.5 V (LM336) voltage reference
6. On-chip clock generator
7. 0 V to 5 V analog input voltage range with single 5 V supply
8. No zero adjust required
9. 0.3" standard width 20-pin DIP package
10. 20-pin molded chip carrier or small outline package
11. Operates ratiometrically or with 5 VDC, 2.5 VDC, or analog span adjusted
voltage reference

Key Specifications:

1. Resolution: 8 bits
2. Total error: ±1⁄4 LSB, ±1⁄2 LSB and ±1 LSB
3. Conversion time: 100 µs

This figure shows the pin diagram of the ADC080x family. It is a dual in-line
package with 20 pins. The data pins of this IC are connected to the data pins of
the Rabbit microprocessor. The fourth pin of PORT D of the processor is
connected to the RD pin of the ADC, the fifth pin of PORT D to the WR pin of
the ADC, and the sixth pin of PORT D to the INT pin of the ADC.

We are using the ADC0801 IC for analog-to-digital conversion; its
specifications are mentioned above. Its operating range is −40 to 85 degrees
centigrade.
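The RD/WR/INTR handshake behind the timing diagrams below can be sketched in
portable C by abstracting the port operations behind function pointers, so the
sequence can be followed without the actual Rabbit port registers. All names here,
and the trivial simulation backend, are assumptions for illustration only:

```c
/* Abstract pin/bus operations so the ADC0804 handshake can be
 * expressed independently of the Rabbit port registers. */
struct adc_ops {
    void (*set_wr)(int level);          /* drive WR (active low) */
    void (*set_rd)(int level);          /* drive RD (active low) */
    int  (*intr_low)(void);             /* 1 when INTR is asserted (low) */
    unsigned char (*read_bus)(void);    /* latch the 8 data lines */
};

/* Start a conversion (WR pulse), wait for INTR, then read (RD pulse). */
unsigned char adc0804_convert(const struct adc_ops *ops)
{
    unsigned char value;

    ops->set_wr(0);              /* falling edge of WR starts the conversion */
    ops->set_wr(1);
    while (!ops->intr_low())     /* INTR goes low when conversion completes */
        ;
    ops->set_rd(0);              /* RD low enables the TRI-STATE outputs */
    value = ops->read_bus();
    ops->set_rd(1);              /* RD high also clears INTR */
    return value;
}

/* --- Trivial simulation backend for desktop testing --- */
static int sim_done = 0;
static void sim_set_wr(int level) { sim_done = (level != 0); /* finish on rising edge */ }
static void sim_set_rd(int level) { (void)level; }
static int  sim_intr_low(void)    { return sim_done; }
static unsigned char sim_read_bus(void) { return 0x80; /* pretend mid-scale input */ }

static const struct adc_ops sim_ops =
    { sim_set_wr, sim_set_rd, sim_intr_low, sim_read_bus };
```

On the real hardware, the four operations would write and read the Rabbit PORT D
bits wired to RD, WR, INT, and the data bus as described above.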
TIMING DIAGRAMS

Read operation:

Write operation:

5.6 Temperature Sensor:

LM35

Precision Centigrade Temperature Sensors

5.6.1 General Description

The LM35 series are precision integrated-circuit temperature sensors whose
output voltage is linearly proportional to the Celsius (centigrade) temperature. The
LM35 thus has an advantage over linear temperature sensors calibrated in kelvin, as
the user is not required to subtract a large constant voltage from its output to obtain
convenient centigrade scaling. The LM35 does not require any external calibration or
trimming to provide typical accuracies of ±1⁄4°C at room temperature and ±3⁄4°C
over a full −55°C to +150°C temperature range.

Low cost is assured by trimming and calibration at the wafer level. The
LM35’s low output impedance, linear output, and precise inherent calibration make
interfacing to readout or control circuitry especially easy. It can be used with single
power supplies, or with plus and minus supplies. As it draws only 60 µA from its
supply, it has very low self-heating, less than 0.1°C in still air. The LM35 is rated to
operate over a −55° to +150°C temperature range, while the LM35C is rated for a
−40° to +110°C range (−10° with improved accuracy). The LM35 series is available
packaged in hermetic TO-46 transistor packages, while the LM35C, LM35CA, and
LM35D are also available in the plastic TO-92 transistor package. The LM35D is also
available in an 8-lead surface mount small outline package and a plastic TO-220
package.

5.6.2 Features:

1. Calibrated directly in ° Celsius (centigrade)
2. Linear +10.0 mV/°C scale factor
3. 0.5°C accuracy guaranteeable (at +25°C)
4. Rated for full −55° to +150°C range
5. Suitable for remote applications
6. Low cost due to wafer-level trimming
7. Operates from 4 to 30 volts
8. Less than 60 µA current drain
9. Low self-heating, 0.08°C in still air
10. Nonlinearity only ±1⁄4°C typical
11. Low-impedance output, 0.1 Ω for 1 mA load

6. SOFTWARE IMPLEMENTATION

PROGRAM FOR MONITORING THE TEMPERATURE OF A REMOTE SERVER

/*****************************************************************
 * post.c
 * Z-World, 2000
 *
 * This program accepts and parses POST-style form submissions
 * to get information from the user.
 *****************************************************************/

#class auto
/***********************************
* Configuration *
* ------------- *
* All fields in this section must *
* be altered to match your local *
* network settings. *
***********************************/
/*
 * Pick the predefined TCP/IP configuration for this sample. See
 * LIB\TCPIP\TCP_CONFIG.LIB for instructions on how to set the
 * configuration.
 */

#define TCPCONFIG 1
/*
* Web server configuration
*/
/*
* Only one server is needed for a reserved port
*/
#define HTTP_MAXSERVERS 1
#define MAX_TCP_SOCKET_BUFFERS 1

/*
* Our web server as seen from the clients.
* This should be the address that the clients (netscape/IE)
* use to access your server. Usually, this is your IP address.
* If you are behind a firewall, though, it might be a port on
* the proxy, that will be forwarded to the Rabbit board. The
* commented out line is an example of such a situation.
*/
#define REDIRECTHOST _PRIMARY_STATIC_IP
//#define REDIRECTHOST "proxy.domain.com:1212"
/********************************
* End of configuration section *
********************************/
/*
 * REDIRECTTO is used by each ledxtoggle CGI to tell the
 * browser which page to hit next. The default REDIRECTTO
 * assumes that you are serving a page that does not have
 * any address translation applied to it.
 */

#define REDIRECTTO "http://" REDIRECTHOST ""

#memmap xmem

#use "dcrtcp.lib"
#use "http.lib"

#ximport "samples/tcpip/http/pages/form.html" index_html


#ximport "images/faqs_ttl.gif" itech_logo

//ADC READING MACROS

//Hardware connections
//Port A -> ADC DATA
//Port D.3 ->ADC RD
//Port D.4 ->ADC WR
//Port D.5 ->ADC INT
//

#define INT 0
#define RD 1
#define WR 4

#define SET 1
#define CLR 0

/* the default for / must be first */
const HttpType http_types[] =
{
{ ".shtml", "text/html", shtml_handler}, // ssi
{ ".html", "text/html", NULL}, // html
{ ".cgi", "", NULL}, // cgi
{ ".gif", "image/gif", NULL}
};

#define MAX_FORMSIZE 64

typedef struct {
char *name;
char value[MAX_FORMSIZE];
} FORMType;
FORMType FORMSpec[1];

int setpoint,ctmp;

char lrtime[40],strsetpoint[10],strctmp[10];
unsigned int d;

void getTemp(void);
/*
 * Parse the url-encoded POST data into the FORMSpec struct
 * (i.e. parse 'foo=bar&baz=qux' into the struct).
 */

int parse_post(HttpState* state)
{
auto int retval;
auto int i;

// state->s is the socket structure, and state->p is a pointer
// into the HTTP state buffer (initially pointing to the beginning
// of the buffer). Note that state->p was set up in the submit
// CGI function. Also note that we read up to content_length,
// or HTTP_MAXBUFFER, whichever is smaller. Larger POSTs will
// be truncated.
retval = sock_aread(&state->s, state->p,
        (state->content_length < HTTP_MAXBUFFER-1) ?
        (int)state->content_length : HTTP_MAXBUFFER-1);

if (retval < 0) {
// Error--just bail out
return 1;
}

// Use the subsubstate to keep track of how much data we have received
state->subsubstate += retval;

if (state->subsubstate >= state->content_length) {


// NULL-terminate the content buffer
state->buffer[(int)state->content_length] = '\0';

// Scan the received POST information into the FORMSpec structure
for(i=0; i<(sizeof(FORMSpec)/sizeof(FORMType)); i++) {
    http_scanpost(FORMSpec[i].name, state->buffer,
                  FORMSpec[i].value, MAX_FORMSIZE);
}

// Finished processing--returning 1 indicates that we are done
return 1;
}
// Processing not finished--return 0 so that we can be called again
return 0;
}

/*
* Sample submit.cgi function
*/

int submit(HttpState* state)
{
auto int i;

if(state->length)
{
/* buffer to write out */
if(state->offset < state->length)
{
state->offset += sock_fastwrite(&state->s,
state->buffer + (int)state->offset,
(int)state->length - (int)state->offset);
}
else
{
state->offset = 0;
state->length = 0;
}
}

else
{
switch(state->substate)
{
case 0:
strcpy(state->buffer, "HTTP/1.0 200 OK\r\n\r\n");
state->length = strlen(state->buffer);
state->offset = 0;
state->substate++;
break;

case 1:
strcpy(state->buffer,
"<html><head><title>Results</title></head><body>\r\n");
state->length = strlen(state->buffer);
state->substate++;
break;

case 2:
/* init the FORMSpec data */
FORMSpec[0].value[0] = '\0';

state->p = state->buffer;
state->substate++;
break;

case 3:
/* parse the POST information */
if(parse_post(state))
{
sprintf(state->buffer, "<p>SetPoint: %s<p>\r\n",
        FORMSpec[0].value);
// ### Add
strcpy(strsetpoint, FORMSpec[0].value);
setpoint = atoi(strsetpoint);
// ### Add End
state->length = strlen(state->buffer);
state->substate++;

}
else{ }
break;

case 4:
strcpy(state->buffer,
       "<p>Go <a href=\"/\">home</a></body></html>\r\n");
state->length = strlen(state->buffer);
state->substate++;
break;

default:

state->substate = 0;
return 1;
}
}

return 0;
}

const HttpSpec http_flashspec[] =
{
   // Files
   { HTTPSPEC_FILE,     "/",             index_html, NULL, 0, NULL, NULL},
   { HTTPSPEC_FILE,     "/index.html",   index_html, NULL, 0, NULL, NULL},
   { HTTPSPEC_FILE,     "/faqs_ttl.gif", itech_logo, NULL, 0, NULL, NULL},
   // Variables
   { HTTPSPEC_VARIABLE, "setpoint", 0, strsetpoint, PTR16, "%s", NULL},
   { HTTPSPEC_VARIABLE, "ctmp",     0, strctmp,     PTR16, "%s", NULL},
   { HTTPSPEC_VARIABLE, "lrtime",   0, lrtime,      PTR16, "%s", NULL},
   // Function
   { HTTPSPEC_FUNCTION, "/submit.cgi", 0, submit, 0, NULL, NULL}
};

void main()
{
/* 1. Configure the I/O ports. Disable the slave port, which makes
 * Port A an output and leaves Port E without the SCS signal.
 */

WrPortI(SPCR, &SPCRShadow, 0x80);

WrPortI(PDFR,&PDFRShadow,0X0);
WrPortI(PDDDR,& PDDDRShadow,0X18);

/* init FORM searchable names - must init ALL FORMSpec structs! */

FORMSpec[0].name = "setpoint";

ctmp=0;
setpoint =0;

sock_init();
http_init();

tcp_reserveport(80);

while (1){

http_handler();
getTemp();
// for(d=5000;d>0;d--);

}
}

void getTemp()
{
struct tm rtc; // time struct
// ADC Read
int val;

BitWrPortI(PDDR,&PDDRShadow,SET,WR);   // make sure WR is high
BitWrPortI(PDDR,&PDDRShadow,CLR,WR);   // pulse WR low to start a conversion

for(d=5000;d>0;d--);                   // delay
BitWrPortI(PDDR,&PDDRShadow,SET,WR);   // return WR high

while( BitRdPortI(PDDR,INT) );         // wait for INT to signal end of conversion

BitWrPortI(PDDR,&PDDRShadow,CLR,RD);   // drive RD low to enable the data output

//for(d=65000;d>0;d--);                // delay

BitWrPortI(PDDR,&PDDRShadow,CLR,RD);
BitWrPortI(PDDR,&PDDRShadow,SET,RD);   // return RD high

ctmp = RdPortI(PADR);                  // read the 8-bit result from Port A

sprintf(strctmp,"%d",ctmp);

//get the RTC time
tm_rd(&rtc);                           // read the real-time clock into struct tm

sprintf(lrtime," %02d:%02d:%02d , %02d/%02d/%04d\n\n",
        rtc.tm_hour, rtc.tm_min, rtc.tm_sec,
        rtc.tm_mday, rtc.tm_mon, 1900+rtc.tm_year);
}
