CHAPTER 6
[Figure: System layers — applications and services, middleware, operating system, platform, shown at Node 1 and Node 2.]
Dr. Almetwally Mostafa 5
Operating System Layer
User satisfaction is achieved if the middleware-OS combination has good performance.
The OS running at a node provides its own abstractions of local hardware
resources for processing, storage and communication.
Middleware utilizes a combination of local resources to implement its
mechanisms for remote invocations between objects or processes.
OSs provide support for the middleware layer to work
effectively:
Encapsulation: provide a transparent service interface to the resources of
the computer.
Protection: protect resources from illegitimate access.
Concurrent processing: users/clients may share resources and
access them concurrently.
Provision: provide the resources needed for (distributed) services and
applications to complete their tasks: communication and scheduling.
[Figure: Core OS functionality — includes the communication manager and supervisor, among the components listed below.]
Operating System Layer
The core OS components include the following:
Process manager:
Handles the creation of and operations upon processes.
Thread manager:
Handles thread creation, synchronization and scheduling.
Communication manager:
Handles communication between threads attached to different
processes on the same computer.
Some OSs support communication between threads in remote
processes.
Memory manager:
Manages physical and virtual memory.
Supervisor:
Dispatches interrupts, system call traps and other exceptions.
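The thread manager's responsibilities (creation, synchronization and scheduling) can be illustrated with a minimal Python sketch; the worker function and counts below are hypothetical, and the OS handles the actual scheduling of the created threads:

```python
import threading

counter = 0
lock = threading.Lock()          # synchronization primitive from the thread manager

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # protect the shared counter from concurrent updates
            counter += 1

# thread creation; the OS scheduler interleaves their execution
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # wait for all threads to finish

print(counter)  # 40000 — the lock makes the increments atomic
```

Without the lock, the interleaved read-modify-write sequences could lose updates, which is exactly the kind of hazard the synchronization facilities exist to prevent.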
[Figure: Copy-on-write region copying — region RB is copied from RA; A's page table and B's page table share a frame in the kernel.]
[Figure: Client and server with threads — client thread 2 makes requests to the server while thread 1 generates results; the server queues received requests (receipt & queuing) for N worker threads and performs input-output.]
[Figure: Alternative server threading architectures — worker-pool threads, per-connection threads, per-object threads.]
[Figure: Scheduler activations — processes A and B run on virtual processors assigned by the kernel; upcall events include P added, P idle, P needed, SA blocked, SA unblocked, SA preempted.]
Communication and Invocation
Invocation Performance
The performance of RPC and RMI mechanisms is a critical
factor for effective distributed systems.
Clients and servers may make many millions of invocation-related
operations in their lifetimes.
Software overheads often predominate over network
overheads in invocation times.
Invocation times have not decreased in proportion with increases in
network bandwidth.
Each invocation mechanism executes code outside the scope of the
calling procedure or object, and involves communicating the
arguments to that code and returning data values to the caller.
The important performance-related distinctions between
invocation mechanisms are:
Whether they are synchronous or asynchronous.
Whether they involve a domain transition.
Whether they involve communication across a network.
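The synchronous/asynchronous distinction above can be sketched in Python; the `invoke` function and its delay are hypothetical stand-ins for a remote invocation:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def invoke(x):
    time.sleep(0.05)     # stand-in for network + server processing delay
    return x * 2

with ThreadPoolExecutor() as pool:
    # synchronous invocation: the caller blocks until the result arrives
    r1 = invoke(1)

    # asynchronous invocation: the caller continues immediately
    fut = pool.submit(invoke, 2)
    # ... the caller can do other work here ...
    r2 = fut.result()    # collect the result later

print(r1, r2)  # 2 4
```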
[Figure: Invocations between address spaces — (a) system call: control transfer via trap instruction across the protection domain boundary; a cross-machine invocation passes between Thread 1 and Thread 2, crossing User 1/Kernel 1 and User 2/Kernel 2.]
A null invocation is an RPC (or RMI) without parameters
that executes a null procedure and returns no values.
It is important because it measures a fixed overhead, the latency.
Execution time for a null procedure call:
Local procedure call: < 1 microsecond
Remote procedure call: ~10 milliseconds, about 10,000 times slower!
Much of the delay is taken by the actions of the operating
system kernel and middleware.
Network time for about 100 bytes (the null invocation size)
transferred at 100 megabits/sec accounts for only 0.01 millisecond.
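The gap between a local call and a cross-address-space invocation can be measured with a rough Python sketch; a UDP echo over loopback stands in for a null RPC (no real network is involved, so this understates the remote case):

```python
import socket
import threading
import time

def null_procedure():
    pass

# time a local null procedure call
N = 100_000
t0 = time.perf_counter()
for _ in range(N):
    null_procedure()
local_us = (time.perf_counter() - t0) / N * 1e6

# UDP echo server on the loopback interface stands in for a null RPC server
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
ROUNDS = 1_000

def echo():
    for _ in range(ROUNDS):
        data, addr = server.recvfrom(64)
        server.sendto(data, addr)

threading.Thread(target=echo, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
t0 = time.perf_counter()
for _ in range(ROUNDS):
    client.sendto(b"x", server.getsockname())   # "request"
    client.recvfrom(64)                          # "reply"
remote_us = (time.perf_counter() - t0) / ROUNDS * 1e6

print(f"local: {local_us:.2f} us, loopback round trip: {remote_us:.2f} us")
```

Even over loopback, kernel traps, data copying and scheduling make each round trip orders of magnitude more expensive than the local call.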
Factors affecting remote invocation performance:
Marshalling/unmarshalling + operation dispatch at the server
Data copying: application -> kernel space -> communication buffers
Thread scheduling and context switching
Protocol processing: for each protocol layer
Network access delays: connection setup, network latency
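The first factor, marshalling/unmarshalling, is pure software overhead that can be timed directly; this sketch uses Python's `pickle` as one possible marshalling scheme, and the request message is a made-up example:

```python
import pickle
import time

# a hypothetical request message like those exchanged in an RPC
request = {"op": "transfer", "amount": 100, "to": "acct-42"}

ROUNDS = 10_000
t0 = time.perf_counter()
for _ in range(ROUNDS):
    wire = pickle.dumps(request)      # marshalling into a byte stream
    decoded = pickle.loads(wire)      # unmarshalling at the receiver
elapsed_us = (time.perf_counter() - t0) / ROUNDS * 1e6

print(f"marshal + unmarshal: {elapsed_us:.2f} us per message")
```

This cost is incurred on every invocation regardless of network speed, which is one reason invocation times have not fallen in proportion to bandwidth.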
Shared regions may be used for rapid communication between
a user process and the kernel, or between user processes.
Data is communicated by writing to and reading from the shared region,
without copying it to and from the kernel's address space.
The delay a client experiences during request-reply
interactions over TCP is not necessarily worse than over UDP, and it
is sometimes better for large messages.
The operating system's default buffering can be used to collect several
small messages and then send them together, rather than sending them
in separate packets.
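That default buffering of small writes is Nagle's algorithm; when low latency matters more than packet efficiency, it can be disabled per-socket. A minimal sketch using Python's standard socket API:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# By default the OS may coalesce small writes into fewer packets
# (Nagle's algorithm). For latency-sensitive request-reply traffic,
# disable it so each small message is sent immediately:
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()

print("TCP_NODELAY enabled:", bool(nodelay))
```

The trade-off is the one described above: batching amortizes per-packet overheads, while TCP_NODELAY minimizes per-message delay.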
Lightweight RPC (LRPC) is a more efficient invocation mechanism for
the case of two processes on the same computer:
Based on optimizations concerning data copying and thread scheduling.
Uses shared regions for client-server communication, with a different
private region (an A stack) between the server and each of its local clients.
Each client and the server are able to pass arguments and return values
directly via an A stack.
[Figure: A lightweight remote procedure call — the client passes arguments to the server via a shared A stack; 2. trap to kernel, 3. upcall to the server, 5. return (trap).]
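The idea of passing arguments and results through a shared region, rather than copying them through the kernel, can be imitated with Python's `multiprocessing.shared_memory`; the `double_in_place` "server" and the one-byte argument are hypothetical illustrations, not the LRPC protocol itself:

```python
from multiprocessing import Process, shared_memory

def double_in_place(name):
    # hypothetical "server": reads the argument from the shared region
    # and writes the result back in place, without message copies
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[1] = shm.buf[0] * 2
    shm.close()

def lrpc_demo():
    shm = shared_memory.SharedMemory(create=True, size=2)
    try:
        shm.buf[0] = 21                              # argument placed in the shared region
        p = Process(target=double_in_place, args=(shm.name,))
        p.start()
        p.join()
        return shm.buf[1]                            # result read directly by the client
    finally:
        shm.close()
        shm.unlink()

if __name__ == "__main__":
    print(lrpc_demo())  # 42
```

Real LRPC additionally optimizes the control transfer (the trap and upcall in the figure above); this sketch shows only the zero-copy data path.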