
ABSTRACT

Challenges in the Migration to 4G

Second-generation (2G) mobile systems were very successful in the
previous decade. Their success prompted the development of third
generation (3G) mobile systems. While 2G systems such as GSM, IS-95, and
cdmaOne were designed to carry speech and low-bit-rate data, 3G systems
were designed to provide higher-data-rate services. During the evolution
from 2G to 3G, a range of wireless systems, including GPRS, IMT-2000,
Bluetooth, WLAN, and HiperLAN, have been developed. All these systems
were designed independently, targeting different service types, data rates,
and users. As all these systems have their own merits and shortcomings,
there is no single system that is good enough to replace all the other
technologies. Instead of putting efforts into developing new radio interfaces
and technologies for 4G systems, which some researchers are doing, we
believe establishing 4G systems that integrate existing and newly developed
wireless systems is a more feasible option.

Researchers are currently developing frameworks for future 4G
networks. Different research programs, such as Mobile VCE, MIRAI, and
DoCoMo, have their own visions on 4G features and implementations. Some
key features (mainly from the user's point of view) of 4G networks are as
follows:
• High usability: anytime, anywhere, and with any technology
• Support for multimedia services at low transmission cost
• Personalization
• Integrated services
First, 4G networks are all IP based heterogeneous networks that allow users
to use any system at any time and anywhere. Users carrying an integrated
terminal can use a wide range of applications provided by multiple wireless
networks.

Second, 4G systems provide not only telecommunications services, but
also data and multimedia services. To support multimedia services, high-data-rate
services with good system reliability will be provided. At the same
time, a low per-bit transmission cost will be maintained.

Third, personalized service will be provided by this new-generation
network. It is expected that when 4G services are launched, users in widely
different locations, occupations, and economic classes will use the services.
In order to meet the demands of these diverse users, service providers
should design personal and customized services for them.
ABSTRACT

A 64 Point Fourier Transform Chip

Fourth generation wireless and mobile systems are currently the focus
of research and development. Broadband wireless systems based on
orthogonal frequency division multiplexing will allow packet-based high-data-rate
communication suitable for video transmission and mobile internet
applications. Considering this, we propose a data path architecture using
dedicated hardware for the baseband processor.

The most computationally intensive parts of such a high-data-rate
system are the 64-point inverse FFT in the transmit direction and the Viterbi
decoder in the receive direction. Accordingly, an appropriate design
methodology for constructing them has to be chosen, considering: a) how
much silicon area is needed; b) how easily the particular architecture can be
made flat for implementation in VLSI; c) how many wire crossings and how
many long wires carrying signals to remote parts of the design are necessary
in the actual implementation; d) how small the power consumption can be.
This paper describes a novel 64-point FFT/IFFT processor which has been
developed as part of a larger research project to develop a single-chip
wireless modem.

ALGORITHM FORMULATION

The discrete Fourier transform A(r) of a complex data sequence B(k) of
length N, where r, k ∈ {0, 1, ..., N-1}, can be described as

    A(r) = Σ_{k=0..N-1} B(k) · W_N^(kr),

where W_N = e^(-2πj/N). Let us consider that N = MT, r = s + Tt, and
k = l + Mm, where s, l ∈ {0, 1, ..., M-1} and m, t ∈ {0, 1, ..., T-1}.
Applying these values in the first equation, we get

    A(s + Tt) = Σ_{l=0..M-1} W_M^(lt) · W_N^(ls) · [ Σ_{m=0..T-1} B(l + Mm) · W_T^(ms) ]

This shows that it is possible to realize the FFT of length N by first
decomposing it into one M-point and one T-point FFT, where N = MT, and
combining them. But this results in a two-dimensional instead of a
one-dimensional FFT structure. We can formulate the 64-point FFT by
considering M = T = 8.

This shows that it is possible to express the 64-point FFT in terms of a
two-dimensional structure of 8-point FFTs plus 64 complex inter-dimensional
constant multiplications. At first, the appropriate data samples undergo an
8-point FFT computation; the intermediate results are then multiplied by the
inter-dimensional constants before the second set of 8-point FFTs.
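The decomposition above can be sketched in software. The following is a minimal numerical model, not the chip's data path: it assumes NumPy and uses a direct matrix 8-point DFT in place of the dedicated radix-8 butterfly units.

```python
import numpy as np

def fft8(x):
    # Direct 8-point DFT; stands in for a dedicated radix-8 butterfly unit.
    n = np.arange(8)
    return np.exp(-2j * np.pi * np.outer(n, n) / 8) @ x

def fft64(b):
    """64-point DFT via the 8 x 8 decomposition: N = M*T with M = T = 8,
    r = s + 8t, k = l + 8m."""
    b = np.asarray(b, dtype=complex)
    # Step 1: an 8-point FFT over m for each l -> inner[l, s]
    inner = np.array([fft8(b[l::8]) for l in range(8)])
    # Step 2: the 64 complex inter-dimensional constants W_64^(l*s)
    l, s = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    inner = inner * np.exp(-2j * np.pi * l * s / 64)
    # Step 3: an 8-point FFT over l for each s -> A[s + 8t]
    A = np.empty(64, dtype=complex)
    for s_ in range(8):
        A[s_ + 8 * np.arange(8)] = fft8(inner[:, s_])
    return A
```

The two FFT stages plus the twiddle multiplication map directly onto the two-dimensional structure described above, which is why the chip needs only 8-point butterflies and one bank of constant multipliers.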
ABSTRACT

MILSTD 1553B

The digital data bus MILSTD 1553B was designed in the early 1970s to
replace analog point-to-point wire bundles between electronic
instruments. The MILSTD 1553B has four main elements:

1. Bus controller to manage information flow.
2. Remote terminals to interface with one or more subsystems.
3. Bus monitor for data bus monitoring.
4. Data bus components, including bus couplers, cabling, terminators and
connectors.

This is a differential serial bus used in military and space equipment. It
comprises multiple redundant bus connections and communicates at
1 Mbps. The bus has a single active bus controller and up to 31 remote
terminals. Data transfers use 16-bit data words.

INTERFACE DESCRIPTION

MILSTD 1553 is a military standard that defines the electrical and
protocol characteristics for a data bus. A data bus is used to provide a
medium for the exchange of data between various systems. It is similar to a
LAN in the personal computer and office automation industry. A data
transmission medium which would allow all systems and subsystems to
share a common set of wires was needed. So the MILSTD 1553 standard
defines TDM as the transmission of data from several signal sources through
one communication system, with different signal samples staggered in time
to form a composite pulse train.
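As a concrete illustration of the word-oriented protocol, the sketch below packs the 16 information bits of a 1553 command word (5-bit remote terminal address, transmit/receive bit, 5-bit subaddress, 5-bit word count). The field layout is the standard one, but the helper itself is illustrative; the 3-bit sync pattern and the parity bit added on the wire are omitted.

```python
def command_word(rt_addr, transmit, subaddr, word_count):
    """Pack the 16 information bits of a MIL-STD-1553 command word:
    [15:11] RT address | [10] T/R | [9:5] subaddress | [4:0] word count.
    A word count of 32 is encoded as 0; sync and parity are omitted."""
    if not (0 <= rt_addr <= 31 and 0 <= subaddr <= 31 and 1 <= word_count <= 32):
        raise ValueError("field out of range")
    return (rt_addr << 11) | (int(transmit) << 10) | (subaddr << 5) | (word_count % 32)
```

For example, a command telling remote terminal 5 to transmit 2 words from subaddress 1 packs to 0x2C22.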

HISTORY

In 1968, the SAE, a technical body of military and industrial members,
established a subcommittee to define a serial data bus to meet the needs of
the military avionics community, under the project name A2-K. This
subcommittee developed the first draft in 1970. In 1973 MILSTD 1553 was
released and introduced on the F-16 fighter plane.

Then MILSTD 1553A was released in 1975 and used on the Air Force's F-16
and the Army's AH-64A Apache attack helicopter. 1553B was released in
1978. The SAE decided to freeze the standard to allow the component
manufacturers to develop real-world 1553 products.
ABSTRACT

BIT for Intelligent system design

The principle of built-in test (BIT) and self-test has been widely applied to
the design and testing of complex, mixed-signal electronic systems, such as
integrated circuits (ICs) and multifunctional instrumentation [1]. A system
with BIT is characterized by its ability to identify its operating condition by
itself, through the testing and diagnosis capabilities built into its structure.
To ensure reliable performance, testability needs to be incorporated into the
early stages of system and product design.

Various techniques have been developed over the past decades to
implement BIT. In the semiconductor industry, the objective of applying BIT
is to improve the yield of chip fabrication, enable robust and efficient chip
testing, and better cope with increasing circuit complexity and integration
density.

This has been achieved by having an IC chip generate its own test
stimuli and measure the corresponding responses from the various elements
within the chip to determine its condition. In recent years, BIT has seen
increasing applications in other branches of industry, e.g. manufacturing,
aerospace and transportation, for the purposes of system condition
monitoring. In manufacturing systems, BIT facilitates automatic detection of
tool wear and breakage and assists in corrective actions to ensure part
quality and reduce machine downtime.

2. BIT TECHNIQUES

BIT techniques are classified as:
a. on-line BIT
b. off-line BIT

On-line BIT:
Includes concurrent and nonconcurrent techniques. Testing occurs during
normal functional operation.
Concurrent on-line BIT - testing occurs simultaneously with the normal
operation mode; usually coding techniques or duplication and comparison
are used. [3]
Nonconcurrent on-line BIT - testing is carried out while a system is in an
idle state, often by executing diagnostic software or firmware routines.
Off-line BIT:
The system is not in its normal working mode; it usually uses on-chip test
generators and output response analysers or microdiagnostic routines.
Functional off-line BIT is based on a functional description of the component.
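An off-line BIT scheme of the kind described — an on-chip pseudo-random pattern generator feeding the logic under test, with the responses compacted into a signature — can be sketched as follows. The 16-bit LFSR taps and the XOR compactor are illustrative choices, not taken from any particular chip.

```python
def bist_signature(circuit, seed=0xACE1, patterns=100):
    """Off-line BIT sketch: a 16-bit LFSR (taps 16, 14, 13, 11 - a
    maximal-length polynomial) generates pseudo-random test patterns;
    each response from the circuit under test is folded into a single
    signature, compared afterwards against a known-good value."""
    lfsr, signature = seed, 0
    for _ in range(patterns):
        signature ^= circuit(lfsr)  # crude XOR response compactor
        bit = (lfsr ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1
        lfsr = (lfsr >> 1) | (bit << 15)
    return signature

# A stand-in combinational block and its golden signature:
good = bist_signature(lambda x: (x ^ (x >> 3)) & 0xFFFF)
```

A fabricated chip re-runs the same sequence and compares its signature against the golden value; any mismatch flags a fault without shipping every test pattern off-chip.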
ABSTRACT

CAN

The development of CAN began when more and more electronic
devices were implemented in modern motor vehicles. Examples of such
devices include engine management systems, active suspension, ABS, gear
control, lighting control, air conditioning, airbags and central locking. All this
means more safety and more comfort for the driver, and of course a
reduction in fuel consumption and exhaust emissions.

To improve the behavior of the vehicle even further, it was necessary
for the different control systems to exchange information. This was usually
done by discrete interconnection of the different systems. The requirement
for information exchange then grew to such an extent that a cable network
with a length of up to several miles and many connectors was required. This
produced growing problems concerning material cost, production time and
reliability. The solution to the problem was the connection of the control
systems through a serial bus system. This bus had to fulfill some special
requirements due to its usage in a vehicle.

With the use of CAN, point-to-point wiring is replaced by one serial bus
connecting all control systems. This is accomplished by adding some CAN-specific
hardware to each control unit that provides the "rules" or the
protocol for transmitting and receiving information via the bus. CAN, or
Controller Area Network, is an advanced serial bus system that efficiently
supports distributed control systems. It was initially developed for use in
motor vehicles by Robert Bosch GmbH, Germany, in the late 1980s; Bosch
also holds the CAN license. CAN is most widely used in the automotive and
industrial market segments. Typical applications for CAN are motor vehicles,
utility vehicles, and industrial automation. Other applications are trains,
medical equipment, building automation, household appliances, and office
automation.
ABSTRACT

Chip Morphing

Engineering is a study of tradeoffs. In computer engineering the
tradeoff has traditionally been between performance, measured in
instructions per second, and price. Because of fabrication technology, price
is closely related to chip size and transistor count. With the emergence of
embedded systems, a new tradeoff has become the focus of design. This
new tradeoff is between performance and power or energy consumption. The
computational requirements of early embedded systems were generally
more modest, and so the performance-power tradeoff tended to be weighted
towards power. "High performance" and "energy efficient" were generally
opposing concepts.

However, new classes of embedded applications are emerging which
not only have significant energy constraints, but also require considerable
computational resources. Devices such as space rovers, cell phones,
automotive control systems, and portable consumer electronics all require or
can benefit from high-performance processors. The future generations of
such devices should continue this trend.

Processors for these devices must be able to deliver high performance
with low energy dissipation. Additionally, these devices exhibit large
fluctuations in their performance requirements. Often a device will have very
low performance demands for the bulk of its operation, but will experience
periodic or asynchronous "spikes" when high-performance is needed to meet
a deadline or handle some interrupt event. These devices not only require a
fundamental improvement in the performance power tradeoff, but also
necessitate a processor which can dynamically adjust its performance and
power characteristics to provide the tradeoff which best fits the system
requirements at that time.

These motivations point to three major objectives for a power-conscious
embedded processor. Such a processor must be capable of high
performance, must consume low amounts of power, and must be able to
adapt to changing performance and power requirements at runtime.
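One simple form of such runtime adaptation is a table of operating points from which the slowest level meeting the current demand is chosen. The levels, power figures and the 1 MIPS/MHz assumption below are invented for illustration:

```python
# Hypothetical (frequency_MHz, power_mW) operating points; power grows
# faster than frequency because supply voltage must rise with clock speed.
LEVELS = [(200, 50), (400, 130), (800, 400), (1600, 1400)]

def pick_level(demand_mips):
    """Choose the slowest operating point whose throughput covers the
    demand (assuming ~1 MIPS per MHz), so the processor runs cheaply
    for the bulk of operation yet can still absorb performance spikes."""
    for mhz, mw in LEVELS:
        if mhz >= demand_mips:
            return mhz, mw
    return LEVELS[-1]  # saturate at the fastest level
```

A background workload of 300 MIPS would run at the 400 MHz / 130 mW point, while an interrupt-driven spike pushes the core to its top level only for as long as the demand lasts.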

The objective of this seminar is to define a micro-architecture which
can exhibit low power consumption without sacrificing high performance.
This will require a fundamental shift in the power-performance curve
presented by traditional microprocessors.
ABSTRACT

Nuclear Batteries - Daintiest Dynamos

Micro electro mechanical systems (MEMS) comprise a rapidly
expanding research field with potential applications varying from sensors in
air bags, wrist-worn GPS receivers, and matchbox-size digital cameras to
more recent optical applications. Depending on the application, these
devices often require an on-board power source for remote operation,
especially in cases requiring operation for an extended period of time. In the
quest to boost micro-scale power generation, several groups have turned
their efforts to well-known energy sources, namely hydrogen and
hydrocarbon fuels such as propane, methane, gasoline and diesel. Some
groups are developing micro fuel cells that, like their macro-scale
counterparts, consume hydrogen to produce electricity. Others are
developing on-chip combustion engines, which actually burn a fuel like
gasoline to drive a minuscule electric generator. But all these approaches
have some difficulties regarding low energy densities, elimination of
by-products, down-scaling and recharging. All these difficulties can be
overcome to a large extent by the use of nuclear micro batteries.

Radioisotope thermoelectric generators (RTGs) exploited the
extraordinary potential of radioactive materials for generating electricity.
RTGs are particularly used for generating electricity in space missions. They
use a process known as the Seebeck effect. The problem with RTGs is that
they don't scale down well. So scientists had to find other ways of
converting nuclear energy into electric energy. They have succeeded by
developing nuclear batteries.

NUCLEAR BATTERIES

Nuclear batteries use the incredible amount of energy released
naturally by tiny bits of radioactive material, without any fission or fusion
taking place inside the battery. These devices use thin radioactive films that
pack in energy at densities thousands of times greater than those of
lithium-ion batteries. Because of this high energy density, nuclear batteries
are extremely small. Considering the small size and shape of the battery,
the scientists who developed it fancifully call it the "DAINTIEST
DYNAMO". The word 'dainty' means pretty.

Scientists have developed two types of micro nuclear batteries. One is
the junction-type battery and the other is the self-reciprocating cantilever.
The operation of each is explained below.
ABSTRACT

Micro Electronic Pill

The invention of the transistor enabled the first use of radiometry
capsules, which used simple circuits for the internal study of the
gastro-intestinal (GI) tract [1]. They were of limited use, as they could
transmit only from a single channel and were constrained by the size of
their components. They also suffered from poor reliability, low sensitivity
and short lifetimes. This led to the application of single-channel telemetry
capsules for the detection of disease and abnormalities in the GI tract,
where the restricted area prevented the use of traditional endoscopy.

They were later modified, as they had the disadvantage of using
laboratory-type sensors such as glass pH electrodes, resistance
thermometers, etc., and were also very large. The later modification is
similar to the above instrument but is smaller in size due to the application
of existing semiconductor fabrication technologies. These technologies led
to the formation of the "MICROELECTRONIC PILL".

The microelectronic pill is basically a multichannel sensor used for remote
biomedical measurements using micro technology. It is used for the
real-time measurement of parameters such as temperature, pH, conductivity
and dissolved oxygen. The sensors are fabricated using electron-beam and
photolithographic pattern integration and are controlled by an
application-specific integrated circuit (ASIC).

BLOCK DIAGRAM

The microelectronic pill consists of 4 sensors (2) which are mounted on two
silicon chips (Chip 1 & 2), a control chip (5), a radio transmitter (STD
type - 7, crystal type - 10) and silver oxide batteries (8). Other figure
labels: 1 - access channel, 3 - capsule, 4 - rubber ring, 6 - PCB chip carrier.

BASIC COMPONENTS

A. Sensors

There are basically 4 sensors mounted on two chips - Chip 1 & Chip 2. On
Chip 1 (shown in fig 2 a), c), e)), a silicon-diode temperature sensor (4), a
pH ISFET sensor (1) and a dual-electrode conductivity sensor (3) are
fabricated. Chip 2 comprises a three-electrode electrochemical cell oxygen
sensor (2) and an optional NiCr resistance thermometer.
ABSTRACT

Non Visible Imaging

Near infrared light consists of light just beyond visible red light
(wavelengths greater than 780 nm). Contrary to popular thought, near
infrared photography does not allow the recording of thermal radiation
(heat). Far-infrared thermal imaging requires more specialized equipment.

Infrared images exhibit a few distinct effects that give them an exotic,
antique look. Plant life looks completely white because it reflects almost all
infrared light (because of this effect, infrared photography is commonly used
in aerial photography to analyze crop yields, pest control, etc.) The sky is a
stark black because no infrared light is scattered. Human skin looks pale and
ghostly. Dark sunglasses all but disappear in infrared because they don't
block any infrared light, and it's said that you can capture the near infrared
emissions of a common iron.

Infrared photography has been around for at least 70 years, but until
recently has not been easily accessible to those not versed in traditional
photographic processes. Since the charge-coupled devices (CCDs) used in
digital cameras and camcorders are sensitive to near-infrared light, they can
be used to capture infrared photos. With a filter that blocks out all visible
light (also frequently called a "cold mirror" filter), most modern digital
cameras and camcorders can capture photographs in infrared.

In addition, they have LCD screens, which can be used to preview the
resulting image in real-time, a tool unavailable in traditional photography
without using filters that allow some visible (red) light through.
ABSTRACT

Poly Fuse

Polyfuse is a new standard for circuit protection. It is resettable. Many
manufacturers also call it polyswitch or multifuse. Polyfuses are not fuses
but polymeric positive temperature coefficient (PPTC) thermistors.

Current limiting can be accomplished by using resistors, fuses,
switches or positive temperature coefficient devices. Resistors are rarely an
acceptable solution because the high-power resistors that are usually
required are expensive. One-shot fuses can be used, but they might fatigue,
and they must be replaced after a fault event. Ceramic PTC devices tend to
have high resistance and power dissipation characteristics.

The preferred solution is a PPTC device, which has low resistance in
normal operation and high resistance when exposed to a fault. Electrical
shorts or electrically overloaded circuits can cause over-current and
over-temperature damage.

Like traditional fuses, PPTC devices limit the flow of dangerously high
current during fault conditions. Unlike traditional fuses, PPTC devices reset
after the fault is cleared and the power to the circuit is removed.

THE BASICS

Technically, polyfuses are not fuses but polymeric positive temperature
coefficient (PPTC) thermistors. For thermistors characterized as positive
temperature coefficient, the device resistance increases with temperature.
These comprise thin sheets of conductive plastic with electrodes attached to
either side. The conductive plastic is basically a non-conductive crystalline
polymer loaded with a highly conductive carbon to make it conductive. The
electrodes ensure even distribution of power throughout the device.
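The positive temperature coefficient behaviour can be caricatured numerically. All constants below (hold resistance, trip temperature, slopes) are invented for illustration; real device curves come from manufacturer data sheets.

```python
def pptc_resistance(temp_c, r_hold=0.05, trip_c=120.0):
    """Toy PPTC curve: nearly flat, low resistance below the trip
    temperature, then a rise of several orders of magnitude as the
    expanding polymer breaks up its conductive carbon chains."""
    if temp_c < trip_c:
        return r_hold * (1.0 + 0.01 * (temp_c - 25.0))
    # jump of ~100x at the trip point, then ~10x per further 10 degC
    return r_hold * 100.0 * 10.0 ** ((temp_c - trip_c) / 10.0)
```

A device warmed past the trip point by fault-current I²R heating goes from tens of milliohms to hundreds of ohms, throttling the current until power is removed and the device cools and resets.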

Polyfuses are usually packaged in radial, axial, surface-mount, chip,
disk or washer form. These are available in voltage ratings of 30 to 250
volts and current ratings of 20 mA to 100 amps.
ABSTRACT

Smart Note Taker

The Smart NoteTaker is a helpful product that meets the needs of
people in today's technological, fast-paced life. This product can be used in
many ways. The Smart NoteTaker lets people who are busy with something
else take fast and easy notes. With the help of the Smart NoteTaker, people
will be able to write notes in the air while being busy with their work. The
written note will be stored on the memory chip of the pen, and can be read
in a digital medium after the job is done. This will save time and make life
easier.

The Smart NoteTaker is also helpful for the blind, who can think and
write freely with it. Another place where the product can play an important
role is where two people talk on the phone. The subscribers are apart from
each other during their talk, and they may want to use figures or text to
understand each other better. It is also useful for instructors in
presentations. The instructors may not want to present the lecture in front of
the board. The drawn figure can be processed and directly sent to the server
computer in the room. The server computer can then broadcast the drawn
shape through the network to all of the computers present in the
room. In this way, the lectures are intended to be more efficient and fun. This
product will be simple but powerful. The product will be able to sense 3D
shapes and motions that the user tries to draw. The sensed information will be
processed and transferred to the memory chip and then monitored on
the display device. The drawn shape can then be broadcast to the network
or sent to a mobile device.

There will be an additional feature of the product which will display
the notes which were taken before in an application program on the
computer. This application program can be a word document or an image
file. The sensed figures that were drawn in the air will be recognized, and
with the help of the software program we will write, the desired character
will be printed in the word document. If the application program is a
paint-related program, then the most similar shape will be chosen by the
program and printed on the screen.

Since a Java applet is suitable for both drawings and strings, all
these applications can be put together by developing a single Java program.
The Java code that we will develop will also be installed on the pen so that
the processor inside the pen will type and draw the desired shape or text on
the display panel.
ABSTRACT

4G Wireless Systems

A fourth generation wireless system is a packet-switched wireless
system with wide area coverage and high throughput. It is designed to be
cost effective and to provide high spectral efficiency. 4G wireless uses
Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Radio Band
(UWB), and millimeter wireless. A data rate of 20 Mbps is employed, and
mobile speeds of up to 200 km/h are supported. The high performance is
achieved by the use of long-term channel prediction, in both time and
frequency, scheduling among users, and smart antennas combined with
adaptive modulation and power control. The frequency band is 2-8 GHz. It
gives the ability for worldwide roaming, to access a cell anywhere.

Wireless mobile communications systems are uniquely identified by
"generation" designations. Introduced in the early 1980s, first generation
(1G) systems were marked by analog frequency modulation and used
primarily for voice communications. Second generation (2G) wireless
communications systems, which made their appearance in the late 1980s,
were also used mainly for voice transmission and reception. The wireless
system in widespread use today goes by the name of 2.5G, an "in between"
service that serves as a stepping stone to 3G. Whereas 2G communications
is generally associated with Global System for Mobile (GSM) service, 2.5G is
usually identified as being "fueled" by General Packet Radio Services (GPRS)
along with GSM. 3G systems, making their appearance in late 2002 and in
2003, are designed for voice and paging services, as well as interactive
media use such as teleconferencing, Internet access, and other services.

The problem with 3G wireless systems is bandwidth: these systems
provide only WAN coverage ranging from 144 kbps (for vehicle mobility
applications) to 2 Mbps (for indoor static applications). Segue to 4G, the
"next dimension" of wireless communication. 4G wireless uses
Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Radio Band
(UWB), millimeter wireless and smart antennas. A data rate of 20 Mbps is
employed, and mobile speeds of up to 200 km/h are supported. The
frequency band is 2-8 GHz. It gives the ability for worldwide roaming, to
access a cell anywhere.
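The OFDM transmit path mentioned above can be sketched end to end. The 64 subcarriers, QPSK mapping and 16-sample cyclic prefix below are illustrative choices, assuming NumPy:

```python
import numpy as np

def ofdm_modulate(bits, n_fft=64, cp_len=16):
    """Map bit pairs to QPSK points, place one point per subcarrier,
    inverse-FFT to the time domain and prepend a cyclic prefix."""
    pairs = np.asarray(bits).reshape(n_fft, 2)
    symbols = ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)
    time = np.fft.ifft(symbols)
    return np.concatenate([time[-cp_len:], time])  # cyclic prefix

def ofdm_demodulate(rx, n_fft=64, cp_len=16):
    """Receiver: discard the cyclic prefix, forward-FFT back to subcarriers."""
    return np.fft.fft(rx[cp_len:cp_len + n_fft])
```

Over an ideal channel the receiver recovers the transmitted QPSK points exactly; the cyclic prefix exists so that, over a real multipath channel, each subcarrier sees only a complex gain that a simple equalizer can undo.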
ABSTRACT

Adaptive Optics in Ground Based Telescopes

Adaptive optics is a new technology which is being used nowadays in
ground-based telescopes to remove atmospheric tremor and thus provide a
clearer and brighter view of stars seen through ground-based telescopes.
Without this system, the images obtained through telescopes on earth are
blurred by the turbulent mixing of air at different temperatures.

Adaptive optics in effect removes this atmospheric tremor. It brings
together the latest in computers, material science, electronic detectors, and
digital control in a system that warps and bends a mirror in a telescope to
counteract, in real time, the atmospheric distortion.

The advance promises to let ground-based telescopes reach their
fundamental limits of resolution and sensitivity, outperforming space-based
telescopes and ushering in a new era in optical astronomy. Finally, with this
technology, it will be possible to see gas-giant type planets in nearby solar
systems in our Milky Way galaxy. Although about 100 such planets have
been discovered in recent years, all were detected through indirect means,
such as the gravitational effects on their parent stars, and none has actually
been detected directly.

WHAT IS ADAPTIVE OPTICS?

Adaptive optics refers to optical systems which adapt to compensate for optical effects introduced by the medium between the object and its image. In theory, a telescope's resolving power is directly proportional to the diameter of its primary light-gathering lens or mirror. But in practice,
images from large telescopes are blurred to a resolution no better than
would be seen through a 20 cm aperture with no atmospheric blurring. At
scientifically important infrared wavelengths, atmospheric turbulence
degrades resolution by at least a factor of 10.
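The resolving-power claim above can be put in numbers with the standard Rayleigh criterion, theta = 1.22 * lambda / D. This is a minimal sketch (the 8 m aperture and 550 nm wavelength are illustrative values, not figures from the text) showing why a large telescope should, in principle, far outresolve a 20 cm aperture:

```python
import math

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: theta = 1.22 * lambda / D (radians), in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# Visible light (550 nm) through an 8 m telescope vs. a 20 cm aperture:
big = diffraction_limit_arcsec(550e-9, 8.0)     # ~0.017 arcsec
small = diffraction_limit_arcsec(550e-9, 0.20)  # ~0.69 arcsec
print(f"8 m: {big:.3f} arcsec, 20 cm: {small:.2f} arcsec")
```

Without adaptive optics, atmospheric turbulence limits a large telescope to roughly the 20 cm figure; with it, the theoretical limit becomes reachable.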

Space telescopes avoid problems with the atmosphere, but they are
enormously expensive and the limit on aperture size of telescopes is quite
restrictive. The Hubble Space Telescope, the world's largest telescope in orbit, has an aperture of only 2.4 metres, while terrestrial telescopes can have a
diameter four times that size.
ABSTRACT

ANN for misuse detection

Because of the increasing dependence that companies and government agencies have on their computer networks, the importance of protecting these systems from attack is critical. A single intrusion of a
computer network can result in the loss or unauthorized utilization or
modification of large amounts of data and cause users to question the
reliability of all of the information on the network. There are numerous
methods of responding to a network intrusion, but they all require the
accurate and timely identification of the attack.

The timely and accurate detection of computer and network system intrusions has always been an elusive goal for system administrators and information security researchers. The individual creativity of attackers, the wide range of computer hardware and operating systems, and the ever-changing nature of the overall threat to target systems have contributed to
the difficulty in effectively identifying intrusions. While the complexities of
host computers already made intrusion detection a difficult endeavor, the
increasing prevalence of distributed network-based systems and insecure
networks such as the Internet has greatly increased the need for intrusion
detection.

There are two general categories of attacks which intrusion detection technologies attempt to identify: anomaly detection and misuse detection. Anomaly detection identifies activities that vary from established patterns for users, or groups of users. Anomaly detection typically involves the
creation of knowledge bases that contain the profiles of the monitored
activities.

The second general approach to intrusion detection is misuse detection. This technique involves the comparison of a user's activities with
the known behaviors of attackers attempting to penetrate a system. While
anomaly detection typically utilizes threshold monitoring to indicate when a
certain established metric has been reached, misuse detection techniques
frequently utilize a rule-based approach. When applied to misuse detection,
the rules become scenarios for network attacks. The intrusion detection
mechanism identifies a potential attack if a user's activities are found to be
consistent with the established rules. The use of comprehensive rules is
critical in the application of expert systems for intrusion detection.
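The rule-based approach described above can be sketched in a few lines. This is a minimal illustration, not a real intrusion detection system: the rule names, thresholds and event format are all hypothetical.

```python
# Hypothetical attack-signature rules: each maps a named scenario to a
# predicate over a stream of user-activity events.
RULES = {
    "brute_force_login": lambda events: sum(
        1 for e in events if e["action"] == "login_failed") >= 5,
    "port_scan": lambda events: len(
        {e["port"] for e in events if e["action"] == "connect"}) >= 20,
}

def detect_misuse(events):
    """Return the names of rules whose attack scenario matches the activity."""
    return [name for name, rule in RULES.items() if rule(events)]

# Six failed logins trip the brute-force scenario:
activity = [{"action": "login_failed", "port": 22}] * 6
print(detect_misuse(activity))  # ['brute_force_login']
```

A potential attack is flagged exactly when a user's activities are consistent with an established rule, mirroring the paragraph above.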
ABSTRACT

Cellular Communications

Roke Manor Research is a leading provider of mobile telecommunications technology for both terminals and base stations. We add
value to our clients' projects by reducing time-to-market and lowering
production costs, and provide lasting benefits through building long-term
relationships and working in partnership with our customers.

We have played an active role in cellular communications technology since the 1980s, working initially in GSM and more recently in the definition
and development of 3G (UMTS). Roke Manor Research has over 200
engineers with experience in designing hardware and software for 3G
terminals and base stations and is currently developing technology for 4G
and beyond.

We are uniquely positioned to provide 2G, 3G and 4G expertise to our customers.

The role of Roke Manor Research engineers in standardisation bodies (e.g. ETSI and 3GPP) provides us with intimate knowledge of all the 2G and 3G standards (GSM, GPRS, EDGE, UMTS FDD (WCDMA) and TD-SCDMA). Our engineers are currently contributing to the evolution of 3G
standards and can provide up-to-the-minute implementation advice to
customers.
ABSTRACT

Wireless local loop (WLL)

Wireless local loop (WLL) is a system that connects subscribers to the public switched telephone network (PSTN) using radio signals as a substitute for copper for all or part of the connection between the subscriber and the switch. Using a wireless link to provide last-mile
connectivity not only reduces the construction period but also reduces
installation and operating costs. Wireless loops are expected to
radically alter the traditional fixed telephone network and make rural telephony cost-effective to implement. In developing economies,
WLL is expected to help unlock competition in the local loop, enabling
new operators to bypass existing wire line networks to deliver POTS
and data access.

Spectrum has become a very scarce and costly resource in the present scenario. Its efficient use is essential to provide telecom services at a reasonable price, especially in rural areas. As the available frequency spectrum is limited, we have to divide the geographical area into small cells so that we can re-use the available frequency band after a certain distance; this is known as re-use of frequencies, and the distance is known as the re-use distance. The systems using such technology are known as cellular systems.
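The re-use distance mentioned above follows from cell geometry. As a sketch, the standard hexagonal-cell relation D = R * sqrt(3N) (not stated in the text; the 2 km radius and 7-cell cluster are illustrative assumptions) gives:

```python
import math

def reuse_distance(cell_radius_km: float, cluster_size: int) -> float:
    """Hexagonal-cell relation D = R * sqrt(3 * N) between the co-channel
    re-use distance D, the cell radius R, and the cluster size N."""
    return cell_radius_km * math.sqrt(3 * cluster_size)

# A 7-cell cluster with 2 km cells re-uses a frequency ~9.2 km away:
print(round(reuse_distance(2.0, 7), 1))  # 9.2
```

Smaller cells or larger clusters trade capacity against co-channel interference, which is the engineering core of cellular (and hence WLL) planning.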

WLL technology is basically cellular technology. Cellular technologies can be divided into two broad categories, ‘Macro-Cellular’ and ‘Micro-Cellular’, based on coverage or cell area. Systems based on Macro-Cellular technology, such as GSM/CDMA/DAMPS, were adapted from cellular mobile systems and have cell radii of a few kilometers in urban environments, whereas systems based on Micro-Cellular architecture, such as DECT (Digital Enhanced Cordless Telecommunications), are an extension of the Cordless Telephone System and have cell radii of a few hundred meters in typical urban environments.
ABSTRACT

DD Using Bio-robotics

In order to measure quantitatively the neuro-psychomotor conditions of an individual, with a view to subsequently detecting his/her state of health, it is necessary to obtain a set of parameters such as reaction time, speed, strength and tremor. By processing these parameters through the use of fuzzy logic it is possible to monitor an individual's state of health, i.e. whether he/she is healthy or affected by a particular pathology such as Parkinson's disease, dementia, etc.
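The fuzzy-logic processing mentioned above can be sketched with a single membership function. This is only an illustration of the technique: the tremor measure, thresholds and set names are hypothetical, not taken from the DDX system.

```python
# A minimal fuzzy-logic sketch (hypothetical thresholds): classify a tremor
# amplitude reading into overlapping 'normal' and 'pathological' sets.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess_tremor(amplitude_mm: float) -> dict:
    return {
        "normal": triangular(amplitude_mm, -1.0, 0.0, 2.0),
        "pathological": triangular(amplitude_mm, 1.0, 4.0, 7.0),
    }

print(assess_tremor(1.5))  # partial membership in both sets
```

The point of the fuzzy approach is exactly this partial membership: a reading can be somewhat normal and somewhat pathological at once, which suits gradual conditions such as Parkinson's disease.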

The set of parameters obtained is useful not only to diagnose neuro-motor pathologies (e.g. Parkinson's disease), but also to assess
general everyday health or to monitor sports performance; moreover,
continuous use of the device by an individual for health-monitoring
purposes, not only allows for detection of the onset of a particular
pathology but also provides greater awareness of how lifestyle or certain habits tend to have repercussions on psycho-physical
well-being. Since an individual's state of health should be continually
monitored, it is essential that he or she can manage the test
autonomously without his/her emotional state being influenced:
autonomous testing is important, as the individual is likely to be more
relaxed thus obviating emotional problems. The new system has been
designed with reference to the biomechanical characteristics of the
human finger.

Disease Detector (DDX) is a new bio-robotic device with a fuzzy-based control system for the detection of neuro-motional and psychophysical health conditions. The initial experimental system (DD1) and the current system (DD2) are not easily portable and, even though they are very reliable, cannot estimate the patient's health beyond the typical parameters of Parkinson's disease, nor are they able to remotely transmit such diagnoses.

This new bio-robotic system is exploited in order to obtain an intelligent and reliable detector supported by a very small and portable device, with a simple joystick with a few buttons, a liquid-crystal display (LCD), and a simple interface for remote communication of the diagnosis. Because of its portability, it may be adopted for earth and space applications in order to measure reactions to external stimuli.
ABSTRACT

Multiple Access Techniques

Multiple access schemes are used to allow many simultaneous users to share the same fixed-bandwidth radio spectrum. In any radio system, the bandwidth that is allocated to it is always limited. For mobile phone systems the total bandwidth is typically 50 MHz, which is split in half to provide the forward and reverse links of the system. Sharing of the spectrum is required in order to increase the user capacity of any wireless network. FDMA, TDMA and CDMA are the three major methods of sharing the available bandwidth among multiple users in wireless systems.

For systems using Frequency Division Multiple Access (FDMA), the available bandwidth is subdivided into a number of narrower-band channels. Each user is allocated a unique frequency band in which to transmit and receive. During a call, no other user can use the same frequency band. Each user is allocated a forward link channel (from
the base station to the mobile phone) and a reverse channel (back to
the base station), each being a single way link. The transmitted signal
on each of the channels is continuous allowing analog transmissions.
The channel bandwidth used in most FDMA systems is typically low
(30 kHz), as each channel only needs to support a single user. FDMA is
used as the primary subdivision of large allocated frequency bands and
is used as part of most multi-channel systems. Figures below show the
allocation of the available bandwidth into several channels.
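The channel counts implied by the figures above are easy to check. Using the text's 50 MHz total (25 MHz per direction) and 30 kHz channels:

```python
def fdma_channels(link_bandwidth_hz: float, channel_bandwidth_hz: float) -> int:
    """Number of FDMA channels that fit in one link's allocation."""
    return int(link_bandwidth_hz // channel_bandwidth_hz)

# 50 MHz total, split in half per direction, with 30 kHz channels:
forward_link_hz = 25e6
print(fdma_channels(forward_link_hz, 30e3))  # 833
```

In a deployed system, guard bands and control channels would reduce this count somewhat; the sketch ignores them.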

TDMA systems transmit data in a buffer-and-burst method; thus the transmission on each channel is non-continuous. The input data to be transmitted is buffered over the previous frame and burst transmitted at a higher rate during the time slot for the channel. TDMA cannot send analog signals directly due to the buffering required, and thus is only used for transmitting digital data. TDMA can suffer from
multipath effects as the transmission rate is generally very high,
resulting in significant inter-symbol interference.
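The buffer-and-burst behaviour explains why TDMA transmission rates are high: each user's data must be compressed into one slot per frame. A minimal sketch (the 13 kbps user rate and 8 slots are illustrative, GSM-like assumptions, and guard times are ignored):

```python
def tdma_burst_rate(user_rate_bps: float, slots_per_frame: int) -> float:
    """Each user's buffered data must be burst at roughly user_rate * slots
    so that all users' slots fit into one frame (guard times ignored)."""
    return user_rate_bps * slots_per_frame

# Hypothetical carrier: 8 users at 13 kbps each must burst at ~104 kbps:
print(tdma_burst_rate(13e3, 8))  # 104000.0
```

It is this high over-the-air burst rate, not the per-user rate, that makes TDMA vulnerable to multipath-induced inter-symbol interference.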
ABSTRACT

Smart NoteTaker

The Smart NoteTaker is a helpful product that satisfies the needs of people in today's technological and fast-paced life. This product can be used in many ways. The Smart NoteTaker lets people who are busy with other work take fast and easy notes. With its help, people will be able to write notes in the air while being busy with their work. The written note will be stored on the memory chip of the pen, and can be read in a digital medium after the job is done. This will save time and facilitate life.

The Smart NoteTaker is also helpful for the blind, allowing them to think and write freely. Another place where the product can play an important role is when two people talk on the phone. The subscribers are apart from each other while they talk, and they may want to use figures or text to understand each other better. It is also especially useful for instructors in presentations. An instructor may not want to present the lecture in front of the board; the drawn figure can be processed and sent directly to the server computer in the room.
ABSTRACT

SUGGESTION SCHEME SYSTEM

‘SUGGESTION SCHEME SYSTEM’ is all about giving your suggestions and sharing your new, innovative ideas to make your department work better. The next section is ‘share a new idea or concept’, in which you can share an idea or concept that you have seen implemented in some other organization, or have read about in some article, and that you think should be implemented in your department as well.

The next section enables you to view all the suggestions, ideas or concepts, and those suggestions which have been implemented by the department. Here you can view department-wise, by selecting the department you want to view, or date-wise, i.e. within a certain interval of time (e.g. 07/01/2006-08/02/2006). The next section is for the administrator: he has to give his username and password to access it, he can change his id or password, and only he can delete any suggestion, idea, concept or shared implemented idea if there is a need to do so. Finally, the last section is user help: if the user finds any problem in using this system, he can click on help and try to solve his problem.

In this project the languages used are JSP (JavaServer Pages) and HTML, and the database used is MS Access. The name of the DSN is ‘swa’. Since the application involves Java code, it can be used on Windows as well as on the Linux platform. Advantages of using JSP and the MS Access database are mentioned later in our section. The web server used to create an environment for running JSP files is the Jakarta Apache Tomcat server, and the JDK is also used to execute Java code. The pages are created and designed using HTML.
ABSTRACT

Cyberterrorism

Cyberterrorism is a new terrorist tactic that makes use of information systems or digital technology, especially the Internet, as either an instrument or a target. As the Internet becomes more a way of life with us, it is becoming easier for its users to become targets of cyberterrorists. The
number of areas in which cyberterrorists could strike is frightening, to say
the least.

The difference between the conventional approaches of terrorism and the new methods is primarily that it is possible to affect a large multitude of
people with minimum resources on the terrorist's side, with no danger to
him at all. We also glimpse into the reasons that caused terrorists to look
towards the Web, and why the Internet is such an attractive alternative to
them.

The growth of Information Technology has led to the development of this dangerous web of terror, for cyberterrorists could wreak maximum
havoc within a small time span. Various situations that can be viewed as
acts of cyberterrorism have also been covered. Banks are the most likely
places to receive threats, but it cannot be said that any establishment is
beyond attack. Tips by which we can protect ourselves from cyberterrorism
have also been covered which can reduce problems created by the
cyberterrorist.

We, as the Information Technology people of tomorrow, need to study and understand the weaknesses of existing systems, and figure out ways of
ensuring the world's safety from cyberterrorists. A number of issues here are
ethical, in the sense that computing technology is now available to the whole
world, but if this gift is used wrongly, the
consequences could be disastrous. It is important that we understand and
mitigate cyberterrorism for the benefit of society, try to curtail its growth, so
that we can heal the present, and live the future…
ABSTRACT

Electro Dynamic Tether

Tether is a word, which is not heard often. The word meaning of tether
is 'a rope or chain to fasten an animal so that it can graze within a certain
limited area'. We can see animals like cows and goats 'tethered' to trees and
posts.

In space, tethers have an application similar to their word meaning, but instead of animals, there are spacecraft and satellites. If a tether is connected between two spacecraft (one at a smaller orbital altitude and the other at a larger orbital altitude), momentum exchange can take place between them; the tether is then called a momentum-exchange space tether. A tether is deployed by pushing one
object up or down from the other. The gravitational and centrifugal forces
balance each other at the center of mass. Then what happens is that the
lower satellite, which orbits faster, tows its companion along like an orbital
water skier. The outer satellite thereby gains momentum at the expense of
the lower one, causing its orbit to expand and that of the lower to contract.
This was the original use of tethers.

But now tethers are being made of electrically conducting materials like aluminium or copper, and they provide additional advantages.
Electrodynamic tethers, as they are called, can convert orbital energy into
electrical energy. It works on the principle of electromagnetic induction. This
can be used for power generation. Also when the conductor moves through a
magnetic field, charged particles experience an electromagnetic force
perpendicular to both the direction of motion and field. This can be used for
orbit raising and lowering and debris removal. Another application of tethers
discussed here is artificial gravity inside spacecrafts.
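The electromagnetic-induction principle described above can be put in rough numbers with the motional-EMF relation V = v * B * L. The orbital values below are illustrative assumptions, not figures from the text, and velocity, field and tether are taken as mutually perpendicular:

```python
# Motional EMF along a conducting tether moving through Earth's magnetic
# field (illustrative LEO values; real geometry reduces the result).
def tether_emf(velocity_ms: float, field_tesla: float, length_m: float) -> float:
    return velocity_ms * field_tesla * length_m

v = 7700.0   # orbital speed in LEO, m/s
B = 3e-5     # Earth's magnetic field at LEO altitude, tesla
L = 5000.0   # tether length, m
print(f"{tether_emf(v, B, L):.0f} V")  # ~1155 V
```

A kilovolt-scale EMF along a kilometers-long conductor is what makes power generation, and conversely thrust for orbit raising and lowering, practical.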

NEED AND ORIGIN OF TETHERS

Space tethers have been studied theoretically since early in the 20th century, but it wasn't until 1974 that Giuseppe Colombo came up with the idea of using a long tether to support a satellite from an orbiting platform. That, however, was a simple momentum-exchange space tether. Now let's see what made scientists think of electrodynamic tethers.

Every spacecraft on every mission has to carry all the energy sources
required to get its job done, typically in the form of chemical propellants,
photovoltaic arrays or nuclear reactors. The sole alternative - delivery
service - can be very expensive.
ABSTRACT

Optical Satellite Communication

The European Space Agency (ESA) has programmes underway to place satellites carrying optical terminals in GEO orbit within the next decade. The first is the ARTEMIS technology demonstration satellite, which carries both a microwave terminal and the SILEX (Semiconductor-laser Inter-satellite Link Experiment) optical inter-orbit communications terminal. SILEX employs direct detection and GaAlAs diode laser technology; the optical antenna is a 25 cm diameter reflecting telescope.

The SILEX GEO terminal is capable of receiving data modulated onto an incoming laser beam at a bit rate of 50 Mbps, and is equipped with a
high power beacon for initial link acquisition together with a low divergence
(and unmodulated) beam which is tracked by the communicating partner.
ARTEMIS will be followed by the operational European data relay system
(EDRS) which is planned to have data relay Satellites (DRS). These will also
carry SILEX optical data relay terminals.

Once these elements of Europe's space infrastructure are in place, there will be a need for optical communications terminals on LEO satellites which are capable of transmitting data to the GEO terminals. A wide range of LEO spacecraft is expected to fly within the next decade, including earth observation and science, manned and military reconnaissance systems.

The LEO terminal is referred to as a user terminal, since it enables real-time transfer of LEO instrument data back to the ground, giving users access to the DRS. LEO instruments generate data over a range of bit rates, depending upon the function of the instrument. A significant proportion have data rates falling in the region around and below 2 Mbps, and the data would normally be transmitted via an S-band microwave IOL.
ABSTRACT

Optical Switching

Explosive information demand in the Internet world is creating enormous needs for capacity expansion in next-generation telecommunication networks. It is expected that data-oriented network traffic will double every year.

Optical networks are widely regarded as the ultimate solution to the bandwidth needs of future communication systems. Optical fiber links deployed between nodes are capable of carrying terabits of information, but the electronic switching at the nodes limits the bandwidth of a network. Optical switches at the nodes will overcome this limitation. With their improved efficiency and lower costs, optical switches provide the key both to managing the new capacity of Dense Wavelength Division Multiplexing (DWDM) links and to gaining a competitive advantage in the provision of new bandwidth-hungry services. However, in an optically switched network the challenge lies in overcoming signal impairment and network-related parameters. Let us discuss the present status, advantages, challenges and future trends in optical switches.

A fiber consists of a glass core and a surrounding layer called the cladding. The core and cladding have carefully chosen indices of refraction to ensure that the photons propagating in the core are always reflected at the interface with the cladding. The only way the light can enter and escape is through the ends of the fiber. A transmitter, either a light-emitting diode or a laser, sends electronic data that have been converted to photons over the fiber at a wavelength of between 1,200 and 1,600 nanometers.

Today fibers are pure enough that a light signal can travel for about 80 kilometers without the need for amplification. But at some point the signal still needs to be boosted. Electronic amplifiers were replaced by stretches of fiber infused with ions of the rare-earth element erbium. When these erbium-doped fibers are zapped by a pump laser, the excited ions revive a fading signal. They restore a signal without any optical-to-electronic conversion, and can do so for very high-speed signals sending tens of gigabits a second. Most importantly, they can boost the power of many wavelengths simultaneously.
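The 80 km figure above can be sanity-checked with the standard attenuation relation. Assuming ~0.2 dB/km, a typical value for modern single-mode fiber (an assumption, not a figure from the text):

```python
def remaining_power_fraction(length_km: float, atten_db_per_km: float) -> float:
    """Fraction of optical power left after length_km of fiber:
    P_out / P_in = 10 ** (-dB_loss / 10)."""
    return 10 ** (-(atten_db_per_km * length_km) / 10)

# An 80 km span at 0.2 dB/km loses 16 dB, leaving ~2.5% of launched power:
print(round(remaining_power_fraction(80, 0.2), 4))  # 0.0251
```

A few percent of the launched power is still detectable; beyond that, an erbium-doped amplifier is needed to revive the signal.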

Now, to increase the information rate, as many wavelengths as possible are jammed down a fiber, with each wavelength carrying as much data as possible.
ABSTRACT

Optical Packet Switching Network

Within today's Internet, data is transported using wavelength-division multiplexed (WDM) optical fiber transmission systems that carry 32-80 wavelengths modulated at 2.5 Gb/s and 10 Gb/s per wavelength. Today's largest routers and electronic switching systems need to handle close to 1 Tb/s to redirect incoming data from deployed WDM links. Meanwhile, next-generation commercial systems will be capable of single-fiber transmission supporting hundreds of wavelengths at 10 Gb/s, and experiments worldwide have demonstrated 10 Tb/s transmission.
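The aggregate capacities implied by these figures are simple products. Using the numbers from the paragraph above:

```python
def wdm_capacity_gbps(wavelengths: int, rate_gbps: float) -> float:
    """Aggregate single-fiber capacity of a WDM link."""
    return wavelengths * rate_gbps

# 80 wavelengths at 10 Gb/s already approach the ~1 Tb/s that today's
# largest routers must redirect from a single deployed link:
print(wdm_capacity_gbps(80, 10.0))  # 800.0 (Gb/s)
```

With hundreds of wavelengths, or 40-160 Gb/s per wavelength, the per-fiber total quickly outruns what electronic switching can terminate.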

The ability to direct packets through the network when single-fiber transmission capacities approach this magnitude may require electronics to run at rates that outstrip Moore's law. The bandwidth mismatch between fiber transmission systems and electronic routers will become more severe when we consider that future routers and switches will potentially terminate hundreds of wavelengths, and that the bit rate per wavelength will increase beyond 40 Gb/s to 160 Gb/s. Even with significant advances in electronic processor speed, electronic memory access times only improve at a rate of approximately 5% per year, an important data point since memory plays a key role in how packets are buffered and directed through a router.

Additionally, opto-electronic interfaces dominate the power dissipation, footprint and cost of these systems, and do not scale well as the port count and bit rate increase. Hence it is not difficult to see that the
process of moving a massive number of packets through the multiple layers
of electronics in a router can lead to congestion and exceed the performance
of electronics and the ability to efficiently handle the dissipated power.

In this article we review the state of the art in optical packet switching and, more specifically, the role optical signal processing plays in performing key functions. We describe how all-optical wavelength converters can be implemented as optical signal processors for packet switching, in terms of their processing functions, wavelength-agile steering capabilities, and signal regeneration capabilities. Examples of how wavelength-converter-based processors can be used to implement asynchronous packet switching functions are reviewed. Two classes of wavelength converters will be touched on: monolithically integrated semiconductor optical amplifier (SOA) based and nonlinear fiber based.
ABSTRACT

Robotics

Over the course of human history, the emergence of certain new technologies has globally transformed life as we know it. Disruptive
technologies like fire, the printing press, oil, and television have dramatically
changed both the planet we live on and mankind itself, most often in
extraordinary and unpredictable ways. In pre-history these disruptions took
place over hundreds of years. With the time compression induced by our
rapidly advancing technology, they can now take place in less than a
generation.

We are currently at the edge of one such event. In ten years robotic
systems will fly our planes, grow our food, explore space, discover life
saving drugs, fight our wars, sweep our homes and deliver our babies. In
the process, this robotics driven disruptive event will create a new 200
billion dollar global industry and change life as you now know it, forever.
Just as my children cannot imagine a world without electricity, your children
will never know a world without robots. Come take a bold look at the future
and the opportunities for Mechanical Engineers that wait there.

The Three Laws of Robotics are:

1. A robot may not injure a human being, or, through inaction, allow a
human being to come to harm.
2. A robot must obey the orders given it by human beings except where
such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
ABSTRACT

Sensors on 3D Digitization

Digital 3D imaging can benefit from advances in VLSI technology in order to accelerate its deployment in many fields like visual communication
and industrial automation. High-resolution 3D images can be acquired using
laser-based vision systems. With this approach, the 3D information becomes
relatively insensitive to background illumination and surface texture.
Complete images of visible surfaces that are rather featureless to the human
eye or a video camera can be generated. Intelligent digitizers will be capable
of measuring accurately and simultaneously colour and 3D.

Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available.

Passive vision attempts to analyze the structure of the scene under ambient light. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.
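The "geometry can be computed" step reduces, for a rectified stereo pair, to the triangulation relation Z = f * B / d. A minimal sketch (the camera parameters below are hypothetical, not from the text):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a matched point from a rectified stereo pair: Z = f * B / d,
    where f is the focal length in pixels, B the camera baseline, and d the
    horizontal disparity between the two matched image points."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 800 px focal length, 10 cm baseline, 16 px disparity:
print(stereo_depth(800.0, 0.10, 16.0))  # 5.0 (metres)
```

Small disparities map to large depths, which is why passive stereo loses precision on distant or featureless surfaces and why the active, laser-based approach below is attractive.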

Sensors For 3D Imaging

The sensors used in the autosynchronized scanner include

1. Synchronization Circuit Based Upon Dual Photocells

This sensor ensures the stability and the repeatability of range measurements in environments with varying temperature. Discrete
implementations of the so-called synchronization circuits have posed many
problems in the past. A monolithic version of an improved circuit has been
built to alleviate those problems.

2. Laser Spot Position Measurement Sensors

High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively
insensitive to background illumination and surface texture. Complete images
of visible surfaces that are rather featureless to the human eye or a video
camera can be generated.
ABSTRACT

Signaling System

Signaling System 7 (SS7) is an architecture for performing out-of-band signaling in support of the call-establishment, billing, routing, and information-exchange functions of the public switched telephone network (PSTN). It identifies functions to be performed by a signaling-system
network and a protocol to enable their performance.

What is Signaling?

Signaling refers to the exchange of information between call components required to provide and maintain service.

As users of the PSTN, we exchange signaling with network elements all the time. Examples of signaling between a telephone user and the telephone
network include: dialing digits, providing dial tone, accessing a voice
mailbox, sending a call-waiting tone. SS7 is a means by which elements of
the telephone network exchange information. Information is conveyed in the
form of messages. SS7 messages can convey information such as:
" I'm forwarding to you a call placed from 212-555-1234 to 718-555-5678.
Look for it on trunk 067.
" Someone just dialed 800-555-1212. Where do I route the call?
" The called subscriber for the call on trunk 11 is busy. Release the call and
play a busy tone.
" The route to XXX is congested. Please don't send any messages to XXX
unless they are of priority 2 or higher.
" I'm taking trunk 143 out of service for maintenance.
" SS7 is characterized by high-speed packet data and out-of-band signaling.

What is Out-of-Band Signaling?

Out-of-band signaling is signaling that does not take place over the
same path as the conversation.

We are used to thinking of signaling as being in-band. We hear dial tone, dial digits, and hear ringing over the same channel on the same pair of
wires. When the call completes, we talk over the same path that was used
for the signaling. Traditional telephony used to work in this way as well. The
signals to set up a call between one switch and another always took place
over the same trunk that would eventually carry the call. Signaling took the form of a series of multi-frequency (MF) tones, much like touch-tone dialing, between switches.
ABSTRACT

VLSI Computations

Over the past four decades the computer industry has experienced
four generations of development, physically marked by the rapid changing of
building blocks from relays and vacuum tubes (1940-1950s) to discrete
diodes and transistors (1950-1960s), to small- and medium-scale integrated
(SSI/MSI) circuits (1960-1970s), and to large- and very-large-scale
integrated (LSI/VLSI) devices (1970s and beyond). Increases in device
speed and reliability and reductions in hardware cost and physical size have
greatly enhanced computer performance. However, better devices are not
the sole factor contributing to high performance. Ever since the stored-
program concept of von Neumann, the computer has been recognized as
more than just a hardware organization problem. A modern computer
system is really a composite of such items as processors, memories, functional units, interconnection networks, compilers, operating systems, peripheral devices, communication channels, and database banks.

To design a powerful and cost-effective computer system and to devise efficient programs to solve a computational problem, one must understand
the underlying hardware and software system structures and the computing
algorithm to be implemented on the machine with some user-oriented
programming languages. These disciplines constitute the technical scope of
computer architecture. Computer architecture is really a system concept
integrating hardware, software algorithms, and languages to perform large
computations. A good computer architect should master all these disciplines.
It is the revolutionary advances in integrated circuits and system
architecture that have contributed most to the significant improvement of
computer performance during the past 40 years. In this section, we review
the generations of computer systems and indicate the general trends in the
development of high performance computers.
Generations of Computer Systems

The division of computer systems into generations is determined by
the device technology, system architecture, processing mode, and languages
used. We consider each
generation to have a time span of about 10 years. Adjacent generations may
overlap by several years, as demonstrated in the figure. The long time span is
intended to cover both development and use of the machines in various
parts of the world. We are currently in the fourth generation, while the fifth
generation has not yet materialized.
ABSTRACT

Wireless Intelligent Network (WIN)

The wireless intelligent network (WIN) is a concept being developed by the
Telecommunications Industry Association (TIA) Standards Committee
TR45.2. The charter of this
committee is to drive intelligent network (IN) capabilities, based on interim
standard (IS)-41, into wireless networks. IS-41 is a standard currently being
embraced by wireless providers because it facilitates roaming. Basing WIN
standards on this protocol enables a graceful evolution to an IN without
making current network infrastructure obsolete.

Today's wireless subscribers are much more sophisticated
telecommunications users than they were five years ago. No longer satisfied
with just completing a clear call, today's subscribers demand innovative
ways to use the wireless phone. They want multiple services that allow them
to handle or select incoming calls in a variety of ways.
Enhanced services are very important to wireless customers. They have
come to expect, for instance, services such as caller ID and voice messaging
bundled in the package when they buy and activate a cellular or personal
communications service (PCS) phone. Whether prepaid, voice/data
messaging, Internet surfing, or location-sensitive billing, enhanced services
will become an important differentiator in an already crowded, competitive
service-provider market.

Enhanced services will also entice potential new subscribers to sign
up for service and will drive up airtime through increased usage of PCS or
cellular services. As the wireless market becomes increasingly competitive,
rapid deployment of enhanced services becomes critical to a successful
wireless strategy.
Intelligent network (IN) solutions have revolutionized wireline networks.
Rapid creation and deployment of services has become the hallmark of a
wireline network based on IN concepts. Wireless intelligent network (WIN)
will bring those same successful strategies into the wireless networks.
ABSTRACT

Wireless LAN Security

Wireless local area networks (WLANs) based on the Wi-Fi (wireless
fidelity) standards are one of today's fastest-growing technologies in
businesses, schools, and homes, for good reasons. They provide mobile
access to the Internet and to enterprise networks so users can remain
connected away from their desks. These networks can be up and running
quickly when there is no available wired Ethernet infrastructure. They can be
made to work with a minimum of effort without relying on specialized
corporate installers.

Some of the business advantages of WLANs include:


" Mobile workers can be continuously connected to their crucial applications
and data;
" New applications based on continuous mobile connectivity can be
deployed;
" Intermittently mobile workers can be more productive if they have
continuous access to email, instant messaging, and other applications;
" Impromptu interconnections among arbitrary numbe rs of participants
become possible.
" But having provided these attractive benefits, most existing WLANs have
not effectively addressed security-related issues.

THREATS TO WLAN ENVIRONMENTS

All wireless computer systems face security threats that can
compromise their systems and services. Unlike in a wired network, an
intruder does not need physical access to the network in order to pose the
following security threats:

Eavesdropping

This involves attacks against the confidentiality of the data being
transmitted across the network. In a wireless network, eavesdropping is the
most significant threat because an attacker can intercept transmissions over
the air from a distance, away from the company's premises.
ABSTRACT

Smart Note Taker

The Smart NoteTaker is a helpful product that meets the needs of
people in today's fast-paced, technological life. This product can be used in
many ways. The Smart NoteTaker enables fast and easy note-taking for
people who are busy with other tasks. With its help, people will be able to
write notes in the air while going about their work. The written note will be
stored on the pen's memory chip and can be read in digital form after the
job is done. This will save time and make life easier.

The Smart NoteTaker is also helpful for blind people, letting them
think and write freely. Another place where the product can play an
important role is when two people talk on the phone. The subscribers are
apart from each other during their conversation, and they may want to use
figures or text to understand each other better. It is also useful for
instructors giving presentations. An instructor may not want to present the
lecture at the board; the drawn figure can instead be processed and sent
directly to the server computer in the room. The server computer can then
broadcast the drawn shape over the network to all of the computers present
in the room. In this way, lectures are intended to be more efficient and fun.
The product will be simple but powerful: it will be able to sense the 3D
shapes and motions that the user tries to draw. The sensed information will
be processed, transferred to the memory chip, and then monitored on the
display device. The drawn shape can then be broadcast to the network or
sent to a mobile device.

An additional feature of the product will display previously taken
notes in an application program on the computer. This application program
can be a word processor or an image editor. The sensed figures drawn in
the air will be recognized, and with the help of the software we will write,
the desired character will be printed into the word document. If the
application is a paint-related program, the most similar shape will be chosen
by the program and printed on the screen.

Since Java applets are suitable for both drawings and strings, all of
these applications can be combined in a single Java program. The Java code
we develop will also be installed on the pen, so that the processor inside the
pen can type and draw the desired shape or text on the display panel.
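As a rough illustration of how the pen's sensed strokes might be represented
and shared with the computers in the room, the sketch below encodes a
stroke's points into a compact text payload that a server could broadcast and
a client could decode. The PenStroke class and the payload format are
hypothetical assumptions for illustration, not part of any actual Smart
NoteTaker implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a pen stroke as a list of sensed 2D points,
// serialized to a compact string so a server could broadcast it and
// a client could reconstruct the drawing.
public class PenStroke {
    private final List<int[]> points = new ArrayList<>();

    // Record one sensed point of the stroke.
    public void addPoint(int x, int y) {
        points.add(new int[] { x, y });
    }

    // Encode the stroke as "x1,y1;x2,y2;..." for transmission.
    public String encode() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < points.size(); i++) {
            if (i > 0) sb.append(';');
            sb.append(points.get(i)[0]).append(',').append(points.get(i)[1]);
        }
        return sb.toString();
    }

    // Decode a received payload back into a stroke on the client side.
    public static PenStroke decode(String payload) {
        PenStroke s = new PenStroke();
        if (payload.isEmpty()) return s;
        for (String p : payload.split(";")) {
            String[] xy = p.split(",");
            s.addPoint(Integer.parseInt(xy[0]), Integer.parseInt(xy[1]));
        }
        return s;
    }

    public int size() {
        return points.size();
    }

    public static void main(String[] args) {
        PenStroke stroke = new PenStroke();
        stroke.addPoint(0, 0);
        stroke.addPoint(10, 5);
        String payload = stroke.encode();           // what the server broadcasts
        PenStroke received = PenStroke.decode(payload); // what a client rebuilds
        System.out.println(payload + " -> " + received.size() + " points");
    }
}
```

A real system would of course carry this payload over sockets and add
timestamps and pen-up/pen-down events, but the round trip above captures
the basic idea of broadcasting a drawn shape as data rather than as an image.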
ABSTRACT

Anthropomorphic Robot hand

This paper presents an anthropomorphic robot hand called the Gifu
hand II, which has a thumb and four fingers, all the joints of which are
driven by servomotors built into the fingers and the palm. The thumb has
four joints with four degrees of freedom (DOF); each of the other fingers has
four joints with three DOF, giving 4 + 4 x 3 = 16 DOF over 20 joints in total;
and two axes of the joints near the palm cross orthogonally at one point, as
is the case in the human hand.

The Gifu hand II can be equipped with a six-axis force sensor at each
fingertip and a developed distributed tactile sensor with 624 detecting points
on its surface. The design concepts and the specifications of the Gifu hand
II, the basic characteristics of the tactile sensor, and the pressure
distributions at the time of object grasping are described and discussed
herein. Our results demonstrate that the Gifu hand II has a high potential to
perform dexterous object manipulations like the human hand.

It is highly expected that forthcoming humanoid robots will execute
various complicated tasks via communication with a human user. The
humanoid robots will be equipped with anthropomorphic multifingered hands
very much like the human hand. We call this a humanoid hand robot.
Humanoid hand robots will eventually supplant human labor in the execution
of intricate and dangerous tasks in areas such as manufacturing, space, the
seabed, and so on. Further, the anthropomorphic hand will be provided as a
prosthetic application for handicapped individuals.

Many multifingered robot hands (e.g., the Stanford-JPL hand by
Salisbury et al. [1], the Utah/MIT hand by Jacobsen et al. [2], the JPL four-
fingered hand by Jau [3], and the Anthrobot hand by Kyriakopoulos et al.
[4]) have been developed. These robot hands are driven by actuators that
are located in a place remote from the robot hand frame and connected by
tendon cables. The elasticity of the tendon cable causes inaccurate joint
angle control, and the long wiring of tendon cables may obstruct the robot
motion when the hand is attached to the tip of the robot arm. Moreover,
these hands have been problematic commercial products, particularly in
terms of maintenance, due to their mechanical complexity.

To solve these problems, robot hands in which the actuators are built into
the hand (e.g., the Belgrade/USC hand by Venkataraman et al. [5], the
Omni hand by Rosheim [6], the NTU hand by Lin et al.
