
Neural Networks And Fuzzy Control



Abhishek Kumar Tiwari
Boda Bag Road, Geology and Mining Office,
Civil Lines, REWA (M.P.)
E-mail: abhi_sun02@yahoo.co.in

Kruti Neema
Vallabh Bhawan, Opp. St. Paul Primary School,
Indore (M.P.)
E-mail: talktokruti@yahoo.co.in

ABSTRACT
A Neural Network is basically a self-adjusting network whose output is consistent with the desired output. Once the network is 'trained', only the input data are provided to the network, which then 'recalls' the response it 'learned' during training. To add more flexibility to the definition of the system, to incorporate vague inputs, to describe general boundaries for the system, and hence to provide better control of the system, fuzzy logic is implemented with neural networks.
The paper briefly discusses the various methodologies required to adapt the synaptic weights, the different learning algorithms used to implement parallel distributed processing and robust computation, and the major applications of neuro-fuzzy systems. The paper also brings forth the shortcomings of the various algorithms and techniques currently used in numerous applications, while proposing other methods for more efficient control. In addition, the paper demonstrates some fuzzy parameters and principles in a neural network which add user flexibility and robustness to the system.

KEYWORDS
Neural network, modeling, learning, learning algorithm, training, fuzzy logic.

1. INTRODUCTION
An ANN is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. It works along two directions. Firstly, it deals with understanding the anatomy and functioning of the human brain in order to define models adherent to its physical behavior. Secondly, it aims at computing on the basis of the parallel and distributed structure of the brain. [7.] This is implemented by a process of self-adjustment of the weights, or inter-connection strengths, known as Learning. There exist robust learning algorithms that guarantee convergence in the presence of uncertainties (in inputs). This imprecision in the input values is dealt with by fuzzy logic, which usually uses IF-THEN rules or equivalent constructs. The neural network automatically generates and updates the fuzzy logic governing rules and membership functions, while the fuzzy logic infers and provides crisp or defuzzified output when fuzzy parameters exist. Thus neural networks and fuzzy control work hand in hand.

2. BIOLOGICAL NEURAL NETWORKS
The biological neural network is an interlinked structure of billions of neurons, each of which is linked to thousands of other neurons, either fully or partially connected. Neurons are complex cells that respond to electrochemical signals. They consist of a nucleus, a cell body, numerous dendritic links which provide "interconnection strengths" from other neurons through synapses, and an axon trunk which carries an action potential output to other neurons through synapses. There are two types of connections: excitatory and inhibitory. If the action potential value exceeds its threshold value, the neuron fires and thus either excites or inhibits other neurons. [1.]

Copyright © 2006
Paper Identification Number: SC-3.7
This peer-reviewed paper has been published by the Pentagram Research Centre (P) Limited. Responsibility for the contents of this paper rests upon the authors and not upon Pentagram Research Centre (P) Limited. Copies can be obtained from the company.

International Conference on Systemics, Cybernetics and Informatics

3. NEED OF NEURAL NETWORKS
Conventional computers of recent times use an algorithmic approach to solve a problem. The problem-solving capability of these computers is thus restricted to only what they already know. Therefore, in the development of intelligent machines for application areas such as information processing, control applications and communication networks, this approach proves to be a major hurdle. [7.] Moreover, computer power, in terms of storage capacity and speed, is continuously increasing, for which microprocessors that incorporate reduced instruction set computer architecture are used. In order to meet the ongoing demand of this exponential increase in performance/cost, more potent machines are required that can "think". And to create this "brain-like machine", theories need to be devised that can explain "intelligence". [6.]

4. NETWORK ARCHITECTURES
Architecture defines the network structure, that is, the number of artificial neurons in the network and their inter-connectivity. Network architecture is categorized into:
(i) Single layer and multilayer network.
A network with a single output layer and no intermediate hidden layers is known as a single layer network, while a network with one or more intermediate hidden layers between the input and output layers is termed a multilayer network.
(ii) Feed forward and feed back network.
A feed forward network allows signals to travel one way only, from input to output. Such networks tend to be straightforward; there is no feedback (no loops), and hence the output of any layer does not affect that same layer. A feed back network consists of a set of processing units, where the output of each unit is fed as input to all other units, including the same unit.
(iii) Fully connected and partially connected network.
A neural network is said to be fully connected if every node in each layer of the network is connected to every other node in the adjacent forward layer. If, however, some of the communication links are missing from the network, the network is said to be partially connected.

5. MODELING AND LEARNING
Neural networks process information using parallel decomposition of complex information into basic elements. Relationships can thus be established and stored and, similar to the brain, the network can use them later for updating or to achieve a certain desired response. Modeling of the network is done to match its problem-solving ways with those of the brain, and can be viewed as our attempt to approximate nature's creation.

CONCEPT OF MODELING
In artificial neural networks, modeling is achieved fundamentally by artificial neurons. A neuron has a set of 'n' inputs, each weighted by a connection strength or weight factor; a 'bias' term, i.e. a threshold value (which, when exceeded, makes the neuron fire); a non-linearity that acts on the activation signal produced; and an output response.
The purpose of the non-linear function is to ensure that the neuron's response is bounded, i.e., the neuron's actual response is conditioned or damped. Different non-linearity functions are used in the network:
(i) Linear or ramp function: also termed a hard-limiter, it is linear within upper and lower limits.
(ii) Threshold function.
(iii) Sigmoid function: the most popular function used; it is monotonic and differentiable.

5.1 LEARNING
The neural network resembles the human brain by acquiring knowledge through learning and by storing this knowledge within the synaptic weights. Learning is the process of determining the weights by adapting to an input stimulus and producing a desired response. When the actual output response is the same as the desired one, the network is said to have "acquired knowledge".
On the basis of the learning method used, neural networks are classified as [9.]:
(i) Fixed Networks
In these networks, the weights are fixed a priori according to the problem to be solved and so cannot be changed, i.e. dW/dt = 0.
(ii) Adaptive Networks
In these networks, learning methods can be applied to adjust the weights according to the problem, i.e. dW/dt ≠ 0.
The various learning methods are classified into:
(i) Supervised Learning
(ii) Unsupervised Learning
(iii) Reinforcement Learning
(iv) Stochastic Learning

(i) SUPERVISED LEARNING
The method of learning used to solve the problem of error convergence, i.e., the minimization of the error between the desired output and the actual computed output, is known as Supervised Learning. It is also called "learning with a teacher". The learning algorithms that utilize the approach of Supervised Learning are:
(a) LEARNING THROUGH ERROR CORRECTION
It depends upon the availability of the desired output for a given input. The minimization of error is resolved by various learning rules:
1. DELTA RULE: It is based upon the idea of continuous adjustment of the values of the weights such that the squared difference (error) between the target output value and the actual output is minimized. It is also known as the Widrow-Hoff learning rule or the LMS rule. [5.]
2. GRADIENT DESCENT RULE: The values of the weights are adjusted by an amount proportional to the first derivative of the error between the desired output and the actual output, with respect to the value of the weight. [5.]

(ii) UNSUPERVISED LEARNING
This method uses no teacher and is based upon local information only, as no target output exists. This learning self-organizes the data presented to the network. The paradigms of unsupervised learning are:
(a) HEBBIAN LEARNING:
Donald Hebb formulated that the change in the synaptic strengths is proportional to the correlation between the firing of the post- and pre-synaptic neurons. [2.]
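The neuron model described under CONCEPT OF MODELING (weighted sum of 'n' inputs, a threshold 'bias', and a bounding non-linearity) can be sketched as follows. This is an illustrative sketch only; the function names are ours, not the paper's.

```python
import math

def ramp(x, lo=-1.0, hi=1.0):
    """Linear (ramp) function: linear between the limits, clipped outside them."""
    return max(lo, min(hi, x))

def threshold(x):
    """Threshold (hard-limiter): fires only when the activation is positive."""
    return 1.0 if x > 0 else 0.0

def sigmoid(x):
    """Sigmoid: monotonic, differentiable, bounded in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias, f=sigmoid):
    """Weighted sum of the inputs minus the threshold (bias), passed through f."""
    activation = sum(w * a for w, a in zip(weights, inputs)) - bias
    return f(activation)
```

For example, `neuron([1.0, 0.0], [0.5, 0.5], 0.0, threshold)` fires because the activation 0.5 exceeds the zero threshold, while the sigmoid variant returns a damped value strictly between 0 and 1.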

(b) COMPETITIVE LEARNING:
When an input stimulus is applied, each output neuron competes with the others to produce the closest output signal to the target. This output then becomes dominant, and the others cease producing an output signal for that stimulus. [2.]

(iii) REINFORCEMENT LEARNING:
This method is also termed "learning with a critic". It is used to handle situations where the desired output for a given input is not known, and only a binary result indicating whether the result is right or wrong is available.

(iv) STOCHASTIC LEARNING:
This method involves the adjustment of the weights of a neural network in a probabilistic manner. It is used to determine the optimum weights of a multilayer feed-forward network by overcoming the local minima problem. [6.]

5.2 NETWORK MODELS AND THE LEARNING ALGORITHMS USED
(i) McCULLOCH-PITTS MODEL (MP Model)
This model is quite simple, with no learning or adaptation. The activation (x) is given by a weighted sum of its M input values (ai) and a threshold value (Θ). The output signal (s) is typically a non-linear function f(x) of the activation value x. The output is represented by:
Oi = f [ Σj=1..M (aij wij) − Θ ]
With the original MP model [7.], a binary function was used as the non-linear transfer function. The major drawback of this model was that the weights were fixed and no learning could be incorporated.

(ii) PERCEPTRON MODEL
This model was given by Frank Rosenblatt and consists of outputs from sensory units to a fixed set of association units, the outputs of which are fed to an MP neuron. The association units perform predetermined manipulations on their inputs. This model incorporates learning: the target output (T) is compared with the actual output (O) and the error (E) is used to adjust the weights.
E = T − O
The change in synaptic weights is calculated as:
Δw = μ [ T − f ( w(k)x ) ] x
Sigmoidal non-linearity is used in the multilayer perceptron model. [6.] The following deficiencies are encountered in this model:
(a) Different perceptrons need to be trained for different sets of input patterns.
(b) It cannot differentiate between two sets of patterns that are not linearly separable.

(iii) DELTA LEARNING ALGORITHM
This algorithm is based on the least-squared-error minimization method, and its objective is to express the difference between the actual and target outputs in terms of the inputs and weights. The least-squared error is defined by:
E = ½ (Ti − Oi)² = ½ [ Ti − f ( wi xi ) ]²
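The weight update Δw = μ [ T − f ( w(k)x ) ] x can be sketched for the case where f is a hard-limiter, in which case it reduces to the classic perceptron update. The AND data set and the learning rate are our illustrative choices, not the paper's.

```python
def train_delta(samples, lr=0.1, epochs=50):
    """Adjust each weight by lr * (target - output) * input for every
    (x, target) pair; the last input is a constant 1.0, so the last
    weight plays the role of the bias/threshold term."""
    step = lambda x: 1.0 if x > 0 else 0.0   # hard-limiter f
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for x, t in samples:
            o = step(sum(wi * xi for wi, xi in zip(w, x)))
            err = t - o                       # T - f(w.x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable AND problem; the third component is the bias input.
and_data = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
w = train_delta(and_data)
```

Because the AND patterns are linearly separable, the weights converge and classify all four patterns correctly; on a non-separable set such as XOR, the same loop would cycle forever, which is exactly deficiency (b) above.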

(iv) ADALINE
This model [6.] consists of trainable signed weights and +1 or −1 as inputs. The weighted sum is applied to a quantizer transfer function that restores the outputs to +1 or −1. Based on the mean-square learning algorithm, the weights are adjusted according to the error:
E = T − R

(v) WINNER-TAKES-ALL ALGORITHM
This algorithm is best suited to cases of competitive unsupervised learning where there is a single layer of N nodes and each node has its own set of weights wn. An input vector x is applied to all nodes, and each node provides an output On = Σj wnj xj. The node with the best response to the applied input vector x is declared the winner according to the winner-selection criterion:
On = max n=1,2,…,N (wn · x)
The change in the winner's weights is then calculated as:
Δwn = α(k) (x − wn)

(vi) BACK-PROPAGATION ALGORITHM
This algorithm is applied to multilayer feed-forward ANNs. During the training session of the network, a pair of patterns (Xk, Tk) is presented, where Xk is the input pattern and Tk is the target pattern. The Xk pattern causes output responses at each neuron in each layer and, hence, an actual output Ok at the output layer. At the output layer, the difference between the actual and target outputs yields an error signal that depends upon the weights of the neurons in each layer. This error is minimized, and new weight values are obtained.

(vii) COGNITRON AND NEOCOGNITRON MODELS
The neocognitron model is a hybrid hierarchical multilayer feed forward (and feedback) network that is an outgrowth of an earlier multilayer self-adapting neural model, proposed as a model of visual pattern recognition in the brain, called the cognitron model. The network consists of several stages of simple cell (S) and complex neuron (C) layer pairs arranged in rectangular planes of cells. The S layers act as feature detectors, while the C layers perform a type of feature blurring on the S cell outputs to make the network less sensitive to shift and deformation in image patterns. The neocognitron can learn in either a supervised or an unsupervised mode using competitive learning. Only the weights on the S layers are adaptable, while those on the C layers remain fixed.

(viii) ADAPTIVE RESONANCE THEORY PARADIGM
This is an unsupervised paradigm based on competitive learning [see 5.1(ii)(b)] and is consistent with cognitive and behavioral models. It has two main layers: the first is the input/comparison layer and the second is the output/recognition layer; the two interact extensively through feed forward and feedback connectivity. [5.]

(ix) HOPFIELD MODEL
This model conforms to the asynchronous nature of biological neurons. It is a more abstract, fully connected, random, asynchronous and symmetrically weighted network which accepts either bipolar (+1, −1) or binary (0, 1) inputs. [8.] The output of each processing element can be coupled back to the inputs of any other processing element except itself. It uses the sigmoid function as its non-linearity. Based on this model, an analog-to-digital converter was demonstrated.

(x) SELF-ORGANIZING MAP (SOM)
Developed by Teuvo Kohonen, the SOM is a clustering algorithm which creates a map of relationships among input patterns. During training, it finds the output node that has the least distance from the training pattern and then changes that node's weights to increase its similarity to the training pattern. The overall effect is to move the output nodes to "positions" that map the distribution of the training patterns. It has a single layer of nodes, and the output nodes do not correspond to known classes but to unknown clusters that the SOM finds in the data autonomously.
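The winner-takes-all step above (On = Σj wnj xj, select the maximum, then Δwn = α (x − wn) for the winner only) can be sketched directly; the two-node weight values are an illustrative choice.

```python
def winner_takes_all(nodes_w, x, alpha=0.5):
    """Each node responds with the dot product of its weights and x;
    only the winning node moves its weight vector toward the input."""
    responses = [sum(wj * xj for wj, xj in zip(w, x)) for w in nodes_w]
    winner = max(range(len(nodes_w)), key=lambda n: responses[n])
    nodes_w[winner] = [wj + alpha * (xj - wj)
                       for wj, xj in zip(nodes_w[winner], x)]
    return winner

# Two competing nodes; the input [0, 2] is closest to the second node.
weights = [[1.0, 0.0], [0.0, 1.0]]
win = winner_takes_all(weights, [0.0, 2.0])
```

Only the winner's weights move (here toward [0, 2]); the losing node is untouched, which is what makes repeated presentations carve the input space into clusters.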

(xi) CONTENT ADDRESSABLE MEMORY (CAM)
It is a matrix memory in which the patterns are written during the learning phase. While recalling, the input data pattern is presented on the data bus at all locations simultaneously. If matched, the CAM provides a confirmation signal and the address where the pattern is stored, hence providing match and no-match signals in a single operation. The CAM may be viewed as associating (mapping) data to addresses, i.e., to every datum in memory [7.] there corresponds some unique address, thus avoiding ambiguity. It may also be viewed as a data correlator: input data are correlated with the stored data in the CAM. It can be implemented with the help of RAM by using an iterative algorithm.

(xii) REGRESSION ANALYSIS
Regression analysis is used to fit a smooth curve to a number of sample data points which represent some continuously varying phenomenon. The fitting technique can be used to predict the values of one or more variables on the basis of the information provided by measurements of the other, independent variables. In regression analysis, the parameters defining the functional relationship are estimated using statistical criteria.

6. TYPES OF ANN
Artificial neural networks are broadly categorized into:
(A) PROBABILISTIC NEURAL NETWORK (PNN)
The PNN stores the training patterns to avoid the iterative training process. It is a classifier paradigm that instantly approximates the optimum boundaries between categories. It has two hidden layers: the first containing a dedicated node for each training pattern and the second containing a dedicated node for each class, connected on a class-by-class basis. Each new input is classified according to the weighted average of the closest training examples.

(B) TIME DELAY NEURAL NETWORK (TDNN)
A tapped delay line, or shift register, and a multilayer perceptron with the tapped outputs of the delay line as inputs constitute the time delay neural network. The output has a finite temporal dependence on the input:
u(k) = F[ x(k), x(k−1), …, x(k−n) ]
F being a non-linear function. When this function is a weighted sum, the TDNN is equivalent to a finite impulse response (FIR) filter, and when the output is fed back via a unit delay to the input, it is equivalent to an infinite impulse response (IIR) filter.

7. FUZZY LOGIC
Fuzzy logic was proposed by Lotfi Zadeh to generalize classical set theory and to deal with subsets of the universe which have no well-defined boundaries. Fuzzy language links natural language with computing (reasoning) through linguistic variables and quantifiers. The variables and quantifiers are mapped to fuzzy membership functions (possibility distributions) which assume values in the range [0, 1] ('0' corresponds to a member not included, '1' to one fully included, and values between 0 and 1 define fuzzy members). This process of changing an input value to a fuzzy value is called fuzzification. [9.]
Fuzzy logic replaces Boolean truth values with degrees of truth. Fuzzy truth represents membership in vaguely defined sets, not the likelihood of some event or condition. Thus, fuzzy logic is conceptually distinct from probability. [10.]
Fuzzy logic is well suited to low-cost implementations based on cheap sensors and low-resolution A/D converters in 4-bit or 8-bit microprocessors, and such systems can be easily upgraded.

7.1 FUZZY CONTROL
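The TDNN relation u(k) = F[x(k), x(k−1), …, x(k−n)] with F a weighted sum can be sketched as a tapped delay line feeding a dot product, which is exactly an FIR filter. The two-tap moving-average weights are an illustrative choice.

```python
from collections import deque

def make_tdnn_fir(weights):
    """Tapped delay line (shift register) feeding a weighted sum.
    With a plain weighted sum for F, the TDNN is an FIR filter."""
    taps = deque([0.0] * len(weights), maxlen=len(weights))
    def step(x_k):
        taps.appendleft(x_k)   # taps now hold x(k), x(k-1), ..., x(k-n)
        return sum(w * x for w, x in zip(weights, taps))
    return step

fir = make_tdnn_fir([0.5, 0.5])            # two-tap moving average
out = [fir(x) for x in [1.0, 2.0, 3.0]]    # filtered sequence
```

Each call shifts the register by one sample, so the output at step k depends on exactly the last n+1 inputs: the finite temporal dependence stated above. Feeding `out` back into the input through a unit delay would give the IIR case instead.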

A fuzzy controller consists of an input stage, a processing stage and an output stage. The input stage maps inputs via appropriate membership sets. The processing stage invokes each appropriate rule, generates a result for each input and combines the results. The output stage converts the combined result back into an output value. The most common shapes of membership functions are triangular, trapezoidal, etc. Logic rules are in the form of IF-THEN statements (the IF part is called the antecedent and the THEN part the consequent).
Antecedents are combined using fuzzy operators such as AND, OR and NOT: AND uses the minimum weight of the antecedents, OR uses the maximum value, and NOT takes the complementary value of an antecedent. [3.]
To define the result of a rule, the "MAX-MIN" inference method is used, in which the output membership function is given the truth value generated by the premise. Rules can be solved in parallel in hardware, or sequentially in software. The results of the rules are 'defuzzified' to a crisp value either by the Centroid method (the most popular), in which the center of mass of the result provides the crisp value, or by the Height method, which takes the value of the biggest contributor. In the centroid method, the values are OR'd, not added, and the results are combined using a centroid calculation. [11.]

7.2 BUILDING OF FUZZY CONTROLLER
The antecedents consist of logical combinations of the error and error-delta signals, while the consequent is a control command output. The rule outputs can be defuzzified using a discrete centroid computation. [9.]

8. NEURO-FUZZY COMPUTING SYSTEM
While fuzzy logic provides a close link between natural language and "approximate computational reasoning", fuzzy computing methods do not include the ability to learn adaptively, to perform associative memory feats, or to tolerate high levels of noise and pattern deformation, capabilities that are needed for tasks like perception, learning and predictive behavioral response. Thus, neural networks are merged conceptually with, and implemented alongside, fuzzy logic systems. This is known as SOFT COMPUTING. [9.]
The neuro-fuzzy system has five layers of neurons with selected feed-forward interconnections. [8.]

9. APPLICATIONS OF NEURO-FUZZY SYSTEMS AND THEIR LIMITATIONS
(a) Sensors in Chemical Engineering
Here the problem was to relate the values produced by ultrasound sensors to the actual physical characteristics of air bubbles in a fermenter. Since this was a mapping problem, a multilayer perceptron (MLP) was used. As little data was available for training, a simulation of the physical system was developed.
Suggestion: MLP nets are not robust, since the loss of neurons degrades the network; a more robust method is therefore needed.

(b) Financial Data Modeling and Prediction
The problem here was to predict whether a company would raise funds by issuing shares or by making debts; the same approach is also used in stock-market prediction. The data comprised many parameters describing the financial profiles and decisions of hundreds of companies, and a number of techniques were tried, such as MLP, Linear Regression and NRBF. But the setup and training of the network require skill, experience and patience.
Suggestion: The data proved inconsistent with accepted economic models, so the quality of the data must be assessed. There is no established track record for the reliability and robustness of such techniques. So a back-propagated neural network with two or more hidden layers and more variables should be used.

(c) Forecasting
Considering forecasting requirements, a differentiation must be made between predictive classification tasks, where the forecasted values are class memberships or probabilities of belonging to a certain class (i.e., binary predictors), and regression tasks, i.e., point predictions (with single-valued scales).
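The controller pipeline of Section 7.1 (fuzzification via triangular membership sets, MAX-MIN inference, centroid defuzzification) can be sketched end to end. This is a minimal illustration: the two rules, the membership set parameters and the output universe are hypothetical, not taken from the paper.

```python
def tri(a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Hypothetical linguistic terms for the error input and the control output.
neg, pos = tri(-2, -1, 0), tri(0, 1, 2)
out_low, out_high = tri(0, 2, 4), tri(4, 6, 8)

def control(error, rules, universe):
    """MAX-MIN inference: each rule's output set is clipped (min) at the truth
    value of its antecedent; the clipped sets are OR'd (max), and the crisp
    command is the discrete centroid of the combined shape."""
    def combined(y):
        return max(min(ante(error), cons(y)) for ante, cons in rules)
    num = sum(y * combined(y) for y in universe)
    den = sum(combined(y) for y in universe)
    return num / den if den else 0.0

rules = [(neg, out_low), (pos, out_high)]          # IF error is neg THEN low, ...
universe = [i * 0.1 for i in range(81)]            # output axis sampled over 0..8
u = control(-1.0, rules, universe)                 # error fully "neg"
```

With the error fully in the "neg" set, only the first rule fires, so the crisp output lands at the centroid of the low output set (2.0 here); intermediate errors blend both clipped sets before the centroid is taken.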

Suggestion: Distinct modeling approaches and preprocessing are thus required in financial modeling, since neural networks have not yet been established as a valid and reliable method in the business forecasting field at the strategic, tactical or operational level.

(d) Image Compression
Neural networks can accept a vast array of input at once and process it quickly, and so they are useful in image compression. A bottleneck-type network comprises input and output layers of equal size and an intermediate layer of smaller size in between. The ratio of the size of the input layer to the size of the intermediate layer is the compression ratio. The pixels fed into the input nodes must be reproduced at the output after compression. The outputs of the hidden layer are, however, decimal values between −1 and 1, and so would require a possibly infinite number of bits to represent exactly. Therefore, the image is quantized and encoded (compressed to about 1/10th of the original).
Suggestion: The encoding scheme used is not lossless. The original image cannot be retrieved exactly, because information is lost in the process of quantizing. Again, the actual results of the original compression cannot be seen. Also, the network needs to be trained continuously if the output is not of high quality.

(e) Intelligent Control
Neuro-fuzzy systems are used in many vehicular applications, including trains, smart automobiles and intelligent robots. The controller has to account for several variables, many of them non-linear.
Suggestion: Fuzzy logic control systems for controlling the idle speed of automotive engines are therefore required. This can be improved by using a radial basis function neural network with a Gaussian function.

9.1 LATEST APPLICATIONS
(a) Framsticks
Framsticks is a 3D life simulation project in which both the physical structure of the creatures and their control systems are evolved. Evolutionary algorithms are used, with selection, crossover and mutation. These features enable us to study the evolution of social behavior through synthetic modeling of the evolutionary forces that may have led to (co-operative or competitive) social behavior.

(b) In VLSI
VLSI provides a means of capturing truly complex behavior in a highly hierarchical fashion. Adaptation allows us to compensate for inaccuracies in the physical analog VLSI implementation, besides uncertainties and fluctuations in the system under optimization. Adaptive algorithms based on physical observation of the "performance" gradient in the parameter space are better suited to robust analog VLSI implementation than are algorithms based on a calculated gradient.

(c) Creatures: The World's Most Advanced Artificial Life
Creatures features the most advanced, genuine Artificial Life software ever developed in a commercial product, technology that has captured the imaginations of scientists worldwide.

10. LIMITATIONS OF NEURO-FUZZY SYSTEMS
(i) (a) Neural techniques are executed sequentially and are difficult to parallelize.
(b) When the quantity of data increases, the methods may suffer a combinatorial explosion.
(c) The learning process seems difficult to simulate in a symbolic system.
(ii) In a perceptron network,
(a) the output values of a perceptron can take on only one of two values (0, 1), due to the hard-limiter transfer function;
(b) perceptrons can only classify linearly separable sets of vectors [5.]; otherwise, learning will never reach a point where all vectors are classified properly.
(iii) In the computational approach of neural networks,
(a) when we try to solve a stochastic optimization problem [6.], we need to approximate, and hence one must decide how accurately to estimate a quantity before using it for updating. For a finite computing budget, one can either spend most of the budget on estimating each iterative step very accurately and settle for fewer steps, or estimate each step very poorly but use the budget to calculate many iterative steps.
(b) In many problems and techniques, the computations involved grow exponentially with the size of the problem, which renders such techniques impractical.
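The lossy step in the bottleneck scheme of (d) is the quantization of the hidden-layer outputs, which lie in [−1, 1]. A sketch of that step, with the 4-bit resolution and the 64-pixel/8-unit bottleneck chosen purely for illustration:

```python
def quantize(h, bits=4):
    """Map a hidden-layer output in [-1, 1] to an integer code on a fixed
    number of bits; this rounding is where the information is lost."""
    levels = 2 ** bits - 1
    return round((h + 1.0) / 2.0 * levels)

def dequantize(code, bits=4):
    """Recover an approximation of the hidden value from its integer code."""
    levels = 2 ** bits - 1
    return code / levels * 2.0 - 1.0

# Bottleneck geometry: 64 input pixels squeezed through 8 hidden units.
compression_ratio = 64 / 8
restored = dequantize(quantize(0.5))   # close to, but not exactly, 0.5
```

The round trip does not return the original value exactly, which is why the paper calls the scheme non-lossless: the reconstruction error is bounded by half a quantization step, and shrinking it requires more bits per hidden output, working against the compression.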

(c) Learning under dynamic constraints imposes difficulty in the identification of cause and effect, in the sense that the future desired output now depends on all past inputs, and it is not possible to know which past input deserves credit for the success of the current output. So dynamic programming must be invoked to convert the problem into a series of static learning problems.
(d) NP-hardness is a fundamental limitation [11.] on what computation can do. Quantifying heuristics and acquiring structural knowledge seem to be the only salvation for the effective solution of complex real problems. So human expertise proves to be better.
(iv) A neural network is used to obtain some information out of given data where other methods are not available. But sometimes general mathematical models [11.] can be simulated faster and more effectively; e.g., the drawing of a lottery does not need any past inputs, and similarly weather forecasting.

11. CONCLUSIONS
This paper has presented an overview of neuro-fuzzy systems, their uses, the learning methods used to train them, their limitations in various applications, and suggestions to rectify those limitations in order to make such systems more efficient from the implementation point of view. The major issues this paper has addressed are the scalability problem and the testing, verification and integration of neural network systems into the modern environment. It also states that neuro-fuzzy programs sometimes become unstable when applied to larger problems. The defence, nuclear and space industries are concerned about the issue of testing and verification. The mathematical theories used to guarantee the performance of an applied neural network are still under development.
As suggested, the solution for the time being may be to train and test these intelligent systems much as we do for humans. In addition, the paper also proposes to solve the problem of parallelism and sequential execution by implementing neural networks directly in hardware, which still needs a lot of development. This "programming" will require feedback from the user in order to be effective, but simple and "passive" sensors (e.g. fingertip sensors, gloves, or wristbands to sense pulse, blood pressure, skin ionization, and so on) can provide effective feedback into a neural control system, along with other variables which the system can learn to correlate with a person's response state.
Again, the paper has tried to put forward a number of possible alternatives that need to be applied to various modern-day applications of neuro-fuzzy systems so that they may serve the purpose they are designed for. It also conveys that genetic algorithms and artificial intelligence should be blended side by side to make the system faster, so that it can be implemented in VLSI design. The neuro-fuzzy system, which is indeed a powerful tool for realizing our daily needs and not just a far-out research trend, must be supported with efficient algorithms, because much remains to be done in this regard.

REFERENCES
[1.] Bose, K. N. & Liang, P. Neural Network Fundamentals with Algorithms and Applications.
[2.] Anderson, J. A. An Introduction to Neural Networks.
[3.] Driankov, D. & Hellendoorn, H. An Introduction to Fuzzy Control.
[4.] Hassoun, M. H. Fundamentals of Artificial Neural Networks.
[5.] Hagan & Beale. Neural Network Design.
[6.] Haykin, S. Neural Networks.
[7.] Kartalopoulos, S. V. Understanding Neural Networks and Fuzzy Logic.
[8.] Fu, L. Neural Networks in Computer Intelligence (TMH).
[9.] Patterson, D. W. Artificial Neural Networks: Theory and Applications.
[10.] Kosko, B. Neural Networks and Fuzzy Systems.
[11.] www.wikipedia.org
[12.] IEEE Special Issue 2002. A self-growing network that grows when required.
