A simple Neural Network

Broadly speaking, there are two methods for training an ANN, depending on the problem it must solve. A self-organizing ANN is exposed to large amounts of data and tends to discover patterns and relationships in that data. Researchers often use this type to analyze experimental data. A back-propagation ANN, conversely, is trained by humans to perform specific tasks.

The true power and advantage of neural networks lies in their ability to represent both linear and non-linear relationships and in their ability to learn these relationships directly from the data being modeled. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons.

Neural networks take a different approach to problem solving than that of conventional computers. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. But computers would be so much more useful if they could do things that we don't exactly know how to do.
The synapse

Figure 2.1: Components of a neuron

Basically, a neuron is a processing element (PE) in a brain's nervous system that receives and combines signals from other similar neurons through thousands of input paths called dendrites. If the combined signal is strong enough, the neuron "fires," producing an output signal along the axon, which connects to dendrites of thousands of other neurons. Each input signal coming along a dendrite passes through a synapse or synaptic junction, as shown. This junction is an infinitesimal gap in the dendrite which is filled with neurotransmitter fluid that either accelerates or retards the flow of electrical charges. These electrical signals flow through the nucleus. The adjustment of the impedance or conductance of the synaptic gap by the neurotransmitter fluid contributes to the "memory" or "learning process" of the brain.
Figure 2.2: The neuron model

It is also called a neuron, PE (processing element), node or cell. The input signals X1, X2, ..., Xn are normally continuous variables but can also be discrete pulses. Each input signal flows through a gain or weight, called a synaptic weight or connection strength, whose function is analogous to that of the synaptic junction in a biological neuron. The weights can be positive or negative, corresponding to "acceleration" or "inhibition" respectively of the flow of electrical signals in a biological cell. The summing node accumulates all the weighted input signals and then passes the result to the output through the activation function, which is usually non-linear in nature, as shown in the figure. Mathematically, the output expression can be given as:
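The expression itself did not survive in this copy. A standard form consistent with the summing-node and activation-function description above, assuming f denotes the activation function, would be:

```latex
y = f\left( \sum_{i=1}^{n} W_i X_i \right)
```

Here Wi is the synaptic weight on input Xi, matching the weighted sum X1W1 + X2W2 + ... used later for the MCP neuron.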
the current output. If the input pattern does not belong in the taught list of input patterns, the firing rule is used to determine whether to fire or not.
",T8 9 9 9;0 9;0 9;0 0 9;0 0 As an example of the way the firing rule is applied! take the pattern 909. &t differs from 999 in 0 element! from 990 in / elements! from 090 in : elements and from 000 in / elements. Therefore! the -nearest- pattern is 999 which belongs in the 9(taught set. Thus the firing rule re<uires that the neuron should not fire when the input is 990. "n the other hand! 900 is e<ually distant from two taught patterns that have different outputs and thus the output stays undefined (9;0 . 'y applying the firing in every column the following truth table is obtained7 508 9 9 9 9 0 0 0 0 5/8 5:8 ",T8 9 9 9 9 0 9 0 9 9 0 0 9;0 9 9 9;0 9 0 0 0 9 0 0 0 0
Applications
The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical.
Figure 3.3

For example, the network of figure 3.3 is trained to recognize the patterns T and H. The associated patterns are all black and all white respectively, as shown below.
If we represent black squares with 0 and white squares with 1, then the truth tables for the 3 neurons after generalization are:

Top neuron:
X11: 0  0  0  0    1  1  1  1
X12: 0  0  1  1    0  0  1  1
X13: 0  1  0  1    0  1  0  1
OUT: 0  0  1  1    0  0  1  1

Middle neuron:
X21: 0  0    0  0    1    1  1    1
X22: 0  0    1  1    0    0  1    1
X23: 0  1    0  1    0    1  0    1
OUT: 1  0/1  1  0/1  0/1  0  0/1  0

Bottom neuron:
X31: 0  0  0  0    1  1  1  1
X32: 0  0  1  1    0  0  1  1
X33: 0  1  0  1    0  1  0  1
OUT: 1  0  1  1    0  0  1  0

From these tables, the following associations can be extracted:
In this case, it is obvious that the output should be all blacks, since the input pattern is almost the same as the 'T' pattern.
Here also, it is obvious that the output should be all whites, since the input pattern is almost the same as the 'H' pattern.
Here, the top row is 2 errors away from the T and 3 from an H. So the top output is black. The middle row is 1 error away from both T and H, so the output is random. The bottom row is 1 error away from T and 2 away from H. Therefore the output is black. The total output of the network is still in favour of the T shape.
Figure 3.4: An MCP neuron

In mathematical terms, the neuron fires if and only if:

X1W1 + X2W2 + X3W3 + ... > T

The addition of input weights and of the threshold makes this neuron a very flexible and powerful one.
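This firing condition is a one-liner in code. A minimal sketch follows; the AND-gate weights and threshold are illustrative choices, not values from the text:

```python
def mcp_fires(inputs, weights, threshold):
    # Fires iff X1*W1 + X2*W2 + ... > T
    return sum(x * w for x, w in zip(inputs, weights)) > threshold

# Illustrative example: weights 1, 1 with threshold 1.5 realize a logical AND
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcp_fires((a, b), (1, 1), 1.5))
```

Changing only the weights and threshold (say, lowering the threshold to 0.5) turns the same neuron into an OR gate, which is exactly the flexibility the text refers to.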
4" Conclusion
The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm in order to perform a specific task; i.e. there is no need to understand the internal mechanisms of that task. They are also very well suited for real-time systems because of their fast response and computational times, which are due to their parallel architecture. Neural networks also contribute to other areas of research such as neurology and psychology. They are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain. Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility.