
Artificial Intelligence has drawn a great deal from the field of neuroscience.

Neuroscience is the study of the nervous system, particularly the brain. How the brain enables human beings to think has remained a mystery to the present day, but significant advances in the field have brought scientists closer to understanding the nature of the thought processes inside the brain.

Artificial Neural Networks


With little concrete information available about neural networks as such, Warren McCulloch and Walter Pitts set out in 1943 to explain the workings of the brain by demonstrating how individual neurons can communicate with others in a network. Drawing largely on Norbert Wiener's feedback theory, their paper on this atomic level of psychology so enthralled Marvin Minsky and Dean Edmonds that in 1951 they built the first ever neural network out of three hundred vacuum tubes and a surplus automatic pilot from a B-24 bomber [1]. In 1958 Professor Frank Rosenblatt of Cornell proposed the Perceptron. A little later, in 1969, Marvin Minsky and Seymour Papert released a book called Perceptrons in which they pointed out the linear nature of perceptron calculations. This killed much of the interest the perceptron had generated, and the first lull in neural network research was experienced. The field has gone through a number of these lulls: new methods are created, show brief promise, are over-promoted, and then suffer some setback. Scientists have nevertheless always come back to the technology because, despite the hype, it is a genuine attempt to model neural mechanisms.

Neural networks can be loosely separated into neural models, network models, and learning rules. The earliest mathematical models of the neuron predate McCulloch and Pitts, who developed the first network models to explain how signals pass from one neuron to another within a network. When a network is described as a feedforward or a feedback network, what is being described is how it connects neurons in one layer to neurons in the next. Wiener's work allowed McCulloch and Pitts to describe how these different connection types affect the operation of the network. In a feedforward network, the output of the network does not affect the operation of the layer that produces it.
In a feedback network, by contrast, the output of a later layer is fed back into an earlier layer and can affect that layer's output; in essence, the data loops through the two layers and back to the start again. This is important in control circuits, because it allows the result of a previous calculation to affect the operation of the next one: the second calculation can take the results of the first into account and be controlled by them. Wiener's work on cybernetics was based on the idea that feedback loops are a useful tool for control circuits. Indeed, Wiener coined the term cybernetics [2] from the Greek kybernetes, or steersman. Neural models range from complex mathematical models with floating-point outputs to simple state machines with a binary output. Depending on whether the neuron incorporates the learning mechanism or not, neural learning rules can be as simple as

adding weight to a synapse each time it fires and gradually degrading those weights over time, as in the earliest learning rules; delta rules that accelerate learning by applying a delta value according to some error function, as in a back-propagation network; or pre-synaptic/post-synaptic rules based on the biochemistry of the synapse and the firing process. Outputs can be calculated as binary, linear, non-linear, or spiking values. Today there are literally hundreds of different models that all call themselves neural networks, even though some of them no longer contain models of nerves, or no longer actually require networks to achieve similar effects. Because scientists have still not fully described the structure of mammalian neural cells or nerves, we must accept that, for now, we will have to wait for the definitive nerve model before we can have the definitive neural network. In the meantime this is a rich area of research, because it has the potential to be both phenomenological and computational, and thus to capture perhaps a greater range of the operation of the brain than computational models have by themselves.
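As a concrete illustration, here is a minimal Python sketch of a binary threshold unit in the spirit of McCulloch and Pitts, together with the simplest kind of learning rule described above (add weight to a firing synapse, then let all weights decay). The function names and the rate/decay constants are illustrative assumptions, not part of any standard formulation:

```python
def binary_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style unit: fires (1) only when the weighted
    sum of its inputs reaches the threshold, otherwise stays silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def learn_step(weights, inputs, fired, rate=0.1, decay=0.01):
    """Earliest-style rule from the text: add weight to each active
    synapse when the neuron fires, then let every weight decay a little."""
    reinforced = [w + rate * fired * i for w, i in zip(weights, inputs)]
    return [(1.0 - decay) * w for w in reinforced]

# A unit wired to compute the logical AND of two binary inputs.
w = [1.0, 1.0]
print(binary_neuron([1, 1], w, threshold=2.0))  # 1: both inputs active
print(binary_neuron([1, 0], w, threshold=2.0))  # 0: only one input active

# Firing together strengthens the active synapses (net of decay).
w = learn_step(w, [1, 1], fired=1)
```

Note how the learning is local: each weight changes based only on its own input and the unit's output, with no global error signal, which is what distinguishes these early rules from the later delta rules.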

A Description of the Hopfield Network


The Hopfield neural network is a simple artificial network which is able to store certain memories or patterns in a manner rather similar to the brain - the full pattern can be recovered if the network is presented with only partial information. Furthermore there is a degree of stability in the system - if just a few of the connections between nodes (neurons) are severed, the recalled memory is not too badly corrupted - the network can respond with a "best guess". Of course, a similar phenomenon is observed with the brain - during an average lifetime many neurons will die but we do not suffer a catastrophic loss of individual memories - our brains are quite robust in this respect (by the time we die we may have lost 20 percent of our original neurons). The nodes in the network are vast simplifications of real neurons - they can only exist in one of two possible "states" - firing or not firing. Every node is connected to every other node with some strength. At any instant of time a node will change its state (i.e. start or stop firing) depending on the inputs it receives from the other nodes. If we start the system off with any general pattern of firing and non-firing nodes, this pattern will in general change with time. To see this, think of starting the network with just one firing node. This will send a signal to all the other nodes via its connections, so that a short time later some of these other nodes will fire. These new firing nodes will then excite others after a further short time interval, and a whole cascade of different firing patterns will occur. One might imagine that the firing pattern of the network would change in a complicated, perhaps random way with time. The crucial property of the Hopfield network which renders it useful for simulating memory recall is the following: we are guaranteed that the pattern will settle down after a long enough time to some fixed pattern. Certain nodes will be always "on" and others "off".
Furthermore, it is possible to arrange that these stable firing patterns of the network correspond to the desired memories we wish to store! The reason for this is somewhat technical but we can proceed by analogy. Imagine a ball rolling on some bumpy surface. We imagine the position of the ball at any instant to represent the activity of the nodes in the network. Memories will be represented by

special patterns of node activity corresponding to wells in the surface. Thus, if the ball is let go, it will execute some complicated motion but we are certain that eventually it will end up in one of the wells of the surface. We can think of the height of the surface as representing the energy of the ball. We know that the ball will seek to minimize its energy by seeking out the lowest spots on the surface -- the wells. Furthermore, the well it ends up in will usually be the one it started off closest to. In the language of memory recall, if we start the network off with a pattern of firing which approximates one of the "stable firing patterns" (memories) it will "under its own steam" end up in the nearby well in the energy surface thereby recalling the original perfect memory. The smart thing about the Hopfield network is that there exists a rather simple way of setting up the connections between nodes in such a way that any desired set of patterns can be made "stable firing patterns". Thus any set of memories can be burned into the network at the beginning. Then if we kick the network off with any old set of node activity we are guaranteed that a "memory" will be recalled. Not too surprisingly, the memory that is recalled is the one which is "closest" to the starting pattern. In other words, we can give the network a corrupted image or memory and the network will "all by itself" try to reconstruct the perfect image. Of course, if the input image is sufficiently poor, it may recall the incorrect memory - the network can become "confused" - just like the human brain. We know that when we try to remember someone's telephone number we will sometimes produce the wrong one! Notice also that the network is reasonably robust - if we change a few connection strengths just a little the recalled images are "roughly right". We don't lose any of the images completely.
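The storage-and-recall behavior described above can be sketched in a few lines of Python. States are +1/-1, memories are stored with the standard Hebbian outer-product rule, and recall sweeps over the nodes, setting each to the sign of its weighted input. The function names and the tiny six-node example are illustrative assumptions:

```python
def train(patterns):
    """Hebbian storage: weight w[i][j] accumulates the product of bits
    i and j over all stored patterns (+1/-1 states, no self-links)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, sweeps=5):
    """Repeatedly update each node to the sign of its weighted input.
    Each update can only lower the network's energy, so the state
    rolls downhill into the nearest stored 'well'."""
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

# Store one memory, flip two of its bits, and let the network repair it.
memory = [1, 1, -1, -1, 1, -1]
noisy  = [-1, 1, -1, -1, 1, 1]   # bits 0 and 5 corrupted
w = train([memory])
print(recall(w, noisy) == memory)   # True
```

One quirk worth knowing: with this rule the exact inverse of each stored pattern is also a stable state, which is one more way the network can become "confused" when the input is sufficiently corrupted.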

Perception and Learning


PERCEPTION

Perception is an essential component of intelligent behavior. We perceive the world around us through the five basic senses of sight, hearing, touch, smell, and taste. Of these, sight and hearing have been the main areas of Artificial Intelligence research, leading to speech understanding. When we perceive some signal - it may be a sound or light - we respond appropriately to that signal. To produce an appropriate response we must categorize or analyze the signal. For example, to analyze a sentence we must first identify individual sounds, then combine these sounds into words, and then combine the words into a meaningful sentence structure. But this is hard, because dividing sounds into words needs additional knowledge or information about the situation; a series of sounds may be interpreted in many ways. For instance, "Tigers care their kids" and "Tiger scare their kids" might both be possible interpretations of the same series of sounds. To overcome these perceptual problems in speech understanding, the process of analyzing speech is divided into five stages.

1. Digitization: The continuous input is divided into discrete chunks. In speech the division is done on a time scale; in images it may be based on color, area, or tint.
2. Smoothing: Since the real world is usually continuous, large spikes and variations in the input are smoothed out.
3. Segmentation: The smaller chunks produced by digitization are grouped into larger chunks corresponding to logical components of the signal. In speech understanding, segments correspond to individual sounds called phonemes.
4. Labeling: Each segment is given a label.
5. Analysis: The labeled segments are put together to form a coherent object.

LEARNING

Learning is the improvement of performance with experience over time. The learning element is the portion of a learning AI system that decides how to modify the performance element and implements those modifications. We all learn new knowledge through different methods, depending on the type of material to be learned, the amount of relevant knowledge we already possess, and the environment in which the learning takes place. There are five methods of learning:

1. Memorization (rote learning)
2. Direct instruction (by being told)
3. Analogy
4. Induction
5. Deduction

Learning by memorization is the simplest form of learning. It requires the least amount of inference and is accomplished by simply copying knowledge, in the same form in which it will be used, directly into the knowledge base. Examples: memorizing multiplication tables, formulae, etc.

Direct instruction is a more complex form of learning, used when a teacher presents a number of facts to us directly in a well-organized manner. It requires more inference than rote learning, since the knowledge must be transformed into an operational form before it is stored.

Analogical learning is the process of learning a new concept or solution through the use of similar known concepts or solutions. We use this type of learning when solving problems on an exam, where previously learned examples serve as a guide; we make frequent use of analogical learning. This form of learning requires still more inference than either of the previous forms, since difficult transformations must be made between the known and unknown situations.

Learning by induction is also used frequently by humans. It is a powerful form of learning which, like analogical learning, requires more inference than the first two methods. It relies on inductive inference, a form of inference that is not logically valid but is nevertheless useful: we learn a concept from instances or examples of the concept. For example, we learn the concepts of color or sweet taste after experiencing the sensations associated with several examples of colored objects or sweet foods.

Deductive learning is accomplished through a sequence of deductive inference steps using known facts: from the known facts, new facts or relationships are logically derived. Deductive learning usually requires more inference than the other methods.
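The five perception stages listed above can be sketched as a toy Python pipeline over a one-dimensional signal. The threshold, the smoothing window, and the "high"/"low" labels are illustrative assumptions standing in for real phoneme models:

```python
def digitize(signal, n_samples, duration):
    """Stage 1: sample the continuous signal at discrete time steps."""
    step = duration / n_samples
    return [signal(i * step) for i in range(n_samples)]

def smooth(samples, window=3):
    """Stage 2: moving average to suppress spikes in the input."""
    half = window // 2
    return [sum(samples[max(0, i - half):i + half + 1]) /
            len(samples[max(0, i - half):i + half + 1])
            for i in range(len(samples))]

def segment(samples, threshold=0.5):
    """Stage 3: group adjacent samples on the same side of a threshold."""
    segments, current = [], [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        if (prev >= threshold) == (cur >= threshold):
            current.append(cur)
        else:
            segments.append(current)
            current = [cur]
    segments.append(current)
    return segments

def label(segments, threshold=0.5):
    """Stage 4: attach a symbolic label to each segment."""
    return ["high" if s[0] >= threshold else "low" for s in segments]

def analyse(labels):
    """Stage 5: assemble the labels into a description of the signal."""
    return "-".join(labels)

# A square-ish wave: low for t < 1, high afterwards.
samples = digitize(lambda t: 0.0 if t < 1.0 else 1.0, 10, 2.0)
print(analyse(label(segment(smooth(samples)))))   # prints low-high
```

In a real speech system each stage is far more sophisticated - segmentation and labeling produce phonemes rather than "high"/"low" - but the division of labor between the stages is the same.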
