
AIT Important Questions

1) Explain the concept of the biological neuron model with the help of a neat diagram?

Answer:

The biological neural network consists of nerve cells (neurons), which are interconnected as shown in the figure below. The cell body of the neuron, which includes the neuron's nucleus, is where most of the neural computation takes place. Neural activity passes from one neuron to another in the form of electrical impulses that travel down the neuron's axon, by means of an electro-chemical process of voltage-gated ion exchange along the axon and of diffusion of neurotransmitter molecules through the membrane over the synaptic gap.

The axon can be viewed as a connection wire. However, the mechanism of signal flow is not electrical conduction but charge exchange transported by the diffusion of ions. This transportation process moves along the neuron's cell body, down the axon, and then through the synaptic junctions at the end of the axon, across a very narrow synaptic gap, to the dendrites and/or soma of the next neuron, at an average rate of 3 m/sec, as in the figure below.

It is important to note that not all interconnections are equally weighted. Some have a higher priority (a higher weight) than others. Also, some are excitatory and some are inhibitory (serving to block transmission of a message). These differences are effected by differences in chemistry and by the presence of chemical transmitter and modulating substances inside and near the neurons, the axons, and the synaptic junctions.

2) Name the different learning methods and explain any one method of
supervised learning?

Answer:

Basic Learning rules

There are five basic learning rules:

1. Error-correction learning
2. Memory-based learning
3. Hebbian learning
4. Competitive learning
5. Boltzmann learning

1. Error-Correction Learning

Consider the simple case of a neuron k constituting the only computational node in the output layer of a feedforward neural network, as shown in the figure.

Neuron k is driven by a signal vector x(n) produced by one or more layers of hidden neurons, which are themselves driven by an input vector applied to the input layer of the neural network.

The output signal of neuron k is denoted by yk(n).

This output signal, representing the only output of the neural network, is
compared to a desired response or target output, denoted by dk(n).

An error signal, denoted by ek(n), is defined as ek(n) = dk(n) − yk(n).

The error signal ek(n) actuates a control mechanism.

The corrective adjustments are designed to make the output signal yk(n)
come closer to the desired response dk(n) in a step-by-step manner.

This objective is achieved by minimizing a cost function or index of performance, E(n), defined in terms of the error signal ek(n) as

E(n) = (1/2) ek²(n),

where E(n) is the instantaneous value of the error energy.

The step-by-step adjustments to the synaptic weights of neuron k are continued until the system reaches a steady state (i.e., the synaptic weights are essentially stabilized). At that point the learning process is terminated.

The learning process is referred to as error-correction learning.
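
The corrective adjustment implied above is the delta rule, Δwkj(n) = η ek(n) xj(n). A minimal Python sketch of this learning loop follows; the learning rate eta and the toy training samples are illustrative assumptions, not taken from the source.

```python
# Minimal sketch of error-correction (delta-rule) learning for a single
# linear output neuron k: w_j <- w_j + eta * e_k(n) * x_j(n).
eta = 0.1  # learning-rate parameter (illustrative value)

def train_step(w, x, d):
    y = sum(wj * xj for wj, xj in zip(w, x))        # output signal y_k(n)
    e = d - y                                        # error e_k(n) = d_k(n) - y_k(n)
    w = [wj + eta * e * xj for wj, xj in zip(w, x)]  # corrective adjustment
    return w, 0.5 * e * e                            # new weights, error energy E(n)

# Toy task: learn the mapping d = 2*x1 - x2.
samples = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
w = [0.0, 0.0]
for _ in range(200):            # iterate until the weights essentially stabilize
    for x, d in samples:
        w, energy = train_step(w, x, d)
print(w)                        # approaches [2.0, -1.0]
```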

2. Memory-Based Learning

In memory-based learning, all or most of the past experiences are explicitly stored in a large memory of correctly classified input-output examples {(xi, di)}, i = 1, ..., N, where xi denotes an input vector and di denotes the corresponding desired response.

All memory-based learning algorithms involve two essential ingredients:

Criterion used for defining the local neighborhood of the test vector xtest.
Learning rule applied to the training examples in the local neighborhood of
xtest.

In a simple yet effective type of memory-based learning, known as the nearest neighbor rule, the test vector xtest is assigned the desired response of the stored example that lies closest to it in the input space.
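
A minimal Python sketch of the nearest neighbor rule follows; Euclidean distance is assumed as the neighborhood criterion, and the toy memory is illustrative.

```python
# Memory-based learning with the nearest-neighbor rule: the stored pairs
# (x_i, d_i) are the "memory"; a test vector receives the desired
# response of its closest stored example (squared Euclidean distance).
def nearest_neighbor(memory, x_test):
    def dist2(x):                       # squared Euclidean distance to x_test
        return sum((a - b) ** 2 for a, b in zip(x, x_test))
    x_best, d_best = min(memory, key=lambda pair: dist2(pair[0]))
    return d_best

memory = [([0.0, 0.0], 0), ([0.0, 1.0], 0), ([1.0, 1.0], 1)]  # (x_i, d_i) pairs
print(nearest_neighbor(memory, [0.9, 0.8]))   # -> 1
```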

3. Hebbian Learning

Hebb's postulate of learning is the oldest and most famous of all learning rules; it is named in honor of the neuropsychologist Hebb (1949). According to Hebb:

"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic changes take place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

The above rule can be rephrased as the following two-part rule:

1. If two neurons on either side of a synapse (connection) are activated simultaneously (i.e., synchronously), then the strength of that synapse is selectively increased.
2. If two neurons on either side of a synapse are activated asynchronously, then that synapse is selectively weakened or eliminated.

A synapse that behaves in this way is called a Hebbian synapse. More precisely, a Hebbian synapse is a synapse that uses a time-dependent, highly local, and strongly interactive mechanism to increase synaptic efficiency as a function of the correlation between the presynaptic and postsynaptic activities.

Hebb's hypothesis

The simplest form of Hebbian learning is described by

Δwkj(n) = η yk(n) xj(n),

where η is a positive constant that determines the rate of learning. This equation clearly emphasizes the correlational nature of a Hebbian synapse. It is sometimes referred to as the activity product rule.
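
A minimal Python sketch of this activity product rule follows; eta and the example activities are illustrative.

```python
# Hebbian activity-product rule: delta w_kj(n) = eta * y_k(n) * x_j(n).
eta = 0.05  # learning rate (illustrative value)

def hebbian_update(w, x, y):
    # Strengthen each synapse in proportion to the product of the
    # presynaptic activity x_j and the postsynaptic activity y_k.
    return [wj + eta * y * xj for wj, xj in zip(w, x)]

w = [0.0, 0.0, 0.0]
print(hebbian_update(w, x=[1.0, 0.0, 1.0], y=1.0))  # synapses 0 and 2 grow
```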

4. Competitive Learning

The output neurons of a neural network compete among themselves to become active (fired).

There are three basic elements to a competitive learning rule.

A set of neurons that are all the same except for some randomly
distributed synaptic weights, and which therefore respond differently to a
given set of input patterns.
A limit imposed on the "strength" of each neuron.
A mechanism that permits the neurons to compete for the right to respond
to a given subset of inputs, such that only one output neuron, or only one
neuron per group, is active (i.e., "on") at a time. The neuron that wins the
competition is called a winner-takes-all neuron.

Accordingly, the individual neurons of the network learn to specialize on ensembles of similar patterns; in so doing they become feature detectors for different classes of input patterns.

In the simplest form of competitive learning, the neural network has a single layer of output neurons, each of which is fully connected to the input nodes. The network may include feedback connections among the neurons, as indicated in the figure below. In the network architecture described here, the feedback connections perform lateral inhibition, with each neuron tending to inhibit the neurons to which it is laterally connected. In contrast, the feedforward synaptic connections in the network are all excitatory.
For a neuron k to be the winning neuron, its induced local field vk for a specified input pattern x must be the largest among all the neurons in the network. The output signal yk of the winning neuron k is then set equal to one, and the output signals of all the neurons that lose the competition are set equal to zero:

yk = 1 if vk > vj for all j, j ≠ k; otherwise yk = 0,

where the induced local field vk represents the combined action of all the forward and feedback inputs to neuron k.

Let wkj denote the synaptic weight connecting input node j to neuron k. Suppose that each neuron is allotted a fixed amount of synaptic weight (i.e., all synaptic weights are positive), which is distributed among its input nodes; that is,

Σj wkj = 1 for all k.

A neuron then learns by shifting synaptic weights from its inactive to its active input nodes. If a neuron does not respond to a particular input pattern, no learning takes place in that neuron. If a particular neuron wins the competition, each input node of that neuron relinquishes some proportion of its synaptic weight, and the weight relinquished is then distributed equally among the active input nodes. According to the standard competitive learning rule, the change Δwkj applied to synaptic weight wkj is defined by

Δwkj = η (xj − wkj) if neuron k wins the competition, and Δwkj = 0 if neuron k loses,

where η is the learning-rate parameter. This rule has the overall effect of moving the synaptic weight vector wk of the winning neuron k toward the input pattern x.
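
A minimal Python sketch of this winner-takes-all update follows; the learning rate and the toy weight and input vectors are illustrative.

```python
# Standard competitive (winner-takes-all) learning: the neuron with the
# largest induced local field wins, and only its weight vector moves
# toward the input: delta w_kj = eta * (x_j - w_kj).
eta = 0.1  # learning rate (illustrative value)

def competitive_step(W, x):
    fields = [sum(wkj * xj for wkj, xj in zip(wk, x)) for wk in W]
    k = fields.index(max(fields))             # winning neuron: largest v_k
    W[k] = [wkj + eta * (xj - wkj) for wkj, xj in zip(W[k], x)]
    return k                                  # losing neurons are unchanged

W = [[0.2, 0.8], [0.9, 0.1]]                  # one weight vector per output neuron
winner = competitive_step(W, [1.0, 0.0])
print(winner, W)                              # the winner's w_k moved toward x
```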

5. Boltzmann Learning

The Boltzmann learning rule, named in honor of Ludwig Boltzmann, is a


stochastic learning algorithm derived from ideas rooted in statistical mechanics. A
neural network designed on the basis of the Boltzmann learning rule is called a
Boltzmann machine.

In a Boltzmann machine the neurons constitute a recurrent structure, and they operate in a binary manner: they are either in an "on" state denoted by +1 or in an "off" state denoted by −1. The machine is characterized by an energy function, E, the value of which is determined by the particular states occupied by the individual neurons of the machine, as shown by

E = −(1/2) Σk Σj (j ≠ k) wkj xk xj,

where xj is the state of neuron j, and wkj is the synaptic weight connecting neuron j to neuron k. The condition j ≠ k means simply that none of the neurons in the machine has self-feedback. The machine operates by choosing a neuron at random, for example neuron k, at some step of the learning process, then flipping the state of neuron k from xk to −xk at some temperature T with probability

P(xk → −xk) = 1 / (1 + exp(−ΔEk/T)),

where ΔEk is the energy change (i.e., the change in the energy function of the machine) resulting from such a flip. Notice that T is not a physical temperature, but rather a pseudo-temperature, as explained in Chapter 1. If this rule is applied repeatedly, the machine will reach thermal equilibrium.

The neurons of a Boltzmann machine partition into two functional groups: visible and hidden. The visible neurons provide an interface between the network and the environment in which it operates, whereas the hidden neurons always operate freely. There are two modes of operation:

Clamped condition, in which the visible neurons are all clamped onto specific
states determined by the environment.
Free-running condition, in which all the neurons (visible and hidden) are
allowed to operate freely.

According to the Boltzmann learning rule, the change Δwkj applied to the synaptic weight wkj from neuron j to neuron k is defined by

Δwkj = η (ρkj+ − ρkj−), j ≠ k,

where η is a learning-rate parameter, ρkj+ is the correlation between the states of neurons j and k in the clamped condition, and ρkj− is the corresponding correlation in the free-running condition. Note that both ρkj+ and ρkj− range in value from −1 to +1.
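
A minimal Python sketch of the state-flipping dynamics follows. Sign conventions for the flip probability differ across texts; this sketch uses the Glauber form, in which energy-lowering flips are accepted with high probability, and the weights, temperature, and run length are illustrative assumptions.

```python
# Boltzmann machine state dynamics: pick a neuron at random and
# stochastically flip its +/-1 state based on the resulting energy
# change dE (downhill flips likely, uphill flips unlikely).
import math, random

def gibbs_step(x, W, T):
    k = random.randrange(len(x))
    v = sum(W[k][j] * x[j] for j in range(len(x)) if j != k)
    dE = 2 * x[k] * v                   # energy change if x_k flips to -x_k
    if random.random() < 1.0 / (1.0 + math.exp(dE / T)):
        x[k] = -x[k]                    # flip x_k -> -x_k
    return x

x = [1, -1, 1]                                        # binary neuron states
W = [[0, 0.5, -0.3], [0.5, 0, 0.2], [-0.3, 0.2, 0]]   # symmetric, no self-feedback
for _ in range(1000):                   # run toward thermal equilibrium
    x = gibbs_step(x, W, T=1.0)
print(x)
```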

3) Develop McCulloch-Pitts neuron models to implement the AND and OR logic functions for 2 inputs?

Answer:
The AND function gives the response "true" if both input values are "true"; otherwise the response is "false." If we represent "true" by the value 1 and "false" by 0, this gives the following four training input, target output pairs:

x1 x2 | t
 1  1 | 1
 1  0 | 0
 0  1 | 0
 0  0 | 0

The OR function gives the response "true" if either of the input values is "true"; otherwise the response is "false." This is the "inclusive or," since both input values may be "true" and the response is still "true." Representing "true" as 1 and "false" as 0, we have the following four training input, target output pairs:

x1 x2 | t
 1  1 | 1
 1  0 | 1
 0  1 | 1
 0  0 | 0
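
A minimal Python sketch of a McCulloch-Pitts neuron implementing these two functions follows; the unit weights with thresholds 2 (AND) and 1 (OR) are one standard choice.

```python
# McCulloch-Pitts neuron: binary inputs, fixed weights, hard threshold.
def mcp_neuron(inputs, weights, threshold):
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        y_and = mcp_neuron([x1, x2], [1, 1], threshold=2)   # fires only if both on
        y_or  = mcp_neuron([x1, x2], [1, 1], threshold=1)   # fires if either on
        print(x1, x2, "AND:", y_and, "OR:", y_or)
```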

4. Name the different activation functions used in neural networks and explain them?

Answer:

Different neural networks use a considerable number of different activation functions. Here we shall introduce some of the most common ones and indicate which neural networks employ them.

1. Sign, or Threshold Function
2. Piecewise-linear Function
3. Linear Function
4. Sigmoid Function
5. Hyperbolic tangent
6. Gaussian functions
7. Spline functions

1. Sign, or Threshold Function

For this type of activation function, we have

f(v) = 1 if v ≥ 0, and f(v) = 0 if v < 0

(in the sign version, the output is −1 rather than 0 for v < 0). This activation function is represented in the figure.

2. Piecewise-linear Function

This activation function is described by:

f(v) = 1 if v ≥ +1/2; f(v) = v if −1/2 < v < +1/2; f(v) = 0 if v ≤ −1/2.

3. Linear Function

This is the simplest activation function. It is given simply by

f(v) = v.

A linear function is illustrated in the figure.


4. Sigmoid Function

This is by far the most common activation function. It is given by

f(v) = 1 / (1 + exp(−av)),

where a is the slope parameter. It is represented in the following figure.


5. Hyperbolic tangent

Sometimes this function is used instead of the original sigmoid function. It is defined as

f(v) = tanh(v) = (e^v − e^(−v)) / (e^v + e^(−v)).

It is represented in the following figure.

6. Gaussian functions

This type of activation function is commonly employed in RBF networks. The activation function can be denoted as

f(v) = exp(−(v − c)² / (2σ²)),

where c is the center and σ the width of the Gaussian.
7. Spline functions

These functions, as the name indicates, are found in B-spline networks. The following figure illustrates univariate basis functions of orders o = 1 to 4. For all graphs, the same point x = 2.5 is marked, so that it is clear how many functions are active for that particular point, depending on the order of the spline.
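
For reference, here are the functions listed above written out as plain Python; the parameters a, c, and sigma are illustrative, and the piecewise-linear form follows the textbook definition given earlier.

```python
# The common activation functions above as plain Python functions.
import math

def threshold(v):        return 1.0 if v >= 0 else 0.0     # threshold (0/1)
def sign(v):             return 1.0 if v >= 0 else -1.0    # sign (+1/-1)
def piecewise_linear(v):                                   # unity slope in the middle
    return 1.0 if v >= 0.5 else (0.0 if v <= -0.5 else v)
def linear(v):           return v
def sigmoid(v, a=1.0):   return 1.0 / (1.0 + math.exp(-a * v))
def tanh_fn(v):          return math.tanh(v)
def gaussian(v, c=0.0, sigma=1.0):                         # as used in RBF networks
    return math.exp(-(v - c) ** 2 / (2.0 * sigma ** 2))

for f in (threshold, sign, piecewise_linear, linear, sigmoid, tanh_fn, gaussian):
    print(f.__name__, f(0.3))
```
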
5) Name the different architectures of neural networks and explain them with the help of a neat diagram?

Answer:

According to the flow of the signals within an ANN, we can divide the architectures into feedforward networks, where the signals flow only from input to output, and recurrent networks, where loops are allowed. Another possible classification depends on the existence of hidden neurons, i.e., neurons which are neither input nor output neurons. If there are hidden neurons, we denote the network as a multilayer NN; otherwise it can be called a single-layer NN. Finally, if every neuron in one layer is connected to every neuron in the layer immediately above, the network is called fully connected; if not, we speak of a partially connected network.

1. Single-layer feedforward network

The simplest form of an ANN is represented in the figure below. On the left is the input layer, which is nothing but a buffer and therefore does not implement any processing. The signals flow to the right through the synapses or weights, arriving at the output layer, where the computation is performed.

2. Multilayer feedforward network

In this case there are one or more hidden layers. The output of each layer constitutes the input to the layer immediately above. For instance, an ANN with layout [5, 4, 4, 1] has 5 neurons in the input layer, two hidden layers with 4 neurons each, and one neuron in the output layer.
3. Recurrent networks

Recurrent networks are those where there are feedback loops. Notice that any feedforward network can be transformed into a recurrent network just by introducing a delay and feeding the delayed signal back to one input, as represented in the figure.
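
To make the [5, 4, 4, 1] example concrete, here is a minimal sketch of a forward pass through such a fully connected multilayer feedforward network; the random weights and sigmoid activations are assumptions for illustration.

```python
# One forward pass through a fully connected feedforward network with
# layer sizes [5, 4, 4, 1]: each layer's output feeds the next layer.
import math, random

def forward(x, layers):
    for W in layers:                    # W: one row of weights per neuron
        x = [1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
             for row in W]
    return x

sizes = [5, 4, 4, 1]
layers = [[[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
          for n_in, n_out in zip(sizes, sizes[1:])]
print(forward([0.1, 0.2, 0.3, 0.4, 0.5], layers))   # single output value
```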

6) What is meant by the perceptron model, and how can a perceptron model be used for classification?

Answer:

The Perceptron Model

The Perceptron model is the simplest type of neural network, developed by Frank Rosenblatt in 1962. This type of simple network is rarely used now, but it is significant in terms of its historical contribution to neural networks. A very simple form of the Perceptron model is shown in the figure below. It is very similar to the MCP model discussed in the last section. It has several inputs connected to a node that sums a linear combination of those inputs. The resulting sum then goes through a hard limiter, which produces an output of +1 if the input to the hard limiter is positive and −1 if it is negative. It was first developed to classify a set of externally applied inputs into one of two classes, C1 or C2, with an output of +1 signifying class C1 and −1 signifying class C2.

A 2-layer Perceptron network can be used for classifying linearly non-separable problems. First, we consider the classification regions in a 2-dimensional space. Fig. 2 shows the 2-D input space in which a line is used to separate the two classes C1 and C2. The region below or on the right of the line is the region where w1x1 + w2x2 + b < 0; thus the region above or on the left of the line is the region where w1x1 + w2x2 + b ≥ 0.
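
A minimal Python sketch of the single-layer Perceptron described above follows, with the hard limiter and the classical error-driven weight update; eta, the bias handling, and the toy data are illustrative assumptions.

```python
# Perceptron decision rule and learning step: a weighted sum through a
# hard limiter gives +1 (class C1) or -1 (class C2); weights are updated
# only on misclassified samples.
eta = 0.1  # learning rate (illustrative value)

def predict(w, b, x):
    v = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if v >= 0 else -1          # hard limiter

def train_step(w, b, x, d):
    if predict(w, b, x) != d:           # update only on misclassification
        w = [wi + eta * d * xi for wi, xi in zip(w, x)]
        b = b + eta * d
    return w, b

# Linearly separable toy data: class +1 above the line x1 + x2 = 1.
data = [([0.0, 0.0], -1), ([1.0, 1.0], 1), ([0.2, 0.3], -1), ([0.9, 0.8], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(50):
    for x, d in data:
        w, b = train_step(w, b, x, d)
print(w, b, [predict(w, b, x) for x, _ in data])
```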


7. What do you mean by knowledge representation? Where is it used in neural networks?

Answer:

Knowledge is the information about a domain that can be used to solve problems in that domain. To solve many problems requires much knowledge, and this knowledge must be represented in the computer. As part of designing a program to solve problems, we must define how the knowledge will be represented.

Figure: Role of representations in solving problems.

A representation scheme is the form of the knowledge that is used in an agent.
A representation of some piece of knowledge is the internal representation of the knowledge.
A representation scheme specifies the form of the knowledge.
A knowledge base is the representation of all of the knowledge that is stored by
an agent.
The general framework for solving problems by computer is given in the figure above.
Once you have some requirements on the nature of a solution, you must
represent the problem so a computer can solve it.
Computers and human minds are examples of physical symbol systems.
A symbol is a meaningful pattern that can be manipulated.
Examples of symbols are written words, sentences, gestures, marks on paper, or
sequences of bits.
A symbol system creates copies, modifies, and destroys symbols.
Essentially, a symbol is one of the patterns manipulated as a unit by a symbol
system.
The term physical is used, because symbols in a physical symbol system are
physical objects that are part of the real world, even though they may be
internal to computers and brains.
They may also need to physically affect action or motor control.
8. How do you differentiate between learning process and learning tasks?

Answer:

Learning is the ability of an agent to improve its behavior based on experience. This could mean the following:

The range of behaviors is expanded; the agent can do more.
The accuracy on tasks is improved; the agent can do things better.
The speed is improved; the agent can do things faster.

The ability to learn is essential to any intelligent agent. As Euripides pointed out, learning involves an agent remembering its past in a way that is useful for its future.

Machine learning tasks are typically classified into three broad categories,
depending on the nature of the learning "signal" or "feedback" available to a
learning system.
They are

1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning

Supervised Learning

The machine is presented with example inputs and their desired outputs. These are given by a teacher, and the goal of learning is to learn the general rule that maps inputs to outputs.

Unsupervised Learning

No labels are given to the learning system; it has to find structure in its input on its own. Unsupervised learning can be a goal in itself or a means towards an end.

Reinforcement Learning

The program continuously interacts with a dynamic environment in which it must achieve a certain goal, without a teacher explicitly telling it whether it has come close to the goal or not.

Examples: driving a car or other vehicle, or learning to play a game by playing against an opponent.
9. In what way are humans better than computers? Explain.

Answer:

Humans and computers behave in completely different ways in information processing, analysis, and decision making.

Humans use knowledge in the form of rules of thumb or heuristics to solve problems in a narrow domain, whereas computers process data and use algorithms, a series of well-defined operations, to solve general numerical problems.

In a human brain, knowledge exists in a compiled form, whereas in a computer program it is not possible to separate the knowledge from the control structure that processes it.

Human beings are capable of explaining a line of reasoning and providing the details, whereas computers do not explain how a particular result was obtained or why the input data was needed.

Humans use inexact reasoning and can deal with incomplete, uncertain, and fuzzy information, whereas computers work only on problems where the data is complete and exact.

Humans can make mistakes when information is incomplete or fuzzy, whereas computers provide no solution at all, or a wrong one, when data is incomplete or fuzzy.

Humans enhance the quality of problem solving through years of learning and practical training; this process is slow, inefficient, and expensive. Computers can enhance the quality of problem solving only by changing the program code, which affects both the knowledge and its processing, making changes difficult.

From the points listed above, we can clearly say that humans are better than computers.
