
PATEL COLLEGE OF SCIENCE AND TECHNOLOGY, INDORE (M.P.)

SUBJECT: Lab II    BRANCH:- SS

INDEX

S.NO.   NAME OF PRACTICAL                                                    DATE OF PERFORMANCE   DATE OF SUBMISSION   REMARK

1.      Explain functioning and architecture of counter propagation
        neural network (CPNN)

2.      Implement Back Propagation in Neural Networks

3.      Error Back Propagation Algorithm

4.      Generate MATLAB function for simulating shallow neural network

5.      Implementation of fuzzy operations like Difference & De Morgan's law

6.      Implementation of McCulloch-Pitts model for OR gate & AND gate

DURGESHVARI DANGI    ROLL NO. 0828CS18MT21



EXPERIMENT NO.1

Object:- Explain functioning and architecture of Counter Propagation Neural Network (CPNN).


INTRODUCTION :-
Neural networks have high fault tolerance and a potential for adaptive training. A Full Counter
Propagation Neural Network (Full CPNN) can be used for the restoration of degraded images, and the quality
of the restored image is almost the same as that of the original image. This experiment first reviews the
features of the CPN and then presents the architecture and training phases of the Full CPNN.

COUNTER PROPAGATION :-
Counter Propagation Networks (CPN) are multilayer networks built from a combination of input,
competitive (cluster) and output layers. Counter propagation is a combination of two well-known
algorithms: the self-organizing map of Kohonen and the Grossberg outstar (Liang et al 2002). The counter
propagation network can be applied to data compression, function approximation or pattern association.

Training of a counter propagation network includes two stages:

1. Input vectors are clustered. Clusters are formed using the dot
product metric or the Euclidean norm metric.
2. Weights from the cluster units to the output units are adjusted to produce
the desired response.
CPN is classified into two types: (i) the full counter propagation network
and (ii) the forward-only counter propagation network. The advantages of CPN are that
it is simple and that it forms a good statistical model of its input vector environment.
The CPN trains rapidly and, if appropriately applied, it can save a large amount of
computing time. It is also useful for rapid prototyping of systems.

Full Counter Propagation Neural Network (Full CPNN) :-
The full CPNN possesses a generalization capability which allows it to produce a correct output even when it
is given an input vector that is partially incomplete or partially incorrect (Freeman and Skapura 1999). A Full
CPNN can represent a large number of vector pairs x:y by constructing a look-up table. The architecture of a
counter propagation network resembles an instar and outstar model. Basically, it has two input layers and
two output layers with a hidden (cluster) layer common to the input and output layers. The model which
connects the input layers to the hidden layer is called the instar model, and the model which connects the
hidden layer to the output layers is called the outstar model. The weights are updated in both the instar and
the outstar model. The instar model performs the first phase of training and the outstar model performs the
second phase of training. The network is a fully interconnected network. The major aim of the full CPNN is to
provide an efficient means of representing a large number of vector pairs, X:Y, by adaptively constructing a
look-up table. It produces an approximation of X:Y based on the input of an X vector alone, a Y vector alone,
or an X:Y pair, possibly with some distorted or missing elements in either or both of the vectors. During the
first phase of training of the full counter propagation network, the training pairs X:Y are used to form the


clusters. The Full CPNN is used for bi-directional mapping: X is the original input image and Y is the degraded
image, while X* and Y* are the network's approximations of the restored and degraded images respectively.
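
The two training phases described above can be illustrated with a short MATLAB sketch. This is only a minimal, illustrative implementation: for brevity it trains a forward-only CPN on made-up data (the full CPNN applies the same two phases to the concatenated X:Y pair), and all sizes, learning rates and variable names below are assumptions rather than values taken from the text.

% Minimal sketch of two-phase counter propagation training (illustrative only).
rng(0);
X = rand(4, 200);            % training input vectors: 4 features, 200 samples
T = [sum(X); max(X)];        % toy target vectors paired with each input
nCluster = 10;               % number of Kohonen (cluster) units
W = rand(nCluster, 4);       % instar weights: input layer -> cluster layer
V = rand(2, nCluster);       % outstar weights: cluster layer -> output layer
alpha = 0.3; beta = 0.1;     % learning rates for the two phases

% Phase 1 (instar / Kohonen): cluster the input vectors.
for epoch = 1:50
    for i = 1:size(X, 2)
        x = X(:, i);
        [~, j] = min(sum((W - x').^2, 2));      % winner by Euclidean distance
        W(j,:) = W(j,:) + alpha*(x' - W(j,:));  % move winner toward the input
    end
end

% Phase 2 (outstar / Grossberg): train weights from cluster units to outputs.
for epoch = 1:50
    for i = 1:size(X, 2)
        x = X(:, i); t = T(:, i);
        [~, j] = min(sum((W - x').^2, 2));
        V(:, j) = V(:, j) + beta*(t - V(:, j)); % pull stored output toward target
    end
end

% Recall: the network outputs the outstar vector of the winning cluster unit.
xnew = rand(4, 1);
[~, j] = min(sum((W - xnew').^2, 2));
y = V(:, j)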


EXPERIMENT NO.2

Object:- Implement Back Propagation in Neural Networks.

When building neural networks, there are several steps to take. Perhaps the two
most important steps are implementing forward and backward propagation. Both
terms sound heavy and can be intimidating to beginners, but they can be properly
understood once they are broken down into their individual steps. In this experiment,
we focus on backpropagation and the intuition behind every step of it.

What is Back Propagation?

Back propagation is a technique used when implementing neural networks that allows us to
calculate the gradient of the parameters in order to perform gradient descent and
minimize the cost function. Numerous scholars have described back propagation
as arguably the most mathematically intensive part of a neural network.

Implementing Back Propagation

Assume a simple two-layer neural network with one hidden layer and one output
layer. We can perform back propagation as follows.

Initialize the weights and biases to be used for the neural network: this
involves randomly initializing the weights and biases of the network. The
gradients of these parameters will be obtained from backward propagation and
used in the gradient descent update.
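
The following MATLAB sketch ties these steps together for the assumed two-layer network: random initialization, forward propagation, backward propagation of the gradients, and a gradient descent update. The toy data, layer sizes, activation (sigmoid) and learning rate are illustrative assumptions, not values from the text.

% Minimal sketch: one hidden layer, one output layer, sigmoid units, MSE cost.
rng(0);
X = rand(2, 100);                        % inputs: 2 features, 100 samples
T = double(sum(X) > 1);                  % toy binary targets (1 x 100)
nH = 4; lr = 0.5;
sigmoid = @(z) 1./(1 + exp(-z));

% Step 1: randomly initialize weights and biases.
W1 = randn(nH, 2)*0.1;  b1 = zeros(nH, 1);
W2 = randn(1, nH)*0.1;  b2 = 0;

for epoch = 1:2000
    % Forward propagation.
    Z1 = W1*X + b1;   A1 = sigmoid(Z1);
    Z2 = W2*A1 + b2;  A2 = sigmoid(Z2);

    % Backward propagation: gradients of the cost w.r.t. each parameter.
    m   = size(X, 2);
    dZ2 = (A2 - T) .* A2 .* (1 - A2);    % output-layer error term
    dW2 = dZ2*A1'/m;      db2 = sum(dZ2, 2)/m;
    dZ1 = (W2'*dZ2) .* A1 .* (1 - A1);   % error propagated back to hidden layer
    dW1 = dZ1*X'/m;       db1 = sum(dZ1, 2)/m;

    % Gradient descent update.
    W1 = W1 - lr*dW1;  b1 = b1 - lr*db1;
    W2 = W2 - lr*dW2;  b2 = b2 - lr*db2;
end
mse = mean((A2 - T).^2)                  % final training error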


EXPERIMENT NO.3

Object:- Error Back Propagation Algorithm

The backpropagation algorithm is used to find a local minimum of the error function. The network is
initialized with randomly chosen weights. The gradient of the error function is computed and used to
correct the initial weights. Our task is to compute this gradient recursively.

Learning as gradient descent :-
Multilayered networks are capable of computing a wider range of Boolean functions than networks with a
single layer of computing units. However, the computational effort needed for finding the correct
combination of weights increases substantially when more parameters and more complicated topologies are
considered. Here we discuss a popular learning method capable of handling such large learning problems:
the backpropagation algorithm. This numerical method was used by different research communities in
different contexts, was discovered and rediscovered, until in 1985 it found its way into connectionist AI
mainly through the work of the PDP group [382]. It has been one of the most studied and used algorithms
for neural network learning ever since. The proof of the backpropagation algorithm can be based on a
graphical approach in which the algorithm reduces to a graph labeling problem. This method is not only
more general than the usual analytical derivations, which handle only the case of special network
topologies, but also much easier to follow. It also shows how the algorithm can be efficiently implemented
in computing systems in which only local information can be transported through the network.

The Error Back-Propagation Learning Algorithm :-

• This algorithm was discovered and rediscovered a number of times; for details see, e.g., chapter 4
of Haykin, S., Neural Networks - a comprehensive foundation, 2nd ed., p. 43. This reference also
contains the mathematical details of the derivation of the backpropagation equations (2nd ed.,
pp. 161-167; 3rd ed., pp. 129-134), which we shall omit. (This is covered in COMP9444 Neural Networks.)
• Like perceptron learning, back-propagation attempts to reduce the errors between the output of
the network and the desired result.
• However, assigning blame for errors to hidden nodes (i.e. nodes in the intermediate layers) is not so
straightforward. The error of the output nodes must be propagated back through the hidden nodes.
• The contribution that a hidden node makes to an output node is related to the strength of the
weight on the link between the two nodes and the level of activation of the hidden node when the
output node was given the wrong level of activation.
• This can be used to estimate the error value for a hidden node in the penultimate layer, and that
can, in turn, be used in making error estimates for earlier layers.
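
The blame-assignment rule in the last two points can be written as a single vectorized MATLAB expression. This is only an illustrative sketch with made-up numbers, assuming three sigmoid hidden units feeding one output unit whose error term has already been computed.

% Minimal sketch of propagating an output error term back to a hidden layer.
a_hidden  = [0.7; 0.2; 0.9];        % activations of 3 hidden (sigmoid) units
W_out     = [0.5 -0.3 0.8];         % weights from hidden units to 1 output unit
delta_out = -0.12;                  % error term already computed at the output

% Each hidden unit's share of the blame is weighted by the link strength and
% scaled by the sensitivity of its own activation (sigmoid derivative a*(1-a)).
delta_hidden = (W_out' * delta_out) .* a_hidden .* (1 - a_hidden)

% Applying the same rule again would give error terms for any earlier layer.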


EXPERIMENT NO.4

Object:- Generate MATLAB function for simulating shallow neural network


This function generates a MATLAB® function for simulating a shallow neural network. genFunction does not
support deep learning networks such as convolutional or LSTM networks. For more information on code
generation for deep learning, see Deep Learning Code Generation.
Syntax
genFunction(net,pathname)
genFunction(___,'MatrixOnly','yes')
genFunction(___,'ShowLinks','no')
Description
genFunction(net,pathname) generates a complete stand-alone MATLAB function for simulating a neural
network including all settings, weight and bias values, module functions, and calculations in one file. The
result is a standalone MATLAB function file. You can also use this function with MATLAB
Compiler™ and MATLAB Coder™ tools.
genFunction(___,'MatrixOnly','yes') overrides the default cell/matrix notation and instead generates a
function that uses only matrix arguments compatible with MATLAB Coder tools. For static networks, the
matrix columns are interpreted as independent samples. For dynamic networks, the matrix columns are
interpreted as a series of time steps. The default value is 'no'.
genFunction(___,'ShowLinks','no') disables the default behavior of displaying links to generated help and
source code. The default is 'yes'.
Examples
Create Functions from Static Neural Network
This example shows how to create a MATLAB function and a MEX-function from a static neural network.
First, train a static network and calculate its outputs for the training data.
[x,t] = bodyfat_dataset;
bodyfatNet = feedforwardnet(10);
bodyfatNet = train(bodyfatNet,x,t);
y = bodyfatNet(x);
Next, generate and test a MATLAB function. Then the new function is compiled to a shared/dynamically
linked library with mcc.
genFunction(bodyfatNet,'bodyfatFcn');
y2 = bodyfatFcn(x);
accuracy2 = max(abs(y-y2))
mcc -W lib:libBodyfat -T link:lib bodyfatFcn
Next, generate another version of the MATLAB function that supports only matrix arguments (no cell
arrays), and test the function. Use the MATLAB Coder tool codegen to generate a MEX-function, which is
also tested.
genFunction(bodyfatNet,'bodyfatFcn','MatrixOnly','yes');


y3 = bodyfatFcn(x);
accuracy3 = max(abs(y-y3))

x1Type = coder.typeof(double(0),[13 Inf]); % Coder type of input 1


codegen bodyfatFcn.m -config:mex -o bodyfatCodeGen -args {x1Type}
y4 = bodyfatCodeGen(x);
accuracy4 = max(abs(y-y4))
Create Functions from Dynamic Neural Network
This example shows how to create a MATLAB function and a MEX-function from a dynamic neural network.
First, train a dynamic network and calculate its outputs for the training data.
[x,t] = maglev_dataset;
maglevNet = narxnet(1:2,1:2,10);
[X,Xi,Ai,T] = preparets(maglevNet,x,{},t);
maglevNet = train(maglevNet,X,T,Xi,Ai);
[y,xf,af] = maglevNet(X,Xi,Ai);
Next, generate and test a MATLAB function. Use the function to create a shared/dynamically linked library
with mcc.
genFunction(maglevNet,'maglevFcn');
[y2,xf,af] = maglevFcn(X,Xi,Ai);
accuracy2 = max(abs(cell2mat(y)-cell2mat(y2)))
mcc -W lib:libMaglev -T link:lib maglevFcn
Next, generate another version of the MATLAB function that supports only matrix arguments (no cell
arrays), and test the function. Use the MATLAB Coder tool codegen to generate a MEX-function, which is
also tested.
genFunction(maglevNet,'maglevFcn','MatrixOnly','yes');
x1 = cell2mat(X(1,:)); % Convert each input to matrix
x2 = cell2mat(X(2,:));
xi1 = cell2mat(Xi(1,:)); % Convert each input state to matrix
xi2 = cell2mat(Xi(2,:));
[y3,xf1,xf2] = maglevFcn(x1,x2,xi1,xi2);
accuracy3 = max(abs(cell2mat(y)-y3))

x1Type = coder.typeof(double(0),[1 Inf]); % Coder type of input 1


x2Type = coder.typeof(double(0),[1 Inf]); % Coder type of input 2
xi1Type = coder.typeof(double(0),[1 2]); % Coder type of input 1 states
xi2Type = coder.typeof(double(0),[1 2]); % Coder type of input 2 states
codegen maglevFcn.m -config:mex -o maglevNetCodeGen -args {x1Type x2Type xi1Type xi2Type}
[y4,xf1,xf2] = maglevNetCodeGen(x1,x2,xi1,xi2);
dynamic_codegen_accuracy = max(abs(cell2mat(y)-y4))
Input Arguments
net — neural network
network object
Neural network, specified as a network object.
Example: net = feedforwardnet(10);


EXPERIMENT NO.5

Object:- Implementation of fuzzy operations like Difference & De Morgan's law.

Among the basic operations which can be performed on fuzzy sets are the operations of union, intersection,
complement, algebraic product and algebraic sum. In addition to these operations, new operations called
"bounded-sum" and "bounded-difference" were defined by L. A. Zadeh to investigate fuzzy reasoning, which
provides a way of dealing with reasoning problems that are too complex for precise solution. This experiment
investigates the algebraic properties of fuzzy sets under these new operations of bounded-sum and
bounded-difference, and the properties of fuzzy sets in the case where these new operations are combined
with the well-known operations of union, intersection, algebraic product and algebraic sum.

1. INTRODUCTION

Among the well-known operations which can be performed on fuzzy sets are the operations of union,
intersection, complement, algebraic product and algebraic sum. Much research concerning fuzzy sets and
their applications to automata theory, logic, control, games, topology, pattern recognition, integrals,
linguistics, taxonomy, systems, decision making, information retrieval and so on has been undertaken by
using these operations on fuzzy sets (see the bibliography in Gaines (1977) and Kandel (1978)). For example,
union, intersection and complement are found in most papers relating to fuzzy sets. Algebraic product and
algebraic sum are also used in the study of fuzzy events (Zadeh, 1968), fuzzy automata (Santos, 1972), fuzzy
logic (Goguen, 1968), fuzzy semantics (Zadeh, 1971) and so on. In addition to these operations, new
operations called "bounded-sum" and "bounded-difference" were introduced by Zadeh (1975) to investigate
fuzzy reasoning, which provides a way of dealing with reasoning problems that are too complex for precise
solution. This experiment investigates the algebraic properties of fuzzy sets under bounded-sum and
bounded-difference, as well as the properties of fuzzy sets in the case where these new operations are
combined with the well-known operations of union, intersection, algebraic product and algebraic sum.

FUZZY SETS AND THEIR OPERATIONS

We shall briefly review fuzzy sets and their operations of union, intersection, complement, algebraic
product, algebraic sum, bounded-sum, bounded-difference and bounded-product, which is a dual operation
for bounded-sum.

Fuzzy Sets: A fuzzy set A in a universe of discourse U is characterized by a membership function μA which
takes values in the unit interval [0, 1], i.e.

    μA : U → [0, 1].                                                  (1)

The value of μA at u (∈ U), written μA(u), represents the grade of membership (grade, for short) of u in A
and is a point in [0, 1]. The operations on fuzzy sets A and B are defined pointwise as follows.

    Union:               μ(A ∪ B)(u) = max(μA(u), μB(u))              (2)
    Intersection:        μ(A ∩ B)(u) = min(μA(u), μB(u))              (3)
    Complement:          μA'(u) = 1 − μA(u)                           (4)
    Algebraic Product:   μ(A · B)(u) = μA(u) μB(u)                    (5)
    Algebraic Sum:       μ(A + B)(u) = μA(u) + μB(u) − μA(u) μB(u)
                                     = 1 − (1 − μA(u))(1 − μB(u))     (6)
    Bounded-Sum:         μ(A ⊕ B)(u) = min(1, μA(u) + μB(u))          (7)
    Bounded-Difference:  μ(A ⊖ B)(u) = max(0, μA(u) − μB(u))          (8)
    Bounded-Product:     μ(A ⊗ B)(u) = max(0, μA(u) + μB(u) − 1)      (9)
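
Since all of these operations act pointwise on membership grades, they can be implemented directly with elementwise MATLAB operations. The membership values below are illustrative assumptions over a small discrete universe, not data from the text.

% Minimal sketch of the fuzzy-set operations (2)-(9) on sample membership grades.
muA = [0.1 0.4 0.7 1.0 0.3];        % membership grades of A over a 5-point universe
muB = [0.6 0.2 0.9 0.5 0.3];        % membership grades of B

muUnion = max(muA, muB);            % union (2)
muInter = min(muA, muB);            % intersection (3)
muNotA  = 1 - muA;                  % complement (4)
muProd  = muA .* muB;               % algebraic product (5)
muASum  = muA + muB - muA.*muB;     % algebraic sum (6)
muBSum  = min(1, muA + muB);        % bounded-sum (7)
muBDiff = max(0, muA - muB);        % bounded-difference (8)
muBProd = max(0, muA + muB - 1);    % bounded-product (9)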

ALGEBRAIC PROPERTIES OF FUZZY SETS UNDER VARIOUS KINDS OF OPERATIONS

In this section we shall investigate the algebraic properties of fuzzy sets under the operations (2)-(9). We
shall first review the well-known properties of fuzzy sets under union (2), intersection (3), and
complement (4).

I. The Case of Union (∪) and Intersection (∩)

Let A, B and C be fuzzy sets in a universe of discourse U;
then we have (see Zadeh (1965)):


Idempotent laws:    A ∪ A = A,   A ∩ A = A.                                     (10)

Commutative laws:   A ∪ B = B ∪ A,   A ∩ B = B ∩ A.                             (11)

Associative laws:   (A ∪ B) ∪ C = A ∪ (B ∪ C),   (A ∩ B) ∩ C = A ∩ (B ∩ C).     (12)

Absorption laws:    A ∪ (A ∩ B) = A,   A ∩ (A ∪ B) = A.                         (13)

Distributive laws:  A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C),
                    A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C).                            (14)

Involution law:     (A')' = A.                                                  (15)

De Morgan's laws:   (A ∪ B)' = A' ∩ B',   (A ∩ B)' = A' ∪ B'.                   (16)

Identity laws:      A ∪ O = A,   A ∪ U = U,   A ∩ O = O,   A ∩ U = A.           (17)

Complement laws:    A ∪ A' ≠ U,   A ∩ A' ≠ O,                                   (18)
where O is the empty fuzzy set defined by μO = 0.

Note. Equations (18) can be expressed more precisely as
                    0.5U ⊆ A ∪ A' ⊆ U,   O ⊆ A ∩ A' ⊆ 0.5U,                     (19)
where μ(0.5U) = 0.5 μU = 0.5 × 1 = 0.5.

THEOREM 1 (Zadeh, 1965). Fuzzy sets in U form a distributive lattice under ∪ and ∩, but do not form a
Boolean lattice, since A' is not the complement of A in the lattice sense (see (18), (19)). (A set L with two
operations ∨ and ∧ satisfying the idempotent, commutative, associative and absorption laws is said to be a
lattice. If the lattice L satisfies the distributive laws, then L is a distributive lattice.)
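
A quick numerical check of De Morgan's laws (16) and of the complement laws (18) can be done in MATLAB. The sample membership grades are again illustrative assumptions.

% Minimal sketch: verify De Morgan's laws and illustrate the complement laws.
muA = [0.1 0.4 0.7 1.0 0.3];        % illustrative membership grades of A
muB = [0.6 0.2 0.9 0.5 0.3];        % illustrative membership grades of B

lhs1 = 1 - max(muA, muB);   rhs1 = min(1 - muA, 1 - muB);   % (A u B)' = A' n B'
lhs2 = 1 - min(muA, muB);   rhs2 = max(1 - muA, 1 - muB);   % (A n B)' = A' u B'
deMorganHolds = isequal(lhs1, rhs1) && isequal(lhs2, rhs2)

% Unlike crisp sets, A u A' need not equal U and A n A' need not be empty,
% which is why fuzzy sets form a distributive but not a Boolean lattice (18, 19).
unionWithComplement = max(muA, 1 - muA)    % not all ones in general
interWithComplement = min(muA, 1 - muA)    % not all zeros in general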


EXPERIMENT NO.6

Object:- Implementation of McCulloch-Pitts model for OR gate & AND gate.

I. INTRODUCTION:

The first formal definition of a synthetic neuron model, based on highly simplified considerations of the
biological model, was formulated by McCulloch and Pitts in 1943. They drew on three sources: knowledge of
the basic physiology and function of neurons in the brain; the formal analysis of propositional logic due to
Russell and Whitehead; and Turing's theory of computation. They proposed a model of artificial neurons in
which each neuron is characterized as being "on" or "off," with a switch to "on" occurring in response to
stimulation by a sufficient number of neighbouring neurons. The state of a neuron was conceived of as
"factually equivalent to a proposition which proposed its adequate stimulus." They showed, for example,
that any computable function could be computed by some network of connected neurons, and that all the
logical connectives could be implemented by simple net structures. [1]

II. McCULLOCH-PITTS MODEL:

Every neuron model consists of a processing element with synaptic input connections and a single output.
The "neurons" operate under the following assumptions:
i. They are binary devices (Vi ∈ {0, 1}).
ii. Each neuron has a fixed threshold value, theta (θ).
iii. The neuron receives inputs from excitatory synapses, all having identical weights.
iv. Inhibitory inputs have an absolute veto power over any excitatory inputs.

At each time step the neurons are simultaneously (synchronously) updated by summing the weighted
excitatory inputs and setting the output (Vi) to 1 if the sum is greater than or equal to the threshold and the
neuron receives no inhibitory input. The architecture is shown below:

Fig-1: Architecture of McCulloch-Pitts Model


From the above figure, the connection paths are of two types: excitatory or inhibitory. Excitatory
connections have positive weights, denoted by "w", and inhibitory connections have negative weights,
denoted by "p". The neuron fires if the net input to the neuron reaches the threshold. The threshold is set so
that the inhibition is absolute, because any nonzero inhibitory input will prevent the neuron from firing. It
takes only one time step for a signal to pass over one connection link. Here "y" is taken as the output, and
X1, X2, ..., Xn (excitatory) and Xn+1, Xn+2, ..., Xn+m (inhibitory) are taken as the input signals.

The McCulloch-Pitts neuron Y has the activation function

    f(yin) = 1 if yin ≥ θ
             0 if yin < θ

where θ is the threshold, yin is the net input and y is the output. Using the McCulloch-Pitts model we are
going to implement the following logic gates:
i. OR Gate  ii. NOT Gate  iii. AND Gate  iv. NAND Gate  v. XOR Gate  vi. NOR Gate

A. OR GATE:
The OR gate is a digital logic gate that implements logical disjunction; it behaves according to its truth
table. A high output, i.e. 1, results if one or both of the inputs to the gate are high (1). If both inputs are low,
the result is low (0) [1]. A plus (+) is used to show the OR operation. [2] Its block diagram and truth table are
shown below.

IMPLEMENTATION OF McCULLOCH-PITTS MODEL:

Fig -3: Architecture of OR Gate

The threshold for the unit is 3. [3] The net input is Yin = 3A + 3B. The output is given by

    Y = f(Yin) = 1 if Yin ≥ 3
                 0 if Yin < 3
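
A minimal MATLAB sketch of this OR unit, using the weights and threshold stated above (both excitatory weights equal to 3, threshold 3) and enumerating the truth table; the variable names are illustrative.

% McCulloch-Pitts OR gate: weights [3 3], threshold 3.
w = [3 3]; theta = 3;
inputs = [0 0; 0 1; 1 0; 1 1];          % all combinations of A and B
for k = 1:size(inputs, 1)
    yin = inputs(k, :) * w';            % net input yin = 3A + 3B
    y   = double(yin >= theta);         % fires only when yin reaches the threshold
    fprintf('A=%d B=%d -> y=%d\n', inputs(k,1), inputs(k,2), y);
end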


B. AND GATE:

It is a logic gate that implements conjunction. The output is high (1) only when both inputs are high;
otherwise it is low (0). [5]

IMPLEMENTATION OF McCULLOCH-PITTS MODEL:

The threshold value is 2. The net input is yin = A + B. The output is given by

    Y = f(yin) = 1 if yin ≥ 2
                 0 if yin < 2
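
The same firing rule gives the AND gate with the stated unit weights and threshold 2; a minimal MATLAB sketch follows, again with illustrative variable names.

% McCulloch-Pitts AND gate: weights [1 1], threshold 2.
w = [1 1]; theta = 2;
inputs = [0 0; 0 1; 1 0; 1 1];          % all combinations of A and B
for k = 1:size(inputs, 1)
    yin = inputs(k, :) * w';            % net input yin = A + B
    y   = double(yin >= theta);         % fires only when both inputs are 1
    fprintf('A=%d B=%d -> y=%d\n', inputs(k,1), inputs(k,2), y);
end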

