INDEX
S.NO.  NAME OF PRACTICAL                                            DATE OF PERFORM  DATE OF SUBMISSION  REMARK
5.     Implementation of McCulloch-Pitts model for OR gate & AND gate
EXPERIMENT NO.1
COUNTER PROPAGATION:-
Counter Propagation Networks (CPN) are multilayer networks built from a combination of input, competitive, and output layers. Counter propagation combines two well-known algorithms: the self-organizing map of Kohonen and the Grossberg outstar (Liang et al 2002). The Counter Propagation network can be applied to data compression, function approximation, or pattern association.

Full Counter Propagation Neural Network (Full CPNN): The full CPNN possesses a generalization capability which allows it to produce a correct output even when it is given an input vector that is partially incomplete or partially incorrect (Freeman and Skapura 1999). Full CPNN can represent a large number of vector pairs, x:y, by constructing a lookup table. Figure 6.1 shows the schematic block diagram of the restoration process using full CPNN. The architecture of full CPNN for image restoration is shown in Figure 6.2. The architecture of a counter propagation network resembles an instar and outstar model. Basically, it has two input layers and two output layers, with a hidden (cluster) layer common to the input and output layers. The model which connects the input layers to the hidden layer is called the instar model, and the model which connects the hidden layer to the output layer is called the outstar model. The weights are updated in both the instar and outstar models. The instar model performs the first phase of training and the outstar model performs the second phase. The network is fully interconnected.

The major aim of full CPNN is to provide an efficient means of representing a large number of vector pairs, X:Y, by adaptively constructing a lookup table. It produces an approximation of X:Y based on an input of an X vector alone, an input of a Y vector alone, or an input of an X:Y pair, possibly with some distorted or missing elements in either or both vectors. During the first phase of training of the full counter propagation network, the training pairs X:Y are used to form the clusters. The full CPNN is used for bi-directional mapping: X is the input image and Y is the degraded image, and X* and Y* are the restored approximations of X and Y, respectively.
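The two training phases described above can be sketched as follows. This is a minimal winner-take-all illustration, not the full architecture of Figures 6.1 and 6.2; the learning rates, epoch counts, and the seeding of the cluster weights from the first training pairs are all assumptions made for the sketch.

```python
import numpy as np

def train_full_cpn(X, Y, n_clusters, lr_in=0.3, lr_out=0.1, epochs=50):
    """Two-phase full-CPN training on the concatenated pairs (x, y)."""
    XY = np.hstack([X, Y])
    W = XY[:n_clusters].copy()   # instar weights, seeded from the first pairs
    U = np.zeros_like(W)         # outstar weights: the adaptive lookup table
    # Phase 1 (instar): pull the winning cluster toward each training pair.
    for _ in range(epochs):
        for p in XY:
            j = np.argmin(np.linalg.norm(W - p, axis=1))
            W[j] += lr_in * (p - W[j])
    # Phase 2 (outstar): learn the lookup-table entry stored at each cluster.
    for _ in range(epochs):
        for p in XY:
            j = np.argmin(np.linalg.norm(W - p, axis=1))
            U[j] += lr_out * (p - U[j])
    return W, U

def recall_from_x(x, W, U, dx):
    # Recall with an X vector alone: match only the x-part of the instar
    # weights, then read the restored X* and associated Y* from the table.
    j = np.argmin(np.linalg.norm(W[:, :dx] - x, axis=1))
    return U[j, :dx], U[j, dx:]
```

Recall with a Y vector alone works symmetrically by matching the y-part of the instar weights, which is what makes the mapping bi-directional.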
EXPERIMENT NO.2
When building neural networks, there are several steps to take. Perhaps the two
most important steps are implementing forward and backward propagation. Both
these terms sound really heavy and are always scary to beginners. The absolute
truth is that these techniques can be properly understood if they are broken down
into their individual steps. In this tutorial, we will focus on backpropagation and
the intuition behind every step of it.
Assume a simple two-layer neural network with one hidden layer and one output
layer. We can perform backpropagation as follows.
Initialize the weights and biases to be used for the neural network: this
involves randomly initializing the weights and biases of the network. The
gradients of these parameters will be obtained from backward propagation and
used by gradient descent to update the parameters.
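The initialization and the two propagation passes can be sketched in NumPy as follows; the layer sizes, sigmoid activations, and learning rate are assumptions made for illustration, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def init_params(n_in, n_hidden, n_out):
    # Randomly initialize the weights; biases start at zero.
    return {
        "W1": rng.normal(0, 0.5, (n_hidden, n_in)), "b1": np.zeros((n_hidden, 1)),
        "W2": rng.normal(0, 0.5, (n_out, n_hidden)), "b2": np.zeros((n_out, 1)),
    }

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, X):
    # Forward propagation through the hidden and output layers.
    Z1 = params["W1"] @ X + params["b1"]; A1 = sigmoid(Z1)
    Z2 = params["W2"] @ A1 + params["b2"]; A2 = sigmoid(Z2)
    return Z1, A1, Z2, A2

def backward(params, X, Y, lr=1.0):
    # Backward propagation: compute gradients, then one gradient-descent step.
    m = X.shape[1]
    Z1, A1, Z2, A2 = forward(params, X)
    dZ2 = A2 - Y                                  # error at the output layer
    dW2 = dZ2 @ A1.T / m; db2 = dZ2.mean(axis=1, keepdims=True)
    dZ1 = (params["W2"].T @ dZ2) * A1 * (1 - A1)  # error propagated to hidden layer
    dW1 = dZ1 @ X.T / m; db1 = dZ1.mean(axis=1, keepdims=True)
    for k, g in zip(("W1", "b1", "W2", "b2"), (dW1, db1, dW2, db2)):
        params[k] -= lr * g                       # update with the gradient
```

Repeating `backward` over the training data drives the error down, which is the whole training loop for this small network.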
EXPERIMENT NO.3
The backpropagation algorithm is used to find a local minimum of the error function. The network is
initialized with randomly chosen weights. The gradient of the error function is computed and used to
correct the initial weights. Our task is to compute this gradient recursively.
Learning as gradient descent
We saw in the last chapter that multilayered networks are capable of
computing a wider range of Boolean functions than networks with a single layer of computing units.
However, the computational effort needed for finding the correct combination of weights increases
substantially when more parameters and more complicated topologies are considered. In this chapter we
discuss a popular learning method capable of handling such large learning problems — the backpropagation
algorithm. This numerical method was used by different research communities in different contexts, was
discovered and rediscovered, until in 1985 it found its way into connectionist AI mainly through the work of
the PDP group [382]. It has been one of the most studied and used algorithms for neural networks learning
ever since. In this chapter we present a proof of the backpropagation algorithm based on a graphical
approach in which the algorithm reduces to a graph labeling problem. This method is not only more general
than the usual analytical derivations, which handle only the case of special network topologies, but also
much easier to follow. It also shows how the algorithm can be efficiently implemented in computing systems
in which only local information can be transported through the network.
This algorithm was discovered and rediscovered a number of times - for details, see, e.g. chapter 4
of Haykin, S. Neural Networks - a comprehensive foundation, 2nd ed., p.43. This reference also
contains the mathematical details of the derivation of the backpropagation equations, (2nd ed.,
p.161-167, 3rd ed., p.129-134) which we shall omit. (This is covered in COMP9444 Neural Networks.)
Like perceptron learning, back-propagation attempts to reduce the errors between the output of
the network and the desired result.
However, assigning blame for errors to hidden nodes (i.e. nodes in the intermediate layers) is not so
straightforward. The error of the output nodes must be propagated back through the hidden nodes.
The contribution that a hidden node makes to an output node is related to the strength of the
weight on the link between the two nodes and the level of activation of the hidden node when the
output node was given the wrong level of activation.
This can be used to estimate the error value for a hidden node in the penultimate layer, and that
can, in turn, be used in making error estimates for earlier layers.
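As a toy numeric illustration of that estimate, with all numbers made up for the example: for a sigmoid hidden unit, the blame flowing back from the output is the output error weighted by the link strength, scaled by the slope of the hidden activation.

```python
# Made-up numbers for one hidden node h feeding one output node o
# whose error term delta_o has already been computed.
w_ho = 0.8        # weight on the link from hidden node h to output node o
delta_o = 0.25    # error term already computed at the output node
a_h = 0.6         # activation level of the hidden node on this example

# For a sigmoid unit the slope of the activation is a_h * (1 - a_h);
# the hidden error estimate is the back-propagated error times that slope.
delta_h = (w_ho * delta_o) * a_h * (1 - a_h)   # 0.2 * 0.24 = 0.048
```

The same formula, summed over all output nodes a hidden node feeds, gives the error estimate for that node, and repeating it layer by layer gives the estimates for earlier layers.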
EXPERIMENT NO.3
% Evaluate the generated network function and report the worst-case error.
y3 = bodyfatFcn(x);          % network output for the input matrix x
accuracy3 = max(abs(y - y3)) % maximum absolute deviation from the targets y
EXPERIMENT NO.4
Among the basic operations which can be performed on fuzzy sets are the operations of union, intersection,
complement, algebraic product and algebraic sum. In addition to these operations, new operations called
"bounded-sum" and "bounded-difference" were defined by L. A. Zadeh to investigate the fuzzy reasoning
which provides a way of dealing with the reasoning problems which are too complex for precise solution.
This paper investigates the algebraic properties of fuzzy sets under these new operations of bounded-sum
and bounded-difference and the properties of fuzzy sets in the case where these new operations are
combined with the well-known operations of union, intersection, algebraic product and algebraic sum.
1. INTRODUCTION
Among the well-known operations which can be performed on fuzzy sets are the operations
of union, intersection, complement, algebraic product and algebraic sum. Much research concerning fuzzy
sets and their applications to automata theory, logic, control, game, topology, pattern recognition, integral,
linguistics, taxonomy, system, decision making, information retrieval and so on, has been earnestly
undertaken by using these operations for fuzzy sets (see the bibliography in Gaines (1977) and Kandel
(1978)). For example, union, intersection and complement are found in most of papers relating to fuzzy sets.
Algebraic product and algebraic sum are also used in the study of fuzzy events (Zadeh, 1968), fuzzy
automata (Santos, 1972), fuzzy logic (Goguen, 1968), fuzzy semantics (Zadeh, 1971) and so on. In addition to
these operations, new operations called "bounded-sum" and
"bounded-difference" were introduced by Zadeh (1975) to investigate the fuzzy reasoning which provides a
way of dealing with the reasoning problems which are too complex for precise solution. This paper
investigates the algebraic properties of fuzzy sets under bounded-sum and bounded-difference as well as
the properties of fuzzy sets in the case where these new operations are combined with the well-known
operations of union, intersection, algebraic product and algebraic sum.
FUZZY SETS AND THEIR OPERATIONS
We shall briefly review fuzzy sets and their operations of union, intersection, complement, algebraic product, algebraic sum, bounded-sum, bounded-difference and bounded-product, which is a dual operation for bounded-sum.

Fuzzy Sets: A fuzzy set A in a universe of discourse U is characterized by a membership function μ_A which takes values in the unit interval [0, 1], i.e.,

    μ_A : U → [0, 1].                                              (1)

The value of μ_A at u (∈ U), written μ_A(u), represents the grade of membership (grade, for short) of u in A and is a point in [0, 1]. The operations on fuzzy sets A and B are listed as follows.

    Union:              A ∪ B  ⇔  μ_{A ∪ B} = μ_A ∨ μ_B.           (2)
    Intersection:       A ∩ B  ⇔  μ_{A ∩ B} = μ_A ∧ μ_B.           (3)
    Complement:         ¬A     ⇔  μ_{¬A} = 1 − μ_A.                (4)
    Algebraic Product:  A · B  ⇔  μ_{A · B} = μ_A μ_B.             (5)
    Algebraic Sum:      A + B  ⇔  μ_{A + B} = μ_A + μ_B − μ_A μ_B
                                            = 1 − (1 − μ_A)(1 − μ_B).  (6)
    Bounded-Sum:        A ⊕ B  ⇔  μ_{A ⊕ B} = 1 ∧ (μ_A + μ_B).     (7)
    Bounded-Difference: A ⊖ B  ⇔  μ_{A ⊖ B} = 0 ∨ (μ_A − μ_B).     (8)
    Bounded-Product:    A ⊗ B  ⇔  μ_{A ⊗ B} = 0 ∨ (μ_A + μ_B − 1). (9)

ALGEBRAIC PROPERTIES OF FUZZY SETS UNDER VARIOUS KINDS OF OPERATIONS
In this section we shall investigate the algebraic properties of fuzzy sets under the operations (2)-(9). We shall first review the well-known properties of fuzzy sets under union (2), intersection (3), and complement (4).
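The operations above act pointwise on membership grades, so they can be sketched directly; the Python function names here are just illustrative labels for the definitions.

```python
# Each operation acts on membership grades a, b in [0, 1].
def union(a, b):           return max(a, b)          # union (max)
def intersection(a, b):    return min(a, b)          # intersection (min)
def complement(a):         return 1 - a              # complement
def alg_product(a, b):     return a * b              # algebraic product
def alg_sum(a, b):         return a + b - a * b      # algebraic sum
def bounded_sum(a, b):     return min(1, a + b)      # Zadeh's bounded-sum
def bounded_diff(a, b):    return max(0, a - b)      # bounded-difference
def bounded_product(a, b): return max(0, a + b - 1)  # dual of bounded-sum
```

For example, `alg_sum(0.5, 0.5)` gives 0.75, while `bounded_sum(0.7, 0.6)` saturates at 1, which is exactly the "bounded" behaviour that distinguishes the two sums.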
EXPERIMENT NO.5
I. INTRODUCTION:
The first formal definition of a synthetic neuron model, based on the highly simplified considerations of the
biological model described, was formulated by McCulloch and Pitts in 1943. They drew on three sources:
knowledge of the basic physiology and function of neurons in the brain; the formal analysis of propositional
logic due to Russell and Whitehead; and Turing's theory of computation. They proposed a model of artificial
neurons in which each neuron is characterized as being "on" or "off," with a switch to "on" occurring in
response to stimulation by a sufficient number of neighbouring neurons. The state of a neuron was
conceived of as "factually equivalent to a proposition which proposed its adequate stimulus." They showed,
for example, that any computable function could be computed by some network of connected neurons, and
that all the logical connectives could be implemented by simple net structures. [1]
Every neuron model consists of a processing element with synaptic input connections and a single output. The
"neurons" operate under the following assumptions:
i. They are binary devices (Vi ∈ {0, 1}).
ii. Each neuron has a fixed threshold value, θ.
iii. The neuron receives inputs from excitatory synapses, all having identical weights.
iv. Inhibitory inputs have an absolute veto power over any excitatory inputs.
At each time step the neurons are simultaneously (synchronously) updated by summing the weighted
excitatory inputs and setting the output (Vi) to 1 if the sum is greater than or equal to the threshold and if
the neuron receives no inhibitory input. Its architecture is shown by:
From the above figure, the connection paths are of two types: excitatory or inhibitory. Excitatory
connections have positive weight, denoted by "w", and inhibitory connections have negative weight, denoted by "p". The
neuron fires if the net input to the neuron is greater than the threshold. The threshold is set so that the
inhibition is absolute: any nonzero inhibitory input will prevent the neuron from firing. It takes only one time
step for a signal to pass over one connection link. Here "y" is taken as the output, and X1, X2, …, Xn
(excitatory) and Xn+1, Xn+2, …, Xn+m (inhibitory) are taken as the input signals.
The McCulloch-Pitts neuron Y has the activation function:
    f(y_in) = 1 if y_in ≥ θ
              0 if y_in < θ
where θ is the threshold and y is the net output. Using the McCulloch-Pitts model we are going to realize the following logic gates:
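Under the assumptions listed above, a single McCulloch-Pitts neuron can be sketched in a few lines; modelling the absolute inhibition as an explicit veto check, rather than through a large negative weight, is an implementation choice made here for clarity.

```python
# Minimal McCulloch-Pitts neuron: binary inputs, identical excitatory
# weights w, fixed threshold theta, and absolute inhibition.
def mp_neuron(excitatory, inhibitory, w=1, theta=2):
    if any(inhibitory):               # assumption iv: inhibitory veto
        return 0
    y_in = w * sum(excitatory)        # net input from the excitatory links
    return 1 if y_in >= theta else 0  # fire when the threshold is reached
```

The gate implementations below all follow this pattern, differing only in their weights and thresholds.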
i. OR Gate ii. NOT Gate iii. AND Gate iv. NAND Gate v. XOR Gate vi. NOR Gate
A. OR GATE:
The OR gate is a digital logic gate that implements logical disjunction; it behaves according to its truth
table. A high output (1) results if one or both of the inputs to the gate are high (1). If both inputs are low, the
result is low (0) [1]. A plus (+) is used to denote the OR operation. [2] Its block diagram and truth table are shown
below.
The threshold for the unit is 3. [3] The net input is y_in = 3A + 3B. The output is given by:
    y = f(y_in) = 1 if y_in ≥ 3
                  0 if y_in < 3
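With the stated weights (3 on each input) and threshold 3, the OR unit can be checked directly in Python:

```python
# McCulloch-Pitts OR gate with the weights and threshold from the text.
def or_gate(a, b):
    y_in = 3 * a + 3 * b          # net input y_in = 3A + 3B
    return 1 if y_in >= 3 else 0  # fire when y_in reaches the threshold 3
```

Evaluating the function over all four input pairs reproduces the OR truth table.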
AND GATE:
It is a logic gate that implements logical conjunction. The output is high (1) only when both inputs are
high; otherwise it is low (0). [5]
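A matching McCulloch-Pitts AND unit can be sketched the same way; the unit weights and threshold of 2 are a common textbook choice, not values stated in the text.

```python
# McCulloch-Pitts AND gate: unit weights, threshold 2 (assumed values).
def and_gate(a, b):
    y_in = 1 * a + 1 * b          # net input y_in = A + B
    return 1 if y_in >= 2 else 0  # fires only when both inputs are on
```

Evaluating the function over all four input pairs reproduces the AND truth table.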