
Image Encryption with Morphological Memories

María Elena Acevedo Mosqueda, José Ángel Martínez Navarro

INSTITUTO POLITÉCNICO NACIONAL, ESCUELA SUPERIOR DE INGENIERÍA MECÁNICA Y ELÉCTRICA ZACATENCO

Av. IPN s/n Col. Lindavista, C.P. 07738, Ciudad de México, México.

E-mail: ame1972@gmail.com, josekun13@gmail.com

Abstract: Security has always been a central concern in telecommunications. As networks become more accessible, private information becomes easier to obtain by illicit means, so it is necessary to protect information in better ways. The proposed method creates a matrix from the image to be transmitted; the recipient of this matrix must know how to operate on it to recover the original pixel values, thus protecting the information.
Key-Words: associative memories, morphological memories.

1 Introduction
Security is currently a subject of great importance in the management of data. For that reason we have developed a method that encrypts an image using an algorithm that is almost impossible to decipher without the necessary data. Other algorithms have been used to achieve similar ends, such as the method in "A Puzzle Solver and Its Application in Speech Descrambling" [1], developed at the Department of Computer Science & Information Engineering, National Central University, Taiwan, which achieves an effectiveness of 98.67%; in comparison, our method achieves 100% image recovery.

In our case we use associative memories to create a matrix that is sent to the image receiver, who combines it with a set of predefined patterns to recover the original image.

Anyone could copy the transmitted matrix and try to recover the image from it, but this is practically impossible without the predefined patterns, simply because of the enormous number of possible combinations. To understand how this process works, it is necessary to understand the concept of an associative memory.

An associative memory is a system that relates input patterns with output patterns. The purpose of an associative memory is to retrieve the output pattern when presented with its corresponding input pattern.

Designing an associative memory requires two phases: learning and recovery. In the first phase, the memory learns to associate input patterns with output patterns. In the recovery phase, the memory returns the output pattern corresponding to the input pattern it receives.

Most associative memories work only with binary numbers, which is a big disadvantage when handling images, because the amount of information becomes very large. For this reason we decided to use so-called morphological associative memories, which work directly with decimal values; this reduces the amount of data to handle and enables significantly faster processing.

To implement this method, the image to be sent is first converted to shades of gray. The image is then divided into square segments of n x n pixels, where n depends on the size of the image. The pixel values of each segment are vectorized, and the learning matrix is created by associating each segment with a predetermined pattern that the receiver also knows, which guarantees 100% recovery in the second phase.

The following sections explain the operation of morphological memories, their implementation in this method, the results obtained, and the conclusions reached.

2 Morphological Memories
This section discusses the operation of morphological memories, starting with associative memories in general.

Associative memories are a class of artificial intelligence model that associates two patterns. Their interesting property is that they can recover an association even when the presented pattern is not identical to the one used during learning. In most cases these patterns must be handled as binary numbers, which greatly increases both the time required for operation and the memory capacity needed.

Morphological associative memories, by contrast, can work with patterns composed of decimal numbers, which reduces the time and memory needed for recovery. Roughly speaking, a morphological memory builds a matrix from a series of additions and comparisons between the two patterns to be associated; we call this the learning matrix. Once the matrix is built, a stored pattern can be recovered by operating the matrix with its partner pattern, as explained below.

2.1 Learning Phase

First, note that there are two types of morphological memories, min and max, which differ in their construction and recovery rules. To build a max-type memory, each term of the associated vector (the known pattern) is added to the first term of the vector v* (in this case, the pixel values of an image segment) to fill the first column of the matrix; this procedure is repeated until all terms of v* have been added. When a second pattern pair is associated, the process is repeated, but before a value is placed in the matrix it is compared with the value already stored there, and the larger of the two is kept in the max matrix. The procedure for the min matrix is identical, with the only difference that the smaller value is kept. This process is repeated until every segment of the image has been associated with its pattern:

Max[i, j] = ⋁( Max[i, j], v[i] + v*[j] )

Min[i, j] = ⋀( Min[i, j], v[i] + v*[j] )
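The learning rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function name and the list-of-vectors interface are assumptions for the example.

```python
import numpy as np

def learn_max_min(patterns, segments):
    """Build max- and min-type morphological memories from pattern/segment pairs,
    following Max[i,j] = max(Max[i,j], v[i] + v*[j]) and its min-type dual.
    `patterns` holds the known pattern vectors v; `segments` holds the
    vectorized image segments v* (one pair per association)."""
    n = len(patterns[0])         # length of the associated pattern v
    m = len(segments[0])         # length of the segment vector v*
    mem_max = np.full((n, m), -np.inf)   # identity for the max update
    mem_min = np.full((n, m), np.inf)    # identity for the min update
    for v, v_star in zip(patterns, segments):
        s = np.add.outer(np.asarray(v), np.asarray(v_star))  # s[i,j] = v[i] + v*[j]
        mem_max = np.maximum(mem_max, s)
        mem_min = np.minimum(mem_min, s)
    return mem_max, mem_min
```

Each association therefore costs only one outer sum and one element-wise comparison per memory, which is where the speed advantage over binary encodings comes from.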

2.2 Recovery Phase

In the recovery phase of the max-type memory, the first term of the associated pattern is subtracted from the first term of the first column, then the second term of the pattern from the second term of that column, and so on. Repeating this with the same pattern over all columns yields a new matrix, from which the vector v* associated with that pattern is obtained. For each column of this new matrix, the smallest of the resulting values is stored in a vector; at the end of this process the vector is identical to v*. For the min-type memory, as in the learning phase, only the comparison changes: the largest value in each column is stored.

v*[j] = ⋀_{i=1}^{n} ( Max[i, j] − v[i] )

v*[j] = ⋁_{i=1}^{n} ( Min[i, j] − v[i] )
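The two recovery equations translate directly into column-wise reductions. Again this is only a sketch under the same assumptions as the learning example (function names are illustrative, not from the paper):

```python
import numpy as np

def recall_max(mem_max, v):
    """Recover v* from a max-type memory: v*[j] = min_i (Max[i,j] - v[i])."""
    return np.min(mem_max - np.asarray(v)[:, None], axis=0)

def recall_min(mem_min, v):
    """Recover v* from a min-type memory: v*[j] = max_i (Min[i,j] - v[i])."""
    return np.max(mem_min - np.asarray(v)[:, None], axis=0)
```

Subtracting the column vector v broadcasts over all columns at once, and the min (or max) over the row axis performs the per-column comparison the text describes.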

2.3 Noise Types

Any associative memory faces three types of noise: additive, subtractive, and mixed. The max and min morphological associative memories are able to handle additive noise and subtractive noise adequately, respectively. However, neither type of memory handles mixed noise adequately. This is illustrated in Figure 1.

Figure 1. Types of noise (additive, subtractive, and mixed) and their effect on the original image when recovered with the morphological max and min memories.

Mixed noise cannot be removed with either of the two types of morphological associative memory, so it must be avoided during transmission. Tests of the proposed method showed 100% recovery at the expected parameter values, as explained later.

3 Implementation
To carry out the proposed method, the image is first divided into sections. We found experimentally that the recovery algorithm is most effective when the image segments are square, so we compute the greatest common divisor of the image's width and height. We then check that this value falls in the range 10 to 100, to avoid creating matrices that occupy excessive memory; if it falls outside this range it is multiplied or divided by ten until the condition holds, unless the original image is 30x30 pixels or smaller. This keeps as many pixels of the image as possible, avoiding the loss of pixels along either dimension, as illustrated in Figure 2.

Figure 2. Image split into segments of 30x30 pixels.
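The segment-size rule above can be sketched as follows. This is a hypothetical helper (the name and the power-of-ten loop structure are assumptions), and it omits the special case for images of 30x30 pixels or smaller:

```python
import math

def segment_side(width, height):
    """Choose the side length of the square segments: take the greatest
    common divisor of width and height and scale it by powers of ten
    until it falls in the range [10, 100]."""
    g = math.gcd(width, height)
    while g > 100:
        g //= 10    # too large: divide by ten
    while g < 10:
        g *= 10     # too small: multiply by ten
    return g
```

For a 300x300 image this yields 30, matching the 30x30 segments of Figure 2.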

The image is then converted to shades of gray to obtain a single value per pixel and to streamline the process. This is achieved with a simple algorithm that multiplies the RGB values by the weights 30% for red, 59% for green, and 11% for blue, and adds the results to obtain the gray-tone value of the pixel, as illustrated in Figure 3.

Figure 3. Image converted to gray scale.
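The per-pixel conversion is a one-line weighted sum. The function name and the rounding choice are assumptions for illustration; the 30/59/11 weights come from the text:

```python
def to_gray(r, g, b):
    """Weighted gray value from the text: 30% red, 59% green, 11% blue."""
    return round(0.30 * r + 0.59 * g + 0.11 * b)
```

The result stays in the 0-255 range, giving the 256 gray levels used by the memories.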

With the values obtained, each image segment can be vectorized and the memories can be used. Here the advantage of working with decimal numbers becomes clear: there are 256 possible gray levels, and working in binary would multiply by 8 both the memory space and the number of calculations required to obtain the association.

The patterns with which the associations are carried out are obtained by a simple algorithm that only requires the number of segments.

We build an n x n matrix, where n is the number of segments; the values on the main diagonal are a number x, determined experimentally, and all other values are zeros. The first column is then the pattern associated with the first segment, and so on.

Figure 4. Application of the learning phase: each segment is vectorized (e.g. <255, 254, 250, .. ..>) and associated with its pattern.

This also simplifies recovery, because the receiving user only needs to know the number of pixels per side of the segments and the number of segments to reassemble the image.

With the number of segments and the pixels per segment, the receiver can rebuild the matrix of known patterns. This allows the process to be performed in reverse, using the min or max memory to retrieve the original image: the patterns are simply used in the order of the association, and depending on the sign of x the recovery uses the max-type or the min-type memory.


Figure 5. Application of the recovery phase.

In the case of Figure 5, x was a negative number, so recovery with the min-type memory was 0% and recovery with the max-type memory was 100%. If a positive number had been used, the result would simply have been the opposite: only the sign of x determines which memory recovers. The magnitude of x also matters; if the value is very close to zero, recovery fails. Experimentally, a value of ±300 gives perfect recovery.

The sending process must take place in a noise-free environment, for example a CD or a flash memory, because although the recovery of the patterns is not affected by subtractive or additive noise, such noise would corrupt the matrix.

4 Results
We carried out 306 tests of the proposed method. In the first 204, the data collected were the value on the diagonal and the size of the segments.

Figure 6. Application of the method.

For testing, we created a database containing five sets of ten images each, grouped by theme, plus a reference image. With these 50 pictures, associations were made twice per image in order to check the behavior of both the min and the max memory.

Figure 7. Example of the method.

We then proceeded to test the hypothesis; a fragment of the results table follows.

Image  Test  Number of segments  Pixels per side of segment  Rec. min (%)  Rec. max (%)
  1      1         100                      30                   100             0
  1      2         100                      30                   5.5           100
  2      3         100                      30                   100          6.62
  2      4         100                      30                  5.72           100
  3      5         100                      30                   100          1.75
  3      6         100                      30                  2.47           100
  4      7         100                      30                   100          7.12
  4      8         100                      30                     0           100
  5      9         100                      30                   100          7.72
  5     10         100                      30                   0.9           100
  6     11         100                      30                   100             4
  6     12         100                      30                   4.2           100
  7     13         475                      10                   100             0
  7     14         475                      10                     0           100
  8     15         100                      30                   100          2.86
  8     16         100                      30                  3.63           100
  9     17         100                      22                   100             0
  9     18         100                      22                  2.45           100
 10     19         100                      22                   100          6.11
 10     20         100                      22                  0.39           100

Table 1. Results of the tests conducted on the first group of pictures.

The table shows that at all the expected values (min: x = 300; max: x = −300) recovery is perfect.

5 Conclusions
Using this method we can send pictures with greater security, because only the sender and the recipient have the patterns needed to retrieve the image.

The results previously obtained with the method proposed in "A Puzzle Solver and Its Application in Speech Descrambling" show a maximum recovery efficiency of 98.67% [1].

Our method, however, shows 100% effectiveness at all the expected values, which always gives a 100% success rate at recovery time, an improvement of 1.33% over the previous method.

6 References
[1] http://cilab.csie.ncu.edu.tw/

[2] T. Altman, "Solving the jigsaw puzzle problem in linear time," Applied Artificial Intelligence, vol. 3, pp. 453-462, 1989.

[3] E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, 1999.

[4] J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence, New York: Academic Press, 2001.

[5] A. Colorni, M. Dorigo, and V. Maniezzo, "Distributed Optimization by Ant Colonies," in Proceedings of the First European Conference on Artificial Life, edited by F. Varela and P. Bourgine, Cambridge, MA: MIT Press, pp. 134-142, 1991.

[6] M. Dorigo, "Ottimizzazione, Apprendimento Automatico, ed Algoritmi Basati su Metafora Naturale," Ph.D. Dissertation, Politecnico di Milano, 1992.

[7] M. Dorigo and L. M. Gambardella, "Ant Colonies for the Traveling Salesman Problem," BioSystems 43, pp. 73-81, 1997.
