Extracting the randomness in information beehives is at the root of many problems in the field of text analytics and data mining. If we attribute this randomness to entropy, though I am not sure how mathematically correct that is, I must say that the data-algorithm duality (DAD) has been stalking our computational logic since its inception.
Just as in our industrial machinery days, our computer scientists have treated data and algorithms as different things. In those days, we treated energy and mass as a duality, entirely distinct from each other. With the emergence of high-energy physics and relativistic frames, we realized the fundamental nature of mass-energy equivalence. Though not on the same astronomical scale, it is high time for us to recognize our skewed approach towards the problems of the data explosion.
My assertion is that data-algorithm duality cannot be a scientific and logical standpoint for the problems of computation. Algorithms can exist alongside data, and sometimes data itself can be the algorithm. There is an essential data-algorithm equivalence (DAE) in the natural and simulated sources and channels of information. This equivalence is not obvious to the current models and methods of computation, because we have fundamentally deviated from this position. Those dealing with data-driven equivalence checking (DDEC) in the area of compiler optimization will know how limited we are in this direction. More saddening still is the realization that we continue to follow the same dualistic approach in DDEC problems.
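To make "data itself can be the algorithm" concrete, here is a minimal sketch in Python: a plain list is ordinary data, yet an interpreter lets that same object act as an algorithm. The instruction names and the interpreter are illustrative assumptions of mine, not an established DAE formalism.

    # Data-as-algorithm sketch: the list 'program' is only data,
    # but run() lets it behave as an algorithm applied to x.
    def run(program, x):
        for op, arg in program:
            if op == "add":
                x = x + arg
            elif op == "mul":
                x = x * arg
        return x

    # The same object is data (it can be stored, inspected, transformed)...
    double_and_shift = [("mul", 2), ("add", 3)]

    # ...and an algorithm (it can be executed).
    print(run(double_and_shift, 10))   # prints 23

The point of the sketch is only that the boundary between the stored object and the executed procedure is a choice of perspective, which is the equivalence claimed above.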
The DAE will not be a singular relation in the manifolds where data and algorithms are naturally positioned. Instead, they are dialectical in themselves. That is, data and algorithms are born at different moments in space-time, yet they are connected by another degree of connectedness. The concepts of listeners and connectors will be very important in designing DAE. Listeners and connectors should be only observers that do not interfere with the relationship between the data and the algorithm.
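One way to read "observers that do not interfere" is a simple observer pattern, sketched below in Python; the class and method names are my own illustrative assumptions, not a prescribed DAE design.

    # Non-interfering listeners sketch: a connector couples a data source
    # to an algorithm, and listeners are notified of each (datum, result)
    # pair without being able to alter either.
    class Connector:
        def __init__(self, algorithm):
            self.algorithm = algorithm
            self.listeners = []

        def attach(self, listener):
            self.listeners.append(listener)

        def push(self, datum):
            result = self.algorithm(datum)
            # Listeners only receive values; they cannot change the
            # data-algorithm relationship.
            for listener in self.listeners:
                listener(datum, result)
            return result

    channel = Connector(algorithm=lambda d: d * d)
    channel.attach(lambda d, r: print(f"observed {d} -> {r}"))
    channel.push(4)   # prints "observed 4 -> 16" and returns 16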
When we deal with the information emitted by sources, whether physical or simulated, it definitely carries a pattern, despite being spontaneous and random. This is quite a natural phenomenon. Why can't we create sources of random information that generate patterns as we desire? This does not mean that the sources or the information will be completely predictable to another listener. But we can ensure that the data will approximate the algorithm, and the algorithm will synchronize with the information from time to time.
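A rough analogy, under my own assumption that a shared seed stands in for the "synchronization" described here: a seeded pseudo-random source looks random to an outside listener, who can only recover the coarse pattern, while a listener who shares the generating algorithm and seed reproduces the data exactly.

    # Pattern-bearing random source sketch (names are illustrative).
    import random

    def source(seed, n):
        # Values look random, yet are fully fixed by the generating algorithm.
        rng = random.Random(seed)
        return [rng.gauss(10.0, 1.0) for _ in range(n)]

    # A listener sharing the algorithm and seed synchronizes exactly...
    assert source(42, 5) == source(42, 5)

    # ...while a listener without them sees only the coarse pattern
    # (a mean near 10), not the individual values.
    data = source(42, 1000)
    print(sum(data) / len(data))   # approximately 10.0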
The question may arise: what will the relationship between data and algorithm be in the moments of equivalence? Will it be symmetrical or asymmetrical? Will it be reciprocal or orthogonal? We must begin this study at the very first element of natural computers and natural information sources. This will be explored in another article.
