
Cybernetics and Systems Analysis, Vol. 43, No. 5, 2007

CYBERNETICS

AUTOMATA THEORY, COMPUTER-AIDED DESIGN,
AND SOME PROBLEMS OF INFORMATICS

Yu. V. Kapitonova and A. A. Letichevsky UDC 519.8

The main stages of formation and development of automata theory and its applications to
computer-aided design of computing systems are considered. Some open problems of modern
theoretical cybernetics are discussed.

Keywords: automata theory, computer-aided design, computing system, theoretical cybernetics, computer science, informatics.

Victor Mikhailovich Glushkov, an outstanding algebraist well known for his works on the solution of the (generalized) fifth Hilbert problem, drastically changed the direction of his activity in 1956 and headed the laboratory of
computational mathematics and computer facilities. He began with the definition of long-term and short-term objectives of
development of a new science, which was later called informatics (at that time, he used the concept of cybernetics in the wide
sense that included main entities and concepts related to computer facilities). Victor Mikhailovich Glushkov perfectly
understood that the development of computers and their applications requires the creation of a new field of fundamental
knowledge that must be based on the stable foundation of modern mathematics. Therefore, one of the short-term objectives was the creation and development of automata theory.
The main works on finite automata theory were written by V. M. Glushkov in 1959–1961. They proceeded simultaneously in two directions. The first direction, namely, the theory of abstract automata, was considered as a mathematical theory that used abstract-algebraic constructions and was oriented primarily toward mathematicians. Original algorithms for the analysis and synthesis of automata were constructed (the synthesis algorithm was based on the concept later called the Glushkov automaton), connections with well-known mathematical problems (the small Burnside problem, etc.) were established, the foundations of the theory of semigroup and group automata were laid, etc. The second direction was oriented
toward applied mathematicians and hardware design engineers. In 1962, V. M. Glushkov published his monograph “Synthesis
of Digital Automata” that played an important role in propagating methods of formalized design among engineers and
increasing their mathematical culture. Based on this book, several generations of specialists in the field of computer engineering
were trained. In 1964, V. M. Glushkov was awarded the Lenin Prize for his works in the field of automata theory.
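The synthesis algorithm mentioned above can be illustrated by the Glushkov (position) construction, which maps a regular expression to a nondeterministic automaton whose states are the occurrences of symbols in the expression plus one initial state. Below is a minimal Python sketch; the AST classes and function names are hypothetical, chosen only for illustration, and are not taken from the original works.

```python
# A minimal sketch of the Glushkov (position) automaton construction,
# assuming a tiny regular-expression AST; names are illustrative only.

class Lit:                       # a single symbol
    def __init__(self, ch): self.ch = ch

class Alt:                       # union: e1 | e2
    def __init__(self, l, r): self.l, self.r = l, r

class Cat:                       # concatenation: e1 e2
    def __init__(self, l, r): self.l, self.r = l, r

class Star:                      # iteration: e*
    def __init__(self, e): self.e = e

def glushkov(expr):
    """Compute nullable plus first/last/follow sets over symbol positions."""
    counter = [0]
    sym, follow = {}, {}

    def walk(e):                 # returns (nullable, first, last)
        if isinstance(e, Lit):
            counter[0] += 1
            p = counter[0]
            sym[p], follow[p] = e.ch, set()
            return False, {p}, {p}
        if isinstance(e, Alt):
            n1, f1, l1 = walk(e.l); n2, f2, l2 = walk(e.r)
            return n1 or n2, f1 | f2, l1 | l2
        if isinstance(e, Cat):
            n1, f1, l1 = walk(e.l); n2, f2, l2 = walk(e.r)
            for p in l1:
                follow[p] |= f2   # last of e1 is followed by first of e2
            return n1 and n2, f1 | (f2 if n1 else set()), l2 | (l1 if n2 else set())
        n, f, l = walk(e.e)       # Star
        for p in l:
            follow[p] |= f        # iteration feeds the expression back into itself
        return True, f, l

    nullable, first, last = walk(expr)
    return nullable, first, last, follow, sym

def accepts(expr, word):
    """Run the position automaton; state 0 is the initial state."""
    nullable, first, last, follow, sym = glushkov(expr)
    if word == "":
        return nullable
    current = {0}
    for ch in word:
        current = {p for q in current
                   for p in (first if q == 0 else follow[q])
                   if sym[p] == ch}
    return bool(current & last)
```

For the expression (a|b)*a the construction yields three symbol positions plus the initial state; in general, the Glushkov automaton of an expression with n symbol occurrences has exactly n + 1 states.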
An important role in the formation of automata theory was played by a seminar on automata theory. From its start in 1959 until the present day, the main participants of the seminar have been Yu. V. Kapitonova, A. A. Letichevsky, V. G. Bodnarchuk, V. N. Redko, F. I. Andon, and other disciples of Glushkov. Later on, each of them created his own line of investigation and continued to develop and enrich automaton-algebraic methods in theoretical and application areas.

DISCRETE TRANSFORMERS

Along with theoretical problems, the question of applications of automata theory arose already at the earliest stages of
its development. All basic algorithms of the technology of development of electronic circuits on the basis of finite automata
theory were implemented on the computer "Kiev" and formed the basis of the so-called "Small System of Automated Synthesis of Digital Automata."

Cybernetics Institute, National Academy of Sciences of Ukraine, Kiev, Ukraine, Julia_kapitonova@iss.org.ua; let@d105.icyb.kiev.ua. Translated from Kibernetika i Sistemnyi Analiz, No. 5, pp. 3–7, September–October 2007. Original article submitted July 19, 2007. 1060-0396/07/4305-0621 © 2007 Springer Science+Business Media, Inc.

The next stage in the application of automata theory was to be the small computer MIR. Despite the use
of automata-theoretic methods in designing this computer, V. M. Glushkov's original idea of representing the entire project as a composition of a small number of finite automata and applying formal synthesis methods to realize them completely did not meet with success.
It should be noted that methods of synthesis of finite automata are based on algorithms that require each state and each input symbol of the automaton being synthesized to be considered individually. At the same time, if the structure of a device includes several registers, then the number of states grows exponentially with the word length multiplied by the number of registers.
Therefore, to algorithmically support the development of such devices, another approach should be used that formalizes the
stages of block and algorithmic design of computers. This approach was proposed by V. M. Glushkov in his monographs
“Theory of automata and questions of designing structures of digital computers” and “Automata theory and formal
transformations of microprograms” published in 1965. In the first monograph, a model was proposed for a computer
consisting of two interacting automata, namely, control and operational ones. The operational automaton was specified as a
collection of infinite registers with periodically determined transformations defined on them. Using this new model of a
computer made it possible to formulate and find methods of solution of new problems that could not be solved with the help
of the theory of finite automata. The second monograph initiated a new direction in theoretical cybernetics, namely, the
theory of formal transformations of programs and microprograms on the basis of the algebra of algorithms.
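The division of labor between the two automata can be sketched on a hypothetical micro-machine that multiplies by repeated addition; all state names and microoperations below are invented for illustration and are not taken from the monograph. The operational automaton holds the registers and executes microoperations, while the control automaton, a small finite state machine, chooses the next microoperation from the condition signal it receives back.

```python
# A toy sketch of the two-automaton (control + operational) model,
# assuming a hypothetical micro-machine that multiplies by repeated
# addition.  Names and microoperations are illustrative only.

class OperationalAutomaton:
    """Holds the registers and executes microoperations."""
    def __init__(self, a, b):
        self.a = a            # operand register (read-only here)
        self.acc = 0          # accumulator register
        self.cnt = b          # counter register

    def execute(self, micro_op):
        if micro_op == "ADD":     # acc := acc + a
            self.acc += self.a
        elif micro_op == "DEC":   # cnt := cnt - 1
            self.cnt -= 1
        return self.cnt == 0      # condition signal sent back to control

class ControlAutomaton:
    """Finite control: selects the next microoperation from the signal."""
    def run(self, op_unit):
        state = "CHECK"
        while state != "HALT":
            if state == "CHECK":
                # "NOP" changes no register; we only read the condition
                state = "HALT" if op_unit.execute("NOP") else "ADD"
            elif state == "ADD":
                op_unit.execute("ADD")
                state = "DEC"
            else:                 # state == "DEC"
                state = "HALT" if op_unit.execute("DEC") else "ADD"
        return op_unit.acc

product = ControlAutomaton().run(OperationalAutomaton(6, 7))  # 6 * 7
```

Flattening even this toy machine into a single finite automaton would require a state for every combination of register contents, which is exactly the exponential blow-up noted above; the decomposition keeps the control part small while the registers live in the operational automaton.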
The composition of two interacting automata that was considered by V. M. Glushkov is a special case of the general
concept of a discrete transformer that was investigated in works of V. M. Glushkov and his disciples later on. These
investigations were developed in two directions. The first direction consisted of the investigation of abstract-algebraic
problems such as equivalence recognition, optimization with respect to operation time, investigation of semigroup relations,
etc. The second direction was the development of applied systems for computer-aided design of computers, languages for the
description of algorithms of functioning of devices, and design methods and algorithms. In the 1970s, the system PROEKT
was developed for computer-aided design of computers together with systems of their software support (codesign of
computer hardware and software). The theoretical grounding of this system consisted of the theory of discrete transformers
and algebra of algorithms.
A major contribution to the development of the algebra of algorithms and its applications in the field of automation of
development and design of software systems on the basis of formal transformations was made by G. E. Tseitlin and
E. L. Yushchenko. The monograph “Algebra, Languages, and Programming” written by V. M. Glushkov, E. L. Yushchenko,
and G. E. Tseitlin went through several editions and was translated abroad.
Based on V. M. Glushkov’s ideas, the work on the creation of new architectures of multiprocessor supercomputers
began in the late seventies and early eighties. The idea of a recursive computer was first proposed; this idea was connected
with the revision of the von Neumann principles (the report of V. M. Glushkov and co-authors at the IFIP Congress in Stockholm in 1974), and then it was transformed into the more practical idea of a macroconveyor computer. After V. M. Glushkov's death, this idea was realized in the eighties under the direction of V. S. Mikhalevich. Production prototypes of the macroconveyor computer ES-2701, the first multiprocessor computer with distributed memory and high-performance parallelization of problem solving, were created; many scientific-technical, economic, and optimization problems were solved with record (for that time) efficiency and performance. The prototypes
were developed under the guidance of S. B. Pogrebinskii (engineering development), I. N. Molchanov (applied
mathematics), A. A. Letichevsky (system mathematics), and Yu. V. Kapitonova (computer-aided design). The theoretical
basis of this development consisted of automata-theoretic methods and models of distributed computations that underlie new
methods of parallelization of algorithms and programs and the creation of foundations of advanced technologies for solution
of problems. A result of investigations in the field of development of automaton-algebraic models and their applications was
the monograph “Mathematical theory of computing systems” published by Yu. V. Kapitonova and A. A. Letichevsky.
A new stimulus to the development of the technology of solution of problems requiring high-performance computers
was the widespread adoption of cluster systems. At present, extensive investigations in this direction are pursued at the Cybernetics Institute under the guidance of I. V. Sergienko. The computational capability of the cluster systems developed at the Institute, whose engineering development was carried out under the guidance of V. N. Koval' and A. V. Palagin, places them among the top supercomputers developed in the CIS.

INSERTION MODELING

In the nineties of the last century, important shifts took place in the theoretical foundations of computation. The focus of attention shifted from functional models to models of interaction in distributed systems. New life was breathed into process calculi and algebras created earlier, such as CCS and CSP, and new models began to appear, including models used in domains such as bioinformatics (mobile ambients). A model of interaction
of agents and environments (A. A. Letichevsky, D. Gilbert, and Yu. V. Kapitonova) was proposed in the middle of the
nineties with a view to unifying different approaches in the theory of parallel interacting processes. The model is based on
the concept of a function of insertion and transformation of the behavior of an environment as a result of recursive insertion
of agents in environments of different levels. The model can be considered as a far-reaching generalization of the concept of
a discrete transformer. The further development of this model of interaction led to the creation of new technological
approaches in the field of hardware development, namely, insertion programming and modeling. The idea of modeling has assumed particular significance in computer engineering in recent years. The passage from a model to a product is a principle that is more and more widely used in the software industry. Insertion modeling is used in creating the VRS system for the verification of software systems, which is being developed for the Motorola company with the assistance of specialists of the Cybernetics Institute.
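The idea of the insertion function can be conveyed by a deliberately simplified sketch, assuming behaviors are finite sets of action traces and taking interleaving as the insertion function; the actual theory works with behavior algebras and admits far more general insertion functions than this one.

```python
# A crude illustration of an insertion function.  Behaviors are modeled
# as finite sets of action traces, and interleaving serves as the
# insertion function; both choices are simplifications for illustration.

def interleavings(t1, t2):
    """All interleavings of two traces (tuples of actions)."""
    if not t1:
        return {t2}
    if not t2:
        return {t1}
    return ({(t1[0],) + r for r in interleavings(t1[1:], t2)} |
            {(t2[0],) + r for r in interleavings(t1, t2[1:])})

def insert(env_behavior, agent_behavior):
    """Behavior of the environment after an agent is inserted into it."""
    result = set()
    for e in env_behavior:
        for a in agent_behavior:
            result |= interleavings(e, a)
    return result

env = {("req", "ack")}        # environment: request, then acknowledgement
agent = {("compute",)}        # agent: one computation step
new_env = insert(env, agent)  # again a behavior, so insertion can recurse
```

Because the result is again a behavior, an environment with inserted agents can itself be inserted into a higher-level environment, which is the recursive, multilevel aspect of the model.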

SOME PROBLEMS OF MODERN COMPUTER SCIENCE

In recent years, many investigations have been devoted to the prospects of development of computer science and the identification of the most important challenges, i.e., difficult problems that have high social and scientific significance and are complex scientific and engineering problems requiring heavy investment and intensive investigation. We note some
of such problems that are directly connected with the historical traditions and scientific potential of the Cybernetics Institute.
1. Creation of an Intelligent Technological Environment for the Solution of Complicated Applied Problems
with Efficient Use of Multilevel Distributed Resources. Modern computing systems include the following means of
parallel computations at different levels: multicore processors provide parallelization over shared memory, cluster
systems support parallel computations over distributed memory, grid systems make it possible to combine powerful
geographically remote distributed resources into a unified system, and, finally, at the upper level, services provided by the
Internet (for example, using distributed large-capacity databases) can be used. The construction of efficient programs for
distributed systems requires high qualification even if automatic parallelization tools are used. An intelligent environment
must assist in constructing efficient systems by suggesting the methods to be used for the organization of parallel execution
and providing a combination of program components, tuning and modifying components in order to reuse previously developed modules, protecting against unauthorized access to programs during computations, etc.
2. Verification of Requirements and Specifications of Large-Scale Systems. At present, model-driven software engineering is becoming an increasingly widespread technology for the development of distributed large-scale systems (telecommunication systems, embedded systems, etc.). As a result, the use of formalized requirements and specifications is becoming a standard. New projects usually arise as a result of the modification of earlier projects and the combination of components located at different levels. This leads to contradictions and errors whose elimination is considerably cheaper at early development stages (thereby reducing the cost of quality, one of the most important indicators of industrial designs). The necessity of ensuring reliability and protection additionally complicates the problem. The solution of the problem requires a combination of modeling and deductive methods and the development of new types of models allowing one to use fine abstraction methods and convenient methods for the control of verification processes.
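The model-based half of such a combination can be sketched as explicit-state checking of a safety property: enumerate reachable states breadth-first and report a shortest counterexample trace if an unsafe state is reachable. The transition system and property below are toy examples invented for illustration.

```python
# A minimal sketch of explicit-state safety checking: BFS over the
# reachable states of a transition system, returning a counterexample
# trace if an unsafe state is reachable.  The model is a toy example.
from collections import deque

def check_safety(initial, successors, is_bad):
    """BFS over reachable states; returns a counterexample path to a
    bad state, or None if the safety property holds."""
    queue = deque([(initial, [initial])])
    visited = {initial}
    while queue:
        state, path = queue.popleft()
        if is_bad(state):
            return path           # shortest counterexample trace
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None                   # property holds on all reachable states

# Toy model: a counter mod 8 with two transitions; "state 5" is unsafe.
trace = check_safety(0, lambda s: [(s + 1) % 8, (s + 3) % 8],
                     lambda s: s == 5)
```

The counterexample trace is what a verification engineer inspects to locate the contradiction or error; the deductive half of the combination would instead prove unreachability symbolically when the state space is too large to enumerate.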
3. Verified Software. This problem was central at the International Conference on Grand Challenges of Informatics
that took place in Budapest in 2006. A typical problem in this field is the creation of a verifying compiler (Tony Hoare).
Although many important results have been accumulated on the program verification problem, it remains quite difficult. Nevertheless, new advances in the field of automation of deductive reasoning and modeling give hope for positive results. At present, the creation of verified high-quality software receives heavy investment from interested companies.
4. Cyber Security. The problem of protection of strategically important information systems against any kind of attacks
that destroy such systems is now considered at the state level. To completely solve it, new ideas, intelligent tools of evolving
protection systems, and powerful computing means for modeling and checking systems for tamper resistance are necessary.

5. Economic Models. The solution of optimization problems is at the center of attention of traditional applications of mathematical methods in economics. In this case, it is assumed that each of the players in economic games has unbounded computational resources and uses an optimal strategy. However, in actual fact, promising models are those with bounded
resources and also models using algorithmic mechanisms of behavior of agents in environments with incomplete information
and aggressive players.
6. Bioinformatics. The human genome is the most sophisticated example of a program written in a biological code.
But, despite the outstanding strides of bioengineering, very little is known about the execution of this program and the
organization of the computer that interprets it. Moreover, the level of biochemical reactions is too low to completely
understand the processes of formation and functioning of organisms. Therefore, the question whether a higher-level
algorithmic language can be constructed for the description of genetic programs remains open.
7. Structure of Brain and Thinking. The structure of brain and thinking is one of seven projects considered by the
UK Computing Research Committee as a grand challenge in computer science. Its authors consider that the time has come, and the possibilities exist, for the creation of a full-function robot combining a neural-like model of the brain with "algorithms of thinking" (in the terminology of N. M. Amosov). We note that the experiments conducted in their time under the guidance of N. M. Amosov and aimed at creating an intelligent robot were the first steps in this direction.
8. Brain-Like Computing Structures. V. M. Glushkov discoursed on brain-like computing structures as long ago as the seventies. The idea of the recursive computer as an alternative to the von Neumann architecture arose as an intermediate solution, or a first step, in the direction of creating computers that model brain-like structures with large-scale parallelism, self-organization, and efficient adaptation to working conditions. The problem of constructing
brain-like computing structures has also something in common with the project “Journeys in Non-Classical Computation”
considered by the UK Computing Research Committee.
9. Massively Multiplayer Computer Games as an Approach to the Modeling of Evolving Artificial Intelligence.
The idea of modeling of evolution was discussed in the well-known monograph “An Introduction to Cybernetics” published
by V. M. Glushkov as long ago as 1961. Of course, modern technological and scientific capabilities are still far from allowing more or less rigorous experiments. But, at present, new capabilities are arising in massively multiplayer computer games in which the players themselves construct characters, endow them with definite functionalities, and improve them during game experiments. As an example, the popular game "The Sims 2" can be considered. In the new project "Spore," evolution controlled by a player is modeled. The world evolves from the simplest organisms to beings and then to populations
and communities that construct towns, etc., up to activity on a cosmic scale. The application of the principles of massively multiplayer computer games to the creation of evolving highly intelligent systems for supporting human activity in various fields has great potential.

