automata theory

Body of physical and logical principles underlying the operation of any electromechanical device (an automaton) that converts information from one form into another, or into some action, according to an algorithm.

Norbert Wiener and Alan M. Turing are regarded as pioneers in the field. In computer science, automata theory is concerned with the construction of robots (see robotics) from basic building blocks of automata. The best example of a general automaton is an electronic digital computer. Networks of automata may be designed to mimic human behaviour. See also artificial intelligence; Turing machine.

* * *

Introduction

      body of physical and logical principles underlying the operation of any electromechanical device (an automaton) that converts information from one form into another according to a definite procedure. Real or hypothetical automata of varying complexity have become indispensable tools for the investigation and implementation of systems that have structures amenable to mathematical analysis.

      An example of a typical automaton is a pendulum clock. In such a mechanism the gears can assume only one of a finite number of positions, or states, with each swing of the pendulum. Each state, through the operation of the escapement, determines the next succeeding state, as well as a discrete output, which is displayed as the discrete positions of the hands of the clock. As long as such a clock is wound and its operation is not interfered with, it will continue to operate unaffected by outside influences except the effect of gravity on the pendulum.

      More general automata are designed to respond to changes in external conditions or to other inputs. For example, thermostats (thermostat), automatic pilots of aircraft, missile guidance systems, telephone networks, and controls of certain kinds of automatic elevators are all forms of automata.

      The internal states of such devices are not determined solely by their initial state, as is the case of the pendulum clock, but may be determined by an input from a human operator, from another automaton, or by an event or series of events in the environment. A thermostat, for instance, has an “on” or “off” state that depends on the temperature. The best known general automaton is the modern electronic computer, the internal states of which are determined by the data input and which operates to produce a certain output.
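
      As a rough illustration in code, a thermostat of this kind can be sketched as a two-state automaton whose state is fixed by its input rather than by its preceding state alone; the sketch below is in Python, and the set point, switching band, and temperature readings are invented for illustration.

        class Thermostat:
            """A two-state automaton: the input is a temperature reading,
            and the state ('on' or 'off') is determined by that input."""

            def __init__(self, set_point=20.0, band=0.5):
                self.set_point = set_point   # hypothetical set point, degrees
                self.band = band             # switching band to avoid chatter
                self.state = 'off'

            def step(self, temperature):
                # The next state depends on the environment (the input),
                # not merely on the preceding internal state.
                if temperature < self.set_point - self.band:
                    self.state = 'on'
                elif temperature > self.set_point + self.band:
                    self.state = 'off'
                return self.state

        t = Thermostat()
        print([t.step(reading) for reading in (18.0, 19.8, 20.2, 21.0, 19.0)])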

Bayard Rankin R.J. Nelson

Nature and origin of modern automata
      The components of automata consist of specific materials and devices, such as wires, transistors, levers, relays, gears, and so forth, and their operation is based on the mechanics and electronics of these parts. The principles of their operation as a sequence of discrete states can, however, be understood independently of the nature or arrangement of their components. In this way, an automaton may be considered, abstractly, as a set of physically unspecified states, inputs, outputs, and rules of operation, and the study of automata as the investigation of what can be accomplished with these. This mode of abstraction yields mathematical systems that in certain respects resemble logical (logic) systems. Thus, an automaton can be described as a logically defined entity that can be embodied in the form of a machine, with the term automaton designating both the physical and the logical constructions.

      In 1936 an English mathematician, Alan Mathison Turing (Turing, Alan M.), in a paper published in the Proceedings of the London Mathematical Society (“On Computable Numbers with an Application to the Entscheidungsproblem”), conceived a logical machine (Turing machine) the output of which could be used to define a computable number. For the machine, time was considered to be discrete and its internal structure, at a given moment, was described simply as one of a finite set of states. It performed its functions by scanning an unbounded tape divided into squares, each of which either contained specific information in the form of one of a finite number of symbols or was blank. It could scan only one square at a time, and, if in any internal state except one called “passive,” it was capable of moving the tape forward or backward one square at a time, erasing a symbol, printing a new symbol if the square was blank, and altering its own internal state. The number it computed was determined by symbols (the “program”) provided on a finite portion of the tape and the rules of operation, which included stopping when the passive state was reached. The output number was then interpreted from the symbols remaining on the tape after the machine stopped.
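
      As a concrete illustration, the sketch below (in Python) simulates a machine of this general kind, with the program given as a table of quintuples: present state and scanned symbol determine the symbol printed, the tape motion, and the next state. The sample program is a hypothetical one that merely extends a block of 1s by one square and then enters the passive state.

        def run_turing_machine(program, tape, state='q0', passive='halt', max_steps=1000):
            """Simulate a Turing machine whose program is a table of quintuples:
            (state, scanned symbol) -> (symbol to print, move, next state),
            where move is 'L', 'R', or 'N'.  Unvisited squares are blank ('_')."""
            squares = dict(enumerate(tape))
            head = 0
            for _ in range(max_steps):
                if state == passive:                 # the passive state: stop computing
                    break
                scanned = squares.get(head, '_')
                write, move, state = program[(state, scanned)]
                squares[head] = write
                head += {'L': -1, 'R': 1, 'N': 0}[move]
            return ''.join(squares[i] for i in sorted(squares))

        # Hypothetical program: run right along a block of 1s, print one more 1, halt.
        program = {
            ('q0', '1'): ('1', 'R', 'q0'),
            ('q0', '_'): ('1', 'N', 'halt'),
        }
        print(run_turing_machine(program, '111'))    # prints 1111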

      Automata theory since the middle of the 20th century has been extensively refined and has often found practical application in civilian and military machines. The memory banks of modern computers (computer memory) can store large (though finite) amounts of information. (For further information on computers and their applications, see information processing.) The original Turing machine had no limit to the memory bank because each square on the unbounded tape could hold information. The Turing machine continues to be a standard reference point in basic discussions of automata theory, and many mathematical theorems concerning computability have been proved within the framework of Turing's original proposal.

Neural nets (neural network) and automata

The finite automata of McCulloch and Pitts
      Part of automata theory lying within the area of pure mathematical (mathematical model) study is often based on a model of a portion of the nervous system in a living creature and on how that system with its complex of neurons (neuron), nerve endings, and synapses (separating gap between neurons) can generate, codify, store, and use information. The “all or none” nature of the threshold of neurons is often referred to in formulating purely logical schemata or in constructing the practical electronic gates of computers. Any physical neuron can be sufficiently excited by an oncoming impulse to fire another impulse into the network of which it forms a part, or else the threshold will not be reached because the stimulus is absent or inadequate. In the latter case, the neuron fails to fire and remains quiescent. When several neurons are connected together, an impulse travelling in a particular part of the network may have several effects. It can inhibit another neuron's ability to release an impulse; it can combine with several other incoming impulses each of which is incapable of exciting a neuron to fire but that, in combination, may provide the threshold stimulus; or the impulse might be confined within a section of the nerve net and travel in a closed loop, in what is called “feedback.” Mathematical reasoning about how nerve nets work has been applied to the problem of how feedback in a computing machine can provide an essential ingredient of the calculational process.

      Original work on this aspect of automata theory was done by Warren S. McCulloch and Walter Pitts at the Research Laboratory of Electronics at the Massachusetts Institute of Technology starting in the 1940s.

      The definitions of various automata as used here are based on the work of two mathematicians, John von Neumann (von Neumann, John) and Stephen Cole Kleene (Kleene, Stephen Cole), and the earlier neurophysiological researches of McCulloch and Pitts, which offer a mathematical description of some essential features of a living organism. The neurological model is suggested from studies of the sensory receptor organs, internal neural structure, and effector organs of animals. Certain responses of an animal to stimuli are known by controlled observation, and, since the pioneering work of a Spanish histologist, Santiago Ramón y Cajal, in the latter part of the 19th and early part of the 20th century, many neural structures have been well known. For the purposes of this article, the mathematical description of neural structure, following the neurophysiological description, will be called a “neural net.” The net alone and its response to input data are describable in purely mathematical terms.

      A neural net may be conveniently described in terms of the kind of geometric configuration that suggests the physical structure of a portion of the brain. The component parts in the geometric form of a neural net are named (after the physically observed structures) neurons. Diagrammatically they could be represented by a circle and a line (together representing the body, or soma, of a physiological neuron) leading to an arrowhead or a solid dot (suggesting an endbulb of a neuron). A neuron may be assumed to have either an excitatory or an inhibitory effect on a succeeding one; and it may possess a threshold, or minimum number of unit messages, so to speak, that must be received from other neurons before it can be activated to fire an impulse. The process of transmission of excitation mimics that which is observed to occur in the nervous system of an animal. Messages of unit excitation are transmitted from one neuron to the next, and excitation is passed along the neural net in quantized form, a neuron either becoming excited or remaining non-excited, depending on the states (excitatory or quiescent) of neurons whose endbulbs impinge upon it. Specifically, neuron N, with threshold h, will be excited at time t, if and only if h or more neurons whose excitatory endbulbs impinge upon it are excited at time t - 1 and no neuron whose inhibitory endbulb impinges upon it is excited at time t - 1. A consistent picture can be made of these conditions only if time and excitation are quantized (or pulsed). It is assumed conventionally that a unit of time is required for the transmission of a message by any neuron.
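
      The firing rule of the preceding paragraph translates directly into a short function; a minimal sketch in Python follows, with the particular thresholds and input patterns invented for illustration.

        def fires(threshold, excitatory, inhibitory):
            """McCulloch-Pitts rule: return 1 (excited) or 0 (quiescent) at time t,
            given the 0/1 states at time t - 1 of the neurons whose excitatory and
            inhibitory endbulbs impinge on this neuron."""
            if any(inhibitory):                  # one inhibitory impulse blocks firing
                return 0
            return 1 if sum(excitatory) >= threshold else 0

        # A neuron of threshold 2: two excitatory impulses suffice,
        # but a single inhibitory impulse vetoes them.
        print(fires(2, [1, 1, 0], []))    # 1
        print(fires(2, [1, 0, 0], []))    # 0
        print(fires(2, [1, 1, 1], [1]))   # 0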

      Certain neurons in the configuration mathematically represent the physiological receptors that are excited or left quiescent by the exterior environment. These are called input neurons. Other neurons called output neurons record the logical value, excited or quiescent, of the whole configuration after time delay t and transmit an effect to an exterior environment. All the rest are inner neurons.

      Any geometric or logical description of the neural structure of an organism formulated as the basis of physical construction must be sufficiently simple to permit mechanical, electric, or electronic simulation of the neurons and their interconnections.

The basic logical organs
      The types of events that can excite the automaton and the kinds of responses that it can make must next be considered. By stripping the description down to the most simple cases, the basic organs from which more complicated robots (robot) can be constructed may be discovered. Three basic organs (or elementary automata) are necessary, each corresponding to one of the three logical operations of language: the binary operations of disjunction and conjunction, leading to such propositions as A ∨ B (read “A or B”) and A ∧ B (read “A and B”), and the unary operation of negation, or complementation, leading to such propositions as A^c (read “not A” or “complement of A”). First to be considered are the stimulus-response patterns of these elementary automata.

      Assuming that a neuron can be in only one of two possible states—i.e., excited or quiescent—an input neuron at a given instant of time t - 1 must be either excited or nonexcited by its environment. An environmental message transmitted to two input neurons N1 and N2 at time t - 1 can then be represented numerically in any one of the four following ways, in which binary digit (bit) 1 represents excitation and binary digit 0 represents quiescence: (0, 0), (0, 1), (1, 0), (1, 1). The disjunction automaton must be such that a single output neuron M correspondingly registers at time t the response: 0, 1, 1, 1. The conjunction automaton must be such that a single output neuron M correspondingly registers at time t the response: 0, 0, 0, 1. The negation automaton considered as having two input neurons N1 and N2, of which N1 is always excited, must respond to the environmental messages (1, 0) and (1, 1) with 1, 0, respectively, at the output neuron M.
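
      Continuing the sketch, the three elementary organs can be written as one-step threshold units: disjunction is a neuron of threshold 1 fed by excitatory lines from N1 and N2, conjunction a neuron of threshold 2, and negation a neuron of threshold 1 with an excitatory line from the always-excited N1 and an inhibitory line from N2. The code below is illustrative only.

        def disjunction(n1, n2):
            return 1 if n1 + n2 >= 1 else 0      # threshold 1, both lines excitatory

        def conjunction(n1, n2):
            return 1 if n1 + n2 >= 2 else 0      # threshold 2, both lines excitatory

        def negation(n1, n2):
            # n1 is the always-excited neuron; n2 acts through an inhibitory endbulb.
            return 0 if n2 else (1 if n1 >= 1 else 0)

        for message in [(0, 0), (0, 1), (1, 0), (1, 1)]:
            print(message, disjunction(*message), conjunction(*message))
        print(negation(1, 0), negation(1, 1))    # 1 0, as required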

The generalized automaton and Turing's machine
      The construction of more complicated robots from these basic building blocks constitutes a large part of the theory of automata. The first step in the direction of generalization is to define the neural nets that correspond to formal expressions in n variables of the propositional calculus—that is, the formal system that concerns “or,” “and,” “not,” and “implies.” A single output automaton (of which the above three are simple examples) is a neural net with n input neurons, one output neuron, and with interconnections between neurons that conform to the rule that no neuron stimulated at time t can impinge upon a neuron that could have experienced its first stimulation at the same or an earlier time. The latter rule is the requirement of no feedback. Given this concept of a single output automaton, it is possible to examine the output response at time t + s, considered as a function of the configuration of stimuli at the n input neurons at time t. This response can be compared with the truth value (truth-value) of a logical statement (polynomial) from the propositional calculus. A logical statement is formed from n component propositions, each of which can assume the truth value either true or false. The comparison between automaton and logical statement is accomplished by matching response at the output neuron at time t + s with truth value of the statement for every one of the 2^n cases in which the configuration of stimuli conforms to the configuration of truth values of the component propositions. If, in this sense of comparison, the functional response of the automaton is identical to the functional value of the logical statement (polynomial), the automaton is then said to compute the statement (polynomial) or the statement is said to be computable. A wider class of computable statements is introduced with the general automaton, yet to be defined, as with the more general Turing machine.

      The important distinction between the logical statement and the automaton that computes it is that the first is free of any time ingredient while the second is defined only with reference to a time delay of length s.

      A basic theorem states that for any polynomial P of the propositional calculus, there exists a time delay s and a single output automaton A, such that A computes P with time delay s. The proof of the theorem rests on the fact from the propositional calculus that all statements are composed from component propositions with the operations of disjunction, conjunction, and negation and the fact from the automata theory that all single output automata can be composed by interconnecting elementary automata of the disjunctive, conjunctive, and negative types.

      A second step of generalization in the construction of robots proceeds from the single output automata to the neural net that possesses more than one output neuron and in which the internal connections may include feedback. Such a construction is called a “general automaton.” The class of general automata includes all-purpose, electronic digital computers (digital computer) the memory-storage units of which are of fixed, though possibly of very considerable, size. It is within the context of the general automaton that the purely automated decision-making, computing, controlling, and other sophisticated neural functions so suggestive of the mental ability of human beings may appropriately be discussed.

      The Turing machine can be defined not only as it was in the introduction (roughly following Turing's approach) but as a general automaton to which an unbounded memory unit (such as an unbounded tape) is added. Thus, the general automaton and the Turing machine differ in logical design only with respect to the extent of memory storage.

      The distinction is critical, however, for Turing proposed that the class of numbers computable on his machine (a wider class than can be obtained by general automata) coincides with those that are effectively computable in the sense of constructive logics. A simple convention also makes it possible to interpret the output of a Turing machine as the computation of a function. The class of functions so computed, called “Turing computable” or “computable,” is of basic importance at the foundations of mathematics and elsewhere. It can also be stated that a useful class of functions that are definable without reference to machines, namely, the so-called partial recursive functions, has the same membership as the class of computable functions. For the present purposes, then, no effort need be made to define the partial recursive functions.

      Turing's approach admitted mathematical formalization to the extent that a finite list of symbols q1, q2, q3, · · · , qn could be used to denote internal states and a finite list of symbols a, b, c, · · · , λ could designate abstractly what is called “the alphabet”—that is, the list from which individual members could be chosen and printed into the squares of the machine's tape. If the symbols R and L, respectively, designate a move of the tape one square to the right and one square to the left, it remains only to list in some orderly fashion the alternative possible steps in the machine's operation in order to define it completely. Turing himself chose to list alternate steps, or instructions, in the form of quintuples of the above symbols. It is also possible to use quadruples to define a machine. Such a list, then, of, say, quadruples of instructions is equivalent to a Turing machine, and it is significant that the list is finite.

      The finiteness of the list of quadruples of instructions leads to the idea that all Turing machines can be listed—that is, they are at most countable in number. This being the case, it can be proved that there is what Turing called a “universal” machine capable of operating like any given Turing machine. For a given partial recursive function of a single argument, there is a corresponding integer, called the Gödel number, that identifies the Turing machine capable of computing the given function. The Gödel number and the argument value of the function to be computed can be given as input data on the tape of the universal machine. From the Gödel number, the list of instructions, defined in the form of quadruples, that are necessary for the computation of the given recursive function at the specific argument value can be encoded by the universal machine on its own tape, and, from that point on, the universal machine will duplicate the required Turing machine.

Input: events that affect an automaton
      Once the definitions of the general automaton and the more general universal Turing machine have been reached, a general definition of the events in the environment that stimulate such machines may be introduced. The automaton, which computes logical statements, is not defined without reference to time, a characteristic that distinguishes the machine itself from the logic. In the same way, stimuli are not definable, in general, without reference to time. These facts are indicative of the simulation features that the computing machine bears with respect to man.

      For an automaton with n input neurons, N1, N2, · · · , Nn, an individual history of stimulation, starting with the present moment, t = 0, and continuing to the remote past, can be recorded as a sequence of n-tuples, (β1, β2, · · · , βn), in which each binary digit, βk, is either a 0 or a 1. Thus, the beginning of one such individual history for an automaton of four neurons might be recorded in tabular form as an unending list of quadruples of the type (1, 0, 1, 1) (see Box, display 1—>).

      An event is a collection of individual histories. This is a generalization of the idea already used to characterize an environmental message transmitted to the two input neurons of an elementary automaton at time t - 1. As an example, the stimulus (0, 1) is the same as the collection of all individual histories in which neuron N2 was stimulated at time t - 1 and neuron N1 was not. As another example, the event that neuron N2 (of a two-neuron automaton) is presently stimulated and has always been stimulated at alternate seconds can be represented as the collection of two individual histories (see 2—>). While some events require an infinite tabulation, others that specify the states of each neuron over a finite past (allowing that anything might have occurred before) permit a finite tabulation. Events of the second kind are called definite events, or stimuli.

      The construction (either actual or theoretical) of a general automaton with the help of the logical components and interconnections of a neural net results in an entity that responds in reproducible ways to stimuli. A response becomes recorded as a configuration of binary digits, corresponding to the states of the finite number of output neurons (neuron) at a specified time t in the future, while a stimulus is a collection of individual histories extending over the past and including the present. The logical construction implies a behaviour in the guise of a listing of responses to all possible stimuli. Reciprocally, for a given behaviour of the type defined, the possible structure of a machine that could produce such behaviour can be investigated.

Probabilistic questions
      It was traditional in the early treatment of automata theory to identify an automaton with an algorithm, or rule of computation, in which the output of the automaton was a logically determined function of the explicitly expressed input. From the time of the invention of the all-mechanical escapement clock in Europe toward the end of the 13th century, through the mechanistic period of philosophy that culminated in the work of the French mathematician Pierre-Simon Laplace, and into the modern era of the logically defined Turing machine of 1936, an automaton was a mechanical or logical construction that was free of probabilistic components. It was also understood to be immersed in an environment (that is, activated or supplied with input data) that could be logically specified without the concept of chance.

      After the middle of the 20th century, mathematicians explicitly investigated questions concerning automata that included in their formulation the idea of chance, and in doing so they drew upon earlier applicable mathematical results. While the automata themselves are prototypes of deterministic machines, the U.S. mathematician Norbert Wiener (Wiener, Norbert) showed that they may be programmed in such a way as to extrapolate certain types of random data that are introduced as input. A prediction of data that are not yet received as input can be accomplished, provided the data are what will later be defined to constitute a stationary time series and provided the prediction is restricted according to a well-defined optimization procedure. In this way a logically defined robot, or automaton, may be placed in an environment that evolves according to both deterministic and random processes (the bifurcation of the environment into deterministic and random processes being mathematically postulated by the designer of the robot) and may be seen to respond to the advantage of its designer: The robot can control a ship's rudder, guide an airplane to its landing, reorient a rocket on its course, predict weather, and so forth. The programming of an automaton so that it will react in a suitable way when placed in a naturalistic environment falls under the heading of prediction theory.

      Of the types of probabilistic questions considered, four (which will be listed in arbitrary order) were predominant. The first, that of Wiener, was broached in 1948. It concerned the use of mathematically expressed algorithms or physically constructed computers to predict the future of a system, such as the weather, that includes random components—i.e., an automaton in Turing's logical sense immersed in a random environment. The second, of von Neumann, was concerned with the reliability of large computing machines with many components and sought methods of design, called “multiplexing,” that would reduce the chance for unwanted error during the machine calculation of a problem. In this context, the automaton was interpreted as a randomly operating device that in practice approximates the operation of a Turing machine under the influence of better and better design. The third, considered by various researchers, concerned the possibility of computing a wider class of sets than are accessible to Turing machines by adding a random component to the machine itself. In this context, the automaton was being interpreted as a Turing machine modified with the potentiality for injecting the output of a random number generating device into one or more of its operational steps. The fourth concerned the logical possibility of an automaton, such as a Turing machine, actually yielding as output a sequence of random numbers. In this context, the automaton was considered to be simultaneously a Turing machine and a generator of numbers that are indistinguishable from measurements on random phenomena.

      Some results that have been achieved from examination of each of these four types of questions will constitute the remainder of this section.

The automaton and its environment
      It must first be observed that, just as an automaton is an acceptable description (or model) of a neural structure, an automaton, though frequently thought of as a computing machine, is in general a response mechanism that produces output (or behaviour) as a consequence of the input (or environmental stimuli). “Environment” is then another name for the input and output of an automaton. Some poetic license in identifying automata with living things may justify the use of the term.

      During his researches on cybernetics, Wiener recognized that, if computers could be programmed to solve certain mathematical equations, then the data read from physically generated time series (or numerical values indexed consecutively in time and related through a transformation) could be extrapolated. He saw that, if this process could be accomplished with sufficient speed, as would be possible with modern electronic circuits, then the extrapolated values would be obtained faster than the actual physically evolving process that produced the time series, and a prediction of the future would result. Errors would be inevitable because a complete history of data and adequate measurements would be unobtainable. For this reason, the mathematical equations that would be at the heart of such an extrapolation could be deduced, in part, from the objective of minimizing the errors. Thus, the matching of an automaton, or computer, with a real physical environment could result in the anticipation of the future, if certain mathematical equations were derived that minimized prediction error.

Control and single-series prediction
      A derivation of the mathematical equations of prediction had been accomplished in a limited sense some years before Wiener's work on cybernetics. In 1931 Wiener had collaborated with an Austrian-born U.S. mathematician, Eberhard Hopf, to solve what is now called the Wiener-Hopf integral equation, an equation that had been suggested in a study of the structure of stars but later recurred in many contexts, including electrical-communication theory, and was seen to involve an extrapolation of continuously distributed numerical values. During World War II, gun- and aircraft-control problems stimulated further research in extrapolation, and Wiener composed a purely mathematical treatise, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, which was published in 1949. As early as 1939, a note on extrapolation by a Russian mathematician, A.N. Kolmogorov (Kolmogorov, Andrey Nikolayevich), had appeared in the French journal Comptes Rendus. Although the Wiener-Hopf work was concerned exclusively with astronomy and done without the guiding influence of computers, it was recognized in World War II that high-speed computations could involve input information from a moving object and, through prediction or extrapolation, provide output data to correct its path. This recognition was the seed of the concept of the guided missile and radar-controlled aircraft. Weather prediction was also possible, as was computerized research on brain waves whose traces on the electroencephalograph offered another physical realization of the time series that are predictable. The mathematics that was necessary for a complete understanding of prediction included the concept of a stochastic process, as described in the article probability theory.

      The Wiener and Kolmogorov research on extrapolation of time series became known as single-series prediction and owed much to the studies (1938) of a Swedish mathematician named Herman Wold, whose work was predicated on the assumption that, if X1, X2, X3, · · · , are successive values of a series identified with discrete points in time t = 1, t = 2, t = 3, · · · , then the successive values are not entirely unrelated (for if they were, there would be no way for an algorithm or an automaton to generate information about later members of the sequence—that is, to predict). It was assumed, with anticipation that there is frequently such a thing in nature, that a transformation T relates members of the series by successively transforming an underlying space of points ω according to a rule. The rule states that the kth member of the time series is a function of an initial point ω that has migrated in the underlying space: Xk = X(T^kω). It was also assumed that, if sets of points {ω} constituted a region (of sufficient simplicity called “measurable”) in space, then when the set was transformed under the influence of T its volume would not be changed. The last assumption had, in fact, been proved by a French mathematician, Joseph Liouville (Liouville, Joseph), a century earlier for a wide class of physical processes whose behaviour is correctly described by the so-called Hamiltonian (Hamiltonian function) equations. The clearly stated assumptions of Wiener and Kolmogorov, referred to as the stationarity of the time series, were supplemented with the idea (the linearity restriction) that a solution Sk(ω) for the predicted value of the series, displaced in time k steps into the future, should be restricted to a linear combination of present and past values of the series (see 3—>).

      With the help of one other mathematical assumption, it was then possible to solve the single-series prediction problem by specifying an algorithm that would determine the coefficients in the linear combination for Sk(ω), in which k is a positive integer (see 4—>). It was possible also to solve for the error of prediction (see 5—>)—that is, a measure of the discrepancy between the value predicted and the true value of the series that would occur at time k in the future. This meant that for a variety of circumstances, such as the prediction of atmospheric pressure measured at one weather station, or the prediction of a single parameter in the position specification of a particle (such as a particle of smoke) moving according to the laws of diffusion, an automaton could be designed that could sense and predict the chance behaviour of a sufficiently simple component of its environment.
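
      A small numerical sketch of the idea, under the linearity restriction: the coefficients of a linear combination of the m most recent values are chosen to minimize the squared prediction error over the observed history. Ordinary least squares stands in here for the algorithm the text refers to, and the series, its noise level, and the order m are all invented for illustration.

        import numpy as np

        def fit_linear_predictor(series, order):
            """Least-squares coefficients c so that X[t+1] is approximated by
            c[0]*X[t] + c[1]*X[t-1] + ... + c[order-1]*X[t-order+1]."""
            rows = [series[t - order:t][::-1] for t in range(order, len(series))]
            targets = series[order:]
            coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
            return coeffs

        def predict_next(series, coeffs):
            recent = series[-len(coeffs):][::-1]          # most recent value first
            return float(np.dot(coeffs, recent))

        # An invented stationary-looking series: a noisy oscillation.
        rng = np.random.default_rng(0)
        x = list(np.sin(0.3 * np.arange(200)) + 0.05 * rng.standard_normal(200))
        c = fit_linear_predictor(x, order=4)
        print(predict_next(x, c), np.sin(0.3 * 200))      # prediction vs. noise-free value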

Multiple-prediction theory
      Generalizations of the above limited accomplishments are tantalizing to mathematicians. If animals, and humans in particular, are viewed, even in part, as automata with varying degrees of accomplishment and success that depend on their abilities to cope with their environment, then human beings could be better understood and their potentialities could be further realized by exploring a generalized version of an automaton's ability to predict. Success in generalizations of this kind has already been achieved under the heading of what is called multiple-prediction theory. A reference to the problem of multiple prediction without a complete solution was made as early as 1941 by a Russian mathematician, V. Zasuhin. The first major step forward, after Zasuhin, was taken by Wiener in 1955 under the title “On the Factorization of Matrices.” Many significant results soon followed.

      If multiple-prediction theory is identified with part of automata theory (which is not always done), it is possible to consider the construction of a computing machine, or automaton, capable of sensing many interdependent elements of its environment at once and, from a long history of such data, of predicting a future that is a function of the same interdependent elements. It is recognized that multiple prediction is the most general approach to the study of the automaton and its environment in the sense that it is a formulation of prediction free of the linearity restriction earlier mentioned with reference to single series (see 3—>). To express a future point Sk(ω), for example, as a linear function of its present and past values as well as first derivatives, or rates of change, of its present and past values is to perform a double prediction or prediction based on the two time series X1, X2, X3, · · · ; X′1, X′2, X′3, · · · , in which primes indicate derivatives with respect to time. Such double prediction is a first step toward nonlinear prediction.

Automata with unreliable components
      In 1956 with the continuing development of faster and more complex computing machines, a realistic study of component misfiring in computers was made. Von Neumann recognized that there was a discrepancy between the theory of automata and the practice of building and operating computing machines because the theory did not take into account the realistic probability of component failure. The number of component parts of a modern all-purpose digital computer was in the mid-20th century already being counted in millions. If a component performing the logical disjunction (A or B) misfired, the total output of a complex operation could be incorrect. The basic problem was then one of probability: whether given a positive number δ and a logical operation to be performed, a corresponding automaton could be constructed from given organs to perform the desired operation and commit an error in the output with probability less than or equal to δ. Affirmative results have been obtained for this problem by mimicking the redundant structure of parallel channels of communication that is frequently found in nature—i.e., rather than having a single line convey a pulse of information, a bundle of lines in parallel are interpreted as conveying a pulse if a sufficient number of members in the bundle do so. Von Neumann (von Neumann, John) was able to show that with this redundancy technique (multiplexing) “the number of lines deviating from the correctly functioning majorities of their bundles” could with sufficiently high probability be kept below a critical level.
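
      The redundancy idea can be imitated in a few lines: each logical line is replaced by a bundle of independently unreliable lines, and the bundle is read as carrying a pulse when a majority of its members do. The simulation below is a simplified stand-in for multiplexing proper, with the failure probability and bundle sizes invented.

        import random

        def transmit(bit, bundle_size, flip_probability):
            """Send one bit over a bundle of independently failing lines
            and decode the bundle by majority vote."""
            lines = [bit ^ (random.random() < flip_probability) for _ in range(bundle_size)]
            return 1 if sum(lines) > bundle_size / 2 else 0

        def error_rate(bundle_size, flip_probability=0.1, trials=10000):
            random.seed(1)
            wrong = sum(transmit(1, bundle_size, flip_probability) != 1 for _ in range(trials))
            return wrong / trials

        for n in (1, 5, 25, 101):
            print(n, error_rate(n))    # the error probability falls as the bundle grows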

Automata with random elements
      The term algorithm has been defined to mean a rule of calculation that guides an intelligent being or a logical mechanism to arrive at numerical or symbolic results. As discussed above under Neural nets and automata, a formalization of the intuitive idea of algorithm has led to what is now called an automaton. Thus, a feature of an automaton is the predictability, or the logical (logic) certainty, that the same output would be obtained for successive operations of an automaton that is provided with the same input data. If, as a substitute for the usual input data, random numbers (or results due to chance) are provided, the combination of input data and automaton is no longer completely predictable. It is notable, however, that unpredictable results that might be obtained with the use of uncertain input are not without their practical application. Such a method of combining the operation of a computer with the intentional injection of random data is called the “Monte-Carlo method” of calculation and in certain instances (such as in the numerical integration of functions in many dimensions) has been found to be more efficient in arriving at correct answers than the purely deterministic methods.
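
      A minimal sketch of the Monte-Carlo idea: random input is injected into an otherwise deterministic computation in order to estimate a many-dimensional integral, here the volume of a six-dimensional ball inscribed in the unit cube. The dimension and sample size are arbitrary choices.

        import math
        import random

        def monte_carlo_ball_volume(dimension, samples=100000):
            """Estimate the volume of the ball of radius 1/2 inscribed in the unit
            cube by counting random points of the cube that fall inside the ball."""
            random.seed(0)
            inside = 0
            for _ in range(samples):
                point = [random.random() - 0.5 for _ in range(dimension)]
                if sum(coordinate * coordinate for coordinate in point) <= 0.25:
                    inside += 1
            return inside / samples                  # the cube itself has volume 1

        exact = math.pi ** 3 / 6 * 0.5 ** 6          # known volume of a 6-ball of radius 1/2
        print(monte_carlo_ball_volume(6), exact)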

      Quite apart from the questions of efficiency that might bear upon the addition of an element in a computing machine (automaton) that could produce numbers due to chance, the purely logical question has been asked: Is there anything that can be done by a machine with a random element that cannot be done by a deterministic machine? A number of questions of this type have been investigated, but the first clear enunciation and study of such a question was accomplished in 1956 by the U.S. engineer Claude E. Shannon and others. If the random element in the machine is to produce a sequence of digits 0 and 1 in a random order so that the probability is p for a digit 1 to occur, then (assuming that the number p is, itself, obtainable from a Turing machine as a computable number) the machine can do nothing new, so to speak, as compared to the unmodified Turing machine. This result is precisely expressed in the language of automata theory by saying that the sets enumerated by the automaton with random elements can be enumerated also by the unmodified automaton. The computability of p, however, is critical and is necessary for the result. It is also important to emphasize, in order to distinguish this result from what follows, that the computability of p is under discussion, not the computability of the sequence of random digits.

Computable probability spaces
      Finally, it is to be observed that the concept of chance or random number, wherever it has occurred in the above discussion, submits to the interpretation of result of observation of an experiment or physical phenomenon. The chance ingredients in the weather data to which prediction theory applies could be due to molecular disturbances in the atmosphere that are of diverse and minute origin. The chance failure that might cause a component breakdown in a computing machine is due to the physical structure and environment of the defaulting part. The source of chance that could be used to augment the input of a computer for the purposes of the Monte-Carlo method of calculation may be chosen as the erratic emission of electrons from the cathode of an electronic tube and is frequently so chosen.

      An entirely distinct question is involved in relating chance and computers. It would be important to know whether an automaton in the sense defined by Turing can generate random numbers. The question is tantamount to asking whether a Turing machine can logically describe the behaviour of those sources of chance that are found in nature and are the subject of the study of probability theory. Because there are many points of view—too many to consider here—more tightly phrased questions may serve as an introduction to the subject, and a few conclusions that can be brought as answers will be mentioned. At the outset, one limited question can be affirmatively answered: Can a computable sequence of numbers, S = (a1, a2, a3, · · · ), serve as the basic ingredient of a probability theory by providing all of the necessary points in a probability space? In this question the term computable sequence is defined to mean that the numbers ak are real and there is a Turing machine that, for any pair of positive integers A, B, will print out in order the first A digits of all ak, for k ranging from 1 to B, in a finite number of steps. It might appear that an affirmative answer to the above question is not striking if simple probability theory alone is considered—that is, a theory of events in which the number of possible outcomes is finite, as in the theory of dice, coins, roulette, and the like. On the other hand, it was shown in 1960 that, when a computable sequence serves as the set of points of a simple probability space, the mathematical expectations of all random variables X defined on the space can be computed according to an explicit algorithm (see 6—>) that makes use of the sample values, X(a1), X(a2), X(a3), · · · , which themselves are computable if X is computable. In this algorithm it is evident that the potential number of values to be calculated is infinite, though the number of possible outcomes (distinct values of X) might be finite.

      In the language of the limited question considered, a listing of all sample values (random numbers) of an infinite sequence of statistically independent random variables can be printed out by a Turing machine, at least in the simple case, with strict adherence to the definition of all probabilistic terms as based on measure theory, the theory that generalizes the concept of length.

      Extension of such constructions beyond the simple case has also been shown to be possible, provided the concept of a random variable can be extended to a class of functions that are more general than the measure-theoretic class. The most explicit formulation of a suitable generalization was given in 1966, and on the basis of this work it is possible to answer affirmatively a second question: For any sequence of probability distributions, is there a sequence of statistically independent random variables with these respective distributions, each of whose sample values can be computed on a Turing machine and whose mathematical expectations are also attainable by algorithm?

      Such results would seem to affront the intuition that tends to divide phenomena into deterministic (or computable) and random (or uncomputable) parts. It is to be observed, however, that in probabilistic matters, passage to the limit and infinite collections are essential ingredients, and such entities are unfamiliar objects in the world in which intuitions are formed.

Bayard Rankin

Classification of automata
      All automata referred to from this point on may be understood to be essentially Turing machines classified in terms of the number, length, and movement of tapes and of the reading and writing operations used. The term discrete state automaton is sometimes used to emphasize the discrete nature of the internal states. The principal classes are transducers (transducer) and acceptors. In automata theory, a transducer is an automaton with input and output; any Turing machine for computing a partial recursive function, as previously described, can stand as an example. An acceptor is an automaton without output that, in a special sense, recognizes or accepts words on the machine alphabet. The input of an acceptor is written on tape in the usual way, but the tape is blank at the end of the computation, and acceptance of the input word is represented by a special state called a final state. Thus, a word x, or sequence of symbols from an alphabet denoted by the letter S, is said to be accepted by an acceptor A if A computes, beginning in an initial state q0 with x on tape, and halts in a final state with tape being entirely blank. A subset designated U of the set of words S* on an alphabet S is called an accepted set if there is an automaton A that accepts any word x ∊ U.

Acceptors
      An elementary result of automata theory is that every recursively enumerable set, or range of a partial recursive function, is an accepted set. In general the acceptors are two-way unbounded tape automata.

      A useful classification of acceptors has been introduced in conjunction with a theory of generative grammars (generative grammar) developed in the United States by a linguist, Noam Chomsky (Chomsky, Noam). A generative grammar is a system of analysis usually identified with linguistics. By its means a language can be viewed as a set of rules, finite in number, that can produce sentences. The use of a generative grammar, in the context of either linguistics or automata theory, is to generate and demarcate the totality of grammatical constructions of a language, natural or automata oriented. A simple grammar for a fragment of English, determined by 12 rules (see 7—>), can serve to introduce the main ideas.

      In this simple grammar, each rule is of the form g → g′ (read, “g′ replaces g”) and has the meaning that g′ may be rewritten for g within strings of symbols. The symbol S̄ that appears in the rules may be understood as standing for the grammatical category “sentence,” Pr for “pronoun,” VP for “verb phrase,” NP for “noun phrase,” and so forth. Symbols marked with a vinculum (¯) constitute the set VN of nonterminal symbols. The English expressions “she,” etc., occurring in the rules constitute the set VT of terminal symbols. S̄ is the initial symbol.

      Beginning with S̄, sentences of English may be derived by applications of the rules. The derivation begins with S̄; the first rule allows Pr VP to be rewritten for S̄, yielding Pr VP; the fourth rule allows V̄ NP to be rewritten for VP, yielding Pr V̄ NP; and so forth (see 8—>). A last step yields a terminal string or sentence; it consists solely of elements of the terminal vocabulary VT. None of the rules applies to it, so no further steps are possible.
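
      The 12 rules of the display are not reproduced here, but a toy grammar in the same spirit (the rules below are invented, and nonterminal symbols are written as bare capitals in place of the vinculum) shows how rewriting proceeds step by step from the initial symbol to a terminal string.

        # An invented context-free fragment: nonterminals are uppercase strings,
        # terminals are lowercase English words.
        RULES = [
            ('S',   ['PR', 'VP']),     # sentence -> pronoun, verb phrase
            ('VP',  ['V', 'NP']),      # verb phrase -> verb, noun phrase
            ('NP',  ['ART', 'N']),     # noun phrase -> article, noun
            ('PR',  ['she']),
            ('V',   ['sees']),
            ('ART', ['a']),
            ('N',   ['sailboat']),
        ]

        def derive(string):
            """Rewrite the leftmost symbol for which a rule exists, printing each
            step, until only terminal symbols remain."""
            while True:
                print(' '.join(string))
                for i, symbol in enumerate(string):
                    replacement = next((rhs for lhs, rhs in RULES if lhs == symbol), None)
                    if replacement is not None:
                        string = string[:i] + replacement + string[i + 1:]
                        break
                else:
                    return string          # no rule applies: a terminal string

        derive(['S'])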

      The set of sentences thus generated by a grammar is called a language. Aside from trivial examples, grammars generate denumerably infinite languages.

Recursively enumerable grammars and Turing acceptors
      As noted above, an elementary result of automata theory is that every recursively enumerable set constitutes an accepted set. Generally speaking, acceptors are two-way unbounded tape automata. On the other hand, a grammar consisting of rules g → g′, in which g and g′ are arbitrary words of (VT ∪ VN)*, is an unrestricted rewriting system, and any recursively enumerable set of words—i.e., language in the present sense—is generated by some such system. These very general grammars thus correspond to two-way acceptors, called Turing acceptors, that accept precisely the recursively enumerable sets.

Finite-state grammars and finite-state acceptors
      Acceptors that move tape left only, reading symbol by symbol and erasing the while, are the simplest possible, the finite-state acceptors. These automata have exactly the same capability as McCulloch-Pitts automata and accept sets called regular sets. The corresponding grammars in the classification being discussed are the finite-state grammars. In these systems the rules g → g′ are restricted so that g is a nonterminal v of VN (as exemplified above) and g′ is of the form us, with u ∊ VN and s ∊ VT. The languages generated by finite-state grammars, owing to this correspondence, are called regular languages.
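
      A finite-state acceptor amounts to a table of state transitions together with a set of final states; the sketch below (an invented machine over the alphabet {a, b}) accepts the regular set of words containing an even number of a's.

        def accepts(word, transitions, start, finals):
            """Run a finite-state acceptor: read the word symbol by symbol,
            left to right, and accept if the run ends in a final state."""
            state = start
            for symbol in word:
                state = transitions[(state, symbol)]
            return state in finals

        # Accepts words over {a, b} that contain an even number of a's.
        transitions = {
            ('even', 'a'): 'odd',  ('even', 'b'): 'even',
            ('odd',  'a'): 'even', ('odd',  'b'): 'odd',
        }
        print(accepts('abba', transitions, 'even', {'even'}))   # True
        print(accepts('aba',  transitions, 'even', {'even'}))   # False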

      Although these simple grammars and acceptors are of some interest in information theory and in neural network modelling, they are not descriptively adequate for English or for such standard computer languages as Algol because they are not able to account for phrase structure. In particular, finite-state grammars cannot generate self-embedded sentences such as “the man the dog bit ran away,” nor can they produce sentences with several readings such as “she is a pretty little girl.”

Context-free grammars and pushdown acceptors
      Context-free, or phrase-structure, grammars, although apparently not affording completely adequate descriptions of vernacular languages, do have the desirable properties just noted. For this family, the rules g → g′ contain single nonterminals on the left, as in the case of the finite-state grammars, but allow g′ to be any word of (VT ∪ VN)*. The example discussed above is a context-free grammar. Grammars of this kind can account for phrase structure and ambiguity (see 9—>).

      Pushdown acceptors, which play a key role in computer-programming theory, are automata corresponding to context-free grammars. A pushdown acceptor is a finite-state acceptor equipped with an added two-way storage tape, the so-called pushdown store. At the beginning of operation this tape is blank. As the automaton computes, the store is used to analyze the syntactical structure of the sentence being read. The store moves left when printed, and only the last symbol printed may be read, then the next to the last, and so forth. The input is accepted if both the (one-way) input and storage tapes are blank when the automaton halts in a final state.

      The representation of Turing machines in quadruple form may be replaced here by a somewhat clearer list of rules that simulate tape action in their application. Rules can be formulated for a pushdown acceptor P for a context-free language L of items xcx^-1, in which x is a word on an abstract alphabet {a, b} and x^-1 is x written in reverse. A first such rule can be formulated to mean that, if P is in state q0 scanning a on input and any (defined) symbol on the pushdown store, it moves tape left, erases a from the input, prints a on the store, and goes into state q1. A symbolic expression for the rule might be: q0a → aq1. Another rule might be of the form: if P is in state q1 scanning c on input and anything on store, it moves input left, erases c, and does nothing with respect to the store—briefly, q1c → q2. Another requires that, if P is in q2 scanning a on input and a on store, then it moves input left, erases a, moves store right, and erases a (see 10—>). An example is easily constructed to show that under certain rules a word, say, abcba is accepted (see 11—>). If q0abcba indicates the outset of a computation with P in the initial state q0 scanning the first a in abcba on input tape and blank on store tape, and if q2 is a final state, then the computation is determined by the rules given above (see 10—>). At the end of the computation the automaton is in a final state q2, both tapes are blank, and there is no rule with q2 alone on the left; P halts and hence abcba is accepted.
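
      The behaviour of such an acceptor for the language of words xcx^-1 over {a, b} can be sketched directly: symbols read before the c are pushed onto the store, and after the c each input symbol must match the symbol popped from the top of the store. The control loop below folds the article's states together and is an illustrative reconstruction, not the exact rule set of the displays.

        def pushdown_accepts(word):
            """Accept words of the form x c x-reversed over the alphabet {a, b}."""
            store, state = [], 'q0'
            for symbol in word:
                if state == 'q0' and symbol in 'ab':
                    store.append(symbol)            # push onto the pushdown store
                elif state == 'q0' and symbol == 'c':
                    state = 'q2'                    # the centre marker switches modes
                elif state == 'q2' and store and symbol == store[-1]:
                    store.pop()                     # match against the top of the store
                else:
                    return False
            return state == 'q2' and not store      # final state with an empty store

        print(pushdown_accepts('abcba'))   # True
        print(pushdown_accepts('abcab'))   # False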

      Reflection on the example and on others easily constructed shows that a pushdown acceptor is able, in effect, to parse sentences of context-free languages.

Context-sensitive grammars and linear-bounded acceptors
      A fourth type of acceptor, which is mainly of mathematical rather than applied interest, is the two-way acceptor with bounded tape—i.e., tape the length of which never exceeds a linear function of the input length. These are the linear-bounded acceptors. They correspond in the present classificatory scheme to context-sensitive grammars. Unlike the context-free grammars, these latter systems use rules g → g′ in which the nonterminal symbol v ∊ VN in g may be rewritten only within a context x · · · y; thus g → g′ is of the form xvy → xwy, with x, y, w ∊ (VT ∪ VN)*. An example of a context-sensitive language accepted by a linear-bounded automaton is the copy language xcx.
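
      The copy language can be recognized within space proportional to the input, in the manner of a linear-bounded acceptor: the sketch below works on the squares occupied by the input itself, crossing off each symbol before the c against the corresponding symbol after it. It is an illustrative reconstruction rather than a formal linear-bounded automaton.

        def copy_language_accepts(word, marker='X'):
            """Accept words of the form x c x over {a, b}, using no storage
            beyond the squares occupied by the input (plus a marker symbol)."""
            tape = list(word)
            if tape.count('c') != 1:
                return False
            centre = tape.index('c')
            if len(tape) != 2 * centre + 1:
                return False
            for i in range(centre):
                j = centre + 1 + i
                if tape[i] != tape[j]:
                    return False
                tape[i] = tape[j] = marker          # cross off the matched pair
            return True

        print(copy_language_accepts('abcab'))   # True
        print(copy_language_accepts('abcba'))   # False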

      The family of recursively enumerable languages includes the context-sensitive languages, which in turn include the context-free languages, which finally include the regular, or finite-state, languages. No other hierarchy of corresponding acceptors has been intensively investigated.

Finite transducers
      The most important transducers are the finite transducers, or sequential machines, which may be characterized as one-way Turing machines (Turing machine) with output. They are the weakest with respect to computing power, while the universal machine is the most powerful. There are also transducers of intermediate power.

Algebraic definition
      Because the tape is one-way with output, a finite transducer T may be regarded as a “black box” with input coming in from the right and output being emitted from the left. Hence, T may be taken to be a quintuple ⟨S, Q, O, M, N⟩, in which S, Q, and O are finite, nonempty sets of inputs, states, and outputs, respectively, and M is a function on the product Q × S into Q and N is a function on the same domain into O. The values are written in the usual functional notation M(q, s) and N(q, s), s ∊ S and q ∊ Q. M and N may be extended to the domain Q × S* by four relations (see 12—>).
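
      The quintuple definition passes directly into code: M gives the next state and N the output for each state-input pair, and both extend to input words by iteration, as the four relations referred to require. The particular machine below, which emits at each step the parity of the 1s read so far, is an invented example.

        def run_transducer(M, N, state, word):
            """Extend the next-state function M and the output function N from
            single input symbols to input words, collecting the outputs."""
            outputs = []
            for symbol in word:
                outputs.append(N[(state, symbol)])
                state = M[(state, symbol)]
            return state, ''.join(outputs)

        # An invented transducer over the input alphabet {0, 1}: at each step it
        # outputs the parity of the number of 1s read so far.
        M = {('even', '0'): 'even', ('even', '1'): 'odd',
             ('odd', '0'): 'odd',   ('odd', '1'): 'even'}
        N = {('even', '0'): '0', ('even', '1'): '1',
             ('odd', '0'): '1',  ('odd', '1'): '0'}
        print(run_transducer(M, N, 'even', '10110'))   # ('odd', '11011')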

Equivalence and reduction
      The most natural classification is by equivalence. If two machines (finite transducers) share the same inputs, then representative states from each are equivalent if every sequence x belonging to the set of words on the alphabet causes the same output from the two machines. Two finite transducers are equivalent if for any state of one there is an equivalent state of the other, and conversely. Homomorphisms (homomorphism) between transducers can also be defined (see 13—>). If one transducer is the image of another under an onto homomorphism, the two are equivalent, but not conversely. For automata that are in a certain sense minimal, however, the converse holds.

      Each equivalence class of transducers contains a smallest, or reduced, transducer—that is, one having the property that equivalence between its states implies equality. There is an algorithm for finding the reduced transducer of a class, which proceeds in a natural way from equivalence classes, or blocks, of states of a given transducer, each such block being defined as a state of the reduced transducer. Reduced equivalent finite transducers are unique up to an isomorphism—that is to say, if two finite transducers are reduced and equivalent, they differ only in the notation for their states.
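
      The reduction algorithm can be sketched as partition refinement: begin by grouping states that give the same output for every single input, then repeatedly split blocks whose members lead, under some input, into different blocks; the blocks that survive become the states of the reduced transducer. The three-state machine below, in which two states are equivalent, is invented for illustration.

        def reduce_states(states, inputs, M, N):
            """Return the partition of states into equivalence classes; each
            block becomes one state of the reduced transducer."""
            # Initial partition: states agreeing on outputs for every input.
            blocks = {}
            for q in states:
                blocks.setdefault(tuple(N[(q, s)] for s in inputs), []).append(q)
            partition = list(blocks.values())
            while True:
                index = {q: i for i, block in enumerate(partition) for q in block}
                refined = {}
                for q in states:
                    key = (index[q], tuple(index[M[(q, s)]] for s in inputs))
                    refined.setdefault(key, []).append(q)
                if len(refined) == len(partition):
                    return partition
                partition = list(refined.values())

        # An invented three-state transducer in which q1 and q2 are equivalent.
        states, inputs = ['q0', 'q1', 'q2'], ['a']
        M = {('q0', 'a'): 'q1', ('q1', 'a'): 'q2', ('q2', 'a'): 'q1'}
        N = {('q0', 'a'): '0', ('q1', 'a'): '1', ('q2', 'a'): '1'}
        print(reduce_states(states, inputs, M, N))    # [['q0'], ['q1', 'q2']]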

Classification by semi-groups
      A mathematically significant classification of transducers may be obtained in terms of the theory of semi-groups. In outline, if the transducer T is reduced, the functions ϕs given in terms of M, for fixed input, as maps from and to the space of states Q constitute a semi-group termed the semi-group of T (see 14—>). By a certain procedure these semi-groups and their associated transducers T may be decomposed into more elementary systems called serial-connected and parallel-connected transducers. In explanation, the next state (starting from states qa, qb) in the serially connected machine TATB is the pair of states made up of the next state in TA from qa with input s and the next state in TB from qb with input NA(qa, s)—which latter is the output of TA (see 15—>). Schematically, the connection may be depicted as the output of TA feeding the input of TB.
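
      The serial connection just described admits a direct sketch: the composite machine runs on pairs of states, the output of TA serves as the input of TB, and the composite output is taken to be the output of TB. The two one-bit machines below are invented, and they reuse the dictionary format of the transducer sketch above.

        def serial_step(MA, NA, MB, NB, qa, qb, s):
            """One step of the serially connected machine: TA reads the external
            input s, and the output of TA drives TB."""
            a_out = NA[(qa, s)]                          # output of TA, input of TB
            next_state = (MA[(qa, s)], MB[(qb, a_out)])
            return next_state, NB[(qb, a_out)]

        # Invented components: TA complements its input bit; TB echoes its input
        # and remembers the last symbol it received.
        MA = {('p', '0'): 'p', ('p', '1'): 'p'}
        NA = {('p', '0'): '1', ('p', '1'): '0'}
        MB = {('r0', '0'): 'r0', ('r0', '1'): 'r1',
              ('r1', '0'): 'r0', ('r1', '1'): 'r1'}
        NB = {(q, s): s for q in ('r0', 'r1') for s in ('0', '1')}

        state = ('p', 'r0')
        for s in '0110':
            state, out = serial_step(MA, NA, MB, NB, state[0], state[1], s)
            print(s, '->', out, state)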

      The parallel connection of two transducers is a system that may be rigorously defined (see 16—>) and that may be schematically depicted with input leading in parallel to both machines and output leading in parallel out of both machines. It has been shown that any finite transducer whatsoever can be decomposed into a system of series-parallel-connected automata, such that each element is either a two-state automaton or one whose semi-group is a simple group. This affords a classification of machines that depends ultimately on the determination of the simple groups of finite order.

      An earlier decomposition scheme was based on a generalization of the concept of congruence relations over sets of states, but discussion of it is omitted here.

Post machines
      Types of automata have been investigated that are structurally unlike Turing machines though the same in point of computational capability. The mathematician E.L. Post (U.S.) proposed in 1936 a kind of automaton (or algorithm) that is a finite sequence of pairs ⟨1, a1⟩, ⟨2, a2⟩, · · · , ⟨m, am⟩, such that ai is either an instruction to move an associated two-way tape one square right or left, an instruction to print a symbol, including a blank, from a finite alphabet, or an integer. A Post machine begins at 1 and at step n obeys the instruction an and then goes to step n + 1, unless an is an integer m, in which case it goes to step m if the square scanned at n is marked or to step n + 1 if that square is blank. Post machines are prototypes of the program schemes developed 10 years later by von Neumann and his associates. For any partial recursive function a Post machine can be found that is capable of computing it.
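
      A simplified interpreter conveys the flavour of such a machine; the version below restricts the alphabet to mark and blank, represents the tape by the set of marked squares, and treats running off the end of the instruction list as halting. The sample program, which moves right past a marked block and marks one further square, is invented.

        def run_post_machine(program, marked=(), max_steps=1000):
            """A simplified Post machine over a two-symbol alphabet (mark, blank).
            program[i] is the instruction a_i (1-indexed): 'R' or 'L' moves the
            tape, 'mark' or 'erase' prints, and an integer m means go to step m
            if the scanned square is marked, otherwise go on to the next step."""
            tape = set(marked)                     # positions of the marked squares
            position, step = 0, 1
            for _ in range(max_steps):
                if step > len(program):
                    return tape                    # running off the list halts the machine
                a = program[step - 1]
                if a == 'R':
                    position += 1
                elif a == 'L':
                    position -= 1
                elif a == 'mark':
                    tape.add(position)
                elif a == 'erase':
                    tape.discard(position)
                else:                              # an integer: the conditional jump
                    step = a if position in tape else step + 1
                    continue
                step += 1
            return tape

        # Invented program: step right while the scanned square is marked, then mark one more.
        program = ['R', 1, 'mark']
        print(sorted(run_post_machine(program, marked=[1, 2])))   # [1, 2, 3]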

      Generalizations to automata or information processors in which the restriction to finiteness on sets is dropped or in which additional information from arbitrary sets is available to a machine during computation continue to be considered in the literature.

R.J. Nelson

Additional Reading
Daniel I.A. Cohen, Introduction to Computer Theory, rev. ed. (1991), provides an introduction to automata, formal languages, and Turing machines at the undergraduate level. J.E. Pin, Varieties of Formal Languages, rev. and updated ed. (1986; originally published in French, 1984), introduces the theory of finite automata and regular languages. A readable introduction to automata theory as a formal theory of systems is M.W. Shields, An Introduction to Automata Theory (1987). Marvin L. Minsky, Computation: Finite and Infinite Machines (1967); and R.J. Nelson, Introduction to Automata (1967), are comprehensive elementary introductions to automata theory. Michael A. Arbib, Theories of Abstract Automata (1969); and Boleslaw Mikolajczak (ed.), Algebraic and Structural Automata Theory (1991; originally published in Polish, 1985), are more advanced introductions. Jiří Adámek and Věra Trnková, Automata and Algebras in Categories (1990), develops automata theory and category theory together. Samuel Eilenberg, Automata, Languages, and Machines, 2 vol. (1974–76), provides a centralized presentation of important topics in automata theory and formal language theory. Martin Davis, Computability & Unsolvability (1958); H. Rogers, Theory of Recursive Functions and Effective Computability (1967, reissued 1987); George S. Boolos and Richard C. Jeffrey, Computability and Logic, 3rd ed. (1989); J. Glenn Brookshear, Theory of Computation: Formal Languages, Automata, and Complexity (1989); and Robert I. Soare, Recursively Enumerable Sets and Degrees: A Study of Computable Functions and Computably Generated Sets (1987), are concerned with the concepts of Turing computability and the theory of recursive functions.

C.E. Shannon and J. McCarthy (eds.), Automata Studies (1956), contains some of the original basic material concerning neural nets and automata with unreliable components or with random elements. Michael Chester, Neural Networks: A Tutorial (1993), is an elementary introduction and survey. Françoise Fogelman Soulié, Yves Robert, and Maurice Tchuente (eds.), Automata Networks in Computer Science (1987), introduces automata networks and gives applications. Eric Goles and Servet Martínez, Neural and Automata Networks (1990), focuses on dynamical neural networks and reviews important results with applications to statistical physics.

Pál Ruján, “Cellular Automata and Statistical Mechanical Models,” Journal of Statistical Physics, 49(1–2):139–222 (1987), introduces cellular automata in connection with lattice models. Kumpati S. Narendra and Mandayam A.L. Thathachar, Learning Automata (1989), is a technical introduction. Norbert Wiener et al., Differential Space, Quantum Systems, and Prediction (1966), discusses the automaton and its environment in the sense of prediction theory and gives reference to other literature in this area as well as the area of computable probability spaces. Good accounts of automata theory and its relations to switching theory are Michael A. Harrison, Introduction to Switching and Automata Theory (1965); and S.T. Hu, Mathematical Theory of Switching Circuits and Automata (1968). A good introduction to machine decomposition theory is J. Hartmanis and R.E. Stearns, Algebraic Structure Theory of Sequential Machines (1966). Noam Chomsky, “Formal Properties of Grammars,” in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (eds.), Handbook of Mathematical Psychology, vol. 2 (1963), pp. 323–418, is a well-respected survey of the field of automata and generative grammars.

Articles presenting approaches to languages and automata from very general mathematical points of view are Seymour Ginsburg and Sheilah Greibach, “Abstract Families of Languages,” Memoirs of the American Mathematical Society, 87:1–32 (1969); and Gene F. Rose, “Abstract Families of Processors,” Journal of Computer and System Sciences, 4:193–204 (1970). J. Richard Büchi, Finite Automata, Their Algebras and Grammars: Towards a Theory of Formal Expressions (1989), presents a centralized discussion of automata theory considering automata as algebras.

Bayard Rankin R.J. Nelson Ed.

* * *


Universalium. 2010.
