Variations of Physical Turing Machines

April 6, 2019 08:23am
by Thomas Tran


Introduction

A Turing Machine (TM) manipulates symbols on a strip of tape according to a set of rules that defines the device. From these rules and a set of states, it can express the logic of any algorithm. The TM is usually pictured as follows. There is a tape that starts at one end and extends indefinitely in the other direction. The tape is divided into individual cells, each of which holds a single discrete, atomic symbol. A head moves along the tape, reading or writing the symbol in the cell beneath it. In this sense the TM formalizes the step-by-step symbol manipulations a computer performs, and a particular TM is often identified with the algorithm it carries out.
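To make this concrete, the following is a minimal sketch of my own (not taken from any of the reviewed papers) of how such a machine can be simulated in Python: the tape is a dictionary mapping cell positions to symbols, and the rules map a (state, symbol) pair to a new symbol, a head movement, and a next state. The example rule set, which simply overwrites a run of 1s with 0s and halts, is purely illustrative.

    # Minimal Turing Machine simulator: tape, head, and transition rules.
    def run_tm(rules, tape, state="q0", halt_state="halt", max_steps=10_000):
        head = 0
        for _ in range(max_steps):
            if state == halt_state:
                break
            symbol = tape.get(head, "B")             # "B" stands for the blank symbol
            new_symbol, move, state = rules[(state, symbol)]
            tape[head] = new_symbol                  # write the new symbol
            head += 1 if move == "R" else -1         # move the head left or right
        return tape, state

    # Illustrative rule set: replace 1s with 0s, then halt on the first blank.
    rules = {
        ("q0", "1"): ("0", "R", "q0"),
        ("q0", "B"): ("B", "R", "halt"),
    }
    print(run_tm(rules, {0: "1", 1: "1", 2: "1"}))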

In 1936, Alan Turing proposed what is now considered the formal TM. Although the TM is ubiquitous in theoretical computer science, it was never built as an actual computing device. All present-day computers are instead based on the electronic architecture devised by von Neumann et al. in the 1940s [1], in which programs and data are held in randomly accessible memory. The universal Turing Machine, on the other hand, implies a linear, sequential-access model.

According to the Church-Turing Thesis, a function on the natural numbers is computable by a Turing Machine if and only if it is computable by a physical machine. In this review, I discuss several conceptualizations of physical TMs for computing functions. Findings from the past thirty years are discussed and summarized below.

Robert Nirenberg published a practical method for representing such a TM in 1986 [2]. His representation was based on the Recursive Functions Algorithmic Language (Refal), a language that has since helped researchers study and simulate TMs.

Robert Nirenberg and Refal

Refal was developed in 1966 in the then Soviet Union, specifically for artificial intelligence (AI) research. The language is well suited to string manipulation, and it is used most notably in the Supercompiler, which acts as a producer of compilers: the Supercompiler takes a description of a language L(i) and produces a compiler for that language into L(m), the language into which L(i) is to be compiled. In this way, Refal minimizes the range of program structures on which optimizing functions must work. Refal is also a natural way to represent Turing Machines because it relies on pattern matching. For instance, a pattern matching any expression that starts and ends with the same symbol might be written in Refal as s.same e0 s.same. Functions within the language then define how Refal evaluates its string inputs.
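As a rough analogue of that pattern (a sketch of mine, not Refal itself), the same check can be expressed in Python as a test that a sequence of symbols begins and ends with the same symbol, with everything in between left unconstrained:

    # Rough Python analogue of the Refal pattern "s.same e0 s.same":
    # one symbol, an arbitrary middle expression, then the same symbol again.
    def matches_same_ends(expr):
        return len(expr) >= 2 and expr[0] == expr[-1]

    print(matches_same_ends("abcba"))  # True
    print(matches_same_ends("abc"))    # False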

The Refal Machine is a model of an abstract machine that executes Refal programs on Refal data. It consists of three parts: the memory field, the view-field, and the control unit. The memory field holds a fixed Refal program, essentially a long stream of atomic symbols fed into the machine. The view-field holds the object on which the Refal machine currently operates and displays the output of the algorithm. The control unit searches the memory field for the applicable transition function.

It is commonly known that a Turing Machine M may be represented as the 7-tuple

M = (Q, Σ, Γ, B, δ, q0, F)

where Q is the finite set of states, Σ the input alphabet, Γ the tape alphabet, B the blank symbol, δ the transition function, q0 the initial state, and F the set of accepting states.

A TM can be represented in Refal with the following format:

M1 S.state(E.left)  S.head E.right

Here, E.left represents the string of tape symbols to the left of the read/write head, S.head represents the symbol at which the tape head has stopped, E.right represents the string of symbols to the right of S.head, and S.state represents the current state.
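Restated in Python terms (a sketch of mine, not Nirenberg's code), this view of a configuration simply splits the tape into the portion to the left of the head, the scanned symbol, and the portion to the right:

    # Split a TM configuration into the Refal-style view:
    # state, (left of head), scanned symbol, right of head.
    def refal_view(state, tape, head):
        return state, tape[:head], tape[head], tape[head + 1:]

    print(refal_view("q1", "0110111", head=3))  # ('q1', '011', '0', '111')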

Refal is thus a useful tool for Turing Machine research, since it allows researchers to write and simulate TMs directly. For instance, two researchers modeled supercompilation and found that a TM written in Refal sped up significantly in terms of running time [4].

Turing Machines and Computational Complexity

There have also been recent findings on the computational complexity of probabilistic TMs. Complexity theory asks which problems can be solved with a Turing Machine and at what cost, and since the TM is the underlying computational framework for measuring complexity, it is usually the model against which complexity is defined. Two resources matter most. The first is time, the duration of a computation, measured by counting the number of transitions the machine makes. The second is space, measured by counting the number of tape squares the computation uses.
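As an illustration (again a sketch of mine, not drawn from the cited papers), the time and space used by a single run can be measured directly by counting transitions and distinct tape cells touched while simulating the machine:

    # Count transitions (time) and distinct tape cells touched (space) for one run.
    def measure_tm(rules, tape, state="q0", halt_state="halt", max_steps=100_000):
        head, steps, touched = 0, 0, set(tape)
        while state != halt_state and steps < max_steps:
            symbol = tape.get(head, "B")
            new_symbol, move, state = rules[(state, symbol)]
            tape[head] = new_symbol
            touched.add(head)
            head += 1 if move == "R" else -1
            steps += 1
        return steps, len(touched)

    rules = {("q0", "1"): ("0", "R", "q0"), ("q0", "B"): ("B", "R", "halt")}
    print(measure_tm(rules, {0: "1", 1: "1", 2: "1"}))  # (4, 4)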

A probabilistic TM can base its decisions on the outcomes of unbiased coin tosses; that is, it can make random choices. However, the ability to make random decisions does not increase the ultimate computational power of TMs. John Gill showed that a palindrome-like language can be recognized by a fixed one-tape probabilistic TM faster, for infinitely many inputs, than by any one-tape deterministic TM [5]. A nondeterministic algorithm can also be converted into a deterministic counterpart in a systematic way, at the cost of an exponential-time algorithm. Gill further characterized classes of languages recognized probabilistically in polynomial time, such as PP, which contains NP; PP is the class of languages accepted by polynomially bounded threshold machines.
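To illustrate the exponential blow-up in that deterministic simulation (a toy sketch, not Gill's construction), a nondeterministic computation that makes k binary choices can be simulated deterministically by enumerating all 2^k choice sequences:

    from itertools import product

    # Deterministically simulate a nondeterministic computation that makes
    # k binary choices: try every one of the 2**k possible choice sequences.
    def simulate_nondeterministic(accepts, k):
        return any(accepts(choices) for choices in product([0, 1], repeat=k))

    # Toy acceptance predicate: accept if the guessed bits sum to exactly 2.
    print(simulate_nondeterministic(lambda bits: sum(bits) == 2, k=4))  # True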

Beyond Classical Computers, Into The Quantum Realm

Advancing technology is beginning to bring into reality what was once theoretical. In our case, we want to understand the limits of computation, and we are not bound to electronic architectures alone: computation can and does occur in other forms. In the early 1980s, Richard Feynman and others proposed the idea of a Quantum Turing Machine, a Turing-machine-like model whose operation is governed by the laws of a quantum system [6]. The Quantum Turing Machine is often used to model the behavior of a quantum computer, a device that researchers are only now beginning to implement. Such a computer is expected to extend the theory of computation beyond classical digital Turing Machines onto new platforms, namely the atomic world, and the strangeness of quantum mechanics introduces new computational possibilities. Quantum computing relies on processing bits of information, qubits, that can be 0 and 1 simultaneously. Because sets of qubits can be placed in superposition and entangled with one another, a quantum register can hold many values at once; this parallelism is what promises, for certain problems, solutions faster than a traditional computer can provide.

In his paper, David DiVincenzo states five requirements for the implementation of quantum computation [3]. First, we need a scalable physical system with well-characterized qubits. Second, we need the ability to initialize the state of the qubits. Third, we need long relevant decoherence times, which characterize how long a qubit retains its quantum behavior. Fourth, we need a universal set of quantum gates with which to specify sequences of data transformations. Finally, we need a method to measure specific qubits. The study of these quantum effects is still in its infancy, and new methods of implementing such a quantum computer remain to be realized.
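As a small numerical illustration of superposition (my own sketch, independent of the cited papers), a single qubit can be represented as a two-component state vector; applying a Hadamard gate to the |0> state produces an equal superposition of 0 and 1, and the measurement probabilities are the squared amplitudes:

    import numpy as np

    # A qubit as a 2-component state vector; |0> = [1, 0].
    ket0 = np.array([1.0, 0.0])

    # The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    state = H @ ket0

    # Measurement probabilities are the squared amplitudes of the state vector.
    print(np.abs(state) ** 2)  # [0.5 0.5]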

Applications to Biomedicine

Researchers have also recently carried the TM model into the biological world. In his 2012 paper [7], Shapiro describes a mechanical device that embodies the theoretical TM and is thus a universal programmable computer. The machine is three-dimensional at the molecular level and is cleverly implemented with polymers, ligation, and allosteric conformational changes. Since 1994, we have known that combinatorial search problems can be solved at the molecular level [8]; in that study, Adleman used in vitro selection techniques to solve such a problem, showing that molecular computation has a long research history.

In that paper, Shapiro uses a chain of basic building blocks (what he calls alphabet monomers) to represent the TM's tape, and another set of building blocks (transition molecules) to encode the machine's transition rules. The set of loaded transition molecules constitutes the computer's program. The computer operates on two chains, one of which it edits much as a TM modifies its tape.
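In the abstract, this correspondence is just the TM definition from earlier restated in Shapiro's vocabulary. The sketch below is mine, and the particular symbols and rules are placeholders rather than anything taken from the paper:

    # Shapiro's vocabulary mapped onto the abstract TM definition.
    alphabet_monomers = {"a", "b", "B"}        # tape symbols, one per monomer type

    # Each "transition molecule" plays the role of one entry of the transition
    # function delta: (state, symbol) -> (new symbol, head move, next state).
    transition_molecules = {
        ("q0", "a"): ("b", "R", "q0"),
        ("q0", "B"): ("B", "R", "halt"),
    }

    # The set of loaded transition molecules constitutes the machine's program.
    print(set(transition_molecules))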

The implications of this machine are vast. First, the design allows the computer to respond to the concentration levels of specific molecules in its environment, and to construct program-defined polymers and release them into that environment. If the machine were implemented with natural molecules suitable for use in the body, it could participate in our natural biochemical pathways, releasing particular polymers when needed or when instructed to. This could have far-reaching implications for healthcare, for instance as an alternative method of delivering medicine.

For the purposes of this review, however, what matters is not the medical applications but the fact that this device is yet another way to represent a Turing Machine. Much like Nirenberg's Refal Machine, this biomolecular TM is a novel representation of a TM, and it carries the model far beyond computer science into the molecular realm.

Summary

According to the Church-Turing Thesis, a function on the natural numbers is computable by a Turing Machine if and only if it is computable by a physical machine. In this review, we have seen a Turing Machine represented abstractly and simulated in Robert Nirenberg's Refal language. We have also seen how the Turing Machine serves as the yardstick for computational complexity, specifically time and space complexity. We then extended our understanding of Turing Machines into other physical domains, namely the quantum and biological realms: in the quantum realm we encountered the Quantum Turing Machine, or universal quantum computer, and in the biological realm the molecular Turing Machine, with its substantial implications for biomedicine. The Turing Machines of the past are thus being built upon and extended into new models, and applications in other realms will help us understand the widespread appeal of the Turing Machine itself.

References

[1] J. von Neumann, “First Draft of a Report on the EDVAC,” The Origins of Digital Computers, pp. 383–392, 1982.

[2] R. M. Nirenberg, “A practical Turing machine representation,” ACM SIGACT News, vol. 17, no. 3, pp. 35–44, Jan. 1986.

[3] D. P. DiVincenzo, “Quantum Computation,” Science, vol. 270, pp. 255–260, Oct. 1995.

[4] A. Karliukou and A. P. Nemytykh, “Supercompilation of Double Interpretation (How One Hour of the Machine's Time Can Be Turned to One Second),” 2002.

[5] J. T. Gill, “Computational complexity of probabilistic Turing machines,” Proceedings of the sixth annual ACM symposium on Theory of computing - STOC '74, 1974.

[6] C. R. Monroe, “What quantum computers may tell us about quantum mechanics,” in Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity, pp. 345–360.

[7] E. Shapiro, “A mechanical Turing machine: blueprint for a biomolecular computer,” Interface Focus, vol. 2, no. 4, pp. 497–503, 2012.

[8] L. M. Adleman, “Molecular computation of solutions to combinatorial problems,” Science, vol. 266, pp. 1021–1024, 1994.