Title: Quantum finite automata  
Author: World Heritage Encyclopedia
Language: English
Subject: QFA, Quantum nanoscience, Quantum information theory, Read-only Turing machine, Finite automata
Collection: Automata Theory, Finite Automata, Quantum Information Theory
Publisher: World Heritage Encyclopedia

Quantum finite automata

In quantum computing, quantum finite automata (QFA), also called quantum state machines, are a quantum analog of probabilistic automata or Markov decision processes. They are related to quantum computers in much the same way that finite automata are related to Turing machines. Several types of automata may be defined, including measure-once and measure-many automata. Quantum finite automata can also be understood as the quantization of subshifts of finite type, or as a quantization of Markov chains. QFAs are, in turn, special cases of geometric finite automata or topological finite automata.

The automata work by accepting a finite-length string \sigma=(\sigma_0,\sigma_1,\cdots,\sigma_k) of letters \sigma_i from a finite alphabet \Sigma, and assigning to each such string a probability \operatorname{Pr}(\sigma) indicating the probability of the automaton being in an accept state; that is, indicating whether the automaton accepted or rejected the string.

The languages accepted by QFAs are not the regular languages of deterministic finite automata, nor are they the stochastic languages of probabilistic finite automata. Study of these quantum languages remains an active area of research.


  • Informal description
  • Measure-once automata
  • Example
  • Measure-many automata
  • Geometric generalizations
  • See also
  • References

Informal description

There is a simple, intuitive way of understanding quantum finite automata. One begins with a graph-theoretic interpretation of deterministic finite automata (DFA). A DFA can be represented as a directed graph, with states as nodes in the graph, and arrows representing state transitions. Each arrow is labelled with a possible input symbol, so that, given a specific state and an input symbol, the arrow points at the next state. One way of representing such a graph is by means of a set of adjacency matrices, with one matrix for each input symbol. In this case, the list of possible DFA states is written as a column vector. For a given input symbol, the adjacency matrix indicates how any given state (row in the state vector) will transition to the next state; a state transition is given by matrix multiplication.

One needs a distinct adjacency matrix for each possible input symbol, since each input symbol can result in a different transition. The entries in the adjacency matrix must be zeros and ones. For any given column in the matrix, only one entry can be non-zero: this is the entry that indicates the next (unique) state transition. Similarly, the state of the system is a column vector in which only one entry is non-zero: this entry corresponds to the current state of the system. Let \Sigma denote the set of input symbols. For a given input symbol \alpha\in\Sigma, write U_\alpha for the adjacency matrix that describes the evolution of the DFA to its next state. The set \{U_\alpha | \alpha\in\Sigma\} then completely describes the state transition function of the DFA. Let Q represent the set of possible states of the DFA. If there are N states in Q, then each matrix U_\alpha is N by N-dimensional. The initial state q_0\in Q corresponds to a column vector with a one in the q_0-th row; a general state q is then a column vector with a one in the q-th row. By abuse of notation, let q_0 and q also denote these two vectors. Then, after reading input symbols \alpha\beta\gamma\cdots from the input tape, the state of the DFA will be given by q = \cdots U_\gamma U_\beta U_\alpha q_0. The state transitions are given by ordinary matrix multiplication (that is, multiply q_0 by U_\alpha, and so on); the order of application appears 'reversed' only because we follow the standard operator-application order of linear algebra.
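
As a concrete sketch of this matrix view, consider a hypothetical two-state DFA over the alphabet {0, 1} (the transition matrices below are chosen only for illustration):

```python
import numpy as np

# Hypothetical two-state DFA: symbol "0" swaps the two states, while
# symbol "1" leaves the state unchanged.  Each column of each matrix
# contains exactly one 1, giving the unique next state for that column's
# current state.
U = {
    "0": np.array([[0, 1],
                   [1, 0]]),
    "1": np.array([[1, 0],
                   [0, 1]]),
}

def run_dfa(word, q):
    """Apply one adjacency matrix per input symbol, by left multiplication."""
    for symbol in word:
        q = U[symbol] @ q      # a state transition is a matrix multiplication
    return q

q0 = np.array([1, 0])          # column vector with a one in the q_0-th row
print(run_dfa("010", q0))      # back in the first state: [1 0]
```

Note that the matrices are applied on the left, so the first symbol read corresponds to the rightmost factor in the product \cdots U_\gamma U_\beta U_\alpha q_0.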

The above description of a DFA, in terms of linear operators and vectors, almost begs for generalization, by replacing the state-vector q by some general vector, and the matrices \{U_\alpha\} by some general operators. This is essentially what a QFA does: it replaces q by a probability amplitude, and the \{U_\alpha\} by unitary matrices. Other, similar generalizations also become obvious: the vector q can be some distribution on a manifold; the set of transition matrices become automorphisms of the manifold; this defines a topological finite automaton. Similarly, the matrices could be taken as automorphisms of a homogeneous space; this defines a geometric finite automaton.

Before moving on to the formal description of a QFA, there are two noteworthy generalizations that should be mentioned and understood. The first is the non-deterministic finite automaton (NFA). In this case, the vector q is replaced by a vector that can have more than one non-zero entry. Such a vector represents an element of the power set of Q; it is just an indicator function on Q. Likewise, the state transition matrices \{U_\alpha\} are defined in such a way that a given column can have several non-zero entries in it. After each application of \{U_\alpha\}, though, the column vector q must be renormalized so that it only contains zeros and ones. Equivalently, the multiply-add operations performed during component-wise matrix multiplication should be replaced by Boolean and-or operations, so that one is working over the Boolean semiring.
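
Under the same matrix view, a single NFA step can be sketched by replacing multiply-add with Boolean and-or (the transition relation below is hypothetical, chosen for illustration):

```python
import numpy as np

# Hypothetical NFA transition relation for one input symbol: from state 0
# the machine may move to state 0 or to state 1 (two non-zero entries in
# column 0); from state 1 it may only remain in state 1.
M_a = np.array([[True,  False],
                [True,  True]])

def step_nfa(matrix, q):
    """Boolean matrix-vector product: multiply-add becomes and-or."""
    return np.array([bool(np.any(row & q)) for row in matrix])

q = np.array([True, False])    # indicator vector: currently in state 0 only
print(step_nfa(M_a, q))        # now in state 0 or state 1: [ True  True]
```

Because the result is computed with and-or, the state vector stays an indicator vector automatically; no separate renormalization step is needed.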

A well-known theorem states that, for each DFA, there is an equivalent NFA, and vice versa. This implies that the set of languages that can be recognized by DFA's and NFA's are the same; these are the regular languages. In the generalization to QFA's, the set of recognized languages will be different. Describing that set is one of the outstanding research problems in QFA theory.

Another generalization that should be immediately apparent is to use a stochastic matrix for the transition matrices, and a probability vector for the state; this gives a probabilistic finite automaton (PFA). The entries in the state vector must be non-negative real numbers that sum to one, so that the state vector can be interpreted as a probability distribution. The transition matrices must preserve this property: this is why they must be stochastic. Each state vector should be imagined as specifying a point in a simplex; thus, this is a topological automaton, with the simplex being the manifold, and the stochastic matrices being linear automorphisms of the simplex onto itself. Since each transition is (essentially) independent of the previous ones (if we disregard the distinction between accepted and rejected languages), the PFA essentially becomes a kind of Markov chain.
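
The probabilistic case fits the same sketch; the column-stochastic matrix below uses illustrative values only:

```python
import numpy as np

# A column-stochastic transition matrix (each column sums to one) acting
# on a probability vector; the numerical values are illustrative only.
T = np.array([[0.9, 0.5],
              [0.1, 0.5]])

p = np.array([1.0, 0.0])       # start with certainty in state 0
p = T @ p                      # one step of the underlying Markov chain
print(p)                       # [0.9 0.1] -- still a point in the simplex
```

Since each column of T sums to one, the total probability is preserved at every step, which is exactly the statement that the simplex is mapped onto itself.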

By contrast, in a QFA, the manifold is complex projective space \mathbb{C}P^N, and the transition matrices are unitary matrices. Each point in \mathbb{C}P^N corresponds to a quantum-mechanical probability amplitude or pure state; the unitary matrices can be thought of as governing the time evolution of the system (viz in the Schrödinger picture). The generalization from pure states to mixed states should be straightforward: A mixed state is simply a measure-theoretic probability distribution on \mathbb{C}P^N.

A worthy point to contemplate is the distributions that result on the manifold during the input of a language. In order for an automaton to be 'efficient' in recognizing a language, that distribution should be 'as uniform as possible'. This need for uniformity is the underlying principle behind maximum entropy methods: these simply guarantee crisp, compact operation of the automaton. Put in other words, the machine learning methods used to train hidden Markov models generalize to QFA's as well: the Viterbi algorithm and the forward-backward algorithm generalize readily to the QFA.

Although the study of QFA was popularized in the work of Kondacs and Watrous in 1997[1] and later by Moore and Crutchfield, they were described as early as 1971, by Ion Baianu.[2][3]

Measure-once automata

Measure-once automata were introduced by Cris Moore and James P. Crutchfield.[4] They may be defined formally as follows.

As with an ordinary finite automaton, the quantum automaton is considered to have N possible internal states, represented in this case by an N-state qubit |\psi\rangle. More precisely, the N-state qubit |\psi\rangle\in \mathbb{C}P^N is an element of N-dimensional complex projective space, carrying an inner product \langle\cdot\vert\cdot\rangle whose associated metric is the Fubini–Study metric.

The state transitions, transition matrices or de Bruijn graphs are represented by a collection of N\times N unitary matrices U_\alpha, with one unitary matrix for each letter \alpha\in\Sigma. That is, given an input letter \alpha, the unitary matrix describes the transition of the automaton from its current state |\psi\rangle to its next state |\psi^\prime\rangle:

|\psi^\prime\rangle = U_\alpha |\psi\rangle

Thus, the triple (\mathbb{C}P^N,\Sigma,\{U_\alpha\vert\alpha\in\Sigma\}) forms a quantum semiautomaton.

The accept state of the automaton is given by an N\times N projection matrix P, so that, given an N-dimensional quantum state |\psi\rangle, the probability of |\psi\rangle being in the accept state is

\langle\psi |P |\psi\rangle = \Vert P |\psi\rangle\Vert^2

The probability of the state machine accepting a given finite input string \sigma=(\sigma_0,\sigma_1,\cdots,\sigma_k) is given by

\operatorname{Pr}(\sigma) = \Vert P U_{\sigma_k} \cdots U_{\sigma_1} U_{\sigma_0}|\psi\rangle\Vert^2

Here, the vector |\psi\rangle is understood to represent the initial state of the automaton, that is, the state the automaton was in before it started accepting the string input. The empty string \varnothing is understood to correspond to the unit matrix, so that

\operatorname{Pr}(\varnothing)= \Vert P |\psi\rangle\Vert^2

is just the probability of the initial state being an accepted state.

Because the left-action of U_\alpha on |\psi\rangle reverses the order of the letters in the string \sigma, it is not uncommon for QFA's to be defined using a right action on the Hermitian transpose states, simply in order to keep the order of the letters the same.

A language is accepted with probability p by a quantum finite automaton if, for all sentences \sigma in the language (and a given, fixed initial state |\psi\rangle), one has p<\operatorname{Pr}(\sigma).


Example

Consider the classical deterministic finite automaton given by the state transition table

State Transition Table (rows: current state; columns: input symbol)

        1     0
  S1    S1    S2
  S2    S2    S1

The quantum state is a vector, in bra–ket notation

|\psi\rangle=a_1 |S_1\rangle + a_2|S_2\rangle = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}

with the complex numbers a_1,a_2 normalized so that

\begin{bmatrix} a^*_1 \;\; a^*_2 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = a_1^*a_1 + a_2^*a_2 = 1

The unitary transition matrices are

U_0=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}


U_1=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

Taking S_1 to be the accept state, the projection matrix is

P=\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}

As should be readily apparent, if the initial state is the pure state |S_1\rangle or |S_2\rangle, then the result of running the machine will be exactly identical to the classical deterministic finite state machine. In particular, there is a language accepted by this automaton with probability one, for these initial states, and it is identical to the regular language for the classical DFA, and is given by the regular expression:

(1^*(01^*0)^*)^*

The non-classical behaviour occurs if both a_1 and a_2 are non-zero. More subtle behaviour occurs when the matrices U_0 and U_1 are not so simple; see, for example, the de Rham curve as an example of a quantum finite state machine acting on the set of all possible finite binary strings.
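
The measure-once acceptance probability \operatorname{Pr}(\sigma) can be computed directly for this example; the sketch below reuses U_0, U_1 and P from above:

```python
import numpy as np

# Unitaries and projector of the example: U_0 swaps |S1> and |S2>,
# U_1 is the identity, and P projects onto the accept state |S1>.
U = {"0": np.array([[0, 1], [1, 0]], dtype=complex),
     "1": np.eye(2, dtype=complex)}
P = np.array([[1, 0], [0, 0]], dtype=complex)

def accept_prob(word, psi):
    """Pr(sigma) = || P U_{s_k} ... U_{s_0} |psi> ||^2 (left-action)."""
    for symbol in word:
        psi = U[symbol] @ psi
    return np.linalg.norm(P @ psi) ** 2

psi0 = np.array([1, 0], dtype=complex)   # pure state |S1>
print(accept_prob("00", psi0))           # even number of 0s: probability 1.0
print(accept_prob("0", psi0))            # odd number of 0s: probability 0.0

# Non-classical behaviour: an equal superposition of |S1> and |S2> is
# accepted with probability 1/2, whatever the input.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(accept_prob("0", plus))            # approximately 0.5
```

For the pure initial states the machine reproduces the classical DFA exactly, as claimed above; the superposition exhibits the genuinely quantum behaviour.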

Measure-many automata

Measure-many automata were introduced by Kondacs and Watrous in 1997.[1] The general framework resembles that of the measure-once automaton, except that instead of there being one projection, at the end, there is a projection, or quantum measurement, performed after each letter is read. A formal definition follows.

The Hilbert space \mathcal{H}_Q is decomposed into three orthogonal subspaces

\mathcal{H}_Q=\mathcal{H}_\mbox{accept} \oplus \mathcal{H}_\mbox{reject} \oplus \mathcal{H}_\mbox{non-halting}

In the literature, these orthogonal subspaces are usually formulated in terms of the set Q of orthogonal basis vectors for the Hilbert space \mathcal{H}_Q. This set of basis vectors is divided up into subsets Q_\mbox{acc} \subset Q and Q_\mbox{rej} \subset Q, such that

\mathcal{H}_\mbox{accept}=\operatorname{span} \{|q\rangle : |q\rangle \in Q_\mbox{acc} \}

is the linear span of the basis vectors in the accept set. The reject space is defined analogously, and the remaining space is designated the non-halting subspace. There are three projection matrices, P_\mbox{acc}, P_\mbox{rej} and P_\mbox{non}, each projecting to the respective subspace:

P_\mbox{acc}:\mathcal{H}_Q \to \mathcal{H}_\mbox{accept}

and so on. The parsing of the input string proceeds as follows. Consider the automaton to be in a state |\psi\rangle. After reading an input letter \alpha, the automaton will be in the state

|\psi^\prime\rangle =U_\alpha |\psi\rangle

At this point, a measurement is performed on the state |\psi^\prime\rangle, using the projection operators P, at which time its wave-function collapses into one of the three subspaces \mathcal{H}_\mbox{accept} or \mathcal{H}_\mbox{reject} or \mathcal{H}_\mbox{non-halting}. The probability of collapse is given by

\operatorname{Pr}_\mbox{acc} (\sigma) = \Vert P_\mbox{acc} |\psi^\prime\rangle \Vert^2

for the "accept" subspace, and analogously for the other two spaces.

If the wave function has collapsed to either the "accept" or "reject" subspaces, then further processing halts. Otherwise, processing continues, with the next letter read from the input, and applied to what must be an eigenstate of P_\mbox{non}. Processing continues until the whole string is read, or the machine halts. Often, additional symbols \kappa and $ are adjoined to the alphabet, to act as the left and right end-markers for the string.

In the literature, the measure-many automaton is often denoted by the tuple (Q;\Sigma; \delta; q_0; Q_\mbox{acc}; Q_\mbox{rej}). Here, Q, \Sigma, Q_\mbox{acc} and Q_\mbox{rej} are as defined above. The initial state is denoted by |\psi\rangle=|q_0\rangle. The unitary transformations are denoted by the map \delta,

\delta:Q\times \Sigma \times Q \to \mathbb{C}

so that

U_\alpha |q_1\rangle = \sum_{q_2\in Q} \delta (q_1, \alpha, q_2) |q_2\rangle
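
The read-measure-collapse loop can be sketched as follows; the three-dimensional machine and its basis labelling are hypothetical, chosen only to illustrate the control flow:

```python
import numpy as np

# Hypothetical 3-dimensional measure-many machine: basis vector 0 spans
# the non-halting subspace, 1 the accept subspace, 2 the reject subspace.
P_acc = np.diag([0.0, 1.0, 0.0]).astype(complex)
P_rej = np.diag([0.0, 0.0, 1.0]).astype(complex)
P_non = np.diag([1.0, 0.0, 0.0]).astype(complex)

def run_measure_many(unitaries, word, psi, rng):
    """Apply U_alpha, then measure; halt on collapse to accept or reject."""
    projs = (P_acc, P_rej, P_non)
    for symbol in word:
        psi = unitaries[symbol] @ psi
        probs = [np.linalg.norm(P @ psi) ** 2 for P in projs]
        k = rng.choice(3, p=np.array(probs) / sum(probs))
        psi = projs[k] @ psi / np.sqrt(probs[k])   # collapse and renormalize
        if k == 0:
            return "accept"
        if k == 1:
            return "reject"
    return "did not halt"

# A unitary that deterministically sends the non-halting state |0> to the
# accept state |1>, so reading a single letter always halts with "accept".
U = {"a": np.array([[0, 0, 1],
                    [1, 0, 0],
                    [0, 1, 0]], dtype=complex)}
rng = np.random.default_rng(0)
print(run_measure_many(U, "a", np.array([1, 0, 0], dtype=complex), rng))
# accept
```

Between measurements the state must lie in the non-halting subspace (an eigenstate of P_\mbox{non}), exactly as the text describes; halting strings never reach the end of the loop.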

Geometric generalizations

The above constructions indicate how the concept of a quantum finite automaton can be generalized to arbitrary topological spaces. For example, one may take some (N-dimensional) Riemannian symmetric space to take the place of \mathbb{C}P^N. In place of the unitary matrices, one uses the isometries of the Riemannian manifold, or, more generally, some set of open functions appropriate for the given topological space. The initial state may be taken to be a point in the space. The set of accept states can be taken to be some arbitrary subset of the topological space. One then says that a formal language is accepted by this topological automaton if the point, after iteration by the homeomorphisms, intersects the accept set. But, of course, this is nothing more than the standard definition of an M-automaton. The behaviour of topological automata is studied in the field of topological dynamics.

The quantum automaton differs from the topological automaton in that, instead of having a binary result (is the iterated point in, or not in, the final set?), one has a probability. The quantum probability is the square of the magnitude of the initial state projected onto some final state P; that is, \mathbf{Pr} = \vert \langle P\vert \psi\rangle \vert^2. But this probability amplitude is just a very simple function of the distance between the point \vert P\rangle and the point \vert \psi\rangle in \mathbb{C}P^N, under the distance given by the Fubini–Study metric. To recap, the quantum probability of a language being accepted can be interpreted as a function of a metric: the probability of acceptance is unity if the metric distance between the initial and final states is zero, and less than one if the distance is non-zero. Thus, it follows that the quantum finite automaton is just a special case of a geometric automaton or a metric automaton, where \mathbb{C}P^N is generalized to some metric space, and the probability measure is replaced by a simple function of the metric on that space.

See also

References

  1. ^ a b A. Kondacs and J. Watrous, "On the power of quantum finite state automata", Proceedings of the 38th Annual Symposium on Foundations of Computer Science (1997), pp. 66–75.
  2. ^ I. Baianu, "Organismic Supercategories and Qualitative Dynamics of Systems" (1971), Bulletin of Mathematical Biophysics, 33, pp. 339–354.
  3. ^ I. Baianu, "Categories, Functors and Quantum Automata Theory" (1971). The 4th Intl. Congress LMPS, August–September 1971.
  4. ^ C. Moore, J. Crutchfield, "Quantum automata and quantum grammars", Theoretical Computer Science, 237 (2000), pp. 275–306.
  • L. Accardi (2001), "Quantum stochastic processes", in Hazewinkel, Michiel (ed.), Encyclopedia of Mathematics, Springer. (Provides an intro to quantum Markov chains.)
  • Alex Brodsky, Nicholas Pippenger, "Characterization of 1-way Quantum Finite Automata", SIAM Journal on Computing, 31 (2002), pp. 1456–1478.
  • Vincent D. Blondel, Emmanuel Jeandel, Pascal Koiran and Natacha Portier, "Decidable and Undecidable Problems about Quantum Automata", SIAM Journal on Computing, 34 (2005), pp. 1464–1473.