Luca Marx

The Automatic Deutsch-Jozsa Algorithm

Last time we saw a parody of the Deutsch-Jozsa algorithm that used automata to evaluate a function over all of its arguments at once. Today I want to show you how it can be made more serious and how it compares to the true quantum algorithm.

In the last post we saw that we can represent any binary function \(f: \{0,1\}^n \to \{0,1\}\) as a weighted automaton. This allowed us to evaluate \(f\) over many arguments at once.

Then we built a "poor-man's" version of the Deutsch–Jozsa algorithm to distinguish a constant function from a balanced one with just one evaluation.

The question for today is: can we push it further? Could we lift the poor-man's algorithm and bring it closer to the quantum one?

Let's have a look. First we review the quantum Deutsch-Jozsa algorithm step by step (with calculations in Python). As usual, all the code you'll need is here: wfa2.py1

Suppose that Bob has a function2

\[f: \{0,1\}^3 \to \{0,1\}\]

that is either constant or balanced3.

Alice's task is to determine which is the case with just one evaluation.
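To appreciate what "just one evaluation" buys, remember what this costs classically: a deterministic test may need up to \(2^{n-1}+1\) evaluations in the worst case. Here is a minimal brute-force sketch (the classify helper is only for illustration, it is not part of wfa2.py):

```python
from itertools import product

def classify(f, n):
    """Classically decide constant vs balanced: stop as soon as two
    different values appear, or after 2**(n-1)+1 agreeing evaluations."""
    seen = set()
    for k, x in enumerate(product([0, 1], repeat=n), start=1):
        seen.add(f(x))
        if len(seen) == 2:
            return 'balanced', k   # two different values seen
        if k == 2**(n - 1) + 1:
            return 'constant', k   # a majority of inputs agree

print(classify(lambda x: 1, 3))     # ('constant', 5)
print(classify(lambda x: x[0], 3))  # ('balanced', 5)
```

The quantum algorithm (and its automatic remake below) gets by with a single evaluation instead.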

She proceeds in this way:

  1. Alice prepares a 4-qubit register: three qubits for the function argument and one for the result

    \[\ket{\psi_1} = \ket{0} \otimes \ket{0} \otimes \ket{0} \otimes \ket{1} = \ket{0001}\]

    in Python

    import numpy as np
    zero, one = np.array([1,0]), np.array([0,1])
    
    ψ1 = np.einsum("i,j,k,l->ijkl", zero, zero, zero, one)
    
  2. Alice applies the Hadamard transform to each qubit

    \[\ket{\psi_2} = H \otimes H \otimes H \otimes H \ket{\psi_1}\]

    H = (1/np.sqrt(2)) * np.array([[1, 1],
                                   [1,-1]])
    
    ψ2 = np.einsum("ij,kl,mn,op,jlnp->ikmo", H, H, H, H, ψ1)
    

    then she sends her register to Bob

  3. Bob has a function \(f: \{0,1\}^3 \to \{0,1\}\), he builds the unitary operator \(U_f\) that performs the transformation

    \[\ket{x_1 \; x_2 \; x_3 \; y} \xrightarrow{U_f} \ket{x_1 \; x_2 \; x_3 \; y \oplus f(x)}\]

    that is

    \[U_f = \sum_{x_1,x_2,x_3,y=0,1} \ket{x_1 \; x_2 \; x_3 \; y \oplus f(x_1 x_2 x_3)} \bra{x_1 \; x_2 \; x_3 \; y}\]

    from itertools import product
    
    def U(f, x0):
        b, x1 = [zero, one], np.zeros([2]*4)
        for x in product(*[[0,1]]*3):
            for y in [0,1]:
                #        |x y⊕f(x)>   <x y | x0>
                x1[x + (y ^ f(x),)] += np.einsum("i,j,k,l,ijkl->", *[b[i] for i in x], b[y], x0)
        return x1
    
  4. Bob applies \(U_f\) to Alice's register

    \[\ket{\psi_3} = U_f \ket{\psi_2}\]

    # balanced function case
    def bal_f(x):
        x_s = ''.join([str(i) for i in x])
        return 1 if x_s in ['001','010','011','101'] else 0
    
    ψ3 = U(bal_f, ψ2)
    

    then he sends the result back to Alice

  5. Alice applies again the Hadamard transform

    \[\ket{\psi_4} = H \otimes H \otimes H \otimes H \ket{\psi_3}\]

    ψ4 = np.einsum("ij,kl,mn,op,jlno->ikmp", H, H, H, H, ψ3)
    
  6. Alice performs a measurement on the argument part; the probability that she gets all zeroes is

    \[P = |\bra{0000}\ket{\psi_4}|^2 + |\bra{0001}\ket{\psi_4}|^2\]

    P = 0
    for y in [0,1]:
      # s = <000y|ψ4>
      s = np.einsum("i,j,k,l,ijkl->", zero, zero, zero, [zero, one][y], ψ4)
      P += np.conj(s) * s
    
    print(f"probability of Alice measuring all zeroes: {P:.2f}")
    
    probability of Alice measuring all zeroes: 0.00
    

    What would have happened if Bob had a constant function? Let's replay steps 4-6

    # 4. Bob evaluates U_f for the constant case
    ψ3c = U(lambda _: 1, ψ2)
    # 5. Alice performs Hadamard transform
    ψ4c = np.einsum("ij,kl,mn,op,jlno->ikmp", H, H, H, H, ψ3c)
    # 6. Alice measures the argument part
    P = 0
    for y in [0,1]:
      # s = <000y|ψ4>
      s = np.einsum("i,j,k,l,ijkl->", zero, zero, zero, [zero, one][y], ψ4c)
      P += np.conj(s) * s
    
    print(f"probability of Alice measuring all zeroes: {P:.2f}")
    
    probability of Alice measuring all zeroes: 1.00
    
  7. If Alice measures all zeroes in the argument register she concludes that \(f\) is constant, otherwise that it is balanced

If you compare the quantum algorithm to its poor-man's version you'll get a hint of an analogy between quantum states and automata.

If we could make it more precise it would be straightforward to translate the quantum algorithm into the automatic one by replacing quantum states and operations on them with automata.

But there are some pieces missing: the Hadamard transform \(H\), the \(U_f\) operator and the concept of a quantum measurement. Let's see if we can complete the analogy.

First: take the Hadamard transform

\[H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \\ \end{bmatrix}\]

and take the basis

\[\ket{X_{+}} = \frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) \hspace{1cm} \ket{X_{-}} = \frac{1}{\sqrt{2}}(\ket{0} - \ket{1})\]

then \(H\) can be rewritten as

\[H = \ket{X_{+}}\bra{0} + \ket{X_{-}}\bra{1}\]

from the perspective of an automaton it looks very much like a mapping between alphabets

\[\ket{0} \to \ket{X_{+}} \hspace{1cm} \ket{1} \to \ket{X_{-}}\]
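This rewriting is easy to verify numerically; a quick stand-alone check (plain numpy, independent of wfa2.py):

```python
import numpy as np

zero, one = np.array([1, 0]), np.array([0, 1])
Xp = (zero + one) / np.sqrt(2)   # |X+>
Xm = (zero - one) / np.sqrt(2)   # |X->

# H written as |X+><0| + |X-><1|
H_outer = np.outer(Xp, zero) + np.outer(Xm, one)

H = (1 / np.sqrt(2)) * np.array([[1, 1],
                                 [1, -1]])
assert np.allclose(H_outer, H)
```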

Take an automaton

\[\psi = (\boldsymbol{\alpha}, \mathbf{A}, \boldsymbol{\omega})\]

we have already seen how to evaluate it over superpositions of letters, so the effect of \(H\) on \(\psi\) can be defined as4

\[\psi \to H\psi = (\boldsymbol{\alpha}, \sum_{\sigma=0,1} \mathbf{A}^{(\sigma)} H_{\sigma \tau}, \boldsymbol{\omega})\]

To see more clearly how \(H\psi\) behaves let's evaluate it over a word that contains a '0' and a '1'

\begin{align*} (H\psi)(\cdots 0 \cdots) &= \boldsymbol{\alpha}^\top \cdots \left( \sum_{\sigma} \mathbf{A}^{(\sigma)} H_{\sigma 0} \right) \cdots \boldsymbol{\omega} \\ &= \boldsymbol{\alpha}^\top \cdots (\mathbf{A}^{(0)} H_{0 0} + \mathbf{A}^{(1)} H_{1 0}) \cdots \boldsymbol{\omega} \\ &= \boldsymbol{\alpha}^\top \cdots \frac{1}{\sqrt{2}}(\mathbf{A}^{(0)} + \mathbf{A}^{(1)}) \cdots \boldsymbol{\omega} \\ \\ (H\psi)(\cdots 1 \cdots) &= \boldsymbol{\alpha}^\top \cdots \left( \sum_{\sigma} \mathbf{A}^{(\sigma)} H_{\sigma 1} \right) \cdots \boldsymbol{\omega} \\ &= \boldsymbol{\alpha}^\top \cdots (\mathbf{A}^{(0)} H_{0 1} + \mathbf{A}^{(1)} H_{1 1}) \cdots \boldsymbol{\omega} \\ &= \boldsymbol{\alpha}^\top \cdots \frac{1}{\sqrt{2}}(\mathbf{A}^{(0)} - \mathbf{A}^{(1)}) \cdots \boldsymbol{\omega} \\ \end{align*}

then evaluating \(H\psi\) over '0', '1' is the same thing as evaluating the original automaton \(\psi\) over the (normalized) superpositions '(0+1)', '(0-1)'.
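We can check this equivalence on a toy two-state automaton with made-up weights (a stand-alone sketch, not using wfa2.py):

```python
import numpy as np

# toy 2-state weighted automaton ψ = (α, {A0, A1}, ω), arbitrary weights
alpha = np.array([1.0, 0.0])
omega = np.array([0.0, 1.0])
A = {0: np.array([[0.5, 0.5], [0.0, 1.0]]),
     1: np.array([[1.0, 0.0], [0.5, 0.5]])}

H = (1 / np.sqrt(2)) * np.array([[1, 1],
                                 [1, -1]])

# Hψ: remap the alphabet, B^(τ) = Σ_σ A^(σ) H_στ
B = {t: sum(A[s] * H[s, t] for s in (0, 1)) for t in (0, 1)}

def ev(M, word):
    """evaluate the automaton (α, M, ω) over a word of letters"""
    v = alpha
    for c in word:
        v = v @ M[c]
    return v @ omega

# (Hψ)('01') = ψ evaluated over the superpositions (0+1)/√2 and (0-1)/√2
lhs = ev(B, [0, 1])
rhs = alpha @ ((A[0] + A[1]) / np.sqrt(2)) @ ((A[0] - A[1]) / np.sqrt(2)) @ omega
assert np.isclose(lhs, rhs)
```

Both sides compute \(\boldsymbol{\alpha}^\top \frac{1}{\sqrt{2}}(\mathbf{A}^{(0)}+\mathbf{A}^{(1)}) \frac{1}{\sqrt{2}}(\mathbf{A}^{(0)}-\mathbf{A}^{(1)}) \boldsymbol{\omega}\).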

Second: take

\[U_f = \sum_{x_1,x_2,x_3,y=0,1} \ket{x_1 \; x_2 \; x_3 \; y \oplus f(x_1 x_2 x_3)} \bra{x_1 \; x_2 \; x_3 \; y}\]

again, it strongly resembles a string replacement

All of this is done in the computational basis but of course you can evaluate \(U_f\) in any other basis5.

We can encode \(U_f\) into an automaton in the same way we have encoded a generic function \(f: \{0,1\}^n \to \{0,1\}^m\): we build an automaton that recognizes the language6

\[L = \{ a b c y a b c \bar{y} \; | \; \bar{y} = y \oplus f(a b c); \; a,b,c,y = 0,1 \}\]

then we use the derivative to apply \(U_f\) to a single word or to another automaton7; the resulting automaton in turn encodes the function \(f\) on a set of arguments.

Then \(U_f\) is a higher-order automaton: an automaton operating over other automata.
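For the smallest case \(n = 1\) (the original Deutsch problem) the language is easy to write out by hand; a little sketch that enumerates its words (the L_words helper is only illustrative, not part of wfa2.py):

```python
from itertools import product

def L_words(f, n):
    """Enumerate the words 'x y x ȳ' with ȳ = y ⊕ f(x)."""
    words = []
    for x in product('01', repeat=n):
        xs = ''.join(x)
        for y in '01':
            words.append(xs + y + xs + str(int(y) ^ f(xs)))
    return words

# identity function: the result bit flips exactly when x = 1
print(L_words(lambda xs: int(xs), 1))  # ['0000', '0101', '1011', '1110']
```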

Third: quantum measurement. This is the trickiest of all; for now we take a very simplistic approach: to make a measurement we just take a sample from the automaton by tracing a single path in the automaton graph (having fixed the basis).
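As a stand-alone illustration of this kind of sampling, here is a brute-force sketch that draws a word with probability proportional to the squared automaton value (a Born-rule-style stand-in for the path-tracing idea, not the wfa2.py implementation):

```python
import numpy as np
from itertools import product

def sample_word(alpha, A, omega, n, rng=None):
    """Draw a length-n word with probability proportional to the
    squared automaton value, by brute-force enumeration."""
    rng = rng or np.random.default_rng(0)
    words = list(product([0, 1], repeat=n))
    vals = []
    for w in words:
        v = alpha
        for c in w:
            v = v @ A[c]
        vals.append(v @ omega)
    p = np.array(vals, dtype=float) ** 2
    p /= p.sum()
    return words[rng.choice(len(words), p=p)]

# toy automaton: '01' is the only length-2 word with nonzero weight
alpha = np.array([1.0, 0.0])
omega = np.array([0.0, 1.0])
A = {0: np.array([[0.0, 1.0], [0.0, 0.0]]),
     1: np.array([[0.0, 0.0], [0.0, 1.0]])}
print(sample_word(alpha, A, omega, 2))  # (0, 1)
```

On this toy automaton the word '01' carries all the weight, so the sample is deterministic.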


Ok then, we are ready to remake the quantum Deutsch-Jozsa algorithm: for each step we replace items in the left column with items in the right one

quantum Deutsch-Jozsa    automatic Deutsch-Jozsa
quantum states           automata
Hadamard transform       alphabet remapping
\(U_f\) operator         string replacement
quantum measurement      sampling

the result is the automatic Deutsch-Jozsa algorithm:

  1. Alice prepares an automaton that recognizes '0001'

    from wfa2 import make_automaton_from_words
    
    φ1 = make_automaton_from_words('0001')
    φ1.diag('alice_phi1')
    
    alice_phi1.png
  2. Alice applies the Hadamard transform

    φ2 = φ1.rebase(H)
    φ2.diag('alice_phi2')
    
    alice_phi2.png

    (dashed lines correspond to negative weights), then she sends her automaton to Bob

  3. Bob has a function \(f: \{0,1\}^3 \to \{0,1\}\), he builds the automaton repl_Uf that performs the string replacement

    'a b c y' ⟶ 'a b c y ⊕ f(a b c)'

    def make_repl_U(f, n):
        L = []
        for x in product(*[['0','1']]*n):
            xs = ''.join(x)
            for y in ['0','1']:
                x0 = xs + y
                x1 = xs + str(int(y) ^ int(f(xs)))
                print(f"'{x0}' ⟶ '{x1}' (f({xs}) = {f(xs)})")
                L.append(x0 + x1)
    
        return make_automaton_from_words(' '.join(L))
    
  4. Bob applies repl_Uf to Alice's automaton

    # balanced function case
    repl_U_bal = make_repl_U(bal_f, 3)
    
    '0000' ⟶ '0000' (f(000) = 0)
    '0001' ⟶ '0001' (f(000) = 0)
    '0010' ⟶ '0011' (f(001) = 1)
    '0011' ⟶ '0010' (f(001) = 1)
    '0100' ⟶ '0101' (f(010) = 1)
    '0101' ⟶ '0100' (f(010) = 1)
    '0110' ⟶ '0111' (f(011) = 1)
    '0111' ⟶ '0110' (f(011) = 1)
    '1000' ⟶ '1000' (f(100) = 0)
    '1001' ⟶ '1001' (f(100) = 0)
    '1010' ⟶ '1011' (f(101) = 1)
    '1011' ⟶ '1010' (f(101) = 1)
    '1100' ⟶ '1100' (f(110) = 0)
    '1101' ⟶ '1101' (f(110) = 0)
    '1110' ⟶ '1110' (f(111) = 0)
    '1111' ⟶ '1111' (f(111) = 0)
    
    from wfa2 import normalize_wfa
    # this normalize_wfa will need a post of its own
    φ3 = normalize_wfa(repl_U_bal.D_wfa(φ2, 4), 4)
    φ3.diag('bob_bal_phi3')
    
    bob_bal_phi3.png

    then he sends the result back to Alice8.

    If you compare this automaton with the one in the poor-man's algorithm you'll see that it's essentially the same: the only difference is the negative-weight transitions needed for computing \(f\)'s complement

  5. Alice applies again the Hadamard transform

    φ4 = normalize_wfa(φ3.rebase(H), 4)
    φ4.diag('alice_bal_phi4')
    
    alice_bal_phi4.png
  6. Alice takes a sample from \(\varphi_4\); looking at the automaton graph she will get one of '0111', '0011', '1101', '1001', and she will never get '0000' or '0001'.

    What would have happened if Bob had a constant function? Let's replay steps 4-6

    # 4. Bob evaluates repl_Uf
    repl_U_cst = make_repl_U(lambda _: 1, 3)
    φ3c = normalize_wfa(repl_U_cst.D_wfa(φ2, 4), 4)
    # 5. Alice applies the Hadamard transform
    φ4c = normalize_wfa(φ3c.rebase(H), 4)
    φ4c.diag('alice_cst_phi4')
    # 6. Alice samples the resulting automaton
    
    alice_cst_phi4.png

    in this case Alice will always get '0001'

  7. If Alice samples all zeroes in the argument part she concludes that \(f\) is constant, otherwise that it is balanced

How do the two algorithms compare? Take step 4: you'll see that

\[\bra{x} \ket{\psi_3} = \varphi_3(x)\]

for any string \(x \in \{0,1\}^4\)

for x in product(*[[0,1]]*4):
    xs = ''.join(str(i) for i in x)

    assert(np.isclose(ψ3[x],  φ3(xs)))
    assert(np.isclose(ψ3c[x], φ3c(xs)))

this means that the two algorithms are the same: the automaton computes the coefficients of the quantum state.

Automata and quantum states, at least in the very specific context of the Deutsch-Jozsa algorithm, are interchangeable.

Footnotes:

1

the only requirements are numpy and graphviz for making diagrams

2

length 3 is just for keeping things simple; everything works for words of any size

3

equal to one for half the values of its argument and zero for the other half

4

to see how it's implemented in Python, have a look at the rebase(·) method

5

the net effect of \(U_f\) is to turn a product state into an entangled state

7

see the implementation of the D(·) or D_wfa(·) methods

8

the normalize_wfa function uses a modified Brzozowski algorithm to minimize an automaton, more on that in a future post