Basics of Quantum Information Theory: Topological Toric Code

\def\bra#1{\mathinner{\left\langle{#1}\right|}} \def\ket#1{\mathinner{\left|{#1}\right\rangle}} \def\braket#1#2{\mathinner{\left\langle{#1}\middle|#2\right\rangle}}

Introduction

Last time, I explained that to realize a fault-tolerant qubit, the error rate of each component must be about $10^{-4} = 0.01\%$ or less, and that a concatenated-code construction based on the $7$-qubit Steane code therefore requires an enormous number of qubits. That makes many people feel hopeless, but the "topological surface code" described here should give you a light of hope.

Let me first outline what the following sections cover. "Theoretical explanation" describes the code states of the topological surface code (hereinafter "surface code"), how the basic logical operators are constructed, and why error correction works. "Operation check" then uses the quantum computation simulator qlazy to confirm that the whole sequence of creating a code state, adding noise, and correcting the error works correctly.

The following documents were used as references.

  1. Koshiba, Morimae, Fujii, "Measurement-Based Quantum Computation", Corona Publishing Co., Ltd. (2017)
  2. Keio University, "Quantum Computer Class #14: Geometric Code"
  3. K. Fujii, "Quantum Computation with Topological Codes - from qubit to topological fault-tolerance", arXiv:1504.01444 [quant-ph] (2015)
  4. [Quantum Computer Error Correction Technology](https://ipsj.ixsq.nii.ac.jp/ej/?action=repository_uri&item_id=101754&file_id=1&file_no=1)

Theoretical explanation

Definition of toric code

Face operator and vertex operator

The toric code is an error-correcting code defined by stabilizer generators placed regularly on a lattice of qubits. For example, take an $n \times n$ two-dimensional grid and place a qubit at the middle of each edge (left in the figure below). We then impose periodic boundary conditions: the top and bottom sides are identified, and so are the left and right sides. Rolling the top and bottom sides together gives a cylinder, and gluing the two circles at the ends of the cylinder gives a donut shape, i.e. a torus; that is the picture to have in mind. Since a three-dimensional object is hard to draw, the explanation below uses the grid drawn on a plane, but think of it as the unfolded development of a torus.

fig1.png

To build a stabilizer code we need generators, so we define two kinds of generators on this grid: the "plaquette operator" (face operator) and the "star operator" (vertex operator) (right in the figure above).

The face operator $A_m$ is defined as follows for each face $f_m$ of the grid.

A_m = \prod_{i \in \partial f_m} Z_i  \tag{1}

Here, $\partial f_m$ denotes the four edges forming the boundary of the $m$-th face $f_m$, and $Z_i$ is the Pauli $Z$ operator acting on the qubit placed on the $i$-th edge. As shown on the right of the figure above, you can picture it as a square surrounding each face. These operators are laid out over the entire grid.

On the other hand, the vertex operator is defined as follows for each vertex $v_k$ of the grid.

B_k = \prod_{j \in \delta v_k} X_j  \tag{2}

Here, $\delta v_k$ denotes the four edges connected to the $k$-th vertex $v_k$, and $X_j$ is the Pauli $X$ operator acting on the qubit placed on the $j$-th edge. As shown on the right of the figure above, you can picture it as a cross centered on each vertex. These, too, are laid out over the entire grid. The face operators commute with one another, the vertex operators commute with one another, and whenever a vertex operator and a face operator overlap they always share exactly two qubits, so they commute as well ($X_{i} X_{j}$ and $Z_{i} Z_{j}$ commute). We have therefore defined a set of mutually commuting generators.
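As a small worked check of this commutativity (my own summary of the standard argument): a face operator and a vertex operator overlap on either zero or exactly two edges, and on a shared edge $Z_i X_i = -X_i Z_i$ while operators on distinct edges commute trivially, so

A_m B_k = (-1)^{|\partial f_m \cap \delta v_k|} B_k A_m = B_k A_m

since $|\partial f_m \cap \delta v_k| \in \{0, 2\}$.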

Now, a question: how many qubits (edges), face operators, and vertex operators are there on this $n \times n$ two-dimensional lattice? Taking the periodic boundary conditions into account, there are $2n^2$ qubits (edges), $n^2$ face operators (faces), and $n^2$ vertex operators (vertices). So there are $2n^2$ mutually commuting generators in total, but they are not all independent: the product of all face operators is the identity $I$, and the product of all vertex operators is also the identity $I$ [^1]. Because of these two constraints, the number of independent generators is $2n^2 - 2$. Since the number of qubits (edges) was $2n^2$, the number of logical qubits that can be described in this code space is $2n^{2} - (2n^{2} - 2) = 2$.

[^1]: Why is that? The face operators tile the torus without gaps, and when all face operators are multiplied together, the $Z$ operator on every edge appears exactly twice, so the product is the identity $I$. Similarly, the vertex operators cover the torus without gaps, and when all vertex operators are multiplied together, the $X$ operator on every edge appears exactly twice, so the product is again the identity $I$.
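As a quick arithmetic check of this counting, here is a throwaway Python sketch (mine, not from the references); the numbers match the $3 \times 3$ example used in the implementation later.

# Counting check for an n x n toric code (illustrative only)
n = 3
n_qubits = 2 * n**2                        # one qubit per edge
n_faces = n**2                             # plaquette (face) operators
n_vertices = n**2                          # star (vertex) operators
n_independent = n_faces + n_vertices - 2   # two global constraints: each total product = I
n_logical = n_qubits - n_independent       # encoded logical qubits
print(n_qubits, n_independent, n_logical)  # -> 18 16 2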

Dual lattice

In the grid shown above, edges are joined at vertices, but we can also consider another grid whose edges join the centers of the faces; that is, a grid obtained by placing a vertex at the center of each face and connecting those vertices (drawn with broken lines on the left of the figure below). Such a grid is called a "dual lattice".

fig2.png

When the face and vertex operators are drawn on this dual lattice, something interesting happens (or rather, you see it at a glance): the face operator becomes cross-shaped and the vertex operator becomes a square, so the picture from before is reversed (right in the above figure).

Since the faces, edges, and vertices of the original lattice correspond to the vertices, edges, and faces of the dual lattice, the generators $A_m$ and $B_k$ can be rewritten in terms of the dual lattice as follows.

A_m = \prod_{i \in \delta \bar{v}_m} Z_i  \tag{3}
B_k = \prod_{j \in \partial \bar{f}_k} X_j  \tag{4}

Here, $\bar{v}_m$ is the vertex of the dual lattice corresponding to the face $f_m$, and $\bar{f}_k$ is the face of the dual lattice corresponding to the vertex $v_k$ (from now on, a bar above a symbol indicates an entity on the dual lattice). The reason for introducing this will become clear in a later explanation, so please keep it in mind [^2].

[^2]: Reference 3 describes this part more generally in terms of algebraic topology, using chain complexes. It is interesting that the toric code can be defined on various lattices other than the square lattice, but that would make this article long, so I omit it here.

Trivial loop

Now that the generators have been defined, let us consider what kind of operators arise from loops defined on this lattice (torus). First, consider the boundary $\partial D$ of a region $D$ on the grid (see the figure below).

fig3.png

Such a loop is called a "trivial loop" [^3]. Writing the product of the $Z$ operators on the edges of this loop as $Z(\partial D)$, the following relation holds.

[^3]: A loop that can be contracted to a point by continuous deformation is called a "trivial loop".

Z(\partial D) = \prod_{m,f_m \in D} A_{m}  \tag{5}

The right-hand side is the product of all the face operators contained in the region $D$. Can you see why this holds? Two adjacent face operators share one $Z$ operator, so when their product is taken, the shared edge becomes $I$. Taking the product of all the face operators contained in $D$, every edge except the boundary of $D$ becomes $I$, and in the end only the $Z$ operators on the boundary $\partial D$ remain. As is clear from equation (5), this operator $Z(\partial D)$ commutes with all generators and is an element of the stabilizer group (because it is a product of the $A_{m}$).
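To see the cancellation concretely (a worked example of mine, not taken from the references): for two adjacent faces $f_m$ and $f_{m'}$ sharing the single edge $i$,

A_{m} A_{m'} = Z_{i}^{2} \prod_{j \in \partial (f_m \cup f_{m'})} Z_j = \prod_{j \in \partial (f_m \cup f_{m'})} Z_j

because $Z_{i}^{2} = I$. Repeating this for every face contained in $D$ leaves exactly the $Z$ operators on $\partial D$, which is equation (5).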

Next, consider the boundary $\partial \bar{D}$ of a region $\bar{D}$ defined on the dual lattice (see the figure below).

fig4.png

Writing the product of the $X$ operators on the edges of this loop on the dual lattice as $X(\partial \bar{D})$, the following relation holds.

X(\partial \bar{D}) = \prod_{k,\bar{f}_k \in \bar{D}} B_{k}  \tag{6}

As before, taking the product of all the face operators of the dual lattice that make up the region (these were the vertex operators of the original lattice) leaves only the $X$ operators on the boundary. As is clear from equation (6), this operator $X(\partial \bar{D})$ also commutes with all generators and is an element of the stabilizer group.

So both the product of the $Z$ operators placed on a trivial loop of the original lattice and the product of the $X$ operators placed on a trivial loop of the dual lattice are elements of the stabilizer group; applying either of them to a code state leaves the code state unchanged.

Non-trivial loop

Next, let us look at the properties of the $Z$ and $X$ operators placed on non-trivial loops.

fig5.png

The straight line labeled $l_1$ in the above figure is, under the periodic boundary conditions, a loop that wraps around the torus. Although it is a loop, it is a non-trivial loop because it cannot be contracted to a point by continuous deformation. Writing the product of the $Z$ operators placed on its edges as

\tilde{Z}_1 = Z(l_1)  \tag{7}

this operator commutes with all the generators. Indeed, the face operators contain only $Z$ operators, and a vertex operator always overlaps $l_1$ on an even number of edges (zero or two), so they commute [^4]. It commutes with the stabilizer group, yet it is not an element of it; in other words, it cannot be written as a product of the generators. If you doubt this, try it: say you want to build the product of the $Z$ operators lined up on $l_1$, so you multiply together the face operators adjacent to $l_1$. Their product, however, is $Z(l_1)$ times the $Z$ operators on yet another non-trivial loop, not $Z(l_1)$ itself.

[^4]: In the figure, $l_1$ is drawn as a straight line, but even for a loop that is not straight, the number of its edges overlapping any given vertex operator is always even. You can convince yourself by drawing a picture and thinking about it for a minute.

So what is an operator that commutes with all generators but is not an element of the stabilizer group? The answer: a logical operator. If that sounds puzzling, here is a brief explanation. Write the stabilizer group $S$ in terms of independent generators as

S = < g_1, g_2, \cdots g_{n-k} >  \tag{8}

and let $\ket{\psi}$ be the stabilizer state. Take an operator $g \notin S$ that commutes with all the generators $g_i$. Since

g \ket{\psi} = g g_{i} \ket{\psi} = g_{i} g \ket{\psi}  \tag{9}

holds, $g \ket{\psi}$ lies in the simultaneous $+1$ eigenspace of all the $g_i$, that is, it is still a state in the code space. However, $g \ket{\psi} = \ket{\psi}$ does not hold (because $g \notin S$). Therefore $g$ is an operator that changes only the logical state without leaving the code space, in other words, a logical operator.

Now let $\tilde{Z}_1$ defined in equation (7) be the logical $Z$ operator for the first logical qubit [^5].

[^5]: I used $l_1$ to define the $Z$ operator for the first logical qubit, but any non-trivial loop that winds around vertically (not necessarily a straight line) can be used.

So how should the logical $X$ operator for the first logical qubit be defined? If we set

\tilde{X}_1 = X(\bar{l}_1)  \tag{10}

then it commutes with all generators and satisfies $\tilde{X}_1 \tilde{Z}_1 \tilde{X}_1 = -\tilde{Z}_1$, so this choice works.

The logical $Z$ operator for the second logical qubit can be taken as

\tilde{Z}_2 = Z(l_2)  \tag{11}

Since a non-trivial loop winding vertically was used to define $\tilde{Z}_1$, here we use a non-trivial loop winding horizontally [^6].

[^6]: If another non-trivial loop winding vertically were used, it would give a logical operation with the same effect as $\tilde{Z}_1$, so a topologically different loop is needed.

The logical $X$ operator for the second logical qubit must commute with all generators and anti-commute with $\tilde{Z}_2$, so we can take

\tilde{X}_2 = X(\bar{l}_2)  \tag{12}

This completes the set of basic logical operators. Again, the four loops shown above are just one example: topologically inequivalent non-trivial loops on the original and dual lattices can be chosen to define the $Z$ and $X$ operators of the two logical qubits [^7].
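As a check (for the straight loops drawn in the figure; this summary is mine): $\bar{l}_1$ crosses $l_1$ on exactly one edge and shares no edge with $l_2$, and likewise $\bar{l}_2$ crosses $l_2$ on exactly one edge and shares no edge with $l_1$. The same edge-counting argument used for the generators then gives

\tilde{X}_i \tilde{Z}_j = (-1)^{\delta_{ij}} \tilde{Z}_j \tilde{X}_i \qquad (i, j \in \{1, 2\})

which is exactly the Pauli algebra required of the $X$ and $Z$ operators of two logical qubits.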

Error correction

Now that we have defined the code space and the basic logical operators, let's see how error correction can be achieved using this code.

Bit inversion error

First, let's see how the error bit can be identified when a bit inversion error occurs on a single bit. For example, suppose a bit inversion error occurs on bit number 4 in the figure below.

fig6.png

In this case, when all generators are measured, only the face operators $Z_{1} Z_{2} Z_{3} Z_{4}$ and $Z_{4} Z_{5} Z_{6} Z_{7}$ corresponding to $f_1$ and $f_2$ in the above figure output the measured value $-1$. If the original logical state is $\ket{\psi_{L}}$, then

\begin{align}
Z_{1} Z_{2} Z_{3} Z_{4} (X_{4} \ket{\psi_{L}}) &= - X_{4} (Z_{1} Z_{2} Z_{3} Z_{4} \ket{\psi_{L}}) = -X_{4} \ket{\psi_{L}} \\
Z_{4} Z_{5} Z_{6} Z_{7} (X_{4} \ket{\psi_{L}}) &= - X_{4} (Z_{4} Z_{5} Z_{6} Z_{7} \ket{\psi_{L}}) = -X_{4} \ket{\psi_{L}} \tag{13}
\end{align}

From this, we can see that the measured value is $-1$ with probability $100\%$. Therefore, if syndrome measurement of the generators finds face operators that output $-1$, a bit inversion error has occurred, and its location can be identified as the edge where those face operators touch each other. Applying a bit inversion again to the identified bit returns the state to the original.

But what if a bit inversion error occurs on two or more bits? For example, consider the case where the three bits shown in blue on the left of the figure below are in error.

fig7.png

Measuring the generators, only the face operators corresponding to the faces $f_2$ and $f_7$ output $-1$. Errors have also occurred on bits belonging to the faces $f_4$ and $f_5$, but since two bits of each of those faces are flipped simultaneously, their measured values are $+1$. When errors occur on bits of adjacent faces like this, they look like a connected chain when viewed on the dual lattice, so this is called an "error chain" (shown by the blue line), and only the face operators at the end points of this chain respond to the syndrome measurement (measured value $-1$, filled with light blue). We somehow have to reconstruct the whole error chain from this end-point information and correct the error. This may sound like an ill-posed problem whose solution can only succeed probabilistically, however hard we try. But don't worry. This is where I find the toric code remarkable: the estimate of the error chain is in fact allowed to be off to some extent. Let me explain why that is okay.

Look at the right of the above figure. Suppose the real error chain is $\bar{c}$ (shown by the blue line) and the estimated error chain is $\bar{r}$ (shown by the red line). The actual error is then $X(\bar{c})$, while the estimated error is $X(\bar{r})$. As you can see from the figure, $X(\bar{c})$ and $X(\bar{r})$ can be transformed into each other by multiplying by the vertex operators $B_2, B_4, B_5$ (the region filled in light orange). In other words,

X(\bar{c}) = B_{2} B_{4} B_{5} X(\bar{r})  \tag{14}

holds. Therefore, even if we assume the error was $X(\bar{r})$ and apply the recovery operation,

\begin{align}
X(\bar{r}) X(\bar{c}) \ket{\psi_{L}} &= X(\bar{r}) B_{2} B_{4} B_{5} X(\bar{r}) \ket{\psi_{L}} \\
&= X(\bar{r}) X(\bar{r}) B_{2} B_{4} B_{5} \ket{\psi_{L}} \\
&= B_{2} B_{4} B_{5} \ket{\psi_{L}} = \ket{\psi_{L}} \tag{15}
\end{align}

and the state has returned to the original [^8]. Therefore, as long as the estimated error chain can be deformed continuously into the true error chain, as on the right of the figure above, the original code state is recovered without any problem.

[^8]: It is also fine to understand this as follows: $X(\bar{r} + \bar{c})$ has no effect on the code state because $\bar{r} + \bar{c}$ is a trivial loop.

However, if the estimated error chain cannot be continuously deformed into the true error chain, as in the figure below, the recovery process fails.

fig8.png

Phase inversion error

Next, consider the case where a phase inversion error occurs in any one bit. Suppose that a phase inversion error occurs in the 4th bit as shown in the figure below.

fig9.png

In this case, when all generators are measured, only the vertex operators $X_{1} X_{3} X_{4} X_{6}$ and $X_{2} X_{4} X_{5} X_{7}$ corresponding to $v_1$ and $v_2$ in the above figure output the measured value $-1$. If the original logical state is $\ket{\psi_{L}}$, then

\begin{align}
X_{1} X_{3} X_{4} X_{6} (Z_{4} \ket{\psi_{L}}) &= - Z_{4} (X_{1} X_{3} X_{4} X_{6} \ket{\psi_{L}}) = -Z_{4} \ket{\psi_{L}} \\
X_{2} X_{4} X_{5} X_{7} (Z_{4} \ket{\psi_{L}}) &= - Z_{4} (X_{2} X_{4} X_{5} X_{7} \ket{\psi_{L}}) = -Z_{4} \ket{\psi_{L}} \tag{16}
\end{align}

From this, we can see that the measured value is $-1$ with probability $100\%$. Therefore, if syndrome measurement of the generators finds vertex operators that output $-1$, a phase inversion error has occurred, and its location can be identified as the edge where those vertex operators touch each other. Applying a phase inversion again to the identified bit restores the original state.

But what if a phase inversion error occurs on two or more bits? For example, consider the case where the three bits shown in blue on the left of the figure below are in error.

fig10.png

Measuring the generators, only the vertex operators corresponding to the vertices $v_3$ and $v_4$ output $-1$ (the parts filled in light blue). Errors have also occurred on bits touching the vertices $v_2$ and $v_5$, but since two bits of each are phase-flipped simultaneously, their measured values are $+1$. In this case we need to estimate an error chain connecting the vertices $v_3, v_2, v_5, v_4$. As before, if the estimated error chain $r$ can be continuously deformed into the true error chain $c$, applying $Z(r)$ built from the estimate restores the original logical state. Recovery fails when the estimated chain cannot be continuously deformed into the real one.

Error chain estimation

As we have just seen, error correction succeeds even when the estimated error chain is slightly off, but it fails when the estimated error chain cannot be deformed by continuous transformation into the real one. We want to avoid failure as much as possible, so let's think about how best to estimate the chain.

Now suppose a bit inversion error $X(\bar{c})$ occurs on an error chain $\bar{c}$ (the same argument applies to phase inversion errors, so here we consider only bit inversion). The error chain $\bar{c}$ is not necessarily a single connected chain; imagine that $\bar{c}$ may consist of several chains taken together (blue lines in the figure below).

The probability $p(\bar{c})$ of such an error can be expressed as

p(\bar{c}) = (1-p)^{|E|} \prod_{i \in E} \Bigl(\frac{p}{1-p} \Bigr)^{z_i}  \tag{17}

Here, $|E|$ is the total number of edges, and $z_i$ is a binary value equal to $1$ if the $i$-th edge is contained in $\bar{c}$ and $0$ otherwise [^9].

Given that the syndrome measurement yields the end points, i.e. the boundary $\partial \bar{c}$ of $\bar{c}$, it seems reasonable to take as the most plausible error chain the $\bar{c}^{\prime}$ that maximizes the conditional probability $p(\bar{c}^{\prime} | \partial \bar{c})$. In other words, we estimate the error chain as

\bar{r} = \arg \max_{\bar{c}^{\prime}} p(\bar{c}^{\prime}|\partial \bar{c})  \tag{18}

where $p(\bar{c}^{\prime}|\partial \bar{c})$ can be written as

p(\bar{c}^{\prime}|\partial \bar{c}) \propto (1-p)^{|E|} \prod_{i \in E} \Bigl( \frac{p}{1-p} \Bigr)^{z_{i}^{\prime}} \Bigr|_{\partial \bar{c}^{\prime} = \partial \bar{c}}  \tag{19}

Here, $z_{i}^{\prime}$ is a binary value equal to $1$ if the $i$-th edge is contained in $\bar{c}^{\prime}$ and $0$ otherwise, so finding the binary sequence $\{z_{i}^{\prime}\}$ that maximizes equation (19) gives the $\bar{r}$ of equation (18). As equation (19) shows, maximizing it amounts to minimizing the number of $1$s contained in the binary sequence $\{z_{i}^{\prime}\}$. In other words, all the end points should be paired up so that the total connecting distance is minimized (red lines in the figure below; a small illustrative sketch follows the figure). Such a graph optimization problem is solved by the "minimum weight perfect matching algorithm", and algorithms that solve it in a polynomial number of steps are known (I do not understand them well, so I omit the explanation).

fig11.png
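The implementation later in this article handles only a single-bit error and does not need matching, but for reference here is a minimal sketch of how the end points could be paired by minimum weight perfect matching. It assumes the networkx library is available, and the defect coordinates and grid size are made-up inputs.

import itertools
import networkx as nx  # assumption: networkx is installed

def torus_distance(p, q, n):
    # Manhattan distance between two defects on an n x n torus (periodic boundaries)
    dr = abs(p[0] - q[0]); dr = min(dr, n - dr)
    dc = abs(p[1] - q[1]); dc = min(dc, n - dc)
    return dr + dc

def mwpm_pairs(defects, n):
    # Pair up the syndrome defects so that the total connecting distance is minimal.
    # max_weight_matching on negated weights gives a minimum weight perfect matching.
    G = nx.Graph()
    for a, b in itertools.combinations(range(len(defects)), 2):
        G.add_edge(a, b, weight=-torus_distance(defects[a], defects[b], n))
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return [(defects[a], defects[b]) for a, b in matching]

# hypothetical defect positions (plaquettes that measured -1) on a 5 x 5 torus
print(mwpm_pairs([(0, 1), (0, 3), (3, 2), (4, 4)], 5))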

Error probability and correction failure rate

In general, increasing the number of bits used for encoding (adding redundancy) allows error correction with better performance (a lower correction failure rate), but only on the premise that the error probability per bit is below a certain threshold.

For example, consider the classical repetition code that encodes one bit redundantly into $N$ bits: as long as errors occur on no more than half of the $N$ bits, the original bit can be recovered perfectly by majority vote. Roughly speaking, if the error probability per bit is below the threshold $1/2$, the bit can be recovered. "Roughly" because there is still some probability that more than half of the bits carrying the signal are in error, in which case the recovery fails. However, if $N$ can be made large enough, this failure rate can be brought arbitrarily close to zero (see the small numerical sketch below). Of course, the precondition that the error probability is smaller than the threshold ($1/2$) is indispensable: if the error probability exceeds the threshold, increasing $N$ is counterproductive, and the larger it is, the more surely recovery fails [^10].

[^10]: The characteristics of an error-correcting code can be drawn as a curve with the error probability on the horizontal axis and the correction failure rate on the vertical axis; it is an S-shaped curve whose inflection point is the error probability threshold. As $N$ increases, the S-curve becomes steeper, and in the limit of infinite $N$ it becomes a step-shaped characteristic curve.
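Here is a quick numerical illustration of that threshold behavior for the classical repetition code (a small sketch of mine; the probabilities are just examples).

from math import comb

def repetition_failure_rate(p, N):
    # Probability that a majority vote over N copies fails,
    # i.e. more than half of the N bits flip (N odd, so no ties)
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range((N + 1) // 2, N + 1))

for p in (0.4, 0.6):  # just below / just above the threshold 1/2
    print(p, [round(repetition_failure_rate(p, N), 4) for N in (3, 11, 51)])
# below the threshold the failure rate shrinks as N grows; above it, it grows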

The same applies to quantum error-correcting codes: for error correction to be effective, the error probability per bit must be below a certain threshold. If it is, making $N$ large enough brings the correction failure rate arbitrarily close to zero; conversely, if the error probability is above the threshold, increasing $N$ only increases the correction failure rate.

So what is this threshold for the toric code? The results of a numerical simulation are published in Reference 3 and quoted below.

figure1.png

Here, the horizontal axis is the error probability per bit and the vertical axis is the correction failure rate. The solid line is $n = 10$, the broken line is $n = 20$, and the dotted line is $n = 30$ (we are considering an $n \times n$ grid). From the inflection point of these S-shaped curves, the error probability threshold is calculated to be about $10.3\%$. Incidentally, this simulation is for the case where the error chain is estimated with the "minimum weight perfect matching algorithm".

The surface code is a degenerate code in which multiple error chain patterns give the same syndrome measurement result, so minimum-distance decoding is not necessarily optimal. The optimal decoding is the one that maximizes the total probability of the error patterns leading to the same correction result; in that case the threshold rises to around $10.9\%$.

Also, on a square grid the bit inversion error and the phase inversion error have the same error probability threshold, but if the grid is changed to a honeycomb or kagome (cage-like) shape, the thresholds for bit inversion and phase inversion become asymmetric. In actual physical systems, phase inversion errors are often larger than bit inversion errors, so simulations based on such assumptions are also being carried out (for details, see Reference 3, https://arxiv.org/abs/1504.01444).

The error threshold of about $10\%$ just described concerns errors that occur while a code state is stored or transmitted with the surface code. In actual quantum computation, errors associated with initial state preparation, logical operations, and measurements must also be taken into account. Combining all of these, the error rate per component required to realize fault-tolerant quantum computation is estimated to be about $1\%$. As explained in the previous article, a concatenated code based on a CSS code such as the Steane code was thought to require a threshold [^11] of about $10^{-4} = 0.01\%$ or less, so this is a big step forward. This is what I called "the light of hope" in the Introduction.

[^11]: This seems to be a rather optimistic estimate; in practice the requirement is said to be around $10^{-5}$ to $10^{-6}$.

Operation check

Now, let's use the quantum computation simulator qlazy to construct the surface code on a $3 \times 3$ grid and check that error correction works correctly.

Implementation

Here is the entire Python code.

from qlazypy import QState

Lattice = [{'edge': 0, 'faces':[0, 6], 'vertices':[0, 1]},
           {'edge': 1, 'faces':[1, 7], 'vertices':[1, 2]},
           {'edge': 2, 'faces':[2, 8], 'vertices':[0, 2]},
           {'edge': 3, 'faces':[0, 2], 'vertices':[0, 3]},
           {'edge': 4, 'faces':[0, 1], 'vertices':[1, 4]},
           {'edge': 5, 'faces':[1, 2], 'vertices':[2, 5]},
           {'edge': 6, 'faces':[0, 3], 'vertices':[3, 4]},
           {'edge': 7, 'faces':[1, 4], 'vertices':[4, 5]},
           {'edge': 8, 'faces':[2, 5], 'vertices':[3, 5]},
           {'edge': 9, 'faces':[3, 5], 'vertices':[3, 6]},
           {'edge':10, 'faces':[3, 4], 'vertices':[4, 7]},
           {'edge':11, 'faces':[4, 5], 'vertices':[5, 8]},
           {'edge':12, 'faces':[3, 6], 'vertices':[6, 7]},
           {'edge':13, 'faces':[4, 7], 'vertices':[7, 8]},
           {'edge':14, 'faces':[5, 8], 'vertices':[6, 8]},
           {'edge':15, 'faces':[6, 8], 'vertices':[0, 6]},
           {'edge':16, 'faces':[6, 7], 'vertices':[1, 7]},
           {'edge':17, 'faces':[7, 8], 'vertices':[2, 8]}]

F_OPERATORS = [{'face':0, 'edges':[ 0,  3,  4,  6]},
               {'face':1, 'edges':[ 1,  4,  5,  7]},
               {'face':2, 'edges':[ 2,  3,  5,  8]},
               {'face':3, 'edges':[ 6,  9, 10, 12]},
               {'face':4, 'edges':[ 7, 10, 11, 13]},
               {'face':5, 'edges':[ 8,  9, 11, 14]},
               {'face':6, 'edges':[ 0, 12, 15, 16]},
               {'face':7, 'edges':[ 1, 13, 16, 17]},
               {'face':8, 'edges':[ 2, 14, 15, 17]}]

V_OPERATORS = [{'vertex':0, 'edges':[ 0,  2,  3, 15]},
               {'vertex':1, 'edges':[ 0,  1,  4, 16]},
               {'vertex':2, 'edges':[ 1,  2,  5, 17]},
               {'vertex':3, 'edges':[ 3,  6,  8,  9]},
               {'vertex':4, 'edges':[ 4,  6,  7, 10]},
               {'vertex':5, 'edges':[ 5,  7,  8, 11]},
               {'vertex':6, 'edges':[ 9, 12, 14, 15]},
               {'vertex':7, 'edges':[10, 12, 13, 16]},
               {'vertex':8, 'edges':[11, 13, 14, 17]}]

LZ_OPERATORS = [{'logical_qid':0, 'edges':[0, 1,  2]},
                {'logical_qid':1, 'edges':[3, 9, 15]}]

LX_OPERATORS = [{'logical_qid':0, 'edges':[0, 6, 12]},
                {'logical_qid':1, 'edges':[3, 4, 5]}]

def make_logical_zero():

    qs = QState(19)  # data:18 + ancilla:1

    mvals = [0, 0, 0, 0, 0, 0, 0, 0, 0] # measured values of 9 plaquette operators
    for vop in V_OPERATORS: # measure and get measured values of 9 star operators
        qid = vop['edges']
        qs.h(18).cx(18,qid[0]).cx(18,qid[1]).cx(18,qid[2]).cx(18,qid[3]).h(18)
        mvals.append(int(qs.m(qid=[18]).last))
        qs.reset(qid=[18])
        
    return qs, mvals

def measure_syndrome(qs, mvals):

    syn = []
    for fop in F_OPERATORS:  # plaquette operators
        qid = fop['edges']
        qs.h(18).cz(18,qid[0]).cz(18,qid[1]).cz(18,qid[2]).cz(18,qid[3]).h(18)
        syn.append(int(qs.m(qid=[18]).last))
        qs.reset(qid=[18])
        
    for vop in V_OPERATORS:  # star operators
        qid = vop['edges']
        qs.h(18).cx(18,qid[0]).cx(18,qid[1]).cx(18,qid[2]).cx(18,qid[3]).h(18)
        syn.append(int(qs.m(qid=[18]).last))
        qs.reset(qid=[18])
        
    for i in range(len(syn)): syn[i] = syn[i]^mvals[i]

    return syn

def get_error_chain(syn):

    face_id = [i for i,v in enumerate(syn) if i < 9 and v == 1]
    vertex_id = [i-9 for i,v in enumerate(syn) if i >= 9 and v == 1]

    e_chn = []
    if face_id != []:  # chain type: X
        for lat in Lattice:
            if lat['faces'][0] == face_id[0] and lat['faces'][1] == face_id[1]:
                e_chn.append({'type':'X', 'qid':[lat['edge']]})
                break

    if vertex_id != []: # chain type: Z
        for lat in Lattice:
            if lat['vertices'][0] == vertex_id[0] and lat['vertices'][1] == vertex_id[1]:
                e_chn.append({'type':'Z', 'qid':[lat['edge']]})
                break

    return e_chn

def error_correction(qs, e_chn):

    for c in e_chn:
        if c['type'] == 'X': [qs.x(i) for i in c['qid']]
        if c['type'] == 'Z': [qs.z(i) for i in c['qid']]

def Lz(self, q):

    [self.z(i) for i in LZ_OPERATORS[q]['edges']]
    return self
        
def Lx(self, q):

    [self.x(i) for i in LX_OPERATORS[q]['edges']]
    return self
    
if __name__ == '__main__':

    QState.add_methods(Lz, Lx)
    
    print("* initial state: logical |11>")
    qs_ini, mval_list = make_logical_zero()  # logical |00>
    qs_ini.Lx(0).Lx(1)  # logical |00> -> |11>
    qs_fin = qs_ini.clone()  # for evaluating later

    print("* add noise")
    qs_fin.x(7)  # bit flip error at #7
    # qs_fin.z(7).x(7)  # bit and phase flip error at #7

    syndrome = measure_syndrome(qs_fin, mval_list)
    err_chain = get_error_chain(syndrome)
    print("* syndrome measurement:", syndrome)
    print("* error chain:", err_chain)

    error_correction(qs_fin, err_chain)
    print("* fidelity after error correction:", qs_fin.fidelity(qs_ini))
    
    QState.free_all(qs_ini, qs_fin)

First, the face, edge, and vertex numbers on the grid were fixed as follows. Here, $f_i$ is a face, $e_i$ is an edge, and $v_i$ is a vertex.

fig12.png

The global variables Lattice, F_OPERATORS, V_OPERATORS, LZ_OPERATORS, and LX_OPERATORS at the beginning of the program are the data defining the graph structure, the generators, and the logical operators in terms of these faces, edges, and vertices. Lattice is a list of dictionaries giving, for each edge, the faces it borders and the vertices it connects, which describes the graph structure of the grid completely. F_OPERATORS lists the edge numbers forming the boundary of each face, defining the $9$ face operators. V_OPERATORS is a list of dictionaries giving the edge numbers connected to each vertex, defining the $9$ vertex operators. LZ_OPERATORS defines the two logical $Z$ operators: the $0$th logical $Z$ is $Z_{0} Z_{1} Z_{2}$ and the $1$st logical $Z$ is $Z_{3} Z_{9} Z_{15}$. LX_OPERATORS defines the two logical $X$ operators: the $0$th logical $X$ is $X_{0} X_{6} X_{12}$ and the $1$st logical $X$ is $X_{3} X_{4} X_{5}$.
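As an optional sanity check (not part of the original script), the commutation relations from the theory section can be verified directly from these tables: every face operator should share 0 or 2 edges with every vertex operator, and logical $Z_i$ should share an odd number of edges with logical $X_j$ only when $i = j$.

# Optional consistency check of the tables above (assumes they are defined as in the listing)
for fop in F_OPERATORS:
    for vop in V_OPERATORS:
        assert len(set(fop['edges']) & set(vop['edges'])) in (0, 2)   # they commute
for i, lz in enumerate(LZ_OPERATORS):
    for j, lx in enumerate(LX_OPERATORS):
        overlap = len(set(lz['edges']) & set(lx['edges']))
        assert overlap % 2 == (1 if i == j else 0)   # anti-commute only for i == j
print("commutation relations OK")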

Based on this configuration, let's look at each part of the main processing section in order.

qs_ini, mval_list = make_logical_zero()  # logical |00>

Creates a logical zero state. The contents of the function make_logical_zero are as follows.

def make_logical_zero():

    qs = QState(19)  # data:18 + ancilla:1

    mvals = [0, 0, 0, 0, 0, 0, 0, 0, 0] # measured values of 9 plaquette operators
    for vop in V_OPERATORS: # measure and get measured values of 9 star operators
        qid = vop['edges']
        qs.h(18).cx(18,qid[0]).cx(18,qid[1]).cx(18,qid[2]).cx(18,qid[3]).h(18)
        mvals.append(int(qs.m(qid=[18]).last))
        qs.reset(qid=[18])
        
    return qs, mvals

This time, 18 data bits (0-17) are required, and one auxiliary bit (number 18) is added for initial state preparation and syndrome measurement, for a total of 19 bits. You might think more auxiliary bits are needed, but one is enough because it can be reused by resetting it each time.

First, the physical all-zero state is created with QState(19), and the generators are measured indirectly in the for loop. It looks like only the vertex operators are measured, and that is fine. The state prepared first is the physical all-zero state, i.e. a simultaneous eigenstate of all the face operators with eigenvalue $+1$, so measuring the face operators would not change the state. Moreover, since the face operators and vertex operators commute, we may regard the face operators as having been measured first. For these reasons, measuring only the vertex operators is sufficient here.
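To recall why the H / controlled-gate / H pattern on the ancilla measures a stabilizer eigenvalue at all (a standard identity, stated here for convenience): for a Hermitian unitary $U$ (here a product of $X$s or $Z$s such as $B_k$ or $A_m$) and an ancilla prepared in $\ket{0}$,

(H \otimes I) \, \Lambda(U) \, (H \otimes I) \ket{0} \ket{\psi} = \ket{0} \frac{I+U}{2} \ket{\psi} + \ket{1} \frac{I-U}{2} \ket{\psi}

where $\Lambda(U)$ is $U$ controlled on the ancilla. An ancilla outcome of $0$ ($1$) therefore projects the data register onto the $+1$ ($-1$) eigenspace of $U$, which is exactly what the cx / cz ladders in the code implement.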

So we measure the 9 vertex operators in order. If the measured value is $+1$ (0 as a measurement index), we simply move on to the next vertex operator; the problem is when the measured value is $-1$ (index 1). One could prepare an operator that flips the eigenvalue and apply it, but here we take a shortcut and let it pass, simply recording the measured value (index). The resulting state is not a simultaneous eigenstate with eigenvalue $+1$ for all generators, but since we know for which generators the measured value should be $-1$, syndrome measurement is still possible: for a generator whose eigenvalue is $-1$, we just invert the interpretation of the syndrome measurement. In the function above, the measured values (indices) are stored in the list variable mvals. It has 18 elements in the end; the face operator outcomes are always 0, so a list of nine 0s is prepared first regardless, and the result of each vertex operator measurement is appended to mvals. Finally, the quantum state qs and mvals are returned.

Return to the main processing section.

qs_ini.Lx(0).Lx(1)  # logical |00> -> |11>

Then the logical state $\ket{1_{L} 1_{L}}$ is created by applying logical $X$ to the two logical qubits (starting from $\ket{0_{L} 0_{L}}$ would have been fine too, but I wanted to use the logical $X$ operator, so the starting point is $\ket{1_{L} 1_{L}}$). The logical $X$ operator is

def Lx(self, q):

    [self.x(i) for i in LX_OPERATORS[q]['edges']]
    return self

defined as shown above; it is self-explanatory, so I omit the details.

qs_fin = qs_ini.clone()  # for evaluating later

Finally, the state is cloned so that the error correction result can be evaluated later. The initial state is now ready.

Next,

qs_fin.x(7)  # bit flip error at #7

this adds a bit inversion error to bit number 7 (any bit would do, so I picked one arbitrarily).

syndrome = measure_syndrome(qs_fin, mval_list)

Then, perform a syndrome measurement to identify the error bit and the type of error. The contents of the function measure_syndrome are as follows.

def measure_syndrome(qs, mvals):

    syn = []
    for fop in F_OPERATORS:  # plaquette operators
        qid = fop['edges']
        qs.h(18).cz(18,qid[0]).cz(18,qid[1]).cz(18,qid[2]).cz(18,qid[3]).h(18)
        syn.append(int(qs.m(qid=[18]).last))
        qs.reset(qid=[18])
        
    for vop in V_OPERATORS:  # star operators
        qid = vop['edges']
        qs.h(18).cx(18,qid[0]).cx(18,qid[1]).cx(18,qid[2]).cx(18,qid[3]).h(18)
        syn.append(int(qs.m(qid=[18]).last))
        qs.reset(qid=[18])
        
    for i in range(len(syn)): syn[i] = syn[i]^mvals[i]

    return syn

The face operators and the vertex operators are measured in order, and the results are stored in a list. These are the syndrome values, but as explained earlier, there are generators for which the eigenvalue $-1$ (measurement index 1) is the normal value, and for those the interpretation must be inverted. Using the list mvals of measured values recorded when the initial state was created,

for i in range(len(syn)): syn[i] = syn[i]^mvals[i]

the syndrome values are flipped as shown above. This result is returned as the corrected syndrome values.

Return to the main processing section.

err_chain = get_error_chain(syndrome)

This estimates the error chain. It is easy because the $3 \times 3$ grid can only correct a 1-bit error. The contents of the function get_error_chain are as follows.

def get_error_chain(syn):

    face_id = [i for i,v in enumerate(syn) if i < 9 and v == 1]
    vertex_id = [i-9 for i,v in enumerate(syn) if i >= 9 and v == 1]

    e_chn = []
    if face_id != []:  # chain type: X
        for lat in Lattice:
            if lat['faces'][0] == face_id[0] and lat['faces'][1] == face_id[1]:
                e_chn.append({'type':'X', 'qid':[lat['edge']]})
                break

    if vertex_id != []: # chain type: Z
        for lat in Lattice:
            if lat['vertices'][0] == vertex_id[0] and lat['vertices'][1] == vertex_id[1]:
                e_chn.append({'type':'Z', 'qid':[lat['edge']]})
                break

    return e_chn

The face operator numbers face_id and the vertex operator numbers vertex_id are identified from the positions where the syndrome value is 1. From the identified face operators we know the edges forming the boundary of each face, and their common edge is the bit where a bit inversion error occurred. Likewise, from the identified vertex operators we know the edges connected to each vertex, and their common edge is the bit where a phase inversion error occurred. In this way, the bit numbers and error types making up the error chain are found; they are stored in the dictionary list e_chn and returned.

Return to the main processing section. Using the error chain just obtained,

error_correction(qs_fin, err_chain)

restores the original state. The contents of the function error_correction are as follows.

def error_correction(qs, e_chn):

    for c in e_chn:
        if c['type'] == 'X': [qs.x(i) for i in c['qid']]
        if c['type'] == 'Z': [qs.z(i) for i in c['qid']]

The processing is as you see it (explanation omitted). This completes the error correction. At the end of the main processing section,

print("* fidelity after error correction:", qs_fin.fidelity(qs_ini))

displays the fidelity between the initial state and the final state after error correction. If the fidelity is 1.0, the error correction succeeded.

Result

The execution result is shown below.

* initial state: logical |11>
* add noise
* syndrome measurement: [0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
* error chain: [{'type': 'X', 'qid': [7]}]
* fidelity after error correction: 1.0

The syndrome measurement shows that the face operators number 1 and number 4 reacted, identifying a bit inversion error on bit number 7. The fidelity with the initial state is 1.0, so the state was restored correctly.

Next, I changed the added error to a combined bit and phase inversion error as follows.

# qs_fin.x(7)  # bit flip error at #7
qs_fin.z(7).x(7)  # bit and phase flip error at #7

The execution result is

* initial state: logical |11>
* add noise (phase and bit flip): X_7 Z_7
* syndrome measurement: [0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
* error chain: [{'type': 'X', 'qid': [7]}, {'type': 'Z', 'qid': [7]}]
* fidelity after error correction: 1.0

It turned out that this can also be corrected correctly.

In conclusion

Now you understand the basics of the toric code. It was very refreshing and interesting to see that strong codes can be constructed by making full use of topology. Through the references I was also able to study algebraic topology, which I was not familiar with.

This time I explained the toric code, which is constructed on a torus, but how would one physically realize a torus? And even if it could be realized, there is the problem that only two logical qubits can be represented (for a torus with one hole). To overcome this, surface codes defined on an ordinary plane (with defects) rather than on a torus have also been proposed, so I would like to study them next time.

Acknowledgments: In writing this article, I participated in a study session hosted by Nextremer Co., Ltd., ["4th Topological Quantum Calculation and Surroundings"](https://nextremer.connpass.com/event/176939/), and learned a lot. The lectures on the key points were easy to understand and helped my understanding progress. Thank you for the opportunity.

That's all.


[^9]: Formula (17) may look difficult at first glance, but if you let $M$ be the number of edges contained in $\bar{c}$, you can see it from $p(\bar{c}) = (1-p)^{|E|-M} p^{M} = (1-p)^{|E|} \bigl( \frac{p}{1-p} \bigr)^{M}$.
