I'm trying to implement a code-capacity/phenomenological simulation of the rotated surface code from scratch (without using any specialized package like stim or pymatching). I have two questions.
Could anyone kindly check whether my workflow is correct, and suggest any simplifications?
1. Define qubit coordinates and stabilizers with the qubits they act on.
2. Randomly generate a binary error array indicating whether the $i$-th data qubit has an X/Z-type error.
3. From the error, generate the syndrome.
4. Build the matching graph with the flagged checks as nodes; I build a complete graph with edge weights proportional to the Chebyshev distance (in the unrotated case you would use the Manhattan distance, but in the rotated case you can walk diagonally).
5. Decode the syndrome by minimum-weight perfect matching to obtain pairs of syndrome nodes, find the shortest diagonal path between each pair, and flip the data qubits along the path.
6. Check the overlap (mod 2) between the residual error (original error plus correction) and the logical operator to decide whether a logical error occurred.
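For concreteness, here is roughly what I have for steps 1, 2, 3 and 6 at distance 3 (pure Python; the particular stabilizer sets and logical operator are one standard assignment for the $d=3$ rotated code as I understand it, so please correct me if they are off):

```python
import random

# Step 1: distance-3 rotated surface code, 9 data qubits on a 3x3 grid:
#   0 1 2
#   3 4 5
#   6 7 8
# One choice (my assumption) of Z-type stabilizers, which detect X errors:
# two weight-4 plaquettes in the bulk, two weight-2 checks on the boundary.
Z_STABS = [(0, 1, 3, 4), (4, 5, 7, 8), (2, 5), (3, 6)]
# A representative logical-Z operator; an X error chain that overlaps it an
# odd number of times flips the logical qubit.
LOGICAL_Z = (0, 1, 2)

def sample_x_errors(p, rng):
    # Step 2: i.i.d. X error on each data qubit with probability p
    return [1 if rng.random() < p else 0 for _ in range(9)]

def syndrome(errors):
    # Step 3: a Z check fires iff it overlaps the errors on an odd # of qubits
    return [sum(errors[q] for q in stab) % 2 for stab in Z_STABS]

def is_logical_error(errors, correction):
    # Step 6: residual = error + correction (mod 2); logical failure iff the
    # residual overlaps the logical operator on an odd number of qubits.
    residual = [(e + c) % 2 for e, c in zip(errors, correction)]
    return sum(residual[q] for q in LOGICAL_Z) % 2 == 1

# Sanity check: X errors along the column {0, 3, 6} give a trivial syndrome
# but still flip the logical qubit (an undetectable logical error).
chain = [1, 0, 0, 1, 0, 0, 1, 0, 0]
print(syndrome(chain), is_logical_error(chain, [0] * 9))
# prints: [0, 0, 0, 0] True
```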
My second question: while performing the simulation, I realized that technically I need some "virtual check qubits", which my current code does not yet have. I think I understand why they are needed: near a boundary, an error can flag only a single real check, which would otherwise have no partner in the matching. But I'm not sure where and how to put them into my current workflow. My thinking is: I'll define them in #1 as a separate set of stabilizers, and in #3 I can generate their syndromes just like the real stabilizers. But in #4 I'm not sure how to build the graph. From a figure in a paper I saw, it doesn't look like I simply take the real+virtual checks and build a complete graph; it seems each virtual check only connects to the nearest real checks. How do I implement this, and what edge weights do I use? I have the impression that with virtual checks some edge weights need to be set to 0; how and why? Also, it seems the virtual checks can be left unmatched in the matching; how should I address this in the implementation?
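To make the question concrete, here is how I currently imagine the virtual-check construction from my reading: add one virtual node per flagged real check, connect each virtual only to its own real partner (with weight equal to that check's distance to the nearest boundary), and connect all virtuals to each other with weight 0, so any virtuals the decoder doesn't need can pair off among themselves "for free". The helpers `cheb_dist` and `boundary_dist` are placeholder names for my distance functions, and the matcher is a brute-force recursion that only works for a handful of flagged checks. Is this the right idea?

```python
INF = float("inf")

def mwpm(nodes, w):
    """Brute-force min-weight perfect matching; fine for a handful of
    flagged checks at small distance (a blossom algorithm would be
    needed for larger codes)."""
    if not nodes:
        return 0.0, []
    a, rest = nodes[0], nodes[1:]
    best_w, best_m = INF, []
    for i, b in enumerate(rest):
        if w(a, b) == INF:
            continue  # forbidden edge
        sub_w, sub_m = mwpm(rest[:i] + rest[i + 1:], w)
        if w(a, b) + sub_w < best_w:
            best_w, best_m = w(a, b) + sub_w, [(a, b)] + sub_m
    return best_w, best_m

def build_matching(flagged, cheb_dist, boundary_dist):
    """flagged: list of coordinates of fired real checks.
    One virtual node per fired check. Each virtual connects only to its
    own real partner (weight = distance to the nearest boundary) and to
    every other virtual with weight 0, so unneeded virtuals can pair off
    among themselves for free -- i.e. be effectively "left unmatched"."""
    reals = [("r", i) for i in range(len(flagged))]
    virts = [("v", i) for i in range(len(flagged))]

    def w(a, b):
        (ta, ia), (tb, ib) = a, b
        if ta == "r" and tb == "r":
            return cheb_dist(flagged[ia], flagged[ib])
        if ta == "v" and tb == "v":
            return 0.0  # zero-weight virtual-virtual edges
        ri, vi = (ia, ib) if ta == "r" else (ib, ia)
        return boundary_dist(flagged[ri]) if ri == vi else INF

    _, matching = mwpm(reals + virts, w)
    # virtual-virtual pairs carry no correction; keep only pairs with a real
    return [(a, b) for a, b in matching if "r" in (a[0], b[0])]
```

With this, a real check matched to its virtual partner would be corrected by a path to the nearest boundary, and two nearby real checks would be matched to each other while their virtuals pair up at zero cost.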
Thanks!!
