
I'm a theorist, and I tend to assume that entanglement can be used freely as a resource, but is this really the case in practice? Is generating entanglement and storing it long enough for it to be consumed a realistic assumption? For example, is it a much slower process than performing logic gates on qubits?

Take, for example, quantum algorithms or protocols that consume entanglement, such as quantum clock synchronization. Suppose the algorithm/protocol needs to be repeated often, say for continuous clock synchronization. Would entanglement generation be a bottleneck in "realistic" scenarios?

I have read about different types of qubits and how entanglement can be generated between them in a lab, but I can't seem to find information about how fast the process is, or whether it is deterministic, for each qubit type. I did find some results on deterministic entanglement generation, but no metric for how fast the process can be repeated.

Any tips regarding speed or difficulty of quantum entanglement generation would help me a lot, or even just some general insights.

  • I think you can generate entangled photons pretty easily then send one of them via laser or optical fiber, so any protocol which uses EPR pairs (teleportation) can just generate them on the fly rather than storing them. Photons are cheap, so if you lose one it's no big deal. (Commented May 6, 2019 at 17:21)
  • Frame challenge: a lot of what we do as theorists is use entanglement as a resource in LOCC scenarios. We want to quantify this specifically because it is hard(er) to create/maintain entanglement. (Commented Sep 22 at 8:29)

2 Answers


Consider "Metropolitan-scale heralded entanglement of solid-state qubits" (2024). It demonstrated heralded entanglement with an infidelity of ~45% (Figure 2) over ~25 km (abstract), at a rate of around one pair per minute (bottom left of page 6).

Any realistic use case will need infidelity better than 45%. That's not a showstopper, because entanglement purification can turn many 45%-infidelity pairs into fewer low-infidelity pairs. But 45% is bad enough that it could take thousands, maybe tens of thousands, of input pairs per good output pair. And the raw rate was already very slow, so it would take days to produce a single good entangled pair from this baseline. Clearly this part still needs a lot of improvement.
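
To get a feel for why purifying from that starting point is so expensive, here is a minimal sketch of one round of the BBPSSW recurrence, assuming (purely for illustration) that the delivered pairs are Werner states of fidelity $F$; the real delivered state isn't exactly a Werner state, and practical schemes like DEJMPS or entanglement pumping are organized differently.

```python
# Minimal sketch: one round of BBPSSW entanglement purification, assuming
# (for illustration only) that the raw pairs are Werner states of fidelity F.
def bbpssw_round(F):
    """Map two identical Werner pairs of fidelity F to one output pair,
    returning (output fidelity, success probability)."""
    p_succ = F**2 + (2 / 3) * F * (1 - F) + (5 / 9) * (1 - F) ** 2
    F_out = (F**2 + (1 / 9) * (1 - F) ** 2) / p_succ
    return F_out, p_succ

for F in (0.55, 0.75, 0.95):
    F_out, p = bbpssw_round(F)
    print(f"F_in = {F:.2f} -> F_out = {F_out:.4f}, success prob = {p:.2f}")
```

Near $F \approx 0.55$ each attempt consumes two pairs, succeeds only ~58% of the time, and raises the fidelity by barely a percentage point; the recurrence only starts paying off quickly once the fidelity is already reasonably high, which is why the raw-pair overhead from this baseline is so large.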

Interestingly, it would be fine if a quantum network targeted a delivered infidelity of ~1%. That infidelity can be purified down to arbitrarily low infidelity with only a negligible reduction in rate. In principle, 1% infidelity can be reached via purification with existing physical gates and existing physical storage. So the network routers don't need to be fault-tolerant quantum computers. Existing ion trap computers are pretty close to good enough for this.
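
For a sense of why ~1% delivered infidelity is a comfortable operating point, here is another back-of-envelope sketch, again assuming Werner-state pairs (which the real states won't exactly be): the one-way hashing protocol distills such pairs at an asymptotic yield of $1 - S(\rho)$ near-perfect pairs per raw pair, where $S$ is the von Neumann entropy.

```python
import math

# Back-of-envelope sketch (assumption: Werner / Bell-diagonal pairs of
# fidelity F). The one-way hashing protocol distills such pairs at an
# asymptotic yield of 1 - S(rho) near-perfect pairs per raw pair.
def hashing_yield(F):
    probs = [F] + [(1 - F) / 3] * 3          # Bell-diagonal eigenvalues
    S = -sum(p * math.log2(p) for p in probs if p > 0)
    return 1 - S

for F in (0.99, 0.95, 0.90):
    print(f"F = {F:.2f} -> asymptotic yield ~ {hashing_yield(F):.2f}")
```

At 1% infidelity roughly 90% of the raw pairs survive distillation, at 10% infidelity only about a third do, and below roughly $F \approx 0.81$ the hashing yield hits zero and you are back to recurrence-style rounds like the one above.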

At the endpoints you likely want full fault-tolerant quantum computers, because otherwise your gate error or storage error would set noise floors that are too high to reach truly useful infidelities like $10^{-9}$. Fault-tolerant quantum computation is still in its infancy; there are active projects to demonstrate memories and gates thousands of times more reliable than the underlying physical hardware.

So, roughly speaking, the current bottleneck is establishing any remote entanglement at all. If that's fixed, it should be totally viable to make a big network with a target infidelity of ~1%, using entanglement purification at the physical level. Getting lower infidelity than that would then bottleneck on having fault-tolerant quantum computers at the endpoints.


For quantum computing itself, the answer may depend on the model of computation (stabilizer-based, measurement-based, etc.), but I think it is also helpful to distinguish between the present(ish) pre-fault-tolerant regime and the coming fully fault-tolerant era.

For example, most gate-based devices available now quote single-qubit gate fidelities (Pauli rotations, etc.) of $\ge 99.999\%$, while quoting two-qubit entangling gate fidelities of maybe $99.8\%$ or a little higher. So at least on the devices being engineered now, entanglement in the NISQ era is harder to generate (the entangling operations have noticeably lower fidelity). Single-qubit gates are also a lot faster than entangling gates on most devices; the entangling gates can take ten or more times longer.
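
As a toy error budget (the numbers below are illustrative placeholders, not the spec sheet of any particular device), preparing a Bell pair with an $H$ followed by a $CX$ is dominated almost entirely by the two-qubit gate, in both error and duration:

```python
# Toy error budget for preparing a Bell pair with H followed by CX.
# All fidelities and durations are illustrative placeholders, not the
# published specs of any particular device.
f_1q, f_2q = 0.99999, 0.998       # single- and two-qubit gate fidelities
t_1q, t_2q = 30e-9, 300e-9        # gate durations (seconds)

bell_fidelity = f_1q * f_2q       # crude product approximation
print(f"approx. Bell-pair fidelity : {bell_fidelity:.5f}")
print(f"error contributions        : 1q {1 - f_1q:.0e} vs 2q {1 - f_2q:.0e}")
print(f"total duration             : {(t_1q + t_2q) * 1e9:.0f} ns")
```

With placeholder numbers like these, the entangling gate contributes a couple of hundred times more error than the single-qubit gate, and the bulk of the runtime.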

The story changes a little bit in the fault-tolerant era, where entangling Clifford gates (such as $CX$ gates used to generate Bell pairs) are much easier to implement than even some single-qubit non-Clifford gates and states (such as magic $T$-states).
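
To make that asymmetry concrete, here is a rough sketch of the magic-state side of the ledger, using the standard 15-to-1 distillation protocol whose output error scales as roughly $35p^3$ to leading order (the constants and the cost of a logical $CX$ are heavily scheme-dependent, so treat these numbers as indicative only):

```python
# Rough sketch of 15-to-1 magic-state distillation: each round turns 15
# noisy T-states of error p into one T-state of error ~ 35 p^3 (leading
# order; constants and routing overheads are scheme-dependent).
def distill_15_to_1(p):
    return 35 * p**3

p_raw = 1e-3                          # illustrative raw T-state error rate
p_1 = distill_15_to_1(p_raw)          # one round : 15 raw states per output
p_2 = distill_15_to_1(p_1)            # two rounds: 225 raw states per output
print(f"raw        : error ~ {p_raw:.1e}")
print(f"one round  : error ~ {p_1:.1e}  (15 raw T-states consumed)")
print(f"two rounds : error ~ {p_2:.1e}  (225 raw T-states consumed)")
```

A fault-tolerant Clifford entangling gate, by contrast, needs no such factory: in a CSS code it can be done transversally or by lattice surgery, with error set by the code distance, which is why Bell pairs are cheap and $T$-states are the expensive resource in this regime.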

