The quantum singularity

The MIT researchers' experiment would use a larger number of photons, which would pass through a network of beam splitters and eventually strike photon detectors. The number of detectors would be roughly the square of the number of photons: about 36 detectors for six photons, 100 detectors for 10 photons.
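To make the scaling concrete, here is a minimal Python sketch (mine, not the researchers' code) that models the beam-splitter network in the standard way, as a random unitary matrix acting on the optical modes, taking exactly m = n**2 modes as a stand-in for the "roughly the square" figure:

```python
from scipy.stats import unitary_group

n = 6          # number of photons
m = n ** 2     # detectors (optical modes): roughly the square of n, 36 here

# A Haar-random m x m unitary stands in for the beam-splitter network;
# entry U[i, j] is the amplitude for light entering mode j to exit at mode i.
U = unitary_group.rvs(m)
print(f"{n} photons -> {m} detectors; network modeled as a {m} x {m} unitary")
```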
For any run of the MIT experiment, it would be impossible to predict how many photons would strike any given detector. But over successive runs, statistical patterns would begin to build up. In the six-photon version of the experiment, for instance, it could turn out that there’s an 8 percent chance that photons will strike detectors 1, 3, 5, 7, 9 and 11, a 4 percent chance that they’ll strike detectors 2, 4, 6, 8, 10 and 12, and so on, for any conceivable combination of detectors.
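Part of why those statistics take so long to build up is sheer combinatorics: with n photons and m detectors, the number of distinct combinations in which no two photons share a detector is the binomial coefficient C(m, n). A quick check, again assuming m = n**2:

```python
from math import comb

for n in (6, 10, 20):
    m = n ** 2
    print(f"{n:>2} photons, {m:>3} detectors: "
          f"{comb(m, n):,} possible detector combinations")
```

Already at six photons there are nearly two million combinations, so each run of the experiment contributes just one sample toward a distribution over millions of possible outcomes.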
Calculating that distribution, the likelihood of photons striking a given combination of detectors, is an incredibly hard problem. The researchers' experiment doesn't solve it outright, but every successful execution of the experiment does take a sample from the solution set. One of the key findings in Aaronson and Arkhipov's paper is that not only is calculating the distribution intractably hard, but so is simulating the sampling of it on a classical computer. For an experiment with more than, say, 100 photons, that simulation would probably be beyond the computational capacity of all the computers in the world.
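Where the hardness comes from, in Aaronson and Arkhipov's analysis: the probability that the photons land on a given combination of detectors (one photon per detector) is the squared magnitude of the permanent of an n-by-n submatrix of the network's unitary, and computing the permanent is #P-hard. The best known exact algorithm, Ryser's formula, takes time growing as 2**n. The sketch below is a toy implementation of that calculation, not code from the paper:

```python
import numpy as np
from itertools import combinations
from scipy.stats import unitary_group

def permanent(A):
    """Permanent of a square matrix via Ryser's formula, O(2**n * n**2) time."""
    n = A.shape[0]
    total = 0j
    for r in range(1, n + 1):
        sign = (-1) ** r
        for cols in combinations(range(n), r):
            row_sums = A[:, list(cols)].sum(axis=1)  # row-wise sums over chosen columns
            total += sign * np.prod(row_sums)
    return (-1) ** n * total

# Toy six-photon instance: photons enter the first n of m = n**2 modes.
n, m = 6, 36
U = unitary_group.rvs(m)        # stand-in for the beam-splitter network
S = [0, 2, 4, 6, 8, 10]         # one hypothetical detector combination

# Probability that the n photons land on exactly the detectors in S:
# the squared magnitude of the permanent of the n x n submatrix of U
# formed by the input rows and the output columns in S.
sub = U[np.ix_(list(range(n)), S)]
print(f"Pr[detectors {S}] ~ {abs(permanent(sub)) ** 2:.3e}")
```

Each additional photon doubles the number of terms in Ryser's sum; at 100 photons that is about 2**100, on the order of 10**30 terms per outcome, which is the back-of-the-envelope sense in which the simulation would outrun all the computers in the world.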