
**Article Source - "Quantized gravitational responses, the sign problem, and quantum complexity"**

by Zohar Ringel and Dmitry L. Kovrizhin

*Abstract:*

- It is believed that not all quantum systems can be simulated efficiently using classical computational resources. This notion is supported by the fact that it is not known how to express the partition function in a sign-free manner in quantum Monte Carlo (QMC) simulations for a large number of important problems. The answer to the question—whether there is a fundamental obstruction to such a sign-free representation in generic quantum systems—remains unclear. Focusing on systems with bosonic degrees of freedom, we show that quantized gravitational responses appear as obstructions to local sign-free QMC. In condensed matter physics settings, these responses, such as thermal Hall conductance, are associated with fractional quantum Hall effects. We show that similar arguments also hold in the case of spontaneously broken time-reversal (TR) symmetry such as in the chiral phase of a perturbed quantum Kagome antiferromagnet. The connection between quantized gravitational responses and the sign problem is also manifested in certain vertex models, where TR symmetry is preserved.

- Whether we live in a simulation or not is not a question that physics can answer. Unless it's a question of "are we living in a simulation that's running on a computer in a universe with exactly identical laws of physics to our own" which is much more boring.

Behold the fine-tuned universe. The argument you're making is theological, not scientific; "what if it's a simulation run on a computer so alien that our physical laws do not apply there" is akin to saying "no man may know the mind of God." The whole point of injecting rigor into a navel-staring solipsistic conversation like this is to define the limits of the solipsism. The paper effectively says "for computers and math as they apply in any universe we can conceive, this theory is disproven."

I think physics (and computer science) is a little more powerful than what you give it credit for. There is some (maybe not a lot, but definitely some) research into alternate physical laws that are plausible & consistent.

Complexity theory (the study of the P/NP/EXP/&c. thing) also talks about this in terms of 'oracles': what happens if we assume that we can solve any NP problem in one step (aka an 'NP oracle')? Well, it turns out that you get another level of problems that are 'NP-oracle'-hard. And if you have an 'NP-oracle'-oracle, well, then you get a class of ''NP-oracle'-oracle'-hard problems, which leads us to believe that our conception of computation really is a universal thing. Even if other universes exist with magical computation abilities, they'll still run into problems that are similar to the ones we have now.

Or at least that's how I understand this stuff.

As much as I hate to play devil's advocate here, I don't think that paper backs up all the claims made by the Cosmos author. Here's why:

1. The result just shows that certain QMC problems take exponential time/resources to simulate.* This means that if we are living in a simulation, the universe simulating us would be exponentially larger than our universe, which does suggest that maybe Musk's argument that it's highly likely that we live in a simulation is flawed. But, it does not suggest that it's **impossible** for us to live in a simulation.

2. These results are only for **classical** algorithms. One of the big draws of quantum computing is, well, efficient simulation of quantum physics! So this paper doesn't eliminate the possibility we live in a simulation on a quantum computer.**

In short: simulation isn't **impossible**, and math/science has yet to determine that simulation **must** be difficult. Right now, we don't know an efficient simulation technique, but we also don't know that we can't find one.

* But is the problem itself in EXP, or is it perhaps NP-hard and it's just that the only known algorithms to solve it take exponential time? Some reading leads me to believe it's "merely" NP-hard; if it's also in NP (i.e., NP-complete), then efficient (read: polynomial-time) simulation on a classical machine would be possible if P = NP.

** Going off the assumption that this problem is NP-hard, efficient polynomial-time quantum simulation may not be a foregone conclusion. On the other hand, if efficient simulation implies that we are being simulated on a quantum machine, what does that say about the "host" universe's physics?
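To put toy numbers on the "exponentially larger host universe" intuition in point 1 (my own back-of-the-envelope arithmetic, not anything from the paper): the brute-force classical cost of tracking a generic quantum state is visible from the size of the state vector alone, before any algorithm even runs.

```python
# Toy arithmetic (mine, not the paper's): a generic state of n two-level
# systems needs 2**n complex amplitudes. At 16 bytes per complex number,
# memory alone grows exponentially with system size.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Bytes needed to store all 2**n_qubits complex amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
```

At 30 two-level systems that's already ~17 GB; at 50, roughly 18 petabytes. Sign-free QMC is interesting precisely because it sidesteps this blow-up by sampling instead of storing everything.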

The fact that it requires exponential power to calculate a linear relationship suggests that size hits an asymptote, not that better efficiency would solve the problem. You can't clear the discontinuity out of a tangent curve and still have trigonometry.

div by zero isn't a hole you can fill through optimism. It is the abyss staring back at thee.

What I'm saying is that, if this problem is NP-hard, then we don't know that that exponential relationship has to be there. It's possible that there's an algorithm that takes linear time to simulate this linear relationship.

Now, between you and me, I would bet money that there is not such a thing, but I'm playing devil's advocate here!

Want to know something fun? If that exponential relationship does hold, then the size of the simulated universe that 'fits' in the 'initial' universe is proportional to the log of the host universe's size, and the size of the simulated universe that fits in *that* universe is proportional to the log of its host universe's size...

...which for all practical purposes means that you get *at most* 5 levels of simulation. Probably a lot less. Thinking about unbounded functions that nonetheless grow so slowly they might as well be constant is fun!
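That "at most ~5 levels" claim is the iterated logarithm in disguise. A quick sketch (my own toy model, assuming each host can only fit a simulation of about log₂ its own size):

```python
import math

# Toy illustration of the nesting argument (my assumption: a host of "size" S
# can only fit a simulated universe of size ~log2(S)). Counting how many
# levels deep you can go is the iterated logarithm, which grows absurdly
# slowly.
def nesting_depth(size, floor=2.0):
    """Count how many times log2 can be applied before size drops below floor."""
    depth = 0
    while size >= floor:
        size = math.log2(size)
        depth += 1
    return depth

print(nesting_depth(2 ** 265))  # even an astronomically large host bottoms out fast
```

A host with 2^265 "units" supports only 4 nestings under this model, which is where the handful-of-levels intuition comes from.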

This is not my field and I've been plenty wrong before, but this:

- Establishing an obstruction to a classical simulation is a rather ill-defined task. A related, yet more concrete, goal is to find an obstruction to an efficient quantum Monte Carlo (QMC), which is one of the chief numerical workhorses in the field. In the QMC, one considers the Euclidean (or thermal) partition function of a quantum system in d dimensions as a statistical mechanical (SM) model in d + 1 dimensions with the extra dimension being imaginary time (5). If this can be carried out such that the resulting SM model has Boltzmann weights that are local and non-negative, then efficient QMC sampling is possible (6).

Indicates to me that the method is not "can we do this with our current models" but "if this model can be done, it indicates that the approach is physically possible." Do you read that differently?
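For intuition about what "local and non-negative Boltzmann weights make efficient sampling possible" means in practice, here's a minimal sketch (my own toy, not the paper's model) of Metropolis sampling on a classical 1D Ising chain — the kind of statistical-mechanical model the quote says a sign-free QMC mapping produces:

```python
import math
import random

# Minimal Metropolis sketch on a 1D classical Ising chain (periodic boundary).
# Local updates with acceptance probability min(1, exp(-beta * dE)) are well
# defined only because the Boltzmann weights exp(-beta * E) are non-negative;
# a sign problem is exactly what breaks this interpretation as probabilities.
def metropolis_ising(n=32, beta=0.5, steps=20000, seed=0):
    rng = random.Random(seed)
    spins = [1] * n
    for _ in range(steps):
        i = rng.randrange(n)
        # Energy change from flipping spin i against its two neighbors.
        dE = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins

print(sum(metropolis_ising()))  # net magnetization of one sample
```

The whole scheme hinges on reading `exp(-beta * E)` as a probability; negative or complex weights have no such reading, which is the obstruction the paper is about.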

Their conclusion:

- Although QMC offers essentially numerically exact solutions for a wide variety of many-body quantum problems, in the case of aforementioned systems, the Boltzmann weights seem to be unavoidably negative or complex. Notably, signs and phases in the Euclidean partition function can sometimes be a matter of representation. Having a “sign problem” thus means that no local transformation that removes the signs and phases is possible. Notably, the definition of a sign problem must involve the notion of locality [see also the study of Hastings (7)]. By performing a nonlocal transformation on the physical degrees of freedom, one can always diagonalize the Hamiltonian and obtain a classical partition function. However, such a transformation requires computational resources that scale exponentially with the system size. Therefore, for a meaningful definition of the sign problem, one has to add an additional requirement, that is, that the degrees of freedom σ, that enter the Boltzmann weights must be expressed as local combinations of the physical ones or, more precisely, that σ are given by finite depth local quantum circuits acting on the physical degrees of freedom.

In other words, the argument isn't "the way we do the computation doesn't scale"; the argument is "we can tell we're working on a model exponentially smaller than the problem it describes, which indicates that any interpretation of the problem leads to kaboom."

They do give you kinda-sorta an out:

- For the sake of clarity in this work, we do not address the possibility of nonlocal approaches to QMC, such as determinant QMC or cluster algorithms (8–10).

Which, by my read, means "we didn't *also* club it down by eliminating it through this *other* approach." However, if "determinant QMC" disproved their hypothesis I expect they would have addressed it.

But then, I went all the way to Diff EQ. I'm well out of my depth.


I am also kind of out of my depth here (Devac, care to chime in if what follows is way off track?), but to my knowledge the interesting bit is here:

- Having a “sign problem” thus means that no local transformation that removes the signs and phases is possible. Notably, the definition of a sign problem must involve the notion of locality [see also the study of Hastings (7)]. By performing a nonlocal transformation on the physical degrees of freedom, one can always diagonalize the Hamiltonian and obtain a classical partition function. However, such a transformation requires computational resources that scale exponentially with the system size.

The problem of performing a nonlocal transformation is in NP: if you knew the exact set of steps to take when performing the transformation, you could do it in polynomial time --- but nobody knows whether or not you can compute the steps to take in polynomial time. What we have now is an exponential-time algorithm for finding those steps --- in this case, diagonalizing the Hamiltonian.
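A standard analogy for that "knowing the steps vs. finding the steps" gap (my example, not the paper's): subset-sum is in NP — checking a proposed certificate takes polynomial time, while the obvious search tries exponentially many candidates.

```python
from itertools import combinations

# Subset-sum as an NP analogy: verifying a proposed answer (a "certificate")
# is fast, but the only obvious way to *find* one tries exponentially many
# subsets -- like knowing the right nonlocal transformation vs. having to
# search for it by diagonalizing the Hamiltonian.
def verify(nums, subset, target):
    """Polynomial-time check of a proposed certificate."""
    return all(x in nums for x in subset) and sum(subset) == target

def search(nums, target):
    """Brute force: up to 2**len(nums) candidate subsets."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

print(search([3, 9, 8, 4, 5, 7], 15))  # → [8, 7]
```

Whether the *search* side can always be collapsed to polynomial time is exactly the P vs. NP question the footnotes above are circling.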

Sorry for stalling, I needed some time to get through that Hastings paper. Just to let you know: from what I understood, your interpretation is correct. Do keep in mind that for all my chops I'm an undergrad. I'm still stuck at one difficult spot in the OP article and don't feel ashamed admitting it.

Does this truly put the argument to rest, though? The conclusion seems to be that because any simulating machine we can conceive of would need more atoms than we know of in our universe, it's impossible that we're living in a simulation. All that says to me is that it's impossible we're living in a simulation built from our best understanding of what a computer can be, and made of/describing the particles we know of... which only constitutes like 5% of the universe. What's stopping a system we can't comprehend, made of matter we neither know of nor understand, from being what's running the simulation?

Disclaimer: My opinion is based on the cosmos magazine write-up, as reading the paper just showed me how illiterate I am in quantum particle physics, haha. So if someone could explain why the paper proves I'm wrong that would be awesome!

Hitting a limit of exponential expansion in computing is akin to hitting the speed-of-light limit in velocity: you cannot reach the speed of light because the relativistic mass term divides by zero, sending the energy required to infinity. The arguments against are akin to "yeah, but what if there's *another* universe where going the speed of light *doesn't* make your mass diverge?"

Okay, fine. But your simulating universe no longer has relativity as we understand it. Fundamental shit like how protons and electrons interact has changed. "matter" means something different there. Prior to this paper the navel-staring eggheads were all "well what if it's all, like, a simulation, maaaan?" and the argument made by the paper is "if it's a simulation it's running somewhere so unlike our universe that now you're just trying to dig out a new definition of God."