
However, the known vulnerabilities in HTTPS have been fixed, and the TLS specs (the security layer of HTTPS) are regularly updated to remove insecure cryptographic algorithms and add new, improved ones.

Don't get me wrong; internet security is still a shitshow in many ways, but you should be confident in the encryption provided by HTTPS.

If you haven't already, look up This Old Tony as well.

If you're stuck in just one place you're doing better than I am!

I am also kind of out of my depth here (Devac, care to chime in if what follows is way off track?), but to my knowledge the interesting bit is here:

- Having a “sign problem” thus means that no local transformation that removes the signs and phases is possible. Notably, the definition of a sign problem must involve the notion of locality [see also the study of Hastings (7)]. By performing a nonlocal transformation on the physical degrees of freedom, one can always diagonalize the Hamiltonian and obtain a classical partition function. However, such a transformation requires computational resources that scale exponentially with the system size.

The problem of performing a nonlocal transformation is in NP: if you knew the exact set of steps to take when performing the transformation, you could do it in polynomial time --- but nobody knows whether or not you can compute the steps to take in polynomial time. What we have now is an exponential-time algorithm for finding those steps --- in this case, diagonalizing the Hamiltonian.
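That verify-fast/search-slow split is the defining feature of NP. A small sketch using subset sum as a stand-in (it's a classic NP-hard problem, not the actual Hamiltonian transformation, and the names here are my own):

```python
from itertools import combinations

def verify(nums, target, subset):
    """Checking a proposed solution takes polynomial time."""
    return sum(subset) == target and all(x in nums for x in subset)

def search(nums, target):
    """Finding a solution: the best known general approach tries
    subsets, which is exponential in len(nums)."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
sol = search(nums, 20)             # slow: exponential search
print(sol, verify(nums, 20, sol))  # fast: cheap verification
```

If P=NP, the slow half collapses and the search becomes cheap too; that's exactly the loophole I'm gesturing at above.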

What I'm saying is that, if this problem is NP-hard, then we don't know that the exponential relationship has to be there. It's possible that there's an algorithm that performs the simulation in polynomial (or even linear) time.

Now, between you and me, I would bet money that there is not such a thing, but I'm playing devil's advocate here!

Want to know something fun? If that exponential relationship does hold, then the size of the simulated universe that 'fits' in the 'initial' universe is proportional to the log of the host universe's size, and the size of the simulated universe that fits in *that* universe is proportional to the log of its host universe's size...

...which for all practical purposes means that you get *at most* 5 levels of simulation. Probably a lot less. Thinking about unbounded functions that nonetheless grow so slowly they might as well be constant is fun!
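That "at most 5" figure is the iterated logarithm in disguise. A quick sketch, assuming each simulated universe's size is the log of its host's, and using ~10^80 (a common estimate of the particle count of the observable universe) as a stand-in for the host's size:

```python
import math

def iter_log2(n):
    """Count how many times you can take log2 of n before it
    drops to 1 or below (the iterated logarithm, log*)."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# Nesting depth if each simulation is log-sized in its host:
print(iter_log2(10**80))  # -> 5
```

log* hits 5 here and wouldn't reach 6 until the host was around 2^65536, which is why the depth is capped so low for any physically plausible starting size.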

I think physics (and computer science) is a little more powerful than what you give it credit for. There is some (maybe not a lot, but definitely some) research into alternate physical laws that are plausible & consistent.

Complexity theory (the study of the P/NP/EXP/&c. thing) also talks about this in terms of 'oracles': what happens if we assume that we can solve any NP problem in one step (aka an 'NP oracle')? Well, it turns out that you get another level of problems that are 'NP-oracle'-hard. And if you have an 'NP-oracle'-oracle, well, then you get a class of ''NP-oracle'-oracle'-hard problems, which leads us to believe that our conception of computation really is a universal thing. Even if other universes exist with magical computation abilities, they'll still run into problems that are similar to the ones we have now.

Or at least that's how I understand this stuff.

You should most definitely check out https://software-carpentry.org/lessons/ -- it's targeted at non-CS people doing data science and is quite well written.

Also, to toot my own horn, I have written a course on this sort of stuff: http://web.mst.edu/~nmjxv3/cs1001/

Extra-fun fact: You can do C++ style for loops too!

`for ((i = 0; i < 10; i++)); do echo $i; done`

As much as I hate to play devil's advocate here, I don't think that paper backs up all the claims made by the Cosmos author. Here's why:

1. The result just shows that certain QMC problems take exponential time/resources to simulate.* This means that if we are living in a simulation, the universe simulating us would be exponentially larger than our universe, which does suggest that maybe Musk's argument that it's highly likely that we live in a simulation is flawed. But, it does not suggest that it's **impossible** for us to live in a simulation.

2. These results are only for **classical** algorithms. One of the big draws of quantum computing is, well, efficient simulation of quantum physics! So this paper doesn't eliminate the possibility we live in a simulation on a quantum computer.**

In short: simulation isn't **impossible**, and math/science has yet to determine that simulation **must** be difficult. Right now, we don't know an efficient simulation technique, but we also don't know that we can't find one.

* But is the problem itself in EXP, or is it perhaps NP-hard and it's just that the only known algorithms to solve the problem take exponential time? Some reading leads me to believe it's "merely" NP-hard, which would mean that efficient (read: polynomial-time) simulation on a classical machine is possible if P=NP.

** Going off the assumption this problem is NP-hard, efficient polynomial-time quantum simulation may not be a foregone conclusion. On the other hand, if efficient simulation implies that we are being simulated on a quantum machine, what does that say about the "host" universe's physics?

I wish I had speaker notes or something from the talk because I hit #1 and #3 in person (and showed off ^r history search too). I didn't know about #2, though! And I'll add that scripting guide to the book as well.

I gave a talk on Bash yesterday; you can see the slides if you're interested.

Figured out that it'll take 3-4 weeks to print my lab book, which effectively means it needs to be ready to print by the last week of November, so that's where my Thanksgiving break is going to go...