The article tries to paint public health authorities as negatively as possible. I can't say I blame them for not wanting to invest much in a vaccine with at best 60% efficacy; I'm sure a lot of money has been spent on more promising efforts that fizzled completely. We certainly aren't going to eradicate cholera with this vaccine either. Still, all in all it's great news. Let's keep this graph going down; with a vaccine, outbreaks will be less common and less severe.
Infinite Jest. A bit too much Infinite, not enough Jest. To this day the only novel I've read with so many footnotes. When I hit a footnote with its own footnote I called it quits.
How many existential risks are competing for our attention, brainpower, and funding? Let's brainstorm.

* Asteroid impact

* Solar flare

* Epidemic of an infectious pathogen

* Climate change

* Artificial intelligence

* Nuclear war

That's all I got, and yes, I think we should prepare for all of them.
Not much of an antidote; much of this is a straw-man argument. Plus a lot of it is just shitting on the kind of people who are interested in AI-risk, or saying that because certain logical arguments are uncomfortable they must be wrong.

I'm fairly certain most AI-risk advocates agree the risk is fairly small, but small like 1%. Plus if/when it does become a problem, it's likely to become a problem really fast, so it makes sense to be prepared. We've had decades to do something about climate change and we haven't, which will be catastrophic; making the same mistake again could be species-ending.

>But if you’re committed to the idea of superintelligence, AI research is the most important thing you could do on the planet right now. It’s more important than politics, malaria, starving children, war, global warming, anything you can think of.

There are already a few billion dollars being spent globally on malaria, and I'm guessing a similar amount on food aid. A much, much smaller amount is spent on AI research. MIRI spent about $1.8 million last year (source); I'm guessing the worldwide total isn't more than $10 million. I'm not saying those numbers should be equal, but it may be that $10 million isn't enough.

Or more to the point: a math/programming nerd who wants to help the world and hasn't picked a career path is probably better off looking into AI research than the other problems listed. The reason is that some problems lend themselves to just throwing more money at them; last I knew, $5 at the Against Malaria Foundation bought a bed-net. So if you just want to throw money at a problem, the AMF is your best choice (note: I regularly donate to AMF among other charities). We have several drugs which treat malaria, and bed-nets are effective at prevention; we also know how to prevent and treat starvation (give the person food). These are problems constrained by resources and will, not by knowledge.

tl;dr Author is exaggerating, though I agree with many of the conclusions.

edit: So apparently OpenAI has a $1 billion endowment, so my guess is that the cause has enough money right now. Also I think they have an image problem, so if anybody here is good at marketing and also cares about AI research, maybe you could help them out with that.
>If you’ve ever declared that you just couldn’t date someone whose favorite music artist was Taylor Swift Lazy-ass writer didn't even bother to fact-check. T Swift isn't on Spotify and I'm super salty about it.
Can't they switch to Adrenaclick? It's basically the same thing.
We guess. Or use domain-specific knowledge, if available. Imagine playing 3-card monte with your friend (who you know has no history of doing card tricks, and has possibly learned something new in the last week, but nothing major) versus somebody on the street. You'd assign different probabilities of winning in each case, right?

Say I flip a coin once and you have to guess the face. You ask if the coin is fair. I answer "Unknown". One could assume that it's probably fair, and that if it's unfair it's equally likely to be unfair in either direction, in which case it's 0.5 each (for the first flip only).

b_b mentioned Bayesian inference; that's a way to include prior knowledge. But of course people with different prior assumptions will get different answers. So it goes.
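If you want to see what that prior-dependence looks like concretely, here's a minimal sketch in Python. The three-point prior over coin biases is purely illustrative (my assumption, not anyone's actual model of coins), but it shows both claims above: a symmetric prior still gives 0.5 on the first flip, and the posterior after one flip is no longer symmetric.

```python
biases = [0.3, 0.5, 0.7]     # hypothetical values of P(heads), assumed for illustration
prior  = [0.25, 0.50, 0.25]  # symmetric prior: probably fair

# Before any flips, P(heads) averages over the prior:
p_heads = sum(p * w for p, w in zip(biases, prior))
print(p_heads)  # 0.5, so a symmetric prior still gives 0.5 on the first flip

# After seeing one head, update with Bayes' rule:
unnorm    = [p * w for p, w in zip(biases, prior)]   # likelihood * prior
posterior = [u / sum(unnorm) for u in unnorm]
print(posterior)                                     # [0.15, 0.5, 0.35]

# The next flip is no longer 50/50; one head is weak evidence of a heads bias:
print(sum(p * w for p, w in zip(biases, posterior))) # 0.54
```

Start from a different prior and the 0.54 changes; that's the "different prior assumptions, different answers" point in code.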
This is sort of right, but there are some underlying assumptions it glosses over.

Situation (a): Take a set of parents who all have 2 children. Select the group in which the youngest is a boy. What fraction of that group has two boys? 1/2, as stated in your first situation.

Situation (b): Take a set of parents who all have 2 children. Select the group in which at least 1 is a boy and send the others home. What fraction of that group has two boys? 1/3, as stated in your second situation.

Situation (c): Take a set of parents who all have 2 children. Select the group in which the youngest is a boy and send the others home. Tell the stranger "one of them is a boy". What fraction of that group has two boys? The GGs and BGs left, the BBs and GBs stayed, so it's 1/2, same as situation (a).

Situation (d): Take a set of parents who all have 2 children. Have them pick one of their kids at random. If the kid they picked is a girl, send them home. Keep the remainder and tell the stranger of these parents' children, "one of them is a boy" (which is true). What fraction of that group has two boys? 1/2, same as (a). The GGs all left, the BBs all stayed, and half the BGs and half the GBs stayed, so the pool is equally divided between two boys and one boy/one girl.

In situations (c) and (d) we generated the samples in different ways, but phrased the statement in accordance with situation (b). Misleading maybe, but not technically wrong; the manner in which the children were selected guaranteed that at least 1 would be a boy.

The normal intuition (at least my intuition) about the "one of them is a boy" statement is that the parent picked one of their children at random and then stated its sex (the first part of situation (d)). Were that the case, the probability of the other child being a boy would be 1/2.

What this comes down to is that the sample-generating process matters, and the language describing a problem contains clues, but they may not be fully fleshed out. This is known as the "Boy or Girl Paradox"; see Wikipedia for more details.
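If you'd rather not trust the counting, here's a quick Monte Carlo sketch of situations (a), (b), and (d). The selection rules follow the descriptions above; the sample size and function names are just mine. (c) selects exactly the same families as (a), so it's omitted.

```python
import random

def families(n=100_000):
    # Each family is (eldest, youngest), each child 'B' or 'G' with equal odds
    return [(random.choice('BG'), random.choice('BG')) for _ in range(n)]

def frac_two_boys(kept):
    return sum(1 for f in kept if f == ('B', 'B')) / len(kept)

fams = families()

# (a) keep families whose youngest child is a boy -> expect ~1/2
print('(a)', frac_two_boys([f for f in fams if f[1] == 'B']))

# (b) keep families with at least one boy -> expect ~1/3
print('(b)', frac_two_boys([f for f in fams if 'B' in f]))

# (d) each parent names one child at random; keep them only if it was a boy.
#     "One of them is a boy" is true of every family kept, yet -> expect ~1/2
print('(d)', frac_two_boys([f for f in fams if random.choice(f) == 'B']))
```

(b) and (d) hear the exact same sentence from the parent but land on different answers, which is the whole point: the sampling process, not the sentence, sets the probability.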
This would all be true unless you happen to make an enemy at the FBI, CIA, or NSA. That's mostly a problem for politicians, so you'd think they'd be more concerned about domestic surveillance. Then again, it's also a problem for political activists: John or Jane Q. Activist starts to get some traction for a cause the government is very much against, so now they drill into this person's communications and find something to charge them with. There's always something. Which is a long-winded way of saying that it may not be a personal problem, but it is a political problem, because it's hard to make political change when anybody who tries gets sent to prison.