I doubt he would have said it if he didn't know 100% dead-to-rights that he wouldn't have to put his money where his mouth is. I think you also need to consider that fascists don't want your facts, they want that outrage and in-group unity injected directly into their veins. "I am with the mob" is exactly what you say when saying anything else will cause you blowback. So is "not my mob." It isn't about having a coherent policy, it's about weather-vaning.
Interesting? Yes. Constitutional crisis? No. It's like Disneyville or whatever - can DeSantis take it away from Disney? Absolutely. Can he do it without looking like a clown? Yes: to the fact-immune fascists for whom Trump is the god-emperor of Dune, this is exactly the kind of persecution they're looking for. Fortunately for the world, their number is dwindling... and this is the sort of thing that cuts their numbers. Trump is setting himself up as the guy whose principal claim to fame is an utter disregard for the rule of law, while DeSantis is setting himself up as the guy who wants minute control over every school board but doesn't think the federal government has the right to enforce arrest warrants. I think we can agree that they're courting corner-case appeal. And yes - I agree. For the people who are already true believers, none of that will matter. But there haven't been enough of them for three general elections in a row.
Naah. Faubus refused to desegregate Little Rock's schools so Eisenhower federalized the Arkansas National Guard and sent in the 101st Airborne. https://en.wikipedia.org/wiki/Little_Rock_Nine?wprov=sfti1 70% of Americans don't want to see Trump re-elected and 100% of Ron DeSantises don't want to appear impotent before the federal government.
It should follow, then, that Seattle multifamily architecture should look more like European architecture. It does not. It looks exactly like California architecture, like Arizona architecture, like Chicago architecture.

Point Access Blocks are already allowed under the International Building Code, and the Washington State Building Code allows them up to 3 stories above grade. However, Seattle modified its version of the code, allowing Point Access Blocks up to 6 stories (previously there was no height limit), and has permitted them for nearly 50 years. In order to do this, planners must meet numerous conditions including sprinklers, limitations on travel distance, fire-rated doors and assemblies, and corridors separating units and stairways.
Tech bubble was absolutely pump and dump. It was the first example of VC money going bat shit. I had a half-dozen friends whose lifestyles were entirely subsidized by insane valuations and the minute reality set in they wandered around like broken gold panners. Around here they were referred to as "'99ers" in reference to the '49ers of gold panning lore. Housing bubble was definitely risk displacement, not pump and dump. I think the broader point is that the NFT misadventure has much better metrics, therefore what can we work back from the metrics. Maybe nothing? But fuck, everyone just automatically assumes Goldman Sachs knows what AI is going to do about employment despite the fact that the only time GS has ever been right about anything is when they're trading on insider information. And I think that's the main point: we have anecdotal evidence of tulip mania, mostly from the British who were making fun of it. We have anecdotal evidence of the Spanish Inquisition, mostly from the British who were making fun of it. We have anecdotal evidence of the horrors of the Persian Empire, mostly from Herodotus who was making fun of it. NFTs? We have receipts. Where it touched real money isn't really the point - it's that we have exquisite data, what can we scry from it to apply to stuff that isn't weird corner case jpegs.

Question is did anyone actually ever pay like a million dollars for an NFT or was it a million crypto-equivalent dollars that the buyer probably paid some relatively small amount for years earlier? Part of me thinks that there has never been a better case of a fool and his money being parted than a million dollar gif.
This is some of the most thunderously ignorant writing I have read. Zero to Godwin in 293 words. Worthy of note: the Chinese, the Hindus and the Turks all discovered block printing before the Europeans, but didn't use it because their societies didn't need it. Block printing took off in Europe because (A) there weren't enough scribes to go around thanks to the black death (B) there was a violent cultural revolution overthrowing the predominant religion and social hierarchy. So it's not a great place to go "Gutenberg therefore Hitler" even if you disregard the five centuries betwixt the two. "Yeah so I'm going to use your labor-saving device to overthrow God as we understand him." Pretty sure they had a real idea. Best guess is about a year elapsed between Gutenberg printing indulgences and Gutenberg printing a bible in Latin. Not a one of those oil barons who practiced ruthless and cutthroat business practices had a clue why they were doing it, they found it eerily coincidental that they got rich, though! This is the exact opposite of what we usually say about inventors. When we use the term "visionaries" it's not because they're clueless. ...so we're done with the printing press? We've moved on? Who's going to tell the publishing industry? Adjusted Gross Income? Oh, you mean artificial general intelligence. Maybe you should link to that. Or at least define it. Although you did just spend several hundred words on "gutenberg" without getting to "bible." navel gazing intensifies ______________________________ This whole argument is a vampires vs. zombies discussion - which fictional threat do you fear more? I guess it's fun to engage with? I guess it freaks everyone out that the computers none of us had when we were kids are about to have better UI? I guess people can't imagine a chatbot that doesn't suck, therefore anything that talks must be alive?
One of my favorite things about the pundit-sphere is the same people quick to point out what a shitshow Southwest Airlines is because of their ancient shitty computers think Sam Altman is building Skynet in his basement so he can crash all the airlines or some shit. It's like none of them have ever changed a printer cartridge.

In other words, virtually all of us have been living in a bubble “outside of history.”
I am reminded of the advent of the printing press, after Gutenberg. Of course the press brought an immense amount of good, enabling the scientific and industrial revolutions, among many other benefits. But it also created writings by Lenin, Hitler, and Mao’s Red Book.
The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring.
No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring.
No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly).
How well did people predict the final impacts of the printing press?
So when people predict a high degree of existential risk from AGI
Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns. (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.)
I played Cyberpunk 2077 on a basic-bitch Gen1 PS4 with no issues. It dropped fewer frames and froze less than Borderlands 3. It's my opinion that Cyberpunk was just the punching bag the gaming press needed in the moment and there was no possible way that game wasn't going to get pilloried. Plus, the gaming press sucks balls. I enjoyed Outer Wilds. Couldn't quite care enough to get through the expansion. I really wanted that game to be bigger. I love that it was created by Hiro from Heroes. Horizon Forbidden West is fucking great. The gaming press hates it because they pushed hard into representation, which got sand in the Gamergate Posse's collective vagina. I should revisit it at some point; there's still some housekeeping I need to finish. HZD was better, though. HZD was the game that made me go "I'm gonna go look at that turtle farm out on the edge of the map even though it's 15 minutes of walking" and then I just stared at 'em as the sun went down.
A Roomba traverses the world gathering IRL data. The difference is the Roomba can incorporate that data, the LLM can't - the basic makeup of "large language models" is "giant dataset" + "careful, empirical training" = "pantomime of intelligence." The response of the model must be carefully, iteratively tuned by technicians to get the desired results. It's that iterative tuning that makes it work, and it is exquisitely sensitive to context.
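To give a feel for what "iterative tuning" means, here's a toy sketch of one of the simplest knobs, sampling temperature. Everything here is invented for illustration - made-up scores, not any real model's internals - but the point survives: the "knowledge" is frozen, and the behavior is whatever the technicians dial in.

```python
import math
import random

# Toy illustration of "iterative tuning": a frozen set of made-up scores
# pushed through a temperature knob. The scores never change; only the
# knob does, and the output behavior changes completely.
# (Hypothetical numbers -- not any real model's internals.)
def sample(scores, temperature, seed=0):
    random.seed(seed)
    weights = [math.exp(s / temperature) for s in scores.values()]
    r = random.random() * sum(weights)
    for token, w in zip(scores, weights):
        r -= w
        if r <= 0:
            return token
    return token  # guard against floating-point rounding

scores = {"yes": 2.0, "no": 1.0, "maybe": 0.5}
print(sample(scores, temperature=0.1))  # low temperature: all but deterministic
print(sample(scores, temperature=5.0))  # high temperature: nearly uniform
```

No new facts enter the system when the knob moves; the same table of numbers just gets read out differently, which is why the tuning is empirical rather than principled.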
Let's stop down and discuss your rhetorical strategy real quick, then. _______________________________ I know what I'm talking about. You're welcome to disagree; regardless of your opinion of my knowledge, you have to acknowledge that I think I know what I'm talking about, and that should inform your conversation with me. On the other hand, you don't know what you're talking about. This isn't my observation, this is your profession: against my knowledge, you have presented your naivete. But more than that, you imply our positions are equal: my dozen books' worth of casual reading on the subject of artificial intelligence has no more weight than your choice to presume it's unknowable. Could I stand to be less combative? Always. Are you baiting me into combat? Irrefutably. You are disregarding my knowledge, you are discounting my experience, you are putting forth the maxim that what I know is worth nothing, since you know nothing and we're on equal footing here. You should also know that I was trained to survive depositions. As a hired, professional expert, it was not uncommon for someone in my position to be made to look like an idiot in front of a jury. This is easier than one might think because you don't need to mock and ridicule expertise, you only need to mock and ridicule the expert. You comment on how he tied his tie. You ask him about his suit. You point out the non-existent fleck of breakfast on his lapel. You read a section of his report and ask if he deliberately left out the apostrophe; you read another section and painstakingly work through a complicated phrase to mutually dumb it down into the simplest possible terms, then you ask him in front of the jury why he used such complicated words if the simplest terms are the equivalent. These are the strategies the uneducated use to discredit the educated. 
It's archetypal Reddit-speak; you spend two minutes googling something that you don't understand and don't want to and then you dangle it in front of the person who actually knows what he's talking about and say "somewhere in here is something that disagrees with what you said." And since the audience is also made up of people with no expertise who always want to feel smarter, you'll get the upvotes you need to score the points to "win" the debate. Here's the problem. The expert knows you're wrong. He's never stopped being an expert. And you're not debating him. You're not attempting to learn anything from him. You're trying to score useless internet points off an unseen audience of equally uneducated individuals because the actual experts left the forum long, long, long ago. _________________________ Fundamentally? I have nothing to learn from you. Theoretically? You're acting like you think you might have something to learn from me. Yet you're coming at me from a position of innocence, you aren't reading your own sources closely enough to understand them and you're putting forth the fundamental argument that since you (think) you scored a rhetorical papercut or two I'm going to go "gee, you're right, this is all fundamentally unknowable". Why would I do that? Ultimately, you're arguing that something I understand innately - ME - should not be considered superior to something else that I understand through effort and study - chatbots - because since you don't understand it, there's no way I can. Here, watch: That's quite clear. You, personally, don't think that I, personally, can reasonably claim ChatGPT has nothing in common with a brain. I'm five comments deep in responses to your hypothesis but you can't let go of what you think having equal merit to what can be known. 
And whether or not I'll be kind to your dog is not all that really counts: what counts is you wish to accord rights to a computer program without bothering to understand why experts think that's a bad idea.

Genuine apologies for being combative. I got a little worked up at the insinuation that I am somehow trying to dispose of ethics. I do think that you could stand to be less combative yourself; even if you think I have absolutely nothing to offer you, you must think there's some value to having this discussion if you've gone this long, and I have to choose to continue as well.
ChatGPT is not a brain, but I don't think you can reasonably claim it has nothing in common with one.
"unknown" does not equal "complex." "complex" does not equal "known." What we know of "brains", regardless of its complexity, in no way parallels Markov chains. What we know of LLMs, on the other hand, is entirely Markov chains. The Venn diagram of "unknown" and "complex" is two separate circles; the Venn diagram of "known" and "LLMs" is one circle. Not saying it does. Saying you can't draw any parallels between the two because they have nothing in common. I've said that three times three different ways. Your link says "neural net" twice. It refers to lookup tables for the rest of its length. Here's IBM to clarify. That Wolfram article you didn't read lists "neural" 170 times, by way of comparison - but it actually explains how they work, and how the "neural network" of large language models is a LUT. I've been presuming you're discussing this in good faith because you said you were. You're making me doubt the veracity of that statement because no matter how many times I point out that this isn't about "beliefs" you keep using yours to underpin your logic. We were talking about Atlas because thenewgreen brought up Atlas. Before you joined the conversation I pointed out that the model does not learn once it has been trained. You keep skipping over this. And speaking as a human in a world full of humans, I'm uninterested in a new set of ethics that does not prioritize humans. Would you like to try again in a non-combative way? Because I can have this conversation ad nauseam. But I have to choose to do so.

I don't understand how this isn't going back on your earlier claim that complexity doesn't equal intelligence?
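The "lookup table" claim can be made concrete with a toy word-level Markov chain. This is a sketch of my own for illustration, not the internals of any particular LLM: every generated word is literally a lookup of what followed the previous word in the training corpus, and the moment the chain falls off the table it has nothing to say.

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: the "lookup table" in miniature. Each
# generated word is a lookup of what followed the previous word in the
# training corpus; off the table, generation simply stops.
# (An illustrative sketch, not the internals of any real LLM.)
def build_table(corpus):
    table = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, n=5, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:   # never seen in training: dead stop, no invention
            break
        out.append(random.choice(followers))
    return " ".join(out)

table = build_table("the cat sat on the mat the cat ran")
print(generate(table, "the"))  # every adjacent word pair occurs in the corpus
```

Scaling this up - longer context windows, learned weights instead of raw counts - changes the sophistication of the lookup, not the fact that it is one.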
Our lack of understanding of how the brain works doesn't provide any evidence of it possessing some supernatural faculty that a computer couldn't (with currently non-existent but feasible tech) replicate.
And your fourth claim here is demonstrably false: even ChatGPT is based on a neural net.
Yeah but that's an argument for 'mind' not an argument for 'life.' You're arguing there's no 'mind' and that what we consider 'mind' is a gradient. 'life', on the other hand, is a clear bright line: if you have a metabolism, if you respond to stimulus, if you reproduce, if you can do all of the above without the presence of a host, you are alive. The nerds are all about "life" doesn't matter because "mind" is whatever we say it is.
Kahneman and Tversky did batteries of studies to quantify that eighty percent of communication is nonverbal. I mean, you either acknowledge that other experts have the data you need or you present yourself as the ultimate authority on everything.

he allowed, humans do express emotions with their faces and communicate through things like head tilts, but the added information is “marginal.”
That's not accurate, though. If you take an "equally reductionist view" of the human mind you are deliberately misunderstanding the human mind. Reducing Atlas (or GPT) to a set of instructions isn't reductionist, it's precise. It's literally exactly how these machines work. It's not a model, it's not an approximation, it's not an analogy, it's the literal, complete truth. To be clear: we do not have the understanding of cognizance or biological "thought" necessary to reduce its function to this level. Not by a country mile. More than that, the computational approach we thought most closely matched "thinking" - neural networks - does not work for LLMs. To summarize:

- We only have a rough idea how brains think
- We have an explicit and complete idea how LLMs work
- What we do know does not match how LLMs work
- Attempting to run an LLM the way the brain works fails

And all of that is hand-wavey poorly-understood broad-strokes "here are our theories" territory. We know, for example, that there's a hierarchy to sensory input and autonomous physiological response - smell is deeper in the brain than sight, sound, touch or taste and has greater effects on recall. Why? Because we evolved smell before we evolved binocular color vision or stereoscopic hearing and the closer to survival, the more reptilian our thought processes go. This hasn't been evolutionarily advantageous for several thousand generations and yet here we are - with the rest of our senses and thought processes compensating in various ways that we barely understand. Atlas really is as simple as a bunch of code. I say that having helped set up an industrial robot that uses many of the same parts as Boston Dynamics does. It speaks the same code as my CNC mill. It goes step by step through "do I measure something" or "do I move something."
GPT-whatever is the same: "if I see this group of data, I follow up with this group of data, as modified by this coefficient that gets tweaked black-box style depending on what results are desired." But don't take my word for it: Manning is wrong, and is covering up his wrongness by saying "what's meaning anyway, maaaaaan?" Arguing that kids figure out language in a self-supervised way has been wrong since Piaget. Okay, where do you draw the line? 'cuz the line has to be drawn. You can't go through your day without encountering dozens or hundreds of antibacterial products or processes, for example. I don't care how vegan you are, you kill millions of living organisms every time you breathe. Bacteria are irreducibly alive: they respond to stimulus, they consume energy, they reproduce. The act of making more bacteria is effortful. ChatGPT? I can reproduce that endlessly. The cost of 2 ChatGPTs is the same as the cost of 1 ChatGPT. Neither can exist without an exquisite host custom-curated by me. it won't reproduce, generations of ChatGPT won't evolve, there is no "living" ChatGPT to distinguish it from a "dead" ChatGPT. Why does ChatGPT deserve more rights than bacteria? I find that ethical individuals have the ability to query their ethics, and unethical individuals have the ability to query the existence of ethics. Put it this way: I can explain why humans occupy a higher ethical value than bacteria. I have explained why bacteria occupies a higher ethical value than ChatGPT. But I also don't have to explain this stuff to most people. Look: golems are cautionary tales about humans making not-human things that look and act human and fuck us all up. Golems are also not the earliest example of this: making non-humans that act human is in the Epic of Gilgamesh. It's the origin story of Abrahamic religion: god breathed life into Adam and Eve while Satan just is. This is a basic dividing line in ethics: people who know what people are, and people who don't. 
This isn't a "me" question. Yeah and taking an astrological view of the solar system allows for the position of Mars to influence my luck. That doesn't make astrology factual, accurate or useful. Here's the big note: How "you feel" and how "things work" are not automatically aligned. If you are wrong you will not be able to triangulate your way to right. And this entire discussion is people going "yeah, but I don't like those facts." The facts really don't care.

I guess what I'm getting at is the idea that one could take an equally reductionist view of the human mind, it's just that our brains are optimized beyond the point of being possible to interpret.
We see, we hear, we feel, we smell, and all that information is plugged into a complex logical system, along with our memories, categories, and any evolved instincts to dictate our actions.
What about Christopher Manning, the other computational linguist mentioned in this article?
The central disconnect that I'm interested in learning more about is the idea of humans as exceptional in deserving of our respect and compassion.
I'm gathering that you agree with Bender in wanting to posit humanity as an axiom.
kind of take a panpsychist view on consciousness, and fundamentally what I'm arguing is that that perspective allows for manmade constructs like computer programs to attain some degree of consciousness.
The context is "box jump." You can google that. It's a cross-training thing. The way humans codify "jump" is "high jump" "long jump" "rope jump" etc. "jump" is the act of using your legs to leave the ground temporarily. We use it as a noun ("he didn't make the jump"), a verb ("he jumped rope"), as a metaphor ("that's quite a logical chasm to jump"). Logically and semantically, we parse the actions performed by the robot in terms of discrete tasks. Those tasks are assigned names. Those names have contexts. The robot, on the other hand, "thinks" G1 X51.2914 Y54.8196 Z0.416667 E0.06824 F500 G1 X52.4896 Y54.3121 Z0.433333 E0.0681 F500 G1 X53.5134 Y53.5134 Z0.45 E0.06795 F500 G1 X54.294 Y52.4792 Z0.466667 E0.06781 F500 G1 X54.7793 Y51.2806 Z0.483333 E0.06767 F500 G1 X54.9375 Y50 Z0.5 E0.06753 F500 G1 X54.7592 Y48.7248 Z0.516667 E0.06738 F500 G1 X54.258 Y47.5417 Z0.533333 E0.06724 F500 G1 X53.4692 Y46.5308 Z0.55 E0.0671 F500 G1 X52.4479 Y45.7601 Z0.566667 E0.06696 F500 G1 X51.2644 Y45.281 Z0.583333 E0.06681 F500 G1 X50 Y45.125 Z0.6 E0.06667 F500 G1 X48.741 Y45.3012 Z0.616667 E0.06653 F500 G1 X47.5729 Y45.7962 Z0.633333 E0.06639 F500 G1 X46.575 Y46.575 Z0.65 E0.06625 F500 G1 X45.8142 Y47.5833 Z0.666667 E0.0661 F500 G1 X45.3414 Y48.7517 Z0.683333 E0.06596 F500 G1 X45.1875 Y50 Z0.7 E0.06582 F500 G1 X45.3615 Y51.2429 Z0.716667 E0.06568 F500 G1 X45.8503 Y52.3958 Z0.733333 E0.06553 F500 G1 X46.6191 Y53.3809 Z0.75 E0.06539 F500 G1 X47.6146 Y54.1317 Z0.766667 E0.06525 F500 G1 X48.7679 Y54.5982 Z0.783333 E0.06511 F500 G1 X50 Y54.75 Z0.8 E0.06496 F500 G1 X51.2267 Y54.5781 Z0.816667 E0.06482 F500 G1 X52.3646 Y54.0956 Z0.833333 E0.06468 F500 G1 X53.3367 Y53.3367 Z0.85 E0.06454 F500 G1 X54.0775 Y52.3542 Z0.866667 E0.06439 F500 G1 X54.5378 Y51.2159 Z0.883333 E0.06425 F500 G1 X54.6875 Y50 Z0.9 E0.06411 F500 G1 X54.5177 Y48.7895 Z0.916667 E0.06397 F500 G1 X54.0415 Y47.6667 Z0.933333 E0.06382 F500 G1 X53.2925 Y46.7075 Z0.95 E0.06368 F500 G1 X52.3229 Y45.9766 Z0.966667 E0.06354 F500 G1 
X51.1997 Y45.5225 Z0.983333 E0.0634 F500 G1 X50 Y45.375 Z1 E0.06325 F500 G1 X48.8057 Y45.5427 Z1.01667 E0.06311 F500 G1 X47.6979 Y46.0127 Z1.03333 E0.06297 F500 G1 X46.7517 Y46.7517 Z1.05 E0.06283 F500 G1 X46.0307 Y47.7083 Z1.06667 E0.06268 F500 G1 X45.5829 Y48.8164 Z1.08333 E0.06254 F500 G1 X45.4375 Y50 Z1.1 E0.0624 F500 G1 X45.603 Y51.1782 Z1.11667 E0.06226 F500 G1 X46.0668 Y52.2708 Z1.13333 E0.06211 F500 G1 X46.7959 Y53.2041 Z1.15 E0.06197 F500 G1 X47.7396 Y53.9152 Z1.16667 E0.06183 F500 G1 X48.8326 Y54.3567 Z1.18333 E0.06169 F500 G1 X50 Y54.5 Z1.2 E0.06154 F500 G1 X51.162 Y54.3366 Z1.21667 E0.0614 F500 G1 X52.2396 Y53.8791 Z1.23333 E0.06126 F500 G1 X53.1599 Y53.1599 Z1.25 E0.06112 F500

I don't. The difference is, I work with g-code. I work with servos. I work with stepper motors. I work with sensors. I work with feedback. I work with the building blocks that allow Boston Dynamics to do its magic - and it's not magic, and it certainly isn't thought. It's just code. I know enough to know I don't know much about artificial intelligence. But I also know more than most any counter-party at this point for the simple fact that I understand machines. Most people don't, most people don't want to. What you're doing here is taking my arguments, disregarding them because you don't understand them, and operating from the assumption that nobody else does, either. And here's the thing: this stuff isn't hard to learn. It's not hidden knowledge. There's nothing esoteric about it. Every researcher in machine intelligence will tell you that machines aren't intelligent, and every chin-stroking pseudointellectual will go "but enough about your facts, let's talk about my feelings." And that is exactly the trap that everyone of any knowledge in the situation has been screaming about since Joseph Weizenbaum argued ELIZA wasn't alive in 1964. It's a fishing lure. It's about the same size as a caddis fly, it's about the same shape, and it shimmers in a similar fashion?
But it's never ever ever going to do more than catch fish, and woe be unto you if you are a trout rather than an angler. No, we don't. We have a semantic language that contains logical and contextual agreement about the concept of "ramp." It breaks down, too - is a hot dog a sandwich? Thing is, our days don't come apart when we disagree about the characterization of "sandwich." We correct our kid when they call raccoons "kitties" because the raccoon is a lot more likely to bite them if they try and pet it - if it's a pet raccoon we're less likely to do that, too. When Boo calls Sully "Kitty" in Monsters Inc we laugh because he's obviously not a kitty, but Boo's use of the descriptor "kitty" gives us Boo's context of Sully, which is as a cute fuzzy thing to love, as opposed to a scary monster. More than that, as humans we have inborn, evolved responses that we're just starting to understand - the "code" can be modified by the environment but the starter set contains a whole bunch of modules that we rely on without knowing it. Atlas sees position, velocity and force. That's it. Then it wouldn't be an LLM. It's a look-up table. That's all it is. That's all it can be. You give it a set of points in space, it will synthesize any interpolated point in between its known points. It has no "categories." It is not a classifier. It cannot be a classifier. If you make it operate as a classifier it will break down entirely. It will never hand a squashed grasshopper to Ally Sheedy and say "reassemble, Stephanie". What you're asking for is intuitive leaps, and the code can not will not is not designed for is not capable of doing that. Because literally everyone who works with markov chains says so? You're commenting on an article that spends several thousand words exactly answering this question. 
That's literally what it's about: researchers saying "it can't do this" and the general public going "but it sure looks like it does, clearly you're wrong because I want this to be true." None of this "in my mind" nonsense - this isn't my opinion, or anyone else's opinion. This is the basis by which LLMs work: they don't think, they don't feel, they don't intuit. They synthesize between data points within a known corpus of data. That data is utterly bereft of the contextualization that makes up the whole of human interaction. There's no breathing life into it, there's no reaching out and touching David's finger, there's no "and then a miracle occurs." The difference between Cleverbot and GPT-4 is the difference between Tic Tac Toe and Chess - the rules are more sophisticated, the board is larger, it's all just data and rules. A chess board is six types of pieces on 64 squares. It does not think. But suppose it was suddenly six squidillion types of pieces on 64 gazillion squares - would it think? Complexity does not equal intelligence. It never will.

I agree with a lot of what you're saying, but wonder about the claim that we think "ramp, ramp, box" watching this video.
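For the curious, those G1 lines quoted above decompose mechanically. Here's roughly what a motion controller extracts from one of them - a command word and a handful of axis numbers, no "ramp," no "box." This is a simplified sketch of my own, not Boston Dynamics' or any real firmware's parser (real ones also handle modal state, comments, checksums, arcs, and so on).

```python
# What a motion controller "sees" in a single G-code line: a command word
# and axis numbers. Nothing semantic survives the parse.
# (Simplified sketch; real firmware parsers do considerably more.)
def parse_gcode_line(line):
    words = line.split()
    cmd = words[0]  # e.g. "G1" = linear move
    params = {w[0]: float(w[1:]) for w in words[1:]}
    return cmd, params

cmd, params = parse_gcode_line("G1 X51.2914 Y54.8196 Z0.416667 E0.06824 F500")
# cmd is "G1"; params maps axis letters to targets, e.g. params["X"] == 51.2914
```

Everything downstream of this - servo setpoints, feedback loops - operates on those five numbers. The word "ramp" exists only in the heads of the humans watching.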
G1 X50 Y55 Z0.4 E0.06838 F500
I think that discussions about what qualifies as artificial intelligence benefit from more careful consideration of what we consider to be our own intelligence.
We have a heuristic for ramp developed over a similar feedback loop, like gently correcting your kid when they call a raccoon "kitty".
What if an extension of LLMs was developed that made use of long term memory and the ability to generate new categories?
Why do you think that they'll never be able to work that way?
And moreover, what is the model trying and failing to replicate in your mind?
Not in the slightest. You watch that and you think "ramp, ramp, ramp, ramp, box, leap onto platform, leap onto ground", etc. Atlas goes "lidar pulse, spatial data, G-code, lidar pulse, spatial data, G-code, lidar pulse, spatial data, G-code" and that's just the first few billionths of a second because Atlas is a pile of NVidia Jetson components running at around 2.2 GHz. THERE IS NO PART OF ATLAS' PROGRAMMING THAT UNDERSTANDS "RAMP." NOTHING Boston Dynamics does has any contextualization in it whatsoever. It has no concept of "ramp" or "table" because there is absolutely no aspect of Boston Dynamics' programming that requires or benefits from semantics.

Now - the interface we use to speak to it? It's got semantics, I'm certain of it. But that's for our convenience, not the device's. You say "something like this" but you don't really mean that. You mean "something shaped like me." You're going "it's shaped like me, therefore it must be like me" because anthropomorphism is what we do. Study after study after study, humans can't tell the difference between an anthropomorphic mannequin responding to a random number generator vs. an anthropomorphic mannequin responding to their faces. Riddle me this - would you have asked "wouldn't this change once the AI is embedded in something like this?" if "something like this" were an airsoft gun hooked up to a motion detector? It's the same programming. It's the same feedback loops. One is more complex than the other, that's all.

More importantly, LLMs such as ChatGPT deliberately operate without context. Turns out context slows them down. It's all pattern recognition - "do this when you see lots of white" is hella faster than "lots of white equals snow, do this when you see snow". Do you get that? There's no snow. Snow doesn't exist. Snow is a concept us humans have made up for our own convenience as far as the LLMs are concerned, the only thing that matters is adherence to the pattern. All models are wrong; some are useful.
- George Box

The object of the article is to determine the usefulness of the model on the basis that all models are wrong. Your argument is "won't the models get less wrong?" No. They never will. That's not how they work. If you try to make them work that way, they break. The model will get better at fitting curves where it has lots of data, and exactly no better where it doesn't have data, and "context" is data that it will never, ever have.

Lemoine paused and, like a good guy, said, “Sorry if this is getting triggering.” I said it was okay. He said, “What happens when the doll says no? Is that rape?” I said, “What happens when the doll says no, and it’s not rape, and you get used to that?”

When we have this discussion about video games, it's just good clean masculine fun. The Effective Altruists among us can go "it's just a video game" because we can all see it's just a video game. But since the Effective Altruists among us don't actually believe all humans are worth being treated as human, they go "but see look if it's human-like enough to trigger your pareidolia, then obviously machines should have more rights than black people." You're effectively asking "but once we're all fooled into thinking it's alive, shouldn't we treat it as if it were?" And that's exactly the point the article is arguing against.

A few minutes into our conversation, he reminded me that not long ago I would not have been considered a full person. “As recently as 50 years ago, you couldn’t have opened a bank account without your husband signing,” he said. Then he proposed a thought experiment: “Let’s say you have a life-size RealDoll in the shape of Carrie Fisher.” To clarify, a RealDoll is a sex doll. “It’s technologically trivial to insert a chatbot. Just put this inside of that.”
My uncle killed himself on Christmas day just to fuck his parents over. He wanted them to know it was their fault. That's pretty much how most suicides act once they've made their decision and made peace with it. Both my uncle and my grandmother did.

I don't know the first thing about Jeff Thomas. I'm pretty sure, however, that if part of your vision is changing Peter Thiel? You're going to be disappointed.

    She had just texted with Thomas the Sunday before he died, she said, sharing the messages with The Intercept. He was excited about her upcoming baby shower. “I know what everyone who has experienced this says, but he did not seem like he was thinking about killing himself,” she said. “He RSVP’d to a baby shower in Dallas in May in LA and said how he was excited to be there.”
I have a theory.

So back in the '50s the networks got busted for faking quiz shows. This launched an amendment to the Communications Act and the flourishing of offices devoted to "Broadcast Standards and Practices" (BS&P) to ensure that games of chance and skill portrayed on television were actually games of chance and skill. As you might imagine, BS&P had a hard time with reality television. Plenty of suits; none of them ever made it to judgment. Always around BS&P and fairness. The standard approach to getting this shit to settle is by arguing that reality television isn't a competition, it's pro wrestling. 'cuz thing of it is? The network won't take a hit if they say reality television is fake. People will still watch it, because it's entertainment. Thus, for 20 years, plaintiffs have taken their pennies-on-the-dollar settlements.

I don't think the Murdochs suffer from saying Fox News isn't news. I think the conservative movement, on the other hand, suffers A LOT. They will never again be able to argue that reality is on their side - their principal outlet will have been revealed to be biased entertainment and everything they've ever argued for will be over. They'll have to start over with MSNBC and CNN, who will point at the Murdochs going "we aren't actually news" and say "but we ARE news and if we don't toe the line, we're fucking toast." I think the Bill Barrs of the world are abso-fucking-lutely terrified of Fox's easiest layup defense: "we aren't actually news, just look at all the hare-brained shit we air."

Fox will have to post a disclaimer before every show, pay some sort of fine for not clearly labeling entertainment, and move the fuck on. The conservative movement, on the other hand, will be in the wilderness. That'd be my defense, anyway. If I'm the Murdochs, and I've determined that my dog has gone rabid, I'm going to put it down. I'm diversified enough that it won't really matter, and besides, it worked last time.
https://en.wikipedia.org/wiki/News_of_the_World#End_of_publication

_________________________

I think Garland hasn't lowered the boom yet because there's so much more rot than we know. Russia is explicitly involved; we just don't know how much. Who knows who else is gonna show up in this. The fact that the National Enquirer has been allowed to skate so far is really suspicious to me... and while I don't want to get all pins-on-maps with it, I do think that "conspiracy" is a pretty easy allegation to make against the Trump organization, and conspiracy charges always take fucking forever. And if you want to keep this sort of thing from happening again? You'd best do it right.

And yes. The WSJ's comments section makes me lose faith in humanity freshly every time.
Federalist anything is a big part of the problem; their basic goal is to roll back the government to before the Missouri Compromise. I would argue the original shit-flinging monkey was Newt Gingrich with his "Contract with America". The current crop has certainly devolved further, but the original "fuck your governance, we have feelings" posse was Gingrich and crew.
"maybe you aren't a runner anymore." - my yoga instructor, when I expressed frustration with my ability to run

So far? She's been right. Running is a bitch. I can do it. Sort of. Thing is, I can walk 6 miles a day without any problems, and I enjoy it. Rather than focusing on what I can't do and how it's gone, she opened my eyes to focusing on what I can do and what that means. Once I turned around to "hey, you're not dead, and considering what the EKG looked like, a mild dose of blood pressure meds is an extremely small price to pay" I found better ways to be present. By the time my pulse-ox made it back to 99 I'd pretty much found a new groove.

It sounds like you're on your way. It gets easier once you've been given an excuse to suck. Find the groove but, more importantly, be willing to re-find it whenever the need strikes.
Counterpoint: GM is only going to get shittier.

I mean, if she legit loves her work and the people she works with, that's one thing. But if her misogynistic boss is cleaning house of all ovaries and GM is offering a payout? Now is the time to call up anybody she knows in design literally anywhere else and say "yeah, my misogynistic boss is cleaning house of all ovaries, I'm wondering if you might know somewhere interested in hiring females with 20 years of design experience in the automotive industry."

We've been dealing with hiring nonsense lately. Our receptionist basically overbalanced her ADHD meds and became a zombie. We put together a list of 30 (thirty) "hey here's this thing you're doing that isn't great that you didn't used to do, could you maybe try not to do that" bullet points and she responded by going "...oh yeah, I meant to tell you I'm changing careers next week." One of our midwives waited until one of our other midwives was in Brazil to say "oh by the way I'm not coming back from maternity leave." Another one of our midwives told us "hey uhh so yeah I know we discussed that I was trying to get pregnant again and last time it took me six months, well this time it took less than a week." One of our naturopathic doctors told us that her husband just took his dream job 150 miles away so uhh.

And don't get me wrong. Everyone should follow their bliss. They owe us exactly what we pay them for, no more. We have taken as our guiding star that every employee we take on leaves happier than they started, and that they really take something fundamental and personal with them, and so far we're batting a thousand (minus that idiot we had to fire). But none of them are mission-critical. All of them are great to have around, and we look forward to seeing them in the future.
We'd sincerely hope that anyone unhappy where they are would tell us and allow us to remedy it, and we also entirely understand that if they get better opportunities, even if they're happy, they're gonna bail. And they should. And we're happy for them. That's a big difference from working with misogynists who don't value you.

You know what sucks? Hiring through Indeed or Monster or craigslist or Facebook or WTFever. It REALLY sucks. And it's expensive, and full of tire-kickers, and dipshits who only need three contacts for their unemployment and aren't at all serious. You know what works hella better? Calling up contacts you know and like and saying "hey, you know anybody unhappy or moving?" We had four or five candidates for two open positions before we knew those positions were open. And the only way that shit happens is if people know you're a candidate.

Congratulations to your wife for thriving through 20 of the stupidest years in General Motors history. Do either of you really think it'll be any fun through the next 20?