Note that they don't say "AI"; they say "chatbot software." Every program profiled is quite clearly "AI" to its creators, but that, right there, is Quanta going "hollupaminnit."
- To be clear, this is not the end of LLMs. Wilson of NYU points out that despite such limitations, researchers are beginning to augment transformers to help them better deal with, among other problems, arithmetic. For example, Tom Goldstein, a computer scientist at the University of Maryland, and his colleagues added a twist to how they presented numbers to a transformer that was being trained to add, by embedding extra “positional” information in each digit. As a result, the model could be trained on 20-digit numbers and still reliably (with 98% accuracy) add 100-digit numbers, whereas a model trained without the extra positional embedding was only about 3% accurate. “This suggests that maybe there are some basic interventions that you could do,” Wilson said. “That could really make a lot of progress on these problems without needing to rethink the whole architecture.”
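For what it's worth, the trick they're describing is simple enough to sketch. This is not Goldstein's actual code, just a minimal illustration under my own assumptions (the class names, vocabulary size, and dimensions are made up): each digit gets a normal token embedding plus an extra embedding for its place within the number, counted from the ones column, so the columns line up no matter how long the operands get.

```python
# Minimal sketch of per-digit "positional" embeddings for arithmetic.
# Assumed names/sizes, not the paper's implementation.
import torch
import torch.nn as nn

class DigitPositionEmbedding(nn.Module):
    def __init__(self, vocab_size=14, max_digits=128, dim=64):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)      # digits 0-9 plus a few symbols (+, =, pad)
        self.place_emb = nn.Embedding(max_digits + 1, dim)  # digit's place in its number; 0 = "not a digit"

    def forward(self, tokens, places):
        # tokens: (batch, seq) ids for each character of e.g. "123+456="
        # places: (batch, seq) place of each digit counted from the right
        #         (1 = ones, 2 = tens, ...), 0 for non-digit characters
        return self.token_emb(tokens) + self.place_emb(places)

def place_indices(number: str) -> list[int]:
    """Place of each digit, counted from the right: '305' -> [3, 2, 1]."""
    n = len(number)
    return [n - i for i in range(n)]

print(place_indices("98765"))  # [5, 4, 3, 2, 1]
```

Because the ones digit always carries the same place embedding whether the number is 5 digits long or 100, the model presumably doesn't have to relearn column alignment for lengths it never saw in training, which is (as I read it) the whole trick.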
I wanna set the WABAC machine for 1994, when the whole world lost its mind over the revelation that a Pentium could be tricked into fucking up the fourth digit of 8-digit long division if you came at it just so and squinted, and here we are going "we can get 98% accuracy in addition if we tweak the program just so."
The basic problem with LLMs is pareidolia; we see a smile in the clouds and presume God is happy with us. If we can have a conversation with it, isn't it intelligent? Imaginary friends were real AF at one point, too.
As for me, I really wanna try this. Not enough to spend money on it, but having an AI living on my desktop sounds fucking fun. Especially if I can watch its temperature rise on the readouts. Maybe when I run out of other shit to do.