following: 78
followed tags: 56
followed domains: 5
badges given: 233 of 233
hubskier for: 5206 days
I am probably working on Hubski.
Send me a PM or post with the tag #bugski if you think something is broken.
I filter #thebeatles.
I do not agree with everything that I post. I hope you don't either.
Image by veen.
I for one love a world where such fever dreams get so far that the market crushes them. We had VC-subsidized scooters for years.
I am conflicted between carrying out my original mission in starting this place, and the knowledge that we have been collectively altered (if not hacked) by social media in a way that few would deem positive. I love my MAGA aunt and my MSNBC uncle. Thank god they found pickleball as a medium to preserve their sibling relationship, because it seems everyone they listen to is telling them how truly awful the other is. I used to identify with the left, not just for their social progressiveness, but because intellectual discussion was their pickleball. My aunt and uncle aren't awful. I bought a HAM radio.
No.
Let's be fair. The FDA has totally fucked this one up.

Or to the FDA and Dept of Agriculture to control how much actual poison and microplastics are in the food supply.
Keep it thoughtful, y'all.
I wouldn't want to see NASA ended, but it obviously needs major reform. It is far too slow and far too expensive.
Probably one of our strongest biases is the belief that our mental models reflect the environment objectively, that they depend more upon the nature of the environment than upon ourselves. More likely, our mental models reveal the nature of our cognitive abilities in the context of an environment that extends beyond them. My dog cannot fathom my tax return. That reveals more about his cognitive abilities than it does about the nature of my tax return.
“The subject is having intercourse. Is there something else I can help you with?”
I expect that AI will have thoughts that we cannot for similar reasons. That’s why I don’t think they’ll work for us in the long run.
Good to see you, old friend.
Skipping labeled data? Seems like a bold move for RL in the world of LLMs. I've learned that pure RL is slower upfront (trial and error takes time), but it eliminates the costly, time-intensive labeling bottleneck. In the long run, it'll be faster, more scalable, and way more efficient for building reasoning models, mostly because they learn on their own.

https://www.vellum.ai/blog/the-training-of-deepseek-r1-and-ways-to-use-it

The team at DeepSeek wanted to prove whether it's possible to train a powerful reasoning model using pure reinforcement learning (RL). This form of "pure" reinforcement learning works without labeled data.
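To make the labeling-vs-RL distinction concrete, here's a minimal, hypothetical sketch (not DeepSeek's actual code): instead of paying humans to label answers, the training loop scores each attempt with an automatic, rule-based reward, such as checking the model's final answer against a verifiable result. Function and variable names are my own illustration.

```python
# Hypothetical sketch of a rule-based reward, the kind "pure" RL can use
# in place of human-labeled data: the correctness check is automatic.

def rule_based_reward(model_output: str, verifiable_answer: str) -> float:
    """Return 1.0 if the output's final token matches the known-correct
    answer, else 0.0. No human labeling pass is needed."""
    tokens = model_output.strip().split()
    final = tokens[-1] if tokens else ""
    return 1.0 if final == verifiable_answer else 0.0

# Trial and error is slower upfront: many attempts, most scoring zero,
# but every attempt gets graded for free.
attempts = ["The answer is 12", "The answer is 42"]
rewards = [rule_based_reward(a, "42") for a in attempts]
print(rewards)  # [0.0, 1.0]
```

In a real RL pipeline these rewards would feed a policy-gradient update; the point here is only that the grading step needs no labeled dataset.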
It doesn't seem like it. These chain-of-thought models kinda broke the mold.

https://youtubetranscriptoptimizer.com/blog/05_the_short_case_for_nvda

What they did was to continue developing open source LLMs, which were trained at great expense by others, right?
I'm good. Thanks!
I’m going to make hubski viewable only if you’re logged in. I don’t think mass adoption is the goal, and I’m not thrilled about it being scraped.