It means that either the test is flawed, the results are bogus or the report is a lie.
Intelligence is a measure of reasoning ability.
Current AIs have been designed to produce content that (optimally) mimics the products of reason, but they do not in fact reason at all, so they cannot possess measurable intelligence.
Much more to the point, current AIs have been designed to make enormous piles of money for corporations and venture capitalists, and I would pretty much guarantee that that has more to do with this story than anything else.
One extremely minor correction - you said they're designed to make enormous piles of money, and yet none of these(1) are cash-flow positive or have any clear path to profitability. The only way a company makes money off this (outside an acquisition that lets the founders exit with bags of cash) is if one of these companies is allowed to create a monopoly leading to a corporate autocracy. General language models are absolutely shit in terms of efficiency compared to literally any other computing tool - they just look shiny.
(1) Please note - lots of pre-ChatGPT neural networks are happily chugging away doing good and important work… my statement excludes everything pre-ML bubble and a fair few legitimately interesting ML applications developed afterwards which you'll never fucking hear about.
Edited to add: Just as a note, it's always possible that this AI gold rush actually does lead to an AGI, but, lucky for me, if that happens the greedy-as-fuck MBAs will absolutely end civilization before any of you could type up "told you so," so I'm willing to take this bet.
none of these(1) are cash flow positive or have any clear path to profitability.
Only if you consider the companies developing these algorithms and not every other company jamming "AI" into their products and marketing. In a gold rush, the people who make money aren't the ones finding the gold; they're the ones selling shovels and gold pans.
ML bubble? You mean the one in the 1960s? I prefer to call this the GenAI bubble, since other forms of AI are still everywhere and have improved a lot of things invisibly for decades. (So, yes. What you said.)
AI winter is a recurring theme in my field, mostly from people not understanding what AI is. Artificial Narrow Intelligence systems have been beating humans at various forms of reasoning for ages.
AGI still seems a couple of AI winters away from a basic implementation, but we already have really useful AI that can tell you whether you have cancer more reliably, and years earlier, than humans can (based on current long-term cancer datasets). These systems can get better with time, and the ability to learn from them is still active research, but it's improving. Heck, with decent patching, a good ANI can give you updates through ChatGPT for stuff like scene understanding to help blind people. There's no money in that, but it's still neat to people who actually care about AI instead of cash.
It means some people fudged a test.
Here’s what that means
That we need to produce a better, generally accepted benchmark of human-level general intelligence, I expect.
Coming up with such a metric is a real problem, and probably an important step on the way to producing an artificial general intelligence.
Think of the average human intelligence, then realize half of all people are below that.
People are saying "there is no way an AI is as smart as a human." There are a few humans I know where being as smart as them wouldn't be much of a challenge.
Think of the average human intelligence, then realize that nearly half of those who voted in 2024 voted for Trump.
That the AI industry has finally produced a text-based frontend UI for general search, aka a 'search bar', but you'd still have to vet the results yourself?
Man, woman, person, camera, TV.