As someone who uses both chatbots and code assistants on a daily basis, I'm strongly convinced that chatbots are still far more stupid than the average human. But they are stupid in reasoning. They know a lot of shit, sure, and in that they are useful, but they hallucinate, regurgitate, and make errors in even the simplest calculation or logical reasoning. Importantly, though, that's LLMs, not AI as a whole: generative AI in general and LLMs in particular.

AGI, I think, isn't about the "level" of intelligence; it's about how that intelligence is applied. It's about combining language and generative skills with world models and causal reasoning. It's about working from first principles to explore fields the AI wasn't trained on. That's the "General" part of AGI. AI companies are trying to convince investors that LLMs can lead to AGI, and benchmarks seem to agree, but once you realize what LLMs are (amazing overfitting machines), you realize LLMs are simply overfitting on the benchmarks.
I think AGI might 'eventually' be reachable, but NOT through the LLM path. We need causal AI at the wheel: maybe LLMs for the user interaction, but the central core should be a causal AI that talks both to the user, with an LLM as mediator, and to an advanced world model. Building that world model is going to take massive amounts of input hours from thousands of experts; not just LLM-style training on scientific publications, but careful manual world model building. I think most AI companies are on the wrong track, putting too much focus on LLMs and making them too central.
RE: Are We At AGI?