Have we reached the peak of what LLMs can do?


Gary Marcus recently put forth a solid argument for that being the case on a podcast with Steve Eisman. It's definitely worth a listen.

If you can remember back to the good old days, the jump from GPT-2 to GPT-3 was a massive increase in capability.

Then came the leap from 3 to 4, which still brought a notable increase in capabilities.

But when it came to going from GPT-4 to 5, the intelligence gain was less impressive. It got cheaper, which is good, and I suppose tool calls got better, but we didn't see a massive gain in reasoning or anything like that.

Basically, if you graphed the intelligence gains of LLMs over time, the curve once looked like a hockey stick. Now, Gary argues, it's flattening out.

He also came to a conclusion similar to mine: there really isn't a "moat" around this tech, and therefore the already profitable companies that have access to all the internet's data and insane amounts of compute power are likely going to eat OpenAI.

Specifically, Google. I would also argue that Amazon will do pretty well just by keeping people on their platform. Even if AWS offers a product that is only 80% as good as the market leader's, people already on AWS will use it.

Amazon wins even if they offer that product at break-even prices because they will make money on all the other products their customers continue to use on AWS.

Let me know what your thoughts are.