Today's models were trained on vast amounts of hand-written code. Now Stack Overflow is dying and GitHub is filling up with AI-generated slop, so one begins to wonder whether further training will start to show diminishing returns, or perhaps even regressions. I am at least a little skeptical of any claim that AI will continue to improve at the rate it has so far.
If you don't really understand how today's LLMs are made possible, it is easy to fall into the trap of thinking that perpetual progress is just a matter of time and compute.