Not so useless. In my experience LLMs are about 50/50 on producing a regex that actually works and covers the cases you asked for. The odds drop further once you get into cases needing advanced features like backreferences and lookahead.
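For what it's worth, those are the kinds of features I mean. A minimal Python sketch (the pattern and sample text are my own illustration, not model output): a backreference (`\1`) to catch a doubled word, plus a lookahead (`(?=...)`) so it only matches when punctuation follows.

```python
import re

# Backreference \1 matches a repeated word; the lookahead (?=[.,!?])
# requires punctuation immediately after, without consuming it.
pattern = re.compile(r"\b(\w+)\s+\1\b(?=[.,!?])")

text = "the the, quick brown fox fox jumped"
matches = [m.group(0) for m in pattern.finditer(text)]
print(matches)  # only "the the" qualifies; "fox fox" fails the lookahead
```

It's exactly this kind of interaction between features that models tend to fumble: each piece is fine in isolation, but combined they quietly stop covering the cases you asked for.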
You can go local now: Qwen 3.5 9B at Q4, powering a Hermes agent at 35 to 50 tok/s with a 99 percent tool-call success rate, runs on a used RTX 3060 that costs about two months of ChatGPT Pro, and then you never have to bother. https://xcancel.com/sudoingX/status/2033020823846674546#m
> Nope, if nobody trains the models on new data you have at some point an outdated model.
As people train the models on new data, they'll increasingly be training on AI output, including hallucinations and slop. More garbage in means even more garbage out, and the cycle will continue as "updated" models decline in quality.
https://mdp.github.io/2026/03/17/the-kids-are-alright-and-th...