Hacker News | mossBenchwright's comments

This is a really good article, but one of the paragraphs near the end rubs me the wrong way.

> In theory, you can try to preserve this context by keeping specs and docs up to date. But there’s a reason we didn’t do this before AI: capturing implicit design decisions exhaustively is incredibly expensive and time-consuming to write down. AI can help draft these docs, but because there’s no way to automatically verify that it accurately captured what matters, a human still has to manually audit the result. And that’s still time-consuming.

I agree that it's time-consuming and that we don't have a good solution yet, but my guess is that a huge part of the next three years of iteration in the craft of software engineering will be creating tools and practices to make this possible. Especially as AIs get better at the actual writing of code, the key failure mode for agentic coding is going to be the intent gap between what you asked for and what you wanted.


While this is neat for creators that can pay for it, the big thing this is likely to supercharge is synthetic data generation for robotics. Training robots is why every big lab has been building image gen pipelines, and physics awareness means 1 less gap in their capabilities.

Yes, that's another important use case for this model.

A recurring theme of the AI rollout era is people assuming that AI renders a technology or process obsolete.

CMSes like WordPress don't solve the problem of letting non-technical people manage a website. They solve the problem of separating your website's content from its logic.
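To illustrate the separation the comment describes, here's a minimal sketch in Python. The `content_store` dict and `render_page` function are hypothetical stand-ins, not WordPress APIs: in a real CMS the content would live in a database and the logic in templates, but the principle is the same.

```python
# Content lives in one place (in a real CMS, a database)...
content_store = {
    "home": {"title": "Welcome", "body": "Hello, world."},
    "about": {"title": "About Us", "body": "We make widgets."},
}

# ...while presentation logic lives separately and can change
# without touching the content, and vice versa.
def render_page(slug: str) -> str:
    """Turn stored content into HTML."""
    page = content_store[slug]
    return f"<h1>{page['title']}</h1>\n<p>{page['body']}</p>"

print(render_page("about"))
```

Editors touch only `content_store`; developers touch only `render_page`. Neither needs to understand the other's half, which is the actual value proposition.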

Now, of course, these tools will change to be used by agents, but probably less than you'd think. AIs are very good at interacting with software the way humans do, so the transition will be fairly small.


