People championing the absolution of billionaires who create a chatbot that can't spell 'strawberry', and who then say it should be allowed to choose who lives and dies: not what I expected at the turn of the decade.
Half of these people have a financial interest in the companies in question, either directly (working for them) or indirectly, or they're already part of that class. Realize who's behind the keyboard, and there's nothing surprising about it.
This can only be an intentional misreading of the bill, or you haven't read the underlying bill at all, because the headline is patently false. It indemnifies them ONLY if they unknowingly assist in mass murder.
If someone asks ChatGPT "hey chatgpt, where are spots in my city where a lot of people hang out on the street", then uses his car to mass murder 18 people, you want OpenAI to be on the stand? Sounds like an objectively insane position.
In a world with broad liability as you desire, the person who rented a hostel room to Luigi Mangione while he plotted murder should be held liable for aiding him, despite knowing nothing of his intentions.
The API offers that. Pay X per month, get Y tokens. Then you can look at all the graphs of money being deleted by OpenClaw, for transparency.
People want a free lunch. If the API were cheaper than the subscription, everyone would use the API. Instead people flock to an apparently unsustainable price at a fixed monthly rate, presumably subsidized by others who don't use their full capacity every month.
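A back-of-the-envelope sketch of that cross-subsidy argument, with purely hypothetical numbers (nothing here reflects any provider's real pricing or costs):

    # Hypothetical numbers only: a flat-rate plan works while light users
    # subsidize heavy users, and breaks when heavy users dominate.
    subscription = 20.00           # $/month flat rate (made up)
    api_cost_per_1k_tokens = 0.01  # provider's marginal cost (made up)

    def marginal_cost(tokens_used):
        """Provider's marginal cost of serving one subscriber's usage."""
        return tokens_used / 1000 * api_cost_per_1k_tokens

    light_user = marginal_cost(200_000)     # $2   -> provider keeps $18
    heavy_user = marginal_cost(10_000_000)  # $100 -> provider loses $80

    # An agent hammering the model all day turns everyone into a heavy user.
    print(light_user, heavy_user)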
Tell that to Trump and his glorious way of bombing Iran. Nothing against the idea itself, the Mullahs all but asked for it to happen.
But the execution? That was a level of dogshit I haven't seen in my lifetime, lol. Even Russia was better prepared with its invasion of Ukraine.
Both Trump and Netanyahu had somewhat solid prospects of not getting utterly wasted in the next elections. Instead they went into one of the most ill-prepared wars in modern history, with results that may seriously upend the global economy, if not lead us to WW3 outright.
But do you have any code that has been vetted and verified to see whether this approach works? This whole agentic code-quality claim is an assertion, but where is the actual proof?
It’s agents all the way down - until you have liability. At some point, it’s going to be someone’s neck on the line, and saying “the agents know” isn’t going to satisfy customers (or in a worst case, courts).
> It's not like humans aren't already deflecting liability
They attempt to, sure, but it rarely works. Now, with AI, maybe it will, but that's sort of a worse outcome for the specific human involved: "If you're just an intermediary between the AI and me, WTF do I need you for?"
> or moving it to insurance agencies.
They aren't "moving" it to insurance companies, they are amortising the liability for a small extra cost.
Just today I had an agent add a fourth "special case" to a codebase, and I went back and DRY'd three of them.
Now, I used the agent to do a lot of the grunt work in that refactor, but it was still a design decision initiated by me. The chatbot, left unattended, would not have seen that it needed to be done. (And when, during my refactor, it tried to fold the fourth case back in, I had to stop it.)
(And for a lot of code, that's ok - my static site generator is an unholy mess at this point, and I don't much care. But for paid work...)
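To make the "DRY'd three of them" part concrete, here's a minimal, entirely hypothetical sketch of that kind of refactor (not the poster's actual code): three near-identical special-case branches collapsed into one data-driven path.

    # Before: each "special case" is its own branch with duplicated structure.
    def shipping_cost_before(region, weight_kg):
        if region == "EU":
            return 5.0 + 1.2 * weight_kg
        elif region == "US":
            return 4.0 + 1.5 * weight_kg
        elif region == "UK":
            return 6.0 + 1.1 * weight_kg
        raise ValueError(f"unknown region: {region}")

    # After: one rule table, one code path.
    SHIPPING_RULES = {
        "EU": (5.0, 1.2),
        "US": (4.0, 1.5),
        "UK": (6.0, 1.1),
    }

    def shipping_cost(region, weight_kg):
        base, per_kg = SHIPPING_RULES[region]
        return base + per_kg * weight_kg

Spotting that the branches share a shape, and deciding the table is the right abstraction, is exactly the design call the agent didn't make on its own.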
Beautiful.