
How can an entity consisting of anywhere from dozens to hundreds of thousands of people — each of whom has limited agency over the aggregate actions of that entity, and each of whom can be replaced at any time while fulfilling the same function (on paper) — be remotely capable of demonstrating human-like moral agency?


That's a question, not an argument.

Here's another question: "How can a human being, which consists of many neurons, each of which has limited agency over the aggregate actions of that entity, be remotely capable of demonstrating moral agency?"

Incidentally, I noticed that you substituted 'human-like moral agency' for 'moral agency' in the OP. I never said states have human-like moral agency (nor am I making a claim to the contrary per se).


No, it's a rhetorical question. Turn it into one with an answer and you'd have rebutted the argument.

You might also note that the answer to your question is that they don't. It's a problem that applies at every scale — and we do terrible things to single neurons all the time in the name of science.


My question wasn't whether neurons have agency. It was whether people have agency.

Also, I intended 'limited agency' to be read as 'zero or very little agency'.


States have not-human-like moral codes: agree.
Amoral states: don't agree.

I believe each group of people has its own social and moral codes that guide its behavior. I don't believe it's something easy to understand, determine, or represent.



