>It's only a loss if you think the skill and ability you are losing is intrinsically valuable
What about the skill of learning itself? I would suggest that's one of the most important skills humans have evolved. The more integrated AI becomes in our societies, the more it will automate away potential opportunities for learning. I can foresee a world tightly integrated with AI where people are not only physically sedentary, but mentally as well.
As we progress further into the future, we need more educated people than ever to tackle the exponentially increasing complexities of our society. But AI presents an obstacle that many will never cross because of how convenient it is to skip the messy work of understanding.
Also, this problem is not unique to AI. It existed before the GPTs and Claudes of the world. But it's a problem of scale, and every company on Earth right now is trying to scale AI up as fast as possible.
Here's a practical example: I am using AI to help me with my garden. It's been amazing - it helps me identify plants, diagnose soil issues, figure out what fertilizer to use and what days to apply it, etc.
What exactly did AI take from me? Spending hours of research on Google and YouTube to glean little incomplete bits and pieces? Calling a yard service?
It's also clearly obvious when AI gives bad or incorrect advice - I am still trying different things and watching for the results.
Coding is an outlier example where AI can just do the work semi-competently without anyone checking it. But I think it speaks more to the nature of coding itself - coding is a means to an end and, for most people, not an actual pursuit in itself.
>What exactly did AI take from me? Spending hours of research on Google and Youtube to glean little incomplete bits and pieces? Calling a yard service?
An opportunity for a deeper understanding of gardening? If you spend hours researching on gardening and come away with an incomplete understanding of what you were attempting to do, I'm not sure that's immediately the fault of the research available. It could be that you just didn't do a good job searching for the necessary information.
In this way, AI can be a boon. It helps figure out what you actually want to know in the moment. But I think it would be a step too far to say that a smattering of specific questions can replace the sturdy foundation provided by a typical education--e.g. through apprenticeship, books, etc.
>It's also clearly obvious when AI gives bad or incorrect advice
Is it? Isn't this a __core__ problem that researchers around the world are trying to solve? Also, __how__ could you make such a statement unless you already possessed the knowledge ahead of time to make such a judgment? I think it's hard to know if something is bad advice by looking at just cause and effect. It could be that you just lack the understanding to put the advice into practice.
> It could be that you just didn't do a good job searching for the necessary information.
How can you? The existing resources are terrible.
> But I think it would be a step too far to say that a smattering of specific questions can replace the sturdy foundation provided by a typical education--e.g. through apprenticeship, books, etc.
I am not going to go through a college program for my own garden. And I have books! But unless you read extensively and perform a small research project, you are not going to know how all of the plants in your specific garden in your specific region in your specific weather are going to behave.
The best I could do is hire an expert - but again, I am learning less by hiring it out.
> Also, __how__ could you make such a statement unless you already possessed the knowledge ahead of time to make such a judgment?
"Use X to kill the moss". It didn't kill the moss. I will now use AI to find a list of alternative things to try to kill the moss, and learn what works in my garden.
I don't think the idea that AI is going to make people stop learning is borne out in practice. It might make some people stop researching as an activity, though.
Again, writing replacing memorization is not a good 1:1 comparison to AI replacing technical understanding. Someone still needs to understand what is written and act upon that knowledge. That requires skill and experience in the domain they're working within.
However, a person using an AI does not need to understand the underlying problem to get results. A person can ask Claude Code to write them a web app dashboard without having ever learned JS/CSS/HTML. It does not require them to have skills within a domain.
Also, we need to be honest with ourselves. Human brains did not evolve for the instant gratification of modern technology. We've already seen what technology has done to our attention spans. I am concerned over what further reliance on technology, particularly AI, will do to our brains.
> However, a person using an AI does not need to understand the underlying problem to get results. A person can ask Claude Code to write them a web app dashboard without having ever learned JS/CSS/HTML. It does not require them to have skills within a domain.
This perspective is funny to me because of how much the modern web is already built around web developers refusing to use CSS and PHP. The giving up of the skills happened before the automation.
>Ultimately, out there, people value you by just 5-6 things and almost never they are your beliefs or personal values.
This is such a rigid worldview, and it is demonstrably wrong. I won't even argue with you because others have already pointed out why this cannot be true elsewhere.
What I will say is that I hope you're doing alright.
I have no idea what your goal is with that message. I would love to live in your world, but I am fairly sure that wherever I have been geographically (5 countries), it does not exist there.
>Prove to me that this value exists. Prove to me it can be measured. To pre-empt your reply, I don't think you can.
Why does any of this matter? Do you require that a person prove their utility to you before you hold the door open for them? When a child falls and scrapes their knee, do you ask about their grades in school, or their parents' net worth, before lending them a hand?
My point: human society is deeply interwoven with sentimental behaviors that make zero sense in economic theory. You can try all the models you want to capture human compassion, and it will get you nil.
But that doesn't mean we should optimize that out of our societies. I think it's the most wonderful part of them, and if we were to remove it, we'd stop being human.
If you're going to make the claim that people hold intrinsic value, people are going to challenge you for proof. Holding a door open for someone and asking questions doesn't necessarily indicate value. It could indicate personal interest. Empathy. Projection. Self-interest. The concept of altruism doesn't necessitate the belief that other life holds value at all. Altruism by its definition is giving without the expectation of return.
I think you make a good point re culture and tradition. Humans like many "valueless" activities. Some of these are hardcoded into our psyche through evolution. Some are for sentimental reasons. Some are religious. Some are enforced. Some are situational. Etc. I am not suggesting we eliminate those. I am simply agreeing with the top comment which is that we cannot force people to place any value on them. Some people do not see value in those traditions (or in other people). There is no objective way to prove them wrong.
>If you're going to make the claim that people hold intrinsic value, people are going to challenge you for proof.
But this is assuming we share the same set of axioms?
It sounds like you don't accept humans having intrinsic value as a core axiom. However, I do, and it makes zero sense to me to try and "prove" such a notion.
I think it's a valid emotion to feel. I genuinely resonated with the story, but when I learned it was written by Claude it kind of left me feeling ... betrayed?
One of the many things I love about art is when I encounter something that speaks to emotions I've yet to articulate into words. Few things are more tiring than being overwhelmed with emotion and lacking the ability to unpack what you're feeling.
So when I encounter art that's in conversation with these nebulous feelings, suddenly that which escaped my understanding can be given form. That formulation is like a lightning bolt of catharsis.
But I can't help but feel a piece of that catharsis is lost when I discover that it wasn't a human's hand that made the art, but a ball of linear algebra.
If I had to explain, I guess I would say that it's life-affirming to know someone else out there in the world was feeling that unique blend of the human experience that I was. But now that AI is capable of generating text, images, music, etc., I can no longer tell if those emotions were shared by the author or if they were an artifact of the AI.
In this way, AI generated art seems more isolating? You can never be sure if what you're feeling is a genuine human experience or not.
>You can never be sure if what you're feeling is a genuine human experience or not.
This is what the deconstructionists were preparing us for, I guess. The author is dead, and if not dead, then fake. It was never a good idea to tie our sense of meaning to external validation.
The humanity immanent in the text came from you, the reader, not the author, and it has always been that way. Language never gave us access to the author's mind -- and to the extent that statement is wrong, it doesn't matter. AI is just another layer of text, coming between the reader and the same collective consciousness that a human author would presumably have drawn on. The artistic appreciation of that text is the sole privilege of the reader.
>If you trust human beings to continue to appreciate art as we have done since the beginning, there's not going to be any intractable issues
When did the discussion become about trusting the audience? I think the discussion has always surrounded whether it's worthwhile to treat art as a problem that needs solving.
Well, if you think it's not a problem to solve then you should go tell all the paintbrush manufacturers that their services will no longer be needed. Ditto for publishers, patrons of the arts, etc., etc.
If art is worth doing it's worth doing with good tools.
I think it's a perfectly fine point. The OP said (my interpretation) that LLMs are messy, non-deterministic, and can produce bad code. The same is true of many humans, even those whose "job" is to produce clean, predictable, good code. The OP would like the argument to be narrowly about LLMs, but the bigger point even is "who generates the final code, and why and how much do we trust them?"
As of right now agents have almost no ability to reason about the impact of code changes on existing functionality.
A human can produce a 100k LOC program with absolutely no external guardrails at all. An agent can't do that. To produce a 100k LOC program, agents require external feedback to keep them from spiraling off into building something completely different.
It has nothing to do with determinism. It's the difference between nearly perfectly but not quite perfectly translating between rigorously specified formal languages and translating an ambiguous natural language specification into a formal one.
The first is a purely mechanical process, the second is not and requires thousands of decisions that can go either way.
The difference is that a human can reason about their code changes to a much higher degree than an AI can. If you don't think this is true and you think we're working with AGI, why would you bother architecting anything at all or building in any guardrails? Why not just feed the AI the text of the contract you're working from and let it rip?
You give way too much credit to the average mid-level ticket taker. And again, why do I care how the code does it as long as it meets the functional and non-functional requirements?
You realize that coding agents aren't AGI, right? They aren't capable of reasoning about a code change's impact on anything other than their immediate goal, to anywhere near the level even a terrible human programmer can. That's why we have the agentic workflow in the first place. They absolutely require guardrails.
Claude will absolutely change anything that's not bolted to the floor. If you've used it on legacy software with users, or if you've reviewed the output, you'd see this.