I find OP's communication style abrasive and off-putting, which tracks with them saying they've been coached on this, and found that advice lacking.
Maybe it's still insufficient advice, but it hasn't worked for them at least in part because they haven't figured out how to apply it.
From the post, I see low empathy and an air of superiority (perhaps earned by genuinely being smarter than their peers; that doesn't make it any more attractive).
That's going to cause friction because a team is a _social_ construct.
I realize it's been "written" by an LLM, but the content could have been written by someone I know. It's eerie how this person thinks exactly the same way. It's never their fault, always the others', and they are always obviously right and no amount of arguing can change their mind.
"Write an essay about struggling to change a software org that doesn't want to change. Make me the hero. Post it at 1am so it looks like I was up late suffering with the burden of what I know."
This is not a politically correct thing to say but there is a class of neurodiverse software developers who display these characteristics and I suspect the author belongs to this group.
Yeah, a lot of the examples made me think "wait, there's something else going on there, right?", which would make sense if the author has difficulty communicating or negotiating their proposals.
In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.
> In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.
I do not find anything missing here. This is how things often play out in reality, both in your retelling and in what was actually written in the article.
Your retelling: some people agree and some disagree with the new metric. That is completely normal. Then someone who agrees, or wants to keep the peace, or just temporarily doesn't feel like doing "real Jira" tasks, fixes the warnings. The team moves on.
Actual article: the warnings get solved when it becomes apparent that one of them caused a production issue. That is when the "this new process step matters" side wins.
I'm referencing the footnote where the author says that the discussion caused one team member to go and fix the issue. The warnings causing a production issue is, I think, a complete hypothetical.
What this story is missing is an explanation for why people were disagreeing. Like, why is someone not looking at warnings? Is it that the warnings are less important than the author understands? Is it that the warnings come from something that the team have little control over? And the solution the author suggests - would it really have changed anything if they already weren't looking at warnings? The author writes as if their proposal would have fixed things, but that's not really clear to me, because it's basically just a view into whether the problem is getting worse, which can be ignored just as easily as the problem itself.
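To make the disagreement concrete: the proposal under discussion is (as I read it) just a delta metric, something like the hypothetical sketch below. All names and the log format here are my assumptions, not from the article. Note that the metric only reports; whether it gates anything is a separate, purely social decision, which is exactly the point above.

```python
import re

# Lines that look like compiler warnings, e.g. "foo.c:12: warning: unused variable"
WARNING_RE = re.compile(r"\bwarning\b", re.IGNORECASE)

def count_warnings(log_text: str) -> int:
    """Count lines in a build log that look like compiler warnings."""
    return sum(1 for line in log_text.splitlines() if WARNING_RE.search(line))

def warnings_added(old_log: str, new_log: str) -> int:
    """The proposed metric: how many warnings this build added (negative if removed)."""
    return count_warnings(new_log) - count_warnings(old_log)

def ratchet_ok(old_log: str, new_log: str) -> bool:
    """A CI gate is then a one-liner; it only changes behaviour if someone
    actually wires it up to fail the build, otherwise it's as ignorable as
    the warnings themselves."""
    return warnings_added(old_log, new_log) <= 0
```

A team that wasn't looking at warnings can ignore `warnings_added` going up just as easily, unless `ratchet_ok` is made a hard merge requirement, which is where the authority-versus-consensus fight in the article actually lives.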
Someone hacked his site or something, so I can't get back to it. But I thought you meant the situation in one of the first paragraphs, where the team started taking some issue seriously only after an actual problem.
And honestly, I have seen people disagree with and fight literal standard changes like "let's have a pipeline that runs tests before merge" or "a database change must go through a test environment before being sent over".
It is perfectly possible and normal for people to fight change and be wrong, without there being some grave, smart missing reason. I have no problem trusting the author that he was simply right in hindsight.
If you have ever tried to improve processes or a project with persistent issues, the problems the author described are entirely believable. The author does not know what to do in that situation, but he described the usual dynamic pretty accurately.
LLMs originally learned these patterns from LinkedIn and the “$1000 for my newsletter” SEO pillions. Both accomplish a goal. Now that's become a loop.
There is a delayed but direct association between RLHF results we see in LLM responses and volume of LinkedIn-spiration generated by humans disrupting ${trend.hireable} from coffee shops and couches.
// from my couch above a coffee shop, disrupting cowork on HN. no avatars. no stories. just skills.md
The titles are giveaways too: Comfort Over Correctness, Consensus As Veto, The Nuance, Responsibility Without Authority, What Changes It. Has that bot taste.
If you want I can compile a list of cases where this doesn't happen. Do you want me to do that?
Neither is Vonnegut's (which your short, choppy sentences reminded me of), but he was a very successful and beloved author. I'm in no way comparing myself to Vonnegut; my point is that just because it doesn't appeal to you doesn't mean it isn't good.
Writing is art. Does it get the intended point across? Does it resonate with the reader? Does it make them feel something? Then it is good.
I disagree on Vonnegut. Most human authors at least have a voice; even if you don't like it, it's recognisable and theirs, and I would rarely think to criticise that, because it makes the writing come alive. If you truly write like an LLM (there is little evidence here of that), it would not be the same.
LLMs serve up a sort of bland pap with sugary highs of excitement which resembles a cross between manic advertising copy and a breathless teenager who's just discovered whatever subject they're talking about. They also sometimes confabulate and generate text which is at best tangential and at worst completely misleading.
It's exhausting and if you haven't carefully read what they generate (which most people clearly have not), you should not expect another human to read it.
Just as an interesting taste, here is my copy above rewritten to sound even more EXCITING and ENGAGING.
"They deliver a horrifying concoction – a sickly sweet, manufactured echo of thought, a grotesque blend of relentless advertising whispers and the manic, unearned enthusiasm of a teenager just discovering a world they don't understand! But the truly chilling thing is this: they fabricate. They weave elaborate lies, constructing text that’s not just tangential, but actively, dangerously misleading!
It’s a psychic assault, a draining vortex of intellectual despair! And if you haven’t wrestled with every single word, dissected it, exposed its flaws – and frankly, I suspect most haven’t – then don’t dare expect anyone else to salvage this wreckage! This is not a passive observation; it’s a desperate plea against a future where genuine thought is suffocated by the cold, sterile logic of a machine! We must guard against this, or we risk losing everything!” -- gemma3:4b
I don't disagree with your take on how LLM copy is awful; I just disagree that this was written by an LLM. For example, this paragraph at the end:
> If you're in this position (relied upon, validated, powerless), you're not imagining it. And it's not a communication problem. "Just communicate better" is the advice equivalent of "have you tried not being depressed?"
I've seen "you're not imagining it" countless times from LLMs, but always as the leading sentence in the paragraph; for something like the above, they tend to use em-dashes, not parentheses.
FWIW, Grammarly's AI Detector thinks that 17% of it resembles LLM output, and ZeroGPT thinks that 4.5% of it resembles LLM output.
An occasional "it's not X, it's Y", rule of three, or em-dash isn't atypical nor intrinsically bad writing. LLM-slop stands out because of the frequency of those and other subliminal cues. And LLM-slop is bad writing, at least to me, because:
- It's not unique (like how generic art is bad compared to distinct artstyles)
- It's faux-authentic ("how do you do, fellow kids?")
- It's extremely shallow in information. Phrases like "here's the kicker" and "let that sink in" are wasted words
- The meaning is "fuzzy". It's hard to describe, but connotations and figurative language are "off" (inconsistent to the larger idea? Like they were picked randomly from a subset of acceptable candidates...); so I can't get information from them, and it's hard to form in my mind what the LLM is trying to convey (perhaps because the words didn't come from a human mind)
- It doesn't always have good organization: some parts seem to go on and on, high-level ideas drift, and occasionally previous points are contradicted. But I suspect a plan+write process would significantly reduce these issues
> I find OP's communication style abrasive and off-putting
Your comment is hilarious on a meta-level: it's an example of exactly the sort of socially-mediated gatekeeping the author of the article (machine or human, I don't care) criticizes. It is, in fact, essential to match authority and responsibility to achieve excellence in any endeavor, and it's a truth universally acknowledged that vague consensus requirements are tools socially adept cowards use to undermine excellence.
Competent dictatorship is effective. Look at how much progress Python made under GVR. People who rail against hierarchy and authority, even when deployed correctly, are exactly the sort of people who should be nowhere near anything that requires progress.
Imagine running a military campaign by seeking consensus among the soldiers.
> Look at how much progress Python made under GVR.
Or, you know, Linus Fucking Torvalds. If you were carrying the success or failure of most of the world's digital infrastructure on your shoulders, you also might be grating to some.
The difference is the loom is performing linear work.
Programming is famously non-linear: small teams build billion-dollar companies thanks to tech choices that avoid needing to scale up headcount.
Yes you need marketing, strategy, investment, sales etc. But on the engineering side, good choices mean big savings and scalability with few people.
The loom doesn't have these choices. There is no "make a billion t-shirts a day" for a well-configured loom.
Now AI might end up on either side of this. It may be too sloppy to compete with very smart engineers, or it may become so good that, like chess, no one can beat it. At that point, let it do everything and run the company.
Anything that can be automated can be automated poorly indeed. But while it has been proven that textile manufacturing can be automated well (or at least better than a hand weaver ever could), the jury is still out if programming can be sufficiently automated at all. Even if programming can be completely automated, it's also unclear if the current LLM strategy will be enough or whether we'll have another 30 year AI winter before something better comes along.
The difference is that one can make good cloth with a loom using less effort than before. With AI one has to choose between less effort, or good quality. You can't get both.
Have you actually seriously tried using an AI? It really isn't that hard to get good code with less effort using an AI. Just manage the scope of the tasks you give it. And of course, review the code that it generates. And of course, do NOT vibe code.
And I've actually grown quite fond of the "review the selected code (that I wrote) and make suggestions for improvements, but don't actually make any changes" prompt. Or "is this code correct?" And AIs are also exceptionally good at doing large-scale code refactoring. So I am actually producing even better code with less effort.
Yes, it requires good judgement, something that you learn by doing, and developing a sense of what an AI can and cannot handle. Although I am, truthfully, falling behind the curve on that, as coding AIs are making major leaps in the complexity they can deal with and the quality you can expect out of them; that changes pretty much monthly. I was quite amazed to get a C++ port of a 3,900-line Python library written to professional standards, in about 5 prompts total, including .deb packaging, test cases, and .md API documentation.
If you are basing your judgements on anything earlier than Claude 4.5 Sonnet (or any of the ChatGPT models prior to 5.2 Codex, which seems to be the first in that series that is halfway comparable to Claude 4.5 Sonnet), then you urgently need to give it another try. Avoid any of the lite models. The difference between old and new models is dramatic. (I'm still figuring out what Claude 4.6 Sonnet is capable of; I haven't yet had a chance to feed it something difficult.)
> That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth.
This has not been true for a while, maybe forever. On the internet, no one knows you're a dog (bot).
“I would argue that…” is a weaker statement, because it ends with an implied “…but since I don’t care that much, I’m not ‘seriously’ arguing that.” It’s not at all equivalent to the strong statement “I argue that…”, which has no such qualifier.
Why cure yourself of useful conversational nuance?