>LLM-creation ("training") involves detecting/compressing patterns of the input.
There's a pretty compelling argument that this is essentially what we do, and that what we think of as creativity is just copying, transforming, and combining ideas.
LLMs are interesting because that compression forces distilling the world down into its constituent parts and learning about the relationships between ideas. While it's absolutely possible (or even likely for certain prompts) that models can regurgitate text very similar to their inputs, that is not usually what seems to be happening.
They actually appear to be little remix engines that can fit the pieces together to solve the thing you're asking for, and we do have some evidence that the models are able to accomplish things that are not represented in their training sets.
If people find this cool and want to play with it, they can; just make sure to mix only compatible licenses in the training data and license the output appropriately. Well, the attribution issue is still there, so maybe they can restrict themselves to public domain material. If LLMs are so capable, that shouldn't limit the quality of their output too much.
Now for the real issue: what do you think the world will look like in 5 or 10 years if LLMs surpass human abilities in all areas revolving around text input and output?
Do you think the people who made it possible, who spent years of their life building and maintaining open source code, will be rewarded? Or will the rich reap most of the benefit while also simultaneously turning us into beggars?
Even if you assume 100% of the people doing intellectual work now will convert to manual work (i.e. there's enough work for everyone) and robots don't advance at all, that will drive the value of manual labor down a lot. Have you gamed it out in your head and concluded that life will somehow be better for you, let alone for most people? Or have you not thought about it at all yet?
> Do you think the people who made it possible, who spent years of their life building and maintaining open source code, will be rewarded?
I think they should be rewarded more than they are currently. But isn't the GNU General Public License basically saying you can use such source code without giving any reward whatsoever?
But I see your point. The reward for open source developers is the public recognition for their work. LLMs can take that recognition away.
UBI only means you won't starve or die of exposure. It doesn't mean that people who are already rich today won't become so obscenely rich tomorrow they are above the law or can change the law (and decide who gets medical treatment or even take your UBI away).
>then say they will still be on BlueSky and Mastodon then you know it's purely ideological.
Both Bluesky and Mastodon are open/federated networks, which aligns more with EFF's values. So, yes, but I don't think for the reasons you're hinting at.
I just checked their Facebook and X page. The X page is getting far more eyes. For instance, they posted their article "The FAA’s “Temporary” Flight Restriction for Drones is a Blatant Attempt to Criminalize Filming ICE" to both accounts. The results:
I think it has been proven again and again that these "engagement numbers" are a mix of bots, the social media companies themselves trying to inflate the numbers, and real engagement. Unless there is an impartial third party, these numbers exist to attract advertisers. In this situation, I would trust the sources themselves, i.e. the account holders.
That assumption is only true if there is no manipulation of likes. I believe that the presence of bot farms has been extensively documented by now, which should disprove the usefulness of likes on any social media platform nowadays.
You are making lots of assumptions when evaluating GitHub projects that you aren't stating here.
GH stars can indicate: which of many forks of a repo might be the most active, which of many projects in a category might be the most used/trusted, the growth trajectory of a project (stars over time).
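As a rough illustration of the "stars over time" signal: given star-count snapshots (the numbers and fork names below are made up; real timestamps could come from GitHub's stargazers API), a growth rate separates a hot fork from a stale one even when totals are close.

```python
from datetime import date

# Hypothetical monthly star-count snapshots for two forks of the same repo.
snapshots = {
    "fork-a": [(date(2024, 1, 1), 120), (date(2024, 6, 1), 150)],
    "fork-b": [(date(2024, 1, 1), 40),  (date(2024, 6, 1), 400)],
}

def stars_per_day(series):
    """Average star growth rate between the first and last snapshot."""
    (t0, s0), (t1, s1) = series[0], series[-1]
    return (s1 - s0) / (t1 - t0).days

# fork-b's trajectory dominates despite starting with far fewer stars.
growth = {name: stars_per_day(s) for name, s in snapshots.items()}
```

The point of the sketch: raw star totals reward age, while the slope rewards current activity, which is usually the signal people actually want.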
You can just look at the numbers. They're seeing 15x more engagement on BlueSky, and even more engagement on Mastodon compared to X:
X post: 124 comments, 79 reblogs, and 337 likes
BlueSky post: 245 comments, 1400 reblogs, and 6.2K likes
Mastodon post: 403 reposts, 458 likes
There's more ROI posting on BlueSky or Mastodon, even ignoring the fact that BlueSky and Mastodon are projects clearly more aligned with internet freedom than X is.
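Taken at face value, the like counts quoted above actually put the gap closer to 18x than 15x. A quick check, using only the numbers posted in this thread:

```python
# Engagement numbers quoted above for the same EFF post on each platform.
likes = {"X": 337, "Bluesky": 6200, "Mastodon": 458}

# Like-count ratio relative to X, rounded to one decimal place.
ratios = {name: round(n / likes["X"], 1) for name, n in likes.items()}
print(ratios)  # Bluesky comes out around 18x, Mastodon around 1.4x
```

This says nothing about which numbers are real (see the bot-farm objections elsewhere in the thread); it only checks the arithmetic behind the "15x" claim.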
Which post are you looking at? I just posted the numbers for the first post I could find that was the same across X, Bluesky, and Facebook (a little hard since the feeds for all three are different). The X post had 16 times the number of likes as Bluesky and 26 times the number of likes as Facebook. The X post had 17 times the number of comments as Bluesky, 6 times the number as Facebook.
Your post made me randomly spot check another one from a month ago ("The U.S. government on Wednesday..."), the numbers aren't quite as drastic but X is still ahead. Likes/comments/shares:
X: 280, 4, 172.
Bluesky: 182, 2, 98.
Because of the algorithms I wouldn't be surprised if you'd be able to cherry pick some Bluesky post that's ahead. But a casual browse through both feeds makes it look like X gets much more engagement.
The people on BlueSky and Mastodon aren't the people they need to convince of the correctness of their message.
If you actually care about getting your point across, hostile environments are exactly the place that you need to be broadcasting. Especially when they haven't put up any barriers for you.
EFF leadership just totally doesn't get it.
Unless the goal isn't what they say it is and they just need the cheerleading squad to make it look like their fundraising is effective.
If an organisation had any serious chance of moving the needle by staying on X, Musk would simply find a reason to ban them. X leadership isn't interested in fair and balanced discussion.
An online argument has NEVER EVER EVER changed anyone's mind.
Source: I've argued with strangers on the internet since the mid-90's.
Don't feed the trolls was the rule back then when trolls were just actual people arguing for the sake of getting a reaction - and now the trolls are either a piece of software connected to a language model or paid to argue in bad faith. Like WOPR says: the only winning move is not to play.
This just fundamentally isn't true. What people see online massively influences how they think, to the extent that entire media conglomerates have been bought and sold to do exactly that.
I specifically said "online argument". You talking to someone online, in text format. You can change people's minds in video calls, sometimes. No amount of 1-on-1 online discourse has ever changed anyone's mind on anything.
The general sentiment people observe online definitely changes how they think, it moves the Overton Window considerably. And that's exactly what the bots[0] on Twitter and other platforms like TikTok do, they argue about whatever they get paid to argue for in bad faith, endlessly.
People see this, not knowing it's all artificial, and go "ooh, MANY PEOPLE think like this" and start thinking it's normal to think like that.
[0] I'm using "bot" as shorthand here for bad-faith actors: usually the first level is just spamming static canned arguments, stage two is some kind of smart system that responds to replies somewhat in context, and stage three pings an actual human who comes in with VERY specific deep-cut arguments.
Source: I argue online a lot for fun and relaxation.
So how do you know you've never changed someone's mind? And isn't the opposite just retreating to echo chambers where everyone agrees?
I personally don't care if EFF leaves X. However the message in the article does not line up, it's a bad decision and not justified by the reasons cited.
TBH echo chambers are just fine as long as you know you're in one.
I have peeked outside of my curated chamber and the people in there are completely batshit insane. Like objectively not following any sane logic or reason. And no amount of online discourse will make them change their ways unless they WANT to change.
If there is an organization who should be promoting federated, decentralized social media services over centralized robber baron engagement factories more than the EFF, I don't know who it would be.
And the EFF is also looking at conversion rates for those views. Are you convinced that the Elon-pilled still on X are interested in donations to the EFF compared with the weirdos on Mastodon?
This is on point, but someone is taking offense at being called a "weirdo" (hence the downvotes, I think). Yes, we are weirdos on alternate social media, just like we are weirdos who use Linux, Emacs, write Lisp, etc. It's weird, i.e. unusual. "Geek" might have been a better term to use though.
On average, they're getting <9,000 views per post on X. With 100 - 150K followers on both Bluesky and Mastodon, I'd expect their impressions to beat those X numbers.
But as they say in the article, their reason for leaving isn't solely the low impressions. It's the low impressions, plus "Musk fired the entire human rights team and laid off staffers in countries where the company previously fought off censorship demands from repressive regimes," plus X's unwillingness to give users more control, consider end-to-end DM encryption, or offer transparent moderation.
It's wild that we've gotten to the point where 'allows tyrants to silence users on their platform' is no longer something we're allowed to dislike without it being a 'political' stance. Some time in the last 30 years, acting like a reasonable and decent human being became a political statement.
Musk is a giant piece of shit who turned Twitter into a cesspool worse than 4chan and is arguing in court that he should be allowed to use grok to generate CSAM.
So yeah they’re absolutely right to get the fuck out of the place he destroyed.
The reason to leave ex-twitter and the reason to keep using lesser platforms may not be the same reason.
Probably the reason EFF keeps using mastodon/bluesky is not for reach, but to support federated platforms.
As an activist organization, the EFF needs to reach people, but it also needs to show people that alternatives to surveillance capitalism exist and encourage their use.
"...and we win by putting our time, skills, and members’ support where they will have the most impact. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube"
So pretty much all major sites except X. They are saying LinkedIn is more important to reach people than X, really?
They're also still posting on LinkedIn, Instagram, TikTok, Facebook, and YouTube (in addition to BlueSky and Mastodon). It's silly to suggest that anything outside of X is an echo chamber, or that one must communicate on a platform dominated by white supremacists to expose your ideas to a diverse audience.
Worth the time? Can you not just use some automation or tool to post your stuff to multiple platforms including X?
I find it really hard to believe that, even with lower views on X than in the past, it's not worth the tiny amount of effort to get their messages posted there.
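A cross-poster really is a tiny amount of code. The sketch below uses stand-in `post_to_*` callables (everything here is hypothetical; each real platform has its own client library and auth), and collects failures rather than aborting, so one flaky API doesn't block the rest:

```python
def post_everywhere(text, posters):
    """Send the same message to each platform via its posting callable.

    Returns a dict of {platform_name: exception} for any that failed,
    so a single broken API doesn't prevent posting to the others.
    """
    failures = {}
    for name, post in posters.items():
        try:
            post(text)
        except Exception as exc:
            failures[name] = exc
    return failures

# Stand-ins for real platform clients (all hypothetical no-ops here).
posters = {
    "bluesky":  lambda text: None,  # e.g. an atproto client call
    "mastodon": lambda text: None,  # e.g. the Mastodon statuses API
    "x":        lambda text: None,  # e.g. the X posting API
}
failures = post_everywhere("New EFF article: ...", posters)
```

The hard part in practice isn't this loop, it's per-platform auth and formatting quirks, which is presumably why services like Buffer exist at all.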
>March 2025, Anthropic was claiming that 90% of code would be written by LLMs in three to six months, and "essentially all" code within twelve months.
There's a pretty big difference between "We predict in X time frame our model will be capable of Y" and "Our model did Y."
This is like watching someone measure the size of an object and saying "I don't believe you because you guessed it was X before you pulled out your tape measure."
>However, my guess is this is mostly the typical scare tactic marketing that Dario loves to push about the dangers of AI.
Evaluate it yourself. Look at the exploits it discovered and decide whether you want to feel concerned that a new model was able to do that. The data is right there.
Yeah, I'd be pretty pissed at my doctor for finding cancerous cells that probably wouldn't have been a problem for quite some time, either. Ignorance is bliss, security through obscurity, whatever.
You may joke, but this is a genuine issue in certain screening tests. E.g. most cancerous cells found in PSA prostate screening are so slow growing that they never cause any symptoms during a person's lifetime, so the treatment is almost always worse than the disease. It's similar for some sorts of thyroid and breast cancer tests. This is why a lot of countries are heavily reducing these sorts of tests.
The doctor analogy is more like you're grateful that your doctor found cancerous cells before they became a problem, but at the same time his other business is selling cigarettes.
There is still a remarkable amount of friction here in doing so. There should be a one click button for "don't show me notifications like this", which incentivizes apps to have appropriate granular notification settings.
And don't even get me started on how Samsung on certain models hid the notification categories behind a feature gate with a random OS update.
Is it a serious problem that you can run whatever software you want on your computer? Should we make it so that no one can do that without permission to protect them?
I recommend Cory Doctorow's talk on why this is a serious problem for society:
Yes, lots of vulnerable users get harmed by modern tech. E.g. people have lost their minds using AI, their livelihoods using smartphones, their life savings using the Internet. In general, I prefer a solution where any mental health issue (age-related infirmity, ADHD, etc.) results in protection from modern exploitative tech like this.
Every application use for such people should be supervised by a government official trained to ensure you are not hurting yourself.
This way people who want to use AI, smartphones, or the Internet can do so if they’re healthy and the mentally disabled can be protected. We know that this need exists because even on this “Hacker” News forum everyone gets very upset when a mentally disabled person gets injured after AI use.
Not enough people give a shit about "general purpose computing" to matter. They use computers for a few things and as long as they can do those things they're fine with it. My wife loves all her Apple gear. It provides her with a wonderful, curated experience. Okay, maybe it hasn't been so good with recent iOS releases but it still beats Android or Microslop. Being able to hack, modify, or install arbitrary stuff on your device is something only a minority of a minority care about, statistical noise in the quarterly sales figures. When you compare that to the harm done by malware, illegal or indecent material, and the negative blowback to YOUR OS's reputation—or worse, the "felony contempt of business model" enabled by a general-purpose OS (piracy, ad blocking, etc.)—it's a no-brainer to implement restrictions.
Could you try to put more of an effort into keeping the specifics in view and not turning the whole conversation into a view from 10,000 ft filled with drive-by generalities? You might as well be linking to a Wikipedia entry on 1984.
We have moved away from an existential threat to F-Droid to a speed bump which lets it live. As is often the case, both things can be true: I don't like the ratcheting up of restrictions, but it's possible, without contradiction, to note how the change over time has impacted F-Droid compared to prior iterations of the proposed policy.
It disappoints me that people on HN aren't sufficiently in control of their own attention to show up to that conversation, as the fate of F-Droid has been central to this saga if you've been following it over previous HN threads.
>This is a clear signal that generative video is deeply unpopular.
Or, it's a clear signal that AI video is too expensive as a consumer product and/or not quite yet at a quality bar that the average person finds acceptable.
I think someone could have looked at computer graphics and SFX circa the '80s and decided that they would always pale in comparison to practical effects. And yet..
It's an annoying trope, but this is the worst and most expensive (at this quality level) that these models will ever be.
Kirby Ferguson's video on this is pretty great: https://www.youtube.com/watch?v=X9RYuvPCQUA