Aside from the social risks and centralization of control, I think the rush to convert everything to LLMs puts quite a bit of our progress in computer science at risk should it prove to be successful. Over the last several decades we've built up more complex and precise notation and systems to allow for repeatability, reliability, composability, etc. And large companies are currently attempting to trash all of the field's progress on notation in favor of talking to the computer in English (a rich but highly error-prone language) and then having the computer do impenetrable transformations guided only by pretty-please prompts with the hope that some high percentage of the time the result will be usable.
It's about an ancient civilization so advanced they have tools and computers that can create art just from their intent. But they have grown so distant from the inner workings of these tools that they are incapable of modifying or repairing them and their survival is put into jeopardy because of it.
If we're on the topic of Star Trek, observe how the crew of the Enterprise-D talked with the ship's computer. LLMs are the leap in technology that, for the first time, makes this kind of DWIM interaction possible, where the computer can actually understand what you mean when you talk to it as you would to a person, in natural language, instead of in a restricted language.
Ironically, later shows dialed back the smartness of Starfleet computers - from what I heard, the TNG-level voice interface seemed too advanced to be believable. But it turns out the unbelievable became reality sooner than anyone expected.
For as substandard as most of the season 1 TNG episodes were, there were a lot of interesting ideas sprinkled here and there, like the one you mentioned. It's too bad that poor acting and the ridiculously small scale of the episode's conflict distracted from things like that. They probably should have abandoned the whole "we can't reproduce so let's kidnap children" thing in favor of contrasting that civilization's relationship to technology with Earth's. IMO, there's already a moral dilemma in whether a failing technological civilization should be taught how to reproduce by outsiders whose relationship with technology is successful.
I think the standard was set in the last few seasons of TNG and the entirety of DS9. The crew gelled better and the topics were weightier, yet human. The shift probably happened around the time Picard became Locutus, and then with the Cardassian war.
Shameless self-plug: I wrote a couple of short stories which explore the idea that society forgets how to do stuff after outsourcing its knowledge to machines.
The problem is the perverse incentive of market currents. It doesn't matter how you do it, just make the product work before the competition does (and cheaper). If anything crops up, you'll fix it later. AI promises just that, and will act as a feedback loop that erodes our understanding of the code. The result is an Idiocracy scenario where people are just too dumb to realize that the AI organizing society is not that intelligent.
Survival risk, maybe not, but we are certain to see massive die-off and a decades-long cold start given a perturbation maybe an order of magnitude larger than the COVID pandemic, when the system is already undergoing unprecedented stress from climate change, as appears likely in the coming century...
I don’t think any company is seriously considering that, having played with ChatGPT for a bit. Getting it to generate functional code requires so much guardrailing and validation of the output that you basically need the domain knowledge anyhow, as well as the ability to code, to even use AI well as it is. I don’t believe it will ever get to a state where it can just be trusted to write good code for a complicated purpose without a skilled human shepherding it all along the way. The problems we solve are too bespoke a lot of the time. It's just a problem of how these models are fundamentally constructed: train on data and project new data onto it. Problems arise when you understand that training sets are finite and contain only things that already exist.
No, it will be more subtle. The features you get in software products, as a user, will get AI-ified, mostly for cost reasons. What that means is that instead of building a proper feature that works correctly and handles all the edge cases, AI will be invoked to resolve the discrepancies and "make it work". It will mostly work, but will be less reliable and less understandable. The end result will be more shifting of the burden of errors from software companies to users.
Our underlying data should still be extremely principled and structured. But I think LLMs enable significantly better human computer interaction, where humans aren't interacting directly with the data in your preferred manner.
For example, I've been parsing sentiment across the internet about the app I run, and crunching it down into a very structured internal tracking database. I want to canvass as many places as possible. Using an LLM for that transformation has been stellar, and I'd casually say I'm at around 90% accuracy, which is more than sufficient for my planning & justification needs.
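To illustrate the shape of that pattern (unstructured text in, structured rows out), here's a minimal sketch. The keyword heuristic below is only a stand-in for the real LLM call, and the table schema, post examples, and `classify_sentiment` function are all invented for illustration; the point is that the fuzzy step produces a constrained record, and the database stays fully structured.

```python
import sqlite3

def classify_sentiment(text: str) -> dict:
    """Stand-in for an LLM call that returns a structured verdict.

    In practice you'd send a prompt like "Classify the sentiment of this
    post as positive/negative/neutral, reply with JSON" to your model API
    and parse its reply. The keyword check here just keeps this runnable.
    """
    lowered = text.lower()
    if any(w in lowered for w in ("love", "great", "stellar")):
        sentiment = "positive"
    elif any(w in lowered for w in ("hate", "broken", "crash")):
        sentiment = "negative"
    else:
        sentiment = "neutral"
    return {"source_text": text, "sentiment": sentiment}

# The structured side: a plain relational table. The LLM never touches
# the schema; it only fills in one constrained column.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mentions (source_text TEXT, sentiment TEXT)")

posts = [
    "I love this app, the new update is great",
    "It keeps crashing on my phone",
]
for post in posts:
    db.execute(
        "INSERT INTO mentions VALUES (:source_text, :sentiment)",
        classify_sentiment(post),
    )

counts = dict(
    db.execute("SELECT sentiment, COUNT(*) FROM mentions GROUP BY sentiment")
)
```

Even at ~90% accuracy on the classification step, the downstream queries stay exact, which is what makes the hybrid workable.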
In that sense, it should just be an additional tool in a toolbox, not a platform to bet the house on.
Having LLMs use programming languages/tools/libraries built for humans is the wrong path to fully leverage their potential. They are capable of so much, but trying to use them as a drop-in replacement for humans isn't the best use for them. Frankly, it's impressive it works at all, but I think we will find hard limits on the actual complexity and reliability of the systems they can build in this manner.
We still need intelligent, rigorous system design. LLMs have the strong potential to let system builders operate at a higher level of abstraction, but in my opinion having LLMs write Python or SQL is an evolutionary dead end.
My own work is directly focused on this, and on what IS the proper place of an LLM in a data processing or knowledge system.
I don't like these breathless articles that complain about the next evolution of technology like it sprang from nothingness yesterday. Computing science has been working on this technology for decades, and it's only now that it has achieved some level of usefulness. This is the pattern of all technologies.
It feels like the claim is that this was just developed by industry giants to mess with people.
> Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward.
The only way forward to what? What we have is just the continued evolution of hardware and software. Hardware keeps getting more powerful and software continues to get more powerful to match it. We are doing with that hardware exactly what we want to do: make things easy that were once difficult and make things possible that were once impossible.
I feel like you took an extremely uncharitable reading of the article and missed the point. It's not a manifesto against technological progress, it's a reminder to consider the downsides of these technologies as well as the benefits.
The main thrust of the article is that there will be both positive and negative consequences. There will be a big push by many players to extol the positives while suppressing information on the negatives to profit as much as they can. Then they can stop suppressing the negative consequences as they make their exit and leave everyone else to deal with them. This is such a standard playbook for business at this point it should not be a revelation to anyone.
Remember, you don't have to wait until Google develops a customer service AI that arbitrarily locks you out of your digital platforms because of a hallucination. You can in fact push back and request baseline levels of accountability!
Your comment provided more (and different) substance than that entire article did. Certainly your comment is less hyperbolic. Where you say "negative consequences" the article says "tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity"
> Remember, you don't have to wait until Google develops a customer service AI that arbitrarily locks you out of your digital platforms because of a hallucination. You can in fact push back and request baseline levels of accountability!
AI changes nothing here. People already get locked out of their accounts because of a bunch of if-statements. If anything, I think AI is merely exacerbating issues that already exist and would exist without it. Focusing on AI as the problem is missing the forest for the trees. Google being unaccountable to people relying on its services has nothing to do with AI. Misinformation and propaganda being easier and cheaper to spread than legitimate information is not just an AI problem. Perhaps many of the problems caused by people, if-statements, and AI can also be solved with AI. But for that we need to focus on solving those problems explicitly, and not on the technologies involved.
“The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability ‘...a sense that the future is just more of the present, ... that there are no alternatives, and therefore nothing really to be done.’ There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.”
Couldn't make it past the first paragraph with the credential and name dropping. I don't mind someone stating their background, but don't do it in off-putting corporate speak.
Students, open your textbooks to the Industrial Revolution and describe the immediate costs to society, the long term gains in productivity, and the driving factors towards continued industrialization.
And remember kids, be sure to remember what you learned in class!
"The most influential woman in SV" - cool. "The mother of the cloud", though? The so-called "cloud", however vaguely defined, had many parents, so call me nitpicky, but the definite article here is inappropriate.
The difference is that the forums were small, self-contained and didn't rule your digital life. They were (are) much more similar to how the "real world" operates - you may get excluded from a local fizzbuzz club after a disagreement, but you still have many other clubs to choose from. Now compare that to being banned by Google.
> Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite.
I wasn't expecting to be a Luddite at 36, but I am. I not only agree with the author, but, more generally, also really dislike the way that the future is developing.
I have zero interest in today's version of social media. The version that keeps people glued to their phone screens at all times, mindlessly viewing or scrolling through near-zero-substance content, giving constant hits of dopamine to break down self-control.
I have very conflicting feelings about _kids_ getting sucked into this, e.g. a parent handing their child a tablet at a restaurant to stop the crying.
I absolutely hate how abused phone notifications are and have most of them turned off. I'm actually uninstalling more apps in favor of their websites (while that is still possible), partly because of this, and partly because of the tracking these apps do that you can't easily turn off.
Voice assistants are buggy as hell and are annoying to use more often than not. I turned off Siri on everything except my Apple Watch.
Lastly, I have absolutely no use for LLMs.
I really like being able to write code myself, and learning how to write boilerplate in other languages. I like understanding everything that's going on and what incremental progress looks like.
I like searching for answers on search engines and reading through answers or (sometimes) documentation to find what I'm looking for.
It is blisteringly obvious that the goal of big LLM is to (a) get users to think less, criticize less (because why criticize if ChatGPT tells you the "right" answer every time), and consume more, and (b) to centralize wealth in companies that are big enough to scour the Internet, train their own LLMs on that data and generate answers with minimal latency.
I'm starting to wonder if all the AI stuff is being overhyped. "When all you have is a hammer, everything looks like a nail." There's a lot of cargo culting going on right now.
The genie is out of the bottle, though. Frankly, nobody is able to do anything about it, at an individual or even state level. If, as a government, you decide to limit AI (in terms of research, applications, grants, etc.), you will put your country at a disadvantage, because there will always be others who will not have such objections. The same is true at all lower levels.
People have this one track (or binary) mind about AI. There are more options.
We should cut out the middle man and start building a map of all likely programming symbols a stream of integers/bytes can create.
LLMs can hide data and source code behind weighted nodes. The easiest way to cut through that is map out the binary stream directly.
Sounds insanely complex, but it isn't really. It's safe to say the vast majority of code is boilerplate running in popular coding languages, with similar symbolic 'shapes'.
We should probably cut the head from the current abstraction stack (isa>assembly>C>C++) and start-over, interpreting blobs of binary straight up.
This site has way too many ads; is there a clean version somewhere else?
In the old days I had an automation to grab things, clean them up, and push them to my Kindle to read. This was at the time of RSS feeds, just after Google had killed Google Reader, and I already despised ads.
I am using an iPhone and it's just dreadful to use the internet on this thing due to the ridiculous amount of ads.
Firefox Reader mode works pretty well, but I don't know what the FF situation is like on iOS. I know you can download adblockers for Safari that may improve your experience.
iOS FF is pretty subpar. You can't even crudely disable all JS like in Safari. Though I still use it, as Brave crashes on launch half the time and refuses to open (1.3k tabs), and Safari also has a 500-tab limit.
You can still choose automation. The easier route for me is to use Wallabag to save the article. Then on my reMarkable tablet I can grab a very readable document with https://github.com/koreader/koreader.
One other option is to use https://github.com/danburzo/percollate to convert a webpage to a nice document directly. I use both tools depending on my needs.
I came back to this thread expecting to see at least one reply in a let-me-LLM-that-for-you style to task an LLM with reader-mode cleanup.
Is the lack of such posts due to newfound restraint? Or is this kind of task already too complex for an LLM? Could one manage to work from raw page sources and figure out what text to summarize to the kindle reader? Or would there need to be other browser-simulating layers to first do all the page rendering and scraping before handing off text to the LLM for further cleanup?
> I am using an iPhone and it's just dreadful to use the internet on this thing due to the ridiculous amount of ads.
I'm not seeing any as I read this on iPhone. If you want a similar experience there are many good solutions, but my "quiet web" extensions are 1Blocker, Hush, Rekt, and Vinegar.
Turns out I already had that and it didn't do anything. So here's what happened: 95% of the time I'm on my phone I'm on GitHub, but Safari nags me to use the app instead, and Chrome doesn't, so I switched to Chrome. But it has no extension functionality (probably purposefully), so it gets all the ads. Seriously, the iPhone experience on the web is dreadful compared to Android.
There are no browser extensions for Chrome on Android either, and Google shoves ads into the entire OS experience if you accidentally left-swipe without going in and explicitly configuring stuff. Android is an ad-infested shit show compared to iOS right out of the box. If you're willing and able to fix that on Android, then you're able to head over to Settings -> Safari, toggle off Safari Suggestions, and turn off whichever app install nags you consider annoying.
Or install AdGuard as a local VPN on your iPhone and you're set; or set your DNS to an ad-blocking DNS, and even Chrome for iOS will get its ads blocked. This is a you problem; stop whining.
Disabling JS was the only way to make it bearable for me. Breaks some functionality here and there, but for general “browsing” the trade-off is worth it, e.g. zero ads, popups, paywalls.
The argument being made here is already understood everywhere and addressed almost nowhere: specialization, localization, and globalization, reliant on various preconditions, have created an incomprehensible web of dependencies which regularly, belatedly, turn out to be points of failure that cause cascading disequilibrium.
Maintaining a tribal-level small community, and some loose federation across a set of those communities? 20K years of adaptable stability.
Empire-level governance on the scale of contemporary empires (the US and China specifically) is proving almost impossible to maintain; the stressors from contemporary technologies regularly outmatch their benefits wrt operational stability. We can't solve federation for a Twitter clone; why do we think we can solve it at the nation state level? Every indication is that this is proving the US was not capable of scaling 10x or beyond without a refactor.
Modernity rests on an evolved rather than designed chaos of systems, few of which are load-bearing within their originally conceived limits, and those which were so conceived are themselves single points of failure (again: Twitter. But also Amazon, SpaceX, Walmart, UPS...)