Another funny bit of history with the Wii: Windows Vista released the same month in North America. People were upset because the minimum requirement for Vista said 512 MB (which was already more than the average home PC of the time had without an upgrade), but it ran like crap unless you had more.
We truly had to make do with less back then. These days it feels like there is a bit more headroom: 8 GB is on the downtrend, 16 GB is becoming the most common, and the user's apps are enjoying the extra fat.
> The example you point out is the advanced case, someone only needs in a very specific case
This is exactly how C++ landed where it is now. Every time it's "you only need to know that syntax if...", it ends up that everyone has to know that syntax, because someone will use it, and if you're a responsible programmer you'll end up reading a lot of code written by other people.
I would've loved an F# that found a way to improve on the performance issues, especially when using computation expressions. That, and either a deeper integration with .NET's native OOP subtyping or some form of OCaml-like module system, would have been enough to make it an almost perfect language for my tastes.
Obviously, these are big, and maybe impossible, issues. But Microsoft as a whole never really dedicated enough resources to find out. I feel for the people still working on it; their work is definitely appreciated :)
My knowledge on functional languages is limited, but as I understand it, it’s possible to formulate expressions that are basically NP problems?
And hence impossible to speed up?
So is it a F# issue or inherent to functional programming?
AFAIK it was a much more down-to-earth thing. The implementation of computation expressions in F# compiled down to lots of function objects that were not very GC-friendly. Or something like that. To be honest, I never looked that deeply at it :)
Unions are almost a net positive for C#, in my opinion.
But I do agree: C# is heading to a weird place. At first glance C# looks like a very explicit language, but then you have all the hidden magical tricks: you can't even tell whether a (x) => x will be a Func or an Expression[0], or whether a $"{x}"[1] will actually be evaluated, without looking at the callee's signature.
That Mario has to write this many words to justify this decision is a red flag already. The fact that I can't understand what Earendil is after reading this many words is an even bigger red flag.
But who am I to judge? If I made a project as popular as pi I'd sell out so fast.
There is variation in how comfortable someone is with the idea of "selling out". It seems to me that Mario is less comfortable than many other people in the AI space. How comfortable someone is with the idea of selling out doesn't really seem like a good signal for how bad something is.
If Mario had instead just written "I'm excited to work on the future of PI at Palantir." and nothing else, wouldn't that clearly be worse?
> The fact I can't understand what Earendil is
From the article:
> I learned a few things. My European brain thinks pi is just another small, mildly useful OSS project of mine with no commercial value. My peers in the space seem to think it has properties that make it stand out over the alternatives. VCs and big corps seem to think that pi has commercial value. Some demonstrated their conviction by sending term sheets or "dream job" offers.
Mario thinks there's no commercial value, but clearly some VC people do. If you have the VC money but don't have any idea how to justify commercial viability, why bother? If someone just handed me a big bag of cash and told me to just "do something", I could imagine coming up with Earendil: make a nice landing page, write some nice guiding principles, and figure out the AI stuff later.
> "I'm excited to work on the future of PI at Palantir." and nothing else, clearly this is worse?
Yeah, but mainly just because of what Palantir is.
It would be better if he just told us what part of the future pi tool chain would be proprietary, but from the article it seems that they haven't even decided on that, so I don't know what to make of this.
> The fact I can't understand what Earendil is after reading this many words is an even bigger red flag.
Same, but I heard about Armin Ronacher, and the things I heard were not exactly terrible. He's the creator of the Flask microframework, he's contributed to open source more than most people. Perhaps it'll be fine!
And, as Mario has written about ten times in his post: Pi is MIT licensed and it's not hard to find the "fork" button on GitHub.
> When you hear the first guitar note, stop the song, find the note on the guitar, and write it down
How do you do that with chords? I know everyone who isn't completely tone deaf can do that with a single note. But when it comes to chords, unless you already know some music theory, aren't there a huge number of combinations you have to try before you find the correct one?
I'm still an amateur and have always had problems with this, so I got some advice from an experienced jazz pianist I know. His suggestions were roughly to:
1. Start by transcribing the root note that you hear
2. Go back and see if each chord sounds major or minor - most of the time that will give you a major/minor third
3. Go back and play the 5th, and see if it sounds right - most of the time it'll be there
4. Don't worry about 7ths/9ths yet, they'll come, but give them a go
5. Once you've got a few chords, try and figure out the key, and that will help figure out others
So he was basically suggesting transcribing each note by its function in the chord, from most to least important. It still needs some music theory, of course, but it doesn't require you to hear an entire complex chord at once.
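The steps above can be sketched as a tiny search: represent notes as pitch classes (C = 0 ... B = 11), build candidate triads from a root plus a major or minor third plus a fifth, and keep the candidates that contain the notes you think you hear. This is just a hypothetical illustration of the idea, not anything from the post; the helper names are made up for the example.

```python
# Toy "root, then third, then fifth" chord guesser.
# Pitch classes: C=0, C#=1, ..., B=11.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def triad(root, quality):
    """Pitch classes of a plain triad built on `root` (0-11)."""
    third = 3 if quality == "minor" else 4  # minor third = 3 semitones, major = 4
    return {root % 12, (root + third) % 12, (root + 7) % 12}  # perfect fifth = 7

def guess_chord(heard):
    """Return (root name, quality) pairs whose triad contains every heard note."""
    heard = set(heard)
    return [(NOTE_NAMES[r], q)
            for r in range(12)
            for q in ("major", "minor")
            if heard <= triad(r, q)]

# You hear C, E, and G: among plain triads only C major fits.
print(guess_chord([0, 4, 7]))  # [('C', 'major')]
```

With only a root and third heard, several candidates survive, which mirrors step 3 of the advice: playing the fifth (or checking the key) is what narrows it down further.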
> How do you do that with chords? I know everyone who isn't completely tone deaf can do that with one single note. But when it comes to chords, unless you already know some music theory, aren't there infinite number of combinations you have to try before you find the correct one?
Each interval has a unique "flavor" and once you can hear them you should be able to hear multiple intervals at the same time, which effectively identifies the chord. (Admittedly for complex jazz chords it can get very difficult and you probably need more powerful tools, I can't say.)
Theory will make it a million times easier. Figure out the key and changes and you'll have likely chords and if you can do substitutions you'll have some alternatives.
Even if they're not exactly what was played, you'll be able to get to a working version with the right idea.
In any case, theory and experience will narrow the field down a great deal so you're not just stabbing at things in the dark.
Well, the guitar has a finite number of strings, and each string is partitioned into a finite number of frets. It's definitely not more than, say, 30^6 = 729 million.
That said, common chords are A, B, C, D, E, F, G (and their sharps and flats), combined with either major or minor mode. Hence "C, G, F, Am, Em" is an example of what someone could play. Now, of course, if it doesn't sound exactly like a G, perhaps it's a G7? After some practice, you can even hear, by the sound of the strings, exactly which chord it is. Em, G, and D are particularly simple to recognize.
It wouldn't hurt to learn the "cowboy chords" and then the "barre chords" before (or in parallel with) doing the transcriptions. Anyway, you should start with easy songs that mostly just use those, until they start to feel easy.
> infinite number of combinations you have to try before you find the correct one?
Kinda, but on guitar, most pop songs are major/minor, possibly with sevenths. I think this post is aimed at someone who can read tab but isn't "good" (whatever that means), so they should have an understanding of basic chord shapes.
The post does imply that this only really works if you can comfortably read tab, which is probably six months to two years of part-time work.
You don't have to get it right. If you know the basic guitar chords in the open positions, you can sort of play along to the vast majority of popular songs. As your hearing, knowledge of the neck, and maybe music theory improves you will start to recognise more things.
The point is not a perfect outcome. The point is the effort.
In some genres there are an infinite number. Most of the music regular people listen to is diatonic though and uses either power chords or triads, and then there are not that many options.
They were more than right. They were correct in an intentional, precise manner. This is what OpenAI actually stated[0]:
> These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns.
> The public at large will need to become more sceptical of text they find online, just as the "deep fakes" phenomenon calls for more scepticism about images.
Yeah, I find it a bit odd how at the time everyone was pointing and laughing at OpenAI for being obviously wrong about this. Now in 2026, AI slop is very obviously a serious problem - it inundates all platforms and obscures the truth. And people are still saying OpenAI in 2019 were wrong?
I think people today are more focused on how OpenAI released a model "too dangerous to release", not that they were right or wrong, as part of the general trend of criticizing OpenAI for not following any of its stated principles.
Exactly. The (real) issues were ultimately disregarded even if they were correctly identified.
My assumption is that it was too expensive to actually release at the time. It wasn't good enough for anybody to pay to use it yet, and it surely was very expensive to run, especially for a (fake, granted) non profit.
I think it's important to consider that OpenAI's qualms weren't with making the dangerous models usable, they were with making the model usable without paying them. They're perfectly fine with any harm, as long as they get money out of it and can't be held liable.
It's the same with the Mythos stuff, I appreciate their concern/work on safety, but if it's "too dangerous", it should be unavailable until it is less dangerous.
Both crowds are right because two messages were spread. The researchers spread reasonable fears and concerns. The marketing charlatans like Altman oversold the scare as "Terminator in T-4 days" to imply greater capacity in those systems than was reasonably there.
The problem is that the most publicly disseminated messaging around the topic was the fearmongering "it's god in a box" style. Can't argue with the billions in funding heisted via pyramid scheme for the current GPU bonfire, but people are right to ridicule, while also right to point out that the warnings were reasonable. Both are true; it depends on which face of "OpenAI" we're talking about: the researchers or the marketing chuds.
Ultimately AGI isn't something anyone with serious skill/experience in the field expects of a transformer architecture, even if scaled to a planet sized system. It is an architecture which simply lacks the required inductive bias. Anyone who claims otherwise is a liar or a charlatan.
TIL Wii has only 88MB of RAM. Fortunately games weren't electron-based.