FreeCAD is amazing these days. It has completely replaced my use of Autodesk Fusion 360 for woodworking projects. It is capable and the UI is understandable. Its feature depth is incredible.
FreeCAD is becoming like Blender and Inkscape - incredibly robust and capable and equivalent in most cases to the commercial alternatives.
I find the rendering side of things underdeveloped, though.
"Amazing" is however not the word I would use though, the UI is still very convoluted and very hard to learn.
The worst part in FreeCAD, and which remains true to this day is the load of minutia you need to know to handle/avoid weird corner cases that you inevitable run into when you start building complex models and where FreeCAD stubbornly refuses to let you carry on with your work.
When you paint yourself into one of these corners, the software is hugely unhelpful when it comes to understanding what you did wrong and how to correct it.
In short, the word "Amazing" only works if you compare it to the absolute abomination the UI was a few years back.
But compare FreeCAD today to, for example, how slick Fusion is, and there is still a very, very wide gap.
Finally, the geometry engine is a somewhat old and creaky thing that sometimes downright fails to compute fillets or surface/surface intersections correctly, so yeah, YMMV.
FreeCAD is, however, free software, and not controlled by one of the worst corporations in the world of software: Autodesk. So huge thumbs up there.
This is really accurate to my experience learning FreeCAD earlier this year. I am a former professional CAD user (of a lesser software than AutoCAD) and I don't think I would have gotten far without being able to ask ChatGPT for help understanding some of the quirks of FreeCAD.
For free and open it's truly impressive though. Actually I think my time building iOS UIs in Storyboard was at least as useful as previous CAD experience, since constraints are the foundation of (at least one approach to) designing parts.
The last Autodesk software I used was AutoCAD 2000 (released in 1999), and I've not followed them since.
Perhaps they have indeed become "one of the worst corporations in the world of software", but in the early years they were very interesting. The founder of Autodesk, John Walker (he died in 2024), wrote/edited an interesting book on the early years: "The Autodesk File" https://fourmilab.ch/autofile/
Statement of fact with my interpretation --- folks should verify the fact and read what he has written and come to their own conclusions.
While I'm grateful Autodesk stepped in and kept TinkerCAD afloat, I'm relieved Sketchbook escaped their clutches, and am glad I never got involved in Fusion 360, so I haven't suffered from their on-going "rug pulls" --- which of these are a result of his influence, I've not found a need to discern.
Yeah anything involving 2d art I confess I just send to Blender, even technical illustration, with the exception of O&D style sheets.
The fact anyone got a CAD kernel working in the browser is insane. Parsing the vagaries, vendor cruft, and gaping holes in STEP files has occupied a non-trivial amount of my career.
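For anyone who hasn't had the pleasure: a STEP file (ISO 10303-21) is a flat list of numbered entity instances that reference each other by id, and even a naive reader has to rebuild that graph before any geometry happens. A toy sketch in Python (the fragment is made up, and the regex deliberately ignores the multi-line records, strings containing ");", and vendor extensions that real exporters love to emit):

    import re

    # Made-up fragment of a STEP (ISO 10303-21) DATA section.
    step_text = """
    DATA;
    #10=CARTESIAN_POINT('',(0.,0.,0.));
    #11=DIRECTION('',(0.,0.,1.));
    #12=AXIS2_PLACEMENT_3D('',#10,#11,$);
    ENDSEC;
    """

    # Each instance is "#id=ENTITY_NAME(args);". Naive: breaks on
    # multi-line records and on ");" inside string literals.
    instance = re.compile(r"#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\)\s*;")

    entities = {int(i): (name, args) for i, name, args in instance.findall(step_text)}

    # Arguments reference other instances as "#n"; real code must resolve
    # this into a graph before it can build any geometry.
    for eid, (name, args) in entities.items():
        refs = [int(r) for r in re.findall(r"#(\d+)", args)]
        print(f"#{eid} {name} -> refs {refs}")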
- Is significant life-long usage of real-time mental spatial navigation protective?
- Are those who end up in these positions self-selected for better-than-average real-time mental spatial navigation, and does that above-average performance correlate with protection against Alzheimer's?
Anecdotal, but I've spoken with many taxi and ride-share drivers, and my impression is that their decision to seek out and continue that line of work is almost always driven by outside economic considerations. I've never heard someone base their decision on their ability to perform the job.
Exactly - I'm thinking the bad spatial navigators have a higher probability of washing out of driving and pursuing some other career. They may not say "I'm bad at figuring out where I am", but the economics of the job are just a little bit worse for these people.
Yes, I'm the author of the paper. It's received more than a tiny bit of peer review. I'm happy to answer any questions about it or answer anything that is unclear.
If you showed what our computers can do with the latest LLMs now to someone 5 years ago, they would probably say it sure looks a lot like AGI.
We have to keep defining AGI upwards or nitpick it to show that we haven't achieved it.
I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.
We don't have clear ASI yet, but we definitely are in an AGI era.
I think we are missing an ego/motivations in the AGI, and them having self-sufficiency independent of us, but that is just a bit of engineering that would actually make them more dangerous; it isn't really a significant scientific hurdle.
Ok, but it's not AGI. People five years ago would have been wrong. People who don't have all the information are often wrong about things.
ETA:
You updated your comment, which is fine but I wanted to reply to your points.
> I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.
I would actually argue that they are decidedly not smarter than even dumb humans right now. They're useful, but they are glorified text predictors. Yes, they have more individual facts memorized than the average person, but that's not the same thing; Wikipedia, even before LLMs, also had many more facts than the average person, but you wouldn't say that Wikipedia is "smarter" than a human, because that doesn't make sense.
Intelligence isn't just about memorizing facts, it's about reasoning. The recent Esolang benchmarks indicate that these LLMs are actually pretty bad at that.
> We don't have clear ASI yet, but we definitely are in an AGI era.
> Intelligence isn't just about memorizing facts, it's about reasoning.
Initially, LLMs were basically intuitive predictors, but with chain of thought and more recently agentic experimentation, we do have reasoning in our LLMs that is quite human-like.
That said, there is definitely a bias towards training-set material, but that is also the case with the large majority of humans.
For the Esolang benchmarks, I would be curious how much adding a SKILLS.md file for each language would boost performance?
I am pretty confident that we are in the AGI era. It is unsettling, and I think it gives people cognitive dissonance, so we want to deny it and nitpick it, etc.
> There is a long history of people arguing that intelligence is actually the ability to predict accurately.
That page describes a few recent CS people in AI arguing that intelligence is being able to predict accurately, which is like carpenters declaring all problems can be solved with a hammer.
AI "reasoning" is human-like in the sense that it is similar to how humans communicate reasoning, but that's not how humans mentally reason.
Like my father before me, I seem to have absorbed an ability to predict what comes next in movies and books. It's sometimes a fun parlor trick to annoy people who actually get genuine surprise out of these nearly deterministic plot twists. But, a bit like with LLMs, it is a superficial ability to follow the limited context that the writers' group is seemingly forced by contract to maintain.
Like my father before me, I've also gotten old enough to realize that some subset of people out there also behave like they are scripted by the same writers' group and production rules. I fear for the future where LLMs are on an equal footing because we choose to mimic them.
> Initially, LLMs were basically intuitive predictors, but with chain of thought and more recently agentic experimentation, we do have reasoning in our LLMs that is quite human-like.
If you handwave the details away, then sure, it's very human-like, though the reasoning models just kind of feed the dialog back to itself to get something more accurate. I use Claude Code like everyone else, and it will get stuck on the strangest details that humans actively wouldn't.
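For what it's worth, that "feed the dialog back to itself" loop is structurally simple. A toy sketch in Python, where query_model is a made-up stand-in (a real version would call whatever completion API you use):

    def query_model(transcript):
        # Made-up stand-in for an LLM completion call; a real version
        # would send the transcript to a provider and return its output.
        return "Next step: re-check the assumption from the previous step."

    def reason(question, steps=3):
        # The whole trick: append each output to the transcript and feed
        # the dialog back in, so later steps condition on earlier ones.
        transcript = f"Question: {question}\nLet's think step by step.\n"
        for _ in range(steps):
            transcript += query_model(transcript) + "\n"
        return transcript

    print(reason("Why does the build fail only in CI?"))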
> For the Esolang benchmarks, I would be curious how much adding a SKILLS.md file for each language would boost performance?
Tough to say since I haven't done it, though I suspect it wouldn't help much, since there's still basically no training data for advanced programs in these languages.
> I am pretty confident that we are in the AGI era. It is unsettling, and I think it gives people cognitive dissonance, so we want to deny it and nitpick it, etc.
Even if you're right about this being the AGI era, that doesn't mean that current models are AGI, at least not yet. It feels like you're actively trying to handwave away details.
> though the reasoning models just kind of feed the dialog back to itself to get something more accurate.
Much of our reasoning is based on stimulating our sensory organs, either via imagination (self-stimulation of our visual system) or via subvocalization (self-stimulation of our auditory system), etc.
> it will get stuck on the strangest details that humans actively wouldn't.
It isn't a human. It is AGI, not HGI.
> It feels like you're actively trying to handwave away details.
Personally, I've used LLMs to debug hard-to-track code issues and AWS issues among other things.
Regardless of whether that was done via next-token prediction or not, it definitely looked like AGI, or at least very close to it.
Is it infallible? Not by a long shot. I always have to double-check everything, but at least it gave me solid starting points to figure out said issues.
It would've probably taken me weeks to figure those out without LLMs, instead of the 1 or 2 hours it did.
In that context, I have a hard time imagining what a "real" AGI system would look like, if it's not the current one.
Not saying current LLMs are unequivocally AGI, but they are darn close for sure IMO.
Being able to actually reason about things without exabytes of training data would be one thing. Hell, even with exabytes of training data, doing actual reasoning for novel things that aren't just regurgitating things from GitHub would be cool.
Being able to learn new things would be another. LLMs don't learn; they're a pretrained model (it's in the name of GPT): you send in inputs and get an output. RAG is cool, but it's not really "learning"; it's just eating a bit more context in order to give a facsimile of learning.
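To make the "eating a bit more context" point concrete, here's a toy sketch in Python with bag-of-words overlap standing in for real embeddings. Nothing is learned; retrieval just prepends text to the prompt and the frozen model sees a longer input:

    from collections import Counter
    import math

    # Toy "knowledge base"; a real system would chunk documents and
    # embed them with a learned model instead of word counts.
    docs = [
        "FreeCAD uses the OpenCASCADE geometry kernel.",
        "Run-length encoding compresses repeated characters.",
        "The Quest 2 is a standalone VR headset.",
    ]

    def vec(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def rag_prompt(question):
        # "Retrieval": rank stored chunks against the question, then stuff
        # the best one into the context. The model's weights never change.
        best = max(docs, key=lambda d: cosine(vec(d), vec(question)))
        return f"Context: {best}\nQuestion: {question}\nAnswer:"

    print(rag_prompt("What geometry kernel does FreeCAD use?"))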
Going to the extreme of what you're saying, then `grep` would be "darn close to AGI". If I couldn't grep through logs, it might have taken me years to go through and find my errors or understand a problem.
I think they're very neat, but ultimately pretty straightforward input-output functions.
If we had AGI we wouldn't need to keep spending more and more money to train these models; they could just solve arbitrary problems through logic and deduction like any human. Instead, the only way to make them good at something is to encode millions of examples into text or find some other technique to tune them automatically (e.g. verifiable reward modeling with computer systems).
Why is it that LLMs could ace nearly every written test known to man, but need specialized training in order to do things like reliably type commands into a terminal or competently navigate a computer? A truly intelligent system should be able to 0-shot those types of tasks, or in the absolute worst case 1-shot them.
To add to this, previously one could argue that LLMs were on par with somewhat less intelligent humans and it was (at least I found) difficult to dispute. But now the frontier models can custom tailor explanations of technical subjects in the advanced undergraduate to graduate range. Simultaneously, I regularly catch them making what for a human of that level would be considered very odd errors in reasoning. When questioned about these inconsistencies they either display a hopeless lack of awareness or appear to attempt to deflect. They're also entirely incapable of learning from such an interaction. It feels like interacting with an empty vessel that presents an illusion of intelligence and produces genuinely useful output yet there's nothing behind the curtain so to speak.
> The recent Esolang benchmarks indicate that these LLMs are actually pretty bad at that.
I’m really not sure how well a typical human would do writing brainfuck. It’d take me a long time to write some pretty basic things in a bunch of those languages, and I’m an SE.
Yes, but you also wouldn't need a corpus of hundreds of thousands of projects to crib from. If it were truly able to "reason" then conceivably it could look at a language spec and learn how to express things in terms of Brainfuck.
They did for some problems. If you gave me five iterations at a problem like this in brainfuck:
> "Read a string S and produce its run-length encoding: for each maximal block of identical characters, output the character followed immediately by the length of the block as a decimal integer. Concatenate all blocks and output the resulting string.
I'd do absolutely awfully at it.
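For contrast, the spec is only a few lines in a conventional language. A straightforward Python version (my own sketch, not the benchmark's reference solution):

    def run_length_encode(s):
        # For each maximal block of identical characters, emit the
        # character followed by the block length as a decimal integer.
        out = []
        i = 0
        while i < len(s):
            j = i
            while j < len(s) and s[j] == s[i]:
                j += 1
            out.append(f"{s[i]}{j - i}")
            i = j
        return "".join(out)

    assert run_length_encode("aaabccd") == "a3b1c2d1"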
And to be clear, that's not "five runs from scratch repeatedly trying it"; it's five iterations, so at most five attempts at writing the solution and seeing the results.
I'd also note that when they can iterate, they get it right much more often than with "n zero-shot attempts", since they have feedback from the output. That doesn't seem to correlate well with a lack of reasoning to me.
Give them new frameworks or libraries and they can absolutely build things in them with some instructions or docs. So they're not just outputting previously seen things; the patterns they work from go deeper than literal words.
edit -
I play Clues by Sam, a logical reasoning puzzle. The solutions are unlikely to be available online, and in this benchmark the cutoff date for training seems to be before this puzzle launched at all:
My definition of AGI hasn't changed - it's something that can perform, or learn to perform, any intellectual task that a human can.
5 years ago we thought that language was the be-all and end-all of intelligence and treated it as the most impressive thing humans do. We were wrong. We now have these models that are very good at language, but still very bad at tasks that we wrongly considered prerequisites for language.
> My definition of AGI hasn't changed - it's something that can perform, or learn to perform, any intellectual task that a human can.
Wait, could you make your qualifiers specific here? Is your definition of AGI that it be able to perform/learn any intellectual task that is achievable by every human, or by any human?
Those are almost incomparably different standards. For the first, a nascent AGI would only need to perform a bit better than a "profound intellectual disability" level. For the second, AGI would need to be a real "Renaissance AGI," capable of advancing the frontiers of thought in every discipline, but at the same time every human would likely fail that bar.
Your true average human is someone like your barista at Starbucks. Try giving them a good math problem, or logic puzzle, or leetcode problem if you need some reminding of the standard reasoning capabilities of our species. LLMs cannot beat the best humans at practically anything, but average humans? Average humans are a much softer target than this thread seems to think.
Completely disagree. Inability to handle specific math or CS is a matter of training and experience, not reasoning and intelligence. The barista is quite capable of reasoning and learning feats the LLMs aren't close to.
Yeah, there appears to be this idea that "being smart" is the same thing as "knowing facts", which I don't think is realistic.
I know plenty of people who are considerably smarter than me, but don't know nearly as much as I do about computer science or obscure 90's video game trivia. Just because I know more facts than they do (at least in this very limited scope) doesn't mean that they're less capable of learning than I am.
As you said, a barista is very likely able to reason about and learn new things, which is not something an LLM can really do.
I think it would be fairly easy to prove or disprove that 'AI as it is today knows more about any subject than 99% of HN'. But knowledge alone does not translate into intelligence and that's the problem: we don't have a really hard definition of what intelligence really is. There are many reasons for that (such as that it would require us to reconsider some of our past actions), but the fact remains.
So until we really once and for all nail down what intelligence is, you get this god-of-the-gaps-like problem where every time we find something that looks and feels truly intelligent by yesterday's standards, that intelligence will be crammed into a slightly smaller space excluding the thing that just became possible.
The rate of change is a factor here. Arguably the current rate of change is very high compared to two decades ago, but compared to three years ago it feels as if we're already leveling off and we're more focused on tooling and infrastructure than on intelligence itself.
Intelligence may not actually have a proper definition at all, it seems to be an emergent phenomenon rather than something that you engineer for and there may well be many pathways to intelligence and many different kinds of intelligence.
What gets me about AI so far is that it can be amazing one minute and so incredibly stupid the next that it is cringeworthy. It gives me an idiot-savant kind of vibe rather than the feel of an actual intelligent party. If it were really intelligent, I would expect it to learn as much or more from the interaction, and to be able to have a conversation with one party where it learns something useful and then immediately apply that new bit of knowledge in all the other ones.
Humans don't need to be taught the same facts over and over again, though it may help with long term retention. We are able to reason about things based on very limited information and while we get stuff wrong - and frequently so - we usually also know quite precisely where the limits of our knowledge are, even if we don't always act like it.
To me it is one of those "I'll know it when I see it" things, and without insulting anybody, including the baristas at Starbucks, I think it is perfectly possible to have a discussion about this and to accept that average humans all have different skills and specialties, and that some people work at Starbucks because they want to and others because they have to; it does not say anything per se about their intelligence or lack thereof. At the same time you can be IQ 140 but still dumber than a Starbucks barista on what it takes to make someone feel comfortable and how to make coffee.
We seem to largely agree but I wanted to respond to this one bit:
> you get this god-of-the-gaps-like problem where every time we find something that looks and feels truly intelligent by yesterday's standards, that intelligence will be crammed into a slightly smaller space excluding the thing that just became possible.
It's important to distinguish between "AI" and "AGI" here. I haven't seen many objections that the frontier models of the past year or so don't qualify as AI (whatever that might or might not mean) and the ones I have seen don't seem to hold much water.
However there's a constant stream of bogus claims presenting some new feat as "AGI" upon which each time we collectively stop and revise our working definition to close the latest loophole for something that is very obviously not AGI. Thus IMO legal loophole is a more fitting description than god of the gaps.
I do think we're nearing human level in general and have already exceeded it in specific tightly constrained domains but I don't think that was ever the common understanding of AGI. Go watch 80s movies and they've got humanoid robots walking around doing freeform housework while chatting with the homeowner. Meanwhile transferring dirty laundry from a hamper to the drum remains a cutting edge research problem for us, let alone wielding kitchen knives or handling things on the stovetop.
And yet if you asked that barista if you should walk to the car wash or take your car there, they would never respond with "you should take a walk, it's healthier than driving" like almost every LLM did in a test I saw.
That is as basic as everyday reasoning gets, and any human in modern society solves hundreds of problems like that every day without even thinking about it, but with LLMs it's a dice roll. Testing them with leetcode problems or logic puzzles is not going to prove much unless you first made sure none of those were in the training data, to prevent pure memorization.
> If you showed what our computers can do with the latest LLMs now to someone 5 years ago, they would probably say it sure looks a lot like AGI.
Would they? Perhaps if you only showed them glossy demos that obscure all the ways in which LLMs fail catastrophically and are very obviously nowhere even close to AGI.
Certainly, they wouldn't expect that an AI able to score 150 on an IQ test is unable to play a casual game of chess because it isn't coherent enough to play without making illegal moves.
> Certainly, they wouldn't expect that an AI able to score 150 on an IQ test is unable to play a casual game of chess because it isn't coherent enough to play without making illegal moves.
To be fair, I am pretty sure Claude Code will download and run Stockfish if you task it to play chess with you. It's not like a human who read 100 books about chess, but never played, would be able to play well with their eyes closed and someone whispering board positions into their ear.
There are a lot of problems with this analogy, but even if you were to take a photo of the board after every move and send it to the model, it would still be unable to play competently.
Idk about the health story, but in my use, ChatGPT has dramatically improved my understanding of my health issues and given sound and careful advice.
The second question sounds like a useless and artificial metric to judge on. The average person might miss such a “gotcha” logical quiz too, for the same reason - because they expect to be asked “is it walking distance.”
No one has ever relied on anyone else’s judgment, nor an AI, to answer “should I bring my car to the carwash.” Same for the ol’ “how many rocks shall I eat?” that people got the AI Overview tricked with.
I’m not saying anything categorically “is AGI” but by relying on jokes like this you’re lying to yourself about what’s relevant.
I have been checking organic and inorganic chemistry skills in ChatGPT Pro and it is absolutely, laughably bad. It sounds good and plausible, but it is comically wrong in so many ways.
Maybe you should think twice about whether the health issues advice it is giving you is legitimate.
I would accuse you of nitpicking. My experience is that LLMs are generally as smart as the average human 90%+ of the time. A lack of perfection to me doesn't mean it isn't AGI.
>> My experience is that LLMs are generally as smart as the average human 90%+ of the time. A lack of perfection to me doesn't mean it isn't AGI.
In my experience, they contain more information than any human but they are actually quite stupid. Reasoning is not something they do well at all. But even if I skip that, they can not learn. Inference is separate from training, so they can not learn new things other than trying to work with words in a context window, and even then they will only be able to mimic rather than extrapolate anything new.
It's not the lack of perfect, it's the lack of reasoning and learning.
I 100% agree that learning is missing. We make up for it in SKILLS.md and README.md files and RAGs of various types. And we train the LLMs to deal with these structures.
I've seen a lot of reasoning in the latest models while engaging in agentic coding. It is often decent at debugging and experimentation, but around 30% of the time it goes down wrong paths and just adds unnecessary complexity via misdiagnoses.
It doesn't look anything like AGI and no one who knows what that means would be confused in any era.
Is it useful? Yes. Is it as smart as a person? Not even remotely. It can't even remember things it was already told 5 minutes ago. Sometimes even if they are still in the context window, uncompacted!
No, the big thing with AGI was that it was general. The AI things we made were extremely narrow: identifying things out of a set of classes, or route planning, or something similarly specific. We couldn't just hand the systems a new kind of task, often even extremely similar ones. We've been making superhuman-level narrow AI things for many years, but for a long time even extremely basic and restricted worlds were still beyond what more general systems could do.
If LLMs are your first foray into what AI means and you were used to the term ML for everything else I could see how you'd think that, but AI for decades has referred to even very simple systems.
If AGI doesn't mean human level, then what does? As you say, every application of A* is in some way "AI", so we had this idea of "AGI" for something "actually intelligent", but maybe I'm wrong and AGI never meant that. What term does mean that?
> If you showed what our computers can do with the latest LLMs now to someone 5 years ago, they would probably say it sure looks a lot like AGI.
But this is a CPU! It's not a GPU / TPU. Even if you think we've achieved AGI, this is not where the matrix multiplication magic happens. It's pure marketing hype.
I did AI back before it was cool and I think we have AGI. IMO the whole distinction was from extremely narrow AI to general intelligence. A classifier for engine failure can only do that - a route planner can only do that…
Now we have things I can ask a pretty arbitrary question and they can answer it. Translate, understand nuance (the multitude of ways of parsing sentences; getting sarcasm was an unsolved problem), write code, go and read and find answers elsewhere, use tools… these aren’t one-trick ponies.
There are finer points to this where the level of autonomy or learning over time may be important parts to you but to me it was the generality that was the important part. And I think we’re clearly there.
AGI doesn’t have to be human level, and it doesn’t have to be equal to experts in every field all at once.
An interesting perspective: general, absolutely, just nowhere near superhuman in all kinds of tasks. Not even close to human in many. But intelligent? No doubt, far beyond all not entirely unrealistic expectations.
But that seems almost like an unavoidable trade-off. Fiction about the old "AI means logic!" type of AI is full of thought experiments where the logic imposes a limitation and those fictional challenges appear to be just what the AI we have excels at.
> LLMs are actually smarter than the majority of humans right now
I consider myself a bit of a misanthrope but this makes me an optimist by comparison.
Even stupid people are waaaaaay smarter than any LLM.
The problem is the continued habit humans have of anthropomorphizing computers that spit out pretty words. It’s like Eliza only prettier. More useful for sure. Still just a computer.
I really feel like we have not encountered the same stupid people. Most stupid people I know respond to every question with some form of will-not-attempt. What's 74 times 2? Use a calculator! Should I drive or walk to the car wash? Not my problem! How many R's in strawberry? Who cares! They'll lose to the LLM 100%.
The cheapest Aliexpress calculator can multiply much bigger numbers than I can in my head, and it can do it instantly. Does that mean that the calculator is “smarter” than me?
I don't believe in a separation of mind and spirit. So I do think that, fundamentally, outside of a reliance on quantum effects in cognition (some have theorized this, but it isn't proven), its processes can be replicated in a fashion in computers. So I think that intelligence likely can be "just a computer" in theory, and I think we are in the era where this is now true.
I don't believe in "spirits" from the get go. I think it's certainly theoretically possible that we could mimic human thought with a computer (quantum or otherwise) but I do not think that the LLMs we have now are doing that. I'd say that what we have right now is "just a computer".
This doesn't mean they aren't useful, I like Claude a lot, but I don't buy that it's AGI.
A human can think logically with reason (not to say they are smart or smarter), but LLMs cannot. You can convince an LLM that anything is correct and it will believe you. You can't convince a human of just anything.
I can't argue that LLMs do not know an absolutely insane amount of information about everything. But you can't just say LLMs are smarter than most humans. We've already decided that smartness is not about how much data you know, but about thinking about that data with logical reasoning, including whether it may or may not be true.
I can run an LLM through absolutely incorrect data and tell it that data is 100% true. Then ask it questions about that data and get those incorrect results as answers. That's not easy to do with humans.
That just implies LLMs are suggestible. The same is true of children. As we get older and build a more complete world model in our heads, it's harder to get us to believe things which go against that model.
Tell a 5-yr old about Santa, and they will believe it sincerely. Do the same with a 30-year old immigrant who has never heard of Santa, and I suspect you'll have a harder time.
That's not because the 5-year old is dumber, but just because their life-experience ("training data") is much more limited.
Even so, trying to convince a modern LLM of something ridiculous is getting harder. I invite you to try telling ChatGPT or Gemini that the president died a week ago and was replaced by a body-double facsimile until January 2027, so that Vance can have a full term. I suspect you'll have significant difficulty.
> There's a plethora of people who convert to religion at an older age, and that seems far more far fetched than Santa.
Being in a religion doesn’t imply belief in deities; it only implies people want social connection. This is clearly visible in global religion statistics; there are countries where the majority of people identify as belonging to a religion, and at the same time only a small minority state they believe in a “God”. Norway is a decent example that I bumped into just yesterday. https://en.wikipedia.org/wiki/Religion_in_Norway
But I bet you'd have a significantly easier time converting a child rather than a 30/40/50-yr old to a religion.
My point is that LLMs are suggestible, perhaps more so than the average adult, but less so than a child, I suspect. I don't think suggestibility really solves the problem of whether something has AGI or not. To me, on the contrary, it seems like to be intelligent and adaptable you need to be able to modify your world model. How easily you are fooled is a function of how mature / data-rich your existing world model is.
The problem with definitions is that they are all wrong when you try to apply them outside mathematical models. Descriptive terms are more useful than normative ones when you are dealing with the real world. Their meaning naturally evolves when people understand the topic better.
General intelligence, as a description, covers many aspects of intelligence. I would say that the current AIs are almost but not quite generally intelligent. They still have severe deficiencies in learning and long-term memory. As a consequence, they tend to get worse rather than better with experience. To work around those deficiencies, people routinely discard the context and start over with a fresh instance.
This is a pretty significant hit to natural gas and will significantly cut Qatar's income while boosting other producers like the USA, Russia, Norway, Australia and Canada:
I think this is fundamentally misdiagnosing why Macs haven’t dominated. It is actually not about the monitor support but about:
- government and corporate bulk contracts (and this is usually a result of software only working on Windows)
- expensive (which puts off most home users and also corporate bulk buyers who cannot tell the difference)
- lack of high end game support
That is why it doesn’t have more market share.
You are thinking too much about minor technical issues.
> Interesting, cutting way back in the product they renamed the whole company for.
It was clearly the wrong bet. He pumped something like $100B into the endeavour (Meta Quest / VR / Horizons) and it is just slowly dying as we speak. He has to give up on it, although I am sure it will be called a "pivot" into AR glasses.
> He pumped something like $100B into the endeavour (Meta Quest / VR / Horizons) and it is just slowly dying as we speak.
Literally never met anyone who used or liked the Horizon thing, VRChat in comparison is more popular and doesn't feel like a soulless corporate husk: they also have quite the variety of worlds, from party games, to someone building a whole jet/chopper flight combat simcade world; ofc all of them are a bit jank, but lots of cool stuff and very expressive avatars.
Meta Quest, on the other hand, seems like a really good piece of tech - I still have my Quest 2 (because I'm broke as hell), but I enjoyed even that one, albeit maybe with a slightly more comfy head strap than the default one and the Virtual Desktop app cause their Link app doesn't support Intel Arc GPUs. The tracking is good, the experience of all sorts of stuff in VR is nice, games like H3VR or VTOL VR are great, as is Into The Radius VR! At the same time, I can see why it never saw super widespread adoption - tricky to develop for and also a somewhat limited audience.
Also the productivity situation just isn't there, closest I got to a good productivity setup (out of curiosity) was the Immersed app before they messed it all up by removing support for physical monitors - I could have my 4 physical monitors in VR surrounded by whatever I want and some virtual monitors and just lock in, it was kind of zen despite the technical limitations. It seems like people got promising tech in place... and then never really wrote good software to take advantage of it. Even Virtual Desktop has artificially enforced monitor limits in VR.
I hope VR tech continues to progress (especially lightweight headsets) no matter what happens to Meta.
Yeah, it was a bizarre decision. There isn't a clear ROI on games and that's what Horizon Worlds has been the whole time. There's no equation that says a 100M game automatically makes 100x more than a 1M game on average. If anything the equation is sub-linear. 100B just doesn't seem like the right size for a game investment.
It's supposed to be a Roblox competitor, which does print money, though probably not to the extent of how much they invested.
The problems are twofold:
People/kids don't want to put on a VR headset to play Roblox. I guess they're conceding this point by pivoting to mobile.
Meta is the opposite of cool. Real-name requirements, only humanoid avatars, super corpo branding, etc. really seriously hold them back from competing with VRChat or Roblox. This one is terminal; it'll never be fixable as long as Meta is at the helm.
There are some really good AR glasses for a couple of hundred dollars; I think they are going to end up really cheap, not the $100 billion investment that Facebook needs.
Tbf I don't think they ever intended to make back their investments via the goggles. As near as I can tell the thought process was basically: "Real estate + fashion + live entertainment + art + etc is X quadrillion dollars. We could make The Virtual World and capture all that value. It would be irrational not to invest $100B!" Basically Pascal's Investment.