
This is "just" about providing the official Chrome binary to ARM64 "desktop" Linux.

You've been able to build and run Chromium on ARM Linux for a long time (I'm running it right now), it's just that they haven't provided an officially branded Chrome.

This is a good thing. While Chromium works well, there are a few things (like syncing) that are a bit of a pain to set up.


I did something similar a few years ago. I put together a Pimoroni Interstate 75 (which is an RP Pico with an integrated LED matrix connector), a 32x64 matrix, and a NES controller port, designed a simple case for it, and made a Tetris clone. It was a fun project and the first time I had really done anything with hardware.

I've been meaning to do a write up of the project, but I keep putting it off. I wrote the software bits in C++. To speed up iteration (i.e. not have to deploy to real hardware for every tweak to the game code), I made a small web harness that ran the core logic as wasm.
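Roughly, the harness side boiled down to something like this (a simplified sketch, not the actual project code; the function names and framebuffer layout are made up for illustration):

    // Game core compiled to wasm with Emscripten; a small JS/canvas harness
    // calls these entry points instead of the real LED matrix driver.
    #include <cstdint>
    #include <emscripten/emscripten.h>

    static uint8_t framebuffer[32 * 64 * 3];  // RGB per LED of the 32x64 matrix

    extern "C" {

    // Advance the Tetris core one frame; `buttons` is the packed NES pad state.
    EMSCRIPTEN_KEEPALIVE
    void game_tick(uint32_t buttons) {
      (void)buttons;
      // ... core game logic updates `framebuffer` ...
    }

    // The web harness reads this out of wasm memory and draws it to a <canvas>.
    EMSCRIPTEN_KEEPALIVE
    const uint8_t* get_framebuffer() { return framebuffer; }

    }  // extern "C"

The same core source then links against the real matrix driver when building for the Pico.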

https://imgur.com/a/tetris-HoXenDg


Looks really smooth! Would love to see that write up, or even the source if you would be willing to share it.


FWIW, and errors aside, I think I agree with the general sentiment. Things added to the C++ standard library ossify. The ABI concerns, and the general unwillingness to do anything about them, are a big reason why Google largely exited the C++ standard business.

But the conspiracy-brained part of me can't help but think that part of this is sour grapes. Vinnie contributed a lot to the failed proposal to add networking (loosely based on ASIO) to the C++ standard. That proposal eventually lost out to the sender/receiver library[0], which is getting added in C++26. That still doesn't have actual networking, but lays the groundwork.

It remains to be seen how well sender/receiver turns out. Given ranges (another Niebler addition), I'm not super optimistic.

[0] https://en.cppreference.com/w/cpp/experimental/execution.htm...


Is there an equivalent to Godwin's law wrt threads about Google and Google Reader?

See also: any programming thread and Rust.


I'm convinced my last groan will be reading a thread about Google paper clipping the world, and someone will be moaning about Google Reader.


“A more elegant weapon of a civilised age.”


Lol, it seems obvious in retrospect: there really, really needs to be.

Therefore we now have “Vinkel’s Law”


It's far from the only example: https://killedbygoogle.com/


I recently had my Framework Desktop delivered. I didn't plan on using it for gaming, but I figured I should at least try. My experience thus far:

    * I installed Fedora 43 and it (totally unsurprisingly) worked great.
    * I installed Steam from Fedora's software app, and that worked great as well.
    * I installed Cyberpunk 2077 from Steam, and it just... worked.
Big thanks to Valve for making this as smooth as it was. I was able to go from no operating system to Cyberpunk running with zero terminals open or configs tweaked.

I later got a hankering to play Deus Ex: Mankind Divided. This time, the game would not work and Steam wasn't really forthcoming with showing logs. I figured out how to see the logs, and then did what you do these days - I showed the logs to an AI. The problem, slightly ironically, with MD is that it has a Linux build and Steam was trying to run that thing by default. The Linux build (totally unsurprisingly) had all kinds of version issues with libraries. The resolution there was just to tell Steam to run the Windows build instead and that worked great.


> I later got a hankering to play Deus Ex: Mankind Divided. This time, the game would not work and Steam wasn't really forthcoming with showing logs. I figured out how to see the logs, and then did what you do these days - I showed the logs to an AI. The problem, slightly ironically, with MD is that it has a Linux build and Steam was trying to run that thing by default. The Linux build (totally unsurprisingly) had all kinds of version issues with libraries. The resolution there was just to tell Steam to run the Windows build instead and that worked great.

I've heard it said in jest, but the most stable API in Linux is Win32. Running something via Wine means Wine is doing the plumbing to take a Windows app and pass it through to the right libraries.

I also wonder if it's long-term sustainable. Microsoft can do hostile things or develop their API in ways Valve/Proton neither need nor want, forcing them to spend dev time keeping up.


MS _can_ do that, but only with new APIs (or break backwards compatibility). Wine only needs to keep up once folks actually _use_ the new stuff… which generally requires that it be useful.


Or MS does deals with developers causing them to use the new APIs. I still haven't forgotten when they killed off the Linux version of Unreal Tournament 3. Don't for a second forget they are assholes.


Plus, if it does happen, folks need to learn a bunch of new hostile stuff. Given how Linux is taking off, why not just move to treating Linux as the first-class platform?


> why not just move to treating linux as the first class platform

This is where the argument goes back to Win32 being the most stable API in Linux land. There's no such thing as the Linux API, so that would have to be invented first. Try running an application that was built for Ubuntu 16.04 LTS on Ubuntu 24.04 LTS. Good luck with that. Don't get me wrong, I primarily use and love Linux, but reality is quite complicated.


Stability has a critical mass. When something is relied on by a small group of agile nerds, we tend not to worry about how fast we move or what is broken in the process. Once we have large organisations relying on a thing, we get LTS versions of OSes, etc.

The exact same is true here. If large enough volumes of folks start using these projects and contribute to them in a meaningful way, then we end up with less noisy updates as things continue to receive input from a large portion of the population and updates begin more closely resembling some sort of moving average rather than a high variance process around that moving average. If not less noisy updates, then at least some fork that may be many commits behind but at least when it does update things in a breaking way, it comes with a version change and plenty of warning.


Sounds similar to macOS.


macOS apps are primarily bound to versioned toolchains and SDKs, not to the OS version. If you are not using newer features, your app will run just fine. Any compatibility breaks are published.


> Try running an application that was built for Ubuntu 16.04 LTS on Ubuntu 24.04 LTS. Good luck with that.

Yea, this is a really bad state of affairs for software distribution, and I feel like Linux has always been like this. The culture of always having source for everything perhaps contributes to the mess: "Oh the user can just recompile" attitude.


I'd love to see a world where game devs program to a subset of Win32 that's known to run great on Linux and Windows. Then MSFT can be as hostile as they like, but no one will use it if it means abandoning the (in my fantasy) 10% of Linux gamers.


That's basically already happening with Unity and Unreal's domination of the game engines. They seem to dominate 80% of new titles and 60% of sales on Steam [1], so WINE/Valve can just focus on them. Most incompatible titles I come across are rolling their own engine.

[1] PDF: https://app.sensortower.com/vgi/assets/reports/The_Big_Game_...


Same with Godot. I'm writing a desktop app, and I get cross-platform support out of the box. I don't even have to recompile or write platform-specific code, and it doesn't even need Win32 APIs.


One aspect I wonder about is the move in graphics APIs from DX11 (or OpenGL) to DX12/Vulkan. While there have been benefits, and it's where the majority of vendor effort goes, the newer APIs are (were?) notoriously harder to use. What strikes me about gaming is how broad it is, and how many could make a competent engine at a lower tech level that fits their needs well because their requirements are more modest.

I also wonder about developer availability. If you're capable of handling the more advanced APIs, and probably modern hardware and its features, it seems likely you're going to aim at a big studio producing that kind of big experience, or even a job as an engineer at the big engine makers themselves. If you're using the less demanding tech, it will be more approachable for a wider range of developers and manageable in-house.


I believe it's already happening to a minor degree. There is value in getting that "steam deck certified" badge on your store, so devs will tweak their game to get it, if it isn't a big lift.


I can see that number increasing soon with the Steam Deck and Steam Machine (and clones/home builds). Even the VR headset, although niche, is Linux.

The support in this space from Valve has been amazing; I can almost forgive them for not releasing Half-Life 3. Almost.


There are strong indications that Half-Life 3 (or at least a Half-Life game) is coming soon. Of course, Valve might decide to can the project, but I wouldn't be surprised to see an announcement for 2026.


> Microsoft can do hostile things or develop their API in ways Valve/Proton neither need nor want, forcing them to spend dev time keeping up.

If they decide to do this in the gaming market, they don't need to mess up their API. They can just release a Windows native anti-cheat-anti-piracy feature.


> They can just release a Windows native anti-cheat-anti-piracy feature.

Unless it's a competitive game and it's a significant improvement on current anticheat systems, I don't see why game developers would implement it. It's only going to reduce access to an already growing non-Windows player base, just to appease Microsoft?

Also, in order to resist circumvention, wouldn't a Windows-native version have to be extremely invasive and a security risk? To be mostly effective it would need to sit right down at ring 0... just to spite people playing games outside of Windows?


Existing anticheat software on Windows already runs in ring 0, and one of the reasons that competitive games often won't work on Linux is precisely that Wine can't emulate that. Some anticheat vendors offer a Linux version, but those generally run in userspace and are therefore easier for cheaters to circumvent, which is why game developers will often choose to not allow players running the Linux version to connect to official matchmaking. In other words, for the target market of developers of competitive games, nothing would really get any worse if there was an official Microsoft solution.

On the other hand, using an official Microsoft anticheat that's bundled in Windows might not be seen as "installing a rootkit" by more privacy-conscious gamers, therefore improving PR for companies who choose to do it.

In other words, Microsoft would steamroll this market if they chose to enter it.


Also Microsoft closing the kernel to non-MS/non-driver Ring 0 software is inevitable after Crowdstrike, but they can't do that until they have a solution for how anti-cheat (and other system integrity checkers) is going to work. So something like this is inevitable, and I'm very sure there is a team at Microsoft working on it right now.


> just to spite people playing games outside of Windows?

These things are always sold as general security improvements even when they have an intentional anti-competitive angle. I don't know if MS sees that much value in the PC gaming market these days but if they see value in locking it all down and think they have a shot to pull it off, they'll at least try it.

In theory a built-in anti-cheat framework could have a chance at being more effective and less intrusive than the countless crap each individual game might shove down your throat. Who knows how it would look in practice.


>I've heard it said in jest, but the most stable API in Linux is Win32.

Sometimes the API stability front causes people to wonder if things would be better if FreeBSD had won the first free OS war in the 90s. But I think there's a compromise that is being overlooked: maybe Linux devs can implement a stable API layer as a compatibility layer for FreeBSD applications.


Steam has versioned Steam Linux Runtimes in an attempt to address that. Currently it's only leveraged by Proton, but maybe it will help in the future.

It's like a Flatpak: stabilized libraries based on Debian.


Hear me out:

Containers. Or even just go full VM.

AFAIK we have all the pieces to make those approaches work _just fine_ - GPU virtualization, ways to dynamically share memory etc.

It's a bit nuts, sure, and a bit wasteful - but it'd let you have a predictable binary environment for basically forever, as well as a fairly well defined "interface" layer between the actual hardware and the machine context. You could even accommodate shenanigans such as Aurora 4X's demand to have a specific decimal separator.

We could even achieve a degree of middle ground with the kernel anti-cheat secure boot crowd: running a minimal (and thus easy to independently audit) VM host at boot. I'd still kinda hate it, but less than having actual rootkits in the "main" kernel. It would still need some sort of "confirmation of non-tampering" from the compositor, but it _should_ be possible, especially if the companies wanting that sort of stuff were willing to foot the bill (haha). And, on top of that, a VM would make it less likely for vulnerabilities of the anti-cheat to spread into the OS I care about (à la the Dark Souls exploit).

So kinda like Flatpak, I guess, but more.


Check out the Steam Linux Runtime. You can develop games to run natively on Linux in a container already.

Running the anti-cheat in a VM completely defeats the point. That's actually what cheaters would prefer because they can manipulate the VM from the host without the anti-cheat detecting it.


There is no "real" GPU virtualization available for regular consumer, as both AMD and NVIDIA are gatekeeping it for their server oriented gpus. This is the same story with Intel gatekeeping ECC ram for decades.

Even if you run games in a container, you still need to expose the DRM char/block device if you want Vulkan/OpenGL to actually work.

https://en.wikipedia.org/wiki/GPU_virtualization#mediated


I try to get as many (mostly older, 2D) Windows games as possible to run in QEMU (VirtualBox in the past). Not many work, but those that do just keep working and I expect they will just always work ("always" relative to my lifetime).

WINE and Proton seem to always require hand-holding, and leak dependencies on things installed on the host OS as well as on actual hardware. I've used them for decades and they're great, but I can never just relax and know that because a game runs now it will always run, like is possible with a full VM (or with DOSBox, for older games).


GPU sharing for consumers is available only as full passthrough, not actual sharing. You have to detach the GPU from the host.


MS has supported GPU virtualization for years in Hyper-V with their GPU-PV implementation. Normally it gets used automatically by Windows for GPU acceleration in Windows Sandbox and WSL2, but it can also be enabled for VMs via PowerShell.


12th-gen and later Intel iGPUs can do SR-IOV.


>I also wonder if it's long-term sustainable. Microsoft can do hostile things or develop their API in ways Valve/Proton neither need nor want, forcing them to spend dev time keeping up.

Not while they continue to have the Xbox division and aspire to be the world's biggest publisher.


Yeah that's been my experience with native Linux builds too. Most of them were created before Proton etc got good, and haven't necessarily been maintained, whereas running the Windows version through Proton generally just works.

Unfortunately it seems supporting Linux natively is a pretty quickly moving target, especially when GPUs etc. are changing all the time. A lot of compatibility-munging work goes on behind the scenes on the Windows side from MS and driver developers (plus MS prioritizing backwards compat for software pretty heavily), and the same sort of effort now has a single target in Proton.

It's less elegant perhaps than actual native Linux builds, but probably much more likely to work consistently.


Sometimes you see developers posting on /r/linux_gaming, and generally the consensus from the community is mostly "just make sure proton works", which is pretty telling.

It's sort of a philosophical bummer as an old head to see that native compatibility, or maybe more accurately, native mindshare, being discarded even by a relatively evangelical crowd but,

- as a Linux Gamer, I totally get it - proton versions just work, linux versions probably did work at some point, on some machines.

- as a Developer, I totally get it - target windows cause that's 97% of your installs, target proton cause that's the rest of your market and you can probably target proton "for free". Focus on making a great game not glibc issues.

I mostly worry about what happens when Gabe retires and Valve pivots to the long squeeze. Don't think proton fits in that world view, but I also don't know how much work Proton needs in the future vs the initial hill climb and proof-of-success. I guess we'll get DX13 at some point, but maybe I'll just retire from new games and just keep playing Factorio until I die (which, incidentally does have a fantastic native version, but Wube is an extreme outlier.)


1. I think targeting compatibility is 99% as good as targeting native.

2. You’re discarding the shifting software landscape. Steam OS and Linux are trending towards higher PC gaming market share. macOS has proven you don’t need much market share to force widespread (but not universal) compatibility.

3. I don’t see the value in a purist attitude around Linux gaming. The whole point of video games is entertainment. I’m much less concerned with whether my video game is directly calling open source libraries than whether my {serious software} is.


On point 3, I guess my views are different because my {serious software} is usually work, and if it stops working that's kind of a B2B problem and part of doing enterprise. It's just business as they say.

Gaming is much more meaningful to me as a form of story and experience, and it is important to me that games keep working and stay as open and fair as possible. In the same way it is important I can continue to read books, listen to music or watch movies I care about.


> Unfortunately it seems supporting Linux natively is pretty quickly moving target

With the container-based approach of the Steam Linux Runtime this should no longer be a problem. Games can just target a particular version and Steam will be able to run it forevermore.


I would hope Vulkan also does a lot of work here for linux native builds but I must admit I am only now starting my journey into that space.


A lot of those Linux native builds will have been using Vulkan.

Parity between DX12 and Vulkan is pretty high and all around I trust the vkd3d[0] layer to be more robust than almost anything else in this process since they're such similar APIs.

The truth is that it's just a whole lot harder to make a game for Linux APIs and (even) base libraries than it is to make it for Windows, because you can't count on anything being there, let alone being a stable version.

Personally I don't see a future where Linux continues being as it is (a culture of shared libraries even when it doesn't make sense, no standard way of doing the basics, etc.) and we don't use translation layers.

We'll either have to translate via abstraction layers (or still be allowed to translate Win32 calls) to all of the nasty combination of libraries that can exist or we'll have to fix Linux and its culture. The second version is the only one where we get new games for Linux that work as well as they should. The first one undeniably works and is sort of fine, but a bit sad.

0 - vkd3d is the layer that specifically translates D3D12 to Vulkan, as opposed to vkdx which is for lower D3D versions.


It's not really harder to make a good native Linux port that will keep working, it's just not something most game developers have much experience with.


I have a slightly different view. The former scenario is essentially having our cake and eating it too. I'd rather not "fix" Linux culture.


Too late for an edit now, but `vkdx` in the note here is supposed to say `dxvk`.


A wise man once said "The most stable ABI on Linux is Win32". It sounds like a joke, but it is actually true.


In my experience, the Steam client and most games work great on Debian and Ubuntu, but you should know that among GNU/Linux systems it's only officially supported on Ubuntu (maybe SteamOS is implicit). I can't find that information on Steam's website or support pages, but it's the response I got from Steam Support a while ago when reporting a Steam client UI bug on Debian with GNOME.


Which is funny because Ubuntu is also the only distro that wants you to install the Steam snap instead, which is then again unsupported.


When I run into issues running games on Linux (Steam or otherwise), I've found it useful to consult protondb.com to see what others have gotten to work. You can filter by OS or keyword, etc.

https://www.protondb.com/app/337000 for Deus Ex: Mankind Divided


I wiped Windows 10 from my desktop, installed CachyOS and Steam, and installed Path of Exile 2.

And it worked, surprisingly. Also, I see people joking about how Win32 is the only stable API on Linux xD. I've also heard Red Dead Redemption 2 works well on Linux; that might be the next game I check out.


I can confirm I finished RDR2 in story mode in Bazzite, zero issues. Never played the multiplayer part, though.


Story mode is good enough for me


It's the same with Dying Light. They have a neglected Linux version, and I downloaded 16GiB before I realised I should switch to the Windows version and start again.


I run Fedora 43 and all games (single tickbox in settings) are running through "compatibility mode" (wine/proton). Works great!


This felt like a blast from the past. At a few times reading this article, I had to go back and check that, yes, it's actually a new article from the year 2025 on STM in Haskell and it's even using the old bank account example.

I remember 15 or 20 years ago (has it been that long?) when the Haskell people like dons were banging on about: 1) Moore's law being dead, 2) future CPUs will have tons of cores, and 3) good luck wrangling them in your stone age language! Check out the cool stuff we've got going on over in Haskell!
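(For anyone who hasn't seen it, the bank-account example is about atomically moving money between two accounts. A minimal sketch of the lock-based version that STM write-ups usually contrast against, illustrative code only:)

    // The classic two-account transfer with plain locks. The part STM makes
    // trivial is taking both locks atomically without risking deadlock.
    #include <mutex>

    struct Account {
      std::mutex m;
      long balance = 0;
    };

    void transfer(Account& from, Account& to, long amount) {
      // scoped_lock acquires both mutexes with a deadlock-avoidance algorithm;
      // naively locking from.m then to.m can deadlock against a concurrent
      // transfer(to, from, ...).
      std::scoped_lock lock(from.m, to.m);
      from.balance -= amount;
      to.balance += amount;
    }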


Yeah, remember when we used to care about making better programming languages that would perform faster and avoid errors, instead of just slapping blockchains or AI on everything to get VC money. Good times.


Only half-joking: maybe Java was a mistake. I feel like so much was lost in programming language development because of OOP...


Java is most of why we have a proliferation of VM-based languages and a big part of why WASM looks the way it does (though as I understand it, WASM is the shape it is in some measure because it rejects JVM design-for-the-language quirks).

I would also not blame Java for the worst of OO, all of that would have happened without it. There were so many OO culture languages pretending to that throne. Java got there first because of the aforementioned VM advantage, but the core concepts are things academia was offering and the industry wanted in non-Ada languages.

I would say Java also had a materially strong impact on the desire for server, database and client hardware agnosticism.

Some of this is positive reinforcement: Java demonstrates that it's better if you don't have to worry about what brand your server is, and JDBC arguably perfected ODBC.

Some of it is negative: a lot of the move to richer client experiences in the browser has to do with trying to remove client-side Java as a dependency, because it failed. It's not the only bridged dependency we removed, of course: Flash is equally important as a negative.


> I would also not blame Java for the worst of OO, all of that would have happened without it. There were so many OO culture languages pretending to that throne. Java got there first because of the aforementioned VM advantage, but the core concepts are things academia was offering and the industry wanted in non-Ada languages.

Really? Were there other major contenders that went as far as the public class MyApp { public static void main( nonsense?


OOP is very useful/powerful; don't throw the good parts out. Java messed up by deciding everything must be an object when there are many other useful ways to program. (You can also argue that Smalltalk had a better object model, but even then, all-objects isn't a good thing.) Functional programming is very powerful and a good solution to some problems. Procedural programming is very powerful and a good solution to some problems. You can do both in Java, but you have to wrap everything in an object anyway, despite the object not adding any value.


> everything must be an object when there are many other useful way to program.

Perhaps you would prefer a multi-paradigm programming language?

http://mozart2.org/mozart-v1/doc-1.4.0/tutorial/index.html


Java was derived from C++, Smalltalk, and arguably Cedar, and one of its biggest differences from C++ and Smalltalk is that in Java things like integers, characters, and booleans aren't objects, as they are in C++ and Smalltalk. (Cedar didn't have objects.)


Right. Everything a user can create is an object, but there are a few non-object built-ins. (They are not objects in C++ either, but C++ doesn't make everything you write be an object.)


In C++ integers and characters are objects. See https://en.cppreference.com/w/cpp/language/objects.html, for example, which explicitly mentions "unsigned char objects", "a bit-field object", "objects of type char", etc.


I feel this is a case of using the same word to mean something different. C++ “object” here seems to mean something more akin to “can be allocated and stuffed into an array” than a Smalltalk-type object.

i.e. C++ primitive types are defined to be objects but do not fit into a traditional object-oriented definition of “object”.


Yes, many people believe that C++ isn't really "object-oriented", including famously Alan Kay, the inventor of the term. Nevertheless, that is the definition of "object" in C++, and Java is based on C++, Smalltalk, and Cedar, and makes an "object"/"primitive" distinction that C++, Smalltalk, and Cedar do not, so "Java [did something] by deciding everything must be an object" is exactly backwards.


I'm not sure who invented "object oriented", but objects were invented by Simula in 1967 (or before, but first released then?) and that is where C++ takes the term from. Smalltalk-80 did some interesting things on top of objects that allow for object oriented programming.

In any case, Alan Kay is constantly clear that object-oriented programming is about messages, which you can do in C++ in a number of ways. (I'm not sure exactly what Alan Kay means here; it appears to exclude function calls but would allow Qt signals/slots.)


The specific thing you can do in Smalltalk (or Ruby, Python, Objective-C, Erights E, or JS) that you can't do in C++ (even Qt C++, and not Simula either) is define a proxy class you can call arbitrary methods on, so that it can, for example, forward the method call to another object across the network, or deserialize an object stored on disk, or simply log all the methods called on a given object.

This is because, conceptually, the object has total freedom to handle the message it was sent however it sees fit. Even if it's never heard of the method name before.


You can do that in C++ too - it is just a lot of manual work. Those other languages just hide (or make easy) all the work needed to do that. There are trade offs though - just because you can in C++ doesn't mean you should: C++ is best where the performance cost of that is unacceptable.


No, in C++ it's literally impossible. The language provides no way to define a proxy class you can call arbitrary methods on. You have to generate a fresh proxy class every time you have a new abstract base class you want to interpose, either by hand, with a macro processor, or with run-time code generation. There's no language mechanism to compile code that calls .fhqwhgads() successfully on a class that doesn't have a .fhqwhgads() method declared.


You don't call fhqwhgads() on your proxy class though. You call runFunction("fhqwhgads") and it all compiles; the proxy class then string-matches on the arguments. Of course, depending on what you want to do it can be a lot more complex. That is, doing manually what other languages do for you automatically under the hood.

Again, this is not something you should do, but you can.
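A minimal sketch of that string-dispatch approach (illustrative names only, with the same "you probably shouldn't" caveat):

    // The "proxy" dispatches on a runtime string, so the call site compiles
    // even for names the class has never heard of.
    #include <functional>
    #include <iostream>
    #include <string>
    #include <unordered_map>

    class Proxy {
     public:
      void on(std::string name, std::function<void()> fn) {
        handlers_[std::move(name)] = std::move(fn);
      }
      void runFunction(const std::string& name) {
        auto it = handlers_.find(name);
        if (it != handlers_.end())
          it->second();  // forward over the network, log, deserialize, ...
        else
          std::cout << "unhandled message: " << name << "\n";
      }
     private:
      std::unordered_map<std::string, std::function<void()>> handlers_;
    };

    int main() {
      Proxy p;
      p.on("fhqwhgads", [] { std::cout << "forwarded\n"; });
      p.runFunction("fhqwhgads");  // dispatched by string
      p.runFunction("anything");   // still compiles, unlike p.anything()
    }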


That doesn't provide the same functionality, because it requires a global transformation of your program, changing every caller of .fhqwhgads() (possibly including code written by users of your class that they aren't willing to show you, example code in the docs on your website, code in Stack Overflow answers, code in printed books, etc.). By contrast, in OO languages, you typically just define a single method that's a few lines of code.

You're sinking into the Turing Tarpit where everything is possible but nothing of interest is easy. By morning, you'll be programming in Brainfuck.


The C++ standard gets the definition of object from the C standard, i.e. a bunch of memory with a type.

Nothing to do with the objects of OOP.


To be clear, I’m not trying to pick at whether or not C++ is “really object oriented”.

What I’m saying is that the discrepancy between primitives in C++ and Java is entirely one of definition. Java didn’t actually change this. Java just admitted that “objects” that don’t behave like objects aren’t.


On the contrary, Java objects are very different from C++ objects, precisely because they lack a lot of the "primitive-like" features of C++ objects such as copying, embedding as fields, and embedding in arrays. (I'm tempted to mention operator overloading, but that's just syntactic sugar.)


Java differs from C++ in an endless number of ways.

What I’m saying is that in both C++ and Java, there are a set of primitive types that do not participate in the “object-orientedness”. C++ primitives do not have class definitions and cannot be the base of any class. This is very much like Java where primitives exist outside the object system.

If the C++ standard used the term “entities” instead of “objects” I don’t think this would even be a point of discussion.


It's not some minor point of terminology.

The entire design of C++ is built around eliminating all distinctions between primitive "entities" and user-defined "entities" in a way that Java just isn't. It's true that you can't inherit from integers, but that's one of very few differences. User-defined "entities" don't (necessarily) have vtables, don't have to be heap-allocated, can overload operators, can prevent subclassing, don't necessarily inherit from a common base class, etc.

C++'s strange definition of "object" is a natural result of this pervasive design objective, but changing the terminology to "entity" wouldn't change it.


> The entire design of C++ is built around eliminating all distinctions between primitive "entities" and user-defined "entities"

If the intent was to erase all distinction between built-in and user-defined entities then making the primitive types unable to participate in object hierarchies was a pretty big oversight.

But at this point I think we’re talking past each other. Yes, in Java objects are more distinct from primitives than in C++. But also yes, in C++ there is a special group of “objects” that are special and are notably distinct from the rest of the object system, very much like Java.


You can read Stroustrup's books and interviews, if the language design itself doesn't convey that message clearly enough; you don't have to guess what his intentions and motivations were. And, while I strongly disagree with you on how "special and notably distinct" primitive types are in C++, neither of us is claiming that C++ is less adherent to the principle that "everything is an object" than Java. You think it's a little more, and I think it's a lot more.

But we agree on the direction, and that direction is not "Java [did something] by deciding everything must be an object," but its opposite.


I don’t actually think it’s any more adherent to that notion. This is exactly why I tried to point out the discrepancies in definitions. You have to define what an “object” is or the discussion is meaningless.

If the definition of object is something like “an instance of a class that has state, operations, and identity” then C++ primitives are fundamentally not objects. They have no identity and they are not defined by a class. If “participates in a class hierarchy” is part of the definition, then C++ is way less OO than Java.

I don’t quite understand what your definition is, but you seem to be arguing that user-defined entities are more like primitives in C++, so it’s more object-oriented. So maybe “consistency across types == object orientedness”? Except C++ isn’t really more consistent. Yes, you can create a user-defined type without a vtable, but this is really a statement that user-defined types are far more flexible than primitives. But also, if “consistency across types” is what makes a language OO, then C seems to be more OO than C++.


I don't think C++ is object-oriented, and it is certainly way less OO than Java in most ways. Its "classes" aren't the same kind of thing as classes in OO languages and its "objects" aren't OO objects.

In part this is because by default even C++ class instances don't have identity, or anyway they only have identity in the sense that ints do, that every (non-const) int has an address and mutable state. You have to define a destructor, a copy constructor, and an assignment operator to give identity to the instances of a class in C++.

With respect to "participates in a class hierarchy", that has not been part of the definition of OO since the Treaty of Orlando. But, in Java, all objects do participate in a class hierarchy, while no primitives do, while, in C++, you can also create class instances (or "class" "instances") that do not participate in a class hierarchy (without ever using inheritance, even implicitly). So, regardless of how OO or non-OO it may be, it's another distinction that Java draws between primitives and class instances ("objects" in Java) that C++ doesn't.


I think we are in agreement.

In C++, everything is an object as defined by the C++ spec, but a lot of things are not objects in an OO sense.

In Java, almost everything is an object in an OO sense, but some stuff is definitely not.


Just like Java, you cannot inherit from integers or characters. Depending on what you want to do with them that might or might not matter.


That's true, and in Smalltalk it's not true. In Cedar there is no inheritance. At any rate it's not a case of Java making more things objects than its forebears.


> At any rate it's not a case of Java making more things objects than its forebears.

There are other cases though. E.g. in C++ you could always have function pointers which were not objects and functions which were not methods. In Java you had to use instances of anonymous classes in place of function pointers, and all of your functions had to be (possibly static) methods.


Function pointers in C++ are "objects", and the difference between static methods and C-style functions which are not methods seems purely syntactic to me, or at best a question of namespacing. Regardless, static methods are not objects either in Java or in C++, so that is also not a case of something being an "object" in Java and not in C++.


> Function pointers in C++ are "objects"

In the C++ sense, but they don't have classes and don't participate in inheritance. Whereas the early-Java equivalents do.


Yes, I agree.


And it's all still true, although I would offer the usual argument that concurrency != parallelism; if you reach for threads & STM to try to speed something up, you'll probably have a bad time. With the overhead of GC, STM retries, false sharing, pointer chasing, etc., you might have a better time rewriting it single-threaded in C/Rust.

STM shines in a concurrent setting, where you know you'll have multiple threads accessing your system and you want to keep everything correct. And nothing else comes close.


To be maximally fair to the Haskell people, they have been enormously influential. Haskell is like Canada: you grow up nicely there and then travel the world to bring your energy to it.


Yeah, I wrote an essay on STM in Haskell for a class back in 2005 I think.


I think Intel x86 had some hardware support for transactional memory (TSX) at some point. Not sure what the status of that is.


Disabled on most CPUs, plagued by security issues. I haven't used it but I assume debugging would be extremely painful, since any debug event would abort the transaction.


That's not software transactional memory, it's hardware transactional memory, and their design was not a good one.


Well, HTM was not useful per se, except for accelerating an STM implementation.


It isn't very useful for that, but you can use it to implement other higher-level concurrency primitives like multi-word compare and swap efficiently.
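For example, a double-word CAS can be sketched with Intel's RTM intrinsics (a sketch assuming TSX/RTM is available; it's disabled on many CPUs, and real code needs a lock-based fallback path):

    // Sketch of a double-word compare-and-swap built on RTM (compile with -mrtm).
    #include <immintrin.h>
    #include <atomic>

    bool dcas(std::atomic<long>& a, std::atomic<long>& b,
              long expect_a, long expect_b, long new_a, long new_b) {
      if (_xbegin() == _XBEGIN_STARTED) {
        if (a.load(std::memory_order_relaxed) == expect_a &&
            b.load(std::memory_order_relaxed) == expect_b) {
          a.store(new_a, std::memory_order_relaxed);
          b.store(new_b, std::memory_order_relaxed);
          _xend();       // commit: both stores become visible atomically
          return true;
        }
        _xabort(0);      // values changed; roll the transaction back
      }
      return false;      // aborted or TSX unavailable: caller takes a fallback
    }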


True. At some point in the now distant past, AMD had a proposal for a very restricted form of HTM that allowed CAS across up to 7 memory locations, as they had some very specific linked-list algorithms they wanted to optimize, and the 7-location restriction worked well with the number of ways of their memory.

Nothing came out of it unfortunately.


I'd like to see what kind of hardware acceleration would help STMs without imposing severe limitations on their generality.

To me, the appealing things about STMs are the possibility of separating concerns of worst-case execution time and error handling, which are normally pervasive concerns that defeat modularity, from the majority of the system's code. I know this is not the mainstream view, which is mostly about manycore performance.


Not an expert, but my understanding is that HTM basically implements the fast path: you still need a fully fledged STM implementation as a fallback in case of interference, or even in the uncontended case if the working set doesn't fit in L1 (because of way collision for example) and the HTM always fails.


I'm no expert either, but maybe some other hardware feature would be more helpful to STMs than hardware TM is.


That would depend on your idea of "good". It would be an upstream swim in most regards, but you could certainly make it work. The Asahi team has shown that you can get Steam working pretty well on ARM-based machines.

But if gaming is what you're actually interested in, then it's a pretty terrible buy. You can get a much cheaper x86-based system with a discrete GPU that runs circles around this.


Besides the box art, I miss the days when 1) the graphics card didn't cost more than the rest of the components put together, 2) the graphics card got all of its damn power through the connector itself, and 3) MSRP meant something.


> 3) MSRP meant something

I'm not in the market for a 5090 or similar, but the other day I was looking at a lower-end model, an AMD 9060 or Nvidia 5060. What shocked me was the massive variation in prices for the same model (9060 XT 16 GB or 5060 Ti 16 GB).

The AMD could be had for anywhere from 400 to 600 euros, depending on the brand. What can explain that? Are there actual performance differences? I see models pretending to be "overclocked", but in practice they barely have a few extra MHz. I'm not sure if that's going to do anything noticeable.

Since I'm considering the AMD more and it's cheaper, I didn't take that close a look at the Nvidia prices.


> What can explain that?

Looks. I'm not joking. The market is aimed at people with a fishbowl PC case who care about having a cooler with an appealing design, an interesting PCB colour, and the flashiest RGB. Some may have a bit better cooling, but the price for that is also likely marked up several times, considering a full dual-tower CPU cooler costs $35.


I was thinking about cooling, but basically they all have either two or three fans, and among those they look the same to my admittedly untrained eye.


The manufacturer can use better fans that move more air and stay quieter. They can design a better vapor chamber, or lay out the PCB in a way that the VRMs and RAM get more cooling. But still, all that stuff should not account for more than a $30-50 markup.


Hey, c'mon now - some of that is flooding the market so hard that it's ~8:1 nVidia:AMD on store shelves, letting nVidia be the default that consumers will pay for. That's without touching on marketing or the stock price (as under-informed consumers conflate it with performance, thinking "If it wasn't better, the stock would be closer to AMD").


>What shocked me was the massive variation in prices for the same model [AMD v. Nvidia]

I am not a tech wizard, but I think the major (and noticeable) difference would be available tensor cores: currently Nvidia's tech is faster/better in the LLM/genAI world.

Obviously AMD jumped +30% last week from the OpenAI investment, so that is changing with current-model GPUs.


They were talking about within one model, not between AMD and Nvidia.


I just bought a RTX 5090 at MSRP. While expensive, it's also a radically more complicated product that plays a more important role in a modern computer than old GPUs did years ago.

Compared to my CPU (9950X3D), it's got a massive monolithic die measuring 750 mm² with over 4x the transistor count of the entire 9950X3D package. Beyond the graphics, it's got tensor and RT cores, dedicated engines for video decode/encode, and 32GB of GDDR7 on the board.

Even basic integrated GPUs these days have far surpassed GPUs like the GTX 970, so you can get a very cheap GPU that gets power through the CPU socket, at MSRP.


Do yourself/me a favor, and give your 5090's power plug/socket a little jiggle test.

I'm a retired data center electrician, and my own GPU's plug has come "loose" more than once. Really make sure that sucker is jammed in there/latched.


Yeah, 12VHPWR is a mess unfortunately.


...12VHPWR alone justifies purchasing a thermal camera.


> the graphics card didn't cost more than the rest of the components put together

In fairness, the graphics card has many times more processing power than the rest of the components. The CPU is just there to run some of the physics engine and stream textures from disk.


4) games came in a retail box accompanied by detailed manuals, booklets printed with back stories, and some swag.


4) scalpers only existed for sports and music venues


The existence of scalpers rather shows that the producer set the price of the product (in this case the GPU) too low [!] for the number of units produced.

Because the price is too low, more people want to buy a graphics card than the number of graphics cards that can be produced, so even people who would love to pay more can't get one.

Scalpers solve this mismatch by balancing the market: now people who really want to get a graphics card (with a given specification) and are willing to pay more can get one.

So, if you have a hate for scalpers, complain that the graphics card producer did not increase its prices. :-)


Which format are you saying the Chromium team made and wants to push in favor of jxl?


webp


Which is superseded by AVIF anyway.


Not for lossless; webp is a fantastic replacement for PNG there. Not so much with AVIF, it's sometimes even heavier than PNG.


How will AV2 based AVIF compare to PNG?


You can write a WASM program today that touches the DOM, it just needs to go through the regular JS APIs. While there were some discussions early on about making custom APIs for WASM to access, that has long since been dropped - there are just too many downsides.
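For example, with Emscripten's embind, the "going through the regular JS APIs" part looks roughly like this (a minimal sketch; build with something like `emcc dom.cpp -lembind`):

    // C++ compiled to wasm, driving the DOM through the regular JS APIs
    // via Emscripten's embind `val` wrapper.
    #include <emscripten/val.h>
    #include <string>

    int main() {
      using emscripten::val;
      val document = val::global("document");
      val div = document.call<val>("createElement", std::string("div"));
      div.set("textContent", std::string("hello from wasm"));
      document["body"].call<void>("appendChild", div);
      return 0;
    }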


But then you need two things instead of one. It should be made possible to build WASM-only SPAs. The north star of browser developers should be to deprecate JS runtimes the same way they did Flash.


That is never going to happen until you create your own browser with a fork of the WASM spec. People have been asking for this for about a decade. The WASM team knows this but WASM wants to focus on its mission of being a universal compile target without distraction of the completely unrelated mission of being a JavaScript replacement.


It's also too early to worry about DOM apis over wasm instead of js.

The whole problem with the DOM is that it has too many methods which can't be phased out without losing backwards compatibility.

A new DOM wasm api would be better off starting with a reduced API of only the good data and operations.

The problem is that the DOM is still improving (even today), it's not stabilized so we don't have that reduced set to draw from, and if you were to mark a line in the sand and say this is our reduced set, it would already not be what developers want within a year or two.

New DOM stuff is coming out all the time; even right now we have two features coming out that can completely change the way developers could want to build applications:

- being able to move DOM nodes without having to destroy and recreate them. This makes it possible to keep the state inside that DOM node unaffected, such as a video playing without having to unload and reload it. Now imagine if that state can be kept over the threshold of a multi-page view transition.

- the improved attr() api which can move a lot of an app's complexity from the imperative side to the declarative side. Imagine a single css file that allows html content creators to dictate their own grid layouts, without needing to calculate every possible grid layout at build time.

And just in the near future things are moving to allow html modules which could be used with new web component apis to prevent the need for frameworks in large applications.

Also language features can inform API design. Promises were added to JS after a bunch of DOM APIs were already written, and now promises can be abortable. Wouldn't we want the new reduced API set to also be built upon abortable promises? Yes we would. But if we wait a bit longer, we could also take advantage of newer language features being worked on in JS like structs and deeply immutable data structures.

TL;DR: It's still too early to work on a DOM API for wasm. It's better to wait for the DOM to stabilize first.


I am oversimplifying, but why should anything be stable?

That is the trend we face nowadays; there is too little stable stuff around. Take macOS: a trillion-dollar company's OS, not an open source project without funding.

Stability is a mirage, sadly.


On the contrary, it’s something that solid progress is being made towards, and which has been acknowledged (for better or for worse) as something that they expect to be supported eventually. They’re just taking it slow, to make sure they get it right. But most of the fundamental building blocks are in place now, it’s definitely getting closer.


Aside from everything the WASM team has ever said, the security issues involved, and all the other evidence and rational considerations, this still won't happen, for very human reasons.

The goal behind the argument is to grant WASM DOM access equivalent to what JavaScript has so that WASM can replace JavaScript. Why would you want that? Think about it slowly.

People that write JavaScript for a living, about 99% of them, are afraid of the DOM. Deathly afraid like a bunch of cowards. They spend their entire careers hiding from it through layers of abstractions because programming is too hard. Why do you believe that you would be less afraid of it if only you could do it through WASM?


Sounds to me like they forgot the W in WASM.


I agree with the first part, but getting rid of JS entirely means that if you want to augment some HTML with one line of javascript you have to build a WASM binary to do it?

I see good use cases for building entirely in html/JS and also building entirely in WASM.


Getting rid of JavaScript entirely means being able to manipulate the DOM without writing any JavaScript code, not removing JavaScript from the browser. JavaScript will still be there if you want to use it.


Most of the time your toolchain provides a shim so you don’t need to write JS anyway. What’s the difference?


right, that's exactly the point. those that have a shim already successfully got rid of javascript, and that's enough.


Fair enough, I misunderstood what he meant by "deprecate JS runtimes".


You can use a framework that abstracts all the WASM-to-JS communication for DOM access. There are many such frameworks already.

The only issue is that there’s a performance cost. Not sure how significant it is for typical applications, but it definitely exists.

It’d be nice to have direct DOM access, but if the performance is not a significant problem, then I can see the rationale for not putting in the major amount of work it’d take to do this.


Which framework is the best or most commonly used?


Yew, Leptos and Dioxus are all pretty decent with their own advantages and disadvantages if you like Rust. It's been a year or so since I last looked at them, to me the biggest missing piece was a component library along the lines of MUI to build useful things with. I'd be surprised if there weren't at least Bootstrap compatible wrapper libraries for them at this point.


I was under the impression that this is very much still on the table, with active work like the component model laying the foundation for the ABI to come.


I have slightly different question than OP - what's left until it feels like javascript is gone for people who don't want to touch it?

Say I really want to write front end code in Rust*: does there just need to be a library that handles the JS DOM calls for me? After that, I don't ever have to think about JavaScript again?


> Say I really want to write front end code in Rust* does there just need to be a library that handles the js DOM calls for me? After that, I don't ever have to think about javascript again?

yes, e.g. with Leptos you don't have to touch JS at all


Isn't going through the JS APIs slow?


It used to be, in the early days, but nowadays runtimes have optimized the function-call overhead between WASM and JS to near zero.

https://hacks.mozilla.org/2018/10/calls-between-javascript-a...


It did improve a lot, but unfortunately not near-zero enough.

It is manageable if you avoid JS/wasm round trips, but if you assume the cost is near zero you will be in for an unpleasant surprise.

When I have to do a lot of different calls into my wasm blob, I am way, way faster batching them: making one call into wasm that then gets all the data I want and returns it.
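As a sketch of what I mean (names are illustrative):

    // Batching: instead of N separate JS<->wasm calls, one exported entry
    // point gathers everything into a struct that the JS side reads out of
    // wasm memory in a single round trip.
    #include <cstdint>
    #include <emscripten/emscripten.h>

    struct Snapshot {
      uint32_t score;
      uint32_t level;
      uint32_t lines_cleared;
    };

    static Snapshot snapshot;

    extern "C" EMSCRIPTEN_KEEPALIVE
    const Snapshot* get_snapshot() {
      // Gather all the values the UI needs in one call...
      snapshot = Snapshot{1200, 3, 27};
      return &snapshot;  // ...and let JS read the fields via the wasm heap (HEAPU32)
    }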


Could you list some of these downsides and the reasons for them?


For starters, the DOM API is huge and expansive. Simply giving WASM the DOM means you are greatly expanding what the sandbox can do. That means lower friction when writing WASM with much much higher security risks.

But further, WASM is more than just a browser thing at this point. You might be running in an environment that has no DOM to speak of (think nodejs). Having this bolted on extension simply for ease of use means you now need to decide how and when you communicate its availability.

And the benefits just aren't there. You can create a DOM exposing library for WASM if you really want to (I believe a few already exist) but you end up with a "what's the point". If you are trying to make some sort of UX framework based on wasm then you probably don't want to actually expose the DOM, you want to expose the framework functions.


> you probably don't want to actually expose the DOM, you want to expose the framework functions.

Aren't the framework functions closely related to the DOM properties and functions?

