Anecdotal, but I recently had the opportunity to write some relatively greenfield code in each of Rust and C++. Nothing too massive, a few thousand lines. But fairly intricate code with a need for high performance. I consider myself to be fairly advanced in both languages (but not can-write-the-spec level advanced).
I was shocked at how many memory corruption / uninitialized data errors I was hitting in C++, for what (I thought) was fairly nice, modern-ish code. This is for code that builds cleanly with -Wall -Werror, with exclusive use of .at() instead of unchecked vector/map access, etc. There are just far too many ways to end up with uninitialized data, and the ecosystem (think serialization libraries) doesn't really help you there.
I didn't realize how much of a cognitive burden Rust was relieving me of. Sure, the borrow checker is a pain. But I have confidence that when my code builds, it will be entirely free of certain classes of bugs. And other classes of bugs are well-managed by Rust's compiler warnings and Clippy lints.
There seems to be a sort of sleight of hand in programming communities where when Rust is hard, we blame it on the language, but when C++ is hard, we blame it on the programmer. I didn't see the article do a good job of teasing this out.
> There seems to be a sort of sleight of hand in programming communities where...
I don't think there's anything sinister here, just the classic difference between old and new. Old things are familiar, and their downsides are facts of life. New things are threatening, and their downsides get a lot of scrutiny. I'm not even complaining; it's a pretty reasonable heuristic most of the time.
I agree it's not sinister, but I think it goes deeper than this.
Strong (static) type systems fundamentally make you do more work up front. They force you to be more explicit. They restrict what you can do. They're in your face, because you can't get your program to compile without them. No one gets through any significant amount of Rust programming without knowing that the borrow checker is there. It's so fundamental that it's just part of the experience.
C++ arguably has just as many rules for writing "correct" programs, but they're not enforced by a compiler. So much stuff happens by convention. If you follow the convention, great, your code is safe. If you mess up, no one catches you. Because it's your responsibility, it's easy to attribute this failure to your programming skill rather than to the language that made that mistake possible in the first place.
I'm not saying stronger type systems are always better, but I do think there is a fundamental human tendency to perceive up-front work as hard and latent work as easy, even when the up-front work ultimately saves you from the latent work down the line.