Hacker News

I am very intrigued by Nimrod. The language seems to have goals that overlap with e.g. Rust's, but with a bunch of really interesting design decisions (e.g. GC by default, but first class support for manual memory management). Given the amount of force driving Rust (and Rust's PR machine), compared to Nimrod, which seems to be pushed forward by essentially one person, I'm really impressed by how far Nimrod has come.

I really want to start using Nimrod for real work.



To support the fact that it's mainly one guy: https://github.com/Araq/Nimrod/graphs/contributors

His name is Andreas Rumpf and he is really impressive.


Very impressive, but I'm not sure one should depend on tools with a bus factor of one.


Yea, that's one of the most impressive 25-year-olds (?) I've ever seen... :)


> GC by default, but first class support for manual memory management

Since Mesa/Cedar (1978) there have been quite a few systems languages proposing that approach.

http://bitsavers.trailing-edge.com/pdf/xerox/parc/techReport...


If I understand correctly once you stop using the GC, Nimrod is not memory safe. Rust is.


Indeed, Nimrod will be more of a competitor to D in this regard than it is to Rust, where bare-metal safety is nearly the whole point.


There's really no reason not to use the GC though. It's very lightweight and quick.


While you're right for probably most classes of application, this is not an absolute truth. There are reasons not to use the GC - e.g. you're developing a browser, or a GC for another language, among many others.

I see a similar thread where you made a similar statement, so rather than rehash that whole discussion, here it is:

https://news.ycombinator.com/item?id=7061533


And adding onto the other reply - Nimrod's GC has a realtime mode where you can specify when it runs and the maximum pause time. I made a (small) game in Nimrod and called the GC every frame for the time remaining in the frame (it can be used like a blocking high-accuracy timer). Testing the GC, I couldn't get it to take longer than a couple of microseconds, even while intentionally smashing my 16GB heap to hell. Why does a 16GB heap take so little time to GC? Because the cost of Nimrod's GC doesn't scale with heap size - it's deferred reference counting. Only cycle detection scans the whole heap, and you can optionally disable that (I do; I don't like designing cyclic structures without explicitly knowing where the cycles get broken).


That's really cool! And yeah, doesn't scale as you said, so I still don't see how we can go GC-less for everything.


Oh, I strongly disagree with that logic. A browser is exactly the sort of situation where a GC is useful. I mean, it only took Mozilla 2 decades to get Firefox to not leak memory like a sieve.


There are many reasons for leaks in Firefox, and none of them had to do with not using GC for everything. In fact, there were failed attempts to do exactly that (XPCOMGC), which failed due to performance problems. A lot of those "leaks" were just cases of using too much memory, which pervasive GC actually hurts due to the lack of prompt deallocation (something deferred reference counting gives up as well).

GCs are simply not appropriate for every use case.

Reference counting is not a panacea; once you start wanting to break cycles (which history tells us you will), you start having to deal with stopping the world or concurrent collection. If you don't have thread-safe GC, then you have to either copy all data between threads (which limits the concurrent algorithms you can use) or you lose memory safety.
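The cycle problem is easy to see with plain reference counting. A minimal Rust sketch (illustrative only; the `Node` type is made up for the example) shows two reference-counted values keeping each other alive:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Made-up `Node` type: each node can hold a strong reference to another.
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(None) });

    // link the two nodes to each other, forming a cycle
    *a.other.borrow_mut() = Some(Rc::clone(&b));
    *b.other.borrow_mut() = Some(Rc::clone(&a));

    // each node is now owned by a local binding AND by the other node
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);

    // when `a` and `b` go out of scope, the counts only fall to 1,
    // so neither Node is ever freed - this is exactly the garbage a
    // cycle-collecting (or tracing) pass exists to reclaim
}
```

Without a cycle collector (or an explicit weak back-edge), the two nodes leak forever - which is why pure deferred reference counting still needs that whole-heap cycle pass.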

Finally, your implicit claim (that Rust's safe memory management is more vulnerable to leaks than GC) is untrue. Rust's safe manual memory management is no more vulnerable to leaks than GC: the compiler automatically destroys a value when its owner goes out of scope.
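For illustration, a minimal sketch of what "the compiler automatically destroys a value" means in Rust - the `Buffer` type and its drop log are made up for the example:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Made-up `Buffer` type that records when its destructor runs,
// to show that deallocation is deterministic and scope-based.
struct Buffer {
    name: &'static str,
    log: Rc<RefCell<Vec<String>>>,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // the compiler inserts this call automatically at end of scope
        self.log.borrow_mut().push(format!("dropped {}", self.name));
    }
}

fn main() {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _b = Buffer { name: "scratch", log: Rc::clone(&log) };
        log.borrow_mut().push("in scope".to_string());
    } // `_b` is destroyed right here - no GC, no pause, no leak
    log.borrow_mut().push("after scope".to_string());
    assert_eq!(*log.borrow(), ["in scope", "dropped scratch", "after scope"]);
}
```

The destructor runs at a statically known point, so there is nothing for a leak to hide behind (short of deliberately creating reference cycles or calling `mem::forget`).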


Well... isn't that the point of Rust? Preventing memory leaks without the need for GC?


> it only took Mozilla 2 decades to get Firefox to not leak memory like a sieve

I'm sure you're aware that this is quite an unfair/exaggerated statement to make. But yes, I'm all in favor of language features that help prevent memory leaks.

But the reason the smart folks at Mozilla don't just switch to using a GC for all of Firefox (and none of the other major browser vendors do either) is that GC pauses suck for user interaction. If you don't think that's a concern, or you have a solution for it, please elaborate.


I don't see how that's all that unfair. Development on the Mozilla codebase started in early 1998, almost 16 years ago.


Memory leaks waxed and waned as development focus changed. I remember Firefox's memory usage being quite decent around version 2, and even into 3. Later, some things got bloated, though a large part of the memory problem was due to misbehaving plugins. The MemShrink project managed to fix memory leaks even in plugin authors' projects.

Also, note that a GC does not automatically mean no memory leaks. For instance, see how leaky Gmail is (it was worse, according to their dev team).


Nimrod's GC is not thread safe, last I checked. So if you aren't careful to avoid races on shared memory, you can segfault. Also, you incur full heap scans to clean up cycles.


Yes it is. Threads don't share a GC - there's no implicit sharing. Memory is copied between threads. And of course, that doesn't prevent manual management of shared memory (which just isn't GC'd), so you CAN use locks to do so, just like in other languages.

I turn off the cycle collector in my realtime apps. I prefer designing a clean, solid system that isn't reliant on cycles without my direct knowing. I guess that's just my inner control freak though.


It's not just like other languages. Other languages don't segfault when you use shared memory.


What in the HELL are you talking about? The thread-local GC won't even produce anything on the shared heap - it's thread-local. Shared memory is manual memory only - just like C, C++, Ada, and every other manually managed language. And when the shared-memory GC (which will have to be used explicitly) is implemented, it'll be just like Java, OCaml, and every other shared-memory garbage-collected language. What in God's name does that even mean - segfaults when you use shared memory? It only segfaults if, like in every other language, you didn't take the time to think out your design and are dereferencing dead memory.

Oh, and I said that it has locks like every other language. I didn't mean shared memory like every other language.

Finally - if you're just being smug about how smart Rust is for having lifetime tracking and all those pointer types/restrictions - I don't think it's all that great; nor did the gaming community when they last got their hands on it; nor do many others who share the opinion that Rust is just too complex while also being too restricted.


Calm down. All I was saying was that Nimrod is in a somewhat isolated space in which memory management is automatic and safe except for when memory is shared between threads. I'm a bit skeptical of this, because memory management is at its most difficult exactly when multiple threads are involved. So I'm glad to see Nimrod is moving to a thread-safe GC (and I have nothing against Nimrod and would like to see it succeed).

Hybrid automatic and unsafe manual memory management (when the unsafe portion is for something really common like shared memory) is not something I'm really a fan of; it gives up safety while retaining the disadvantages of automatic memory management (lack of control, overhead). I think that safe automatic or fully manual schemes are the ones that have won out in practice because they get to fully exploit the advantages of their choices (safety in one case, control in the other).


Just one thing to point out - people wrongly assume that the alternative to a non-deterministic GC is necessarily manual memory management. There are automatic yet deterministic memory management mechanisms, e.g. smart pointers in C++ and Rust.
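As a sketch of that idea, here is Rust's reference-counted smart pointer `Rc` (C++'s `shared_ptr` is analogous): memory management is automatic, yet the free happens at a deterministic point - the instant the last owner disappears:

```rust
use std::rc::Rc;

fn main() {
    // one owner of a heap-allocated String
    let a = Rc::new(String::from("shared data"));
    assert_eq!(Rc::strong_count(&a), 1);

    let b = Rc::clone(&a);          // second owner; just bumps the count
    assert_eq!(Rc::strong_count(&a), 2);

    drop(b);                        // count back to 1 - nothing freed yet
    assert_eq!(Rc::strong_count(&a), 1);

    // when `a` goes out of scope here, the count hits 0 and the
    // String is deallocated immediately - no collector, no pause
}
```

No tracing collector ever runs, yet the programmer never writes a `free` - the deallocation point is decided by the ownership structure, not by a heuristic.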


Fully agree.

To reinforce your point, many are unaware that the various forms of reference counting are also GC algorithms in CS speak.


> I really want to start using Nimrod for real work.

I have actually used both Rust and Nimrod for real work (same project, first in Rust, then rewritten in Nimrod). My experience is that Nimrod is far easier to handle than Rust. It feels like Python with the native speed of C. There are a lot of nice features built into Nimrod, for instance native Perl-compatible regular expressions and seamless import of C functions. For me it is the most productive language I have ever encountered. I know many languages.


Initially, I was kind of 'meh' towards Nimrod since Rust already looked so good. But when I recently read about the features, especially the metaprogramming and compile-time features, it seems like it could be really fun and powerful to program in.



