Functional languages may be overrated, but not nearly as overrated as imperative languages (especially Python and Java). At the very least, avoiding Haskell is--unfortunately--quite easy; avoiding Java or Python in most places is nigh impossible.
The past post he references can be summarized by its last sentence: "I seriously don't get it." It spends quite a bit of effort telling us very little about functional languages but quite a bit about the author. And, as usual, he mistakes difficulty for complexity--just because something is new and mind-bending does not mean it is complex; it could just be a little tricky to pick up. (This is where Rich Hickey's "Simple Made Easy" talk really shines.)
And, of course, in the current post he misrepresents the main case for functional programming. Which is--quite critically--not concurrency or parallelism! These are, in fact, just nice benefits. The main point is using a higher level of abstraction, reducing coupling, simplifying code, writing less code to do more and making it easy to reason about and verify.
If all you care about is concurrency, Haskell is probably not your best bet. But if you care about writing maintainable, reusable and more general code more quickly and without sacrificing too much performance, it is. I would--without thinking twice--use Haskell to write programs in a completely single-threaded world. Haskell may also be good at dealing with concurrent nonsense, but this is really just a symptom of a deeper cause: Haskell is good at dealing with complexity.
If that doesn't convince you, Haskell also comes with a ton of shiny toys like QuickCheck and Parsec. It might not have the biggest collection of libraries, but it probably does have one of the most innovative.
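For anyone who hasn't seen QuickCheck: its core trick (generate random inputs, check that a property holds for all of them) fits in a few lines of any language. Here's a toy Python sketch of the idea--emphatically not the real library, just the shape of it:

```python
import random

def check_property(prop, gen, trials=200):
    """Crude QuickCheck-style driver: run prop on random inputs from gen."""
    for _ in range(trials):
        x = gen()
        if not prop(x):
            return x  # counterexample found
    return None  # property held on every trial

# Property: reversing a list twice yields the original list.
def gen_list():
    return [random.randint(-100, 100) for _ in range(random.randint(0, 20))]

def prop_double_reverse(xs):
    return list(reversed(list(reversed(xs)))) == xs

assert check_property(prop_double_reverse, gen_list) is None
```

The real QuickCheck adds shrinking of counterexamples and type-driven generators, which is where most of its value lives.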
I'm sorry, but I don't see how a language that a reasonably smart person with more than a decade of working experience in all kinds of computing environments is unable to master can be described as anything but "complex". I think using semantic games to reframe the problem does not change its essence - these tools are as easy to use as painting a delicate china vase while standing on your head on a unicycle. Sure, among the world's seven billion people there are plenty who can learn the trick. But telling me it's simple is not actually going to do anything but ruin my confidence in your explanations - I tried it, and I know it's anything but. So did many other people, and they arrived at the same conclusion.
You may argue that it doesn't matter, as concurrency and rising complexity will force everybody to go FP or be crushed under the complexity of other solutions. That may be a fine argument for FP being inevitable, but it in no way makes FP simple.
> You may argue that it doesn't matter, as concurrency and rising complexity will force everybody to go FP or be crushed under the complexity of other solutions. That may be a fine argument for FP being inevitable...
Except that that argument would fall on the wrong side of history. Most of the really big examples of successful high-concurrency systems are programmed using imperative languages, and have been all along. And they do it using mostly the same habits that a lot of FP advocates would like to suggest are the exclusive domain of functional languages. Personally I like to think of them less as an excuse for functional apotheosis than as "elementary coding standards", but I learned Lisp before C++ so perhaps I'm just showing my age there.
Now there are spots where FP offers a serious win on the concurrency front. Continuation passing comes to mind immediately. It'd be nice if the outspoken contingent of FP fans would spend a bit more time talking up node.js. But I suppose it's silly to expect Javascript to be held up as a successful functional language. It spends much too much time rubbing elbows with the object-oriented hoi polloi for that.
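For readers who haven't run into continuation passing: the style isn't tied to any particular language--it only needs first-class functions. A toy Python sketch (function names made up for illustration):

```python
def add_cps(a, b, k):
    # Instead of returning a value, hand it to the continuation k.
    k(a + b)

def square_cps(x, k):
    k(x * x)

results = []
# Compute (2 + 3)^2 by chaining continuations instead of returns.
add_cps(2, 3, lambda s: square_cps(s, results.append))
print(results[0])  # 25
```

This is exactly the shape of node.js callback code: control flow is made explicit by passing "what happens next" as an argument.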
I think the functional aspects of JS are awesome and are of great use in a lot of cases. However, it definitely lacks the purity of languages like Haskell or Clojure - which in my view makes it 1000 times more useful (when measured by my productivity using it to achieve any practical task), but I guess in the eyes of purists it makes it irrelevant or unworthy.
I'm inclined to agree re: being more useful. Just shooting from the hip, I'm guessing the performance and scalability of node.js wouldn't be quite so good if it had used purely functional data structures to keep track of all those callbacks.
Programming languages can strive to be either tools or toys.
Programming languages that strive to be tools focus on high quality standard libraries, speed, good debugging facilities, being easy to learn, ability to handle large projects, and hardware support among other things.
Toy languages strive to be innovative and explore new depths of computer science. Which is all well and good, but it's pretty annoying when people compare them with 'real' programming languages.
That's true, actual tools need all those criteria that you described.
If you believe that functional languages don't meet those criteria, can you provide evidence for that? For example, from what I've gathered Clojure certainly has high quality libraries, an emphasis on speed when needed (performance is frequently superior to Python's), has debugging facilities, is easy to learn for someone with functional experience (just as Python or Java are relatively easy to learn for those with imperative experience) and can still be picked up with a little extra work otherwise, and has been used in production in large projects. I don't know about its hardware support so I won't comment on that.
> high quality standard libraries, speed, good debugging facilities, being easy to learn, ability to handle large projects, and hardware support among other things.
Hm. C has three, at most four of those desiderata. It lacks good debugging facilities, especially compared to what a real (Smalltalk or Lisp, for example) debugger can do, it is about as easy to learn as driving a car with manual transmission, manual steering, and inadequate brakes, and it is so lacking in support for programming in the large that people were willing to put up with 1990s-era C++.
Right. And also this 'real' debugger would not be predictable, would crash, wouldn't work with my favorite environment, would be a hell to install, etc... No, thank you very much, good old GDB is fine with me.
None of those things are or were problems with Smalltalk and Lisp environments. You might as well rattle off the potential downsides of using a compiler instead of writing machine code by hand.
Reducing coupling and cutting down on code volume is also supposed to be one of the big wins for object-oriented programming. Incidentally, the languages which I've found to do the best in this department are both object-oriented and functional. This is because one facilitates reusing datatypes while encouraging coupling to operations, while the other facilitates operation reuse while encouraging coupling to datatypes[1]. Since it's desirable to minimize coupling in both respects, being able to choose the better tool for the job in a context-dependent manner is rather convenient.
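The trade-off described here is essentially the expression problem. A schematic Python sketch of the two couplings (shapes are just a stock illustration):

```python
import math

# OO style: adding a new shape is easy (just write a new class), but
# adding a new operation means editing every class.
class Circle:
    def __init__(self, r): self.r = r
    def area(self): return math.pi * self.r ** 2

class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s ** 2

# Functional style: adding a new operation is easy (just write a new
# function), but adding a new shape means editing every function.
def area(shape):
    kind, dim = shape
    return math.pi * dim ** 2 if kind == "circle" else dim ** 2

def perimeter(shape):
    kind, dim = shape
    return 2 * math.pi * dim if kind == "circle" else 4 * dim
```

A language offering both styles lets you pick the axis of extension you expect to need.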
Incidentally, I find myself wondering why so many functional programming advocates nowadays seem to be caught up in the myths that object-oriented programming and functional programming are incompatible, or that object-oriented code is inherently messy and difficult to maintain. It makes me wonder if there are a whole lot of people out there whose only experience with object-oriented programming is the way things tend to get done in C++ or something like that.
There is one language that stands way out at the forefront when it comes to working on enabling B grade programmers to write scalable, concurrent systems: C#.
I think a big part of its success is that it avoids the hype machine. It isn't a disruptive language that changes the way we do things. It can't be, or management would be justifiably wary of it. Instead, it's a language that started out as a very close clone of Java, and since then its designers have been rapidly iterating to the point that it's now a (mostly, some would say) functional language that's solidly embedded in the enterprise space.
It's somewhat disappointing that it's still missing algebraic data types and pattern matching. But considering I've got most of what I want without ever having had to pitch switching languages to an employer, I'm really not in a position to complain too much.
While I don't disagree that C# is good, I don't see how you could say that Java is in any way behind. At least in concurrency, I would argue that it's somewhat ahead. Threading in Java was more intuitive to me than threading in C#.
In the past few years, C# updates and new .NET libraries have had a sort of concurrency and parallelism focus.
The Task Parallel Library (TPL)[1] is basically like goroutines in Go. They're a concurrency construct, for executing code that gets multiplexed to threads via the thread pool by default. Absurdly useful and very easy to use. This contains stuff like Task, Parallel.ForEach, .ContinueWith() etc.
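A runnable C# sample is hard to show here, but the flavor of the task model can be sketched with Python's thread-pool futures (the Task.Run/ContinueWith analogy is loose, and the names below are just the standard library's):

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    future = pool.submit(work, 7)          # roughly Task.Run(...)
    future.add_done_callback(              # roughly .ContinueWith(...)
        lambda f: print("result:", f.result()))
    print(future.result())  # 49
```

The appeal in both ecosystems is the same: you express units of work and continuations, and the runtime multiplexes them onto pooled threads for you.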
The TPL Dataflow[2] stuff added just recently is data pipelining. Completely thread-safe communication of data, with the ability to construct a complex data pipeline that your data gets pushed through. Similar to channels in Go.
Also added recently are the async and await keywords[3] in C# and VB.NET. These are compiler features that allow you to write asynchronous code exactly the same way as you would write synchronous code. Basically a replacement for the IAsyncResult + BeginFoo/EndFoo pattern, it allows you to write simple and readable asynchronous code. Not necessarily multi-threaded as it can be used with single-threaded code, but it allows the stuff that interacts with your multi-threaded code to not bend over backwards for it.
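The same async/await pattern now exists in several languages; a minimal single-threaded sketch in Python terms, just to show the shape of it (the sleep stands in for real non-blocking I/O):

```python
import asyncio

async def fetch(n):
    await asyncio.sleep(0)   # stand-in for a non-blocking I/O call
    return n * 2

async def main():
    # Reads like straight-line synchronous code, but each await
    # yields control to the event loop instead of blocking a thread.
    a = await fetch(1)
    b = await fetch(2)
    return a + b

print(asyncio.run(main()))  # 6
```

As with C#, nothing here is necessarily multi-threaded--the win is that asynchronous code stops looking like a tangle of callbacks.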
It enhances support for the IObservable/IObserver interfaces, the duals of IEnumerable/IEnumerator, allowing Linq manipulation of asynchronous events.
As others have discussed or at least alluded to, C# certainly started out on a par with Java, but has grown into a different beast altogether, if one makes use of all the new developments mentioned.
These days my C# code is very un-Java-like, and far more Functional.
Both this and the parent post should be more specific, as pre-lambda, pre-TPL threading in C# was not too much different from standard Java threads. Long-standing language support for events and delegates put C# slightly ahead of Java at the start, but only in the last few years has C# left Java in the dust.
> While I don't disagree that C# is good, I don't see how you could say that Java is in any way behind.
GP:
[List of features where Java is behind.]
(May have missed that the sub-thread is about 'scalable, concurrent systems', though typical C# development leaves threading worries to ASP.NET/SQL libraries.)
Personally, I would appreciate it if you'd share how Java threading beats C# threading, particularly C#'s async/await. I don't know much about Java.
Fair. I had interpreted you as saying you didn't feel like Java was far behind C# in general.
I'm not terribly versed in what Java's got going for it in the concurrency department, so I'm not really qualified to speak to that. The stuff that's been happening in C# and .NET since version 3 is pretty fantastic, IMO, but I don't really know the details about how what Java has compares.
I've recently been using Clojure more and more, including introducing it for a few nascent projects at work. I haven't been this excited about a new language since Ruby in 2005.
Plenty has been written on its technical merits, but there is one aspect of Clojure that I don't think enough people pay attention to: the community! 4Clojure is just one example. The #clojure IRC channel is another. The Clojure community today feels like the Ruby community did back in 2005: hackers having fun and getting work done at the same time.
I'm pretty much totally ignorant about Clojure in practice. One part I love about Ruby is being able to drop into an irb/pry session nearly instantly, or to run the Rails console and test out code and ideas interactively, and to quickly restart the dev server if configs change. JRuby has always been appealing to me, but it's kind of annoying to use for development due to the JVM startup time. I used to do Java development full time and had the same annoyance with the compilation/restart step needed to see changes.
Does Clojure address this problem or are there workarounds (like Guard/Spork for testing in Rails)? Is there anything fundamentally different from JRuby that would alleviate the slow restarts, or am I simply thinking about the problem wrong?
The Clojure community just recently put its full weight behind nREPL. You can run nREPL from Leiningen outside of a project directory, much like IRB. To be honest, I've never noticed the JVM startup time...
But there is another way! Clojure inherits the concepts of SLIME/SWANK from the world of LISP. Essentially, since functions (not objects) are the top-level constructs in Clojure, you can redefine functions at any time while your program is running, and everything will "magically" start using the new definition.
So, the typical way one develops in Clojure is to start a SWANK or nREPL server, then load one's program files into that server. At that point, you can run any single function, or, if you're writing a web app for example, you can run the function that starts the web server, hit a URL, edit a function, re-evaluate, reload the web site and immediately see the results.
ClojureScript One is one example of how this can be taken advantage of. You can see a demo of it here: http://vimeo.com/35153207
> Essentially, since functions (not objects) are the top-level constructs in Clojure, you can redefine functions at any time while your program is running, and everything will "magically" start using the new definition.
This isn't exactly correct. It's not because clojure has first class functions that you get this behavior, but because clojure has late binding: http://en.wikipedia.org/wiki/Late_binding
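Late binding is easy to demonstrate outside Clojure too; a toy Python sketch (nothing Clojure-specific about it):

```python
def greet():
    return "hello"

def caller():
    # The name 'greet' is looked up when caller() runs,
    # not when caller() is defined.
    return greet()

print(caller())  # hello

def greet():  # redefine at runtime
    return "bonjour"

print(caller())  # bonjour -- caller() picks up the new definition
```

This lookup-at-call-time behavior, not first-class functions per se, is what makes REPL-driven redefinition work.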
True, but Ruby also has late binding. Ruby also has open classes, so you could (in theory) do something very close to SLIME with Ruby.
The problem is, Ruby also has singleton classes and singleton methods, so there's no guarantee that a change made in code and evaluated in the runtime will have the desired effect. Actually, Clojure has a similar problem with defonce, but in practice I've found that Clojure works much better with the SLIME development model than Ruby does.
Just curious, but why do singleton classes/singleton methods in Ruby mean "there's no guarantee that a change made in code and evaluated in the runtime will have the desired effect"?
class Foo
  def bar
    puts "Hello from Foo"
  end
end

a = Foo.new

# Define a singleton method on just this one instance.
def a.bar
  puts "Hello from a special Foo"
end

Now, even if you change the definition of "bar" in Foo's class definition, "a" will still have its own version. This might not seem like a big deal, but as you do more metaprogramming with Ruby, more examples like this pop up.
I use Nailgun[1] with vimclojure[2] and lein-tarsier[3] to avoid the JVM startup pain. You end up with a JVM instance (server) running all the time and the possibility of evaluating code from many different clients, like REPLs and the editor itself.
Changes to the code just need to be evaluated again to override the previous function definitions. It ends up being a really fast workflow.
Seems nice. The only problem with using it instead of Nailgun for the Clojure development workflow is that starting a new instance of the JVM means re-evaluating all the code, not only the modified definitions.
I have two questions about Clojure. (1) How fast is it compared to Java, and is the approach to speed based on rules or tricks? (2) Compared to other Lisps, how natural is metaprogramming?
I've recently started playing with Scheme and the feeling I get is not quite just functional programming. It feels closer to Symbolic Programming, the same feeling I get with the languages in Maxima or Axiom. There is an intersection with functional programming but the feeling I get doesn't cluster as close with Haskell, Ocaml or F#. And even if the occasion should rarely occur where the full extent of this power is needed, just knowing that code and execution and data and algorithms are basically indistinguishable results in an incredible mindshift in style. I haven't seen anything comparable till I started to wrap my mind around genetics + DNA.
So my question around all that rambling is, in clojure is the symbolic style on the counter, shelf or pantry?
Regarding speed, you really need to read this essay that John Lawrence Aspden wrote, "Clojure Faster than Machine Code?" He walks through several experiments in which he figures out how to optimize Clojure for the JVM. Finally, he writes a macro that expands his code into code that adds type hints to all of the vars, and with the help of the type hints the JVM goes blazing fast.
He concludes:
"And so I wonder:

This technique strikes me as very general, and very useful. All sorts of things can be represented as lookups in tables.

This whole program took me one short day to write, and the whole time I was doing things that I've never done before, just going by intuition. Once you've got the hang of it, it's easy.

I think that the program should be as fast as the equivalent java program would be, although I haven't got around to actually testing that, so I may have dropped the ball somewhere.

In any case, it's probably possible to generate code like this that does run as fast as whatever the fastest Java implementation actually is.

The JVM is widely thought to be around the same speed as native machine code or optimized C.

I'm absolutely sure that I'm not able to write the equivalent program in Java, C, or assembler without code-generation.

The code generation would be very very very much harder in Java, C or assembler.

And so I wonder, is Clojure the fastest computer language in the world?"
He doesn't benchmark against anything other than his own slow Clojure code. I see no evidence at all that this wouldn't be beaten by a simple plain Java implementation.
This is a not-uncommon feeling for Lisp-like languages, and if you want a benchmark on a particular problem I'll point to http://shootout.alioth.debian.org/u64q/benchmark.php?test=fa... where the "alternative" Common Lisp program is the fastest. (Just ahead of a Java 7 program.)
Why does the feeling persist? Because there's a lot of anecdotes and personal experience by lispers who have created something they didn't think they could do in another language "from scratch". I'll offer a link to a different anecdote: http://www.cs.indiana.edu/~jsobel/c455-c511.updated.txt "Is Scheme Faster than C?" Here are the highlights, with the most important quote at the end.
…One of the assignments in the Algorithms course was a major term project in which we were to implement and optimize the Fast Multiplication algorithm. Fast Multiplication is a recursive "divide-and-conquer" algorithm for multiplying two numbers, especially large numbers: hundreds or thousands of digits.…Part of our grade for this project was based on how fast the program ran.

…What I noticed was that each of those three recursive calls had nearly the same control context. In a simple-minded implementation in C, that context would be saved and restored three times, being destroyed after the return of each call. I thought to myself: If only I had explicit control over the flow of my program, I could speed it up significantly by creating that context only once and using it three times, destroying it only after the return of the third recursive call. Function calls are so costly! But I would never want to attempt such a thing from scratch in C.

…Furthermore, I could start with a simpler, more naive program and basically DERIVE the sophisticated one through a series of correctness-preserving program transformations. This is where Scheme really won. Because of its extremely algorithmic---almost mathematical---nature, Scheme can be easily manipulated in a sort of algebraic style. One can follow a series of rewrite rules (just about blindly) to transform a program into another form with some desirable property. This was exactly what I needed.

…I had never had the courage to use a "goto" in C before; it was just too dangerous a tool. It was kind of fun to use them now, knowing that I was completely safe in doing so. That was the amazing part: I had PRODUCED a program that I could not have WRITTEN, and would not have wanted to write directly, for that matter.

…"But does it run fast?" Speed was, after all, my main goal. The answer is a very resounding "Yes!" Out of a class of about 15 students, only one person beat me (and only barely), and he wrote significant portions of his program directly in Assembly Language.

…Real efficiency comes from elegant solutions, not optimized programs. Optimization is always just a few correctness-preserving transformations away.
Lisp-like languages make correctness-preserving transformations easy with their homogeneous syntax and accompanying macros. It may be that a simple Java implementation of the GP's example would be faster (I'd like to see a benchmark too), but charged with the task of making each one faster relative to itself, I know I'd rather work with the Clojure code, because it's easier to transform. Compared to this http://mechanical-sympathy.blogspot.com/2012/07/native-cc-li... on the problem of object serialization, getting really fast Java takes a lot of work, whereas getting pretty fast Clojure takes a lot less work (if the parent post is to be considered a "pretty fast" implementation of the particular data processing problem).
Clojure is slower than Java in general, but strives to provide you the tools that you need to write code "as fast as java" when you need it, eg. via type hints (to avoid the need for reflection).
Clojure's interop with Java is also different from most JVM languages in that it never wraps Java's native types. This means it can't eg. treat java Maps as functions like it does with clojure maps, but on the upside... there are no wrappers. :P
Metaprogramming in Clojure is done via a CL-esque defmacro system. In addition to lists, vectors, maps and sets all have useful evaluation semantics, and are commonly used in DSLs. I actually prefer it to CL because you have more basic elements to work with.
I have been using Haskell seriously for about two years now and humorously for a bit longer. For some reason I bought the old sales pitch some time ago. Now that I have a clue about the language I would like to rewrite the sales pitch for anyone considering a move.
Here's what happens in the middle of a big Haskell project.
1) Take some existing code.
2) Refactor (Add a feature, change a record element, etc.)
3) Run ghc, then fix all the errors it reports. Repeat this step as needed.
4) Done.
What would be different in python? For one, no compile errors means very little. How many test cases would you need? How would you find all the places the code must change?
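A toy sketch of that contrast in Python (hypothetical names): after a field rename, nothing complains until the stale call site actually runs.

```python
class Config:
    def __init__(self):
        self.timeout = 30   # field renamed from 'wait' during a refactor

def stale_caller(cfg):
    return cfg.wait         # old name; nothing flags this before runtime

try:
    stale_caller(Config())
except AttributeError as e:
    print("only caught at runtime:", e)
```

In the Haskell workflow above, the equivalent mistake is a compile error at step 3; here, you find it only if a test (or a user) happens to exercise that path.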
Lesson: A tough, smart compiler can be the programmer's best friend.
Meta-lesson: There is little need to discuss laziness, purity or static typing to "sell" Haskell. They are enablers of ghc. What you need is a big, complicated code base and an urgent need to make changes to it. You won't have that until you get your feet wet.
Is Haskell perfect? No, but I'll save that for an OP that is overhyping the language.
The merits of functional programming techniques are a lot more interesting than the languages that implement them, "pure" or not. Closures, first class functions, and list comprehensions are interesting topics of discussion. Java vs. Haskell not-so-much.
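All three techniques are available well outside the "pure" languages; a quick Python sketch, just to ground the terms:

```python
def make_counter():
    count = 0
    def counter():           # a closure: it captures 'count'
        nonlocal count
        count += 1
        return count
    return counter

tick = make_counter()        # first-class function, stored in a variable
tick(); tick()
print(tick())                # 3

squares = [n * n for n in range(5) if n % 2 == 0]   # list comprehension
print(squares)               # [0, 4, 16]
```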
I by no means want to start any kind of religious war but I feel that Scala has kind of eclipsed Clojure. I tend to notice more Scala skills required in real life projects than Clojure.
This is just my opinion as someone that knows neither Scala nor Clojure and without any axe to grind in this debate.
To me it seems more that Scala and Clojure appeal to two slightly different groups.
Scala is very appealing to people looking for a "better Java". A Java programmer can get up and running with Scala very quickly simply by treating it as Java with some neat and powerful syntactic sugar, and then slowly segue into the more powerful features Scala offers as and when he needs them.
Clojure on the other hand basically requires throwing out everything you know (assuming you're not coming from a LISP background) and starting from scratch. In exchange you end up with a very powerful and new way of approaching programming, one which is very appealing to many people and very effective for handling certain problems.
So as such it makes sense that Scala is more visible. If I had a large Java shop and wanted to try using some functional programming in certain projects, then Scala would be a no brainer.
This has been a thoroughly fascinating read. The attitudes towards development are echoes from the late '70s and the '80s. The wheel of incarnation is still turning, guys & gals.
Since the economics of systems have not changed much (70% of the cost of a system is in maintenance), I find it humorous that nobody has mentioned making code clear for the poor maintenance (enhancements, refactoring) schmuck. Everyone, including the hoodwinked managers, is focusing on the "window of opportunity", a.k.a. the lost-opportunity cost of getting the project done and to market. This is as wrong-headed now as it was in the '80s.
Unless you are writing one-off systems that have a real-world, crucial deadline - think new Scud detection techniques during the 1st Gulf War [done in Lisp, BTW] - just focus on making systems that can first and foremost be comprehended quickly by someone else. You're working in a team. Right? All the rest of the hyper-fast development hoopla is self-aggrandizing and for the most part a waste.
I must admit though that the reminiscences of the days of "feeling like an artist" contained in this thread bring back heady memories. I hear the egos of me and my prior associates sprinkled throughout this thread.
What I would say about functional languages isn't that they're over-rated but that they are mis-rated if they are described as a means of replacing the abomination that is Object-Oriented Programming (which I use every day).
I would freely acknowledge that functional programming constructs are powerful methods for concisely specifying complex, well-behaved systems. I use arrays of function pointers and similar things when I want to create such a subsystem in a C++ program I'm working on (obviously not full FP, but it still gives plenty of power).
The thing that FP isn't is a means to tie two huge, horrible systems together (or accomplish similar tasks). And while FP might indeed be used to create a number of elegant systems, if it is unable to beat back the forest of horrible, inelegant systems around us, it isn't going to replace OO and it isn't going to get wide adoption. Because OO may be awful, but it has the quality that, used judiciously, it can make ugly stuff somewhat less ugly.
OOP isn't intrinsically bad -- the problem is that developers have tried to graft an OOP model on software architectures that have never actually needed to be OOP. Also the disgusting abuse of inheritance. There's a conflation of "OOP" with "modular" for large software systems when it's perfectly possible to write modular code in a structured imperative language or a functional language that might actually provide more elegant solutions to the problem at hand.
I've coded some data processing systems in Lisp which were elegant and modularly structured, and I'd avoid OOP in that domain. Currently doing game development, though, and for top-level game logic that is largely about manipulating the state of entities in a world, OOP shines. Different tools for different jobs.
C++ is pretty awful, though, and many of the systems built in it that I've seen are pretty awful.
I'm not sure what you mean by "FP isn't meant to tie two huge, horrible systems together". Is any language or programming paradigm really designed with that usage in mind?
But assuming that that is in fact the case, I wonder whether it would still be the case if we lived in a bizarro world in which FP had dominated and OOP was seen in the mainstream as an impractical oddity. I suspect that, to the extent that functional languages do not make for good "glue code", that is primarily due to the impedance mismatch between FP and the largely object-oriented systems that need tying together, rather than because of some inherent limitation of functional programming itself. If those large systems were constructed in a functional style, would functional glue be more effective?
OO code can work as reasonably effective glue code even for older pre-OO procedural code, or even produce usable wrappers for big messy globs of assembly, should you have the misfortune to own that sort of mess.
So I would say that your bizarro world would need to presuppose that functional code had dominated from the first instead of any imperative code, which may have been what you meant.
However, my suspicion is that imperative code, at its most basic, is the natural form for the first computer code to take, given the development of simple, early processors. My intuition could be wrong on this point, of course.
In general, though, if you have particular operations in an imperative program that are more useful to reason about using a functional methodology, it's easy to use OO code as glue code wrapped around it (in C#, F#, Scala, etc.). By contrast, adding bits of imperative code inside a functional program requires careful thought and planning to keep from breaking the rest of the program through unexpected side effects.
So my suspicion is that OO is ultimately a better "glue paradigm" than FP, which is why I suspect that object-functional languages are likely to succeed more broadly in the real world than pure functional languages.
it always surprises me when erlang is lumped in with the other functional languages. i know it technically qualifies as one, but if i had to round up the "mainstream" fp languages, i'd put ocaml, haskell, scala and clojure in one group, and erlang off to one side by itself.
apart from the string handling, i actually liked erlang as a language. mailboxes and destructuring pattern matching are a very pleasant way to organise code.
I look forward to the day when Hacker News moves beyond these "my language is better than yours" articles written by third rate programmers.
His first paragraph makes it clear to me that he didn't understand the nature of any of the languages he originally discarded, and that he discarded them because he didn't understand them, in a sort of "it's too hard for my brain, therefore they suck!" way.
Which sets a really poor stage to then hear that Clojure is the second coming.
I'm neutral on Clojure - but in my experience, every time I learn a new language, for a while it feels like the second coming, because if it's a good language it has a reason for being, and within the scope of that reason for being, it really is the second coming.
The problem--and what makes this guy a third-rate programmer--is the latching onto that language and it becoming your blub language.
He decries FP as a fad and offers, instead, a language-specific fad.
Seriously, dude? When you step up to the level where language is less important, and it constrains your thinking less, then you'll step up to the next level of programming ability.