I have a lot of personal success stories with SSDs, but here's my current favorite.
A few months ago I had to help one of our scientists (the company is called 5AM Solutions; they're awesome) run a bioinformatics job written in Perl and R. As it turned out, for long stretches of the processing the job required around 20 GB of memory. The one server that had all the required dependencies installed had only 8 GB at the time.
When I let the job run the first time, it started to page memory out to the hard disk. The job ran for about four days, was only about 25% complete, and during that time the server was unusable for anything else. Pretty much everything came to a grinding halt.
Between that first run and the arrival of our new RAM, just for grins, I gave the system 30 GB of swap space on a locally attached SSD. With that configuration the job finished in 19 hours, and during that time the server was still responsive to other tasks.
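For anyone who wants to try the same trick, here's a minimal sketch of adding SSD-backed swap on Linux. The mount point, file name, and size are illustrative, and these commands need root:

```shell
# Create, protect, format, and enable a 30 GB swap file on the SSD.
# /mnt/ssd is a placeholder for wherever the SSD is mounted.
fallocate -l 30G /mnt/ssd/swapfile
chmod 600 /mnt/ssd/swapfile
mkswap /mnt/ssd/swapfile
swapon /mnt/ssd/swapfile
cat /proc/swaps    # confirm the new swap area is listed
```

To make it survive a reboot you'd also add a line to /etc/fstab, but for a one-off job like this, enabling it by hand is enough.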
When we finally added the appropriate amount of physical RAM the job took only 15 hours to complete.
It is the first time I have ever seen virtual memory be useful.
Virtual memory is what lets us write programs pretending that we own the entire address space, and it is very useful.
Swapping pages to disk, though, has been useful for a very long time. Yes, once your high-performance application starts swapping all the time, your performance is going to suffer by several orders of magnitude. But occasionally swapping pages in and out of disk is part of what makes modern operating systems useful. You left a large PowerPoint presentation open for several days, but never got around to working on it? Not a problem, since if the OS needs that memory, it will just swap out the pages. Without that ability, the OS would need to go around killing processes. (Which it will do if it has to, but it's a rare event because it can swap out pages.)
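As an aside, it's easy to see whether a Linux box is actually leaning on swap. These are stock kernel interfaces, nothing exotic:

```shell
# Active swap areas and how much of each is in use.
cat /proc/swaps
# Kernel-wide swap totals; the gap between them is what's paged out.
grep -E 'SwapTotal|SwapFree' /proc/meminfo
```

For live paging activity, `vmstat 1` shows the si/so columns (pages swapped in/out per second); sustained nonzero values there mean the machine is thrashing rather than just parking idle pages.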
On modern systems, there are two kinds of addresses: "virtual" addresses and physical addresses. Virtual addresses are tracked by the operating system, and they can span the entire addressable address space. So, on a 32-bit system that isn't playing any high-memory tricks, that's 0 to 2^32 - 1, or 4 GB.
But your system may not have 4 GB. So the operating system keeps a data structure called a page table that holds the virtual-to-physical mapping for each process. The processor consults this table (caching recent entries in something called a TLB) to convert virtual addresses into physical addresses.
An example using small numbers: your program has a pointer to data, and that pointer may have the value 800. Assume the physical memory on your system only spans addresses 0 to 400. The processor therefore has to convert the value 800 into a value between 0 and 400, and it's the operating system's job to maintain that valid mapping.
Why does this matter, and why is it so tied up with paging to and from disk? Say the OS pages out the page containing that data. Later it's paged back in, but at a different physical location in memory. Your program still holds the pointer value 800, yet it still works correctly, because the operating system keeps track of where in physical memory address 800 maps to for your process.
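The small-numbers example above can be sketched as a toy lookup. The page size, table contents, and addresses are all made up to match the example; a real page table lives in the kernel and is walked by hardware:

```shell
# Toy translation with 100-byte "pages": split the virtual address into
# a page number and an offset, look the page up, glue the offset back on.
page_size=100
vaddr=800
vpage=$(( vaddr / page_size ))     # 800 / 100 = virtual page 8
offset=$(( vaddr % page_size ))    # offset 0 within the page
case $vpage in
  8) frame=3 ;;                    # page 8 currently lives in physical frame 3
  9) frame=1 ;;                    # another resident page, for flavor
esac
paddr=$(( frame * page_size + offset ))
echo "virtual $vaddr -> physical $paddr"   # 3*100 + 0 = 300
```

If the OS later moves page 8 to a different frame, only the table entry changes; the program's pointer value 800 stays valid.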
People in the Windows world often say "virtual memory" when they mean "swap space" because Windows would call the amount of swap space "virtual memory size." But virtual memory is the technique described above. Read the Wikipedia entry from above, or read an operating systems textbook for a full discussion of it.
That's not entirely correct. The MMU generally handles virtual-to-physical address translation, and the OS is only involved if there is a page fault. Outside of OS architecture and very specific, intentional applications, virtual/physical memory is completely transparent. When I hear "virtual memory" I assume it refers to swap space unless otherwise noted, because the technical meaning has such a specific domain.
That's why I noted that the CPU caches the mappings in the TLB. On modern processors, the MMU is integrated with the rest of the processor, so I didn't see the need to introduce another TLA. It's a part of the processor just as much as, say, the floating point unit is. The whole point of my discussion with small pointer values was to demonstrate that the virtual to physical mapping is transparent.
When I hear "virtual memory," I think of the computer science meaning. However, I am a researcher in high performance computing systems.
Still, an SSD makes this all better. When I was swapping heavily on an Ubuntu server with the swap partition on an SSD, everything worked great. On a rotating platter, not so much.
Oh, no question, SSDs are an improvement. I'm just clarifying the systems concepts involved. SSDs everywhere may change some assumptions in the operating system. For quite a while now, we've considered paging to "disk" as performance death, and have gone through contortions to make good decisions about which pages should be paged out. If paging to "disk" everywhere gets several orders of magnitudes cheaper, we may want to do less up-front processing trying to determine good victim pages.
It is entirely possible to have a virtual memory design with less virtual address space than physical address space (e.g., 32-bit virtual, 34-bit physical), and virtual addressing would still be useful in this context.
Just a warning: a lot of these SSDs "cheat" by using compression. I bought the best drive I could find for my MacBook Pro, an OCZ Vertex 3 Max IOPS, and was rather disappointed to find out that the posted speeds are based on benchmarks with compressible data. This is an issue because if you use disk encryption, as you should, encrypted data is not compressible. As a result, my speeds are a third to a half of what's advertised, and it was not worth the extra money.
> if you use disk encryption like you should be doing
I've never yet found a really good reason to encrypt a drive; I'm kind of surprised to see a suggestion that this is the way things are done.
The OP specifically mentioned an MBP, a laptop, and laptops are pretty easy, high-value targets for theft. If my laptop were stolen, I would be relieved to know that my personal data was safe.
There are additional reasons for full disk encryption too, like ensuring that important system files have not been tampered with. Whether or not you want to go that far depends entirely on your level of paranoia.
For a home desktop the cost/benefit may be a bit different, because the computer is exposed to fewer places and people. As with many things in security, you need to weigh what is an acceptable risk to you against the cost of mitigating that risk.
If you ever have to wipe an SSD the wear-leveling will totally mess with your wipe process. It is always best practice to just do full disk encryption on SSDs, even if you aren't taking it out of the house/office.
If you work in finance, medicine, or government, or with any employee data (i.e., corporate HR systems), and use a laptop, it's often a good idea for the laptop-issuing organization to mandate full-disk encryption to prevent unintentional sensitive-data loss (an inside job will bypass this protection).
In addition to doing data recovery for people (sensitive documents, photos, etc.), I have a few clients I do work for, and I'm exposed to client lists, password files, software license keys, etc. If my home-office machine were ever stolen, I'd never need to worry, since I use full-disk crypto.
I approach full disk encryption and how we "should be doing" it the same way I approach brushing my teeth 3x a day and flossing 1x a day. Yea, I should, but I brush my teeth 2.5x a day and floss 1x a week instead, and it's worked O.K. so far.
So incompressible data reduced your IOPS. Is the drive slower at handling incompressible data, or is your OS the bottleneck because it's encrypting all data before sending it to the hardware? I appreciate that you might not care exactly where the problem is, but it'd be good to know whether SSDs suck just for realtime-encrypted file systems, or for any non-compressible data such as MP3s, JPEGs, etc.
Also, how does your encrypted-fs affect HDD performance?
Don't get me wrong, an SSD was definitely worth it over an HDD. Despite being a lot slower than the advertised speeds, it is still far better than before. I'm just saying that paying extra for a "tier 1 performance" SSD may not be worth the money over a mid-range one if that extra performance is gained by the controller doing aggressive compression (as the SandForce-based SSDs apparently do).
I know this is not just encryption overhead because, when asked about the lower performance seen by users who were not encrypting, OCZ staff confirmed on their forums that benchmarks not using compressible data will show lower numbers.
I googled a bit, and it's widely documented that SandForce controllers compress data right in the controller; this lets them physically read and write less, which artificially inflates the data transfer speed.
Attempting to compress encrypted data has no effect on its size whatsoever, so benchmarks using incompressible data are more reflective of the drive's actual performance under encryption.
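The underlying effect is easy to demonstrate without any SSD at all: compress a buffer of zeros and a buffer of random data (a stand-in for encrypted data) and compare. The file paths and sizes here are arbitrary:

```shell
# 1 MiB of zeros (highly compressible) vs. 1 MiB of random bytes.
head -c 1048576 /dev/zero    > /tmp/zeros.bin
head -c 1048576 /dev/urandom > /tmp/random.bin
gzip -c /tmp/zeros.bin  > /tmp/zeros.bin.gz    # shrinks to roughly 1 KB
gzip -c /tmp/random.bin > /tmp/random.bin.gz   # stays at roughly 1 MB
ls -l /tmp/zeros.bin.gz /tmp/random.bin.gz
```

A compressing controller only has to write the small version to flash, so it looks fast on zero-filled benchmark data; encrypted input forces it to write every byte, and throughput drops to the flash's raw speed.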
I am surprised to see the OCZ drives recommended so frequently in the article. So far I've owned two drives from Crucial, one from Intel, and one from OCZ. The first SSD I purchased was a Crucial, and it is still running like a champ three years later in a MacBook Pro. The OCZ drive failed within 90 days. That wouldn't be a big deal in and of itself; things break. However, their customer service is terrible: I got nothing but a two-week-long runaround when I tried to RMA it.
I have two OCZ drives, a Vertex 60GB in a lightly-used Fit-PC2, and a Vertex 2 240GB that was in my ThinkPad W510. They both failed after a few months.
To their credit, OCZ did replace the drives promptly and without hassle. The 60GB replacement hasn't failed (albeit again under light use). The 240GB is on a Fedex truck for me today, so we will see on that one.
I put a 300GB Intel 320 in the ThinkPad about three months ago. No problems at all with it, but time will tell!
The Intel drive has better utility software - you can do a secure erase, for example. OCZ doesn't provide a utility like that. There are open source utilities and such, but it can be pretty chancy getting them to work (I never succeeded).
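For reference, the usual open-source route on Linux is issuing an ATA Secure Erase through hdparm. These are real hdparm flags, but the device name and the temporary password are placeholders, and the process can be exactly as chancy as described: drives often come up "frozen" by the BIOS and refuse the command, and a mistake here permanently destroys all data on the drive:

```shell
# Check the drive's security state first; it must report "not frozen".
hdparm -I /dev/sdX | grep -A8 Security
# Set a temporary drive password, then issue the secure erase with it.
hdparm --user-master u --security-set-pass tmppass /dev/sdX
hdparm --user-master u --security-erase tmppass /dev/sdX
```

On SSDs this tells the controller to reset all cells (including the wear-leveled spare area a plain overwrite can't reach), which is why it's preferred over dd-style wiping.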
An aside: I kept the ThinkPad's hard drive and put it in an Ultrabay adapter replacing the optical drive. Large and less frequently accessed files (photo, video, downloads, etc.) go there. This gives lots of extra storage and should save wear on the SSD (not that it helped the OCZ any).
The hard drive is set to turn off after a few minutes even on AC power, so it sits idle most of the time. When I do need an optical drive I use a USB one, or I can swap the original drive back in.
The first OCZ was a Vertex 2 180GB and failed after 1 month. I got it RMA'd and received my second Vertex 2. That one failed after 2.5 months.
Both Intels are still running strong. I'm also surprised to see the OCZs so strongly recommended. The RMA process sucked both times, and the second time is working out to be a refund. I haven't received the refund yet, but at least there is a process.
I've burned through three Vertex 1s; the third was replaced at my request during RMA with a Vertex 2 (which was nice of them at least).
The first was when my current system was new two years ago. I was never able to get it working.
The second lasted a year and a half before it started corrupting files and finally died.
The third lasted almost two whole weeks before I received a BSOD while gaming and a no longer detected SSD.
All three of these required some form of CMOS reset/firmware fiddling to get running initially.
At least the Vertex 2 seems fine so far. But overall, I'm not too impressed with OCZ. I'll give them credit for replacing the drives without much fuss, but the last two RMAs took around 2 weeks to receive the replacement. Which is a huge pain in the ass when it's your OS drive in your rig.
I had a similar breakage experience with OWC, which makes SSDs using the same (new/aggressive) SandForce controllers as OCZ. My drive (a "Mercury Extreme Pro", like a "Vertex 2") failed in the first week: it would not cold boot, though if you left it powered on at a blank zombie white screen for ~5 minutes you could get it to boot on restart.
OWC's customer service was stellar though. Every time I called the phone was answered by a tech (an engineer I suspect) who very much knew what he was talking about and never wasted my time with scripts. After one failed fix -- it looked like zapping PRAM had fixed it, but it didn't -- they mailed me a new drive and I returned the old in its packaging. The new drive has been running fine since February.
I am also very surprised to see OCZ drives recommended. I have 10 OCZ Vertex 3 SSDs in a single server, and in the past two months six of them have failed. OCZ is utter and bitter crap.
I have a Kingston and an OCZ Vertex 2, and the OCZ drive failed in a month. My experience with the RMA was similar: it took three weeks to get a replacement, and email support was very slow.
OCZ's SSD offerings have carved out a very good chunk of the SSD market. Their Vertex drives almost always receive favorable reviews and perform competitively, and their Agility drives are a popular budget SSD. Judging from the past few shootouts I've seen, they are one of the drives to beat, along with the Intels.
I recently bought a Crucial M4 256GB SSD, and I have been extremely satisfied. Blazingly fast and no issues whatsoever. Even better, on my late 2008 MacBook Pro model, I get SATA II speeds (3Gbps). Most other drives (e.g., OCZ Agility) will only be recognized as SATA I (1.5Gbps) on my Mac. This makes the Crucial drive literally twice as fast in best case scenarios.
As an example of the effects it has had on my computer performance, building my upcoming book (which invokes rake and JRuby) used to take 1m 30s on a 7200 RPM drive. Now it takes 15 seconds. Also, productivity apps like Office open in a split second.
Upgraded my Cr-48 to a 40 GB Intel drive over the weekend. An SSD with room to spare is a marvelous thing. So, I have a late-2008 MacBook (white) with what looks like a very similar processor (Core Duo, 2.4 GHz) and 4 GB RAM, running Lion. I've been looking at the Crucial 256, or maybe the Intel 160. How long have you been running this drive, and are you running Lion? How did the upgrade go? Did you need anything besides Time Machine? Do you use MacPorts, PostgreSQL, or Apache, and if so, any problems after the upgrade?
Also, have you looked into 3G inside your MacBook? Is there any way to make that work? A dual-function Wi-Fi/3G mini PCIe card?
I've made a similar transition (to the Intel 160); here's my input, in case it's useful:
> How long have you been running this drive
Since December 2010.
> And are you running Lion
I am now; prior to that, it was Snow Leopard
> How did the upgrade go?
The hardest part was trimming my 250 GB drive down to 160 GB; mostly this just involved wiping out old VMs, disk images, extra programs, etc. and only keeping a subset of my iTunes library (minus TV shows, etc) on the laptop.
> Did you need anything besides Time Machine?
I don't suppose I did, but for the sake of speed I just did an image of the drive (block copy, using Disk Utility).
> Do you use …, and if so any problems?
None at all. Things are generally much more responsive, especially in cases that involve scanning directories. Deleting large directories full of source code is dramatically faster. chown'ing the Linux kernel tree, for example, feels almost instantaneous in comparison.
One issue I did have that's worth mentioning: because I did a direct copy of my Snow Leopard install (which, in turn, had been upgraded from Leopard), I found that the performance of the system was not at all what I'd hoped. It wasn't unresponsive (beachballs disappeared entirely), but it certainly wasn't blazing fast.
When I took my current job, I imaged my SSD to a backup drive and imaged my new work laptop's fresh install to my SSD, and the difference was like night and day. Suddenly, my computer was a speed demon. If it weren't for having to type my password, I feel like it would have gone from power off to loaded desktop in ten to fifteen seconds or so. It was unbelievably fast.
So my suggestion is: image your old drive, do a fresh Lion install on the new drive, and then use Migration Assistant or a Time Machine restore to bring your user directory back (and get rid of all the junk your account loads at startup that it doesn't need).
We use 600 GB Intel 320s as second drives in MacBook Pros for running virtual machines. They're the highest-capacity 2.5" SSDs available, and they aren't cheap. However, we can put multiple virtual machines with 100 GB+ databases on them, travel to countries with poor internet connectivity, and teach workshops where the students run intensive queries. It's as if we've shrunk a $100,000 storage array and stuck it in our carry-on luggage.
'Cause SSDs are not as reliable as they appear to be under server-side workloads. Enterprise-grade SSDs are significantly more expensive than consumer-grade ones: you're looking not at $1 to $2 per GB, but $10 to $20 per GB. Given the capacity required for most use cases, SSDs are hardly a good choice as primary storage for critical servers.
In addition, most RDBMSes are optimized for mechanical disks. Optimizing for SSDs has only recently become interesting, as SSD prices have dropped to barely reasonable levels.
> In addition, most RDBMSes are optimized for mechanical disks.
Since SSDs blow the hell out of platters no matter what the workload or access pattern is, you'll still get significantly improved performance, even without SSD-specific optimizations.
The one "optimization" I'd like to see come out of SSDs' rise is deoptimization: since access patterns become less important (or at least naive access patterns become less costly), I'd like to see systems simplified and "optimizations" removed rather than new ones added.
We (bu.mp) use a lot of SSDs in our datacenter; we've probably used ~100 64 GB X25-Es, and recently we have added 20+ Micron P300 disks.
The first thing we used to do was try to convince the hardware RAID controller not to do anything clever, like readahead, because seek times are practically meaningless on SSDs. Despite our efforts at disabling every optimization tailored for rotating platters, we still found that software RAID (Linux md) outperformed a classically great hardware controller, perhaps by virtue of being "stupider".
So that is our go-to configuration now: 200 GB Micron P300 SLC drives with md RAID.
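A sketch of that kind of setup, assuming four drives and RAID 10; the device names, RAID level, and readahead tweak are illustrative choices, not their actual configuration, and everything here needs root:

```shell
# Build a software RAID 10 array out of four SSDs with Linux md.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd
# Drop readahead on the array; on SSDs, seeks are essentially free,
# so speculative reads mostly just waste bandwidth.
blockdev --setra 0 /dev/md0
cat /proc/mdstat    # watch the array assemble and resync
```

The appeal of md here is exactly the "stupider" quality mentioned above: no controller firmware second-guessing the access pattern with rotational-disk heuristics.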
I have a lot of friends using "consumer-grade" SSDs for their DB workloads, and the difference is night and day.
This might be a horrible example, but one friend had an installation of FileMaker Pro running on a completely tricked-out Xserve (RAID with 15K SAS drives). With 250 concurrent users he was completely maxed out on CPU, and many queries took minutes to complete.
When he moved to a 256GB SSD his CPU load now never goes above 20% and not one query takes more than five seconds, period.
Also, please reference my old HN post from over a year ago.
In my experience, SSDs work great in RAID-5 or RAID-6 setups, even for database workloads (blasphemy!). In fact, six or eight consumer SSDs in a RAID-5 array will put your huge FC array of 15k drives to shame.
I'm surprised that I'm not seeing even a casual mention of the OWC SSDs in this review. From everything (else) I've ever read, they tend to be way ahead of the curve in terms of innovation and performance. Sure, things change quickly stats wise... but they aren't even on the chart.
Subjectively, I would describe my OWC 120GB drive as "blisteringly fast". Previously I had a 100GB OWC with extra redundancy for server loads (overkill in my iMac 2010) and the first time it booted it was like being personally greeted by the Flying Spaghetti Monster.
If you use your computer for extended periods every day, then the performance payoff outweighs the cost. SSDs have come down quite a bit in price lately as well. As the article discusses, there are many options available today, from putting just your OS on a smaller, speedy boot drive to housing everything on a larger one.
I for one initially picked up a 60 GB OCZ Vertex 1 a while ago, then about 10 months ago moved up to a 120 GB Vertex 2. I will never look back.
Judging SSDs by these little performance numbers is kind of an amusing endeavor: the latest SandForce already saturates a 6 Gbps SATA III link, and others are catching up fast. This pretty standard unit of measure is hitting the limits of the interface, not the drive.
What other criteria are there? GB/$, performance/watt, watts at idle, IOPS, and warranty or lifecycle costs. Personally, I find something "big enough", ignore power consumption and IOPS (neither is going to make a huge enough difference for me to concern myself), and then get whatever I can find with the longest warranty.
> the latest SandForce already saturates a 6Gbps SATA III link
Considering the dinosaur pace of new interfaces, and the fact that SATA III isn't even fully "rolled out" yet, I wonder if we are going to see a new wave of hackish custom solutions by manufacturers. Dual SATA ports on your drive, anyone?
I'm pretty happy with the 256 GB drive in my MacBook Air. I would have considered this limiting until recently, but wireless networked storage is cheap and easy to implement (I basically just connected a cheap 3 terabyte drive to my router), and Thunderbolt or USB 3 storage gives us a lot of options when more high-performance storage is necessary. Also, cloud services (Facebook/Flickr for photos, Rdio for music) have made me much less of a data packrat. So I increasingly consider the hard disk "working storage" for applications and the most crucial files.
I knew this would happen. I bought a Thinkpad T420s which has a non-standard (or new standard?) 7mm hard drive caddy instead of the much more common 9.5mm. I bought a 64GB Crucial M4 because it could be modded to 7mm and the price was right. I'm happy with the performance. It feels noticeably faster than a standard magnetic drive, but the 64GB is in tier 10 on this comparison. I guess as long as I'm happy with the performance it doesn't really matter. I wish this article was around 2 weeks ago.
I recently grabbed a 512 GB Crucial M4 for $730 delivered from http://www.bhphotovideo.com/ (no affiliation; the price has risen slightly since). For some reason I'd been putting it off, but seeing it there for less than $1.50 per gig, at the capacity I wanted for my MacBook, suddenly seemed a no-brainer. Hell, I remember paying $100 per gig back in the '90s. Somehow my mental model of "reasonable prices to pay for storage" has just been totally biased by years and years of dirt-cheap HDDs.
Frankly it hasn't been the jaw-dropping entering-hyperdrive performance boost I had kind of hoped for (I'm a rails dev). While a definite improvement, it seems that for many of my most common tasks (read: tests) I have merely pushed the bottleneck back onto the CPU. But while it hasn't sped up all that much, it never slows down, which you don't notice at first but over time has a subtle confidence-building effect. Application launch speeds are much improved, for those who spend a good part of their day launching apps, which is not me. I tend to launch a few and then use them for the next two weeks before I restart. I also like how the drive does not make whining sounds when I move the computer before it's gone to sleep.
Recommended, anyway, they're cheap enough now that it's not a luxury, even if like me you use most of it for your work music collection.
Using someone else's sans-SSD computer will remove any doubt you have about the purchase. I occasionally have to do the usual family tech support, and I find the slow response infuriating.
I ended up getting a 128 GB drive for the OS and apps and installed a MacBook optical-bay HD caddy with a 500 GB drive for media. If you can live without an optical drive, this config is highly recommended.
It's strange the way people have different reactions to the SSD performance factor. I'm a rails dev too, and for me the SSD was nothing short of life-changing. When I have to go back to a non-SSD machine I get cranky.
I think the difference is that if you only do one thing at a time and have sufficient RAM, then an SSD does almost nothing. Whereas if you have a lot of processes running and accessing files spread around the drive, an HDD will quickly send you to beach-ball land.
Parallels in particular will do you in in a hurry.
I'm in the habit of running IE7/8/9 in separate Parallels instances, Photoshop, Safari/Firefox/Chrome/Opera, MacVim, MySQL with 20 GB of data in it, a local Rails server, a local Rails console, and iTunes or Spotify.
With an SSD I basically just run whatever I want, and things slow down only gradually and linearly as I add processes. Without an SSD I feel like listening to music is going to slow things down; it adds unnecessary mental overhead to my whole workflow. That mental factor is, I think, why people may overstate the absolute benefit of an SSD.
"for many of my most common tasks (read: tests) I have merely pushed the bottleneck back onto the CPU."
This is my experience as well. I have an older computer (2009) with a low-voltage processor (SU3700) that I am looking at upgrading, but it's not really justifiable yet.
So I thought I'd grab an SSD for a nice performance pick-me-up now, then throw it in my new computer when I got it.
However, while there is a performance increase, it isn't really all that noticeable. (I use another computer often with an HDD, and can hardly tell the difference.) The CPU is definitely the bottleneck on most tasks.
From what I read at the time (January 2011), SandForce-based SSDs got much better performance on Mac systems. I think this had to do with OS X's lack of TRIM support.
On the other hand, right after I upgraded my SSD, I went to ruby 1.9.2 and rails 3, which are much slower due in part to a bug that makes requiring files take forever.
Not sure about others, but in my case ('10 MBP 13" with a Vertex 2 in an Optibay), I had occasional stalls (anywhere from 2 to 30 seconds) until TRIM Enabler ( http://www.groths.org/?p=308 ) showed up and I installed it (OS X Lion seems to support TRIM on this drive natively).
I think this is because, although SandForce drives have native garbage collection, it can sometimes happen at a very inopportune moment (say, a 20-second "pause" during critical moments of an SC2 game).
What this does is actually pretty simple: it replaces the string "Apple SSD" (used to identify which drives TRIM is turned on for) in a file in the relevant kernel extension with zeros. It also creates a backup copy of the original file.
The TRIM enabler you linked to replaces a whole kernel extension (meaning you might end up with an older version of the extension), which is obviously monumentally stupid.
I felt the same way -- that I wasn't blown away by the performance -- until I got used to
find . -iname "*.java" | xargs grep ...
or Ctrl-H in Eclipse blowing through millions of lines of source code in 10-20 seconds instead of 3-5 minutes. I tried going back briefly while helping a friend on his laptop, and waiting for grep to finish was amazingly annoying.