28 Dec 07

SSD RAID = 800 MBps & 0.1 ms latency. Really.

Next Level Hardware.com has a report on their Battleship Mtron. This is a test of solid state disks (SSDs) and how they can take your computing system to the next level. In reality, they take a computing system to the next order of magnitude. Previous tests have taken the Mac Pro to 284 MBps with four internal hard drives striped in a RAID 0.

Would you like 800 megaBYTES a second with near instantaneous access?

Read on…

As has previously been reported, SSDs offer incredible seek times because you’re reading from a big hunk of solid-state memory. There is no waiting for the information on sector 23 of the disk platter to spin around and pass under a magnetic head to be read. When trying to edit multiple layers of video in real time, with various render files here and there on your media drives, you can imagine the data heads of the hard drives to be like a young child playing Whack-A-Mole at the local arcade. The heads thrash about in a seemingly random pattern, just trying to keep up with the game of getting to the next bit of data ahead of the playhead in the timeline.

But no matter whether the disk spins at 4,200, 5,400, 7,200, or 10,000 rpm, the heads sit in only one place at a time and the data could be anywhere on the disk. The heads have to wait for the disk to rotate so the data passes underneath them to be read. Even on the fastest-spinning drives, this may take a revolution or two, depending on how quickly the heads can seek to the precise “track” on the disk.

Solid state drives, on the other hand, have no moving parts. No waiting for the platter to spin around. Like the RAM in your computer, SSDs just access a specific memory location and can do this in 0.1 milliseconds, as opposed to the 7 or so luxurious milliseconds spinning-platter systems take.
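
To put numbers on that rotational wait, here’s a quick back-of-the-envelope sketch in Python. The half-revolution average and the spindle speeds are standard figures; the 0.1 ms access time is the one quoted above, and remember that head seek time adds a few more milliseconds on top of the rotational delay.

# Rough rotational-latency math: on average, the platter has to spin about
# half a revolution before the requested sector passes under the head.
SSD_ACCESS_MS = 0.1  # access time quoted above for the SSDs

for rpm in (4200, 5400, 7200, 10000):
    ms_per_revolution = 60_000 / rpm           # one full revolution, in milliseconds
    avg_rotational_ms = ms_per_revolution / 2  # average wait is half a revolution
    print(f"{rpm:>6} rpm: ~{avg_rotational_ms:.1f} ms average rotational delay "
          f"({avg_rotational_ms / SSD_ACCESS_MS:.0f}x the SSD access time)")

At 7,200 rpm that is roughly 4 ms of rotation alone, which is how spinning platters end up in the 7 ms neighborhood once seeking is included.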

Couple this with the ability to shovel data in and out at RAM-like speeds, and you’ll quickly reach other bottlenecks, just like the folks at Next Level Hardware did:

Since we already know that these Mtron SSD’s have the theoretical capability to scale in almost perfect multiples using Raid 0, something is definitely wrong… Five drives put out only 386 MB/s sustained read when we should be anywhere from 550 to 600 MB/s easily. After countless hours of research about the Areca 1220, I finally stumbled across a… very informative post … explaining about … throughput maximum on the ARC-1220 controller. The limitation happens to be right around 400 to 450 MB/s max on the 1220. … I junked the 1220 immediately.

[Chart: Next Level Hardware’s sustained-read results]

Yes. The drives were not the bottleneck; the controller chip on the SATA RAID card was. Even upgrading to the best card he could find yielded the same end result: the card was the bottleneck.
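
Here’s a rough sketch of the arithmetic behind that conclusion, assuming (as the article does) that RAID 0 reads scale in nearly perfect multiples of one drive’s sustained rate until the controller tops out. The ~120 MB/s per-drive figure is my own estimate inferred from the 550 to 600 MB/s expected of five drives, not a published spec.

# RAID 0 read throughput scales with drive count until the controller caps it.
PER_DRIVE_MBPS = 120        # assumed sustained read per SSD (~600 MB/s across 5 drives)
CONTROLLER_CAP_MBPS = 400   # practical ceiling reported for the Areca ARC-1220

def sustained_read_mbps(n_drives, per_drive=PER_DRIVE_MBPS, cap=CONTROLLER_CAP_MBPS):
    # Ideal linear RAID 0 scaling, limited by the controller's throughput ceiling.
    return min(n_drives * per_drive, cap)

for n in (1, 2, 5, 9):
    print(f"{n} drive(s): ~{sustained_read_mbps(n)} MB/s")

Past about three or four drives, adding more does nothing on the capped card, which is exactly the wall Next Level Hardware ran into.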

Right off the bat using the Areca 1231ML and the same 5 drives, sustained read went up to 608 MB/s and burst jumped a couple hundred points to 1200 MB/s. This means our old controller had us capped at 400 MB/s throughput and took away an extra 200 MB/s of un-tapped power from the Mtron units. However, knowing in my head my full intentions of achieving close to 1 GB/s throughput with 9 or 10 drives, I did some research on the Areca 1231ML as well. It turns out not too many people really have this kind of problem using current generation mechanical HDD’s. Single consumer level raid controllers are not usually meant to be scaling at 800 to 1000 MB/s sustained throughput. But, the only information I could find on the 1231ML led me to believe it runs out of steam right around 800 MB/s. So, again this was more bad news for me.

This is certainly bleeding-edge stuff. It’s seldom that end users push hardware so hard that individual components become the bottleneck of the entire data path. It’s nearly impossible to get the I/O specifics for the individual components on commercial cards. We trust the manufacturers to make them to the best of their ability but, as Next Level Hardware divines, there is still a lowest common denominator. With internal disk-based RAID systems hovering at much lower data rates, the manufacturers really have no call to ensure a gigabyte per second of data throughput, especially since there’s very likely to be a bottleneck somewhere else down the line, like the PCI bus in the computer.
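
Put another way, the whole data path is only as fast as its slowest link. Here’s a minimal sketch of that idea; the figures are illustrative, loosely based on the numbers reported in this article rather than measurements of any particular system.

# The end-to-end rate is the minimum over every component in the chain.
data_path_mbps = {
    "SSD RAID 0 aggregate (drives)": 1100,  # theoretical aggregate for nine drives
    "RAID controller I/O processor": 830,   # where the controller tops out
    "host PCIe bus": 2000,                  # assumed, generous slot bandwidth
}

slowest = min(data_path_mbps, key=data_path_mbps.get)
print(f"Effective throughput: ~{data_path_mbps[slowest]} MB/s, limited by the {slowest}")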

Our final drive setup was the extremely expensive, yet impressive 9 X Mtron 16GB Professional’s in Raid 0. Again you can see that we have hit another limitation on the expensive and high end Areca 1231ML. This time it is the high end and enterprise praised IOP341. With an uncapped controller we should theoretically be at 1100 MB/s sustained read right now which would un-cork an additional 300 MB/s out of our current setup. The article will have to suffice with only 830 MB/s sustained read.

Compare this to results from Bare Feats that tested four internal SATA drives inside a stock Mac Pro (with built-in controllers) and achieved only 284 MBps. That just seems to pale in comparison to the SSDs when you are talking about raw performance and “getting the job done.”

The advantage scale tips back the other way when you talk price, though.

Four of the Seagate 750 GB drives would currently run you $1,400.

Each of the fastest Mtron 32 GB SSDs will cost you $1,160.

To achieve the results Next Level Hardware found, they used NINE drives plus the Areca 1231ML card at about $765. That totals about $11,205 for 288 GB.

In summary, the hard drives cost about 47¢ a gig.
The SSDs cost about $38.90 a gig.
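
For anyone who wants to check the math, the per-gigabyte arithmetic works out like this, using the prices and capacities listed above:

# Cost-per-gigabyte comparison using the prices quoted in this post.
hdd_cost, hdd_gb = 1400, 4 * 750            # four Seagate 750 GB drives
ssd_cost, ssd_gb = 9 * 1160 + 765, 9 * 32   # nine Mtron 32 GB SSDs plus the Areca 1231ML

print(f"HDD RAID: ${hdd_cost / hdd_gb:.2f} per GB")   # about $0.47
print(f"SSD RAID: ${ssd_cost / ssd_gb:.2f} per GB")   # about $38.91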

Do you feel the Need For Speed?
