29 October 2009

Double, Double Toil and Trouble

There's been a hurricane of consternation on the Yahoo! (and, I strongly expect, other) message boards for STEC, our resident poster child of high-performance SSD. The shares have cratered from their high of ~$40 to ~$20 as of yesterday. Today they're up a bit. Message boards hold some interest, since a small fraction of the stream is intelligent. No, I don't hold any STEC, nor do I care whether they get rich. I do want them, or a company doing what they do, to continue.

One piece of intelligence is a reference to this blog at IBM. It doesn't navigate very well, and I'm sure not going to regurgitate it here, except to say that IBM has, if this fellow speaks for the company, backed off somewhat from the PCIe (Fusion-io) approach and reverted to straight SSDs; STEC's, according to what he has written. The posting of interest talks about why PCIe went away and SAS drives came back.

OK, one quote, from the 18 Sep entry:

Another interesting side note can be seen when you add the areal density of silicon, which to this day has tracked almost scarily to Moore's Law. If, for example, the GMR [giant magnetoresistive] head had not been invented by Stuart Parkin, then we'd probably have had mainstream solid state drives in the mid 90's. If nothing else comes along to push spinning rust back to the heady days of 65-100% CAGR [compound annual growth rate], then by 2015 solid state density will overtake magnetic density - in a bits per square inch term.
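To give a rough feel for that 2015 crossover claim, here's a back-of-the-envelope projection in Python. The starting densities and growth rates below are illustrative assumptions of mine, not figures from the IBM post; the only point is the compounding arithmetic.

    # Back-of-the-envelope crossover projection. The 2009 starting densities
    # and the CAGRs are assumed, illustrative values, not measured data.
    hdd_density = 400.0   # Gbit per square inch, assumed for spinning rust in 2009
    ssd_density = 150.0   # Gbit per square inch, assumed for NAND flash in 2009
    hdd_cagr = 0.40       # assumed ~40% annual areal density growth for HDD
    ssd_cagr = 0.70       # assumed ~70% annual growth for flash, tracking Moore's Law

    year = 2009
    while ssd_density < hdd_density:
        hdd_density *= 1 + hdd_cagr
        ssd_density *= 1 + ssd_cagr
        year += 1

    print("crossover in roughly", year)   # about 2015 with these assumed inputs

Swap in different assumptions and the year moves, but any persistent growth-rate gap of that size puts the crossover within a handful of years, which is the quoted point.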


Now, this is important. The shape of the HDD-to-SSD transition has changed, based on public pronouncements from the vendors, since January of this very year. Up until then, the notion was that HDD arrays would be replaced by (smaller unit count) SSD (mirrored?) arrays. The higher cost of SSD would be mitigated by the lower unit count resulting from not requiring striping (and possibly mirroring) units in the array. (And, as an aside, from refactoring databases to BCNF, thus jettisoning at least an order of magnitude of those bytes.) Now the notion being promoted is "disk caching", with some small number of SSDs fronting the existing HDD array. I'm still not convinced this makes much sense, but there you are. The major impact of this approach is to simply not deal with data bloat, and thus forgo the maximum benefit of SSD, settling instead for "good enough" improvement.
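To see why "good enough" is the right phrase, consider the hit-rate arithmetic of a small SSD tier fronting an HDD array. This is a minimal sketch with assumed latencies and hit rates, not vendor numbers.

    # Minimal sketch of the "disk caching" arithmetic: the average read latency
    # of an SSD tier fronting an HDD array, weighted by cache hit rate.
    # The latencies and hit rates are assumptions for illustration only.
    ssd_latency_ms = 0.1   # assumed flash random read latency
    hdd_latency_ms = 8.0   # assumed enterprise disk random read latency

    def effective_latency_ms(hit_rate):
        """Hit-rate-weighted average latency of the two-tier setup."""
        return hit_rate * ssd_latency_ms + (1.0 - hit_rate) * hdd_latency_ms

    for hit_rate in (0.5, 0.8, 0.95, 1.0):
        print(f"hit rate {hit_rate:.0%}: {effective_latency_ms(hit_rate):.2f} ms")

Even at a 95% hit rate the misses leave the average around five times slower than a pure SSD array, and it's the un-refactored bloat that works against keeping the hot set small enough to fit in the cache.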

If we are headed toward a crossover in areal density (but not necessarily in cost per bit), then preparing for, and building now, pure SSD systems isn't implausible. While I don't relish the "disk caching" approach as the end game, if it serves to jam a size 12 brogan in the door, I'll take it.
