Re: Is EMC's CAS Centera considered "permanent data"?



Hi Bill,

In article <Lsqdnfk9MPGi2wreRVn-tw@xxxxxxxxxxxxxxxxxxxxxxxx>,
Bill Todd <billtodd@xxxxxxxxxxxxx> wrote:
>> with really old equipment, it is no longer
>> feasible to acquire spare disks that are compatible with the old
>> ones.
>Do contemporary arrays really still typically require any real degree of
>compatibility in replacement disks, as long as they are at least as
>large as what they're replacing (and use the appropriate interface, of
>course)?

As far as I know (and I'm not really an administrator or user of large
disk arrays, I just have them around as part of my job), for a
high-end array you need to find replacement disks that
- have the same interface (SCSI, FC, SATA, SSA, ...)
- have the right connector (for example for SCSI, this usually means
the 80-pin single connector that has both power and data, although
older Hitachi arrays used dual-ported SCSI disks with two 50-pin
Centronics connectors),
- have firmware versions the array controllers recognize (most
high-end arrays will only accept drives of known model and firmware
version),
- and have the correct capacity (not always strictly true, but for
traditional RAID controllers, if one disk in the RAID group has higher
capacity, the extra capacity can't be used, unless the array
virtualizes RAID groups across physical drives, which only the most
recent designs do, if any)
which pretty much restricts you to same-model replacement drives,
obtained through the array vendor (not off the street) to get the
correctly modified drive firmware.
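
Just to make that concrete, here's a toy sketch in Python of what the
acceptance check boils down to. The model and firmware names are the
"model X, firmware Z" placeholders from above, not anything a real
controller ships with, and a real qualified-drive list lives in
controller firmware and is far longer:

# Minimal sketch of the acceptance check an array controller might run
# on a replacement drive. Names are placeholders, not real part numbers.
QUALIFIED = {("model-X", "firmware-Z"), ("model-X", "firmware-Z1")}

def accepts(drive, slot_interface="FC", group_capacity_gb=73):
    """Would the array admit this drive into an existing RAID group?"""
    if drive["interface"] != slot_interface:                 # same interface (SCSI, FC, ...)
        return False
    if (drive["model"], drive["firmware"]) not in QUALIFIED: # known model + firmware only
        return False
    if drive["capacity_gb"] < group_capacity_gb:             # at least the group's drive size
        return False
    return True   # capacity beyond group_capacity_gb simply goes unused

# A retail drive ("model A, capacity B>Y, firmware C") gets rejected:
retail = {"interface": "FC", "model": "model-A", "firmware": "firmware-C",
          "capacity_gb": 300}
print(accepts(retail))   # False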

Everything except the first two requirements sounds pretty harsh. But
look at it from the point of view of the array vendor: they have spent
a huge amount of time and money qualifying and testing the whole array
and its constituent components to work correctly, when using disk
drives of model X, capacity Y, firmware version Z. Now some idiot comes
along, buys a replacement drive at Fry's or CompUSA or Best Buy, and it
is model A, capacity B>Y, and firmware version C. At this point, any
bug or incompatibility puts all the data on the disk array at risk.
And high-end disk arrays are not supposed to lose data, and customers
tend to get very ticked off when it happens. So the array software is
deliberately restrictive, to protect both the customer (who is being
penny-wise and pound-foolish) and the array vendor.

Or to put it differently: customers who are willing to spend on the
order of $1M on a disk array should be smart enough to budget for the
expected maintenance and part-replacement outlays.

Now, with low-end arrays (for example my favorite for home use, the
3Ware cards using IDE drives), this is a different story. I think on
a 3Ware card you can mix and match drives with reckless abandon -
except that every drive in the RAID group contributes only as much
capacity as the smallest drive in the group (see the quick arithmetic
below).
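
Quick toy arithmetic, with made-up drive sizes, showing where the
capacity goes:

# Usable capacity of a RAID group built from mismatched drives: every
# member contributes only as much as the smallest drive.
drives_gb = [250, 250, 300, 400]           # four IDE drives of different sizes
per_drive = min(drives_gb)                 # each effectively contributes 250 GB

raid0 = per_drive * len(drives_gb)         # striping: 1000 GB usable
raid5 = per_drive * (len(drives_gb) - 1)   # one drive's worth of parity: 750 GB usable
print(raid0, raid5)                        # the extra 50 + 150 GB simply go to waste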

The big difference here is one of customer expectation. If a large
bank or insurance company buys a disk array from one of the big
vendors [EHI][BDM][CMS], and the array loses a lot of data, the CEO of
the bank/insurance will call Joe or Shinjiro or Sam on the cellphone
while they are in the middle of a golf game, and verbally tear into
them, thereby disrupting their golf scores. Bad scene. If my home
machine loses its data because I used a cheap flea market disk on my
3Ware or Adaptec or LSI RAID controller, I go to the kitchen, pour
myself a stiff drink, tell my wife not to use the computer for a day
or two, kick the dog for good measure, and start looking on the bottom
shelf of the gun safe for my most recent set of backup tapes (FYI, we
don't have a dog at home, that part of the anecdote was a joke).
Therefore, big disk arrays built by the big companies are more
paranoid than PCI cards intended for a different user population.

>Not to be unduly pedantic, but service life (normally 5 years for array
>disks) and MTBF (which these days is at least speced to be more like 1+
>M hours) should be largely independent of each other as long as the
>latter significantly exceeds the former.

True - economically viable service life is today much shorter than
MTBF. It is very tempting to replace 9GB disks with 180GB disks, as
the 180GB disk is just as fast and uses no more power. This might change
in the future, as we expect the capacity growth of disk drives
to slow dramatically; it is very possible that disks bought in 2006
will still be economically viable (capacity-wise) in 2011 or 2016.
And if they actually achieve the rated million-hour MTBF (this is a
big if), enough of them will still be running to make it sensible to
continue using them that far into the future.

We are sort of at a strange inflection point. The MTBF of drives has
in the last few years increased massively, from O(100K) to O(1M)
hours. Whether this million-hour MTBF can actually be delivered in
practice remains to be seen (ask me again in a decade or two). At the
same time, the capacity of drives has been increasing by 60% or 80%
per year, even faster than Moore's law for CPUs. This means that there
are lots of old, failing drives out there, and it is extremely
tempting to replace them with new drives with much higher capacity. I
would expect capacity growth to slow down massively, while the new
drives have extremely high MTBF (which may actually decrease again in
the future, as we replace well-built enterprise-grade FC/SCSI disks
with consumer-grade SATA disks, but it is starting from a very high
level). This is likely to change the economics of the storage
industry significantly.
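
To put rough numbers on what those MTBF figures mean, using the usual
back-of-the-envelope approximation AFR ~= hours-per-year / MTBF (which
only holds while the MTBF vastly exceeds the service life):

# Annualized failure rate (AFR) from a rated MTBF, and the expected
# number of failures per year in a population of 1000 drives.
HOURS_PER_YEAR = 24 * 365

def afr(mtbf_hours):
    return HOURS_PER_YEAR / mtbf_hours

for mtbf in (100_000, 1_000_000):
    rate = afr(mtbf)
    print(f"MTBF {mtbf:>9} h: AFR {rate:.1%}, ~{1000 * rate:.0f} failures/yr per 1000 drives")
# 100K-hour drives: ~8.8% per year; 1M-hour drives: ~0.9% per year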

> And as long as you replace
>disks as they near the end of their service lives, it's not clear why
>you couldn't keep your array running safely as long as disks were made
>that it could use.

Today, that would be possible, but expensive - you'd be paying
hundreds of dollars per replacement drive, when for the same money you
can get much bigger drives. So much bigger that the savings can easily
pay for replacing the whole disk array, including controllers. I
think this is one of the big reasons driving the trend to replace
high-end disk arrays with mid-range arrays (here defined as: high-end
looks like a set of multiple refrigerators, while mid-range looks like
a 4U or 7U rackmount box): The purchase cost of a new mid-range array
at the same capacity point is much lower than the maintenance and
power and floor-space cost for the old dinosaur.
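
To make the comparison concrete - every number below is invented
purely for illustration, not anyone's actual pricing - the arithmetic
looks roughly like this:

# Illustrative only: all figures are made-up round numbers. The point
# is the structure of the comparison, not the specific prices.
keep_old = {
    "replacement_drives": 30 * 600,   # 30 vendor-qualified drives/yr at $600 each
    "maintenance_contract": 40_000,   # yearly support for an out-of-warranty frame
    "power_and_floorspace": 25_000,   # several refrigerator-sized cabinets
}
yearly_cost_old = sum(keep_old.values())          # $83,000/yr in this made-up example

new_midrange_purchase = 90_000                    # 4U/7U box at the same capacity point
print(new_midrange_purchase / yearly_cost_old)    # pays for itself in roughly a year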

--
The address in the header is invalid for obvious reasons. Please
reconstruct the address from the information below (look for _).
Ralph Becker-Szendy _firstname_@xxxxxxxxxxxxxxxxxxxxxxxxxx