Re: HP EVA4000 / IBM DS4300 / EMC CX3-20/40
- From: "Rick" <r.malo@xxxxxxxxxxx>
- Date: Fri, 16 Jun 2006 17:10:40 GMT
I also agree with Jeff. I believe someone left a thread on this already but
you cannot compare a CX3-20 with an EVA4K. Price out a EVA6K and then
compare the arrays. I have been a presales tech consultant for... too long
and the EVA and CX lines are great products. EMC will try to push the 4Gb
end-to-end argument, but it really doesn't buy much performance; it's more
marketing than anything.
Both the EMC and the EVA are great arrays and they will serve you well. The
EVA has the edge on ease of management but does require more overhead. You
will definitely lose more usable space with an EVA than a CX3-XX, but it may
be worth it.
"jls" <jeffls-nospam@xxxxxxxxxxxxx> wrote in message
There is quite a lot of stuff here in your request, but I'll try to
help out. Disclaimer: I work for HP; however I am not a sales droid.
I bring 20+ years of systems and storage management experience
(technical and ROI).
Having said all that, I do NOT speak for HP, but only provide my
personal knowledge and expertise to this question.
On 12 Jun 2006 02:15:34 -0700, google@xxxxxxxxxxxxxxx wrote:
We have a new solution out for design and three different vendors have
put forward three different SANs.
1. IBM DS4300 - seems clunkier on the UI than the HP and is bagged by the
other vendors as being a "basic" SAN; it is meant to be poor because the
operator still has to select which physical disks belong to which physical
raidset etc. It is the cheapest offering.
2. HP EVA4000 - nice simple interface, not sure about the holistic R5
disk array with the virtual raidsets on top. My dba freaked out when I
mentioned this to him. Middle cost.
3. EMC - have not seen the interface personally, but it's double the price of the HP.
There is an inherent difference in the EVA approach to storage. DBAs
may be uncomfortable with the approach because they've never seen it
before, but all I can say is that I WISH I had had this capability
when I managed systems for the first 13 years of my career.
What the DBAs really need:
1. First and foremost - as much memory in the SERVER as possible. The
fastest I/O is the one that doesn't need to hit the disk at all.
In the storage array:
2. Lots of spindles - to distribute I/O around when it actually does
go to the disk, so that there are no bottlenecks.
3. Isolation of the logs from the data.
Now, under the older SCSI-bus design, typically you had raidsets that
spanned all SCSI buses within the controller. The idea is that you
can't have more than one disk in a raidset sharing the same SCSI
bus... otherwise the loss of an entire bus (which actually does
happen) would kill the raidset (only loss of one drive allowed).
So, the system admin, and the DBAs had to create and manage lots of
separate LUNs and *manually* manage the performance among them to
distribute the load. This often meant after-hours work to
re-partition and re-distribute the load.
The FC-based storage of the EVA is an entirely different technology
(and, by the way, it has been around a while now and is fairly mature).
In the EVA you create "disk groups". Typically we recommend that you
create two - one for data, and one for logs. The log one will
typically contain far fewer disk drives than the data group, and it
will generally have sequential, write access most of the time; while
the data group will tend to have more random access.
So, let's say you have an EVA4000 with 4 full shelves at 14 drives per
shelf - that's 56 drives. Note that this is upgradeable to the
EVA6000 to double the capacity... but even at the EVA4000's 56 drive
max, with 146GB drives that is 8TB RAW. Note too that you will lose
some % due to controller meta-data protection, as well as to raidset
protection, so the 73GB drives won't really give you your 4TB capacity
in the EVA4000.
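The raw-capacity figures above can be sanity-checked with a quick sketch (drive counts and sizes taken from the paragraph above; decimal GB assumed, overhead not modeled):

```python
# Raw capacity estimate for a fully shelved EVA4000
# (figures from the discussion above; decimal units assumed).
SHELVES = 4
DRIVES_PER_SHELF = 14
drives = SHELVES * DRIVES_PER_SHELF   # 56 drives at the EVA4000 maximum

raw_146gb = drives * 146  # GB raw with 146GB drives -> 8176 GB, ~8TB raw
raw_73gb = drives * 73    # GB raw with 73GB drives  -> 4088 GB, ~4.1TB raw

print(drives, raw_146gb, raw_73gb)
# After controller metadata and raidset/sparing overhead, the 73GB
# configuration falls short of 4TB usable, as noted above.
```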
Imho, you might actually want to start out with the EVA6000 and only
have partial shelving (4 or more) added initially to save cost, but
that's without actually having the detailed business discussions with
management and IT folks, so it's just a guess right now.
So, you create two disk groups: a data group with 40 drives, and a
log group with 16 drives. You will have either single sparing or
double sparing for those groups, which will take away some of the
capacity, but give you protection at the group level for controller
information (i.e., this is NOT data protection).
If you typically have 10 data LUNs of, say 6-drive raid-5 sets with
42GB disks, this is a total of 210GB per LUN * 10 = appx. 2.1TB.
What you would do on the EVA is, instead, create 4 LUNs of .5TB each.
You specify that you want VRAID5, and the array manages everything,
making sure that you have protection equivalent to RAID5 (i.e., the
loss of a single drive will not cost you the LUN).
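The arithmetic behind that comparison, as a small sketch (RAID-5 usable capacity is (N-1) x drive size, since one drive's worth goes to parity; all numbers come from the example above):

```python
# Traditional layout: 10 LUNs, each a 6-drive RAID-5 set of 42GB disks.
drives_per_lun = 6
drive_gb = 42
luns = 10
usable_per_lun = (drives_per_lun - 1) * drive_gb  # 210GB (one drive lost to parity)
total_traditional = usable_per_lun * luns         # ~2.1TB across 60 pinned spindles

# EVA-style layout: 4 VRAID5 LUNs of 0.5TB each, striped across the
# whole disk group rather than tied to fixed 6-drive raidsets.
total_eva = 4 * 500                               # ~2.0TB in fewer, larger LUNs

print(usable_per_lun, total_traditional, total_eva)
```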
The reason it doesn't matter to the DBAs is because every LUN will use
every spindle in that group anyway, thus creating lots of smaller LUNs
does NOT help them in any way whatsoever. I have actually had to work
with SANs where the DBAs convinced management to create lots of
smaller LUNs on an EVA - it is a complete waste of time and can also
impact IT's ability to provide other services later on (replication, etc.).
What you have with the EVA is an automated load-balancing among the
spindles in the disk group. This removes this daily activity from the
DBAs as well as from the system admin. Instead, you choose your LUN
sizes now based on other factors - e.g., backup & restore timing -
instead of on database or application performance factors.
If performance does start to degrade, since it is spread among so many
disk spindles already it degrades much more flatly and will be
detectable such that you can fix it with less urgency... and the fix
is really simple, assuming you have room in your configuration for
more shelves and/or more disk spindles (which is why I'd not recommend
starting with the EVA4000 if your needs are 4TB). All you'd need to
do then is to add more disks to the array, and then via the GUI add
them to the disk group. The array will level all the data across the
additional spindles and performance will improve - no downtime for the
systems or applications, no after-hours re-partitioning on new LUNs,
etc. You'll have more room in the disk group for other things at that
point as well - possibly for Business Copy volumes, assuming you
purchase that option.
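The "flatter" degradation described above is just load spread over more spindles. A toy model of the idea, assuming I/O distributes evenly across the disk group (the IOPS figures are illustrative, not EVA specifications):

```python
def per_spindle_load(total_iops, spindles):
    """Average I/O load each spindle carries when the array
    levels data evenly across the disk group."""
    return total_iops / spindles

demand = 8000  # hypothetical aggregate IOPS from the hosts
print(per_spindle_load(demand, 40))  # 200.0 IOPS per spindle in a 40-disk group
# Add one shelf (14 drives) to the group and the array relevels online:
print(per_spindle_load(demand, 54))  # ~148 IOPS per spindle, no downtime
```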
I have spent a considerable part of my lifetime working with companies
to alleviate performance problems and do performance-driven capacity
planning - and this includes working with lots of DBAs (and, in fact,
I have done DBA work myself). Managing disk queues was a very
significant part of that work and the way that an EVA manages storage
is a HUGE win and something that your DBAs will come to love, once
they get over their initial fear of the unknown.
Oh, and the EVA also has the ability to do dynamic volume growth, if
you need larger LUNs later. No data migration needed, just increase
the size of the LUN (actual mechanics of this will vary depending on
the host OS involved).
There really is no reason for your DBA to resist putting the
applications on the SAN. There are many, many benefits to this
configuration which make life easier for both the DBA as well as the
system admin. And sharing the capacity allows the IT organization to
provide more of a "storage service" - where you manage storage capacity
and performance rather than managing spindles.
The typical configuration would be dual-HBAs in the servers, separate
SAN switches on each HBA, and the 4 controller ports on the
EVA4000/6000 split between the two switches (2 on each switch). The
EVA array contains two controllers as well... all of which reduces
your SPOFs considerably.
For DR, the EVA also provides the capability to have another array in
a distant location and to replicate all writes between them (synch or
asynch) with the use of Continuous Access EVA (purchased separately).
ILM will be an entirely different issue down the road, especially in a
database... but there is nothing about SAN technology that prevents
that work when you come to it.
What I need:
1. Good support. Here in Australia I have had a bad experience with our
current HSG80 driven HP SAN. Some of it boils down to poor operator
training but we are located 7 hrs drive from a major city, one flight a
day and nothing in between ...
2. Windows file and Oracle database capacity. Our dba does not want his
data in the SAN. I think I understand the db design rules such that
logs on one set of spindles, data on another - but what else stops me
putting db stuff in a SAN? (Current arguments: direct-attach costs less,
is faster, and gives more control with fewer skills; the SAN is a SPOF.)
3. RAMS - Reliability, Availability, Maintainability, Scalability.
Thinking of ILM as phase 2 of this project. For now just want the data
off the current SAN/direct attached (to Alpha boxes).
4. Considering 2 SANs connected by 4Gb dark fibre to provide DR. Have
heard some SAN-to-SAN licensing horror stories. With a two-SAN
requirement I might be pushing my individual prices down to the IBM
level. Is this too risky?
Performance specs are almost impossible to find on the Internet,
googling shows customer A loves vendor B and customer B loves vendor A
- ie. personal choices - nothing concrete. Have got a "standards"
document with everyone but EMC involved ... then got another with EMC
but not IBM/HP involved - which standard to believe? One post warned to
avoid listening to the vendors at all :-)
Any links, experiences, pointers etc ... I'll shout a few beers at