How many disks does it take to screw in a database?

My previous experience with optimizing disk performance in Linux revolved around getting huge chunks of data off disk quickly, but that means little to a typical database. Databases are usually limited by how many IOPS (Input/Output Operations Per Second) the disk subsystem can sustain. The database servers at work have plenty of RAM, so reads are nearly always served from the disk cache, leaving disk writes as our bottleneck.
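For the curious, one rough way to see how many write operations an array is actually absorbing is to diff the "writes completed" counter in /proc/diskstats once a second (iostat -x reports the same thing). Here's a minimal sketch of that idea; the device name sda is an assumption, so point it at whatever block device backs your database volume.

```python
#!/usr/bin/env python3
# Rough write-IOPS sampler: diffs the "writes completed" counter in
# /proc/diskstats once per second. The device name is an assumption;
# use whatever block device backs your database volume.
import time

DEVICE = "sda"      # assumed device name, e.g. the RAID 5 virtual disk
INTERVAL = 1.0      # seconds between samples

def writes_completed(device):
    """Return the cumulative 'writes completed' counter for a device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[7])   # field 8: writes completed
    raise ValueError("device %r not found in /proc/diskstats" % device)

prev = writes_completed(DEVICE)
while True:
    time.sleep(INTERVAL)
    cur = writes_completed(DEVICE)
    print("%d writes/s" % ((cur - prev) / INTERVAL))
    prev = cur
```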

The database servers each have a RAID 5 array of three 10k RPM drives, while the web servers each have mirrored 15k RPM drives, all SAS. There's also an underutilized development box with four 15k RPM drives, also in RAID 5. (For those interested, the web servers are Dell PowerEdge 1950s and the rest are PowerEdge 2950s.) The dev box runs Xen and already hosted two copies of the production environment in eight virtual machines (plus a few more development VMs). A quick test showed promising results, so I cloned a database VM, configured it as our third production database, and promoted it to master. While I was at it I cloned a web server VM to use for testing, and later for production.

The results? The original database disk array tops out at around 90-100 writes per second, while the dev box can do 180-190. That's about the same as a theoretical single 10k RPM and 15k RPM disk, respectively. The throughput under load? No more than 1.2MB/s, nothing like the 250MB/s+ arrays I've tested before. It's important to note that these numbers come from real database load generated by the actual website, driven by synthetic traffic from ApacheBench.
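For the back-of-the-envelope math behind "about the same as a theoretical single disk": a drive's random-write ceiling is roughly one I/O per seek plus half a rotation (the seek times below are typical spec-sheet figures, not measurements from these drives):

$$\text{IOPS} \approx \frac{1}{t_{\text{seek}} + t_{\text{rot}}}, \qquad t_{\text{rot}} = \frac{1}{2}\cdot\frac{60}{\text{RPM}}$$

A 10k RPM drive has 3 ms of rotational latency plus a typical ~4.5 ms seek, or roughly 130 IOPS; a 15k RPM drive comes out to 2 ms plus ~3.5 ms, or roughly 180 IOPS, which is right about where the dev box lands.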