Archived posting to the Leica Users Group, 2008/08/21
Very thoughtful and informative reply, George. Would love to debate over a beer sometime!

My philosophy about PCs in general is that they are like sports cars -- lots of horsepower but not much torque. So you never load down one PC with too many tasks.

My network has PCs dedicated to specific tasks: the server is just a fat disk (the RAID 5), the Photoshop workstation has a powerful CPU and dedicated fast SCSI scratch drives, the film-scanner workstation is also used for e-mail, word processing, etc., and the office PC is for QuickBooks. This is all a lot less expensive than it sounds, since I have mostly rotated older PCs into the tasks that don't need much horsepower.

In the same way, I like to separate programs from data -- probably a leftover from midrange and mainframe computer days, when "libraries" kept an organized approach, very much *unlike* Windows. Perhaps that is why I like the OS separate from data on my server -- data only on the RAID, OS on the IDE (or is it SATA?) drive that runs the computer and will be there no matter what happens to the RAID. I also like the performance advantage: the OS overhead stays on one disk, while data reads and writes are the only job of the RAID.

Gary Todoroff

At 10:36 AM 8/20/2008, you wrote:

>Spencer Cheng writes:
>
> > A bit late, but here is my experience.
> > [...]
> > I agree with those who say the OS should not be part of the RAID array, if for no other reason than that RAID 5 recovery can take a long, long, long time. If you have a deadline, the last thing you want to do is to sit there and wait, and wait, and wait, and pray that the OS can be recovered so you can actually boot your PC....
>
>I agree with Spencer: RAID is not a replacement for separate backups, preferably taken on a regular basis and stored offsite. Add catastrophic hardware failures to the list of coffee, soda, fire, and theft.
>
>But I guess that I now disagree with Spencer _and_ Gary. Hopefully this doesn't make me disagreeable....
>
>If you have a redundant disk setup, RAID 1 or RAID 5 or RAID 6 or..., and you lose a disk, that volume should continue to work, albeit possibly more slowly. That's what RAIDs do. There's no mystery about it, no praying, and no waiting for it to recover. When you replace the failed drive, the system will spend a *lot* of its time "resilvering" the mirror, aka rebuilding the array, during which time performance will _suck_, but the volume will still be available.
>
>It's not as if it's gone away and you're hoping that the recovery process will somehow magically bring it back. All of your data is still there and still available. No praying involved. If you don't believe that your RAID can survive the loss of a single disk, you probably haven't played with it enough, and I'm not sure what it's giving you.
>
>If you're thinking about setting up a RAID, you really owe it to yourself to experiment with it before you have all kinds of data on it. Read the manual. Set it up. Read the manual. Power down and disconnect a drive. Reboot and see what happens. Read the manual. Power down and reconnect the drive. What do you need to do to reintegrate the drive? Read the manual. Disconnect two drives. Etc.... Once you're comfortable with it as a tool, then you can put your valuable data onto it.
>
>Various RAID strategies offer varying degrees of redundancy. A two-disk mirror can survive the failure of one disk without losing any data. A three-way mirror can survive the loss of two disks. A four-way mirror could survive the loss of three. A simple RAID 3 or RAID 5 can survive the loss of one disk; lose two and the whole thing's toast. There are various flavors of RAID 5 that offer more redundancy, called things like RAID 6 and RAIDZ2 and..., which can survive the loss of multiple disks.
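[The redundancy arithmetic in the paragraph above can be tabulated in a few lines. The sketch below is illustrative only -- plain Python, not part of the original exchange, with function and layout names chosen just for the example -- and assumes clean, whole-disk failures:

    def failures_tolerated(layout: str, disks: int) -> int:
        """Return how many whole-disk failures the array survives without data loss."""
        if layout == "mirror":            # n-way mirror keeps n full copies of the data
            return disks - 1              # survives everything short of losing all copies
        if layout in ("raid3", "raid5"):  # single parity
            return 1
        if layout in ("raid6", "raidz2"): # double parity
            return 2
        if layout == "stripe":            # plain striping, no redundancy at all
            return 0
        raise ValueError(f"unknown layout: {layout}")

    if __name__ == "__main__":
        for layout, disks in [("mirror", 2), ("mirror", 3), ("mirror", 4),
                              ("raid5", 4), ("raid6", 4), ("stripe", 2)]:
            print(f"{disks}-disk {layout:6} tolerates {failures_tolerated(layout, disks)} failed disk(s)")

The printed table simply restates the paragraph above: one failure for a two-disk mirror or RAID 5, two for a three-way mirror or RAID 6/RAIDZ2, and so on.]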
>Just as when you have a disk die in a single-disk setup and you can sometimes peel some or most of the data off of it, you can sometimes peel some or most of the data off of a completely FUBAR'ed RAID. But that's not the same thing as what happens when a RAID loses a single disk.
>
>You should think about the risk of two disks failing in your situation. You might think that disk failure is a long shot and that two failures is well-nigh impossible. On the other hand, your first disk might have failed because you keep the machine in the closet so you don't have to listen to the fans, and you've overheated it. That could make your second disk more likely to fail too. Or maybe you bought both of the disks at the same time, they came out of the same box, and the FedEx guy dropped it on the way to your front door. Or they were both made on the Monday morning following Mardi Gras. If you want to see some real disk failure numbers w/out marketing crap, check out this study the Google crew did:
>
> http://labs.google.com/papers/disk_failures.html
>
>There are lots of solid reasons to keep some data on a separate hunk of storage from other data (and the OS is really just another hunk of data). Performance. Manageability (you don't want the fact that your kid filled up his hunk of the disk with MP3s and videos to keep you from being able to work with your images). Move-ability (you'd like to be able to take the hunk with you somewhere else w/out lobotomizing the machine). Sometimes you handle this by using a separate disk (either a real one or a virtual one constructed from a RAID). Other times you partition a real/virtual disk into hunks and use them accordingly. One or more of these reasons might encourage you to put your OS on a separate disk, but they don't otherwise mean that you shouldn't keep everything together.
>
>If you have a Mac that can hold multiple disks, and you're willing to put two disks into it, I'd say that you really should set them up as a software RAID 1 and that you should put the entire kit-and-caboodle on that RAID. My money's where my mouth is; it's how I run my machine.
>
>If you can put more disks into it, then it's a more complicated decision. Meet me in Oakland, CA and buy me a beer and I'll babble about it until the beer's gone....
>
>Sheesh. When did I get so long-winded????
>
>g.
>
>_______________________________________________
>Leica Users Group.
>See http://leica-users.org/mailman/listinfo/lug for more information
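[On the point above about the odds of losing two disks at once: the sketch below is not part of the original exchange, and every number in it is an invented placeholder rather than a measured failure rate. It only illustrates how a shared cause -- a hot closet, a shared shipping box, a bad batch -- can make a "second" failure far more likely than independent odds would suggest:

    import random

    DISKS = 4       # a small RAID 5-style array
    P_FAIL = 0.02   # made-up per-disk failure probability for the window of interest

    def failed_disks(correlated: bool) -> int:
        """Simulate one window and return how many of the disks failed."""
        p = P_FAIL
        if correlated and random.random() < 0.2:
            # A shared stress event (heat, same box, same batch) hits all
            # disks at once and triples each one's failure probability.
            p *= 3
        return sum(random.random() < p for _ in range(DISKS))

    def chance_of_two_or_more(correlated: bool, trials: int = 200_000) -> float:
        return sum(failed_disks(correlated) >= 2 for _ in range(trials)) / trials

    if __name__ == "__main__":
        random.seed(1)
        print("independent failures:", chance_of_two_or_more(False))
        print("correlated failures :", chance_of_two_or_more(True))

Only the ratio between the two printed numbers matters here, not their absolute values; for real failure rates, see the Google study linked above.]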