raid and vista?
sorry - I can't help you at all with that. other than to say, don't do that.
I run an 8 drive software raid system:
I've posted about it here a few times.
the neat thing is that I can start with just 3 drives, create a raid pack, then add in more drives, LIVE (!), while the system is up and running. in fact, I don't even have to unmount the raid pack or mark it read-only - users have full write privs to a pack that is building. it works!
to do this hot OCE (online capacity expansion) thing, you physically add the drive (in my case, to any spare ide, sata, scsi or even firewire/usb connector), then tell the software about it and start the rebuild. it rebuilds in place. about 8 hours later (for my 3TB system) that new drive is part of the array and the reshape is complete.
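with linux md, that whole add-and-grow dance is a couple of mdadm commands. a sketch - device names /dev/md0 and /dev/sdi are placeholders for your array and your new drive, and the filesystem step assumes ext3:

```shell
# add the new drive to the array as a spare
mdadm --add /dev/md0 /dev/sdi

# grow the array from 8 to 9 active devices;
# md reshapes in place while the array stays mounted
mdadm --grow /dev/md0 --raid-devices=9

# watch the reshape progress
cat /proc/mdstat

# once the reshape finishes, grow the filesystem into the new space
resize2fs /dev/md0
```

the reshape runs in the background, so the 8-hour wait is just md shuffling stripes; you keep using the array the whole time.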
I can then yank a drive out, causing a fault and things still work. insert that drive back again and it rebuilds or rechecks. all live.
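the fail-and-rebuild cycle can also be driven by hand, which is a good way to test your setup before you trust data to it (again, /dev/md0 and /dev/sdc are placeholder names):

```shell
# mark a member as failed, then pull it from the array
mdadm --fail /dev/md0 /dev/sdc
mdadm --remove /dev/md0 /dev/sdc

# the array keeps serving in degraded mode; re-add the drive
# and md rebuilds it (or just resyncs the stale blocks)
mdadm --re-add /dev/md0 /dev/sdc

# watch the rebuild
watch cat /proc/mdstat
```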
once I screwed up the order of the drives when I mixed up some cables while adding new ones to the pack. it freaked out - then I freaked out. I read up a bit on it and found that I just had to do a non-destructive scan, and it 'found' all the right drives in the right order and started the rebuild.
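that 'non-destructive scan' works because md writes a superblock (with an array UUID and the device's role) onto every member, so the physical cable order doesn't actually matter. roughly, the recovery looks like this:

```shell
# stop the half-assembled array, then let mdadm scan all
# partitions for md superblocks and reassemble by UUID
mdadm --stop /dev/md0
mdadm --assemble --scan

# or inspect a single member's superblock to see which
# array and slot it thinks it belongs to
mdadm --examine /dev/sdc
```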
I've done some bad things to my system and it's still up, kicking and serving 3TB of files.
gentoo ~ # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1             244M  105M  127M  46% /
udev                   10M  204K  9.9M   2% /dev
/dev/hda3              19G  1.4G   17G   8% /var
/dev/hda4              86G  5.8G   76G   8% /usr
shm                   1.5G     0  1.5G   0% /dev/shm
/dev/md/0             3.2T  3.2T   60G  99% /mnt/raid
only 60G left - guess I'll have to add yet ANOTHER drive, soon.
anyway, dump vista and its unreliable raid nonsense. get linux (free), use software raid5 (an 'md device'), don't bother with LVM (it's not needed for this), and get yourself a motherboard or sata controller with enough ports to support the drive count you want. I prefer the intel badaxe2 since it has 8 (!!) sata2/300 ports onboard. that's why I have 8 drives in my array.
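for anyone starting from scratch, building the md raid5 pack is one command plus a filesystem. a sketch - drive names are examples, and I'm assuming ext3 here:

```shell
# build a raid5 array from three whole drives; md starts the
# initial parity sync in the background and the array is
# usable immediately
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]

# put a filesystem on it and mount it
mkfs.ext3 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

# record the array so it assembles automatically at boot
mdadm --detail --scan >> /etc/mdadm.conf
```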
but if I needed more, I'd pop in a $20 sata pci controller and there, I have room for 4 more drives.
raid5 is incredibly reliable IF you run the right software.
I had serious doubts about software raid - I've been using hardware controllers for over a decade now (3ware, adaptec, buslogic/mylex, DPT, IBM) - but only now, on a dedicated pc with decent sata2 controller ports, is software raid really EXCEEDING the performance of the hardware controllers.
oh, and since I run software raid, I get direct, easy access to the SMART data on each drive. if I want to do a quick check of the drive temps:
% hdd_temp
/dev/sda: SAMSUNG HD501LJ: 30°C
/dev/sdb: SAMSUNG HD501LJ: 30°C
/dev/sdc: SAMSUNG HD501LJ: 30°C
/dev/sdd: SAMSUNG HD501LJ: 29°C
/dev/sde: SAMSUNG HD501LJ: 31°C
/dev/sdf: SAMSUNG HD501LJ: 32°C
/dev/sdg: SAMSUNG HD501LJ: 30°C
/dev/sdh: SAMSUNG HD501LJ: 31°C
nice. all seems to be running cool.
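temps are just the start - with no raid controller sitting between you and the disks, smartmontools can talk to each member directly (/dev/sda here stands in for whichever drive you want to check):

```shell
# full SMART attribute and health dump for one member
smartctl -a /dev/sda

# or just the overall pass/fail verdict
smartctl -H /dev/sda

# kick off a long self-test; it runs in the drive itself,
# in the background, while the array keeps serving
smartctl -t long /dev/sda
```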
check out linux md-raid (software raid). email me if you need pointers or help.
I work at a large computer company that has some very high-end enterprise storage solutions, so I'm not easily impressed by consumer computer systems. but this software raid stuff is FINALLY enterprise class. it really is.
--
Bryan (pics only: http://www.flickr.com/photos/linux-works )
(pics and more: http://www.netstuff.org )