And if you want a computer that 'just works', don't go comparing a home-built PC to a Mac... at least compare it to a store-bought laptop (or whatever) of reasonable quality and brand.
There is no reason why a home-built PC should be any less reliable than a brand PC.
If anything, a savvy builder will check what the components are capable of, and which revisions have issues, before buying. It might depress people to know how little of this happens with tier 1 machines, at any class level. Also, many of the motherboards that are supposedly only available in tier 1 systems are legally available as alternative products.
If you self-build, you have the opportunity to get what you require a lot cheaper, unless what you require happens to fit with what the tier 1 vendors want to sell you. In my experience, wanting what they want to sell you is rare, and 'upgrades' are costly.
On the storage side of things, having two separate logical disks is a good idea. Have them on different raid cards, and make sure the raid card is real hardware raid, not software.
Personally, I would not recommend raid0 for the OS, though it is the builder's choice of course.
The reason I would not recommend raid0 for the OS is that it actually doubles the chance of the OS volume failing (statistically, the effective MTBF is halved: the chance of a disk failing on any one day is roughly 1/MTBF for a single disk, so with two disks you have twice the chance). Raid0 is where two disks have data striped across them. This means that if you are writing a file, chunk0 goes to disk 1, chunk1 goes to disk 2, chunk2 goes to disk 1, chunk3 goes to disk 2, and so on. This speeds things up because the internal memory of each drive (the cache) can store and deal with its chunks while the other disk is being written to (a very basic explanation, but enough).
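To make the striping idea concrete, here is a toy Python sketch (not real RAID code; the 64k chunk size and function name are just my own choices for illustration) showing how chunks of a file land on disks in round-robin order:

```python
# Toy illustration of raid0 striping: chunk i of a file goes to
# disk (i mod number_of_disks). Not real RAID, just the bookkeeping.
CHUNK_SIZE = 64 * 1024  # 64k, a common default stripe size

def stripe(data, num_disks):
    """Split data into chunks and assign chunk i to disk i % num_disks."""
    disks = [[] for _ in range(num_disks)]
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        disks[(i // CHUNK_SIZE) % num_disks].append(chunk)
    return disks

file_data = bytes(300 * 1024)  # a 300k "file" -> 5 chunks
d1, d2 = stripe(file_data, 2)
print(len(d1), len(d2))  # chunks 0, 2, 4 on disk 1; chunks 1, 3 on disk 2
```

Notice that neither disk alone holds a usable copy of the file, which is exactly why losing either one is fatal.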
Unfortunately, if either disk fails, you will lose half of every file written to the array (anything larger than 64k with most default stripe sizes). This basically means you lose everything.
For the OS drive this usually means a loss of all the data, because most people forget to back up regularly, or update their backup software within the OS and then forget to update the magic CD for recovering the backup.
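The "twice the chance" point is simple arithmetic; here is a quick sketch (the daily failure probability is an assumed figure, purely for illustration):

```python
# Toy arithmetic behind "two disks = twice the failure chance".
# A raid0 array fails if ANY member disk fails.
p = 1e-5  # assumed probability of one disk failing on a given day

single_disk = p
raid0_two_disks = 1 - (1 - p) ** 2  # probability that either disk fails

print(raid0_two_disks / single_disk)  # ~2 when p is small
```

For realistically small p, the exact figure 1 - (1 - p)^2 = 2p - p^2 is as near to 2p as makes no difference.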
Other raid levels more worth considering are:
raid1 = mirror mode. Data is written to both disks simultaneously. With a good hardware raid card there is no reduction in speed, but no gain either. The usable space is the size of the smallest disk. If one disk fails, there is no loss of data. Raid1 is usually exactly 2 disks.
raid5 = parity raid. Data is written to (n-1) disks as in a stripe, and the nth part of the stripe contains a checksum. This means that if one disk fails, the raid card can recalculate (and keep working with) the missing data on the fly. The speed of small writes is usually improved because of the cache on the disks, and writing large files can be vastly improved. You lose the capacity of one disk in this setup, though. For example, 12x 300GB disks in raid5 gives you 11x 300GB available = 3.3TB (what I have described is actually closer to raid3, but close enough not to argue about). Raid5 needs 3 disks or more; for example, 3x 1.5TB disks in raid5 gives you 3TB.
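The raid5 capacity rule above is just (n-1) times the disk size; a one-liner confirms both of my example figures (function name is my own):

```python
def raid5_capacity(num_disks, disk_size):
    """Usable raid5 capacity: one disk's worth of space goes to parity."""
    assert num_disks >= 3, "raid5 needs at least 3 disks"
    return (num_disks - 1) * disk_size

print(raid5_capacity(12, 300))   # 3300 GB = 3.3TB from 12x 300GB
print(raid5_capacity(3, 1500))   # 3000 GB = 3TB from 3x 1.5TB
```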
If you have money to burn and really need the speed, then raid10 combines resiliency (not the same as reliability: you have 4 times the chance of a disk failure, but only 1.5 [or is that 1.6] times the chance of losing data) with the speed of raid0.
Basically, you mirror two disks together, and another two disks together, then take those two virtual disks and stripe them. This gives a lot of speed thanks to the disk caches and writing to different disks at different times, but it only gives you the storage capacity of 2x a single disk. For raid10 you can have any even number of disks, 4 or more.
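Same deal for raid10: half the raw space goes to mirroring, so capacity is (n/2) times the disk size. A quick sketch (again, the function name is mine, just for illustration):

```python
def raid10_capacity(num_disks, disk_size):
    """Usable raid10 capacity: mirror pairs are striped, so half the raw
    space is usable."""
    assert num_disks >= 4 and num_disks % 2 == 0, \
        "raid10 needs an even number of disks, 4 or more"
    return (num_disks // 2) * disk_size

print(raid10_capacity(4, 300))  # 600 GB from 4x 300GB disks
```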
Edit:
Not that I want to push up MS sales, but my desktop running Windows XP says it has been running for 17 days now. It has had a little less use than it usually would in that period, but it usually does get pretty hammered.
On the other hand, my Linux machine reports 48 days, and that was only because I changed the UPS it is on. That one really gets hammered.