Home brew NAS

Today I ordered a case for what will be my new NAS-cum-media server.

It needs to provide upwards of two terabytes of storage to back up photographs and Lightroom catalogs.

Another terabyte or so to store backups of files from the PCs, plus base images for bare-metal restores.

Then another one or two terabytes for movies and music.


My first leaning was toward RAID5.

But I have copies of everything elsewhere and I don't like the thought of all the disks spinning up just to access a file or two.


I am looking at unRAID, FreeNAS and JBOD as alternatives. In terms of spec, the system will be built on a Celeron J1900 with 4 GB of non-ECC RAM. I am hoping to run media-transcoding software such as Plex, as well as a mail server and a LAMP stack.


What are your suggestions?
 
What's the case and what's the drive controller, first off? Also, what disks will you be using?

Can't help too much with the OS suggestions; I tend to stick to MS server OSes (it's just what I know works).

I've run RAID volumes at home for several years now, and having them spin up isn't an issue. At least, I've never noticed any lag (only when accessing my backup DAS, which is typically asleep 23 hours of the day).
 
FreeNAS works OK. I use an HP MicroServer, as you can pick them up for £125 new (with cashback) and they take four disks. I do run mine with Windows Server, as it also runs my CCTV system.

I would suggest you consider RAID. It's not necessarily about the data itself, but if you have a JBOD and a disk fails, it will take a significant time to reload the data. I'd suggest RAID 5, and large disks are cheap. I'm using 4x 3 TB disks in RAID 5. It stays turned on all the time and I run a regular weekly backup. You could use something simple like SyncBack or Robocopy.
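
If it helps, a Robocopy mirror job is basically a one-liner; the paths below are just placeholders, so adjust to suit:

robocopy D:\Photos \\NAS\Backups\Photos /MIR /R:3 /W:10 /LOG:C:\Logs\photo-backup.log

/MIR mirrors the source (including deletions), /R and /W stop it retrying forever on a locked file, and the log file makes it easy to check that a scheduled run actually did something.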
 
Do you mean transcode (recode video from one format to another) or remux (put a/v into a different container format)? If you are transcoding all but the simplest of files, the Celeron probably won't cut it.

My choices would be based on the underlying filesystem - I really, REALLY like ZFS for NAS-type products, so of the two installs I'd go with FreeNAS and run RAIDZ (I run FreeBSD at home with RAIDZ and it works very nicely). You may want to check hardware compatibility though - FreeBSD is probably the O/S with the least hardware support. JBOD still needs an OS install though....

If I were building a machine from scratch, I'd probably run Linux and install ZFS on Linux, but I like the command line, so I'm happy with an install that might need a little tinkering. If you prefer a web-based interface, you could look at Webmin.
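
To give a flavour of it, creating a four-disk RAIDZ pool is roughly this once ZFS is installed (the device names are just placeholders - on Linux you'd normally use the /dev/disk/by-id paths instead):

# single-parity pool, roughly equivalent to RAID 5
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
# datasets for each type of data, with cheap compression turned on
zfs create tank/photos
zfs set compression=lz4 tank/photos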
 
I've bought this case:
http://www.ebay.co.uk/itm/181403093307?ssPageName=STRK:MEWNX:IT&_trksid=p3984.m1439.l2649

And that's it so far. I'm at a bit of a loss just now on the choice of CPU/mobo.
I was looking at mini-ITX with a processor TDP of 10 watts.
This would be a good candidate http://www.scan.co.uk/products/giga...n-j1900-quad-core-20-ghz-cpu-ddr3-sata-ii-3gb if it had PCIe as opposed to PCI. There are only two on-board SATA sockets, so I'd need to add an additional SATA controller.

The Celeron J1900 benchmarks at 1954, which is just shy of the Plex recommended minimum CPU benchmark of 2000 for full HD transcode, but plenty above the recommended 1500 for 720p.

The Pentium J2900 would be a slightly better alternative as it's clocked a smidge faster. This board is available with a PCI express slot which could run a SATA controller:
http://www.newegg.com/global/uk/Product/Product.aspx?Item=N82E16813135381

As for disks, I'd be recycling a pair of these as supplied with my Synology NAS:
http://www.newegg.com/global/uk/Product/Product.aspx?Item=N82E16822145473

I'll add another couple of 2TB WD Red disks - and then look to add another pair a bit later on.
That would mean six disks in all.

As for a boot disk, I have a 60 GB SSD I'd like to use. That would be seven SATA devices in all - and I appreciate that's starting to get a bit tricky. Other options include a USB boot disk or buying a board that will boot from an mSATA disk.

And I also have a pair of WD Green 1 TB disks in a Netgear ReadyNAS Duo - not my first choice for storing anything critical, though. I'm tempted to leave them where they are and use them as a tertiary backup.



In terms of OS - I'm learning to administer a Windows server farm at work, albeit with a smattering of 'nix boxes. I'm really keen to stick with Linux at home.
 
Depends what you are transcoding to what..... (size, bitrate, quality etc...)
 
Ah, was that bit about transcoding edited in?

The AMD in the MicroServer, for example, sucks for transcoding. So I'd agree with Andy that the CPU needs a bit more beef, especially if the box is going to be dual/multi-purpose.
 
Some of the films are 1080p and the main tellybox in the house is 720p - hence the transcoding.

And yes, that board would need an add-on card, as mentioned in my last post. The major drawback with that board is that the only expansion is PCI, not PCIe.
I've since learned there's another big strike against that board - running anything other than Windows 8 on it means re-flashing the BIOS.

So, it's back to the drawing board for now.
Plan D would possibly be building the new box without transcoding in mind and instead running the Plex server on my PC. The only disadvantages of that are having to turn the PC on to watch a film, and the fact that the PC is dual-boot. I use Windows for Photoshop and Lightroom, Linux for everything else.
 
Is the TV not "HD ready"? (i.e. will accept 1080p and downscale, but the display is only 720p).

You could always transcode offline. Quality will probably be higher too....
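
As an illustration (ffmpeg is just one way of doing it, and the filenames are made up), a straight remux is near-instant, while an offline downscale to 720p is a proper re-encode:

# change container only, no re-encode
ffmpeg -i film-1080p.mkv -c copy film-1080p.mp4
# downscale to 720p, re-encoding the video and copying the audio as-is
ffmpeg -i film-1080p.mkv -vf scale=-2:720 -c:v libx264 -crf 20 -c:a copy film-720p.mkv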
 
1080p to 720p will not require transcoding; transcoding will only be needed if the receiving device does not support the native format of the file or container.
 
I take your point Neil, but it would be nice to have the option there if it's needed.

I'm going to use an AMD socket AM1 CPU and mobo - it's an mATX board with two PCIe x1 slots and a PCIe x16 slot, so I'll be adding a couple of SATA controllers for extra disks.


Component - Item - Unit price - Qty - Total - Retailer
CPU - AMD 5350 2.05 GHz Quad-Core Processor - £38.39 - 1 - £38.39 - Aria PC
Motherboard - Asus AM1M-A Micro ATX AM1 Motherboard - £23.71 - 1 - £22.92 - Amazon UK
Memory - Crucial 1x 2 GB 240-pin DIMM CT25664BA160B - £16.06 - 1 - £16.06 - Amazon UK
SATA controller - Syba 2-Port SATA 6 Gbps PCI-Express x1 2.0 Card - £18.38 - 2 - £36.76 - Amazon UK
SATA HDD - Toshiba DT01ACA200 (2 TB) - £55.44 - 2 - £110.88 - Scan
 
Yes Neil, you're right on that note.

It means some careful planning about how the disks are utilised. In theory, I can put six disks in the machine to make two (software) RAID 5 volumes. So long as each RAID array has disks spanning different controllers, the results should be acceptable. I've not done enough research yet to see how unRAID or JBOD will perform.
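
For the software RAID 5 side of it, two three-disk mdadm arrays under Linux would look roughly like this (device names are just placeholders):

# two RAID 5 arrays of three disks each, ideally spread across the controllers
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdd /dev/sde /dev/sdf
# record the layout so the arrays assemble automatically at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf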

Food for thought anyway. Nothing is set in stone yet. I'm more interested in reducing power consumption and noise than in getting breathtaking performance.
 
I'd also have a think about what you're looking to achieve with RAID - ultimately, it's only there for single-disk failure, but for proper resiliency you really do need an offsite backup. I use Backblaze, which offers unlimited backup space for (can't remember exactly) about £30 a year, which was so much cheaper than Carbonite / S3 / Google it wasn't funny. Having done this, I've got a system-restore image in case the boot disk falls over, and local USB 3 hard-drive backups for the data I couldn't wait to re-download after a failure.

With that in place, I just run a JBOD. For the added benefit of RAID 5, it just wasn't worth the extra faff and messing about from my perspective.

Just bear in mind that, depending on how much data you have, the initial upload can be lengthy - about 4 TB took me about 70 days (with my Virgin cable - 60 Mb/s down but only ~1.5 Mb/s up).
 
I don't mind doing an off-site backup - it's on my list of things to investigate. I do have two separate physical copies of most things as it is and three copies of the really important stuff.

There are actually some nice low-power Xeon processors around - but I'm looking at ~£200 just for the CPU.
Since that's pricey, a Pentium G2100T offers good performance per watt and supports ECC memory; it should retail at around £60 - if I can find any in stock.

The only thing set in stone is the case, which arrived today and should prove to be quite flexible no matter what I build.
 
I haven't ruled out the possibility of using FreeNAS or ZFS.

More research needed (and some funds!) before I make any more decisions.
 
So why ECC memory?
 
Neither am I - and I run FreeBSD (FreeNAS O/S) and ZFS on my server too.... Wasn't even part of my buying decision for the mobo (which is non-ECC only BTW).
 
"Using non-ECC RAM can cause unrecoverable damage to a zpool resulting in a loss of all data in the pool."
http://doc.freenas.org/index.php/Hardware_Recommendations

I've found a££ordable Xeon CPUs with a TDP of 17 watts and similarly low power consumption* - and equally impressive performance.

If I go down the Xeon route, a little extra dosh spent on memory seems worth it for peace of mind, especially if I end up on FreeNAS/ZFS.


*No onboard GPU = more power hungry mobo
 
Well... if it makes you feel happier... :)
 
I don't believe in halfway houses.
If there's a point to using RAID at all, particularly hardware RAID, then I should think about ECC memory.

Otherwise the data isn't that important and JBOD will do.
 
I don't believe in halfway houses.
If there's a point to using RAID at all, particularly hardware RAID, then I should think about ECC memory.
Yeah, but ZFS RAID isn't hardware... and it has loads of data checking built in....
 
Yeah thanks for that.
Nothing's ruled in/ruled out yet.

I've yet to make any firm decisions.
 
I'll let you know when my non-ECC and hardware RAID has an issue.

20 months and none so far...

;)
But how do you know? There could be a bit incorrect somewhere on the disk that you just haven't read. My ZFS arrays are checked weekly for data errors (this can be done whilst the arrays are online) and as of Saturday, there were none :p
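
(For anyone curious, the weekly check is just a scrub kicked off from cron - something along these lines, with "tank" standing in for whatever the pool is called:)

# start a scrub of the pool; it runs in the background while the pool stays online
zpool scrub tank
# check progress and any checksum errors it found
zpool status tank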
 
:D
 
The interweb is full of opinions and anecdotal data on the merits of different storage architectures and the use (or not) of ECC RAM.

I've been using NAS storage for years. I've also got a RAID set-up in my desktop and I've previously used ECC memory in workstations - and that's without taking into account my professional experience.

It's very hard to pull any empirical data together to justify the extra expenditure - but I'm not planning on choosing a CPU + mobo just to support ECC memory. However, if the option is there, DDR3 ECC is not unreasonably expensive and makes good sense.
 
The empirical data to support it would be if you'd had errors when you have NOT had ECC.

Me, I've been running ZFS with 16 GB since Sept 2011 with no issue, and a "standard PC" (i.e. an overclocked i7) since Jan 2009, with no issue (both machines on 24/7). And that's not counting work PCs and home PCs that have never had an error.... I also use workstations for at least 8 hours a day, 5 days a week, and have never had issues (OK, some will be ECC).

Whilst I'm not saying ECC is worthless, I'm saying that finding it in a home system is pretty rare - just like link aggregation is rare... and it needs specialised knowledge to deploy.

Having said that, have a look here: http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf Makes interesting reading :)
 
Is it really worth the difference in price between non-ECC & ECC RAM?

I've been running an HP N40L MicroServer with 16 GB of ECC RAM and 4x 4 TB Seagate drives using FreeNAS and a RAIDZ1 pool (think RAID 5) for over a year and have been very happy with it. Much faster than the Netgear ReadyNAS it replaced, which was really just a storage solution. I have Plex running on the FreeNAS box (both Media Server and PlexConnect) and am also using it as an ownCloud server. I'm using Time Machine to back up multiple Macs, and also backing up critical files on the NAS to an attached USB drive (with a single ZFS pool) using rsync, which can be rotated offsite.
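
The rsync job is essentially one command run on a schedule (the pool and mount-point names here are just examples):

# mirror the critical datasets onto the USB pool, removing anything deleted from the source
rsync -avh --delete /mnt/tank/critical/ /mnt/usb-backup/critical/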

Probably cost about £700 to get the whole thing up and running for around 11 TB of usable storage, and it allows me to keep data in at least three places (usually more). The FreeNAS forum is a pretty good source of information if you haven't already found it, although don't mention using non-ECC RAM on there unless you want to be shot down in flames :D
 
Reading this thread makes me appreciate the simplicity of a QNAP.
 
Horses for courses. I used to run an SMB Synology and sold it for a MicroServer. Much more flexible for my requirements.

Indeed. A few years ago I would have gone down the custom route, but for what I need (i.e. storage, media server and CCTV) the QNAP is perfect.
 