Description

For various reasons, I decided that I needed to bring "home" all of my data that was in the cloud while leaving myself sufficient (it is never enough) space for the future. I spent weeks investigating consumer NAS units from Synology, QNAP, and others before finally deciding that building my own would afford me greater power and flexibility. My primary use case is serving up media to my main Plex server and offloading all of my media management (Sonarr / Radarr / NZBHydra / Ombi / CrashPlan / etc.) activities, with a few VMs on the side in the future.

I put special emphasis on power-efficient, small-footprint devices for all my computers. For this storage server I had even stricter requirements, as the box will live at a remote location with my existing server, where I thoroughly abuse access to a gigabit fiber connection. I therefore specifically sought out cases with the lowest volume per 3.5" bay. Not surprisingly, there were very few options. I would have loved to get a Lian Li PC-Q26, but they appear to have discontinued that case without any successor. I spent a significant number of days pondering the U-NAS product line and weighing whether the tradeoffs were worth the small footprint. In the end I decided that their largest product, the NSC-810A, provided the best drive-to-volume ratio while supporting the components I planned to use. The U-NAS website has specs and pictures of the NSC-810A in various build stages, which makes it very difficult to understand what parts may or may not come with the unit and how they might fit together. Unfortunately, I found virtually no documentation or build logs online to help answer my questions. I decided to bite the bullet (and the dread of buying/shipping from China – 5 days!) and then write this build log so there would be more pictures and information available for the next person!

Build notes on the U-NAS NSC-810A:

This unit supports 8x 3.5" HDDs, a microATX motherboard, up to two expansion cards (which require PCIe riser adapters not included with the case – this is why my pics don't show a riser ribbon), a 1U PSU, and 1x USB 3.0. The box is a combination of plastic and aluminum. The front is plastic with a soft matte black finish that, oddly enough, is a fingerprint magnet. The shell is mostly aluminum and – wow, compared to the many other PCs I've built in my lifetime – this shell is extremely difficult to remove and put back on properly. The frame of the case is rock solid, and overall the unit feels "premium," even though it's still pricey for what it is. The NSC-810A comes pre-configured with 2x Gelid 120mm high-static-pressure fans in the rear and 1x 60mm Gelid fan on the top exhaust vent. The 60mm fan was a surprise, as it is not listed in the specs on the U-NAS website. These fans aren't quiet, but they aren't horribly loud either, and they sure do move a lot of air. That said, if I were keeping the unit with me in my home, I'd definitely be replacing them with Noctuas. The HDD slots have 2x 4-port backplanes which are prewired with 2x SAS-to-4x-SATA breakout cables. If you want to use a RAID card with SAS, be prepared to remove the case shell and rewire the backplanes. Speaking of opening the case…

Building in the NSC-810A is definitely cramped, even though I'm used to tight build environments. I feel like some very strange choices were made by the U-NAS team when creating the NSC-810A. It is almost as if they scaled up the NSC-800 and only moved the motherboard tray / PSU dock without taking other considerations into account. This was most apparent in the wiring. I knew this case was going to be small, but I also wanted to maximize airflow to keep my HDDs cool and my Xeon running smoothly. That meant keeping cables out of the rear "alley" between the backplanes and fans and routing everything else along the periphery. Oddly enough, the Molex connectors for the backplanes are wired (and zip-tied) on the opposite side of the case from the PSU. Not only that, but there is virtually no room on that side of the case for ANYTHING, because the 8th drive bay sits only a few millimeters from the shell once it is put back on. I had to cut the zip ties, completely remove the back of the case (thick plastic secured with a ton of tiny screws), and rewire the Molex connections so they could reach my PSU plus Molex extensions (1U PSUs typically have short cables anyway). While the rear of the case was off, I took the opportunity to put fan grills on the fans, since I had spares available from my NCASE M1 build. Other issues I ran into: 1) my 24-pin ATX and 8-pin CPU connectors were way too short to reach the opposite end of the case where my motherboard's power connectors are located, and 2) the 60mm fan sits right where most mATX boards place their SATA, USB, and other headers, which creates a cable spaghetti mess right in front of where you want a clear zone for airflow! It really felt to me as if the PSU slot should have been on the other side of the case, but since that would have been a radical departure from their previous designs, they left it as is.

I was concerned about cooling the 45W Xeon in such a shallow area above the motherboard, but the Noctua NH-L9i is doing great so far. It has about 30mm of clearance to the top of the case, which seems sufficient, but I'm not sure I would recommend any cooler larger than this one. Where I live it is blazing hot right now and my AC units aren't really winning the war, so I won't cite specific temperatures other than to say this build + case idles at 6°C above ambient and runs 14°C above ambient under load. The HDDs have been ranging from 4-9°C above ambient depending on utilization.
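Since the box lives far away from me, it's worth keeping an eye on those deltas over SSH. Here's a minimal Python sketch for that (not part of the original build – it assumes a Linux host with smartmontools installed, a measured ambient temperature, and that the eight bays enumerate as /dev/sda through /dev/sdh; adjust all of those for your own setup):

```python
#!/usr/bin/env python3
"""Report each drive's temperature as a delta above ambient."""
import subprocess

AMBIENT_C = 25.0  # measured room temperature (placeholder)
DEVICES = [f"/dev/sd{c}" for c in "abcdefgh"]  # the eight backplane bays

def drive_temp_c(device):
    """Pull SMART attribute 194 (Temperature_Celsius) via smartctl -A."""
    # smartctl needs root (or sudo) to query SMART data
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            return float(line.split()[9])  # RAW_VALUE column
    return None

for dev in DEVICES:
    temp = drive_temp_c(dev)
    if temp is None:
        print(f"{dev}: no temperature reported")
    else:
        print(f"{dev}: {temp:.0f}C ({temp - AMBIENT_C:+.0f}C above ambient)")
```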

Notes on component choices: Again, the primary purpose of this server is to provide local networked storage to my primary Plex server, with several other secondary activities running in Docker / VMs. My choice of hardware is significant overkill for that purpose – I fully acknowledge this. But I wanted to focus on the most efficient yet powerful components that would reliably cover my requirements for years down the line, especially since the box will be at a remote location that I will rarely have access to. I knew I wanted to take advantage of reduced prices on decommissioned enterprise hardware, so I stalked eBay and made what I feel were some pretty solid buys over several weeks. I ended up buying the case, motherboard, and CPU cooler at retail prices; everything else was obtained via eBay or /r/hardwareswap.

My first component choice was actually the motherboard. I knew I wanted one with enough onboard SAS/SATA ports to connect all 8 bays of the NSC-810A, plus any SSDs I might want for cache or secondary storage. I wanted to avoid RAID cards because I wasn't confident enough in managing / flashing those devices and dealing with the many considerations around driver support / interoperability. That left very few motherboards that were both microATX and provided enough ports. Luckily, the Supermicro X10SL7-F fit the bill! It is my first motherboard from Supermicro and my first with IPMI built in, and I'm overall very impressed with it. I did commit a newbie mistake: I didn't realize from the online details that this motherboard requires unbuffered (unregistered) ECC RAM. I had to buy a second set of RAM after the motherboard arrived and I noticed that minor detail while reviewing the documentation.
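If you're doing a similar build and want to confirm that all eight bays enumerate after cabling – and see which controller each one landed on, since the board's onboard SAS ports and chipset SATA ports show up as different PCI devices – here's a small illustrative Python sketch. It assumes a Linux host with udev's /dev/disk/by-path symlinks:

```python
#!/usr/bin/env python3
"""Show which controller each whole disk is attached to."""
import os

BY_PATH = "/dev/disk/by-path"  # udev symlinks named after the PCI path

for name in sorted(os.listdir(BY_PATH)):
    if "-part" in name:  # skip partition entries, keep whole disks
        continue
    target = os.path.realpath(os.path.join(BY_PATH, name))
    # e.g. pci-0000:02:00.0-sas-... -> /dev/sda (onboard SAS controller)
    #      pci-0000:00:1f.2-ata-1   -> /dev/sdb (chipset SATA port)
    print(f"{name} -> {target}")
```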

I went with the Xeon E3-1265L v3 because it offered me 8 threads in just 45W and was compatible with the Supermicro X10SL7-F. Again, this is MAJOR overkill for serving up media over the network and running a few Docker containers, but I built for a future that might include some VMs! The same logic applies to maxing out the RAM at 32GB of ECC DDR3-1600.

I knew I wanted a blazing-fast cache drive to handle my Docker containers and serve as a download target for the gigabit downlink, so I sourced an NVMe Samsung M.2 SSD from a redditor and put it on a SilverStone M.2-to-PCIe x4 adapter in one of the two expansion slots of the NSC-810A. For those of you considering a build in this case with expansion cards, consider your options wisely. A dual-slot card would very likely fit in the top slot, but it would be almost touching the motherboard (see the pic with the M.2 card in the first slot, sans ribbon cable) and the airflow would be pretty suboptimal. I had originally hoped to reserve the top expansion slot (and the PCIe x8 slot) for a 10GbE card many years down the road, but with the cramped cabling options I'm not entirely sure how realistic that will be.

My build is starting with 4x 8TB WD Red drives (24TB usable with one drive as parity) and will later be joined by the 4x 4TB WD Reds that make up my current local storage array, once I've completed the data transfer remotely. Over time I'll upgrade the 4TB drives as finances allow or drive health mandates. My ultimate goal is 56TB of usable storage on the local network, which should hopefully last me at least 3 years (hah!).
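For anyone checking the math: unRAID's single parity dedicates the largest drive, so usable space is simply the sum of the remaining data drives. A quick sketch (sizes in TB, staging plan as described above):

```python
def usable_tb(drives):
    """Usable capacity under single-parity unRAID: total minus largest drive."""
    return sum(drives) - max(drives)

initial = [8, 8, 8, 8]             # the four 8TB WD Reds
print(usable_tb(initial))          # 24 -> the 24TB starting figure

combined = initial + [4, 4, 4, 4]  # after the 4TB array migrates over
print(usable_tb(combined))         # 40TB usable at that stage

end_goal = [8] * 8                 # once every bay holds an 8TB drive
print(usable_tb(end_goal))         # 56 -> the 56TB target
```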

OS Choice: What a tough choice this one was! I spent hours upon hours reviewing documentation, reading guides, watching YouTube tips, and digesting horror stories while choosing between FreeNAS, unRAID, and Xpenology. I put FreeNAS aside first because it seemed (at least to me) way more complicated, and the 1GB-of-RAM-per-TB-of-storage guideline concerned me given that my board maxes out at 32GB – by that rule of thumb, my eventual 56TB would already be past the limit. Additionally, the whole FreeNAS Corral drama was going on, which seemed to really divide the community and, in my mind, put the viability of FreeNAS 11 (which had just been released as stable) in doubt. I loved the idea of Xpenology, as I already have several Synology devices and adore their simplicity, but the hackish, unsupported nature of the build gave me real concern given that this storage server would be located far away from me. Additionally, I knew I had "overbuilt" on specs and wanted an OS that would let me leverage that power; I am not convinced that would have been as easy on Xpenology. In the end I chose unRAID because it seemed like a stable yet flexible OS that I could push further as my needs / desire-to-fiddle expand, backed by a large and helpful community.

Last but not least, one of the photos includes a couple of items for visual size comparison rather than abstract measurements: a beer, a Dell XPS 13 9360, and an Xbox 360 controller.

Hope that was informative for you. Thanks for the great build tool, PCPartPicker!


Comments

  • 33 months ago
  • 1 point

Great build, great read! I may have to do something like this in the future!

  • 33 months ago
  • 1 point

Thanks for sharing your build details! I am currently pondering the various components for a ground-up build of a NAS for home, so your experience with a number of items I was already in the process of investigating were very informative and helpful to me.

  • 4 months ago
  • 1 point

I’m trying to follow this build exactly but am having trouble with the CPU cooler screws that mount from below the motherboard. The Noctua screws are too short: the threads on the neck of the screw stop halfway, so the thumbscrew takes up more room below the board than the standoffs allow, and the screw is not long enough to make it through the backplate to reach and secure the cooler.

Can you explain how you secured the cooler to the motherboard?

  • 2 months ago
  • 1 point

Thanks for the write-up, this was a very interesting read. As I'm doing my own DIY NAS I've been looking at options, so this has helped me a great deal.