Description
Time for a twist: an actual custom 4U server on PCPartPicker for a change!
A build with a bunch of new hard drives for a new NAS/local compute server, with plenty of power for CPU-heavy tasks and virtualization.
This will finally allow me to get rid of the last Windows computer that I personally use: my old NAS. I've been wanting to ditch it for some form of Linux for a while now, but couldn't because of the data migration that would involve.
The build is focused around Proxmox as the hypervisor, using ZFS (on Linux) for the main storage pool, with the M.2 NVMe SSDs accelerating the ZIL/L2ARC and a separate SATA SSD for boot.
The ZFS pool will be served to the LAN over NFS for networked storage, in addition to backing all of the local VMs.
Proxmox makes it easy to set up a ZFS pool as your root/boot partition; however, I specifically want to avoid this for various reasons (it's generally not a good idea to have your primary data store pull dual duty as the boot drive). Because of this, I will use a simple SATA SSD as my Proxmox boot drive, separate from the rest of the storage in the system. This could be avoided with a hypervisor that runs in RAM and boots from some sort of read-only live media (e.g. USB), but Proxmox doesn't really support this (at least not well). Alternative hypervisors like ESXi or SmartOS do support it and wouldn't need a SATA disk dedicated to installation/booting. I was originally planning on using SmartOS because of its superb ZFS support, but AMD-V virtualization support for KVM on SmartOS is experimental at best, so I fell back to Proxmox.
8 of the 10TB HDDs will go into a RAID-Z2 vdev, and 8 of the 4TB HDDs into a second RAID-Z2 vdev in the same pool. One 10TB drive and one 4TB drive are included as cold spares for failure replacement. I included the 4TB drives because I already had plenty of them lying around, even though the $/GB wouldn't really make sense if buying them new. This pool will be the primary data store for the local VMs as well as for the network via NFS. The configuration should yield roughly 70TiB of usable storage once the pool is created (rough math sketched below), although in practice I never want to fill it past ~56TiB, in order to keep at least 20% free space for performance (data integrity plus performance comes at a real cost in raw capacity). If I ever need more than 56TiB, I'll simply add more drives as a new vdev in the pool. This layout also lets me lose any 2 HDDs from each of the 2 vdevs at once (so two 10TB and two 4TB drives) and stay operational while I replace them; in terms of parity/failure tolerance, this ZFS configuration is equivalent to RAID-60.
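As a sanity check on those capacity numbers, here's a rough back-of-the-envelope sketch (an estimate only; the real usable figure comes out lower, around the ~70TiB quoted, once RAID-Z padding, metadata, and the pool's reserved slop space are accounted for):

```python
# Rough usable-capacity estimate for the two RAID-Z2 vdevs.
# RAID-Z2 reserves two drives' worth of parity per vdev, so data drives = n - 2.
TIB = 2**40   # binary TiB
TB = 10**12   # decimal TB, as drives are marketed

def raidz2_data_bytes(drive_count, drive_tb):
    """Data capacity of one RAID-Z2 vdev, ignoring ZFS overhead."""
    return (drive_count - 2) * drive_tb * TB

raw_tib = (raidz2_data_bytes(8, 10) + raidz2_data_bytes(8, 4)) / TIB
print(f"raw data capacity : {raw_tib:.1f} TiB")           # ~76 TiB before ZFS overhead

usable_tib = 70   # approximate figure after padding/metadata/slop space
print(f"80% fill target   : {usable_tib * 0.8:.1f} TiB")  # ~56 TiB
```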
Initially, with the 2 LSI cards (flashed to IT mode and acting as HBAs), I will only connect 16 of the case's 24 HDD bays, but this can be expanded later by simply adding another 8-port SATA HBA.
Each of the 2 NVMe SSDs will get a 32GB partition, and those two partitions will be mirrored to act as the ZIL (SLOG); the remaining space on each drive will be striped as L2ARC cache (~400GB). Partitioning drives to share ZIL and L2ARC duties is not ideal, but the NVMe SSDs provide more than sufficient bandwidth/IOPS. I wanted a mirrored ZIL on quick SSDs as well as a decent L2ARC, and rather than buying 3+ SATA SSDs so each could be handed to ZFS unpartitioned, I figure 2 high-performance NVMe SSDs will make up for the fact that they are exposed to the pool as partitions rather than whole disks.
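On the SLOG sizing: 32GB per partition is generous by the usual rule of thumb, which says the log only needs to hold the sync writes that accumulate over a couple of transaction group commit intervals (5 seconds each by default). A quick sketch of that logic, assuming a future 10Gb/s network link as the worst-case ingest rate:

```python
# Rule-of-thumb SLOG sizing: buffer roughly two transaction-group intervals
# of sync writes at the maximum rate data can arrive.
link_gbps = 10            # assumed worst case: a future 10 Gb/s NIC
txg_timeout_s = 5         # default ZFS txg commit interval (zfs_txg_timeout)
headroom_intervals = 2    # keep ~2 intervals of headroom

ingest_bytes_per_s = link_gbps * 1e9 / 8                      # ~1.25 GB/s
slog_gb = ingest_bytes_per_s * txg_timeout_s * headroom_intervals / 1e9

print(f"SLOG needed at 10Gb/s line rate: ~{slog_gb:.1f} GB")  # ~12.5 GB
# 32GB partitions leave plenty of headroom even at full 10Gb/s.
```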
For future upgrades, I'm considering a 10Gb/s NIC (once I swap all my LAN over to 10Gb/s gear) and additional HDDs for a new vdev, which would also require another LSI card to go with the new drives. Adding more RAM is also a possibility if 64GB doesn't turn out to be enough in the long run (ZFS is definitely a memory hog).
Notes: I recognize that I am not using ECC RAM despite running ZFS, and I am fully aware of the risks. There really wasn't much availability for unregistered (unbuffered) ECC 16GB DDR4 DIMMs, and I'm OK with the risk of going without ECC (plus the faster memory I chose will benefit the Threadripper CPU). I also recognize that even 64GB of RAM could be considered on the low side for what I'm using it for (memory-hungry ZFS plus virtualization). I'm fully aware of this, and if it becomes a problem with my particular workloads, I can easily add more RAM in the future.
Comments are encouraged.