Description

Using hardware from my previous main rig, I've built a fairly high-end NAS for use as VM storage over iSCSI. The only parts I had to buy specifically for this build were the case, ECC RAM, hot-swap bays + HDDs, and the PM981 read cache + Optane write cache.

Luckily, Ryzen supports ECC UDIMMs, so I don't have to worry about bit rot or corrupted data being written to the array; "dmidecode --type memory" confirms multi-bit error correction. This motherboard also happens to be ideal for the job, since it can reallocate x4 CPU PCIe lanes from the 2nd x16 (x8 electrically) slot to the 2nd M.2 slot. That leaves one empty x16 slot with 8 lanes that I can use for an HBA card when I want to expand, at which point all 24 CPU lanes will be in use. With an HBA card I'll have 14 SATA ports, and this case has just enough 5.25" bays to fit two 4 x 5.25" to 4 x 3.5" hot-swap cages, with room for two 2 x 5.25" to 3 x 3.5" cages as well, for 14 HDDs total. At that point I may have to reinstall FreeNAS on a small USB SSD to free up the SATA port that the current SSD boot drive is using.

The Seagate Enterprise Capacity/Exos 7E8 (its new name) drives are some of the highest-performance SATA drives available. I'm currently running them as a stripe of mirrors. The PM981 SSD has above-average write endurance and is just about the right size for when I add more RAM (an oversized read cache actually backfires, since the L2ARC headers themselves eat up RAM). The Optane 905P has capacitors for enhanced power-loss protection (which is why it's a 22110-size M.2), which matters because losing your SLOG (write cache) can kill the whole array. I definitely need more RAM and more HDDs to improve read performance, though with the Optane I can already fully saturate a single 10Gb link with writes. I also need a 10Gb switch so I can use link aggregation.
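
For anyone curious how a layout like this translates to commands, here's a minimal sketch from the shell. The pool name "tank" and the device names (da0-da3, nvd0, nvd1) are placeholders rather than my actual ones, and in FreeNAS you'd normally do all of this through the web UI anyway:

    # Stripe of mirrors (ZFS's equivalent of RAID 10)
    zpool create tank mirror da0 da1 mirror da2 da3
    # Optane 905P as the SLOG ("write cache")
    zpool add tank log nvd0
    # PM981 as the L2ARC ("read cache")
    zpool add tank cache nvd1
    # Verify the layout
    zpool status tank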

Currently working on wiring the house for 10GbE. People would normally go for rackmount hardware, but there's nowhere I could put a rack without it sounding like a plane taking off in the next room. I originally had a CaseLabs Mercury S5 on order, because there really aren't many other cases with enough 5.25" bays to fit the hot-swap cages I need. About 1-2 months before it would have arrived, CaseLabs went bankrupt and I had to file a chargeback. Mountain Mods was the next best option with a lot of 5.25" bays, but the quality is honestly far behind CaseLabs; the tolerances are so loose that the screw holes for the side panels barely line up.

I wouldn't recommend watercooling a storage server like this, but I happened to have enough watercooling gear on hand to do it, and I've never had a problem with it in my previous builds.

Part Reviews

Storage

Amazing as a SLOG drive for ZFS. It requires a heatsink (mine came with one from EKWB), and boy does it run hot.


Comments

  • 10 months ago
  • 1 point

Nice build! What are you using the NAS for?

  • 10 months ago
  • 1 point

Right now it's where all the virtual disks for my VM hosts live, so all the VMs are running their storage exclusively over Ethernet. I'm using a Windows Server VM to share some of it, and some of it holds backups of a few of the family computers.
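
If anyone wants to see what the iSCSI side boils down to: FreeNAS generates the config from the web UI, but a hand-written /etc/ctl.conf sketch would look roughly like this (the target name and zvol path below are made-up placeholders, not my actual ones):

    # Hypothetical /etc/ctl.conf; FreeNAS normally generates this itself.
    portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0:3260
    }

    target iqn.2018-10.lan.nas:vmstore {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
            path /dev/zvol/tank/vmstore
        }
    }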

  • 10 months ago
  • 1 point

Wait, the ASUS Prime X470-Pro doesn't support ECC RAM. How did you get the ECC RAM to work?

  • 10 months ago
  • 1 point

I'm using a Crosshair VII Hero, and Wendell from L1Techs has reported that ECC works on this board.

It is kind of difficult to tell whether it's working, though, since AMD doesn't validate consumer chips for ECC, and the motherboard vendors don't implement the kind of error-reporting logging that a lot of server/workstation boards would. There is SO MUCH conflicting information about how to check whether ECC is working when the motherboard doesn't have built-in event logging for it. Even a lot of ASRock Rack, ASUS WS, and some Supermicro boards that are meant for workstation/server use misreport the bit width and ECC type when you query the SMBIOS. I've seen people say the reported information can even change depending on the BIOS revision. Some people say MemTest86 can show whether ECC is working, but others say it doesn't work even on server boards that support ONLY registered ECC RAM. It's damn confusing, and in the end, with unofficially supported ECC, you just have to have faith that it works.

The best evidence I have that it works is that dmidecode in FreeNAS reports multi-bit ECC support and a total bit width of 128 per DIMM (it really should be reporting 72), while people on other X470 boards without ECC get output that says "None" for the ECC type and a 64-bit total width per DIMM.
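
For reference, this is roughly the check I'm going by (run as root; the field values shown are what I see on this board, and as mentioned above, they're known to be unreliable on boards with unofficial ECC support):

    dmidecode --type memory

    # Relevant fields in the output on this board (abridged):
    #   Error Correction Type: Multi-bit ECC
    #   Total Width: 128 bits    <- should really read 72 bits
    #   Data Width: 64 bits
    # Typical non-ECC X470 output instead shows:
    #   Error Correction Type: None
    #   Total Width: 64 bits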

  • 9 months ago
  • 1 point

How did you decide to go AMD? Isn't Intel the CPU of choice for FreeNAS? How stable is the NAS? Thanks!

  • 9 months ago
  • 1 point

I'd already had the CPU, motherboard, and NIC for about a year, so that's a plus. AMD supports ECC (important for software RAID like ZFS) across their entire product stack, and there are 4 more PCIe lanes than on an i7 or i9, which gives me just enough lanes to add an HBA card later. To get equivalent CPU performance from a modern Xeon that supports ECC, I'd have to go with a $2300 Xeon Gold that would still be slightly slower. There's actually a new proper workstation X470 motherboard from ASRock that logs ECC errors in the BIOS and has IPMI plus integrated graphics, so that's what I'd go with if I had to do this again.

1st gen Ryzen has a lot of problems with BSD and Linux; it seems to lock up when the CPU goes into deeper idle states, but if you don't mind higher power draw you can get around that by disabling the deeper C-states with a kernel boot parameter (see the sketch below). I haven't had any problems with FreeNAS or Proxmox on my 2700X or 2950X. I just got a 10Gb switch, so I've done a lot of experimenting with link aggregation; LACP doesn't seem to work correctly in FreeNAS without rebooting (static LAGGs seem to work better anyway), but before those reboots I got to about 4 weeks of uptime without any issues. The only issue right now is that the CPU temperature reads about 47 degrees higher than it actually is. The devs say that's going to be fixed in the next release.
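
On a Linux host the C-state workaround looks roughly like this. This is just a sketch assuming a GRUB-based distro; the exact parameter people use varies between reports:

    # Limit ACPI C-states via a kernel boot parameter.
    # Add it to /etc/default/grub:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet processor.max_cstate=1"
    # Then regenerate the GRUB config and reboot:
    update-grub    # or: grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot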
