Hacker News

While it may be tempting to go "mini" and NVMe, for a normal use case I think this is hardly cost effective.

You give up so much by using an all in mini device...

No upgrades, no ECC, harder cooling, less I/O.

I have had a Proxmox server with a used Fujitsu D3417 and 64GB ECC for roughly 5 years now, paid 350 bucks for the whole thing and upgraded the storage once from 1TB to 2TB. It draws 12-14W in normal day use and runs 10 Docker containers and 1 Windows VM.

So I would prefer an mATX board with ECC, IPMI, 4x NVMe and 2.5GbE over these toy boxes...

However, Jeff's content is awesome as always



Another thing is that unless you have a very specific need for SSDs (such as heavily random-access workloads, very tight space constraints, or a bumpy environment), mechanical hard drives are still way more cost effective for storing lots of data than NVMe. You can get a manufacturer-refurbished 12TB hard drive with a multi-year warranty for ~$120, while even an 8TB NVMe drive goes for at least $500. Of course for general-purpose internal drives, NVMe is a far better experience than a mechanical HDD, but my NAS with 6 hard drives in RAIDZ2 still gets bottlenecked by my 2.5Gbit LAN, not by the speeds of the drives.
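A quick back-of-the-envelope check of that gap, using the example prices above (the figures are the thread's examples, not current quotes):

```python
# Per-TB cost comparison; prices are the examples from the comment above.
hdd_price, hdd_tb = 120, 12    # refurbished 12 TB HDD, ~$120
nvme_price, nvme_tb = 500, 8   # 8 TB NVMe, ~$500

hdd_per_tb = hdd_price / hdd_tb      # $10.00/TB
nvme_per_tb = nvme_price / nvme_tb   # $62.50/TB
print(f"HDD ${hdd_per_tb:.2f}/TB vs NVMe ${nvme_per_tb:.2f}/TB "
      f"({nvme_per_tb / hdd_per_tb:.1f}x)")
```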


It depends on what you consider "lots" of data. For >20TB, yes, absolutely, by a landslide. But if you just want self-hosted Google Drive or Dropbox you're in the 1-4TB range, where mechanical drives are a very bad value because they have a pretty significant price floor. A WD Blue 1TB HDD is $40 while a WD Blue 1TB NVMe is $60. The HDD still has a strict price advantage, but the NVMe drive uses way less power, is more reliable, and has no spin-up time (consumer data is accessed so infrequently that keeping mechanical drives spinning continuously falls into that awkward zone of barely being worthwhile).

And these prices are getting low enough, especially with these NUC-based solutions, to actually be price competitive with the low tiers of Google Drive & Dropbox while also being something you actually own and control. Dropbox still charges $120/yr for the entry-level plan of just 2TB, after all. 3x WD Blue NVMes + an N150 and you're at break-even in 3 years or less.
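The break-even claim roughly checks out; a sketch assuming ~$150 for an N150 box (the box price is an assumption, the other figures are from the comment):

```python
# Self-hosted vs Dropbox break-even; the $150 N150 box price is an assumption.
dropbox_per_year = 120          # Dropbox entry plan, 2 TB
nvme_each, nvme_count = 60, 3   # WD Blue 1 TB NVMe drives
n150_box = 150                  # assumed mini-PC price

upfront = nvme_count * nvme_each + n150_box
years = upfront / dropbox_per_year
print(f"${upfront} upfront, break-even after {years:.2f} years")  # 2.75 years
```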


I appreciate you laying it out like that. I've seen these NVMe NAS boxes mentioned and had been thinking that the reliability of SSDs was so much worse than HDDs.


SSDs are limited mainly by write cycles, whereas HDDs literally spin themselves to death. In simple consumer NAS usage, like if this was just photo backup, that basically means SSDs will last forever. Meanwhile those HDDs start living on borrowed time at 5-8 years, regardless of write cycles.


I have had two SanDisk 2.5-inch SSDs just suddenly fail. No warning that I could discern, and no way to recover afterwards. Both were running Debian variants with / on the SSD; luckily I keep /home on a separate partition.

Any idea what that failure mode could have been? It worries me tremendously to keep data on an SSD now.


I had a Samsung 960 Pro NVMe fail without warning, which was even more concerning.

Then shortly after, I had a BTRFS filesystem fail on another drive without any hardware failure.

Just backup your stuff with 3-2-1 strategy and you're OK.

I'd recommend a combination of syncthing, restic and ZFS (with zfs-auto-snapshot, sanoid or zrepl) and maybe bluray (as readonly medium)
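A minimal sketch of how those pieces fit together (pool, dataset, and remote names are placeholders, not a tested config):

```shell
# Local rolling snapshots (zfs-auto-snapshot/sanoid normally schedule these)
zfs snapshot tank/data@manual-$(date +%F)

# Off-site, encrypted, deduplicated backup with restic over SFTP
restic -r sftp:backuphost:/srv/restic init     # run once
restic -r sftp:backuphost:/srv/restic backup /tank/data

# Periodically verify that the backup is actually restorable
restic -r sftp:backuphost:/srv/restic check
```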


I'm considering Amazon Glacier for offline backup. Any opinions, alternatives, or other advice welcome.


Isn't Amazon Glacier cloud based?


Yes, I mean for off-site backup. It's too late to edit that comment.


Don’t forget about power. If you’re trying to build a low-power NAS, those HDDs idle around 5W each, while an SSD is closer to 5mW. Once you’ve got a few disks, the HDDs can account for half the power or more. The cost penalty for 2TB or 4TB SSDs is still big, but not as bad as at the 8TB level.


5W down to mW is huge, but the total cost of ownership over 3-5 years, depending on the cost of the hardware, may not pay off a 5W spread, especially with SSD premiums.
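For a rough sense of scale (the €0.30/kWh electricity rate is an assumption):

```python
# Electricity cost of a constant 5 W difference, running 24/7.
watts = 5
rate = 0.30                              # assumed €/kWh
kwh_per_year = watts * 24 * 365 / 1000   # 43.8 kWh/yr
cost_5_years = kwh_per_year * rate * 5   # ~€66 over 5 years
print(f"{kwh_per_year:.1f} kWh/yr, ~€{cost_5_years:.0f} over 5 years per drive")
```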

When it comes to self hosted servers for example, using tiny computers as servers often gets you massive power savings that do make a difference compared to buying off-lease rack mount servers that can idle in the hundreds of watts.


Such power claims are problematic: you're not letting the HDDs spin down, for instance, and not crediting the fact that an SSD may easily dissipate more power than an HDD under load. (In this thread, the host and network are slow, so it's not relevant that SSDs are far faster when active.)


There are a lot of "never let your drive spin down! They need to be running 24/7 or they'll die in no time at all!" voices in the various homelab communities, sadly.

Even the lower tier IronWolf drives from Seagate specify 600k load/unload cycles (not spin down, granted, but gives an idea of the longevity).


Is there any (semi-)scientific proof to that (serious question)? I did search a lot to this topic but found nothing...


Here is someone that had significant corruption until they stopped: https://www.xda-developers.com/why-not-to-spin-down-nas-hard...

There are many similar articles.


I wonder if they were just hit with the bathtub curve?

Or perhaps the fact that my IronWolf drives are 5400rpm rather than 7200rpm means they're still going strong after 4 years with no issues spinning down after 20 minutes.

Or maybe I'm just insanely lucky? Before I moved to my desktop machine being 100% SSD I used hard drives for close to 30 years and never had a drive go bad. I did tend to use drives for a max of 3-5 years though before upgrading for more space.


I wonder if it has to do with the type of HDD. The red NAS drives may not like to be spun down as much. I spin down my drives and have not had a problem except for one drive, after 10 years continuous running, but I use consumer desktop drives which probably expect to be cycled a lot more than a NAS.


I experimented with spin-downs, but the fact is, many applications need to write to disk several times per minute. Because of this I only use SSDs now. Archived files are moved to the cloud. I think Google Drive is one of the best alternatives out there, as it has true data streaming built into the macOS and Windows clients. It feels like an external hard drive.


Letting hdds spin down is generally not advisable in a NAS, unless you access it really rarely perhaps.


Spin down isn't as problematic today. It really depends on your setup and usage.

If the stuff you access often can be cached to SSDs, you rarely touch the HDDs. Depending on your filesystem and operating system, only the drives actually in use need to spin up. If you have multiple drive arrays with media, some of it won't be accessed that often.

In an enterprise setting it generally doesn't make sense. In a home environment you generally don't access the data that often. Automatic downloads and seeding change that, though.


Is there any (semi-)scientific proof to that (serious question)? I did search a lot to this topic but found nothing...

(see above, same question)


It's probably decades-old anecdata from people who recommissioned old drives that had been on the shelf for many years. The theory is that the grease on the spindle dries up and seizes the platters.


I've put all of my surveillance cameras on one volume in _hopes_ that I can let my other volumes spin down. But nope. They spend the vast majority of their day spinning.


Did you consider ZFS with L2ARC? The extra caching device might make this possible...


That's not how L2ARC works. It's not how the ZIL SLOG works, either.

If a read request can be filled by the OS cache, it will be. Then it will be filled by the ARC, if possible. Then it will be filled by the L2ARC, if it exists. Then it will be filled by the on-disk cache, if possible; finally, it will be filled by a read.

An async write will eventually be flushed to a disk write, possibly after seconds of realtime. The ack is sent after the write is complete... which may be while the drive has it in a cache but hasn't actually written it yet.

A sync write will be written to the ZIL SLOG, if it exists, while it is being written to the disk. It will be acknowledged as soon as the ZIL finishes the write. If the SLOG does not exist, the ack comes when the disk reports the write complete.
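The read path described above can be sketched as a toy model (purely illustrative, not ZFS code):

```python
# Toy model of the ZFS read hierarchy: OS cache -> ARC -> L2ARC -> disk.
def zfs_read(block, os_cache, arc, l2arc):
    for name, cache in (("os cache", os_cache), ("ARC", arc), ("L2ARC", l2arc)):
        if block in cache:
            return name          # served from a cache, disks stay idle
    return "disk"                # cache miss: the pool's disks must spin up

print(zfs_read("b", set(), set(), {"b"}))  # L2ARC
print(zfs_read("b", set(), set(), set()))  # disk
```

Which is why an L2ARC alone can't guarantee the spinning disks stay asleep: any miss, and any write flush, still reaches them.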


BTW what I mean is that even with my attempts to limit activity, there seems to be enough network activity to wake these drives pretty much continuously.


Low power, low noise, low profile, LOW ENTRY COST. I can easily get a Beelink ME mini or two and build a NAS + offsite storage. Two 1TB SSDs for a mirror are around 100€, two new 1TB HDDs are around 80€.

You are thinking in dimensions normal people have no need for. The numbers alone speak volumes: 12TB, 6 HDDs, 8TB NVMes, 2.5Gbit LAN.


> […] mechanical hard drives are still way more cost effective for storing lots of data than NVMe.

Linux ISOs?


The selling point for the people in the Plex community is the N100/N150 include Intel’s Quicksync which gives you video hardware transcoding without a dedicated video card. It’ll handle 3 to 4 4K transcoded streams.

There are several sub-$150 units that allow you to upgrade the RAM, limited to one 32GB stick max. You can use an NVMe-to-SATA adapter to add plenty of spinning rust, or connect it to a DAS.

While I wouldn’t throw any VMs on these, you have enough headroom for non-AI home server apps.


The device I linked below (https://www.aliexpress.com/item/1005006369887180.html) has a Xeon, ECC, 2x NVMe, SATA, 2.5Gbit and everything else you need in a very small box.

Intel also means it has QuickSync, so you won't need to buy an N150. However, I tend to be sceptical about these AliExpress boxes, too. Established server manufacturers like Dell, HP, Lenovo or Fujitsu (RIP) are way more reliable.


I agree, if your linked pc wasn't from aliexpress it would be a compelling alternative. Not that these n100 mini-pcs are any better, but I can get one tomorrow off amazon for under $150.

I do think they bridge the gap between a Raspberry Pi and a full server. Likely for the hobbyist who doesn't yet have the need (or space) for better hardware.


We’ve been able to buy used OptiPlex 3060 or 3070’s for about $100 for years now and they tick all the boxes for Plex and QuickSync. Only two NVME and one SATA slot though, so maybe not ideal for a NAS but definitely fits the power and thermal profile, and it’s nice to reuse perfectly good hardware.


I think you're right generally, but I wanna call out the ODROID H4 models as an exception to a lot of what you said. They are mostly upgradable (SODIMM RAM, SATA ports, M.2 2280 slots), and it does support in-band ECC which kinda checks the ECC box. They've got a Mini-ITX adapter for $15 so it can fit into existing cases too.

No IPMI and not very many NVME slots. So I think you're right that a good mATX board could be better.


Not sure about the ODROID, but I got myself the NAS kit from FriendlyElec. With the largest RAM option it was about 150 bucks and comes with 2.5G Ethernet and 4 NVMe slots. No fan, and it keeps fairly cool even under load.

Running it with encrypted ZFS volumes, and even with a 5-bay 3.5-inch HDD dock attached via USB.

https://wiki.friendlyelec.com/wiki/index.php/CM3588_NAS_Kit


Well, if you would like to go mini (with ECC and 2.5G) you could take a look at this one:

https://www.aliexpress.com/item/1005006369887180.html

Not totally upgradable, but at least pretty low cost and modern, with an optional SATA + NVMe combination for Proxmox. Shovel in an enterprise SATA drive and a consumer 8TB WD SN850X and this should work pretty well. Even Optane is supported.

IPMI could be replaced with NanoKVM or JetKVM...


That looks pretty slick with a standard hsf for the CPU, thanks for sharing


Nice indeed. With only 2 nvme slots, what drive configuration do you have in your mind? Backup from nvme to HDD locally and another device remote?


For my personal purposes I would go with 2x WD SN850X 2TB (consumer NVMe) in RAID1.

You could also go with a 32GB+ Intel Optane boot drive and enterprise SATA for data, depending on your use case.


You can get a 1 -> 4 M.2 adapter for these as well which would give each one a 1x PCIe lane (same as all these other boards). If you still want spinning rust, these also have built-in power for those and SATA ports so you only need a 12-19v power supply. No idea why these aren't more popular as a basis for a NAS.


No ECC is the biggest trade-off for me, but the C236 express chipset has very little choice of CPUs; they are all 4-core, 8-thread. I've got multiple X99-platform systems and for a long time they were the king of cost efficiency, but lately the Ryzen laptop chips are becoming too good to pass up, even without ECC, e.g. the Ryzen 5825U minis.


For a home NAS, ECC is as needed as it is on your laptop.


ECC is essential indeed for any computer. But the laptop situation is truly dire, while it's possible to find some NAS with ECC support.


Most computers don't have ECC. So it might be essential in theory but in practice things work fine without (for standard personal, even work, use cases).


I've had a synology since 2015. Why, besides the drives themselves, would most home labs need to upgrade?

I don't really understand the general public, or even most usages, requiring upgrade paths beyond get a new device.

By the time the need to upgrade comes, the tech stack is likely faster and you're basically just talking about gutting the PC and doing everything over again, except maybe power supply.


> except maybe power supply.

Modern Power MOSFETs are cheaper and more efficient. 10 Years ago 80Gold efficiency was a bit expensive and 80Bronze was common.

Today, 80Gold is cheap and common and only 80Platinum reaches into the exotic level.


An 80Bronze 300W can still be more efficient than a 750W 80Platinum at mainly low loads. Additionally, some devices are way more efficient than they are certified for. A well-known example is the Corsair RM550x (2021).

If your peak power draw is <200W, I would recommend an efficient <450W power supply.

Another aspect: Buying a 120 bucks power supply that is 1.2% more efficient than a 60 bucks one is just a waste of money.


Understandable... Well, the bottleneck for a Proxmox Server often is RAM - sometimes CPU cores (to share between VMs). This might not be the case for a NAS-only device.

Another upgrade path is to keep the case, fans, cooling solution and only switch Mainboard, CPU and RAM.

I'm also not a huge fan of non-x64 devices, because they still often require jumping through hoops regarding boot order, booting from external devices, or power-loss recovery.


These little boxes are perfect for my home.

My use case is a backup server for my macs and cold storage for movies.

6x 2TB drives will give me a ~9TiB RAID-5 for $809 ($100 each for the drives, $209 for the NAS).
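The capacity math for that layout (RAID-5 sacrifices one drive's worth of space for parity):

```python
# Usable space of 6x 2 TB in RAID-5.
n_drives, tb_each = 6, 2
usable_tb = (n_drives - 1) * tb_each       # 10 TB (decimal)
usable_tib = usable_tb * 1e12 / 2**40      # ~9.1 TiB, i.e. the "9TiB" figure
print(f"{usable_tb} TB = {usable_tib:.1f} TiB usable")
```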

Very quiet so I can have it in my living room plugged into my TV. < 10W power.

I have no room for a big noisy server.


While I get your point about size, I'd not use RAID-5 for my personal homelab. I'd also say that 6x 2TB drives are not the optimal solution for low power consumption. You're also missing out on server-quality BIOS, design/stability/x64 and remote management. However, not bad.

While my server is quite big compared to a "mini" device, it's silent. No CPU fan, only 120mm case fans spinning around 500rpm, maybe 900rpm on load - hardly noticeable. I also have a completely passive backup solution with a Streacom FC5, but I don't really trust it for the chipsets, so I installed a low-rpm 120mm fan there as well.

How did you fit 6 drives in a "mini" case? Using Asus Flashstor or beelink?


I'm interested in learning more about your setup. What sort of system did you put together for $350? Is it a normal ATX case? I really like the idea of running proxmox but I don't know how to get something cheap!


My current config:

  Fujitsu D3417-B12
  Intel Xeon 1225
  64GB ecc
  WD SN850x 2TB
  mATX case
  Pico PSU 150
For backup I use a 2TB enterprise HDD and ZFS send

For snapshotting i use zfs-auto-snapshot

So really nothing recommendable for buying today. You could go for this

https://www.aliexpress.com/item/1005006369887180.html

Or an old Fujitsu Celsius W580 Workstation with a Bojiadafast ATX Power Supply Adapter, if you need harddisks.

Unfortunately there is no silver bullet these days. The old stuff is... well, too old or no longer available, and the new stuff is either too pricey, lacks features (ECC and 2.5G mainly) or is too power hungry.

A year ago there were bargains for Gigabyte MC12-LE0 board available for < 50bucks, but nowadays these cost about 250 again. These boards also had the problem of drawing too much power for an ultra low power homelab.

If I HAD to buy one today, I'd probably go for a Ryzen Pro 5700 with a gaming board (like ASUS ROG Strix B550-F Gaming) with ECC RAM, which is supported on some boards.


Does the BIOS support S3 sleep and Wake-on-LAN?


I agreed with this generally until I learned the hard way that RAID 5 at minimum is the only way to have some peace of mind, and to always get a NAS with at least 1-2 more bays than you need.

Storage is easier as an appliance that just runs.


> I'd not use RAID-5 for my personal homelab.

What would you use instead?

ZFS is better than raw RAID, but 1 parity per 5 data disks is a pretty good match for the reliability you can expect out of any one machine.

Much more important than better parity is having backups. Maybe more important than having any parity, though if you have no parity please use JBOD and not RAID-0.


I'd almost always use RAID-1, or if I had > 4 disks, maybe RAID-6. RAID-5 seems very cost effective at first, but if you lose a drive, the probability of losing another one during the restore process is pretty high (I don't have the numbers, but I researched that years ago). The disk-replacement process puts very high load on the non-defective disks, and the more you have, the riskier the process. Another aspect is that 5 drives draw way more power than 2, and you cannot (easily) upgrade the capacity, although ZFS now offers a RAIDZ expansion feature.

Since RAID is not meant for backup but for availability, losing a drive while restoring will kill your storage pool, and having to restore all the data from a backup (e.g. from a cloud drive) is probably not what you want, since it takes time during which the device is offline. If you rely on RAID5 without having a backup, you're done.

So I have a RAID1, which is simple, reliable and easy to maintain. Replacing 2 drives with higher capacity ones and increasing the storage is easy.
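A sketch of that setup and the capacity-upgrade path with ZFS (pool and device names are placeholders):

```shell
# Two-way mirror; autoexpand lets the pool grow once both disks are replaced
zpool create tank mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
zpool set autoexpand=on tank

# Upgrade: swap one disk at a time, letting each resilver finish in between
zpool replace tank diskA /dev/disk/by-id/biggerA
zpool replace tank diskB /dev/disk/by-id/biggerB
```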


I would run 2 or more parity disks always. I have had disks fail and rebuilding with only one parity drive is scary (have seen rebuilds go bad because a second drive failed whilst rebuilding).

But agree about backups.


Were those arrays doing regular scrubs, so that they experience rebuild-equivalent load every month or two and it's not a sudden shock to them?

If your odds of disk failure in a rebuild are "only" 10x normal failure rate, and it takes a week, 5 disks will all survive that week 98% of the time. That's plenty for a NAS.
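A quick check of that 98% figure, assuming a ~2% annual failure rate per disk (the AFR is an assumption, not a quoted spec):

```python
# Probability that 5 remaining disks all survive a week-long rebuild
# at 10x the normal failure rate, assuming ~2% AFR per disk.
afr = 0.02
weekly = afr / 52                 # baseline weekly failure probability
elevated = weekly * 10            # 10x stress during the rebuild
survive_all = (1 - elevated) ** 5
print(f"{survive_all:.1%}")       # ~98.1%
```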


If the drives are the same age and large parts of the drive haven't been read from for a long time until the rebuild you might find it already failed. Anecdotally around 12 years ago the chances of a second disk failing during a raid 5 rebuild (in our setup) was probably more like 10-20%


> and large parts of the drive haven't been read from for a long time

Hence the first sentence of my three sentence post.


If I wanted to deal with snark I'd reply to people on Reddit.


My goal isn't to be rude, but when you skip over a critical part of what I'm saying it causes a communication issue. Are you correcting my numbers, or intentionally giving numbers for a completely different scenario, or something in between? Is it none of those and you weren't taking my comment seriously enough to read 50 words? The way you replied made it hard to tell.

So I made a simple comment to point out the conflict, a little bit rude but not intended to escalate the level of rudeness, and easier for both of us than writing out a whole big thing.


Storing backups and movies on NVMe ssds is just a waste of money.


Absolutely. I don't store movies at all, but if I did, I would add a USB-based solution that could be turned off remotely via a Shelly plug / Tasmota.


Why would you need to disconnect it? Are you worried about ransomware?


Power consumption


not if you value silence and compactness


A disk NAS isn't very large or loud.


For my purposes, a disk NAS is both too large and too noisy.

The only place I can put a NAS is my living room. I'm not putting a fucking 4-bay synology on my entertainment shelf. And if I can hear it, it is too loud.

These mini-NAS boxes are about the size of a single 3.5" HDD.


Gen 11 HPE Microservers seem pretty decent:

https://buy.hpe.com/us/en/compute/tower-servers/proliant-mic...

Pity they're Intel CPUs though. :(

HPE have announced 12th Gen servers for their other lines recently, so maybe the Microservers will get a 12th Gen update this year too. Hopefully with AMD CPUs rather than the Intel crap.


Personally I find these too expensive compared to the good old N54L and Dell T20 era.

The hardware is great, but >1000 bucks is pretty hard to swallow even for enthusiasts.


The Gen8 stuff is pretty good, though the 16GB ram maximum for those is what's causing me to look at upgrades. :)


Where are you measuring the power consumption? I've recently started measuring the wattage of all the various electronics in my collection, and I haven't found any computer that isn't underpowered and still draws under 25W from the wall when idle, and that's with no HDDs and minimal RAM.

Turns out I actually have power supplies that alone draw over 30W with zero load; when trying for the lowest idle power consumption, I've found that the choice of power supply matters a lot.


I use a cheap power meter for initial testing, and a Tasmota plug as a monitoring solution, a second measurement, and a way to remotely hard-reset in case of a freeze or similar.

Turns out that the power supply and motherboard matter most for saving power - besides reaching low C-states (powertop). I had the best results with the Fujitsu D3x17 / D3644 and the Gigabyte C246-WU2.
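powertop is the usual tool for chasing those C-states; a sketch (the sysfs path is an example and varies by machine):

```shell
# Apply powertop's suggested power-management toggles, then inspect C-states
powertop --auto-tune
powertop        # interactive view: the "Idle stats" tab shows reached C-states

# Example of persisting one toggle manually: SATA link power management
echo med_power_with_dipm > /sys/class/scsi_host/host0/link_power_management_policy
```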

Today these are unicorns not worth hunting for. Like I said: no modern server-grade board is that good while also being cheap. You could take a look at the

  Kontron K3851-R ATX
if I remember correctly. Kontron bought Fujitsu's mainboard segment a while ago.



