
Because we want to run more than just Home Assistant on the same OS? Because traditionally OS and application layers were separated? Because we trust mature Linux distros more when it comes to LTS and security patches? Because we already know our way around Debian/Ubuntu/Nix/etc.?


Absolutely this. Once you get into the game of running apps at home with certain quality assumptions, you end up having to bolt on various things (VPN, DNS, log aggregation, etc.) that are better wrapped around the application than run within it. An AppOS typically just gets in the way of all that. Plus, as edejong said, you already know how to do it on the typical production OSes, and learning to do it for every AppOS is just cumbersome.


Proxmox adds very little overhead. I'm running dozens of things alongside HAOS.

The OS is the path of least resistance and gives you the best low-maintenance experience.

https://community-scripts.github.io/ProxmoxVE/scripts?id=hao...


> Proxmox adds very little overhead.

It's still running a second kernel and an entire userspace stack. In my world that's not "very little overhead".


With Proxmox and LXC containers, there is no second kernel: LXC uses the host kernel’s native cgroups and namespaces for process isolation. You can actually achieve the same with just systemd and namespaces.
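
To make that concrete, here's a minimal sketch of the primitive at work (assuming Linux and Python 3.12+, which exposes unshare(2) as os.unshare; needs root):

    # Minimal sketch: the same host-kernel isolation LXC builds on.
    # Assumes Linux, Python 3.12+ (os.unshare), and root privileges.
    import os
    import socket

    pid = os.fork()
    if pid == 0:
        # Child: enter a fresh UTS namespace; hostname changes are now local.
        os.unshare(os.CLONE_NEWUTS)
        socket.sethostname("sandbox")
        print("inside: ", socket.gethostname())   # -> sandbox
        os._exit(0)

    os.waitpid(pid, 0)
    print("outside:", socket.gethostname())       # host name unchanged

Same kernel throughout: no hypervisor, no guest kernel.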

Having said that, I think if you prefer traditional distro packaging, you should absolutely stick to that.


Proxmox supports both VMs and LXC containers. You would use LXC containers for low overhead: no second kernel.


Proxmox supports it, but it's not what the linked script does, nor is it officially supported by HAOS.


Yep!

I'm aware of the tradeoffs here. For Home Assistant specifically, there are two options if you want to stay on the path of first-class support: run it bare metal or in a VM.

Going a different path isn't a bad choice, or even a big downgrade.

I had fun with all the different ways of running home assistant 6+ years ago, and then decided to embrace a solution that required the least fuss and would hold up long term. I'm happy with my choice, and it gave me exactly what I was expecting.


I'm also running dozens of things alongside HA and I don't have to use proxmox.

It's not hard to run HA in unsupported mode. The only real difference is an annoying reminder that you're unsupported. Everything else works, including plugins/add-ons.

I've run HA a bunch of ways. It doesn't really matter all that much. Use HACS to fill any gaps.


One-click updates work from the dashboard? I don't think they would.

Of course you don't have to go the proxmox route, but it is an easy route.

I have a proxmox cluster, so moving things between machines, high availability, and backups are a breeze. Had an SSD go bad a few months ago, and I just moved everything on that machine to another node until the new drive showed up. It was a pleasant experience.

There's plenty of other ways to achieve this, but this is what I chose and I'm happy with it. It's simple, and I can manage everything from my phone if needed.


Yes, one-click updates work just fine. The only difference is the unsupported message, which appears only after reboots or an update. Click ignore and it goes away.

Seconded. My home server does many things, one of which is Home Assistant.


I used to run Home Assistant via Docker, but I've since switched to Proxmox with HAOS in a VM and a second Debian VM for everything else. My main reason is that it seems like the better-supported scenario. For example, when the Voice Assistant stuff first came out, the setup was only really documented via HAOS add-ons. I managed to get it working with standalone Docker containers, but it was a pain to figure out. It really is simpler to just use HAOS, IMO.


> we trust mature Linux distros more when it comes to LTS and security patches

This. If I have to trust some huge container or custom OS, where is the benefit of open source?


Your position is indeed supported by the data presented here: https://ourworldindata.org/rise-of-social-media


Quite the opposite. Look at the almost exactly linear growth of the two biggest sites, FB and YouTube. There is nothing special about the early 2010s, no acceleration in growth.


Not exactly. It’s not that I can’t see the colors; I just need more contrast to pick up red or green. A grayish green looks the same as plain gray to me. A small bright green dot? Might as well be gray or brown. But a large, solid area of bright green or red? No problem at all.


Same here. I can figure it out, just not at a glance. Unfortunately, when it comes to video games, identifying small flashing colors at a glance is exactly the goal.


No, these are not disruptors. Substantial incremental improvements, but part of the larger battle.


"Even though Steve Jobs emphasised iPhone superiority to "Buttons", it is to be expected that the consumer QWERTY category will continue to succeed."

Their key mistake.


I don't know. 17 years on and my fingers still miss hardware keyboards a little bit.

My dream smartphone would be a black rectangle, but with a landscape hardware keyboard that slides out from underneath. And, in an ideal world, OLED keys for changing the layout and touch sensitivity for moving the text cursor.

What I miss from the 2000s is the big differentiation in phone form factors. Granted, a lot of them were weird, but there was at least experimentation and optimising for different use cases. What if the current standard of a black rectangle is just a local maximum and there is something better ahead?


I still think Motorola's Droid (and Droid 2) were the pinnacle of the smartphone form factor.

I distinctly recall that the prevailing view among friends at the time was that, even with keyboard-less smartphones becoming the norm, the keyboard approach would remain the standard interface, since BlackBerry still existed and seemed to hold majority market share (my region had few iPhones at the time).


Early research consistently included 1.0% in its confidence intervals, which is most likely the right IFR during the first phases.

The 0.1%-0.2% figure was just bad science: taking medians over countries whose statistics reporting lagged.

Where did you find 3.4%? Isn’t that an upper bound?
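
To illustrate the lag problem with toy numbers (entirely made up, just to show the mechanism): while cases grow exponentially and deaths trail infections by a couple of weeks, dividing same-day cumulative deaths by cumulative cases understates fatality by roughly the growth factor over the lag.

    # Toy sketch of the lag artifact; all numbers are invented.
    cases = [100, 200, 400, 800, 1600]      # weekly cases, doubling each week
    true_ifr, lag = 0.01, 2                 # deaths trail cases by ~2 weeks
    deaths = [0] * lag + [round(c * true_ifr) for c in cases[:-lag]]

    naive = sum(deaths) / sum(cases)                      # ~0.0023 (0.23%)
    lag_adjusted = sum(deaths[lag:]) / sum(cases[:-lag])  # 0.01 (the true 1%)
    print(f"naive: {naive:.4f}, lag-adjusted: {lag_adjusted:.4f}")

A naive same-day ratio lands around 0.2% here even though the underlying IFR is 1%.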


People paid him for such nonsense?


The service came at no additional cost.


Totally agree. Observability is just another dataset and should be modeled, managed, and governed like any other dataset. Data quality controls should be held to an equal or higher standard than those for regular datasets.

Monitoring, dashboarding and alerting should leverage other BI-class tooling.
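
As a tiny sketch of what that looks like in practice (assuming pandas and newline-delimited JSON logs; the file name and field names are hypothetical):

    # Treat logs as just another governed dataset: same quality gates,
    # same aggregations you'd run on a business table.
    import pandas as pd

    logs = pd.read_json("app.log.jsonl", lines=True)

    # Data-quality checks, exactly as for any other dataset:
    assert logs["timestamp"].notna().all(), "missing timestamps"
    assert logs["level"].isin({"DEBUG", "INFO", "WARN", "ERROR"}).all()

    # BI-style aggregation feeding a dashboard or alert:
    print(logs[logs["level"] == "ERROR"].groupby("service").size())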


The title should be: “Critical flaw in title causes HN readers to click on mostly irrelevant article.”


I thought this was already well known; a study in 2021 [1] found this relationship. Still, it is good to keep people informed, because it seems we are letting a potentially severely disruptive disease shred our society to bits.

From 2021: “Biological markers of brain injury, neuroinflammation and Alzheimer’s correlate strongly with the presence of neurological symptoms in COVID-19 patients.”

[1] https://aaic.alz.org/releases_2021/covid-19-cognitive-impact...

