I think this is a totally flawed way of thinking. Most systems do not need a fully distributed database, for example, and as a result most don't need the flexibility.
E.g., over years of using Etcd, I've come to the conclusion that none of the uses I've actually put it to were necessary, and so I've generally stopped using it. I'll use it again if I come across cases where consistency is sufficiently critical and the system needs to be distributed, but that's a rare situation to be in. E.g., I've seen lots of systems try to use it to distribute configuration data. Most of the time that data doesn't need to be consistent, as long as you can determine whether or not it is (and so take outdated instances out of rotation, or not put them back in rotation). Same for load balancer configs, for example.
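To illustrate the "detect staleness instead of requiring consistency" idea, here's a minimal sketch in Go: each instance exposes a health endpoint that reports unhealthy when the config version it has loaded is older than the one the caller expects, so the load balancer simply drops it from rotation until it catches up. The names here (the expected_version query parameter, loadedConfigVersion) are hypothetical, not taken from any particular system.

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// Version of the config this instance has actually loaded, e.g. a counter or
// timestamp stamped into the config file when it is generated.
var loadedConfigVersion atomic.Int64

func healthz(w http.ResponseWriter, r *http.Request) {
	var expected int64
	fmt.Sscan(r.URL.Query().Get("expected_version"), &expected)

	// If this instance is running an older config than the one being rolled
	// out, report unhealthy so the load balancer takes it out of rotation.
	// No consensus or distributed database needed, only detectability.
	if loadedConfigVersion.Load() < expected {
		http.Error(w, "stale config", http.StatusServiceUnavailable)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	loadedConfigVersion.Store(42) // updated whenever the config is (re)loaded
	http.HandleFunc("/healthz", healthz)
	http.ListenAndServe(":8080", nil)
}
```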
It's gotten to the point where whenever someone mentions "distributed database", alarm bells go off in my head. Nine times out of ten it's a sign of someone over-engineering a system and building in unnecessary complexity.
What I want out of a system to manage my containers is something that reduces the complexity of deployment and operation, not something that introduces more. So far I've seen nothing in Kubernetes that indicates it fits that bill. Maybe it does at the very high end. My largest systems have "only" been in the few-hundred-containers/VMs range, over dozens of servers in 4-5 physical locations (on-premises/co-location, managed and public cloud). At that scale, I've personally found Kubernetes to be overkill, adding too much complexity.
Maybe that'll change some day as it matures, but I'm not in a rush to complicate my stacks.
This is far off topic, and I'm not sure what point you're making or what you're referring to as a way of thinking.
My main point in all of this is that these concepts must be understood in order to run large distributed applications. If you don't have such an app, none of it applies. If you do, then you have a choice of tools, and one of them is Kubernetes, which offers a great foundation and set of features. There are other tools like Nomad, Chef, or your own code and scripts - it doesn't matter what you use as long as you understand what you're doing.