
It's not really bottlenecked by the store but by the calculations performed on each pod schedule/creation.

It's basically "take the global state of node load and capacity, pick where to schedule the pod", and I'd imagine it's probably not running in parallel because that would be far harder to manage.
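
Roughly this shape, I'd guess. Everything below is made up for illustration (the types and the "most free CPU wins" rule are not the actual kube-scheduler code); the point is just that every binding mutates the free capacity the next decision reads, which is why the loop is naturally serialized:

  package main

  import "fmt"

  type node struct {
      name    string
      freeCPU int64 // millicores
      freeMem int64 // bytes
  }

  type pod struct {
      name     string
      cpu, mem int64
  }

  // pickNode filters out nodes the pod doesn't fit on, then "scores" the rest
  // with a made-up rule (most free CPU wins). Returns -1 if nothing fits.
  func pickNode(p pod, nodes []node) int {
      best, bestCPU := -1, int64(-1)
      for i, n := range nodes {
          if n.freeCPU < p.cpu || n.freeMem < p.mem {
              continue
          }
          if n.freeCPU > bestCPU {
              best, bestCPU = i, n.freeCPU
          }
      }
      return best
  }

  func main() {
      nodes := []node{{"a", 4000, 8 << 30}, {"b", 2000, 4 << 30}}
      queue := []pod{{"web-1", 1500, 1 << 30}, {"web-2", 1500, 1 << 30}}

      // One pod at a time: each binding changes the state the next
      // decision has to read, so you can't trivially run these in parallel.
      for _, p := range queue {
          i := pickNode(p, nodes)
          if i < 0 {
              fmt.Println(p.name, "-> unschedulable")
              continue
          }
          nodes[i].freeCPU -= p.cpu
          nodes[i].freeMem -= p.mem
          fmt.Println(p.name, "->", nodes[i].name)
      }
  }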

Not a k8s dev, but I feel like this is the answer. K8s isn't usually just scheduling pods round-robin or at random. There's a lot of state to evaluate, and scheduling pods becomes an NP-hard problem similar to the bin packing problem. I doubt the implementation tries to be optimal here, but it feels like a computationally heavy problem.
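
To illustrate the kind of cheap-but-not-optimal heuristic I mean, here's a first-fit pass over the one-dimensional version of the problem. Purely illustrative, this has nothing to do with the real scheduler's code:

  package main

  import "fmt"

  // firstFit packs each item into the first bin with room, opening a new
  // bin when none fits. Fast (O(n*m)) but not optimal.
  func firstFit(items []int, binSize int) [][]int {
      var bins [][]int
      var free []int
      for _, it := range items {
          placed := false
          for b := range bins {
              if free[b] >= it {
                  bins[b] = append(bins[b], it)
                  free[b] -= it
                  placed = true
                  break
              }
          }
          if !placed {
              bins = append(bins, []int{it})
              free = append(free, binSize-it)
          }
      }
      return bins
  }

  func main() {
      // Bins of size 8: first-fit ends up using 6 bins here, while pairing
      // each 3 with a 5 would need only 4 -- cheap, but clearly not optimal.
      fmt.Println(firstFit([]int{3, 3, 3, 3, 5, 5, 5, 5}, 8))
  }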

In what way is it NP-hard? From what I can gather, it just eliminates nodes where the pod wouldn't be allowed to run, calculates a score for each remaining node, and then randomly selects one of the nodes with the highest score, so it's trivially parallelizable.
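
Something like this shape, where each node's score is independent of the others and can therefore be computed concurrently. Made-up types and scoring rule, not the actual scheduler framework:

  package main

  import (
      "fmt"
      "sync"
  )

  type node struct {
      name          string
      freeCPU       int64
      unschedulable bool
  }

  // filter: toy stand-in for "eliminate nodes the pod can't run on".
  func feasibleFor(podCPU int64, n node) bool {
      return !n.unschedulable && n.freeCPU >= podCPU
  }

  // score: any function of a single node works; that independence is what
  // makes this stage embarrassingly parallel.
  func score(n node) int64 { return n.freeCPU }

  func main() {
      nodes := []node{{"a", 4000, false}, {"b", 8000, false}, {"c", 16000, true}}
      var podCPU int64 = 500

      // Filter.
      var feasible []node
      for _, n := range nodes {
          if feasibleFor(podCPU, n) {
              feasible = append(feasible, n)
          }
      }

      // Score, one goroutine per node; each slot is written exactly once.
      scores := make([]int64, len(feasible))
      var wg sync.WaitGroup
      for i, n := range feasible {
          wg.Add(1)
          go func(i int, n node) {
              defer wg.Done()
              scores[i] = score(n)
          }(i, n)
      }
      wg.Wait()

      // Select the highest-scoring node.
      best := 0
      for i := range feasible {
          if scores[i] > scores[best] {
              best = i
          }
      }
      fmt.Println("scheduling onto", feasible[best].name)
  }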

I think filtering and scoring fall under a heuristics-based approach to addressing NP-hardness?

Bin packing is a well-known NP-hard problem: https://en.wikipedia.org/wiki/Bin_packing_problem


That's greedy

The k8s scheduler lets you tweak how many nodes it looks at when scheduling a pod (percentage of nodes to score), so you can change how big the “global state” the scheduling algorithm sees actually is.
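
For reference, that knob is the top-level percentageOfNodesToScore field in the scheduler's configuration file; the value here is just an example:

  apiVersion: kubescheduler.config.k8s.io/v1
  kind: KubeSchedulerConfiguration
  # stop searching once feasible nodes equal to ~50% of the cluster are found
  percentageOfNodesToScore: 50

The scheduler then stops looking for feasible nodes once it has found that share of the cluster and only scores those, rather than scoring every node.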