
> For cloud customers as well

Cloud costs are dominated by power delivery and cooling. Both are directly influenced by how much power the chip draws to hit its performance target.

I guess it does indirectly influence dollar cost, but I was referring to the MSRP of the chip. As a simple example: the per-chip cost of Graviton is probably enormous (if you factor R&D into the cost of a chip), but it's still cheaper for Amazon customers. Why? Power and cooling.



Disclosure: I used to work on GCE.

I don't understand where these power and cooling mantras came from, but large-scale cloud providers have very low PUE (Google publishes historical data at https://www.google.com/about/datacenters/efficiency/). That means you can take the basic power for a thread, add some for memory, and multiply by a small overhead factor to get watts. Plug that into your favorite $/kWh guess and you get a price.

Ignoring Graviton, which doesn't have published power data to my knowledge, you can look up the TDP for a bunch of chips and see that it's a few watts per thread [1]. Similar calculations can be done for RAM. You end up at 5ish watts per thread with some RAM attached; let's call it 10 all in with cooling and other overheads. Since 10 W is 1% of a kW, we end up with 0.01 kWh per hour. The top hit for "us power commercial rates" [2] says we should assume 7c per kWh or so. That means our instance, with cooling and overheads, needs to include 0.01 kWh/hr x $0.07/kWh => $0.0007/hr of power and cooling costs.
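
Putting the same arithmetic in a tiny sketch (Python; the 5 W per thread, 2x overhead, and $0.07/kWh figures are the rough assumptions above, not measured values):

    # Back-of-envelope power cost for one thread, using the rough
    # numbers from the comment above (assumptions, not measurements).
    watts_per_thread = 5        # CPU thread plus its share of RAM
    overhead_factor = 2.0       # cooling, PSU losses, other overheads
    price_per_kwh = 0.07        # USD, rough US commercial rate

    kw = watts_per_thread * overhead_factor / 1000   # 10 W -> 0.01 kW
    cost_per_hour = kw * price_per_kwh               # kWh drawn in one hour
    print(f"${cost_per_hour:.4f}/hr")                # ~$0.0007/hr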

A single core with 4 GiB of memory on GCP at 3yr commitment rates (so we're focused on the long-term depreciation price) is $0.009815 + 4 x $0.001316 => ~$0.015/hr, or about 20x as much as the power.
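
Comparing the two figures directly (the per-hour rates are the list prices quoted above and will drift over time, so treat this as illustrative):

    # GCP 3-year committed-use list prices quoted above (USD/hr).
    core_price = 0.009815          # 1 vCPU
    ram_price = 0.001316           # per GiB
    instance = core_price + 4 * ram_price   # 1 vCPU + 4 GiB -> ~$0.015/hr

    power_and_cooling = 0.0007     # from the sketch above
    ratio = instance / power_and_cooling
    print(f"instance ${instance:.4f}/hr, {ratio:.0f}x the power cost")  # ~20x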

tl;dr: Power costs add up, but they are not even close to dominating the costs of cloud pricing.

[1] https://wccftech.com/amd-epyc-7h12-cpu-64-core-zen-2-280w-td...

[2] https://www.statista.com/statistics/190680/us-industrial-con...


Reaching those PUE numbers is pretty damn impressive though.

I think the misconceptions come from enterprise DC environments with traditional hot/cold-aisle designs, servers running way cooler than they need to, and peak power requirements leading to overly expensive power/cooling. PUE for these is closer to 2.0 than GCE's is to 1.0.

If you have something like GCE where you can control for all of those, i.e. run the DCs hot, eliminate transient peaks, source power cheaply, and use super-efficient cooling like evaporative or geothermal pumped water, then yeah, it's a completely different ballgame.
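
Since PUE is just the multiplier between IT power and total facility power, the gap is easy to put in numbers (a sketch; the 2.0 and 1.1 figures are rough assumptions, with Google's published fleet PUE sitting around 1.1):

    # Facility power = IT power x PUE; illustrative numbers only.
    it_kw = 0.005                  # ~5 W of IT load for one thread
    price_per_kwh = 0.07           # USD, rough US commercial rate
    for label, pue in [("enterprise DC", 2.0), ("hyperscaler", 1.1)]:
        cost = it_kw * pue * price_per_kwh
        print(f"{label}: PUE {pue} -> ${cost:.5f}/hr")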

It's pretty safe to assume all the hyperscalers are doing all of these things too; they aren't stupid. :)



