
Why would you use that instead of bare metal? You'll come out at >$4k per month with sustained use; for that price you can get the same power in bare metal.

Preemptible is a bit cheaper, but is 600GB of memory really worth it for short-running applications? By the time you've loaded everything into memory, your machine has probably been destroyed...

EDIT: I'm not sure about the exact CPU performance, but it should be quite close to what OVH offers here. With the same memory configuration and 2TB of NVMe, this still costs <$1,500/month (https://www.ovh.co.uk/dedicated_servers/hg/180bhg1.xml).




You can find a cheaper bare metal system with comparable performance for every instance type. That argument is not new, and it's not specific to high vCPU/memory instances.

If (monthly) price is the main concern, "the cloud" is probably not for you. Other people obviously place massive value on its benefits and pay the premium, and that is not suddenly going to stop for a new instance size.


In addition to that, those instances might be deployed for a short time where a lot of power is needed at once, and powered off after that. In that case the cloud might even be cheaper, as Google offers pricing by the minute (or is it by the second now?) whereas most bare metal providers bill by the month.


Per second, with a one-minute minimum [1].

[1] https://cloudplatform.googleblog.com/2017/09/extending-per-s...
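
A quick illustration of what that billing granularity means in practice (the per-second rate here is a made-up assumption derived from the ~$4k/month figure upthread, not a quoted price):

    # Per-second billing with a 60-second minimum, sketched in Python.
    RATE_PER_SECOND = 4000 / (730 * 3600)  # assume ~$4,000 per ~730-hour month

    def cost(runtime_seconds):
        billed = max(runtime_seconds, 60)  # jobs under 1 min bill as 1 min
        return billed * RATE_PER_SECOND

    print(round(cost(30), 2))        # ~0.09 -- a 30s job is billed as 60s
    print(round(cost(3 * 3600), 2))  # ~16.44 -- a 3-hour burst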


"Most bare metal providers bill by the month." Maybe you mean dedicated hosting providers?

Any provider letting you spin up bare-metal through an API almost certainly bills at much finer grain than monthly, although they may quote monthly prices to make it easier for customers to assess cost.


There is overhead for space, reliability, networking, and maintenance with bare metal. If you have an existing datacenter the marginal costs are tiny, but if you are cloud-only, or not planning to need these beasts for years on end, the net cost is much lower if you just harness the economies of scale provided by GCP, AWS, et al. As for OVH, the amount you'd save in networking between that box and the rest of your infrastructure by having everything on one cloud provider probably pays for the difference.

Add in flexible network storage options and integration with existing security infrastructure. You are thinking too small by comparing a single box to a single box.


You are thinking too big by talking about having an existing datacenter. Many companies colocate anywhere from 1U to a private room with many racks in a shared datacenter.


And most people that have a tiny part of a shared datacenter still call it their datacenter.


I used a GCE instance to test some image processing software I wrote a while ago (it runs on a very large dataset). I configured a 64-core machine with 128GB of memory. It ran perfectly, although it cost about $200 to run the test for a day.

Sure, it wasn't the highest performance per CPU, but I didn't have to buy the bare metal, I can scale up the number of cores if need be, and I can fire one up whenever I want one.


You do realize that it was not actually a 64 core machine?


I wonder why you were downvoted. A 64-core machine would have 128 vCPUs.


Not necessarily. Depends on the CPU architecture and whether hardware threading is enabled.


In the context of GCE, a vCPU is documented specifically as a single hyperthread.


For the price of 2-3 days you should be able to get a dedicated OVH server for a month. The 64 GCP cores are 32 real cores, so monthly rent of $600 should get you there.
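
Rough breakeven math, using the numbers in this subthread (the parent's ~$200/day GCE test and a ~$600/month OVH box; treat both as approximations):

    gce_per_day = 200    # parent's reported daily cost on GCE
    ovh_per_month = 600  # approximate monthly rent for a comparable OVH box
    print(ovh_per_month / gce_per_day)  # 3.0 -- past ~3 days/month, dedicated wins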


For one reason, if you only need the one system, you almost certainly need to compare the cost to owning and maintaining two.

If you can't tolerate the server being down for longer than the lead time it takes to get a new one, you need to already have one on standby. The lead time is probably at least a couple of weeks, but there are no guarantees, since you're depending on vendor availability and hundreds of other things out of your control.

Depending on what you're running on it, you'll also probably want to test software upgrades and have fallback plans when you deploy.

The thing the "cloud" version gets you is zero lead time, along with the ability to spin up a second instance (or ten, if you want) while you deploy a new version or just want to do some testing.


Maybe you need the capacity only for some hours here and there?

If all your data and other processing is in cloud A, moving some processing to B might not be feasible (moving lots of data takes time, and security requirements may complicate the setup).


One of the biggest data-lockin factors is network I/O costs. These have been kept artificially high by all cloud providers and act both to deter import/export and also to subsidize other functionality.


Yeah, you're not kidding.

I run semi-bandwidth-intensive applications, and DigitalOcean and Lightsail are actually better deals than EC2 for the amount of bandwidth: $5/mo for 1 TB on DO/Lightsail vs $90 for 1 TB on EC2.

We use a mix of dedicated hardware and DO/LS to meet our needs as bandwidth on the major cloud providers was just too expensive.
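
For anyone wondering where the $5 vs $90 gap comes from, here's the arithmetic (assuming EC2's ~$0.09/GB egress tier, which is roughly what it was at the time; DO/Lightsail bundle the terabyte into the base plan):

    gb = 1000                    # 1 TB expressed in GB
    ec2_egress = gb * 0.09       # ~$90 to push 1 TB out of EC2
    do_plan = 5.00               # 1 TB included in the $5/mo plan
    print(ec2_egress / do_plan)  # ~18x difference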


There are providers that offer unlimited bandwidth that HN users have tested: https://news.ycombinator.com/item?id=14247795

> notamy: I have an application on OVH (on the USD $3.50/month plan) that pushes/pulls >10TB/month

That entire discussion is recommended for anyone looking for a cheap VPS.


Yeah, OVH is great; I highly recommend them. We use their cheap dedicated servers via Kimsufi and SoYouStart at a few locations.

I seem to recall a friend having stability issues with their VPSes several years ago, so we stuck to the dedicated stuff from them, but it's been extremely good especially considering the price. Have you had good stability with their VPS services?


Unfortunately, all I can do right now is point to the anecdata of others, as in the link above, plus additional pointers within that project [1][2]. If you have time and can share any more details on your experience, I would tremendously appreciate it.

I am debating starting a Twitch -> YouTube video stream duplicator/archiver that would initially make money by auctioning available capacity, with the long-term goal of being acquired by Twitch, since their integration is so unreliable.

[1] OVH 10TB traffic throttling https://github.com/joedicastro/vps-comparison/pull/25

[2] Scaleway truly unlimited https://github.com/joedicastro/vps-comparison/issues/9#issue...


I'm doing a similar (but different) kind of thing going from Twitch to YouTube. At the moment, Google Cloud Platform doesn't charge for bandwidth egress to their own services, so YouTube uploads are free if you use GCP.


Thanks for the specific tip! Link for the lazy: http://gamebot.gg/ A Show HN would probably do well (with a bit of behind-the-scenes in the comment), as would in-depth blog posts if you're doing machine learning.

I actually tried the Hearthstone one and the very first clip in the current example 'Greatest Clips' (BJwDyxrplpo) appeared to miss the actual action (clicking "Disenchant" - which may actually have been the point since it could have been just a tease) but the rest of the clips seemed complete (and interesting).

I've thought about stream-jumping/recording based on simple indicators like increases in chat comments, viewers, followers, etc. How much of this could be built off Twitch's own 'clip' functionality (whether initiating clips yourself or aggregating the manual curation of others -- neither of which AFAIK has an API right now) and collecting them later? Separate note I'm trying to hide in this paragraph: don't overfit if you want to apply this tech to other streaming sites where real money is flowing (aka NSFW).

Personally, I don't care so much about specific games on Twitch (except Street Fighter, which gets relatively little love, but your videos are a real time-saver) as about personalities. It might be worth offering this service to them, focused solely on collecting their highlights. I had a tough time with the non-English streams, but I'm not sure what options you have there. I'm also interested to see how this will turn out for you using Twitch content if they notice that what you're doing is catching on. Twitch seems to be leaving a lot of low-hanging fruit behind for others to capitalize on.

Feature-wise: more playlists, maybe monthly and/or collecting the highlights of the highlights, based on the most comments/views/thumbs-up on previous YouTube videos. If there were a way to incorporate chat, you should, since most streamers don't include it in their videos. https://github.com/PetterKraabol/Twitch-Chat-Downloader

Bug-wise, it seems like something is going wrong with the links at the end of this video: approx. 30 seconds of moving images but no links, in Firefox with ad block [disabled as legacy]. (8ql3id1lJoM, ilkKuvuna10)


Ahh, you found it! No machine learning at the moment, just the Twitch clips API. Machine learning would be very helpful for some problems, however, specifically to weed out clips in which the broadcaster specified the incorrect game.

You're right about offering the service to the streamers - that's definitely the way to go to make a business out of it and it's something I've considered. However, I was mostly interested in doing the project for fun, and for some passive income, and making it a service would definitely not be passive.

The clips API returns language information about the clips, which you can use to filter them. Before that, I had to manually maintain a blacklist of non-English streamers.
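
For reference, a minimal sketch of that filter against the current Helix Get Clips endpoint (my setup used an older API; the credentials below are placeholders):

    import requests

    HEADERS = {"Client-Id": "YOUR_CLIENT_ID",              # placeholder
               "Authorization": "Bearer YOUR_APP_TOKEN"}   # placeholder

    def english_clips(game_id):
        resp = requests.get("https://api.twitch.tv/helix/clips",
                            params={"game_id": game_id, "first": 100},
                            headers=HEADERS)
        resp.raise_for_status()
        # Each clip in the response carries an ISO 639-1 "language" field.
        return [c for c in resp.json()["data"] if c["language"] == "en"]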

I do monthly highlight videos, but they're solely based on clip views on Twitch - it doesn't use YouTube analytics, which I'm sure would improve the videos.

It is a cool idea to include chat - another thing I've considered but haven't implemented, though I've noticed some Twitch highlight channels (that do manually edited videos) do it. Thanks for the link to the downloader.

The links at the end of the videos are tricky - there's no API for that, so currently they're populated by a Firefox macro on a desktop that's supposed to run every day - looks like there's an issue with it running! The better version would be to use a web scraper or headless browser to automate those clicks via the render server. That's what I'm supposed to be working on next, in fact...
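
Something like this is the idea, sketched with Selenium and headless Chrome (the selector is a placeholder; YouTube Studio's real DOM would need inspecting, and there's the login problem to solve too):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    options = webdriver.ChromeOptions()
    options.add_argument("--headless")          # no display needed on the server
    driver = webdriver.Chrome(options=options)  # assumes chromedriver installed

    driver.get("https://studio.youtube.com/")   # assumes an authenticated session
    # Placeholder selector -- stands in for whatever the end-screen editor uses.
    driver.find_element(By.CSS_SELECTOR, "#end-screen .add-link").click()
    driver.quit()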


YouTube automation seems like a relatively untapped (if niche) market.


These are still 4 to 5 figure prices. Larger companies really don't care about these tiny fees, especially compared to the licensing costs of the software running on these servers. It gets the job done faster and easier, so it's worth it, especially when everything else might already be in Google Cloud.


Here's a server with 512GB RAM and 40 cores for $1,850:

http://www.ebay.com/itm/122593732313


Cool. So, where do you put that server? Does it have sufficient power, AC, generators, UPS? How much does that cost per month?

Who monitors that machine and does preventative maintenance, etc.? If a part looks like it's going to fail, where do you migrate your workload so you can take that server offline for repairs? (You need multiple servers.)

Since you have multiple servers, how much does your 40Gb networking cost (with 100Gb uplinks) so you aren't constrained by the network? And what kind of storage network do you have, so that you can live-migrate running machines between hosts?

Lastly, if you co-locate the server somewhere, what does it cost for multiple redundant internet connections to the facility? And where is your failover facility, that is at least a few hundred miles away?


I think this is a very important point and one that many folks don't fully appreciate:

The total cost of ownership of a computing asset is several times greater than the cost of the actual asset.

Think of it this way: a dog can be obtained for a very nominal cost (or free) but the cost to house, feed, entertain, and provide healthcare for is non-trivial.

It's not unheard of for just the cost of deploying a new device into a large organization to be something like EIGHT TIMES the cost of the actual asset. That's just to get the hardware deployed, and NOT the cost of keeping it running.

Cutting down on TCO and streamlining the deployment of resources is a big part of the sell for cloud deployments. Particularly for computing assets that may otherwise spend a lot of their time idle.


Right, and all the bare metal providers like SoftLayer are pro bono orgs.


SoftLayer's network is crap compared to Google's and AWS's. That being said, at my previous company we used a combination of SoftLayer (for dedicated machines) and AWS for cloud. There is definitely a use case for each, but as Nrsolis mentions, there is additional cost in things beyond the initial hardware itself.


I'm not a SoftLayer customer, but it's hard to imagine a network crappier than AWS's.


At least with AWS you get placement groups, which can help a lot. With Softlayer we saw entirely too much packet loss on a regular basis, and they try to upsell you on things to "fix" it.


The default port speed was 100Mbps as of 10 months ago.


It's also a very old CPU generation, and slower memory.

They're great boxes for cheap on-prem ETL clusters.

I prefer Dell R810s over those HPs because they're 2U and have better power consumption with the same specs.


Thank you, I was looking for this comment. That is the value of the cloud: the reduction in TCO, the predictable pricing, up-time guarantees, and bandwidth availability.


...ok? As I just said, we (as a company) don't care about a few thousand per month in exchange for letting Google Cloud handle everything for us. We're definitely not interested in buying some used server from eBay and then figuring out where to run it.


How old is that server? I do not see ECC RAM, if you care... Also, are the other pieces of the server going to fail anytime soon?

How loud is it? How much energy does it consume while running? How hard is it to configure and keep running? What kind of firmware does it have and will it be a problem updating?

These are all the questions I would have before buying a beast like that...


People tend to ignore even quite big costs if they are spending the company's money, not their own :)

Also, this machine you link to might cost $1,850 but:

- it is used, not new, so it can break at any minute, and you don't have any warranty.

- on GCP, just $2,100 per month buys you a similarly specced machine AND peace of mind.

- running this machine 24/7 incurs a significant electricity cost.

EDIT: - $2,100 is probably less than the salary you would pay an IT technician to maintain your machine(s).


The whole "electricity costs a lot" argument is getting tiresome. Where I live, electricity is 11 cents per kWh. That means even if you run that machine full tilt 24x7 (which you won't), and even if it draws 1kW (which it won't), it's still only ~$79/mo in electricity cost.
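
The arithmetic, for anyone who wants to plug in their own rates:

    kwh_price = 0.11   # $/kWh where I live
    draw_kw = 1.0      # deliberately pessimistic sustained draw
    hours = 24 * 30    # a 30-day month
    print(draw_kw * hours * kwh_price)  # 79.2 -- ~$79/mo worst case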


> electricity costs a lot

Well, it's not just electricity. To run your server you at least need a network, a UPS, and probably other things. This stuff gets especially ugly when you want a network with many servers; most dedicated server providers charge a ton of money for interconnecting them. OVH at least provides a vRack for dedicated servers, but it's not always free.


Power in a lot of places is 2-3x that, and for a naively designed data centre you can double that to include the cost of air conditioning (yes, you can do a lot better, but at a small scale basic AC is going to equal your workload's draw).


Data centers aren’t built in such places. And for just a few machines a simple building HVAC will do fine.


Then you would need multiple of them (for redundancy) if you use them in a production environment, and then comes the additional maintenance of hard drives and so on. So what would the end price be for bare metal?


What if you don't have sustained usage? What if you are developing software for scientific data processing, and you usually work on small data sets for testing, and once in a while you have huge computing needs?


>What if you don't have sustained usage?

Well, that's where cloud servers are great.

You'll need to break out Excel and calculate whether it's worth it with regards to usage.


Isn't that what Beowulf clusters are for?

It's never cheaper in the cloud.


No. Scientific workloads rarely scale well when they have to do a lot of communication over a commodity network. You're also assuming people would rather have a bunch of machines lying around that they had to pay for upfront than just pay for an occasional single instance. You're oversimplifying things if you literally think it never makes financial sense to use the cloud. It's the same as saying it's never cheaper to have health insurance. Objectively, that has to be true on average, yeah, but then why do so many people buy health insurance? You're managing complex risk at the expense of overhead. Even when it sucks, it's not feasible for everyone to keep enough cash lying around for when they get hit by a bus.


We used a config not quite this big for a Nominatim database rebuild. It takes weeks on an underpowered server, but hours (or a day?) on something with enough resources.

Once rebuilt, using the database is fine on a normal server.


If your data is in Google/AWS cloud, it's expensive to process it outside Google/AWS.


There are quite a few problems where you need a lot of memory and CPU performance for only a relatively short amount of time, like a few hours per day or even just a few hours per week. Forecasting or complex optimization problems, for example.

In these cases the amount of money you spend on hardware, virtual or otherwise, is negligible. Depending on what you do, it might just as well be a rounding error.


Not only that, but due to the usual NUMA mismatch, additional page tables, IOMMU, poor storage connectivity/sharing, etc. between the bare metal and the VM, the VM is likely losing a significant chunk of performance versus bare metal.

Frankly, I have a hard time understanding why the convenience of being able to call an API to get a VM (vs using an API to get bare metal) continues to be an advantage. I am reminded of the Reddit articles about all the effort they went through to re-optimize their app (by batching queries) for the longer database latencies at AWS... It's like they never considered that all that work might also apply to bare metal and save them even more money...


The problem with this isn't the price of the computing power / memory; the killer is the traffic cost in the cloud, which is going to bankrupt you before this thing is even at half utilization.


> >$4k per month with sustainable use, for that price you get the same power in bare metal?

Not so sure about that. Plus, a hoster who provides such a machine as bare metal wants a setup fee, needs time to set it up, and requires a minimum contract duration much longer than one month.

I guess there are not many hosters who have such a beast in stock as bare metal and available within a few minutes (are there any hosters at all?); they will order such a machine themselves and you will wait at least a week.


You could order 16 of these: https://www.hetzner.com/dedicated-rootserver/px121-ssd

Minimum contract length: 1 month; total cost (including setup): $4,384 (ongoing month-to-month cost thereafter: ~$2,191.20).

For that you'd get an aggregate total of 4TB RAM, 7.6TB of SSD storage, 96 real Intel E5-1650 v3 cores (or 192 vCPUs), and 800TB of bandwidth.
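
Spelling that aggregate out (per-server figures are just the totals divided by 16; exact Hetzner pricing may have shifted):

    servers = 16
    monthly_each = 2191.20 / servers         # ~$136.95/mo per PX121-SSD
    setup_each = (4384 - 2191.20) / servers  # ~$137.05 one-off setup each
    print(servers * 256, "GB RAM")           # 4096 GB total
    print(servers * 6, "cores /", servers * 12, "threads")        # 96 / 192
    print(servers * 0.48, "TB SSD /", servers * 50, "TB traffic") # 7.68 / 800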

Sprinkle with terraform/ansible/k8s/docker and you have a resilient, massively powerful compute cloud with no long-term obligation that's about half the price of GCE if you keep it around beyond 30 days. Or, another way to look at it: if you needed such a platform for two years, your second year would be free compared to GCE.

One major issue with this approach (versus GCE's "all in one" box) could be network performance bottlenecks depending on what task(s) you were using such a cluster for.


Bare metal, 48 HT cores for $1,000/mo: https://www.packet.net/bare-metal/servers/type-2-virtualizat...

So we've established that AWS is far more expensive now that TCO is taken into account in both cases.


There's IBM/SoftLayer for that. Their hourly bare metal offering goes up to just 256 GB of RAM, though. More than that and you'll have to make a monthly commitment.


> https://www.ovh.co.uk/dedicated_servers/hg/180bhg1.xml

Yeah, the main reason to go with Amazon in this case is if you only need the box for a few hours (i.e., you're doing data science or similar). For long-term, high-load use, bare metal, even managed bare metal like that, is almost always cheaper.


Everyone jumps to OVH for dedicated server price comparisons, but what about pricing for similar US coastal datacenter locations? Are there even US dedicated server providers with pricing lower than “call us?”

I totally believe that OVH is cheapest for European companies serving European customers, but that’s apples to oranges when we’re talking about American cloud providers.


We host at IBM/SoftLayer, and our bare metal pricing comes in at around 33% of these high-mem GCE instances.


OVH Montreal is 8ms RTT from NYC.


>"Why would you use that instead of bare metal?"

I would think lead time. Getting the quote from your hardware vendor, getting the PO approved by finance, getting the box built and shipped, getting it racked and stacked in the DC: this whole sequence can easily take longer than a month.


You can rent a bare metal server from Codero and get it provisioned in about an hour. Prices from about $100 to $1000 a month. At the high end, you get roughly what Google is offering here.


I had a big simulation to run that required lots of memory and lots of cores. I rented a machine for the ~10 hours it took and happily paid the $30-40 that was charged.

Those machines have a lot of value for specific workloads.


There's the high-CPU variant with only 86.4 GB of RAM.


But that one costs 6.4x the 16-core variant for 6x the cores. Are there any applications where you're heavily dependent on having all cores in one machine?


Anything where the workers have to synchronize their work with each other often. Having everybody talking to each other over the network quickly kills performance.


Yes.

XGBoost will use all the cores you throw at it, and despite the recent work on GPU versions, most of the time CPU cores are best.

I'll commonly run a model for 48 hours on an i7. I'd love to be able to try more models.
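
If anyone wants to reproduce that setup, here's a minimal sketch (n_jobs is the knob in recent XGBoost releases; older ones called it nthread, and the dataset here is a stand-in):

    import xgboost as xgb
    from sklearn.datasets import make_regression

    X, y = make_regression(n_samples=100_000, n_features=50)  # stand-in data
    # n_jobs=-1 tells XGBoost to use every available CPU core.
    model = xgb.XGBRegressor(n_estimators=500, n_jobs=-1)
    model.fit(X, y)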





