Hacker News
Ask HN: Imagine a world with 1Tb/s internet. What would change?
43 points by dinobones 10 months ago | 90 comments
For the past ~10 years, it seems like internet speeds here in the US have stagnated. Probably 80% of places I've been to or lived have been 100Mbps-500Mbps.

I'm curious what things would look like if the internet was suddenly 1000x faster.

Would there be new apps we could build? Would it enable novel use cases?




Scoping this to users, it wouldn't change much at all. For data transfers you would also need storage, memory and CPUs that can handle it. For streaming it also doesn't change much since even a 4K HDR stream would work on a legacy VDSL2 system. Same goes for remote compute.

For non-users (datacenters, companies, cluster systems etc) that might mostly help with distributed storage, but again, you'd also need all the other things, because distributing anything like that also means every piece of the puzzle needs to be able to handle it. Within a datacenter or a public cloud, even top speeds of 400Gb/s (which is less than half of the thought experiment) are at such a high tier that they aren't useful for individual applications or nodes.

Something that would actually make an impact after you get to around 2Gbps would be lower latency, lower jitter, net neutrality (including getting a full connection with no administrative limits), and more peered routes. More bandwidth doesn't really do much, especially since you can't use that bandwidth anyway if the number of connections, the latency and the computing equipment don't scale with it.

When you have low enough latency combined with high enough bandwidth you can start to actually get new apps and also develop novel use cases (imagine a CPU-to-CPU interconnect that can span continents). But the speed of light (or some more exact latency-per-distance quantity) prevents that.

Beyond the likes of BitTorrent and Gnutella we're not likely to see network-based ideas that are currently impossible due to limits on the average speed. Perhaps the real problem right now is the lack of universal availability of reasonable connectivity.


Latency and peering are huge. I'm amazed at how poor internet latency consistently is. Assuming light in fiber propagates at around 0.6c, one would expect that Cincinnati to Chicago (~450 km) is around 2.5 light-milliseconds one way, but actual pings take me at least 10ms, usually more like 20.

Worse, my home ISP has pretty bad peerings, so for example, traffic from my home to a data center about thirty minutes drive away (well under 50km as crow flies) ends up going via Ashburn and Washington, which results in RTT in excess of 30ms.

There is so much more room for improvement in latency. Even pinging my friend's house (under 10km away) on the SAME ISP (with a total of 4 traceroute hops, including start and end) takes 3 to 4ms. At 0.6c, even assuming we're both wired to the core of downtown, only 0.4ms of our RTT is actually spent on speed of light. The other ~3ms is spent on packet switching and protocol conversions.

Admittedly, all of these latencies are sufficient for most things that I want to do on the internet, and most users don't start seeing issues for most tasks until they're up in the 100ms+ range. That said, I think there's still more to gain here than from straight up throughput improvements.
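
A rough back-of-the-envelope of that math (a minimal sketch assuming ~0.6c propagation in fiber and straight-line distances, which undercounts real cable paths; the "measured" figures are the rough pings quoted above):

    # Theoretical minimum round-trip time over fiber vs. rough measured pings.
    C_KM_S = 299_792.458      # speed of light in vacuum, km/s
    FIBER_FRACTION = 0.6      # assumed propagation speed in fiber as a fraction of c

    def min_rtt_ms(distance_km: float) -> float:
        """One-way distance -> lower bound on round-trip time in milliseconds."""
        one_way_s = distance_km / (C_KM_S * FIBER_FRACTION)
        return 2 * one_way_s * 1000

    for name, km, measured_ms in [
        ("Cincinnati-Chicago", 450, 20),
        ("home to friend, same ISP", 10, 3.5),
    ]:
        floor = min_rtt_ms(km)
        print(f"{name}: physics floor ~{floor:.2f} ms RTT, "
              f"measured ~{measured_ms} ms (~{measured_ms / floor:.0f}x the floor)")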


Latency is also where the bottleneck for really novel mass-market network applications sits, most of the time. Just looking at examples like https://gist.github.com/jboner/2841832: for a network to be usable as a somewhat fast SSD, you'd need a maximum latency of 0.15ms. And at that point all we have gained is a network that can somewhat be used like a 10 year old SSD.

To then get to a really relevant latency we have to get to memory reference speeds, 100ns at most. That's 0.0001ms. But with a 0.6c limit, we're not going to get there.
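
To put that 0.6c limit in numbers (ignoring switching, serialization and protocol overhead entirely, so these are generous upper bounds):

    # Farthest a server can be if the entire latency budget went to propagation at 0.6c.
    C_KM_S = 299_792.458
    FIBER_KM_S = 0.6 * C_KM_S   # ~180,000 km/s

    def max_one_way_km(rtt_budget_s: float) -> float:
        return FIBER_KM_S * rtt_budget_s / 2

    for name, budget_s in [("SSD-class (150 microseconds)", 150e-6),
                           ("RAM-class (100 nanoseconds)", 100e-9)]:
        print(f"{name}: server at most ~{max_one_way_km(budget_s) * 1000:.0f} m away")
    # SSD-class: ~13,500 m (same city at best); RAM-class: ~9 m (same room)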

Like the other comment, having everyone get a much better minimum speed (say 500Mbps) and low base latency (say 1ms at most to the first IX you can get to) would make a huge difference. But to reference yet another comment, you'd probably just get 4K video ads and larger frameworks with less optimisation since the bandwidth takes care of it anyway.


Haha this made me laugh. If our ancestors could see us commenting… “it took 4 milliseconds… should be faster”. So funny, I don’t understand how you even notice the difference with 4ms vs .4!


Even myself from the 2000s would be amazed by the bandwidth that we can muster today. I am just old enough to remember waiting all day to download an IE update over dial up, and I'm definitely old enough to remember a 700MB torrent taking a whole day to download. Now I download and install a 100GB game from Steam in half an hour.

But in that same time, where bandwidth available to me has gone from <100 kbps to >1 Gbps, ping has only gone from ~100ms to ~10ms.

The legacy telephone system was actually pretty good at latency, as long as you didn't end up on a crappy GEO satellite link, which dial up never would. After all, voice is actually a pretty latency sensitive application.

That said, our older ancestors would indeed be amazed at sub-second communications. Heck, even airmail latencies would seem insane to someone from before the first world war.


I think there would be diminishing returns after a couple dozen Gb/s, but my almost 10 year old computer can easily handle over 10k web requests/second (including database queries that do several joins), or ~40k serving cached pages from nginx. Phoronix benchmarked the 9950X at over 200k requests/second with nginx, so I'd expect you could host something like reddit (sans video) at home with sufficient uplink if you were willing to take the legal risk and you had a new computer or two.


There are diminishing returns after 1Gbps.

I previously had 10Gbps symmetrical fiber[1], and it was simply impossible to saturate without running a speed test against another customer.

Servers were generally not fast enough to make use of that speed. XBOX downloads would sometimes peak near 1Gbps, but not sustain.

The main issue, I suppose, is that disk drives are not fast enough, or at least, not fast enough relative to their size. You can have 500MBps (4Gbps) drives, but your cloud provider has 50-100 customers at least accessing that drive, so your share of the straw is fairly small.

More pedestrian uses can't possibly benefit from such bandwidth either. 4k Blu-rays max out at 128Mbps. I suppose we could have 3D 8k120 streams taking 4Gbps (128*4*4*2). Maybe you can just go uncompressed, so 382Gbps (2*7680*4320*120*48), but that seems like it probably causes more trouble than compressing/decompressing, since it will be rather hard to buffer, and small hiccups will lose huge amounts of data.
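
The arithmetic behind those figures, for anyone who wants to poke at it (scaling the ~128Mbps UHD Blu-ray ceiling by pixel count, frame rate and stereo, plus the raw-pixel case at the 48 bits per pixel assumed above):

    # Rough video bitrate estimates.
    uhd_bluray_mbps = 128                          # 4K UHD Blu-ray ceiling
    scaled_8k120_3d = uhd_bluray_mbps * 4 * 4 * 2  # 4x pixels, ~4x frame rate, 2 eyes
    print(f"compressed 3D 8k120: ~{scaled_8k120_3d / 1000:.1f} Gbps")   # ~4.1 Gbps

    raw_bps = 2 * 7680 * 4320 * 120 * 48           # eyes * pixels * fps * bits/pixel
    print(f"uncompressed 3D 8k120: ~{raw_bps / 1e9:.0f} Gbps")          # ~382 Gbps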

In short, I think it wouldn't be substantially different than having a good 1Gbps symmetrical internet. It might allow Stadia like experiences to be really good, but those still have latency issues.

[1] https://www.init7.net/en/internet/fiber7/


You shouldn't actually need that much disk bandwidth to run something like a reddit though. 192 GB of RAM (which gaming motherboards can support) costs under $1k and should be enough to keep several weeks worth of threads warm. On the other hand, a large thread might need to transfer ~100 kB, so at 10k requests/second, that's ~8 Gb/s. I can't find stats on how many page views/second they get, but I imagine it's more than 10k/s and less than 100k/s, so presumably somewhere between 10Gb/s and 80Gb/s you have more than you need for that use case.
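
A minimal sketch of that estimate (the per-thread size and request rates are the same guesses as above):

    # Uplink needed to serve a reddit-like site from home, under those assumptions.
    page_bytes = 100_000                 # ~100 kB for a large thread
    for req_per_s in (10_000, 100_000):
        gbps = page_bytes * 8 * req_per_s / 1e9
        print(f"{req_per_s:>7} req/s -> ~{gbps:.0f} Gb/s of uplink")
    # 10k req/s -> ~8 Gb/s; 100k req/s -> ~80 Gb/s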


If you think of the distribution of network speed across the population, improving the lowest quantiles has the most effect.


We could have smartphones that are essentially streaming their UIs from a cloud instance that's running the actual OS/rendering for the device.

That approach would allow for thinner, cheaper devices with longer battery lives since their hardware is only responsible for processing the video stream from the cloud and rendering it on a screen.

Perhaps it's not a direct result of 1TB/s but such internet speeds would likely have the second order effects of providing extremely robust streaming infrastructure that enables such a use case.


I understood the hypothetical as “same internet connectivity as today but every connection is 1 TB/s”. No way anyone would want streaming UIs with that, your device would become unusable every time you don’t have network and latency would almost always be annoying.


Maybe, but what if in return smartphones weighed and cost 1/10 as much?


I don’t think the weight of cell phones has been an issue since the 80s or 90s. At some point a phone’s size and weight are about what’s comfortable to hold. Being too thin or too light can start to create new problems.

If the UI was streaming, I think the up front hardware cost would be replaced by a monthly fee required to run the servers required to send the feed, which would probably be more expensive over the life of the phone.

The hardware could also not be a completely dumb thin client. With the cameras, there needs to be enough inside to handle focus, capture, and a buffer to store the photos/videos to upload. Would there also be lag in the viewfinder, with the camera having to send the live image up to the server and back down to the UI, or would this part switch over to a bare-bones local UI? Maps would need GPS in the phone to tell the server where it is. Various accelerometers would still be needed, and then I guess the local accelerometer would need to make a call to the server to tell it to rotate the screen… even with a fast connection, I’d have to imagine some lag there. Biometric unlock would also need some local hardware. I’m sure there is much more.

Doing all this from a remote system doesn’t seem practical. So much still needs to be in the phone itself that it seems impossible for the benefits to outweigh the costs.


The utility of a phone like that is basically zero if I can’t use it anytime I want. I wouldn’t even take that for free if I had to give up my current phone in exchange. Maybe it’s just me though.

Also, 1/10th size is totally unrealistic just by streaming. Just the screen is probably already 20% or so of power consumption even at very low brightness, and that’s not going to change due to streaming. And the rest of the system doesn’t disappear but just gets more lightweight, so if you reduce that to 1/3, you still have roughly 50% of the current total power consumption, which means you’d still need at least half of the current battery size for the same battery life. You would probably end up with a phone that’s half the thickness of a current smartphone.
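
Roughly, taking the 20% display share and the 1/3 shrink of everything else above at face value (both are rough guesses):

    # Power budget of a phone whose UI is streamed, under those assumptions.
    screen = 0.20                        # display's share of today's power draw
    rest = 1.0 - screen                  # SoC, radios, etc.
    streamed_total = screen + rest / 3   # everything except the display shrinks to ~1/3
    print(f"~{streamed_total:.0%} of current power draw")   # ~47%, i.e. roughly half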


Please, don't assume your situation is universal. I'd take this smartphone immediately. I'm never far from a 5G tower. I used to have a speedtest script on my Android before I bought an iPhone, and the average speed I had at any time was over 100 Mb/s. Only 5 days of a year I was below 50 Mb/s - but still connected, every day.

> Just the screen is probably already 20% or so of power consumption even at very low brightness

There are other kinds of displays, some of them don't need any brightness at all, but even those that do are getting much more efficient. An old/cheap IPS is incomparable to what a Samsung AMOLED can do, and they're not stopping there.

I use my phone very often - much more than 5-6 hours of screentime - and display is only around 10% of the power budget. The 5G connection usually takes more as I'm always on some call.

I'd gladly take a phone that's just the same size and weight as Samsung S9, but has a giant battery and mainframe-level performance. I'd gladly pay the price of having to be connected, because I have to be and am connected anyways.


People buy $30 prepaid Android devices that are hot garbage. There is absolutely a market for phone VDI.

Plus, we’re in a thought experiment with pervasive terabit cellular. I’m typing this on vacation in a major European city on 3G. Indulge the imagination with the idea that we’ve liberated some building-penetrating bands and have improved coverage as well! ;)


I would choose such device only if 100% availability comes along with the high bandwidth and low latency.


But video streaming is typically very battery intensive tho


That's because of the necessary compression/decoding. This kind of internet speed could stream a raw signal directly into a display driver.


You assume latency will decrease with bandwidth but that's not the case


sounds like a privacy nightmare.


The first thing I'd consider is what changed when we went from 100kbps to 100Mbps.

Now, we share complete video files and music files, whereas before we shared vector-like files such as Flash and MIDI.

What are we doing locally today that could not be sent over our current bandwidth? Is it something that will affect telepresence, like all the 3D data needed to recreate a realistic environment in real time? Is it about more accurate control of remote objects, like drones and robotic vehicles? Maybe it will enable remotely connected computers to be more efficient clusters, taking advantage of unused cycles during off-peak hours.

I think the biggest impact isn't going to be what happens at the faster speeds that happen in best-case scenarios. It's what will happen when mobile devices in areas with poor reception can achieve 1Gbps reliably and consistently.


Websites would be heavier, everything would expect a faster internet connection and using software without a fast internet connection would be worse.

Probably software wouldn't get better, probably we wouldn't solve the real big problems of our time either.


Apple would make 16 GB the standard SSD size in all laptops.

Fewer services would run locally. The typical user would probably not even care (or know) if their photos were stored on the phone or in the cloud.

I don't think that much would change at first, since everything else would become a bottleneck. 16K HDR streaming? Sure, but how many people would have a 16K HDR screen? Lossless music streaming? Already here, more or less, and does not require 1Tb/s.

Over time, everything would change of course, probably for the worse (for users). Mega corporations would be in possession of all user data, and use it for AI training, ad targeting and all kinds of data extraction we can only dream of.


> The typical user would probably not even care (or know) if their photos were stored on the phone or in the cloud.

This is already true. I use iCloud Photos and I have local thumbnail and some local full res stuff that I looked at, but the software manages all of that. I don’t actually know what’s local. When I want to look at a photo or video, it downloads it on demand. The only time I ever notice it is with videos on cellular in rural areas.


Those 16k screens would need to be very large and very close to the viewer for the human eye to be able to resolve the individual pixels.


Right now we have a common architecture where users upload files to a central service, and that central service then forwards the content to other users. This is true of services like Youtube, Zoom, etc. With 1Tb/s content creators could serve the content from their own network. This would allow for platforms that have much lower operating costs, and could offer much more generous revenue share. Perhaps a peer-to-peer agreement could occur, where different nodes in the network will cache and reserve each other's files to respond to highly viral content.

I would also disagree with the thesis that internet speeds in the US have stagnated. In 2014 I had about 80Mbps. Today I have about 1500Mbps. In west coast cities I see high end condos with access to speeds up to 7000Mbps. Even my friends in pretty rural locations in 'fly over' states have access to hundreds of Mbps with the latest federal grants to build fiber in rural areas. In one case I know someone that skipped from 52k to 200Mbps fiber, with cable internet never offered to his house.


> With 1Tb/s content creators could serve the content from their own network. This would allow for platforms that have much lower operating costs, and could offer much more generous revenue share.

We have PeerTube now, which does that. Works fine. Nobody uses it, because there's little "discovery". The centralization of YouTube allows people to find your cat video.


Centralization isn't a matter of pipe bandwidth alone but of other things: fault tolerance, replication, professional sysadmins, logging… Not sure how easy it would be to reproduce all those in a distributed system.


Internet speed is probably not even a top 5 reason why people don’t serve their own content. What a security nightmare


Web frameworks would be 1000x the current size.


Can't wait for the 10GB webpages that show you a news article.


Surrounded by autoplay infomercials.


The old, “Intel giveth, Microsoft taketh” axiom.


“Intel not giveth much, but Javascript Frameworks still taketh a lot”.


Chrome has entered the chat.


But just think of the performance improvements when you can write your blog posts with React in Rust inside WASM and have it compile to WebGPU code embedded in JS in a SVG data URL.


so you're saying chromium would use even more memory


Pretty much this. It was a huge disappointment for me to understand that even if I made a revolutionary discovery and doubled the yield of food crops, people would reproduce and the prices (and availability) of food would not change.


I don't think food scarcity/availability/price plays any appreciable role in developed nations birth rates but I'd be willing to consider evidence that it does.


I think price is a factor. It isn’t uncommon for someone to say, “another mouth to feed.”

When adding people to a family, food is probably the biggest non-negotiable cost going up.

I’d also expect to see a strong correlation between family size and the percent of meals eaten at home. Going out to eat quickly becomes unaffordable as family size grows.


Yes, but if you invented a system of crop rotation that you could play Doom on, now that would get some traction :)


Population is no longer food limited. Obesity, though...


Innovations like better video codecs/compression would stop.

It mirrors the current situation with RAM and Electron apps: websites would be bloated and unoptimized, and there would be GBs of JS/CSS for a single URL.

Realistically, I mostly want better connectivity across the globe (current ISP speeds are mostly for a given metro or country) and non-throttling connections that I can fully utilize.


16K porn becomes the new standard definition

I wish I was joking but take the current usage of the internet, and scale up each part. 1TB/s might enable new things, but it's more likely to enable more of old things.


High definition video might actually really improve zoom calls: https://blog.google/technology/research/project-starline/


Is the problem with video streaming really bandwidth? I thought storage was the bigger problem, so before storage doesn’t come down in price, it won’t change much, right?


A high quality 4k stream is about 20mbit/sec. 16k resolution would only have 16x the amount of pixels. Assuming bitrates scale proportionally to resolution, that would only be 320mbit/sec.

So if you want to max out a 1tbit connection, you would need something closer to 900k resolution.
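
Working backwards from the 20mbit/sec 4K figure (and assuming bitrate scales linearly with pixel count, which real codecs don't quite do):

    import math

    # What resolution would it take to fill 1 Tbps at ~20 Mbps per 4K stream?
    four_k_mbps = 20
    target_mbps = 1_000_000                   # 1 Tbps
    pixel_ratio = target_mbps / four_k_mbps   # 50,000x the pixels of 4K
    scale = math.sqrt(pixel_ratio)            # ~224x in each dimension
    print(f"~{3840 * scale / 1000:.0f}K horizontal resolution")   # ~860K, roughly the "900k" above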


If there's anything we're seeing with the AI revolution, it's that we get more of the old things.


Call me a cynic, but with every improvement the adtech and surveillance get stronger: 4K video ads everywhere, more analytics to analyse your environment (somehow with lots of AI on everything), your door lock might need internet, the metaverse might become a common thing, faster computers to make AI stronger (like how we needed a hefty rig to play Crysis but my dinky notebook can run Crysis 3 fine), etc. That being said, I have access to 1Gbps but I still use 150Mbps, because beyond 100Mbps I don’t see any improvement in my daily life.


Companies will lean even harder into software as service so that people can't pirate stuff. Microsoft will require you to boot Windows over the internet.


IMHO this parameter just tends to shuttle things towards agglomeration, i.e. entities with servers that can benefit from economies of scale.

I also believe we hit a practical wall on this that's observable by the success, or lack thereof, of game streaming.

In a world where there were a lot of savings to be unlocked, things like Stadia, Nvidia's GeForce Now, and Xbox's service would have been notable big wins.

They weren't, which has me firmly believing incremental speed past "everyone in my household can stream video at 4K when desired" is an expected end state. That's tantamount to saying "once people can see whatever they want, at a resolution indistinguishable from reality, without delay, there won't be mass desire for increased Internet speeds", which seems intuitive.

Anything requiring greater streaming bandwidth (ex. VR) is highly sensitive to latency, which may have also affected the game streaming use case.

If latency approaches ~0 ms (which requires colocation with peering providers), I could see this sort of bandwidth opening up AR a bit more by effectively reducing compute requirements in such a small form factor, but that's kinda it.


A lot of people here are extrapolating from current tech, not realising that “quantity has a quality all of its own.”

For one, ubiquitous terabit Internet would completely eliminate the need for local compute and storage in most form factors.

About 60 Gbps is enough for 8K uncompressed video! You wouldn’t need a GPU or a PC to put it in. Just run everything in the cloud and access it like a virtual desktop. This is already commonplace in large enterprise.

One issue with such virtual desktops is latency. Their disks are expensive to move around to follow users so the virtual PC doesn’t move and mobile users have to access them from far away. With terabit Internet a 1TB operating system disk could be moved to a nearby point of presence in about ten seconds. Alternatively the OS could start booting instantly from the remote disk and stream the rest in the background later.
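
The rough numbers behind both claims (a sketch assuming 8K at 60fps and 30 bits per pixel, and a fully utilised terabit link):

    # Uncompressed 8K stream, and moving a 1 TB disk image over a 1 Tb/s link.
    eight_k_bps = 7680 * 4320 * 60 * 30   # pixels * fps * bits per pixel
    print(f"8K60 uncompressed: ~{eight_k_bps / 1e9:.0f} Gbps")          # ~60 Gbps

    disk_bits = 1e12 * 8                  # a 1 TB operating system disk
    link_bps = 1e12                       # 1 Tb/s
    print(f"1 TB disk image transfer: ~{disk_bits / link_bps:.0f} s")   # ~8 seconds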

In other words, computing would be more like in Star Trek: there would just be this “ambient” compute you can interface with anywhere without thinking about data or device locality.


We could have secure computers, if we adopted capability based security en masse, but that's highly unlikely. I don't see us all running Genode or Hurd any time soon.

Because we, and the public, don't have secure computers, we're always under threat from any code we run, and any sites we use. This leads us inevitably to walled gardens, where we effectively outsource our security concerns. This has horrible consequences for Democracy, but we put up with them.

Until we fix the root issue, faster connectivity will just mean more content "consumed" by the public, and power concentrated into fewer and fewer hands.

---

On the plus side, if we all had our own secure servers, we could all have a Memex, as envisioned almost 80 years ago. We'd have our own copies of everything, and be able to spool them off for anyone as an act of sharing. We'd all have our own Library of Alexandria.

Of course, that doesn't sit well with the rentier class, so it's unlikely as well. 8(


So the interesting thing to put in here is that 1tb/s might on paper look like it'd make things more innovative, but it might not because of two things:

1) latency

2) congestion

Now, let's ignore power for the moment; that's a tricky thing. For example, if I could connect a laptop at 1Tb/s without draining the battery in 5 minutes, I could just not bother with a CPU, RAM or anything else locally. Just have a dumb terminal.

But

Latency is the killer here, as is congestion.

If your internet actually ran at 500 megs a second, with <10ms latency, you could offload a whole bunch of things. You could have network storage (as in NFS) where you would be able to load things instantly, up to 5MB in size. (5MB is about as much as you can download in the blink of an eye at 50 megabytes a second.)

If you look at some of the concepts for Windows 95, Microsoft wanted "networked" computers, based on files and applications rather than web pages. If you apply that to modern life, that's what 1Tb/s could get you.


I went from 500Mbps download/upload in my old condo to 50/20Mbps download/upload in my house. There are two noticeable effects: it takes slightly longer to download movies the night before I go on flights or long car trips, and it takes significantly longer to push updates to large docker containers to the cloud. Everything else is more or less identical for me.

Now, maybe there'd be some novel use case that would come up if everyone (or let's say 80%+) had 2gbps internet, but it's hard to imagine that that's the big constraint for much. Maybe something like virtual/augmented reality could do more heavy processing in the cloud in that case (assuming low enough latency)?


I don't think anything meaningful will change. We'll have higher-definition video and that'd be it. Business practices and the client-server model wouldn't change, and the internet as we know it wouldn't change. Files will just get bigger (and arguably more bloated). That's the trend we've always seen.

It might be easier to do distributed computing in some fields, and there could be interesting opportunities for mesh networks and internet of things in addition to data collection, but it'd all be the same corporate data sales stuff we see now. There won't be a paradigm shift because the current culture is built around business.


What could cause a big change is a drop in latency, more than an increase in bandwidth. But for some world regions, the speed of light would be a limiter anyway.

Another game changer would be internet (with good enough bandwidth/latency) everywhere. Scratch that, Starlink is doing pretty much that, or at least has the potential to do it soon enough.

The key disruptor would be access that is nearly free to get and to use, plus the kind of ubiquitous devices that such universal internet access would imply; that would change everything. GPS is a good example of that kind of impact, without the internet part.


I'm on 45Mbps/11Mbps in a suburban area in the UK. We might get 1Gbps next year. A cable provider (Trooli) previously installed down our road in 2023 and left out two houses. The one I own is one of them.

At this point the main thing that would change is I'd be able to do online backups for TBs of data. With 1Tb/s internet I would store a lot less on my hard drive and download ML models and more whenever I needed them. But I just know the remote server I'm downloading from would still throttle me to Mbps to guard their bandwidth.


Openreach fibre is usually something like ('up to') 900Mbps down, but upload is limited to segment the market, so most people will not be able to do those fast backups.


Well, there are some things to put on the table:

- in 10 years, audio/video resolutions will probably be much higher, probably a bit beyond human eye resolution; 3D movies might become a real thing rather than just a curious experiment demanding very expensive and clunky hardware, so it could be common to have, say, 50GB for an ordinary movie and, say, 1Gb/sec for a conference call, etc;

- datasets will probably be much bigger, finer grained and much longer in timeline terms; say, a future home assistant with InfluxDB will not record daily maxima and minima for 10 years but 1-minute resolution temperatures from a gazillion more sensors, and it could be common to have a 3D thermo-cam at home to regulate ventilation better, and so on.

So, on one side, anything computer related should be expected to be much bigger than today, just as today almost everything is much bigger than 10 years ago, and not in a linear progression.

As a middle ground, we have to consider some known physical limitations and some climate and geopolitical changes. It's even possible that in 10 years the internet will be in much worse shape than today: mass migrations and wars could crack the current infrastructure, and poverty caused by wars and a rushed, poorly done reorganization of world supply chains might leave us with limited wireless comms over too few low-altitude satellites.

Finally... if we achieve steady bandwidth growth, the current sorry state of IT (archaically keeping up a crappy modern-mainframe model for the service economy, where almost no one owns anything except big tech) might get even worse: "hey, do not buy an expensive NVMe drive! Just mount one in a proper datacenter via the internet". I do not much like such a nightmare...

BEWARE: so far, thin clients are not much less limited than old dumb terminals, yet they are still common, and it's pretty common to have a gazillion people working on remote desktops because 99% of company infrastructure is not designed for distributed desktop computing. So companies keep absurd centralization, totally ignoring the enormous waste of resources, the limits on usability and comfort, and the attack surface of modern "endpoints". Such a disgraced model could be made even worse under the flag of reducing hardware costs for the consumer.


Something like the Apple Vision Pro with 8k spatial video will be the de facto standard 10 years from now (2034). I watched the Vision Pro demo at the Apple Store, and yes, that is crazy and is the future of A/V tech, so we will need high-bandwidth 2.5Gbps+ connections to support it.


Ads would get bigger and more annoying.


If corporations got their way, nothing would run locally anymore. Everything would run in the cloud and our devices would be nothing more than thin clients.


I feel like most use cases are more latency-bound than bandwidth-bound today. I could be wrong, maybe I'm not thinking big enough :)


With that crazy-high bandwidth, wouldn’t most users end up using Ethernet instead of HDMI/DP for screens? I’m imagining houses having a media center computer that powers every desktop and television from that one point. No need for local processing. The latency wouldn’t matter for most use-cases either.


That would rival the speed of on-die processor caches. Right now the Apple M3 offers 300 GB/s or 2.4 Terabits / sec.

A 1 terabit/sec network connection would basically allow your machine to train and tune arbitrarily sized models completely remotely.


Streaming 8K per eye at 90 fps gets you pretty close to the maximum detail our eyes can perceive.


Assuming those are low latency links the line between the edge and the datacenter would be blurred. For instance it might be possible to train an LLM or have a supercomputer for weather models by combining thousands of nodes across the Internet, rather than having them all racked together. Multiple university campuses could easily combine their clusters and so forth.

In addition we could send more raw data to the cloud for processing, rather than having it done locally. For example a cheap VR headset with no GPU could simply send raw position and control data to a cloud server, which would stream stereo video back to the headset with little compression or latency. Or say a large surveillance system could send the footage from thousands of cameras and sensors directly to the cloud without requiring any initial processing on the edge. You could make devices smaller, lighter, and consume less power at the cost of having all the compute done off site.


I wish that one day normal video conferencing would just work. It's not just speed that we need but stability of the connection and an easier way for audio to auto-configure; maybe we can stop saying "do you hear me?" "yes, and do you hear me?"...


If something can be configured, it can be configured incorrectly. People are fallible, so they will configure things incorrectly.


You don't have to get to even a tiny fraction of 1Tb/s before your hardware can't take advantage of the speeds. Many devices are already bottlenecked by WiFi, leaving much bandwidth unused, and people are OK with that.


AR/VR with low latency could allow for some interesting multiplayer experiences.

However there is also the downside of making high fidelity omnipresent surveillance easier.


Is bandwidth the limiting factor or is it ping?


It would enable geographically distributed training of large neural networks. 1Tbps internode bandwidth is roughly what’s required in modern GPU clusters.


With bandwidth caps (soft and hard) also stagnating, you'd see increased profits for ISPs as those caps would be reached in minutes instead of hours.


Welp, I guess HN has answered. On my front page it says:

> 6. Ask HN: Imagine a world with 1Tb/s internet. What would change?

> 7. OpenSSH Backdoors (isosceles.com)


Network-attached storage and Network-attached RAM would be ubiquitous. You would literally be able to download more RAM.


Wearable MRI with some type of big model on it that can enable brain to brain communications.


Not sure about new apps but you can be sure the first thing there would be is more and higher resolution ads :/


Centralized systems become the norm and AI is commonplace. No need for local software or operating systems…


We would still be marveling at how the latest web framework lets you increment a counter.


For me nothing. I can already torrent a full 4k movie in under 10 minutes with my current connection, which is plenty fast. If everyone else had 1Tb and I could get a movie in 20 seconds or something, okay, fine. That would be kind of neat, I guess, but it wouldn't really be that big of an improvement to my overall life. My vision isn't good enough to appreciate the difference between 4k and 8k, so I wouldn't download 8k versions even if they were available. I'd consider that a waste of disk space.

I might start paying for streaming movies if services would fully buffer an artifact-free copy of the movie on my media player before starting to play the content. I don't think they'd do that though even if I had infinite bandwidth to my house. To save bandwidth and storage costs on their end they're going to continue to enshittify the streams to whatever generally-tolerable trickle people won't cancel a subscription over.

My Zoom calls would be the same. 75% of the people in Zoom calls keep their camera off anyway and hardly ever say anything, so the extra bandwidth wouldn't matter for work.

I generally will continue to avoid any Internet of Shit devices or privacy-ravaging Cloud services, so having extra bandwidth wouldn't impact my propensity to use any of that stuff.

I don't know. I guess I'm not creative enough or prone to use Cloud anything to imagine how my life would get better with 1000x more bandwidth. Maybe some startup would come up with something that would be the next "iPhone moment."


Lower latency would change things a lot more than higher speeds.


Modulo storage costs, we'd probably do a whole lot less compression.


Websites would probably just be huge f-ing videos shouting in your face.


Instead of a webpage loading 16MB of JavaScript for some animations and styling, webpages would come with a 16TB JavaScript super-framework.


Video games would be more realistic.


Severe DDOS attacks everywhere.


high fidelity holograms of people for remote presence



