Schrödinger's IPv6 Cat (ripe.net)
43 points by minusf 11 months ago | 71 comments


Shown in the article, but not linked:

Google's IPv6 adoption statistics: <https://www.google.com/intl/en/ipv6/statistics.html>

Facebook's IPv6 adoption statistics: <https://www.facebook.com/ipv6/?tab=ipv6_total_adoption>


I work for a major open source network software company; we support IPv6 natively, and in the past two years I've been asked about IPv6 by my customers exactly... zero times.


My company would only ask about IPv6 if a piece of software didn't support it, though.


Sonic (beloved Bay Area ISP) got pestered with requests for IPv6 until they finally implemented it natively for their fiber, even though they had offered 6rd tunnels before.

But granted, Sonic is favored by enthusiasts, so they likely have a higher share of customers caring about such technicalities. And even then the ratio of users actively asking for it may have been tiny.


My hunch is that AWS charging for v4 addresses will start applying pressure across a broad cross-section of businesses as they start asking why they don't just use the free thing.


Half of AWS services aren't even available from an IPv6 endpoint, it's a clusterfuck. I'm all for carrot and stick but they are making a giant mess of it.

Source: https://docs.aws.amazon.com/vpc/latest/userguide/aws-ipv6-su... , see 'IPv6 only support' column.


> ... see 'IPv6 only support' column.

Is that relevant? There's nothing wrong with having RFC1918 addresses and globally-routable IPv6 addresses assigned to your VPC.

Have the RFC1918 addresses accessing IPv4-only AWS resources and the globally-routable IPv6 addresses serving the world. Easy.

After all, the major cloud providers don't charge for RFC1918 addresses... they just charge for globally-routable IPv4 addresses.


> There's nothing wrong with having RFC1918 addresses and globally-routable IPv6 addresses assigned to your VPC.

It's a pretty backwards way to build your network. You pay all the costs and gain none of the benefits.


I'm afraid I don't follow.

The way to express the design in a pure-IPv6 world would be that you use ULA addresses to reach the AWS services that you use and globally-routable addresses to reach the outside world.

Given that the cost that we're avoiding paying with the mechanism I described in my previous post is the ongoing cost for globally-routable IPv4 addresses, I'm not sure what cost you're talking about paying.

And given that the benefits are not having to pay for globally-routable IPs, I'm not sure what benefits you're talking about that we don't get?

Are you perhaps one of those "Hosts must be IPv6-only, no dual-stack allowed!" people? If so, I regard that as a silly stance today, and expect it will remain a silly stance for the next several decades (maybe even the next century, who knows?).


Running single stack hosts is absolutely a reasonable goal. If I have a choice between running IPv6 plus NAT64, versus running IPv4, IPv6, and NAT44, surely the former is both a simpler setup and a further step towards a real, full v6 Internet?


I agree, but more and more customers are strictly limiting egress for security reasons which reduces the argument somewhat. I think it’s more likely that not overpaying for NAT Gateways will be a more effective source of pressure for AWS customers.


> Running single stack hosts is absolutely a reasonable goal.

Sure. I expect that it's not one that we will see most Internet-facing machines achieve in our lifetimes.

> If I have a choice between running IPv6 plus NAT64, versus running IPv4, IPv6, and NAT44, surely the former is both a simpler setup...

No. You already have an IPv4 stack in your OS, and I guaran-damn-tee you that your NAT64 setup is far more complicated than a NAT44 setup. [0]

> ...and a further step towards a real full v6 internet?

Sure. But there's no inherent value in dropping IPv4. The only thing wrong with IPv4 that's not also wrong with IPv6 is that it doesn't have enough address space. Moving more and more globally-reachable servers and hosts to IPv6 reduces the number of IPv4 addresses required, which solves the "not enough addresses" problem of IPv4.

[0] AFAIK, if you use NAT64, you either accept that direct-IP connections [1] and inbound IPv4 port forwarding don't work, OR you must use additional (substantially complex) software to make them work. So, either you break some software that happens to use IPv4, or you massively increase your system software complexity. Seems bad either way.

[1] That is, connections to IPv4 hosts without a pre-connection DNS lookup.


I meant the network admin costs. If you're having to run dual stack, and especially if you end up with a network where you can't fearlessly combine or add routes between any two subnets that you have, you pay those costs. To my mind the key benefit of using IPv6, the thing that makes it worth doing at all, is to stop having to worry about address assignment, address collisions, and local addresses. Obviously you do still probably want to talk to v4-only outside resources, but if you can't get away from having to give all your hosts individual v4 addresses and keep track of them, then frankly you might as well just stay v4-only (except at the load balancer or what have you - which might be what you meant, but it sounded like you were talking about using a mix of v4 and v6 within the VPC).


> I meant the network admin costs...

Yeah, the network admin costs don't double, they're marginally larger.

> ...you can't fearlessly combine/add routes between any two subnets that you have.

You can't do this with ULA subnets, either. The standard way to do ULA subnet calculation is collision-resistant, not collision-proof. There's NO central coordinating body to prevent collisions. While the odds of collision are VERY, very low, they're not zero.

The benefit is that you pretty much never have to renumber after network merges... it's NOT that you never have to check for collisions.
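For context, the standard ULA mechanism is RFC 4193: take fd00::/8 and append a 40-bit pseudo-random Global ID to get a /48. A rough sketch of the idea (using plain randomness instead of the RFC's suggested hash of a timestamp and an EUI-64):

    # Rough sketch of RFC 4193 ULA /48 generation: the fd00::/8 prefix followed
    # by a 40-bit pseudo-random Global ID. The RFC suggests deriving the Global
    # ID from a hash of a timestamp and an EUI-64; plain randomness illustrates
    # the same point.
    import secrets
    import ipaddress

    def random_ula_prefix() -> ipaddress.IPv6Network:
        global_id = secrets.randbits(40)             # 40-bit Global ID
        prefix = (0xFD << 120) | (global_id << 80)   # fd00::/8 + Global ID, low 80 bits zero
        return ipaddress.IPv6Network((prefix, 48))

    print(random_ula_prefix())  # e.g. fd5c:90aa:3e01::/48 -- random each run

Two independently generated Global IDs collide with probability about 2^-40, which is exactly why this is collision-resistant rather than collision-proof.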

> To my mind the key benefit of using IPv6 ... is to stop having to worry about address assignment and address collisions and local addresses...

See above.

> ...if you can't get away from having to give all your hosts individual v4 addresses and keep track of them then frankly you might as well just stay v4-only...

This is nutty. If you don't get why Internet-connected systems configured with "NATted IPv4 + globally-reachable IPv6" is strictly better than "NATted IPv4 and no IPv6", I question how deeply you've thought about this.

> ...it sounded like you were talking about using a mix of v4 and v6 within the VPC...

Yep. See above.


Yeah. For example, ECR (Elastic Container Registry) is _not_ available on IPv6. Anything "serverless" such as ElastiCache or RDS is also not available.

So it means that you can't have a fully-IPv6 stack for any modern application on AWS.


Having IPv4 just for your public facing servers is a small expense, and within the private network you can still use private IPv4. The biggest pressure is to allow your servers to call out into the internet without an IPv4 address or a NAT. That's pressure on APIs, SaaS services consumed by backend servers, update servers, etc.

Maybe that's enough to remove the friction around IPv6 and make it "just work" to the point that everyone just keeps it on. Or maybe it doesn't and we get a divide where everything consumed by machines moves to IPv6 while content consumed by humans keeps preferring IPv4.


Where I work there is almost nobody with any IPv6 experience and certainly nobody willing to come forth and push for adoption. We just push the increasing cost of NAT gateways etc onto our customers.


I doubt it. People will just start setting up load balancers with SNI routing if cost becomes a problem.


Quasi-related: how do you find out if your ISP is using CGNAT?

I'm rather lucky in that my ISP recently started offering IPv6 (and somehow my workstation appears to be using it as the default), but none of the other PCs on my network do. (Win11 change perhaps?)


You can, with several caveats, detect which hop(s) on the path perform NAT by using some trickery [1]:

> NAT devices are detected by observing a difference in the expected and actual checksum of the UDP packet that is returned as the part of the Original Datagram in the ICMP Time Exceeded message. If they differ then it indicates that a NAT device has modified the packet. This happens because the NAT device must recalculate the UDP checksum after modifying the packet (i.e. translating the source port) and so the checksum in the UDP packet that is nested in the ICMP error may not, depending on the device, match the original checksum.

[1] https://github.com/fujiapple852/trippy/releases/tag/0.11.0


Check the IP that your router receives on its WAN interface and compare it to the IP printed by internet services like Google (search for "what is my ip" and there'll be a special card among the results) or https://ipinfo.io/ip . If they're not the same (because your router's IP is a private IP like 192.168.#.# or 10.#.#.#) then your router is being NAT'd.


The most reliable way is to compare your ISP-assigned address to the response from any one of a number of services that return the caller's IP address (e.g., https://checkip.amazonaws.com/).
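A rough sketch of that comparison (hypothetical helper; you still have to read the WAN address off your router yourself):

    # Compare the address an echo service sees with the address on your router's
    # WAN interface. A mismatch means something (possibly CGNAT) is translating you.
    import urllib.request

    def public_ip_matches_wan(wan_address: str) -> bool:
        seen_from_outside = (
            urllib.request.urlopen("https://checkip.amazonaws.com/")
            .read().decode().strip()
        )
        return seen_from_outside == wan_address

    print(public_ip_matches_wan("203.0.113.42"))  # hypothetical WAN address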


Aside from comparing assigned public IP addresses regularly, I think we (ipinfo) probably have this data internally, or at least we can figure it out. We are pinging and running traceroutes on every IP out there to figure out IP geolocation, so I think we should be able to tag ASNs/ISPs that use CGNAT: on CGNAT connections, the RTT for the same IP address will vary over time, and traceroute paths and times will differ as well.

But I'm not sure who would find this information useful, or how. If anyone can think of a reason why CGNAT detection would be generally useful, I can pitch this to the engineers.


Check the IPv4 address on your WAN. If it's in the 100.64.0.0/10 range [0], you're on CG-NAT.

Furthermore, run

    curl ipv4.icanhazip.com
If the address you get back is different from the one on your WAN interface - assuming your Gateway is your ISP rather than, say, a VPN - you must be on CG-NAT.

[0] https://en.wikipedia.org/wiki/Carrier-grade_NAT#Shared_addre...
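The first check is easy to script; a minimal sketch with Python's ipaddress module (addresses below are made up):

    # RFC 6598 reserves 100.64.0.0/10 specifically for carrier-grade NAT, so a
    # WAN address inside that block means your ISP is translating you.
    import ipaddress

    CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")

    def wan_is_cgnat(wan_address: str) -> bool:
        return ipaddress.ip_address(wan_address) in CGNAT_RANGE

    print(wan_is_cgnat("100.72.15.9"))    # True  -- shared address space
    print(wan_is_cgnat("203.0.113.42"))   # False -- ordinary public address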


I don't know of any ISP that will give you a public IPv4 address for free.

More interesting is Windows 11 auto-configuring IPv6. Does your PC have a public IPv6 address (starting with a 2) or only an fe80:: link-local address?

Quick IPv6 crash course: instead of requiring DHCPv4 for address configuration (there is DHCPv6, but it's optional), IPv6 uses something called Stateless Address Autoconfiguration (SLAAC). Normally your router sends out Router Advertisement packets that tell devices about the default gateway, public prefix, DNS, etc., and the PC will generate a public IP of (64-bit public prefix):(64-bit random number).
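To make that last step concrete, here's an illustrative sketch with a made-up prefix (real hosts take the prefix from a Router Advertisement, and may derive the low 64 bits per RFC 4941/RFC 7217 rather than from plain randomness):

    # SLAAC-style address formation: a 64-bit prefix from the RA plus a 64-bit
    # interface identifier chosen by the host.
    import secrets
    import ipaddress

    def slaac_style_address(prefix: str) -> ipaddress.IPv6Address:
        net = ipaddress.IPv6Network(prefix)   # e.g. the /64 advertised by the router
        iid = secrets.randbits(64)            # random 64-bit interface identifier
        return ipaddress.IPv6Address(int(net.network_address) | iid)

    print(slaac_style_address("2001:db8:1234:5678::/64"))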

It seems like Windows 10 and earlier will not do IPv6 unless your router advertises it.

TL;DR: learning IPv6 is easier than disabling it at this point.


>I don't know of any ISP that will give you a public IPv4 address for free.

There probably isn't an ISP that gives out *static* public IPv4 addresses for free, but any ISP that supports IPv4 without CGNAT will give out public IPv4 addresses by definition. The two I've used in the US (Frontier, now Ziply) certainly do.


Aquiss [0] in the UK gives static public IPv4 and static /56 IPv6 PD included in the regular plan price.

[0] https://aquiss.net/


I suppose "for free" is a relative term, since no ISP I have will give me IP service for free either. However none of the residential ISPs available to me in my section of the US will offer a discount to not give me a public IPv4 address, so I think that counts as getting one "for free"?


One of my pet hot takes is that IPv6 will never exceed ~60% adoption. NAT and SNI routing (aka virtual hosts at the TLS layer) solve most problems for most users fairly well.


Eyeballing a sigmoid curve fit to Google's IPv6 charts would support that.
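For anyone who wants more than an eyeball, a quick sketch of such a fit with made-up data points (roughly the shape of the public charts, not Google's actual numbers):

    # Fit a logistic curve to hypothetical (year, adoption fraction) points and
    # read off the implied ceiling L. The data below are invented for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, L, k, t0):
        return L / (1 + np.exp(-k * (t - t0)))

    years = np.array([2012, 2014, 2016, 2018, 2020, 2022, 2024], dtype=float)
    share = np.array([0.01, 0.04, 0.10, 0.22, 0.30, 0.38, 0.44])  # made up

    (L, k, t0), _ = curve_fit(logistic, years, share, p0=[0.6, 0.5, 2019.0])
    print(f"implied adoption ceiling: {L:.0%}")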


My ISP no longer allows port forwarding on IPv4, as one public IP is shared amongst many of the ISP's customers. This is due to a migration to MAP-E.

IPv6 is pretty much my only choice for hosting stuff in offices and at home.

Is MAP-E becoming prevalent?


I feel like lots of providers in Japan are using it now. MAP-E is awful: it doesn't use typical IPv6 acquisition methods, and the IPv4 address and allowed forwarding ports are calculated from the IPv6 address (using a public/fixed table?).


Every post about IPv6 and its failure is about friction. Friction for the inevitable march towards adoption.

As usual with English, the British master it, and they have a term for bureaucratic friction: "The Blob"


The Blob is a political insult towards the civil service by senior Tory leaders, who blamed them for resisting their policies for political reasons.

It does not refer to bureaucratic friction in general, and is not a term in widespread use by the British.


IPv6 has some issues, but the main reason it's ignored is that the big and powerful do not want a global address per device: they do not want people buying a domain name and then easily hosting their own stuff, easily making P2P calls to anyone else, and so on.

That, IMVHO, is the real reason adoption has stalled.


Here’s the real reason we won’t move to IPv6: NAT is used as a security feature in IPv4. The world isn’t willing to do the work to make that transition.


Wrong. It's more about money. People who run ISPs have said they don't support IPv6 because they won't see any return on the cost. These ISPs use CGNAT and like to solve customer "issues" by selling them a static IP. With IPv6 they would sell far fewer static IPs and would actually have to look into issues, rather than dilly-dallying around a bit until the static IP "fixes" the issue. They like to blame issues on other nefarious customers causing shared IPs to be banned, or something like that.


In a lot of cases on a residential line you can't even pay for a public and/or static v4. The option simply doesn't exist. Many ISPs just force you to buy a "business" package for 3x the cost with a bunch of other features you may not need.


This talking point has been debunked since the 90s. Any device capable of doing NAT can perform the even easier task of filtering packets.

Even if you do decide to toss your router and connect directly to the internet it’s a lot less risky than it was in 1998 when Windows 95 didn’t have a firewall. I doubt IPv6 is going to make many people decide they want dumber gateway devices, however, since the cost differential hasn’t been meaningful for ages.


They can use NAT on v6 if they really want to


I am not sure if it is just me, but the article sounds like it was passed through an LLM.

These days I see more and more content that reads like something ChatGPT would generate.


I can see how you might suspect that, but I got a different read from it. The constant references to the topic as the "IPv6 Cat" struck me as another in the long tradition of authors who became too attached to a clumsy and ineffective analogy they thought was good enough™ and banged it like a drum. That strikes me as an all-too-human thing to do (especially since I've been guilty of it myself before) rather than an AI artifact. I enjoyed the piece nevertheless, and I agree with its premise that market forces are not enough to continue the trend of IPv6 penetration growth and that public policy carrots and sticks are both needed and justifiable to ensure it comes to pass.

On another matter, whose brainchild is IPv6+? I haven't heard of that one before.


Indeed!

Look at these formulations: "Respecting these governance frameworks is crucial to maintaining the open, collaborative model that underpins global Internet development and its technological evolution ... collaborative approaches that engage technical communities, promote open standards, and prioritise interoperability are essential... To overcome these challenges, a strategic approach combining economic and operational incentives with collaborative governance is essential. Governments and organisations must take proactive steps to create a more supportive environment... By combining these measures, enterprises and network operators can address the barriers to IPv6 adoption while fostering collaboration between governments, industry leaders, and the technical community. This approach ensures that the transition to IPv6 remains inclusive, efficient, and aligned with the Internet’s principles of openness and innovation."

Purely LLM gibberish...:))


How is that gibberish? It's clearly a policy paper/article, and the wording is very in-line with that: it's wordy, but there's nothing factually wrong or outlandish in it.


I'm not convinced. I don't think this is gibberish.


"seamless", appearing 5 times.

Everything is "seamless" with ChatGPT.

IPv6 is seamless, etc...


Only if one delves.


I've honed my skills at recognizing ChatGPT; haven't you?


Not a great writeup. The IPv6 Cat thing is tortured and the article feels meandering and mostly pointless. Is the intended audience policy makers?


> One key reason for this uneven progress is the extension of IPv4’s lifespan through interim technologies like Network Address Translation (NAT) and IPv4 address transfers.

They completely ignore the actual problem with IPv6, which is that they didn't just extend IPv4 in a straightforward manner. They could have made the address fields 64 bits and been done with it. But, oh no, they had to make it the protocol for the ages.

It's completely analogous to the failed Intel Itanium vs. AMD64.


I've never seen anyone explain a "straightforward" way to extend the bits without having 90% of the same adoption difficulty. What's your idea, specifically?

Also, extension mechanisms like that already exist as part of IPv6.


The issue isn't that the data sent over the wire changed too much. Instead, the semantics changed too much.

Adopting IPv6 would ideally have been as simple as changing a socket definition and your address types. But so much of the semantics changed that it isn't that easy at all. It also prevented backwards compatibility.


IPv6 changes far more than the address size.

Why mandate the use of Neighbor Discovery Protocol instead of the much simpler ARP?

Why change the rules for UDP checksums? The checksum field in UDP over IPv4 is optional. The checksum field in UDP over IPv6 is required. This is a major pain for protocols that change fields in transit, such as PTP.

I could go on. There are important reasons for each of these decisions, but the fact is that every little change slows adoption. IETF could have stayed focused on solving address scarcity alone, but instead they chose to boil the ocean.
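For readers who haven't had to implement it, a rough sketch of what the mandatory UDP-over-IPv6 checksum covers (per RFC 8200 section 8.1): it sums an IPv6 pseudo-header plus the entire UDP header and payload, so any field rewritten in transit forces a recomputation over the whole datagram.

    # UDP-over-IPv6 checksum sketch: one's-complement sum over an IPv6
    # pseudo-header (src, dst, upper-layer length, next header) plus the whole
    # UDP header and payload. The caller passes the UDP bytes with the checksum
    # field zeroed.
    import ipaddress
    import struct

    def udp6_checksum(src: str, dst: str, udp_bytes: bytes) -> int:
        pseudo = (
            ipaddress.IPv6Address(src).packed
            + ipaddress.IPv6Address(dst).packed
            + struct.pack("!I", len(udp_bytes))   # upper-layer packet length
            + b"\x00\x00\x00\x11"                 # 3 zero bytes + next header (17 = UDP)
        )
        data = pseudo + udp_bytes
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack(f"!{len(data) // 2}H", data))
        while total > 0xFFFF:                     # fold carries back in
            total = (total & 0xFFFF) + (total >> 16)
        checksum = ~total & 0xFFFF
        return checksum or 0xFFFF                 # an all-zero result is sent as 0xFFFF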


As you noted, there were important reasons for those changes – they even helpfully summarized them in a dedicated section of https://datatracker.ietf.org/doc/html/rfc4861#section-3.1; the checksum benefits are obvious – and none of them were major factors in the rollout delays.

The single biggest factor was that changing the header format broke every decoder in existence, and it took a long time both to get all of that old hardware and software aged out of common use since there wasn’t a legal or financial compulsion to do so. Nobody delayed migration because they liked supporting ARP+ICMP more or critically depended on being able to half-ass the implementation of an obscure time sync protocol - if you don’t update checksums, lots of things will stop your traffic even in an IPv4-only world. The main reason was that everyone had to replace their network infrastructure, review firewall rules, etc. and early adopters were only rewarded with more pain. Given how painful that has been, I sympathize with the people who said we should go to 128-bits because we never want to repeat the process.


As someone who has spent the last several years of my career implementing picosecond-precise time transfer using an "obscure time sync protocol", and holds a few patents in this field, kindly check your dismissive attitude.

When you're working on an FPGA or an ASIC, everything about the UDP checksum is a total pain in the ass. It is entirely redundant with the MAC-layer checksum. The field comes before the contents it checks, and depends on the entirety of the message contents, which must be buffered in the meantime while 10+ Gbps of data continue to arrive. The logical thing to do is to disable it, which clients are explicitly required to accept under IPv4. There is no "half-assing" here, only a logical decision to avoid spending 16 kiB of precious SRAM on every network port. That is the reason why the product line in question doesn't support PTP over IPv6 and never will.


First, I’m not saying that it’s obscure to be dismissive but simply recognizing that the number of people who need to have picosecond precision is not a significant factor in global IPv6 adoption.

Second, while it’s certainly true that having to buffer packets to calculate the checksum is more expensive, that doesn’t mean the best option is to ignore concerns about data integrity, which was a far more frequent source of problems. If they hadn’t developed an encapsulation mechanism, using an alternate protocol like UDP-Lite would avoid the issue, and anyone needing extremely high precision already has to have tight enough control over their network to deploy it, since they’d need to avoid having random middleboxes interfering with timing.


Data integrity is ensured by the MAC layer. Ethernet's CRC32 is substantially stronger than the weak and unnecessary 16-bit IP-checksum in the UDP header. It is also infinitely easier to calculate in hardware, because it is placed where it belongs (i.e., after the end of protected data).

I acknowledge that PTP is not that widespread, but this esoteric issue is emblematic of broader overreach in the IPv6 design. This decision is one of dozens (hundreds?) that are nice-to-have for many users, but catastrophically disruptive for others.

Such concerns are individually minor, but I assert that they collectively represent a significant barrier to adoption.


If you're running at 10Gbps and you don't spare the memory to buffer a single packet, that's desire not need.

Your expertise does not make you automatically right about every tradeoff.

Also why does on-the-fly editing for PTP packets in particular require your buffer to be bigger than a PTP packet? Aren't those small?


It's very much "need" in this case. This was considered at length.

To be clear, we are talking about exotic custom hardware that has little in common with the average x86/x64 desktop.

For something like a 24-port 10 GbE switch, the platform might have a gigabyte of off-chip DRAM, but only a megabyte of on-chip SRAM. An ask of 16 kiB of SRAM per port (24 × 16 kiB = 384 kiB) is 37% of that capacity, which is badly needed for other things.

The other complicating factor is that the PTP egress timestamp and update pipeline needs to be predictable down to the clock cycle, so DRAM isn't an option.

Most PTP packets are small, yes, but others have a lot of tags and metadata. They may also be tucked between other packets. To be fully compliant, we have to handle the worst case, which means a full-size buffer for a jumbo frame.

And yes, we did consider RFC1141 and RFC1624. We use those when we can, but unfortunately not possible in this case.
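For readers who haven't met those RFCs, the incremental update they describe is roughly this (illustrative sketch): when a single 16-bit field changes from m to m', the checksum can be patched without re-summing the rest of the datagram.

    # RFC 1624 incremental checksum update: HC' = ~(~HC + ~m + m'),
    # with all additions done in one's-complement arithmetic.
    def rfc1624_update(old_checksum: int, old_field: int, new_field: int) -> int:
        def add1c(a: int, b: int) -> int:
            s = a + b
            return (s & 0xFFFF) + (s >> 16)

        acc = ~old_checksum & 0xFFFF
        acc = add1c(acc, ~old_field & 0xFFFF)
        acc = add1c(acc, new_field)
        return ~acc & 0xFFFF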

Say what you will about the rest of IPv6, but I am particularly salty about the UDP checksum requirement.


> To be fully compliant, we have to handle the worst case, which means a full-size buffer for a jumbo frame.

Well, fully compliant except for IPv6. If you said no jumbo frames for PTP, or no jumbo frames for specifically IPv6 PTP, then the extra buffer for PTP checksums only needs 4% of your SRAM.

> They may also be tucked between other packets.

Does that matter? Let's say a particular PTP packet is 500 bytes. If there's a packet immediately after it, I would expect it to flow through the extra buffer like it's a 500 byte shift register.


Not have 128-bit addresses, for one thing. 64 bits would have been fine; that was one of the biggest points of consternation, since it's a huge hit for small packets.

So NAT sucks; we needed something better. But instead of just extending to 64-bit src/dest addresses, aligning the fields, and dropping the checksum, or some other straightforward extension like that, we got an entirely new protocol with new rules, nuances, and complexity. So people just said nope. If it had been just a superset of IP with a different packet format and wider fields, it would have been adopted widely 20 years ago.

This wasn't intended to be a contentious take, btw: I was genuinely surprised that the article ignored it. It was a very common feeling in the late '90s and 2000s when IPv6 was coming out: "over-engineered".


> What's your idea, specifically?

This is the problem. Lots of arm-chair protocol engineers claim it'd be easy if 'They did X'. Of course, these immediately fall apart under the barest of scrutiny but they keep coming up.

Here is your challenge. Create a way to add this address space extension in a way that doesn't break backwards compatibility. Remember, you need to be specific about how you would add the change and how it would keep backwards compatibility.


> address space extension in a way that doesn't break backwards compatibility.

I didn't say it wouldn't break backward compatibility; you're moving the goalposts. What I said was "a superset of IP with a different packet format and wider fields".

> arm-chair protocol engineers

Don't be condescending. I've likely been designing protocols for longer than you think.

> Remember, you need to be specific how you would add the change and how it would keep backwards compatibility.

If all you had to do to deal with IPv6 was bigger addresses and a slightly different wire format, it wouldn't have had such a barrier to adoption. Don't design an entirely different protocol; the wire format is the least of the problems.


>if all you had to do to deal with IPv6 was bigger addresses and a slightly different wire format,

This again. The biggest barrier to IPv6 adoption has always been the different wire format; the degree of difference doesn't matter.


I'm just a casual homelab guy, and all my hardware now supports IPv6, but I'm not really using it, precisely because it is just so different from IPv4.

How you assign addresses is completely different. How you configure your firewall as a result is completely different. In fact software support for the latter was one of the things I struggled with for years before having to change router software from pfSense to OpenWRT. Last I checked pfSense still didn't have full support.

They changed the way you write the addresses, using the port separator as the group separator as well, which means you need special software support for parsing IPv6 addresses. I know because I had to fix this in a few projects where we bothered to add IPv6 support; that was the biggest PITA by far, the rest was trivial.
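To make the ambiguity concrete, here's a rough sketch (hypothetical helper, not from any of those projects) of why naive host:port splitting breaks and what the RFC 2732 bracket form buys you:

    # A naive split on ":" works for IPv4 or hostnames with a port, but not for
    # a bare IPv6 address, where every colon is a group separator. RFC 2732's
    # [address]:port bracket form removes the ambiguity.
    def split_host_port(s: str):
        if s.startswith("["):                    # RFC 2732 style: [2001:db8::1]:8080
            host, _, rest = s[1:].partition("]")
            port = int(rest[1:]) if rest.startswith(":") else None
            return host, port
        if s.count(":") > 1:                     # bare IPv6 address, no port possible
            return s, None
        host, _, port = s.partition(":")
        return host, int(port) if port else None

    print(split_host_port("192.0.2.1:8080"))      # ('192.0.2.1', 8080)
    print(split_host_port("[2001:db8::1]:8080"))  # ('2001:db8::1', 8080)
    print(split_host_port("2001:db8::1"))         # ('2001:db8::1', None)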

Out of all the trouble I've had with IPv6, the wire format was the least problematic by far. All the wire format did was cause it to take some time to get IPv6 capable hardware.

But I've had that hardware for decades at this point. The thing keeping IPv6 back is all the other things they changed.


> They changed the way you write the addresses, using the port separator as group separator as well, leading to needing special software support for parsing IPv6 addresses. I know because I had to fix this in a few projects where we bothered to add IPv6 support, and that was the biggest PITA by far when adding IPv6 support, the rest was trivial.

OMFG!

The hardest part of supporting IPv6 was fixing your address parsing? THAT!?

Here's my frustration. Everyone who doesn't understand the why of IPv6 always complains that the address format is such a huge problem and that is why the IPv6 deployment is so slow and hard. It's basically a shibboleth for poor understanding.

The reason why this isn't a good take is that IP address parsing is a standard function of every standard library on the planet. You dump in a string, and they all figure it out and spit back an object with everything you need. The reason you had so much trouble with supporting it is that you weren't using the platform libraries. You hacked together some junk, probably a few broken regexes and string concatenation. Your homebrew IP library was broken, and I guarantee you didn't handle all the IPv4 parsing rules correctly.


> The hardest part of supporting IPv6 was fixing your address parsing?

That was actually a non-trivial part of implementing IPv6. Sure, RFC 2732 had come out a few years earlier, but we weren't parsing URLs, so it was not clear if it applied to our use case.

All the rest that was required for us to support IPv6 was quite trivial. This was the only thing we had to spend time on.

> The reason why this isn't a good take is that IP address parsing is a standard function of every standard library on the planet.

Ok, I stand to be corrected, after all none of us were network programming experts.

How do you parse an IPv6 address, including the port number if present, using Boost 1.35 or the C++03 STL? Note it should run on Windows XP, as well as Linux and OS X of a similar era. Does your solution require the format specified in RFC 2732?

Anyway, my point still stands. The main friction in adopting IPv6 is not the wire format; it's everything else they changed.


So here is an exercise: go look at the structure of an IPv4 packet. It’s not complicated. Can you see where you can cram 32 additional bits? Or even 24? Because if there isn’t a place for them then you cannot possibly extend the IPv4 address space without breaking backwards compatibility. Anyone can do this exercise, and anyone who has an opinion should do this exercise.

Spoiler: you will come to the conclusion that you can’t find the additional bits. Your only option is to break compatibility and create a new packet header format. At this point you can choose literally any size address larger than 32 bits. 64 is good, but the cost to go to 128 is literally nothing while giving you a lot more possibilities of what you can do with it.
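To save anyone a trip to RFC 791, here's the fixed IPv4 header laid out as a struct (illustrative sketch with hypothetical field values): every bit is already spoken for, and the source and destination fields are hard-wired at 32 bits each.

    # The fixed 20-byte IPv4 header (no options). Widening the two 4-byte
    # address fields necessarily means a new, incompatible header format.
    import struct

    IPV4_HEADER = struct.Struct("!BBHHHBBH4s4s")

    def parse_ipv4_header(raw: bytes):
        (ver_ihl, tos, total_len, ident, flags_frag,
         ttl, proto, checksum, src, dst) = IPV4_HEADER.unpack(raw[:20])
        return {
            "version": ver_ihl >> 4, "ihl": ver_ihl & 0xF,
            "total_length": total_len, "ttl": ttl, "protocol": proto,
            "src": ".".join(map(str, src)), "dst": ".".join(map(str, dst)),
        }

    # Hypothetical header: version 4, IHL 5, TTL 64, TCP, 192.0.2.1 -> 198.51.100.7
    raw = IPV4_HEADER.pack(0x45, 0, 40, 0, 0, 64, 6, 0,
                           bytes([192, 0, 2, 1]), bytes([198, 51, 100, 7]))
    print(parse_ipv4_header(raw))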

Lastly, IPv6 fixes a lot of cruft from IPv4. It is a more streamlined protocol that is actually easier to work with than IPv4. The people who told you that IPv6 is overengineered didn’t have an alternative, better protocol. Their point was that IPv4 is fine and we don’t need anything but what it provides, because a new protocol is scary and annoying to learn, because new things are scary. Literally, mathematically, there is no alternative that solves address exhaustion in a backwards-compatible way. CGNAT is the overengineered hack, not IPv6.

I really hope you stop responding to people with nonsense before you look at the packet structure yourself.


What I said was "a superset of IP with a different packet format and wider fields".

Well, yes, obviously you need more bits. What you don't need is all the other changes.

> I really hope you stop respond in to people with nonsense before you look at the packet structure yourself.

Don't be condescending.


Again, look at the IP header format and see for yourself that there is no place to create wider fields. This is what everyone here has been trying to tell you in myriad ways, and you are not hearing it. There is no possible way to do what you are proposing. What you are saying is the definition of nonsense because there is no sense in it. You are arguing that an 18 wheeler should be able to fit inside the trunk of a car and getting upset when people tell you that it doesn’t fit.


A different address size is your main suggestion? Anything that isn't 32 is going to have the same problem.

> if it had been just a superset of IP with a different packet format and wider fields

It pretty much is...

Changes like DHCP are not the deciding factor.



