
Thanks for the reading material.

You omitted the sentence before your excerpt where Mr. McManus suggests we move to a multiplexed pipelined protocol for HTTP.

I'll go further. I say we need a lower-level, large-framed, multiplexed protocol, carried over UDP, that can accommodate HTTP, SMTP, etc. Why restrict multiplexing to HTTP and "web browsers"? Why are we funnelling everything through a web browser ("HTTP is the new waist") and looking to the web browser as the key to all evolution?

It seems obvious to me that what we all want is end-to-end, peer-to-peer connectivity. Although the user cannot articulate that, it's clear they expect to have "stable connections". This end-to-end connectivity was the original state of the internet, before "firewalls". Client-server is only so useful. It seems to me we want a "local" copy of the data sources we need to access. We want data to be "synced" across locations. A poor substitute for such "local copies" has been moving data to network facilities at the edge, shortening the distance to the user.

But, back to reality: in the case of HTTP servers, common sense tells me that opening myriad connections to (often busy) web servers to retrieve myriad resources is more prone to delays and other problems (for any number of reasons) than opening a single connection to retrieve those same resources. Moreover, are his observations in the context of one browser?
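
To make the single-connection point concrete, here's a minimal sketch of HTTP/1.1 pipelining over one TCP socket, in Python. The host and paths are hypothetical, and plenty of real servers handle pipelined requests poorly, so treat it as an illustration, not production code:

    # HTTP/1.1 pipelining: send every request up front, then read the
    # responses back in order on the same connection.
    import socket

    HOST = "example.com"                      # hypothetical host
    PATHS = ["/a.html", "/b.css", "/c.js"]    # hypothetical resources

    sock = socket.create_connection((HOST, 80))

    # Write all requests before reading any response; that's the pipelining.
    # "Connection: close" on the last request lets us read until EOF below.
    for i, path in enumerate(PATHS):
        tail = "Connection: close\r\n" if i == len(PATHS) - 1 else ""
        request = f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\n{tail}\r\n"
        sock.sendall(request.encode("ascii"))

    # Responses arrive in request order on the one connection.
    data = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk
    sock.close()
    print(data.decode("latin-1")[:400])

One socket, one handshake, one slow-start ramp. The trade-off is that responses must come back in order, so one slow resource delays everything queued behind it.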

I guess when you work on a browser development team, you might get a sort of tunnel vision, where the browser becomes the center of the universe.

If you dream of multiplexing over stable connections, then you should dream bigger than the web browser. IMO.

I'm aware of a bug in some PHP databases with keep-alive after POST. I mainly use pipelining for document retrieval (versus document submission), so I am not a good judge of this. What I'm curious about is where keep-alives after POST would be desirable. You alluded to that usage scenario (a series of GETs after a large POST).
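
For what it's worth, here is a minimal sketch of that scenario (a POST followed by GETs reusing the same keep-alive connection) using Python's http.client; the host and paths are hypothetical:

    # Keep-alive after POST: HTTP/1.1 holds the connection open by default,
    # so the follow-up GETs ride the same TCP connection.
    import http.client

    conn = http.client.HTTPConnection("example.com")   # hypothetical host

    conn.request("POST", "/submit", body="field=value",
                 headers={"Content-Type": "application/x-www-form-urlencoded"})
    resp = conn.getresponse()
    resp.read()        # drain the body before reusing the connection
    print("POST:", resp.status)

    for path in ["/result/1", "/result/2"]:
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()
        print("GET", path, resp.status)

    conn.close()

Note this is keep-alive, not pipelining: each request waits for the previous response, but the TCP connection (and its warmed-up congestion window) is reused.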




Re. Patrick's sentence, you're right, but as I mentioned above, SPDY/4 will become HTTP/2 (we're working through the standardization process). So I think most of the major players are on board with "fixing" HTTP pipelining by using SPDY-style multiplexing.

Re. thinking bigger, you might want to read up on QUIC, which was announced recently: http://en.wikipedia.org/wiki/QUIC . Based on that, I would contend that at least we on the Chromium team don't have tunnel vision. :)

Re. your question, Patrick's data is from Firefox only, I believe. You're right that it's not surprising his stats show that SPDY helps over HTTP without pipelining. But the more interesting thing is that HTTP with pipelining still doesn't help that much over HTTP without pipelining (on average), and SPDY still beats it by orders of magnitude. I'd have to dig, but I'm pretty sure there are similar stats on the Chromium side.


Yes, a major appeal of pipelining to me is efficiency with respect to open connections. It's easier to monitor the progress of one connection sending multiple HTTP verbs than multiple connections each sending one verb.

Whether multiple verbs over one connection are processed by a given httpd more efficiently than single verbs over single connections is another issue. IME (a purely client-side perspective), pipelining does speed things up. But then I'm not using Firefox to do the pipelining.

I'm sure the team responsible for Googlebot would have some insight on this question. (And I wonder how much easier SPDY makes the bot's job?)

In any event, multiplexing would appear to solve the open-connections issue. And I don't doubt it will consistently beat HTTP/1.1 pipelining alone. I'm a big fan of multiplexing (for peer-to-peer "connections"), but I am perplexed by why it's being applied at the high level of HTTP (and hence restricted to TCP, with all of its own inefficiencies and limitations).
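
For contrast with pipelining, here is a toy sketch of the framing idea behind multiplexing. The frame layout is invented for illustration (it is not the actual SPDY or HTTP/2 wire format), but it shows how tagging each frame with a stream id lets responses interleave instead of queueing:

    # Toy multiplexed framing: 4-byte stream id + 4-byte length + payload.
    import struct

    def encode_frame(stream_id: int, payload: bytes) -> bytes:
        return struct.pack("!II", stream_id, len(payload)) + payload

    def decode_frames(data: bytes):
        offset = 0
        while offset < len(data):
            stream_id, length = struct.unpack_from("!II", data, offset)
            offset += 8
            yield stream_id, data[offset:offset + length]
            offset += length

    # A big response (stream 1) no longer blocks a small one (stream 2),
    # which is exactly the head-of-line problem pipelining cannot avoid.
    wire = (encode_frame(1, b"first chunk of big response")
            + encode_frame(2, b"entire small response")
            + encode_frame(1, b"rest of big response"))

    for sid, chunk in decode_frames(wire):
        print(sid, chunk)

Of course, running this over TCP still leaves TCP's own head-of-line blocking at the transport layer, which is part of the motivation for moving to UDP (as QUIC does).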

I'm curious about something you said earlier. You said something about the "overhead" of using netcat. It's a relatively small, simple program with modest resource requirements. What did you mean by overhead?


Re. multiplexing at the HTTP layer: because an HTTP replacement has to be deployable and testable. However, now that the ideas in SPDY have been proven and are on their way to being standardized, you can look at QUIC to see what can be done when not limited to TCP and HTTP.

By overhead I mean latency overhead: running a program to download a site to a local file and then displaying it in a browser will almost certainly have a higher time to start render. Not to mention you're hitting everything cold (i.e., not using the browser's cache).


I don't measure latency as including rendering time. Maybe I'm not "rendering" anything except pure HTML.

I measure HTTP latency as the time it takes to retrieve the resources.
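
Something like this, say, where nothing is rendered and only the transfer is timed (URL hypothetical):

    # Measure latency as retrieval time only; no parsing, no rendering.
    import time
    import urllib.request

    url = "http://example.com/page.html"   # hypothetical resource

    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    elapsed = time.perf_counter() - start

    print(f"retrieved {len(body)} bytes in {elapsed:.3f}s")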

Whatever happens after that is up to the user. Maybe she wants to just read plain text (think text-only Google cache). Maybe she wants to view images. Maybe she wants to view video. Maybe she only wants resources from one host. Maybe she does not want resources from ad servers. We just do not know. Today's webpages are so often collections of resources from a variety of hosts. We can't presume that the user will be interested in each and every resource.

Of course, those doing web development like to make lots of presumptions about how users will view a webpage. Still, these developers must tolerate that users' connection speeds vary, their computers vary, and their browsers vary, and some routinely violate "standards". Heck, some users might even clear their browser cache now and again.

But HTTP is not web development. It's just a way to request and submit resources. Nothing more, and nothing less.



