nwellnhof's comments | Hacker News

fmemopen and open_memstream are both part of POSIX, so they're not restricted to GNU systems and can be used portably. fopencookie is a GNU extension, though.
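
For anyone unfamiliar with the two calls, here's a minimal sketch of typical usage, assuming a POSIX system (the buffer contents are purely illustrative):

    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* fmemopen: read an existing in-memory buffer through the stdio API. */
        char input[] = "line one\nline two\n";
        FILE *in = fmemopen(input, strlen(input), "r");
        char line[64];
        while (in && fgets(line, sizeof line, in))
            printf("read: %s", line);
        if (in) fclose(in);

        /* open_memstream: write through stdio into a dynamically growing buffer. */
        char *out = NULL;
        size_t out_len = 0;
        FILE *mem = open_memstream(&out, &out_len);
        if (mem) {
            fprintf(mem, "hello %s", "world");
            fclose(mem);               /* flushes and finalizes out/out_len */
            printf("wrote %zu bytes: %s\n", out_len, out);
            free(out);                 /* the caller owns the buffer */
        }
        return 0;
    }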

Why not petition to change § 52 AO directly? I made such a petition a couple of years ago but didn't get around to promoting it: https://www.openpetition.de/petition/online/anerkennung-der-...

> I cannot understate the impact of Russian Energy being cut off.

It's an interesting fact that West Germany imported Russian gas from the early 1960s onward, throughout the Cold War and in complete opposition to US interests. German Wikipedia has a nice overview: https://de.wikipedia.org/wiki/Geschichte_der_deutschen_Gasve...


> It's not that Russia had nukes in Ukraine and withdrew them.

Russia is the single legal successor of the USSR, so all Soviet nukes became Russian nukes, regardless of where they were located. So after the USSR broke up, Russia did have nukes in Ukraine and withdrew them.


Legal succession is mostly irrelevant and more complicated than that. Russia had operational control because it had taken physical control of the ex-Soviet command and control systems which were in Russia, and hence had the launch codes, etc.

To be fair, Russia becoming the single successor of the USSR wasn't a foregone conclusion in the early 1990s. There wasn't a relevant precedent for a country dissolving, I think -- Yugoslavia was still battling it out, Austria-Hungary was too long ago.

It was an explicit decision by both the CIS and the UN. Russia took the USSR's seat on the UNSC two weeks after the USSR was dissolved, and that happened in 1991. The Budapest Memorandum was negotiated 3 years later, by which time this was already a firmly established thing.

Removing XSLT from browsers was long overdue and I'm saying that as ex-maintainer of libxslt who probably triggered (not caused) this removal. What's more interesting is that Chromium plans to switch to a Rust-based XML parser. Currently, they seem to favor xml-rs which only implements a subset of XML. So apparently, Google is willing to remove standards-compliant XML support as well. This is a lot more concerning.


It’s interesting to see the casual slide of Google towards almost Internet Explorer 5.1-style behavior, where standards can just be ignored “because market share”.

Having flashbacks of “<!--[if IE 6]> <script src="fix-ie6.js"></script> <![endif]-->”


The standards body is deprecating XSLT with support from Mozilla and Safari (Mozilla first proposed the removal).

Not sure how you got from that to “Google is ignoring standards”.


There's a lot of history behind WHATWG that revolves around XML.

WHATWG is focused on maintaining specs that browsers intend to implement and maintain. When Chrome, Firefox, and Safari agree to remove XSLT, that effectively decides WHATWG's removal of the spec.

I wouldn't put too much weight behind who originally proposed the removal. It's a pretty small world when it comes to web specifications, the discussions likely started between vendors before one decided to propose it.


The issue is you can’t say to put little weight on who originally proposed the removal if the other poster is putting all the weight on Google, who didn’t even initially propose it.


I wouldn't put weight on the initial proposer either way. As best I've been able to keep up with the topic, google has been the party leading the charge arguing for the removal. I thought they were also the first to announce their decision, though maybe my timing is off there.


It doesn't seem like much of a charge to be led. The decision appears to have been pretty unanimous.


By browser vendors, you mean? Yes it seems like they were in agreement and many here seem to think that was largely driven by google though that's speculation.

Users and web developers seemed much less on board though[1][2], enough that Google referenced that in their announcement.

[1] https://github.com/whatwg/html/issues/11578 [2] https://github.com/whatwg/html/issues/11523


Yes, that's what I mean. In this comment tree, you've said:

> google has been the party leading the charge arguing for the removal.

and

> many here seem to think that was largely driven by google though that's speculation

I'm saying that I don't see any evidence that this was "driven by google". All the evidence I see is that Google, Mozilla, and Apple were all pretty immediately in agreement that removing XSLT was the move they all wanted to make.

You're telling us that we shouldn't think too hard about the fact that a Mozilla staffer opened the request for removal, and that we should notice that Google "led the charge". It would be interesting if somebody could back that up with something besides vibes, because I don't even see how there was a charge to lead. Among the groups that agreed, that agreement appears to have been quick and unanimous.


In the github issues I have followed, including those linked above, I primarily saw Google engineers arguing for removing XSLT from the spec. I'm not saying they are the sole architects of the spec removal, and I'm not claiming to have seen all related discussions.

I am sharing my view, though, that Google engineers have made up the majority of the browser-engineer comments I've seen arguing for removing XSLT.


Probably if Mozilla hadn't pushed for it initially, XSLT would have stayed around for another decade or longer.

Their board syphons the little money that is left out of their "foundation + corporation" combo, and they keep cutting people from the Firefox dev team every year. Of course they don't want to maintain pieces of web standards if it means an extra million for their board members.


Mozilla's board are basically Google yes-people.

I'm convinced Mozilla is purposefully engineered to be rudderless: the C-suite draws down huge salaries and approves dumb, mission-orthogonal objectives, in order to keep Mozilla itself impotent in ever threatening Google.

Mozilla is Google's antitrust litigation sponge. But it's also kept dumb and obedient. Google would never want Mozilla to actually be a threat.

If Mozilla had ever wanted a healthy side business, it wasn't in Pocket, XR/VR, or AI. It would have been in building a DevEx platform around MDN and Rust. It would have synergized with their core web mission. Those people have since been let go.


> If Mozilla had ever wanted a healthy side business, it wasn't in Pocket, XR/VR, or AI. It would have been in building a DevEx platform around MDN and Rust[…] Those people have since been let go.

The first sentence isn't wrong, but the last sentence is confused in the same way that people who assume that Wikimedia employees have been largely responsible for the content on Wikipedia are confused about how stuff actually makes it into Wikipedia. In reality, WMF's biggest contribution is covering infrastructure costs and paying engineers to develop the Mediawiki platform that Wikipedia uses.

Likewise, a bunch of the people who built up MDN weren't and never could be "let go", because they were never employed by Mozilla to work on MDN to begin with.

(There's another problem, too, which is that in addition to selling short a lot of people who are responsible for making MDN as useful as it is but never got paid for it, it presupposes that those who were being paid to work on MDN shouldn't have been let go.)


So the idea is that some group has been perpetuating a decade or so's worth of ongoing conspiracy to ensure that Mozilla continues to exist but makes decisions that "keep Mozilla itself impotent"?

That seems to fail Occam's razor pretty hard, given the competing hypotheses for each of their decisions include "Mozilla staff think they're doing a smart thing but they're wrong" and "Mozilla staff are doing a smart thing, it's just not what you would have done".


You're not wrong.

And where philosophical razors are concerned, the most apt characterization of the source of Mozilla's decay is the one that Hanlon gave us.


Can you say more about the teams let go who worked on MDN and Rust? Wondering if I can read anything on it to stay up to speed.



> The standards body is deprecating XSLT

The "CORPO CARTEL body" is deprecating XSLT. WhatWG is a not really a standards body like the W3C.


I think the person you’re replying to was referring to the partial support of XML instead of the xslt part.


Then the standards body is Google and a bunch of companies consuming Google engine code.


I guess you mean except Mozilla and Safari...which are the two other competing browser engines? It's not like it's a room full of Chromium-based browsers.


Do Mozilla and Safari _not_ take money from Google?


Safari yes

Mozilla…are they actually competing? Like really and truly.


Mozilla has proven they can exist in a free market; really and truly, they do compete.

Safari is what I'm concerned about. Without Apple's monopoly control, Safari is guaranteed to be a dead engine. WebKit isn't well-enough supported on Linux and Windows to compete against Blink and Gecko, which suggests that Safari is the most expendable engine of the three.


If your main competitor is giving you 90% of your revenue they aren't a competitor.


I really can’t imagine Safari is going anywhere. Meanwhile the Mozilla Foundation has been very poorly steering the ship for several years and has rightfully earned the reputation it has garnered as a result. There’s a reason there are so many superior forks. They waste their time on the strangest pet projects.

Honestly the one thing I don’t begrudge them is taking Google’s money to make them the default search engine. That’s a very easy deal with the devil to make especially because it’s so trivial to change your default search engine which I imagine a large percentage of Firefox users do with glee. But what they have focused on over the last couple of years has been very strange to watch.

I know Proton gets mixed feelings around here, but to me it’s always seemed like Proton and Mozilla should be more coordinated. Feel like they could do a lot of interesting things together


https://news.ycombinator.com/item?id=45955979 this sibling comment says it best


>Mozilla has proven they can exist in a free market; really and truly, they do compete.

This gave me a superb belly laugh.


Mozilla used to compete well but that ended... at least 10 years ago?


I don’t get the comparison. The XSLT deprecation has support beyond Google.


It's just ill-informed ideological thinking. People see Google doing anything and automatically assume it's a bad thing and that it's only happening because Google are evil.

HN has historically been relatively free of such dogma, but it seems times are changing, even here


Completely agree. You see this all the time in online discourse. I call it the "two things can be true at the same time" problem, where a lot of people seem unable to believe that 2 things can simultaneously be true, in this case:

1. Google has engaged in a lot of anticompetitive behavior to maintain and extend their web monopoly.

2. Removing XSLT support from browsers is a good idea that is widely supported by all major browser vendors.


Safari is "cautiously supportive", waiting for someone else to remove support.

Google does lead the charge on it, immediately putting up a PR to remove it from Chromium and stating an intent to remove, even though the guy pushing it didn't even know about existing XSLT uses before he opened either of them.

XSLT is a symptom of how browser vendors approach the web these days. And yes, Google are the worst of them.


Maybe free of the "evil Google" dogma but not free from dogma. The few who dared to express one tenth of the disapproval that we usually express about Apple nowadays were downvoted to transparent ink in a matter of minutes. Microsoft had its honeymoon period with HN after their pro open source campaign, WSL, VSCode etc. People who prudently remembered the Microsoft of the 90s and the 2000s did get their fair share of downvotes. Then Windows 11 happened. Surprise. Actually I thought that there has been a consensus about Google being evil for at least ten years, but I might be wrong.


"relatively" is meant to be doing a lot of work in my previous comment. Allow me to clarify: Obviously some amount was always there, but it used to be so much less than it is now, and, more importantly, the difference between HN and other social media, such as Reddit, used to be bigger, in terms of amount of dogma.

HN still has less dogma than Reddit, but it's closer than it used to be in my estimation. Reddit is still getting more dogma each day, but HN is slowly catching up.

I don't know where to turn to for online discourse that is at least mostly free from dogma these days. This used to be it.


> It's just ill-informed ideological thinking.

> People see Google doing anything and automatically assume it's a bad thing and that it's only happening because Google are evil.

Sure, but a person also needs to be conscious of the role that this perception plays in securing premature dismissal of anyone who ventures to criticize.

(In quoting your comment above, I've deliberately separated the first sentence from the second. Notice how easily the observation of the phenomenon described in the second sentence can be used to undergird the first claim, even though the first claim doesn't actually follow as a necessary consequence from the second.)


Interesting to watch technologists complain rather than engineer alternatives, and ignore political activism.


So-called "standards" on the Google (c) Internet (c) network are but a formality.


> This is a lot more concerning.

I'm not so sure that's problematic. Browsers probably just aren't a great platform for doing a lot of XML processing at this point.

Preserving the half-implemented, frozen state of the early 2000s really doesn't serve anyone except those maintaining legacy applications from that era. I can see why they are pulling out complex C++ code related to all this.

It's the natural conclusion of XHTML being sidelined in favor of HTML 5 about 15-20 years ago. The whole web service bubble, bloated namespace processing, and all the other complexity that came with that just has a lot of gnarly libraries associated with it. The world kind of has moved on since then.

From a security point of view it's probably a good idea to reduce the attack surface a bit by moving to a Rust based implementation. What use cases remain for XML parsing in a browser if XSLT support is removed? I guess some parsing from javascript. In which case you could argue that the usual solution in the JS world of using polyfills and e.g. wasm libraries might provide a valid/good enough alternative or migration path.


They don't reduce complexity. They translate C++ (static complexity) to JS (dynamic complexity).

Also it is not complexity if XSLT lives in a third-party library with a well defined interface.

The problem is control. They gain control in 2 ways. They will get more involved in the XML code base, and the bad actors run in the JS sandbox.

That is why we have standards though. To relinquish control through interoperability.



SVG is XML based.

> Removing XSLT from browsers was long overdue

> Google is willing to remove standards-compliant XML support as well.

> They're the same picture.

To spell it out, "if it's inconvenient, it goes", is something that the _owner_ does. The culture of the web was "the owners are those who run the web sites, the servants are the software that provides an entry point to the web (read or publish or both)". This kind of "well, it's dashed inconvenient to maintain a WASM layer for a dependency that is not safe to vendor any more as a C dependency" is not the kind of servant-oriented mentality that made the web great, not just as a platform to build on, but as a platform to emulate.


Can you cite where this "servant-oriented" mentality is from? I don't recall a part of the web where browser developers were viewed as not having agency about what code they ship in their software.


A nice recent example is "smooshgate", wherein it was determined that breaking websites with an older version of Mootools installed was not an acceptable way to move the web forward, so we got `Array.prototype.flat` instead of `Array.prototype.flatten`: https://news.ycombinator.com/item?id=17141024

> I don't recall a part of the web where browser developers were viewed as not having agency

Being a servant isn't "not having agency", it's "who do I exercise my agency on behalf of". Tools don't have agency, servants do.


I think you're reading way too much into that. For one thing, that's a proposal for Javascript, whose controlling body is TC39. For another, this was a bog standard example of a draft proposal where a bug was discovered, and rollout was adjusted. If that's having a "servant-oriented mindset", so do 99% of software projects.


> this was a bog standard example of a draft proposal where a bug was discovered, and rollout was adjusted

Yes, but the "bug" here was "a single website is broken". Here, we are talking about an outcome that will break many websites (more than removing USB support would break) and that is considered acceptable.

> That's a proposal for Javascript, whose controlling body is TC39

Yes, and the culture of TC39 used to be the culture of those who develop tools for using the web (don't break the Space Jam website, etc.)


Where are you seeing that it’s a single website? Mootools is a JavaScript library used by tons of websites.

Also, the entire measurement is fundamentally just part of the decision. Removing Flash broke tons of sites, and it was done anyways because Flash was a nightmare.


https://news.ycombinator.com/item?id=17141024 links to https://developer.chrome.com/blog/smooshgate which says:

> Shipping the feature in Firefox Nightly caused at least one popular website to break.

and links to https://bugzilla.mozilla.org/show_bug.cgi?id=1443630 which points to a single site as being broken. There's no check as to the size of the impacted user base, but there is a link in the blog post to https://www.w3.org/TR/html-design-principles/#support-existi... which says:

> Existing content often relies upon expected user agent processing and behavior to function as intended. Processing requirements should be specified to ensure that user agents implementing this specification will be able to handle most existing content. In particular, it should be possible to process existing HTML documents as HTML 5 and get results that are compatible with the existing expectations of users and authors, based on the behavior of existing browsers. It should be made possible, though not necessarily required, to do this without mode switching.

> Content relying on existing browser behavior can take many forms. It may rely on elements, attributes or APIs that are part of earlier HTML specifications, but not part of HTML 5, or on features that are entirely proprietary. It may depend on specific error handling rules. In rare cases, it may depend on a feature from earlier HTML specifications not being implemented as specified

Which is the "servant-oriented" mindset I'm talking about here.

> Removing Flash broke tons of sites

Yes, but Flash wasn't part of a standard, it was an ad-hoc thing that each browser _happened_ to support (rough consensus and working code). There was no "build on this and we'll guarantee it will continue to work" agreement between authors and implementers of the web. XSLT 1.0, as painful as it is, is part of that agreement.


I think you’re pretty off-base about how web standards work and how much of an “agreement” they constitute.

Flash doesn’t have an RFC because it was a commercial design by Adobe, not because it wasn’t a defined spec that was supported by browsers.

Meanwhile SSLv2 and v3 and FTP and gopher have RFCs and have been removed.

Making an RFC about a technology is not a commitment of any kind to support it for any length of time.

You’ve conjured a mystique around historical browser ideology that doesn't exist, and that's why what you're seeing today feels at odds with that fantasy.


Flash was a plugin to the browser ecosystem that no one ever made a commitment to other than "here it is".

SSLv2 and v3 all are protocol versions that anyone can still support, and removing support for them breaks certain web properties. This is less of a problem because the implementations of the protocol are themselves time-limited (you can't get an SSL certificate that is valid until the heat death of the universe).

FTP and gopher support wasn't removed from the browser without a redirect (you can install an FTP client or a Gopher client and the browser will still route out to it).

The point isn't "RFC = commitment", the point is that "the culture of the web" has, for a very long time, been "keep things working for the users" and doing something like removing built-in FTP support was something that was a _long_ time in coming. Whereas, as I understand it, there is a perfectly valid way forward for continuing to support this tech as-is in a secure manner (WASM-up-the-existing-lib) and instead of doing that, improving security for everyone and keeping older parts of the web online, the developers of the browsers have decided that the "extra work" of writing that one-time integration and keeping it working in perpetuity is too burdensome for _them_. It feels like what is being said by the browser teams is, "Yes, broken websites are bad for end users, yes, there are more end users than developers, yes, those users are less technical and therefore likely are going to lose access to goods they previously had ... but c'est la vie. Use {Dusk, Temple}OS if you don't want the deal altered any further." And I object to what I perceive as a lack of consideration of those who use the web. Who are the people that we serve.


It’s interesting that you are trying really really hard to explain away every counter example.

C’est la vie, I suppose.


Yes, I'm trying to explain my position so that you can understand it. Which I am not doing very well.

Chacun son truc (to each their own), I believe, as there isn't a moral component to this per se.


It's literal W3C policy: https://www.w3.org/TR/html-design-principles/#priority-of-co...

--- start quote ---

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors; which in turn should be given more weight than costs to implementors; which should be given more weight than costs to authors of the spec itself, which should be given more weight than those proposing changes for theoretical reasons alone. Of course, it is preferred to make things better for multiple constituencies at once.

--- end quote ---

However, the needs of browser implementers have long been the one and only priority.

Oh. It's also Google's own policy for deprecation: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...

--- start quote ---

First and foremost we have a responsibility to users of Chromium-based browsers to ensure they can expect the web at large to continue to work correctly.

The primary signal we use is the fraction of page views impacted in Chrome, usually computed via Blink’s UseCounter UMA metrics. As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial. There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!

--- end quote ---


I put this in a parallel thread, but maybe this is a linguistic gap between "servant", a person who does what they are told and has very limited agency within the bounds of their instructions, and "service", where you do things for the benefit of another entity.

None of the above reads like a "servant-oriented mindset". It reads like "this is the framework by which we decide what's valuable". And by that framework, they're saying that keeping XSLT around is not the right call. You can disagree with that, but nothing you've quoted suggests that they're trying to prioritize any group over the majority of their users.


Nowhere does it say "majority of users".

Moreover, Google's own doc says that even 0.0001% shouldn't be taken lightly.

As I keep saying, the person who's pushing for XSLT removal didn't even know about XSLT uses until after he posted the "intent to remove" and the PR removing it from Chrome. And the usage stats he used have been questioned: https://news.ycombinator.com/item?id=45958966


I could argue that W3C didn’t follow that policy when they attempted to push xhtml, which completely inverts that priority order, as xhtml is bad for users and great for purity.

But instead I’ll point out that the W3C no longer maintains the HTML spec. They ceded that to the WHATWG, which was spun up by the major browser developers in response to the stagnation and what amounted to abandonment of HTML by the W3C.


Ah, that's true. While the W3C still maintains a lot of standards, the intent to remove XSLT was sent to WHATWG.

I didn't look at all the documents, but the Working Mode document, which describes how specs are added or removed, doesn't mention users even once. It's all about implementors: https://whatwg.org/working-mode


The principles document says more about users. But it still does not set the same priority hierarchy as the W3C.

https://whatwg.org/principles

I’m not surprised they focus on implementors in “working mode”, though. WHATWG specifically started because implementers felt like the W3C was holding back web apps. And it kind of was.

WHATWG seemed to be created with an intent to return to the earlier days of browser development, where implementors would build the stuff they felt was important and tell other implementors how to be compatible. Less talking and more shipping.


https://datatracker.ietf.org/doc/html/rfc8890

> The Internet is for End Users

> This document explains why the IAB believes that, when there is a conflict between the interests of end users of the Internet and other parties, IETF decisions should favor end users. It also explores how the IETF can more effectively achieve this.


It feels like maybe the disconnect here is with what "servant" means, and with this quote: "the servants are the software that provides an entry point to the web (read or publish or both)".

The RFC8890 doesn't suggest anything that overlaps with my understanding of what the word "servant" means or implies. The library in my town endeavors to make decisions that promote the knowledge and education of people in my town. But I wouldn't characterize them as having a "servant-mindset". Maybe the person above meant "service"?

FWIW, Google/Mozilla/Apple appear to believe they're making the correct decision for the benefit of end users, by removing code that is infrequently used, unmaintained, and thus primarily a security risk for the majority of their users.


I’ve never heard of servant oriented, but I understand the point. Browsers process and render whatever the server returns. Whether they’re advertisements that download malware or a long rambling page on whatever I’m interested in now, browsers really don’t have much control over what they run.


I'm not sure what you're talking about.

1. As we're seeing here, browser developers determine what content the browser will parse and process. This happens in both directions: tons of what is now common JS/CSS shipped first as browser-specific behavior that was then standardized, and also browsers have dropped support for gopher, for SSLv2, and Flash, among other things.

2. Browsers often explicitly provide a transformation point where users can modify content. Ad blockers work specifically because the browser is not a "servant" of whatever the server returns.

3. Plenty of content can be hosted on servers but not understood or rendered by browsers. I joked about Opera elsewhere on the thread, which notably included a torrent client, but Chrome/Firefox/Safari did not: torrent files served by the server weren't run in those browsers.


I cannot imagine a time when browsers were "servant-oriented".

Every browser I can think of was/is subservient to some big-big-company's big-big-strategy.


There have been plenty of browsers that were not part of a big company, either for part or all of their history. They don't tend to have massive market share, in part because browsers are amazingly complex and when they break, users get pissed because their browsing is affected.

Even the browsers created by individuals or small groups don't have, as far as I've ever seen, a "servant-oriented mindset": like all software projects, they are ultimately developed and supported at the discretion of their developer(s).

This is how you get interesting quirks like Opera including torrent support natively, or Brave bundling its own advertising/cryptocurrency thing.


Both of those are strategies aimed at capturing a niche market segment in hopes of attracting them away from the big browsers.


I guess? I don't get the sense that when the Opera devs added torrents a couple decades ago, they were necessarily doing it to steal users so much as because the developers thought it was a useful feature.

But it doesn't really make a difference to my broader point that browser devs have never had "servant-mindset"


I agree. They've never had that mindset.


I don't remember it this way. It was my understanding that browsers were designed to browse servers, and that servers, or websites, designed themselves around web standards that originated as specs for the browsing experience that web browsers created.


It’s utter nonsense. Development of the web has always been advanced by the browser side, as it necessarily must. It’s meaningless for a server/web app to ship a feature that no browser supports.


> The culture of the web was "the owners are those who run the web sites, the servants are the software that provides an entry point to the web (read or publish or both)".

This is an attempt to rewrite history.

Early browser like NCSA Mosaic were never even released as Open Source Software.

Netscape Navigator made headlines by offering a free version for academic or non-profit use, but they wanted to charge as much as $99 (in 1995 dollars!) for the browser.

Microsoft got in trouble for bundling a web browser with their operating system.

The current world where we have true open source browser options like Chromium is probably closer to a true open web than what some people have retconned the early days of the web as being.


Chromium commits are controlled by a pool of Google developers, so it's not open in the sense that anyone can contribute or steer the direction of the project.

It's also 32 million lines of code which is borderline prohibitive to maintain if you're planning any importantly different browser architecture, without a business plan or significant funding.

There are lots of things that are perfectly forkable and maintainable, and the world is better for them (shoutout to Nextcloud and the various Syncthing forks). But Chromium, insofar as it's a test of the health and openness of the software ecosystem, I think is not much of a positive signal on account of what it would realistically require to fork and maintain for any non-trivial repurposing.


> Chromium commits are controlled by a pool of Google developers, so it's not open in the sense that anyone can contribute or steer the direction of the project.

By these criteria no software is open source.


I would disagree; corporate open source involves corporate dominance over governance that fits internal priorities. It meets the legal definition rather than the cultural model, which is community-driven and often multi-stakeholder. I would put Debian, VLC, LibreOffice in the latter camp.


Is it often multi-stakeholder? Debian has bureaucracy and a set group of people with commit permissions. VLC likewise has the VideoLAN organization. LibreOffice has The Document Foundation.

It seems like most open source projects either have:

1. A singular developer, who controls what contributions are accepted and sets the direction of the project
2. An in-group / foundation / organization / etc. that does the same.

Do you have an example of an open source project whose roadmap is community-driven, any more than Google or Mozilla accepts bug reports and feature reports and patches and then decides if they want to merge them?


A lot of the governance structures with "foundation" in their name, e.g. Apache Foundation, Linux Foundation, Rust Foundation, involve some combination of corporate parties, maintainers, and independent contributors, without any singularly corporate heavy hand responsible for their momentum.

I don't know that roadmaps are any more or less "community driven" than anything else given the nature of their structures, but one can draw a distinction between them and projects with a high degree of corporate alignment, like React (Facebook) or Swift (Apple).

I'm agreeable enough to your characterization of open source projects. It's broad but, I think, charitably interpreted, true enough. But I think you can look at the range of projects and see ones that are multi stakeholder vs those with consolidated control and their degree of alignment with specific corporate missions.

When Google tries to, or is able to, muscle through Manifest v3, or FLoC or AMP, it's not trying to model a benevolent actor standing on open source principles.


My argument is that "open source principles" do not suggest anything about how the maintainers have to handle input from users.

Open source principles have to do with the source being available and users being able to access/use/modify the source. Chrome is an open source project.

To try to expand "open source principles" to suggest that if the guiding entity is a corporation and they have a heavy hand in how they steer their own project, they're not meeting those principles, is just incorrect.

The average open source project is run by a person or group with a set of goals/intentions for the project, and they make decisions about the project based on those goals. That includes sometimes taking input from users and sometimes ignoring it.


Chromium can be forked (probably there are already a bunch of degoogled ones) to keep Manifest v2

what's missing is social infrastructure to direct attention to this (and maybe it's missing because people are too dumb when it comes to adblockers, or they are not bothered that much, or ...)

and of course, also maintaining a fork that does the usual convenience features/services that Google couples to Chrome is hard and obviously this has antitrust implications, but nowadays not enough people care about this either


The web wasn’t the browser it was the protocols.


Most of the protocol specs were written retroactively to match functionality that browsers were already using in the wild.


That’s not an accurate statement. The web was not just the protocols. It was the protocols and the servers that served them and the browsers that supported them and the web sites that were built with them. There is no web without browsers just like there is no web without websites.


I can’t understand why you’re splitting hairs to this extent. The web is protocols; some are implemented at server side whereas others are implemented at browser side. They’re all still protocols with a big dollop of marketing.

That statement was accurate enough if you’re willing to read actively and provide people with the most minimal benefit of the doubt.


My response is in a chain discussing browsers in response to someone who literally said “The web wasn’t the browser it was the protocols.”

I responded essentially “it was indeed also the browser”, which it seems you agree with so I don’t know what you’re even trying to argue about.

> willing to read actively and provide people with the most minimal benefit of the doubt.

Indeed


My point is, you could write your own server and your own browser to participate in the web, but you have to follow the protocols.

https://issues.chromium.org/issues/451401343 tracks work needed in the upstream xml-rs repository, so it seems like the team is working on addressing issues that would affect standards compliance.

Disclaimer: I work on Chrome and have occasionally dabbled in libxml2/libxslt in the past, but I'm not directly involved in any of the current work.


I hope they will also work on speeding it up a bit. I needed to go through 25-30 MB SAML metadata dumps, and an xml-rs pull parser took 3x more time than the equivalent in Python (using libxml2 internally, I think). I rewrote it all with quick-xml and got a 7-8x speedup over Python, i.e., at least 20x over xml-rs.


Python's ElementTree uses Expat; only lxml uses libxml2. Right now, I'm working on SIMD acceleration in my not-yet-released, GPL-licensed fork of libxml2. If you have lots of character data or large attribute values like in SVG, you will see tremendous speed improvements (gigabytes per second). Unfortunately, this is unlikely to make it into web browsers.
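
To give a rough idea of the kind of scan that produces those numbers -- an illustrative sketch only, not the actual code from that fork; it assumes SSE2 and the GCC/Clang __builtin_ctz builtin:

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stddef.h>

    /* Scan character data 16 bytes at a time for the bytes that end a text
     * run in XML ('<', '&') or need line-ending handling ('\r').
     * Returns the offset of the first interesting byte, or len if none. */
    static size_t scan_chardata(const unsigned char *buf, size_t len) {
        const __m128i lt  = _mm_set1_epi8('<');
        const __m128i amp = _mm_set1_epi8('&');
        const __m128i cr  = _mm_set1_epi8('\r');
        size_t i = 0;
        for (; i + 16 <= len; i += 16) {
            __m128i chunk = _mm_loadu_si128((const __m128i *)(buf + i));
            __m128i hit = _mm_or_si128(
                _mm_or_si128(_mm_cmpeq_epi8(chunk, lt),
                             _mm_cmpeq_epi8(chunk, amp)),
                _mm_cmpeq_epi8(chunk, cr));
            int mask = _mm_movemask_epi8(hit);
            if (mask)
                return i + (size_t)__builtin_ctz((unsigned)mask);
        }
        for (; i < len; i++)  /* scalar tail for the last <16 bytes */
            if (buf[i] == '<' || buf[i] == '&' || buf[i] == '\r')
                break;
        return i;
    }

The markup-free fast path skips byte-at-a-time checks, which is where figures like "gigabytes per second" come from on inputs dominated by character data.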


Wait. They are going along with an XML parser that supports DOCTYPEs? I get XSLT is ancient and full of exploits, but so is DOCTYPE. Literally the poster boy for the billion laughs attack (among other vectors).


You don't need DOCTYPE for that, you can put an ENTITY declaration straight in your source file ("internal subset") and the XML spec says it needs to be processed. (I seem to recall someone saying that Adobe tools are fond of putting those in their exported SVG files.)


The billion laughs bug was fixed in libxml2 in 2008. (As far as I understand, in .NET this bug was fixed in 2014 with .NET 4.5.2. In 2019 a bug similar to "billion laughs" was found in the Go YAML parser although it was explicitly mentioned and forbidden by the YAML specs. Among other products it affected Kubernetes.)

Other vectors probably means a single vector: external entities, where a) you process untrusted XML on a server and b) allow the processor to read external entities. This is not a bug, but early versions of XML processors may lack an option to disallow access to external entities. This also has been fixed.

XSLT has no exploits at all, that is, no features that can be misused.
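
For illustration, a minimal sketch of parsing untrusted input with libxml2 without opting into either risky behavior (error handling omitted; the function name is just an example):

    #include <libxml/parser.h>

    /* Parse untrusted XML: no entity substitution, no external DTD loading,
     * no network access, and the default entity-expansion limits left in place. */
    xmlDocPtr parse_untrusted(const char *buf, int len) {
        /* Do NOT pass XML_PARSE_NOENT, XML_PARSE_DTDLOAD or XML_PARSE_HUGE;
         * XML_PARSE_NONET additionally forbids any network fetches. */
        return xmlReadMemory(buf, len, "untrusted.xml", NULL,
                             XML_PARSE_NONET | XML_PARSE_NOERROR | XML_PARSE_NOWARNING);
    }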


> Other vectors probably mean a single vector: external entities,

XXE injection (which comes in several flavors), remote DTD retrieval, and quadratic blowup (a sort of twin to the billion laughs attack).

You aren't wrong though. They all live in the <!DOCTYPE> definition. Hence, my puzzlement.

Why process it at all? If this is as security focused as Google claims, fill the DOCTYPE with molten tungsten and throw it into the Mariana Trench. The external entities definition makes XSLT look well designed in comparison.


The billion laughs attack has well known solutions (basically, don't recurse too deep). It's not a reason to not implement DOCTYPE support.


> The billion laughs attack has well known solutions (basically, don't recurse too deep)

You can then recurse wide. In theory it's best to allow only X placeables of up to Y size.

The point is, Doctype/External entities do a similar thing to XSLT/XSD (replacing elements with other elements), but in a positively ancient way.
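
A rough sketch of that budget idea -- the type, function and limit values are made up for illustration; real parsers pick their own caps:

    #include <stdbool.h>
    #include <stddef.h>

    /* Budget that every entity expansion is charged against, capping both the
     * number of expansions ("X placeables") and the total expanded output
     * ("up to Y size"), so neither deep nor wide blow-ups get very far. */
    typedef struct {
        size_t expansions;      /* entities expanded so far */
        size_t expanded_bytes;  /* total bytes produced by expansion */
    } entity_budget;

    static bool charge_expansion(entity_budget *b, size_t replacement_len) {
        const size_t MAX_EXPANSIONS     = 10000;            /* illustrative cap */
        const size_t MAX_EXPANDED_BYTES = 10 * 1024 * 1024; /* illustrative cap */

        b->expansions++;
        b->expanded_bytes += replacement_len;
        return b->expansions <= MAX_EXPANSIONS &&
               b->expanded_bytes <= MAX_EXPANDED_BYTES;
    }

The parser would call charge_expansion each time it substitutes an entity and abort as soon as it returns false.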


I think it might make more sense to use WebAssembly and make them extensions which are included by default (many other things should possibly also be made extensions rather than built-in functions). The same can be done for picture formats, etc. This would improve security while also improving versatility (since you can replace parts of things), if the extension mechanism has these capabilities.

(However, I also think that generally you should not require too many features, if it can be avoided, whether those features are JavaScripts, TLS, WebAssembly, CSS, and XSLT. However, they can be useful in many circumstances despite that.)


Yeah, when I first heard about this a month or so ago, my thoughts were exactly this - a WebAssembly polyfill.


> Currently, they seem to favor xml-rs which only implements a subset of XML.

Which seems to be a sane decision given the XML language allows for data blow-ups[^0]. I'm not sure what specific subset of XML `xml-rs` implements, but to me it seems insane to fully implement XML because of this.

[^0]: https://en.wikipedia.org/wiki/Billion_laughs_attack


Given that you have experience working on libxslt, why do you think they should have removed the spec entirely rather than improving the current implementation or moving towards modern XSLT 3?


Why keep XSLT if the huge majority of devs use the HTML5+CSS+Javascript combo? Why pump money into a standard which will not be used?

Are XML technologies better or safer? Probably. However practice sets the standards. Is it a good thing? It remains to be seen.

Personally I am not satisfied with the "Web" experience. I find it unsafe, privacy disrespecting, slow and non-standards compliant.


> Currently, they seem to favor xml-rs which only implements a subset of XML.

What in particular do you find objectionable about this implementation? It's only claiming to be an XML parser, it isn't claiming to validate against a DTD or Schema.

The XML standard is very complex and broad; I would be surprised if anyone has implemented it in its entirety beyond a company like Microsoft or Oracle. Even then I would question it.

At the end of the day, much of XML is hard if not impossible to use or maintain. A lot of it was defined without much thought given to practicality, and most developers will never have to deal with a lot of its eccentricities.


I was somewhat confused and irritated by the lack of a clear frontrunner crate for XML support in Rust. I get that XML isn't sexy, but still.


What's long overdue is them updating to a modern version of XSLT.


I receive about 20 phishing emails each week sent from Google servers, on top of 50 other spam emails. All my abuse reports are ignored. At least for me, Google is the largest source of email spam by far. Maybe they should start to clean up their own act first.


The "severe security issue" in libxml2 they mention is actually a non-issue and the code in question isn't even used by Chrome. I'm all for switching to memory-safe languages but badmouthing OSS projects is poor style.


It is also kinda a self-burn. Chromium is an aging code base [1]. It is written in a memory-unsafe language (C++), calls hundreds of outdated & vulnerable libraries [2] and has hundreds of high-severity vulnerabilities [3].

People in glass houses shouldn't throw stones.

[1] https://github.com/chromium/chromium/commits/main/?after=c5a...

[2] https://github.com/chromium/chromium/blob/main/DEPS

[3] https://www.cvedetails.com/product/15031/Google-Chrome.html?...


Given Google's resources, I'm a little surprised they haven't created an LLM that would rewrite Chromium into Go/Rust and replace all the stale libraries.


Google has been too cheap to fund or maintain the library they built their browser with for more than a decade, and after its hobbyist maintainers got burnt out, they're ripping out the feature.

Their whole browser is made up of unsafe languages and their attempt to sort of make C++ safer has yet to produce a usable proof-of-concept compiler. This is a fat middle finger in the face of all the people whose free work they grabbed to collect billions for their investors.


Nobody is badmouthing open source. It's the core truth: open source libraries can become unmaintained for a variety of reasons, including the code base becoming a burden to maintain for anyone new.

And you know what? That's completely fine. Open source doesn't mean something lives forever.


The issue in question is just one of the several long-unfixed vulnerabilities we know about, from a library that doesn't have that many hands or eyes on it to begin with.


And why doesn’t Google contribute to fixing and maintaining code they use?


Because they don't want to use the code. They begrudgingly use it to support XSLT and now they don't use it.


Maintaining web standards without breaking backwards compatibility is literally what they signed up for when they decided to make a browser. If they didn't want to do that job, they shouldn't have made one.


They "own the web". They steer its standards, and other browsers' development paths (if they want to remain relevant).

It is remarkable the anti-trust case went as it did.


According to whom?

Chromium is open source and free (both as in beer and speech). The license says they've made no future commitments and no warranties.

Google signed up to give something away for free to people who want to use it. From the very first version, it wasn't perfectly compatible with other web browsers (which mostly did IE quirks things). If you don't want to use it, because it doesn't maintain enough backwards compatibility... Then don't.


The license would be relevant if I'd claimed that removing XSLT was illegal or opened them up to lawsuits, but I didn't. The obligation they took on is social/ethical, not legal. By your logic, chrome could choose to stop supporting literally anything (including HTML) in their "browser" and not have done anything that we can object to.

IIRC, lack of IE compatibility is fundamentally different, because the IE-specific stuff they didn't implement was never part of the open web standards, but rather stuff Microsoft unilaterally chose to add.


> By your logic, chrome could choose to stop supporting literally anything (including HTML) in their "browser" and not have done anything that we can object to.

Literally this. Microsoft used to ship a free web browser. Then they stopped. That's not something anybody can object to.

> because the IE specific stuff they didn't implement was never part of the open web standards, but rather stuff Microsoft unilaterally chose to add.

Standards aren't holy books. It's actually more important to support real customer use cases than to follow standards.

But you know this. If standards are more important than real use cases, then the fact that XSLT has been removed from the HTML5 standard is enough justification to remove it from Chrome.


> Literally this. Microsoft used to ship a free web browser. Then they stopped. That's not something anybody can object to.

There is a fundamental difference between ceasing to make a browser and continuing to make a browser while not meeting the expectations placed on you as a browser maker.

> If standards are more important that real use cases, then the fact that XSLT has been removed from the html5 standard is enough justification to remove it from Chrome.

Browsers very much have not deprecated support for non-HTML5 markup (e.g. the HTML4-era <center> tag still works). This is because upholding devs' and users' expectation that standards-compliant websites that once worked will continue to work is important.


We object with our feet, by switching browsers.

What odds would you put dropping XSLT support at for triggering a user migration?


The license is the way it is not by choice. We should be clear about that and acknowledge KHTML, and the origins of both Safari and Chromium. Some parts remain LGPL to this day.


Because in this case it doesn't contribute to their ability to deliver ads.


If that were the case they would switch to Xee (a Rust XPath/XSLT implementation).


Sounded like the maintainers of libxml2 have stepped back, so there needs to be a supported replacement, because it is widely used. (Or if you are worried about the reputation of "OSS", you can volunteer!)


Where's the best collection or entry point to what you've written about Chrome's use of Gnome's XML libraries, the maintenance burden, and the dearth of offers by browser makers to foot the bill?


> cybercrime — which the U.N. estimates costs $10.5 trillion around the world annually.

That's almost 10% of global GDP. Who comes up with these numbers?


It will all make sense once you realize who works at the UN: basically nepo babies of all colors and varieties, including second cousins of Saudi royalty etc.


One of my family members was a research director at the UN and came from a middle class American family. It has its problems (he certainly has his share of complaints) but the idea that they are all nepo babies is incorrect and they do have serious researchers. Also, are we sure that the $10.5 trillion is a UN generated number? Other people in the comments seem to think it was made up by some other organization.


A relative of mine worked for the UN and interfaced with the UN after they left for a non-profit. Anyone who knows anything about them, or who simply observes what they are doing and how, should have no doubt that it is filled with people who got there by using their connections. And you absolutely constantly run into people that have no business being there other than through nepotism. Btw. I am sure that US staff are less likely to be total nepo babies, but because the UN "has" to hire from all over the world, most roles are not filled like that.


It might be including the cost of the entire cybersecurity business sector? Salaries of security engineers, security vendors, etc. Not just fallout from hacks.

edit: Cybersecurity Ventures seems to be the real source for the $10.5T number: https://cybersecurityventures.com/cybercrime-damage-costs-10...

Apparently their methodology is just to assume a $3T cybercrime cost in 2015, then compound it at 15% annually.
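
For what it's worth, the compounding does reproduce the headline figure, assuming the stated $3T baseline for 2015 and 15% annual growth:

    3.0 * 1.15^9 ≈ 3.0 * 3.52 ≈ 10.5

i.e. the $10.5 trillion number is just that 2015 assumption extrapolated out to roughly 2024.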


[flagged]


Complete and utter nonsense. You have to be innumerate to believe that $1 in $10 is being stolen by cybercriminals.


You're right, the worldwide sum is nonsense - 10%. As I said, I work with the US market, and $0.5T out of $30T might happen this year.


Paying $1,000 for low-impact issues is a nice move which might make me contribute to their program again.


Don't bother. They'll find an excuse to pay $0. This is all at Apple's inscrutable discretion.


At least it seems that they won't assign CVE IDs and credit researchers without compensating them at all (which is what happened when I reported CVE-2024-27811, for example):

> We want those researchers to have an encouraging experience — so in addition to CVE assignment and researcher credit as before, we will now also reward such reports with a $1,000 award.


Aren't all bug bounty programs at the sponsor's inscrutable discretion?


Yes, but Apple tends to be more inscrutable than anyone else.


Making this feature opt-out is a clear violation of the GDPR. Linkedin claims they have a "legitimate interest" in collecting this data for AI training without consent, but this argument is laughable.


"Legitimate interest" is abused on an absurd scale


Even as written in the regs, "legitimate interest" shouts "we hear your preference not to be stalked by advertisers or to provide us with free training material, but fuck you and your silly little preferences, we want to anyway, so here, have another hoop to jump through", and it is stretched even further from there.

