
At the very least, it will free up FTEs who can now work on what makes Mozilla projects unique, rather than on building and maintaining generic fundamentals.

It's a pet peeve and personal frustration of mine. "Do one thing and do it well" is also often forgotten in this part of Open Source projects. You are building a free alternative to Slack? Spend every hour on building the free alternative to Slack, not on self-hosting your GitLab, operating your CI/CD worker clusters or debugging your wiki servers.


You just showed the poster child of the gatekeeping that is harming Open Source.

Every contributor is valuable, it's in the name, the definition of "contribute".

Any bar to entry is bad, and it certainly isn't the solution to a different problem (not being able to manage all contributions). If anything, in the longer run, it will only make that problem worse.

Now, to be clear: while I do think GitHub is currently the "solution" for lowering barriers, letting more people contribute and thereby improving your Open Source project, the fact that this is so is a different problem in itself - there isn't any good alternative to GitHub (with a broad definition of "good"). Why is that, and what can we do to fix it, if anything?


In spirit, I agree.

In practice, if you get dozens of PRs from people who clearly did it to bolster their CV, because their professor asked them, or something like that, it just takes a toll. It's more effort than writing the same code yourself. Of course I love to mentor people, if I have the capacity. But a good chunk of the GitHub contributions I've worked on were pretty careless: not even tested, that kind of thing. I haven't done the maintainer job in a while; I'm pretty terrified by the idea of what effect the advent of vibe coding has had on PR quality.

I feel pretty smug talking about "PR quality", but if the volume of PRs that take a lot of effort to review and merge is high enough, it can be pretty daunting. From a maintainer's perspective, the best thing to have is thoughtful people who genuinely use and like the software and want to make it better with a few contributions. That is, unfortunately and in my experience, not the most common case, especially on GitHub.


In my experience low-quality PRs aren't that common, but I do agree dealing with them is annoying. You can't just tell people to go away because they did spend their spare time on it. On the other hand it's also garbage. Sometimes it's garbage by people who really ought to know better. IMHO low-quality issues are the bigger problem by the way, a problem that existed well before GitHub.

But I just don't see how GitHub or a PR-style workflow relates. Like I said in my own reply: I think it's just because you'll receive fewer contributions overall. That's a completely fair and reasonable trade-off to make, as long as you realise that is the trade-off you're making.


> Every contributor is valuable, it's in the name, the definition of "contribute".

No. I have definitely seen people who created a multitude of misleading bug reports and a flood of stupid feature requests. I have personally done a bit of both.

There are people who do both repeatedly, who file issue reports without filling in the requested fields, or who open a new issue when their previous report was closed.

I once got a bug report where someone was ranting that the app was breaking their data. It turned out (after I wasted my time investigating it) that the user had broken the data themselves with different software, through its misuse.

There were PRs adding backdoors. This is not a valuable contribution.

There were PRs made to foment useless, harmful political messes.

Some people pretend to be multiple people and argue with themselves in pull requests or issues (using multiple accounts, or in more bizarre cases just one). Or they try to get listed multiple times as a contributor.

Some people try to sneak in some intentionally harmful content one way or another.

Some contributors are NOT valuable. Some should be banned or educated (see https://www.chiark.greenend.org.uk/~sgtatham/bugs.html ).


This can be categorized as "spam".

Fighting spam isn't done by using unfamiliar tech, but by actually fighting the spam.

With good contributor guidelines, workflows, filters, etc.

Contributions that don't adhere to the guidelines, or cannot fit in the workflow can be dismissed or handed back.

Two random examples of things I came across in PRs recently:

"Sorry, this isn't on our roadmap and we only work on issues related to the roadmap as per the CONTRIBUTION-GUIDELINES.md and the ROADMAP.md"

"Before we can consider your work, please ensure all CI/CD passes, and the coding style is according to our guidelines. Once you have fixed this, please re-open this ticket"

That is fine, a solved problem.

Using high-barrier tech won't keep intentionally harmful contributions away. It won't prevent political messes or flamewars. It won't keep ranters away. It won't help with contributors' feelings of rejection, and so on. Good review procedures, with enough resources, help prevent harmful changes. Guidelines and codes of conduct, with the resources and tech to enforce them, help against rants, bullying or flamewars - not "hg vs git". Good up-front communication of expectations is the solution to people demanding or making changes that can never be accepted.


This is just blatantly wrong on so many levels.

Proposed contributions can in fact have negative value, if the contributor implements some feature or bug fix in a way that makes it more difficult to maintain in the long term or introduces bugs in other code.

And even if such a contribution is ultimately rejected, someone knowledgeable has to spend time and effort reviewing such code first - time and effort that could have been spent on another, more useful PR.


It's not wrong, it's just based on the assumption that the project wants contributors.

Quite obviously, any incidental friction makes this ever so slightly harder or less likely. Good contributions don't only come from people who are already determined from the get-go. Many might just want to dabble at first, or they are just casually browsing and see something that catches their attention.

Every project needs some form of gatekeeping at some level. But it's unclear to me whether the solution is to avoid platforms with high visibility and tools that are very common and familiar. You probably need a more sophisticated and granular filter than that.


> Many might just want to dabble at first, or they are just casually browsing and see something that catches their attention.

You can easily craft an email for that. No need to create a full PR.


"Crafting an email" in the format required by many email-based projects is hardly easy for the average user, who's most likely using a webmail service that does not have much control over line wrapping and the like. Accepting patches in attachments (instead of the email body) helps with this, but naive users can still easily get caught by using HTML email, which many project maintainers love to performatively turn up their noses at.


It is not wrong.

For one, it's semantics: it's only a contribution if it adds value to a project.

What you probably mean is that "not everything handed to us is a contribution". And that's valid: there will be a lot of issues, code, discussions, ideas, and whatnot that subtract value, or have negative value. One can call this "spam".

So, the problem to solve, is to avoid the "spam" and allow the contributions. Or, if you disagree with the semantics, avoid the "negative value contributions" and "allow the positive value contributions".

Part of that solution is technical: filters, bots, tools, CI/CD, etc. - many of which GitHub doesn't offer, BTW. A big part is social and process: guidelines, expectations, codes of conduct, etc. I've worked in some Open Source projects where the barriers to entry were really high, with endorsements, red tape, sign-offs, waivers, proofs of conduct, etc. And a large part is simply the inevitable "resources": it takes resources to manage the incoming stuff, enforce the above, communicate it, forever, etc.

If someone isn't willing to commit these resources, or cannot, then ultimately the right choice is to simply not allow contributions - it can still be open source, it just won't take input. Like e.g. SQLite.


This isn't a platform issue — it's a problem with the PR system, and arguably with open source itself. If you're unwilling to spend time on anything beyond writing code, maybe keep the project closed-source.


Or, more obviously, make it open-source, and make a big fat note in the README of "I will not accept PRs, this repo is just for your consumption, fork it if you want to change it".


It's not a binary. Many projects do want PRs, but it doesn't mean they have to accept any random PR, or fawn over every contributor who creates an obviously low-effort one. It's perfectly fine to "gatekeep" on quality matters, and that does mean acknowledging the fact that not all contributors are equally valuable.


> fawn over every contributor who creates an obviously low-effort one

It's that sense of superiority that pisses me off.

Many maintainers condescendingly reply "contributions welcome" in response to user complaints. People like that had better accept whatever they get. They could have easily done it themselves in all their "high quality" ways. They could have said "I don't have time for this" or even "I don't want to work on this". No, they went and challenged people to contribute instead. Then when they get what they wanted they suddenly decide they don't want it anymore? Bullshit.

You're making the assumption that these are "high quality" projects, that someone poured their very soul into every single line of code in the repository. Chances are it's just someone else's own low effort implementation. Maybe someone else's hobby project. Maybe it's some legacy stuff that's too useful to delete but too complex to fully rewrite. When you dive in, you discover that "doing it properly" very well means putting way too much effort into paying off the technical debts of others. So who's signing up to do that for ungrateful maintainers for free? Who wants to risk doing all that work only to end up ignored and rejected? Lol.

Just slap things together until they work. As long as your problem's fixed, it's fine. It's not your baby you're taking care of. They should be grateful you even sent the patches in. If they don't like it, just keep your commits and rebase, maybe make a custom package that overrides the official one from the Linux distribution. No need to worry about it, after all your version's fixed and theirs isn't. Best part is this tends to get these maintainers to wake up and "properly" implement things on their side... Which is exactly what users wanted in the first place! Wow!


> People like that had better accept whatever they get.

no, I am not obligated to merge badly written PRs introducing bugs just because I had no time to implement the feature myself


Let all those "bad PRs" with useful features and fixes accumulate at your own peril. You might wake up one day and find you're not upstream anymore because someone else has merged them all into a fork. I've seen it happen.


You seem to assume that such a situation would be a problem in all cases.

In fact it is not always a problem. For some projects I would love it if someone else maintained them; for some, the fork is friendly and has a somewhat different purpose, and so on.


> People like that had better accept whatever they get.

FOSS maintainers are not a unified mind. The people who go "contributions welcome" and "#hacktoberfest" are somewhere near one end of the spectrum, and the folks dealing with low-effort contributions are somewhere near the other end of the spectrum.


Of course not. That's why I singled out a very specific kind of maintainer: the type who thinks himself superior to users even when they engage at their level. Guys so good they can't be bothered to do it themselves but complain when others do it.

Good maintainers may be firm but they are always nice and grateful, and they treat people as their equals. They don't beg others for their time and effort. If they do, they don't gratuitously shit on people when they get the results. They work with contributors in order to get their work reviewed, revised and merged. They might even just merge it as-is, it can always be refactored afterwards.

That's hard to do and that's why doing it makes them good maintainers. Telling people their "contributions are welcome" only to not welcome their contributions when they do come is the real "low effort".


> Just slap things together until they work. As long as your problem's fixed, it's fine. It's not your baby you're taking care of. They should be grateful you even sent the patches in.

Thank you for a clear and concise illustration of why some contributions are really not welcome.

Just about the only thing I will agree with you on is that projects should indeed make it clear what the bar for the proper contribution is. This doesn't mean never saying "contributions are welcome", if they are indeed welcome - it's still the expectation for whoever is contributing to do the bare minimum to locate those requirements (e.g. by actually, you know, reading CONTRIBUTING.md in the root of the repo before opening a PR - which many people do not.)


Making things clear and being honest about the scope and status of the project is always a good thing.

Dismissing users making feature requests and reporting bugs with a "PRs welcome" cliche is quite disrespectful and very much a sign of a superior attitude.


lol go closed then


Not all PRs are created equal.


Also don't forget that not all contributions are done through PRs or are actual code changes. There are folks who do tests, make MREs, organise issue reports, participate in forums … they are all also contributing: their time and their effort.


And that is good.

Diversity, here too, is of crucial importance. It's why some Open Source software has sublime documentation and impeccable translations, while other software is technically perfect but undecipherable. It's why some Open Source software has cute logos or appeals to professionals, while other software remains that hobby project that no one ever takes seriously despite its technical brilliance.


Proper, good tracking is perfectly possible.

Tracking to discover latency, errors, weird behaviour, malicious actors and so on.

Tracking to see what content does well and what not.

Tracking to see what rough demographics (mobile, desktop, country, region, time-of-day etc) visit your premises.

E.g. Plausible Analytics or even Matomo do a good job at i) keeping the data rough and broad and without any PII, and ii) storing the data on-premise rather than with commercial aggregators who will either re-sell it or use it for their own services.


If it's not tracking the user then I don't understand what the problem is with DNT here


No, the real problem was that it worked too well from the perspective of ad-tech and data-gatherers.¹

It relied on the goodwill of those who run these services to i) invest some effort and money to detect the DNT headers and then ii) not collect/store the data of these requests.

Back when only a tiny portion of web users would send this header along, the industry was fine with implementing it, if only for marketing purposes. But as soon as they saw that it actually worked, the industry saw a threat to their revenue and stopped.

I believe a more granular DNT 2.0 could've been a basis for the GDPR, but the GDPR refrained - rightfully so, IMO - from any implementation details. For one, the GDPR never once requires a "popup"; it merely states that if you are an a*hole and collect data you shouldn't, and/or send that data to other parties, you should at least ask consent to do so - the idea being that site owners would then massively ditch these services so that they don't have to nag their users.

And because the GDPR refrained from implementation details, the ad and surveillance industry adopted a "dark pattern" that annoys people to no end (the popups) so as to paint the GDPR in a bad light. This industry could've easily said: "If we see a DNT header with level:x and domainmask:*, we'll assume NO to every tracking cookie and won't collect them." And the browser makers could then have added some UI to let users set this per domain, globally, with wildcards, or whatever - "set-and-forget". But alas, this industry is malicious at best and will annoy users to no end for its own agenda.
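To make that concrete: a minimal, hypothetical sketch of what honoring such a header server-side could look like. The function names are made up for illustration; "Sec-GPC: 1" is the newer Global Privacy Control equivalent of "DNT: 1".

    // Honoring the signal is just: check one header, and if it's set, collect nothing.
    type RequestHeaders = Record<string, string | undefined>;

    function dntRequested(headers: RequestHeaders): boolean {
      // "DNT: 1" is the classic signal; "Sec-GPC: 1" is its modern successor.
      return headers["dnt"] === "1" || headers["sec-gpc"] === "1";
    }

    function maybeTrack(headers: RequestHeaders, recordAnalytics: () => void): void {
      if (dntRequested(headers)) return; // honor the opt-out: store nothing
      recordAnalytics(); // otherwise run whatever (on-prem) analytics you use
    }

The hard part was never the code; it was getting the industry to actually want to call something like this.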

¹ edit: source: https://pc-tablet.com/firefox-ditches-do-not-track-the-end-o...


> adopted a "dark pattern" that annoys people

It's not a dark pattern; it's actually similar to the terms and conditions and privacy policies that sites show. Requiring users to go through legal agreements sucks, but companies can't just ignore the law in order to make a better user experience.


My website has no trackers nor any third-party cookies, so it doesn't need a cookie dialog. And even if I had some analytics that stays on-prem and doesn't store or gather PII, I still wouldn't need one.

The first dark pattern is that websites want to send all your PII and other data to other companies, and act as if this is normal.

The second dark pattern is how they do this. They could just not track and share this data, but allow you to flip some setting if you really want them to gather and sell or share it. No popup needed. Or one with a big "proceed" button that denies all tracking and a tiny "advanced settings" link that allows opting in to tracking. Instead, their UX is the exact opposite. Sometimes with deliberate JavaScript to make the "nope" button broken, slow or clumsy.


> the GDPR refrained - rightfully so, IMO - from any implementation details

I would disagree with this. If you're going to force bad actors to take actions that they don't want to, and you give them wide latitude to decide how to comply, then of course they're going to try to find ways to satisfy the letter of the law while avoiding the law's underlying goal.

> surveillance industry adopted a "dark pattern" that annoys people to no end (the popups) so as to paint the GDPR in a bad light

We should in fact blame lawmakers when they fail to anticipate the obvious consequences of their laws.

> This industry could've easily said: "If we see a DNT header with level:x and domainmask:*, we'll assume NO to every tracking cookie and won't collect them."

If they were the type of people to do that, then they wouldn't have been doing the invasive tracking in the first place.

The GDPR would be far better if it simply banned individualized tracking. It would be somewhat better if it explicitly specified that sites must honor browser headers and specified the exact UI to use when requesting permissions.


I agree that much clearer constraints and less wiggle room would be better.

But imposing technical solutions in laws has hardly ever worked, because these are almost always much easier to circumvent.

E.g. your suggestion to "honor browser headers" would be easy to circumvent by not having a browser - native apps, alt clients, etc. Google could easily track almost everything they track now through Android, Play Services, email, docs, etc. And such implementation details inevitably get outdated. E.g. in the Netherlands we have a law that forbids, with severe punishment, reading people's paper post. If only lawmakers hundreds of years ago had abstracted this to "correspondence" rather than paper mail in envelopes, it would've applied to email and probably to all network traffic.


> You think these websites give a shit about your privacy because you clicked on a div with a "No" in it

Yes. For a subset of "these websites". Because this is enforced, and the EU has fined billions already. The fines for doing what you say they do are steep and a severe risk for many of "these websites".


> For a subset of "these websites".

So for websites that are not in that subset, they will still track you regardless of what you click on, so you still need browser-level protections for those websites, and those browser-level protections will also work on the websites that are in that subset, so you still gain nothing by clicking the No.


Yes. But "these websites" will then be prosecuted, their owners cannot enter the EU ever again without the risk of severe penalties, they cannot do business in the EU and can and often will, lose access to many services that do want to stay on the good side the EU (i.e. will see their google ads blocked, their stripe frozen, their hosting closed etc)

Edit: what I'm trying to say is: this "technical" problem has a real and working "solution" that's not technical at all: law and enforcement. Now, that won't work for everyone and everything; it never does. There will always be malicious, scammy, malware-ridden, criminal and illegal web services around. But it makes it very hard for malicious actors to operate and make money.


Yeah but the question is how you, as a user, should best protect yourself. I'm saying clicking the "No" provides no advantage over using a browser that just protects you from tracking by default. Then it doesn't matter whether the website is following the law or whether the EU (where I don't live) will enforce the law or change it in the future or whatever.

> Now, that won't work for all and everything, it never does. There will always be malicious, scammy, malware, criminal and illegal webservices around.

Yeah, exactly. So if I have to protect myself from those websites anyway, I may as well apply the same protections to all websites. Clicking the "No" does nothing for me.


> So if I have to protect myself from those websites anyway, I may as well apply the same protections to all websites.

And what is the protection?



In one project, we had an ENV var (a few actually) for timeouts of network requests. Most places would raise an exception if they hit this timeout.

In test and CI we had this set to a very low number. In acceptance (manual testing, smoke testing) to a very high number.

This was useful because it showed three things:

- Network and service configuration bugs would immediately give a crash, and thus a failing test. E.g. firewalls, wrong hosts, broken URIs, etc.

- Slow services would cause flaky tests. Almost always a sign that some service/configuration/component had performance problems or was itself misconfigured. The quick fix would be to increase the timeout, but re-thinking the service - e.g. replacing it with a mock if we couldn't control it, or fixing its performance issues - was the proper, and often not that hard, fix. Or re-thinking how the service was used, e.g. by pushing it to async or a job queue - another such fix.

- Stakeholders going through the smoke test and acceptance test would inevitably report "it's really slow", showing the same issues as above but in a different context and with "real" services like some external PSP or SMTP.

It was really a very small change: just some wrappers around HTTP calls and other network calls, in which this config was used, next to a hard rule to never use "native" clients/libs in the code but always our abstraction. This then turned out to offer so many more benefits than just this timeout: error reporting, debugging, decoupling.

It wasn't JavaScript (Ruby, some Python, some TypeScript), but in JS it would be as easy as `function fooFetch(resource, options) { return fetch(resource, options) }` from day one, then slowly extended and improved with said logging, reporting, defaults, etc.
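For illustration, what that slowly-extended version could look like - a sketch only, assuming Node's global fetch/AbortController and a made-up env var name (FOO_HTTP_TIMEOUT_MS):

    // Same signature as fetch, but with an env-configurable timeout and a
    // single place to hang logging, reporting, retries, default headers, etc.
    const DEFAULT_TIMEOUT_MS = Number(process.env.FOO_HTTP_TIMEOUT_MS ?? "5000");

    async function fooFetch(resource: string, options: RequestInit = {}): Promise<Response> {
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), DEFAULT_TIMEOUT_MS);
      try {
        return await fetch(resource, { ...options, signal: controller.signal });
      } catch (err) {
        console.error(`fooFetch failed for ${resource}:`, err); // central error reporting
        throw err; // fail loudly, so a tiny timeout in CI crashes the test immediately
      } finally {
        clearTimeout(timer);
      }
    }

In test and CI you set that variable very low, in acceptance very high - exactly the setup described above.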

I've since always introduced such "anti-corruption" layers (facades, proxies, ports/adapters) very early on, because once you have `requests.get("http://example.com/foo/bar")` all throughout your Python code, there's no way to ever migrate away from "requests" if (when!) it gets deprecated, or to add said timeout throughout the code. It's really a tiny task to add my own file/module that simply imports "requests" and then calls it on day one, and then use that instead.


The main thing that keeps me from using Jupyter notebooks for anything that's not entirely Python is Python itself.

For me, pipenv/pyenv/conda/poetry/uv/dependencies.txt and the inevitable "I need to upgrade Python to run this notebook, ugh, well, ok" -- two weeks later -- "g####m, that upgrade broke that unrelated and old Ansible and now I cannot fix these fifteen barely-held-up servers" is pure hell.

I try to stay away from Python for foundational stuff, as any Python project that I work on¹ will break at least yearly on some dependency or other runtime woe. That goes for Ansible, build pipelines, deploy.py or any such thing. I would certainly not use Jupyter notebooks for such crucial and foundational automation, as the giant tree of dependencies and requirements they come with makes this far worse.

¹ Granted, my job makes me work on an excessive number of codebases: at least six different Python projects in the last two months, some requiring Python 2.7, some requiring deprecated versions of lib-something.h, some cutting edge, some very strict in practice but not documented (it works on the machine of the one dev who works on it, as long as he never updates anything?). And Puppet and Chef - being Ruby - are just as bad, suffering from the exact same issues, only Ruby has had one (and only one!) package management system for decades now.


Chrome is installed on (almost?) every Android phone. So they'd be buying much more than this.

Not a new "AI phone", which has to gain traction, find users, convince people to switch, compete in highly competitive (hardware( and duopolized (OS, Software) landscape.

I won't be surprised if, amongst Android users, Chrome is one of the most installed apps - if only because many phones have it locked in (i.e. it's really hard or impossible to remove).

Maybe "Google Assistant" is installed more than chrome, IDK. But Chrome has the additional benefit that it is also installed on many iPhones. Sou Chrome would be a gateway into "making your iPhone an AI phone" too.


What would be "treason territory"? The leaking or the siphoning of case data?


How is this different from opening any website through a QR code, that will then run "arbitrary code"?

