crabmusket's comments


Get with the program dude. Where we're going, we don't need morals.

Let's put aside the fact that the person you replied to was trying to represent a diversity of views and not attribute them all to one individual, including the author of the article.

Should people not look for reasons to be concerned?


I can show you many instances of people or organisations representing diversity of views. Example: https://wiki.gentoo.org/wiki/Project:Council/AI_policy

Okay. Why are we comparing a commenter answering a question to a FOSS organization that wants to align contributors? You seem to have completely sidetracked the conversation you started.


This is the way I went - Framework feels like the most mainstream way to have hardware that supports Linux, ships to lots of countries, etc. I installed Fedora first with GNOME but now with KDE Plasma. It's been good!

But I will say, after 18 months it's starting to show a little bit of bit rot. E.g. for some reason the bootloader refuses to remember to boot into the most recent kernel/OS combination I have installed - it works if I intervene during boot and manually select it, but it seems to often revert to an older combo. And there are starting to be some odd little bugs with external storage drives and the file browser... I haven't looked too deeply into it, but I expect these are Fedora problems, not Framework problems. Maybe I brought them upon myself by tinkering a bit too much with some drivers (not strictly necessary - I was trying to do some unusual A/V stuff I wouldn't normally bother with, but it was for a friend...)


Lots of businesses like to claim being a "startup" as it brings connotations of innovation, dynamism, coolness, being the "next big thing" etc. There are many senses of the word, and it can be used in different ways (e.g. I work at a small business which has some elements of startup culture, and it's not an incorrect way to give people a sense of what it's like here - but we're definitely well established) but I think often being one of the "cool kids" is part of the motivation.

That's the kind of lazy bullshit idea that, to me, exemplifies the AI hype slop era we're in. The point of a chart is to communicate visually. If the chart isn't clear without a supplemental explanation, why is it there?

If user research indicates your chart isn't clear enough, then improve the chart. But what are the odds they did any user research? They probably just ran an A/B test and saw number go up because of the novelty factor.


Reading this as a web developer, it reminds me of Deno's permission system.

Deno is a JS runtime that often runs, at my behest, code that I did not myself write and haven't vetted. At run time, I can invoke Deno with --allow-read=$PWD and know that Deno will prevent all that untrusted JS from reading any files outside the current directory.

If Deno itself is compromised then yeah, that won't work. But that's a smaller attack surface than all my NPM packages.

Just one example of how something like this helps in practice.


Two thoughts.

Ben Thompson and James Allworth discussed, on an episode of Exponent (https://exponent.fm/), the idea of a "principle stack", and at which "layer" of the stack it's appropriate to address different societal issues. I wish I could find the episode again; it was quite a few years ago. The upshot being... maybe software licensing isn't the right place to address e.g. income inequality?

On the other hand, I definitely encourage tech workers (and all workers) to think about their place in the world and whether their work aligns with their personal values. I think the existence of free and open source software is a fantastic thing, but I think we should continue to evaluate whether it is in danger, or whether it could be better, or whether our efforts might be applied to something else.

For example, I'd love to see co-ops developing shared-source infrastructure based on principles of mutuality, which the sector is built upon anyway. The co-op principles already include cooperative and communitarian ideas which mesh really well with some aspects of open-source software development. But co-ops aren't about just giving everything away either. There could be a real new approach to building a software commons for mutual businesses, rather than a kind of freedom-washed way for big tech companies to benefit from free labour.


It is impossible to write a real "use for good, not evil" [1] license, because there are no formal, universally accepted notions of good and evil. While there are things that are universally considered good, or considered evil, the areas around them are large, nebulous, and anything but clearly outlined. Hence legally avoiding the "anti-evil" license terms will always be a relatively easy option for a willing party. Moreover, there is a large range of issues and causes that are considered "good" by some and "evil" by others, so there will always be controversy and disagreement even without any lawsuits, where everyone considers themselves sincerely right, not just technically correct while violating the spirit.

A weapon that only a lawful good character can wield is the stuff of fairy tales and board games, which do not reflect reality fully enough.

By contrast, freedom is pretty well-defined, which is why e.g. the GPL is upheld by courts.

[1]: https://www.json.org/license.html


I have this thinking that, in reality, there's no such thing as objectively 'good' or objectively 'bad'.

It's all context and timing.

Almost everyone who attacks this idea will present actions that are already loaded with context: murder is killing when it's bad; self defence is killing when it's good.

If you look at anything as a non-contextual action, then you can easily find contextually 'good' and contextually 'bad' instances of that thing.

Even further, the story of the man who lost his horse [0] shows us that even if we say something that happens is contextually good or bad, the resulting timeline could actually be the complete opposite; meaning that, ultimately, we can never really know whether something is good or bad.

[0] https://oneearthsangha.org/articles/the-old-man-who-lost-his...


I think this is one of these cases where talking in abstract terms does not help people agree.

What I am hearing is: if you remove context (and timing, let's say it is part of context), then there is no good or bad. But who said to remove context? Aren't we saying, then, that there is good and bad depending on context?

Many people, including myself, would agree in the abstract, while at the same time some situations become very clear once you get down to a real example.

It reminds me of people claiming pain is an illusion or that facts don't exist (very edgy), until someone slaps them in the face to prove "I did slap you; that is a fact". I think that is reality, and specific examples are easier.

P.S. I would add values into the context.


How do you make good or bad resolvable? Is a piece of code being used by Tyson Foods okay? A vegetarian software engineer who contributed to the package might say “no, that use contributes to the killing of animals for food, which is bad.”

If you need to evaluate all the context to know whether a license is usable, it makes it extremely hard for “good guys” to use code under that license. (It’s generally very easy for “bad guys” to just use it quietly.)


> How do you make good or bad resolvable?

It is not a computer program, but an ethics problem. We can solve it by thinking through the context and the ethics of it.

I realize it is the topic of this thread, but OP did not mention anything in relation to licenses, and was just talking about good and bad not existing objectively (without context).

I think, if we came with a specific situation, most people with similar values might reach the same good/bad verdict, and a small minority might reach a different one.

I believe the Tyson Foods example is overly simplistic and still too abstract, because one can be vegetarian for many reasons, and these would affect the "verdict". In the real world, if we were working on that piece of software, the question would be: should I, a vegetarian opposed to animals suffering unnecessarily, implement this specific HR SAP module for Tyson Foods? That is different from the abstract idea of any piece of code and any vegetarian. If a friend called you and said "I have this situation at work, they are asking me to write software to do x and I feel bad about it", I bet it would not be difficult to know what is right and wrong. Another aspect of it is, we could agree something is wrong (bad) and you might still do it. That does not mean there is no objective reality, just that you might not have options, or that your values might not be the ones you think (or say) they are, for example.


But in a typical FOSS scenario, your decision to open source the code and Tyson Foods' decision to use it are decoupled. You don't know who all the potential users are when you open source it, so you can't consider all the concrete cases and make sure that the license reflects them. In the same way, Tyson Foods isn't going to contact all the creators of libraries they want to use and ask if their concrete use case is in line with the creator's ethics.

Agreed. This would be a logistical nightmare on both ends. Especially if the licenses can be revoked if and when Tyson Foods decides to change some of their policies and/or the author decides to change their political views.

I believe that this would effectively make sure that nobody uses these licenses.


> I think, if we came with a specific situation, most people with similar values might reach the same good/bad verdict, and a small minority might reach a different one.

All you're doing is agreeing that the context of the situation determines whether the action is "good" or "bad" (which was my point).


In classical times there was no general concept of good or evil. The question was whether something was fitting in its context. With the rise of Christianity came the general concept of good or bad.

Even that evolved with time.

This was one of the many disagreements between Catholics and Protestants during the 16th-17th century, for instance, with some of the most powerful Catholic currents (e.g. Jesuits) being very much in favor of rethinking morality to take into account context, while the most powerful Protestant currents pushed for taking morality back to [their interpretation of] the manichean early Christian dogmas.


Come on. A quick search suggests that Zoroastrianism already had this a good six hundred years before Christianity. And ancient Greek philosophers were trying to define good, evil, and "God" for generations before Christianity (source: I've been reading about early Christianity for two years). Certainly, Judaism had it, and that's what inspired early Christianity (with the exception of Paul, the early leaders were devout Jews).

> "the Software shall be used for Good, not Evil."

For JSLint, Crockford gave an exemption though: "I give permission to IBM, its customers, partners, and minions, to use JSLint for evil."

https://gist.github.com/kemitchell/fdc179d60dc88f0c9b76e5d38...


The very fact that this instantly feels like ironic jest illustrates how impossible it is to seriously limit licenses with broad moral clauses.

One could come up with clauses that could be admissible at court, e.g. "this software is expressly not licensed to be used for anything intended to kill humans". It would not be licensed for military planning software, but would likely be still licensed for a military transport system, or even an anti-drone weapon.

The best actionable clause I could come up with is something like: "your license to use this software for any purpose terminates as soon as a court of [insert jurisdiction] finds that it has been used for [something you are opposed to, but also sufficiently clearly defined, like genocide, or incarceration of peaceful political dissidents], which has resulted from the use of your products and services, and with your prior knowledge of such use". I think I've even seen similar clauses in many commercial licenses, just with provisions unrelated to morals.


> there are things that are universally considered good, or considered evil

What a bold claim.


From the perspective of decreasing income inequality on a global scale, when multinationals fire workers in developed countries and replace them with lower-paid workers in developing countries, that is a very good thing, since people in developing countries need the jobs more. I would be skeptical of any license which privileges co-ops over multinationals for that reason. Co-ops are likely to reinforce existing global income inequality, due to labor protections for developed-world workers. A globally rich, privileged slacker gets to keep a job they're barely doing, because they had the good fortune of being born on the right dirt. It's modern feudalism.

I haven't yet fully digested this comment, but I will say right off the bat that there are many co-ops in the developing world. Nathan Schneider in Everything for Everyone describes the culture shock of arriving in Nigeria (IIRC) and co-ops being everywhere, just such a normal part of life.

Sure, I think the point I'm trying to make is that second and third-order effects can be complex and unexpected when it comes to economics.

For example, what if the dominance of co-ops in Nigeria is a contributor to economic stagnation? Do co-ops still count as "virtuous" if they're keeping a nation impoverished? Testing that hypothesis would be highly nontrivial; econometrics is hard.

Trying to license your software so as to reduce income inequality seems too ambitious. Licensing your software so it can e.g. be used by cleantech companies but not fossil fuel companies seems way more feasible by comparison.


Yes I don't disagree. I was using the income inequality statement as an example of what Thompson and Allworth might advise against. Software licensing might be at the wrong layer of the stack to have any impact on macroeconomics.

Fair.

I think there's a kernel of truth in what you said, but you're also talking about avoiding accidental "income inequality" in this comment, and "economic stagnation" in the other.

It seems like you might've moved the goalpost a bit...

At the end of the day, any entity that works for the public good (be it a co-op, a non-profit or a state-owned enterprise[1]) would be a better recipient of the free labour provided by f/oss hobbyists than a for-profit multinational... And economic performance is often conflated with financial performance. If everyone can put food on the table[2] (here and in the developing world), I couldn't care less if some GDP metric might imply that "there's stagnation actually".

[1] My point being that an SOE will have more bargaining power than a small co-op, and thus be able to fight unequal exchange and compensate for income inequality.

[2] "food on the table" is a proxy for: food itself, shelter, healthcare, affordable heating (or cooling) and consumer goods and services (tech gadgets to learn and keep in touch with family, long distance transport to visit relatives, etc.)


Goalposts are the entire problem. I read the original article ... Holy wow, undefined goalposts!

I agree and it's happening. I co-founded Outpost Publishers Cooperative as a member services co-op to provide enterprise-level subscription services to publishers on Ghost (which is a non-profit).

I'm biased but I think the model of member-service co-ops (like Ace Hardware) providing tailored software services to particular industries is fertile ground. Free of VC incentives, reasonably profitable, aligned incentives, and the state of software tooling makes this doable.

And since this model doesn't require capturing as much value as a VC funded venture, it's more sustainable.

But the hard thing is figuring out how to get to a decent product without upfront investment, given the lack of investment models that don't require outsize returns.

I can think of ways to create early capital but I've yet to see an industry think through how to fund smart suppliers without falling into the trap of thinking they need to be VCs.


> how to get to decent product without upfront investment

Yeah, this is the hard part.

I work in the small "ERP-like" business market and I've come up with some good ideas (based on the reaction of the people I talk to). But the problem is that even a small team of about five genuinely solid developers can cost around US $300,000-500,000 per year - and that's even factoring in that I'm in LATAM!

That’s a lot.

To make something like this happen, you need to convince fairly big players — the ones who have the capital and the patience, but more importantly the vision. And that’s the part that’s rare. At least in theory, that’s what VCs are supposed to bring.


Bite the bullet, and find something smaller to use for funding the big thing.

(This is the stage I’m at currently.)


I'd say, too, that we aren't the only ones. Plausible Analytics is a great, mission-driven, open-source non-profit providing cookie-free web analytics.

And they let us bulk buy for our member publishers.

There's so much potential in what you are suggesting!


That is fantastic to hear, kudos to you and best of luck! The funding is definitely an issue I'm chewing over in my mind as I think about these issues.

>at which "layer" of the stack it's appropriate to address different societal issues.

One problem with trying to restrict the availability of open-source software: In the limit, as LLMs become better and better at writing code, the value of open-source software will go to zero. So trying to restrict the availability of your code is skating away from where the puck is going. Perhaps your efforts to improve the world are better allocated elsewhere.


I mean, if you ignore the fact that there would be no LLMs without wholesale scraping of the corpus of all software ever written.

LLMs are the least ethically sourced pieces of technology I've ever seen. That businesses have been built on them, and haven't been sued out of existence for not asking permission to train first, is positively mind-boggling.


> all software ever written

LLMs aren't usually trained on large proprietary codebases like the ones from Google, Microsoft or Apple?


You think there wasn't a reason Microsoft bought GitHub, whose ToS allowed them to expand their training corpus vastly beyond their own internal systems? Or why Amazon does the same thing with CodeCommit? If your stuff is hosted somewhere with a ToS, you can bet that repo is getting into the training corpus. Having your flavor of LLM in today's market is too valuable for any corp to pass up the opportunity.

I think I confused two different discussions on Exponent. Here's one episode where they discuss the stack, particularly in reference to net neutrality:

https://exponent.fm/episode-168-a-community-of-loonies/

But I'm sure I remember an episode where they discuss Matthew Prince and some neo-nazi site.

The "principle stack" is a separate concept which I haven't yet found.


> This is actually based on "The Kernel in The Mind" by Moon Hee Lee.

This looks like a really interesting resource. Can anybody here vouch for its accuracy or usefulness? I can't find a ton about it online. The fact that it's only published as a series of LinkedIn posts, or a PDF attached to a LinkedIn post, does not fill me with confidence - but I guess we can't expect kernel devs to know how to create websites?

