What is there to understand about it? You don't make an argument, just a shallow dismissal. There's nothing about your comment that indicates you read TFA.
Controversial in general, maybe. In the case of opioids and the pharm industry, absolutely not. It's been well documented at this point that pharm companies were well aware of the abuse, and not only did nothing to stop it, but went out of their way to encourage it because sales were going through the roof.
In the case of Purdue and oxycontin, the culpability has in fact been established in court as well.
As for the coders, I find it hard to believe that they were so ignorant, naïve, or unintelligent that they had absolutely no idea what was going on. I just don't buy it.
That's far too literal an interpretation of harm. The point isn't to never do any kind of "physical harm". It's about doing the least amount of harm possible/necessary in any situation, where doing nothing can also be seen as causing harm.
I had a burst appendix as a teenager, leading to peritonitis. To treat this, surgeons were going to operate laparoscopically to remove the appendix and clear any contamination in the peritoneum. Obviously this required damaging my skin, removing an organ, etc., which in the strictest sense is harm. But doing nothing at all would obviously lead to sepsis and death, so this was still the least harmful intervention. During the surgery, it turned out that the laparoscopic method was hard to carry out due to obesity and other factors. The attending made the decision to convert to a laparotomy, doing even more harm to my skin and leaving me with a 30 cm scar on my stomach. But it was the right call because it maximised the chances of accomplishing the goal of the procedure (preventing imminent death) while minimising the risk of serious complications.
And here I am almost 20 years later. I have a scar, I have some adhesions that occasionally cause moderate abdominal pain if I don't eat enough fibre, and perhaps my lymphatic system and gut flora are very mildly compromised in some nebulous way due to the lack of an appendix. On the other hand, I'm alive. So yes, they "did harm", but they also minimised harm. And they didn't do any unnecessary harm, to the best of their ability. And that's the point of the ethical principle.
I absolutely want this arrangement for developers. We need to grow up as a profession, and take responsibility for the consequences of our actions.
This isn't the 90s anymore. Today there's practically nothing you can do in the modern world without interacting with software. Buying food, going to the hospital, travelling, communicating, voting, going to school, using anything electrical, anywhere. Our society is completely dependent on software at this point. The fact that there's no professional ethics code with the appropriate oversight for the development and maintenance of software is utterly insane.
The points you bring up about the Hippocratic Oath are important problems to solve, rather than reasons not to try.
If you think about it enough, most industries are doing terrible things. Work for an auto company? Thanks for the CO2 emissions accelerating climate change. Work for a consumer manufacturer? Thanks for the plastic waste choking oceans and landfills. Defense contractors? Thanks for enabling wars and killing innocents. Banks? Thanks for enslaving folks to debt and perpetuating economic inequality. Tech giants? Thanks for surveilling billions and eroding privacy on a massive scale. Social media platforms? Thanks for amplifying misinformation and fueling mental health crises. Fast fashion? Thanks for exploiting sweatshop labor and polluting waterways with toxic dyes. Pharma companies? Thanks for price-gouging drugs and prioritizing profits over access. Oil and gas? Thanks for fracking communities into environmental ruin and lobbying against renewables.
Almost everyone is contributing to terrible activities. Just different degrees of bad.
What is your point, besides potentially making yourself feel better about your industry? Those "different degrees" are what it's all about. They're the whole point.
Yes, voluntarily working in an industry where that "degree" is undeniably magnitudes higher than average just for personal gain, does make you quite the awful human. And "helping maximize the number of pills pushed to confirmed opioid addicts" is indeed a large number of standard deviations of "terrible" removed from the work the average person does.
Yup, working on recommender systems at places like Meta is also quite high up there. Luckily the number of people who do this kind of work is minuscule when taken as part of the global population. Even more luckily, thousands of people on HN alone will forego such jobs even if it means earning less. I've done so myself.
The question was how GP felt about their particular unethical act, and its consequences, which likely include multiple deaths. Since you are not GP, it seems unlikely that you can answer this question.
I fail to see the relevance of bringing up a different, and also unethical example, but I'll answer anyway. If GP said that they used to spend their time optimising software to be as addictive as possible in order to drive people into gambling addiction, destroying their lives and taking all their money while doing it, I would ask the same question.
It's a very smooth gradient from optimizing a sales funnel to writing gambling software. I don't know where the line is, but in both cases you're exploiting human psychology to make more money.
And it's also why some of the anarchist folks I hang out with say there's no ethical consumption under capitalism. And in certain areas, they're completely correct.
It is not much different. I would not work for a gambling company either. In fact, gambling companies have to pay more (and do, there are open positions) because their pool of potential employees is smaller.
The exact same question can be asked to developers who help target gamblers with attempts to push them deeper into addiction.
It's probably slightly worse because opioids actually kill people whereas gambling just financially ruins them (which can lead to suicide, but still I know which I would pick).
But it's only a slight difference. I don't think people who work at predatory apps/gambling systems should be able to sleep at night either. Not all gambling though; I don't have any objection to occasional sports betting for example.
But if you work for one of those pay-to-win apps and find some customers are spending thousands of dollars on it (whales), you know you're being immoral.
How is it different from smuggling fentanyl or taking hostages for ransom?
There will always be someone willing to do the work if the pay is good enough.
The former almost certainly causes much less societal damage than working for a pharma company that strives to get the whole population addicted to opioids, due to the scale constraints that come with running an underground business vs. an "above board" one.
Why do you think that gambling companies pay above the industry average for the required skillset?
Because luckily there are many other people with me who won't work for them, so they have a smaller pool of candidates and need to pay more.
Yes, because criminals and pedophiles care deeply about following laws. They would never even think of using a piece of software if it was illegal, right?
I can't believe how infrequently this point comes up, given how fundamental a flaw in the whole scheme it points out. So long as you're allowed to have a computer and run code on it, you can run programs that do your bidding. ChatControl cannot be effective, at least not for longer than the few months it takes for the first 2% of CSAM handlers who didn't get the memo to get caught, and for word to spread to the remaining 98%, unless we outlaw computers.
It's mind-bending levels of absurdity. Surely nobody intends to (be able to) truly outlaw computers? That cat is out of the bag and people will build them or get their hands on them if they wish
The only possible outcome is that only honest citizens have their chats scanned and devices locked down. The latter has the side effect that Google, Apple, and Microsoft can do whatever the heck they want, because open OSes are illegal now.
It's been long enough (about 7 years) since I worked in an environment like this that I've been seriously considering going back to it lately. I played one round of your game, and that was enough to make it completely obvious to me what a fucking terrible idea that is.
Thank you for making this, I think you just saved me from flushing two years of personal progress down the toilet in the name of... What? Fucking business logic? I'll pass on that, I think, and keep improving my life on my own terms.
And maybe address the question why, again and again, I keep finding ways to convince myself that this is what I want my life to be. No matter how many times it leads me to crash and burn and have to spend years picking up the pieces.
Seriously, thank you. If I ever meet you, I will buy you a beverage to your liking, to go, so you can go home and enjoy it in peace.
I think it's imprecise to say that opening knowledge is unnecessary. What is unnecessary is opening theory, or more specifically, rote memorisation of opening lines.
This is different from opening understanding. Understanding the importance of tempo, development, controlling the centre, the different pawn structures, middle games and endgames that result from different openings, the plans and motifs typical in various opening complexes. Any late beginner to intermediate player needs to pick and study an opening. The problem is that instead of studying the opening, players try to memorise lines without improving their understanding of the resulting middlegames, and the plans they should be playing for. Then, when their opponent diverges from the main lines (which in my experience happens in 99% of games between players below 2000, because it's very rare that both players have memorised the same long line), they don't know what to do.
I'm a 1900 FIDE player, and I have an opening repertoire of sorts. For instance, I play the Modern Benoni with Black. An extremely theoretical opening, and yet I have only a small handful of longer lines actually memorised, because they're simply too complex for me to figure out over the board (e.g. the b5 lines against Bd3 h3 Nf3 setups). But what I have studied extensively is the strategic landscape of the Benoni, games by strong players in the opening, etc. And I have years of experience playing the opening. I know what kind of exchanges typically favour me, or my opponent, and what pawn breaks each player should be trying for. And all of that knowledge is crucial for me to get anything out of the opening. I have beaten players tactically much stronger than me in this opening simply because my understanding of this specific opening was better than theirs.
Tactical ability is obviously important, but it's definitely not everything.
In general I certainly wouldn't disagree with this; it's what I was alluding to with general ideas that stick with you. But I'd call this a different thing than opening study. For instance, one can get Benoni-like structures in the King's Indian, Benko, English, Nimzo, and more! And so it's not really understanding the opening, but understanding how to play a certain structure that arises in many different openings.
And it has nothing to do with memorization. I mean, you mentioned the b5 stuff against Bd3/h3/Nf3 setups. You might not be able to calculate the depth of what happens if white manages to hold onto his extra pawn, but you can certainly calculate to at least the point of 'okay, I'm getting my pawn back in most lines, disrupting his center, and getting my play going. In the one line where he holds onto it (Bxb5 stuff), he's going to have a bit of difficulty castling, his pieces look disorganized, and his extra pawn and b2 both look weak.' That's more than enough on general principles to go for the sac, I think.
It's typical in these situations that the price per share is negotiated, with the current SP as a starting point. It's fairly unusual, I think, for the company selling stock to get a price significantly higher than the market price. It's more typical that there's a slight discount. At least that's been the case for every stock I owned where dilution has occurred. We also don't know yet when exactly this deal was negotiated and approved, so it's hard to actually say. Considering where INTC has been very recently (below $20), $23.28 seems very reasonable to me.
The reason the stock surged up past $30 is the general market's reaction to the news and the subsequent buying pressure, not the stock transaction itself. It seems likely that once the exuberance cools down, the SP will pull back; where to, I can't say. Somewhere between $25 and $30 would be my bet, but this is not financial advice, I'm just spitballing here.
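To put rough numbers on that (just a quick sketch; the reference prices below are the approximate levels mentioned above, not the actual figures used in the negotiation), the implied premium or discount works out to something like this:

    # Rough sketch of the premium/discount implied by the reported deal price.
    # The reference prices are assumptions based on the approximate levels
    # mentioned above, not the real negotiation inputs.
    def premium_vs_reference(deal_price: float, reference_price: float) -> float:
        """Premium (positive) or discount (negative) as a fraction of the reference."""
        return (deal_price - reference_price) / reference_price

    deal_price = 23.28   # reported per-share price of the transaction
    recent_low = 19.00   # assumed recent trading level ("below $20")
    post_news = 30.00    # assumed price after the news-driven surge

    print(f"vs recent low: {premium_vs_reference(deal_price, recent_low):+.1%}")  # about +22.5%
    print(f"vs post-news:  {premium_vs_reference(deal_price, post_news):+.1%}")   # about -22.4%

So the deal price is a meaningful premium over where the stock was trading before the news, and only looks like a discount relative to the post-announcement surge.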
The thing is, ChatGPT isn't really designed at all. It's cobbled together by running some training algorithms on a vast array of stolen data. They then tacked some trivially circumventable safeguards on top for PR reasons. They know the safeguards don't really work; in fact, they know that they're fundamentally impossible to get to work, but they don't care. They're not really intended to work; rather, they're intended to give the impression that the company actually cares. Fundamentally, the only thing ChatGPT is "designed" to do is make OpenAI into a unicorn; any other intent ascribed to their process is either imaginary or intentionally feigned for purposes of PR or regulatory capture.