
  Location: Portland, Oregon
  Remote: Yes
  Willing to relocate: No
  Technologies: TypeScript/JS, Golang, Ruby (full stack)
  Résumé/CV: https://adamwong246.github.io/resume.pdf
  Email: adamwong246@gmail.com


Someone needs to mix this together with https://www.derw-lang.com/


Why should I trust Tesla's AI with my life, much less everyone else's? They couldn't even get the Cybertruck's trim right! It's wild that we haven't demanded greater governmental oversight of consumer AI products, but in time it will become inevitable.


The name being false advertising should be reason enough to never use it.


It doesn't maintain constant altitude over sea level! You can't give it a compass bearing and have it follow that in a perfectly straight line!


You are right to be horrified, but what is the use of maintaining the pretense that we are not strip-mining our citizens of their attention?


While elevated anime does exist, it by and large deserves its reputation as pulp. It was cheap to produce by humans, and is even cheaper to produce by machines, but the kids simply can't get enough of it. I am slightly terrified at the thought that should the Matrix ever come to pass, it will appear as an endless tide of cat-girl avatars.


AI is kind of the ultimate expression of "deferred responsibility", like "I was protecting shareholder interests" or "I was just following orders".


https://www.bloomberg.com/news/articles/2024-07-01/dan-davie...

Dan Davies did a great interview on Odd Lots about this; he called it "accountability sinks".


I think about a third of the reason I get lead positions is that I'm willing to be an 'accountability sink', or, to use the much more colorful description, a sin-eater. You just gotta be careful about which decisions you're willing to own. There's a long list of decisions I won't be held responsible for, and that sometimes creates... problems.

Some of that is on me, but a lot of it is taken for granted. I'm not a scapegoat; I'm a facilitator, and being able to say "I believe in this idea enough that if it blows up, you can tell people to come yell at me instead of at you" unblocks a lot of design and triage meetings.


"A computer can never be held accountable, therefore a computer must never make a management decision".

How did we stray so far?


What would the definition of accountability be there, though? I can't think of one that couldn't apply to both.

If a person does something mildly wrong, we can explain it to them and they can avoid making that mistake in the future. If a person commits murder, we lock them away forever for the safety of society.

If a program produces an error, we can "explain" to the code editor what's wrong and fix the problem. If a program kills someone, we can delete it.

Ultimately a Nuremberg defense doesn't really get you off the hook anyway, and you have a moral obligation to object to orders you perceive as wrong, so there's no difference whether the orders come from man or machine: you are liable either way.


Well, the reality, if and when a death by AI occurs, is that lawsuits will hit everyone: the doctor working on the patient, the hospital and its owners, and the LLM tech company will all get hit. The precedent set there will legally settle that issue.

Morally, it's completely reckless to use 2024 LLMs in any mission- or safety-critical role, and to be honest, LLMs should redirect all medical and legal inquiries to a doctor or lawyer. Maybe in 2044 that can change, but in 2024 companies are explicitly marketing these as ready for those areas.

>If a program produces an error, we can "explain" to the code editor what's wrong and fix the problem.

Yes. And that's the crux of the issue. LLMs aren't marketed to supplement professionals and make them more productive; they're marketed to replace labor. To say "you don't need a doctor for everything, ask GPT". Even to the hospitals themselves. If you're not a professional, these are black boxes, and in that case the onus falls solely on the box maker.

Now, if we were talking about medical experts leveraging computing to help come to a decision, rather than blindly accepting a simple yes/no, we'd have a properly nuanced issue worth discussing. But medicine shouldn't be a black-box catch-all.


Yeah, that will probably happen, but I feel like it has no real basis.

I mean, this is just the latest, shiniest way of getting knowledge, and people don't really sue Google when they get wrong results. If you read something in a book as a doctor and a patient dies because it was wrong, it's not really the book that gets the blame either. It's you, for not doing the due diligence of double-checking. There's zero precedent for it.

The marketing department could catch a lawsuit or two for false advertising, though, and they'd probably deserve it.


Firstly, Google was indeed sued many times early on over search results. Those were settled when Google simply claimed to be a middleman between queries and results.

I'm not sure about modern lawsuits, but you could argue they became more and more of a curator as the algorithms shifted and they accepted more SEO optimization of what we now call slop. Gemini itself now sits first and foremost atop many results, so we've more or less come full circle on that idea.

I agree there's no precedent for this next iteration, and that's what the inevitable lawsuits will determine over the coming years and decades. I think the main difference is that these tech companies are taking more "ownership" of the results with their black boxes. And it seems like the last thing they'd do is open that box.


>"A computer can never be held accountable, therefore a computer must never make a management decision".

Ultimately it's algorithmic diffusion of responsibility that leads to unintended consequences.


It all depends on how you use it. Tell the AI to generate text in support of option A, and that's mostly what you get (unless you hit the built-in 'safety' mechanisms). Do the same for options B, C, etc., and then ask the AI to compare and contrast each viewpoint (get the AI to argue with itself). This is time-consuming, but a failure to converge on a single answer using this approach does at least indicate that more research is needed.
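
A minimal sketch of that workflow in Python, assuming a hypothetical ask_llm(prompt) helper that stands in for whatever chat API you're using (none of the names below come from a real library):

  # ask_llm is a hypothetical stand-in: wire it to your own LLM client.
  def ask_llm(prompt: str) -> str:
      raise NotImplementedError("plug in your chat-completion call here")

  def cross_examine(question: str, options: list[str]) -> str:
      # Step 1: have the model argue each option in isolation.
      cases = {
          opt: ask_llm(f"Argue as persuasively as you can that '{opt}' "
                       f"is the correct answer to: {question}")
          for opt in options
      }
      # Step 2: feed every argument back and ask for a verdict.
      briefs = "\n\n".join(f"Case for {opt}:\n{text}"
                           for opt, text in cases.items())
      return ask_llm(
          f"Question: {question}\n\n{briefs}\n\n"
          "Compare and contrast these positions. If no single answer "
          "clearly wins, say so rather than picking one."
      )

If the comparison step won't commit to a single option, treat that as the "more research needed" signal rather than re-rolling until something sounds confident.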

Now, if the overall population has been indoctrinated with 'trust the authority' thinking since childhood, then a study like this one might be used to assess the prevalence of critical thinking skills in the population under study. Whether or not various interests have been working overtime for some decades now to create a population that's highly susceptible to corporate advertising and government propaganda is also an interesting question, though I doubt much federal funding would be made available to researchers for investigating it.


I don't think it's the ultimate expression per se, just the next step. Software, any kind of predictive model, has been used to make decisions for a long time now, some for good, some for bad.


I wonder how much of the bureaucratic mess of medicine is caused by this. "Oh, your insurance doesn't cover this or won't let me prescribe it to you off-label. Sorry!"


Hand-written HTML, no CSS, #FF0000 font color, AND egregious use of GIFs? And not a single piece of social media?! Now THIS is what a website should look like! Maybe I'm just getting old, but I miss the Good Old Days of the web.


Jerry Pournelle was a sci-fi writer, but he also consulted for the US Department of Defense and was a longtime columnist for Byte Magazine.


Poverty is not quite the right word. "Sterility" is the word I would use. My Linux machine is a complex, fiddly beast, which I treat like a bonsai tree. My Mac, however, gives me "dead mall vibes" in comparison. It's not all bad; I get more work done. But it certainly does not feel "alive" in the way my Linux machines do.


The "alive" point hits the nail on the head for me, and covers the full system health spectrum - my underspec'd homelab/project laptop certainly feels alive, in the sense that only things that are alive can cough up blood.


Yeah, that's a better way to put it. I meant "poverty" in a somewhat spiritual sense, like the lack of aliveness you're talking about. It is weird to talk about operating systems this way, I guess, but it is how it feels.


> Seriously, what do you, the consumer, do with this?

The comment chills me to the bone. Americans truly cannot conceptualize themselves as anything _but_ a consumer. I mean, what can I say, other than "Wake up, Neo. The Matrix has you."


I meant it more like "the consumer" not "The Consumer" but I could have said "the reader", "the observer", etc.

You got me curious, though. What do non-Americans do when they read if not consume what they read?


As long as you raise the temperature slowly, the frog won't even notice.

