Hacker News | pera's comments

In a sane world I would agree, but in the US at least I am not certain this still holds: in Bartz v. Anthropic, Judge Alsup expressed the view that the work of an LLM is equivalent to that of a person. See around page 12, where he argues that a human recalling things from memory and AI inference are effectively the same from a legal perspective.

https://fingfx.thomsonreuters.com/gfx/legaldocs/jnvwbgqlzpw/...

To me this makes the clean-room distinction very hard to assert. What am I missing?


If a human reads the code and then writes an implementation, that is not clean room, and an LLM would in most cases be equivalent to that.

Clean room requires that the person writing the implementation have no special knowledge of the original implementation.


Could you share a source for this definition? As far as I know it only means not having access to the original code during the implementation of the new project.

Clean room reverse engineering always involves a wall of some kind between the person figuring out how the tech works and the person creating the new tech. Usually the wall allows only one-way communication, via a specification of the old tech's behavior, perhaps reviewed by a lawyer to ensure nothing copyrighted leaks across.

https://en.wikipedia.org/wiki/Clean-room_design

https://en.wikipedia.org/wiki/Chinese_wall


That's strange, which OS? I am on Arch, also on 145, and I still get "Ask an AI Chatbot" in the context menu. The settings used to work in the past, so I am not sure what's going on.

I believe these are all the settings I have disabled for AI:

browser.ml.chat.enabled

browser.ml.chat.menu

browser.ml.chat.page

browser.ml.chat.page.footerBadge

browser.ml.chat.page.menuBadge

browser.ml.chat.shortcuts

browser.ml.chat.sidebar

browser.ml.enable

browser.ml.linkPreview.enabled

browser.ml.pageAssist.enabled

browser.tabs.groups.smart.enabled

browser.tabs.groups.smart.userEnable

browser.tabs.groups.smart.userEnabled

extensions.ml.enabled

sidebar.notification.badge.aichat

Am I missing anything?
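In case it's useful, the same prefs can be pinned in a user.js file in the Firefox profile directory, so they survive updates and profile resets. A sketch using the prefs from my list above (I'm assuming they are all booleans set to false; I skipped the smart-tab-group userEnable/userEnabled pair since I'm not sure which spelling is the real pref):

```javascript
// user.js — goes in the Firefox profile directory; each line overrides
// the corresponding about:config pref on every startup.
user_pref("browser.ml.chat.enabled", false);
user_pref("browser.ml.chat.menu", false);
user_pref("browser.ml.chat.page", false);
user_pref("browser.ml.chat.page.footerBadge", false);
user_pref("browser.ml.chat.page.menuBadge", false);
user_pref("browser.ml.chat.shortcuts", false);
user_pref("browser.ml.chat.sidebar", false);
user_pref("browser.ml.enable", false);
user_pref("browser.ml.linkPreview.enabled", false);
user_pref("browser.ml.pageAssist.enabled", false);
user_pref("browser.tabs.groups.smart.enabled", false);
user_pref("extensions.ml.enabled", false);
user_pref("sidebar.notification.badge.aichat", false);
```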


Seems whatever I had disabled earlier is still disabled on my install of FF 145.

I do have these additional settings.

browser.ml.chat.maxLength=0
browser.ml.chat.prompt.prefix="{}"
browser.ml.chat.prompts.0="{}"
browser.ml.chat.prompts.1="{}"
browser.ml.chat.prompts.3="{}"
browser.ml.chat.prompts.4="{}"
browser.ml.chat.shortcuts.custom=false
browser.ml.linkPreview.longPress=false
browser.ml.modelHubRootUrl="example.com"


As far as I can see, that's it. Or at least I'm not seeing anything else related that I've disabled.

I had to go out. When I'm back home in a few hours, I'll try to look up all I've disabled.

There is a fundraising for that organised by their union (IWGB Game Workers):

https://actionnetwork.org/fundraising/support-rockstar-worke...


Why does this look horribly wrong to me? Why does a union need a fundraiser? Shouldn't they have tightened their belts and built up a significant war chest for this? Or collected extra fees from new members?


It does feel wrong because in our society having access to more financial resources often translates to better representation in the courtroom. This is similar to how donating to organizations like the EFF can provide more justice to those who are not multibillion-dollar corporations.


I really wish they would also let you disable those very annoying modal popups announcing yet another chatbot integration twice a week: my company is already paying for your product, just let me do my work ffs...


That's just, like, your opinion, man. I see it through rose-coloured glasses, as a poem from more naive times, back when some folks still had some hope... This was way before vulture capitalism fucked everything up, you know. Or at least that's how I remember it, but I was like 10.

Not everyone was into this hopeful vision of cyberspace though; Masters of Doom comes to mind.


You’re right (as someone a bit older but also with rose-tinted glasses).

There was a feeling of hope on the Internet at the time that this was a communication tool that would bring us all together. I do feel like some of that died around 9/11 but that it was Facebook and the algorithms that really killed it. That is where the Internet transitioned from being about showcasing the best of us to showcasing the worst of us. In the name of engagement.


s/Doom/Deception/



Preprint back from May:

https://arxiv.org/abs/2405.03675


Heh, stockholders are not hallucinating: they know very well what they are doing.


Retail investors? No way. The fever dream may continue for a while, but eventually it will end. Meanwhile we don't even know our full exposure to AI. It's going to be ugly, and beyond burying gold in my backyard I can't even figure out how to hedge against this monster.


Yeah no, I didn't mean retail investors (OpenAI is not publicly traded), but yeah, I do share your concern...


I am not familiar with this type of side-channel attack, but the article says they use GPU.zip, which is exploitable through Chrome:

https://www.hertzbleed.com/gpu.zip/


It looks to me like the browser version requires the targeted website to be iframed into the malicious site for this to work, which is mitigated significantly by the fact that many sites today, and certainly the most security-sensitive ones, restrict where they can be iframed via security headers. Allowing your site to be loaded in an iframe elsewhere is already a security risk, and even the most basic scans will flag you as vulnerable to clickjacking if you do not set those headers.


I also wanted to add a bit more context regarding some of these claims.

For example, back in March Dario Amodei, the CEO and cofounder of Anthropic, said:

> I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code

Other similar claims:

https://edition.cnn.com/2025/09/23/tech/google-study-90-perc...

https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...

Some of these AI predictions seem quite unlikely too, for example AI 2027:

https://news.ycombinator.com/item?id=43571851

> By late 2029, existing SEZs have grown overcrowded with robots and factories, so more zones are created all around the world (early investors are now trillionaires, so this is not a hard sell). Armies of drones pour out of the SEZs, accelerating manufacturing on the critical path to space exploration.

> The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia


You should not believe any of the claims genAI companies make about their products. They just straight-up lie. For example:

> Several PhD-level reasoning models have been released since September of 2024

This is not true. What's true is that several models have been released that the companies have claimed to be "PhD-level" (whatever that means), but so far none of them have actually demonstrated such a trait outside of very narrow and contrived use cases.


If there are such models, why are there no widely discussed full theses produced entirely by them? Surely getting dozens of those out should be trivial if the models are that good.


Well, would the AI graduate students also be required to be jerked around by professors, pass hundreds of exams, present seminars, teach, do original research, write proposals, and deal with bureaucracy, too? Maybe that would solve the "hallucination" issues?


I'm going to laugh and shit my pants, in that or some order, when we realize the models that produced ALL the code have sleeper protocols built into code that's now maintained by AI agents that might also be infected with sleeper protocols. Then later, when 50 messages on Claude cost $2,500, every company in the world will either experience exponential cost increases or spend an exponentially large amount of capital hiring and re-hiring engineers to "un-AI-ify" the codebase.

https://www.youtube.com/watch?v=wL22URoMZjo

