Alternatively, since brainwashing is a fictional trope that doesn't work in the real world, they can brainwash the same (0) number of people for less money. Or, more realistically, companies selling social media influence operations as a service will increase their profit margins by charging the same for less work.
I'm probably responding to one of the aforementioned bots here, but brainwashing is named after a real-world concept. The people who pioneered the practice named it themselves. [1] Real brainwashing predates fictional brainwashing.
The report concludes that "exhaustive research of several government agencies failed to reveal even one conclusively documented case of 'brainwashing' of an American prisoner of war in Korea."
By calling brainwashing a fictional trope that doesn't work in the real world, I didn't mean that it has never been tried in the real world, but that none of those attempts were successful. Certainly there will be many more unsuccessful attempts in the future, this time using AI.
My thesis is that marketing doesn't brainwash people. You can use marketing to increase awareness of your product, which in turn increases sales when people would otherwise have bought from a competitor, for example, but you can't magically make arbitrary people buy an arbitrary product using the power of marketing.
So you just object to the semantics of 'brainwashing'? No influence operation needs to convince an arbitrary number of people to buy arbitrary products. In the US, nudging a few hundred thousand people 10% in one direction wins you an election.
This. I believe people massively exaggerate the influence of social engineering as a form of coping: "they only voted for x because they are dumb and blindly fell for Russian misinformation." Reality is more nuanced. It's true that marketers have spent the last century figuring out social engineering, but it's not some kind of magic persuasion tool. People still have free will, choice, and some ability to discern truth from falsehood.
The CCP controlling the government doesn't mean they micromanage everything. Some Chinese AI companies release the weights of even their best models (DeepSeek, Moonshot AI), others release weights for small models but not the largest ones (Alibaba, Baidu), and some keep almost everything closed (ByteDance and iFlytek, I think).
There is no CCP master plan for open models, any more than there is a Western master plan for ignoring Chinese models only available as an API.
Never suggested anything of the sort. Involvement doesn't mean direct control: it might be a passive 'let us know if there's progress' issued privately, or a passive 'we want to be #1 in AI in 2030' announced publicly. Neither requires any micromanagement whatsoever: the CCP's expectation is that companies figure out how to align with party directives themselves… or face consequences.
This isn't even whataboutism, because the comparison is just insane.
The difference between the CCP and the US is categorical. Under the CCP, "private" companies must actively pursue the party's strategic interests or cease to exist, and their executives and employees can be killed. In the US, neither of those things happens; the worst penalty for a company not following the government's direction (while continuing to follow the law, which should be an obvious caveat) is the occasional fine for not complying with regulation, or losing preference for government contracts.
Only those who are either totally ignorant or seeking to spread propaganda would even compare the two.
They don't have to micromanage companies. A company's activities must align with the goals of the CCP, or it will not continue to exist. This produces companies that will micromanage themselves in accordance with the CCP's strategic vision.
Both OpenAI and Google used models made specifically for the task, not their general-purpose products.
OpenAI: https://xcancel.com/alexwei_/status/1946477756738629827#m "we are releasing GPT-5 soon, and we’re excited for you to try it. But just to be clear: the IMO gold LLM is an experimental research model. We don’t plan to release anything with this level of math capability for several months."
DeepMind: https://deepmind.google/blog/advanced-version-of-gemini-with... "we additionally trained this version of Gemini on novel reinforcement learning techniques that can leverage more multi-step reasoning, problem-solving and theorem-proving data. We also provided Gemini with access to a curated corpus of high-quality solutions to mathematics problems, and added some general hints and tips on how to approach IMO problems to its instructions."
> we achieved gold medal level performance on the 2025 IMO competition with a general-purpose reasoning system! to emphasize, this is an LLM doing math and not a specific formal math system; it is part of our main push towards general intelligence.
DeepSeekMath-V2 is also an LLM doing math and not a specific formal math system. What interpretation of "general purpose" were you using where one of them is "general purpose" and the other isn't?
Sam specifically says it is general purpose, and also this:
> Typically for these AI results, like in Go/Dota/Poker/Diplomacy, researchers spend years making an AI that masters one narrow domain and does little else. But this isn’t an IMO-specific model. It’s a reasoning LLM that incorporates new experimental general-purpose techniques.
You are overinterpreting what they said again. "Go/Dota/Poker/Diplomacy" do not use LLMs, which means they are not considered "general purpose" by them. And to prove it to you: look at the OpenAI IMO solutions on GitHub, which clearly show that it's not a general-purpose LLM, because of how the words and sentences are generated there. These are models specifically fine-tuned for math.
Clear about what? Do you know the difference between an LLM based on transformer attention and a Monte Carlo tree search system like the one used in Go? You do not understand what they are saying. It was a fine-tuned model, just as DeepSeekMath is an LLM fine-tuned for math, which means it was a special-purpose model. Read the OpenAI GitHub IMO submissions for proof.
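For anyone unfamiliar with the distinction being argued here: a Go-style system explores a game tree with Monte Carlo tree search, while an LLM samples one token at a time from a learned distribution. A minimal sketch of each, with toy hypothetical interfaces (nothing here is from OpenAI or DeepMind):

```python
import math
import random

def mcts_select(node):
    """One UCT selection step: pick the child maximizing the UCB1 score.
    `node` is a hypothetical tree node with .visits, .value, .children."""
    log_n = math.log(node.visits)
    return max(
        node.children,
        key=lambda c: c.value / c.visits + 1.4 * math.sqrt(log_n / c.visits),
    )

def sample_next_token(logits, temperature=1.0):
    """LLM-style decoding: softmax over vocabulary logits, then sample."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    r = random.random() * total
    for token_id, e in enumerate(exps):
        r -= e
        if r <= 0:
            return token_id
    return len(exps) - 1
```

The first repeatedly searches a domain-specific game tree; the second just emits text, and fine-tuning changes what text it emits, not the mechanism.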
Yes, "new math" is neither magical nor unrelated to existing math, but that doesn't mean any new theorem or proof is automatically "new math." I think the term is usually reserved for the definition of a new kind of mathematical object, about which you prove theorems relating it to existing math, which then allows you to construct qualitatively new proofs by transforming statements into the language of your new kind of object and back.
I think eventually LLMs will also be used as part of systems that come up with new, broadly useful definitions, but we're not there yet.
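As a toy illustration of that definition-plus-transfer-theorem pattern (a sketch in Lean with made-up names, not a claim about how any of these systems work):

```lean
-- Define a new kind of object...
structure DoubledNat where
  val : Nat

-- ...a translation into existing math...
def toNat (d : DoubledNat) : Nat := 2 * d.val

-- ...and a transfer theorem letting you move statements back and forth:
-- addition on DoubledNat corresponds to addition on Nat.
theorem toNat_add (a b : DoubledNat) :
    toNat ⟨a.val + b.val⟩ = toNat a + toNat b :=
  Nat.mul_add 2 a.val b.val
```

"New math" in the sense above would be a new object whose transfer theorems unlock proofs that were awkward in the old language; the toy only shows the shape of the construction.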
I think they must've messed up validation somehow. The performance drops relative to the base model are sometimes quite dramatic, which should've been caught as a corresponding deterioration in validation performance.
They write: "we utilize 10% randomly selected from the training set as a validation set and the original validation set as a test set for evaluation. During the validation phase, we measure validation loss and save the weights of the best validation loss for every 5% of the training steps. We train for 10 epochs with a batch size of 4." So it might be as simple as not including the base model among the validation checkpoints, meaning the first validated checkpoint comes after half an epoch, which is plenty of time to do damage if the fine-tuning method/hyperparameter configuration isn't chosen well. Unfortunately, they don't graph their training curves.
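If that's what happened, the fix is cheap: evaluate the untouched base model before any updates, so the "best validation loss" checkpoint can never lose to not fine-tuning at all. A minimal sketch in PyTorch-style Python with hypothetical names (validate, train_step), not their actual code:

```python
import copy

def finetune_with_baseline(model, batches, validate, total_steps):
    """validate(model) -> float loss; checkpoint every 5% of steps."""
    best_loss = validate(model)                # include the untouched base model
    best_state = copy.deepcopy(model.state_dict())
    checkpoint_every = max(1, total_steps // 20)  # every 5% of training steps

    for step, batch in enumerate(batches, start=1):
        model.train_step(batch)                # hypothetical one-step update
        if step % checkpoint_every == 0:
            loss = validate(model)
            if loss < best_loss:
                best_loss = loss
                best_state = copy.deepcopy(model.state_dict())

    model.load_state_dict(best_state)          # never worse than the base model
    return model
```

With that baseline checkpoint included, any fine-tuning run that only damages the model would simply return the base weights, and the dramatic drops would be impossible by construction.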
Fortunately, it's not true. GrapheneOS seem (https://xcancel.com/GrapheneOS/status/1993061892324311480#m) to be reacting to news coverage (https://archive.ph/UrlvK) saying that although legitimate uses exist, if GrapheneOS have connections to a criminal organization and refuse to cooperate with law enforcement, they could be prosecuted nonetheless:
"For a certain portion of users, there is real legitimacy in the desire to protect their communications. The approach is therefore different. But that will not prevent us from prosecuting the publishers if links with a criminal organization are discovered and they do not cooperate with the justice system."
Charitably, GrapheneOS are not in fact a front for organized crime, but merely paranoid, assuming that the news coverage is laying the groundwork for prosecution on trumped-up charges. Notably, there doesn't appear to have been direct communication from law enforcement yet.
Of course, if your organization has connections to a criminal organization, you are going to be in trouble. The same goes for refusing to cooperate with law enforcement; this is not some abstract thing, it is about following the law, for example laws relating to evidence tampering or search warrants.
I don't think France is anything special in that regard.
Paranoid? Telegram's CEO was arrested and held for days, and his movements out of France were restricted for months. And he is a connected billionaire, not an open source developer.
Open source developers have been given jail sentences in recent months.
If you're a broke open source developer, would you want to be exposed to law enforcement harassment (lawfare) for no reason, even if you believe that under the law you're not doing anything wrong?