LLMs at their core do produce reproducible results with a given seed. it's all the workflow stuff people do on top that tends to break reproducibility.
This is not the case for LLMs running on GPUs (which is most of them): floating-point addition is not associative, and the order in which a GPU accumulates partial sums varies from run to run (and with server-side batching), so the logits themselves drift. in practice there is no way to get perfectly deterministic output from OpenAI despite the presence of seed and temperature parameters; the seed is documented as best-effort only.
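the root cause is easy to demonstrate from the host side. float addition is not associative, so the grouping a parallel reduction happens to pick changes the answer; a minimal sketch in python (the same effect applies to CUDA reductions over logits):

```python
# Floating-point addition is not associative: the same three numbers
# summed with different groupings give different results.
left_to_right = (0.1 + 0.2) + 0.3
right_to_left = 0.1 + (0.2 + 0.3)

# A GPU kernel is free to pick either grouping (or any other),
# and the choice can vary between runs and batch shapes.
print(left_to_right == right_to_left)  # -> False
```

since every layer of a transformer forward pass is built from such reductions, tiny grouping differences compound and can eventually flip an argmax, which is why a fixed seed alone doesn't guarantee identical tokens.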
i'm not sure i agree. maybe if you're "vibe-coding", but not if you're using AI as an assistant. a good abstraction makes it hard to write bugs, so telling the AI to use a particular library (one i know to be high quality) is a good way to constrain the kinds of bugs i have to look for when reviewing the code.
this works for actual compiled code. no vm, no runtime, no interpreter, no container. native compiled machine code. just download and double-click, no matter which OS you use.
cosmopolitan-libc has aspirations (but no concrete plans) to add SDL interfaces for all supported platforms. this would allow APE executables to compile in cross-platform UI toolkits like Qt.
it's adapting the world (well, the internet) to suit the model rather than the other way around -- to the point where a growing amount of content on the internet is designed exclusively for machine consumption, at the expense of direct human consumption.
it's like self-driving cars -- if we had a dedicated separate road network just for self-driving cars, and required that they all communicate with standard protocols, then we'd have self-driving cars by now -- but that's not actually the goal of FSD. the goal is to have cars that can use existing infrastructure and co-exist with human drivers.
A major distinction here is that it is very cheap to host content on the internet and VERY EXPENSIVE to build things like a separate road network in the real world.
Who is actually hurt if I publish an llms.txt or MCP in addition to my existing content?
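for concreteness, an llms.txt is just a markdown file served at the site root; a minimal sketch following the proposed llms.txt convention (the project name, paths, and descriptions below are hypothetical):

```markdown
# Example Project

> One-sentence summary of what this site is, written for an LLM
> that lands here with no other context.

## Docs

- [Getting started](https://example.com/docs/start.md): install and first run
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

it sits alongside your existing pages the way robots.txt does, so human visitors never see it.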
with embedded scripting languages (including lua and micropython) the CPU is running a compiled interpreter (usually written in C, compiled to the CPU's native architecture) and the interpreter is running the script. on PyXL, the CPU's native architecture is python bytecode, so there's no compiled interpreter.
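you can inspect the exact instruction stream in question with CPython's `dis` module; these are the ops a software interpreter dispatches one by one, and which PyXL (per the project's claims) decodes directly in silicon. the exact opcode names vary by CPython version:

```python
import dis

def add(a, b):
    return a + b

# The compiled bytecode for add(): each entry is one instruction
# that an interpreter loop (or PyXL's hardware decoder) executes.
ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)
```

on a stack-based design like this, "native architecture is python bytecode" means the fetch/decode/execute cycle operates on these opcodes directly, with no C interpreter loop in between.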
there are two false equivalencies in your argument, as presented in response to GP:
1. ID checks are not the same as age verification.
2. a social media website is not the same as a porn website.
if you take the stance that social media sites should require ID verification, then i would furthermore point out that this is likely to impact any website that has a space for users to add public feedback, even forums and blogs.