> Would .ipynb format solve for this? Unfortunately there's not yet a markdown format that includes output cells (likely because base64-encoded binary outputs are unwieldy in plain text). There are existing issues (TODO) proposing a new format for Jupyter notebooks, which have notebook-level metadata, cell-level metadata, input cells, and output cells.
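As a rough sketch of what any such format has to carry, here is how those four pieces look when built with the existing nbformat library (the cell contents and metadata values are made up for the example):

```python
import nbformat
from nbformat.v4 import new_notebook, new_code_cell, new_output

# Notebook-level metadata (kernel, language).
nb = new_notebook(metadata={"kernelspec": {"name": "python3", "display_name": "Python 3"}})

# An input cell with cell-level metadata.
cell = new_code_cell(source="1 + 1", metadata={"tags": ["example"]})

# An output cell attached to the input cell; rich outputs (e.g. "image/png")
# would be base64-encoded here, which is what makes a plain-markdown
# serialization awkward.
cell.outputs.append(
    new_output(output_type="execute_result", data={"text/plain": "2"}, execution_count=1)
)
nb.cells.append(cell)

# .ipynb is just this structure serialized as JSON.
nbformat.write(nb, "example.ipynb")
```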
API facades like OpenLLM and model routers like OpenRouter have standard interfaces for many or most LLM inputs and outputs. Tools like Promptfoo, ChainForge, and LocalAI all have abstractions over many models as well.
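In practice most of these converge on the de facto OpenAI chat-completions shape rather than a formal open standard; a minimal sketch of calling OpenRouter through that interface (the model id and environment variable name are placeholders):

```python
import os
from openai import OpenAI  # OpenRouter exposes an OpenAI-compatible endpoint

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # placeholder env var name
)

resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # example routed model id
    messages=[{"role": "user", "content": "Summarize the Wiedemann-Franz law."}],
)
print(resp.choices[0].message.content)
```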
What are the open standards for representing LLM inputs and outputs?
W3C PROV has prov:Entity, prov:Activity, and prov:Agent for modeling AI provenance: who or what did what when.
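A sketch of what that could look like for a single LLM call, using the Python prov package (all identifiers here are hypothetical):

```python
from prov.model import ProvDocument

d = ProvDocument()
d.add_namespace("ex", "http://example.org/llm/")

# prov:Entity for the prompt and the completion, prov:Activity for the call,
# prov:Agent for the model that performed it.
d.entity("ex:prompt-1")
d.entity("ex:completion-1")
d.activity("ex:llm-call-1")
d.agent("ex:model-x", {"prov:type": "prov:SoftwareAgent"})

d.used("ex:llm-call-1", "ex:prompt-1")                # the call read the prompt
d.wasGeneratedBy("ex:completion-1", "ex:llm-call-1")  # ...and produced the completion
d.wasAssociatedWith("ex:llm-call-1", "ex:model-x")    # ...attributed to the model

print(d.get_provn())  # PROV-N text; d.serialize() gives PROV-JSON
```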
LLM evals could be represented in W3C EARL Evaluation and Reporting Language.
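And a hypothetical eval result expressed as an earl:Assertion with rdflib (the subject, test, and assertor URIs are made up):

```python
from rdflib import Graph, Namespace, BNode
from rdflib.namespace import RDF

EARL = Namespace("http://www.w3.org/ns/earl#")
EX = Namespace("http://example.org/evals/")

g = Graph()
g.bind("earl", EARL)

assertion, result = BNode(), BNode()
g.add((assertion, RDF.type, EARL.Assertion))
g.add((assertion, EARL.subject, EX["model-x"]))       # the system under test
g.add((assertion, EARL.test, EX["gsm8k-item-42"]))    # the eval case
g.add((assertion, EARL.assertedBy, EX["eval-harness"]))
g.add((assertion, EARL.result, result))
g.add((result, RDF.type, EARL.TestResult))
g.add((result, EARL.outcome, EARL.failed))            # earl:passed / earl:failed / ...

print(g.serialize(format="turtle"))
```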
From https://news.ycombinator.com/item?id=44934531 :

> simonw/llm by default saves all prompt inputs and outputs in a SQLite database. Copilot has /save and gemini-cli has /export, but they don't yet autosave or flush before attempting to modify code given the prompt output?
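A sketch of pulling recent prompt/response pairs back out of that database; the `llm logs path` subcommand is documented, but the table and column names below are assumptions about llm's logs schema and may differ by version:

```python
import sqlite3
import subprocess

# `llm logs path` prints the location of the SQLite log database.
db_path = subprocess.check_output(["llm", "logs", "path"], text=True).strip()

con = sqlite3.connect(db_path)
# Assumed schema: a `responses` table with model, prompt, and response columns.
rows = con.execute(
    "SELECT model, prompt, response FROM responses ORDER BY rowid DESC LIMIT 5"
)
for model, prompt, response in rows:
    print(f"{model}: {(prompt or '')[:60]!r} -> {(response or '')[:60]!r}")
```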
> How does the distance metric vary with feature order?
> Do algorithmic outputs diverge or converge given variance in sequence order of all orthogonal axes? Does it matter which order the dimensions are stated in; is the output sensitive to feature order, but does it converge regardless? [...]
>> Are the [features] described with high-dimensional spaces really all 90° geometrically orthogonal?
> If the features are not statistically independent, I don't think it's likely that they're truly orthogonal; which might not affect the utility of a distance metric that assumes that they are all orthogonal
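A small numeric sketch of both points: a metric that assumes orthogonal axes (Euclidean) is insensitive to the order the dimensions are stated in, but it silently ignores correlation between non-independent features, which a covariance-aware metric (Mahalanobis) does not:

```python
import numpy as np
from scipy.spatial.distance import euclidean, mahalanobis

rng = np.random.default_rng(0)
# Two correlated (non-independent, geometrically non-orthogonal) features.
x = rng.normal(size=500)
X = np.column_stack([x, 0.9 * x + 0.1 * rng.normal(size=500)])

a, b = X[0], X[1]
perm = [1, 0]  # restate the dimensions in the other order

# Euclidean distance is invariant to a consistent reordering of the features...
print(euclidean(a, b), euclidean(a[perm], b[perm]))  # identical

# ...but it assumes the axes are orthogonal. Mahalanobis distance uses the
# inverse covariance to account for the correlation and gives a different answer.
VI = np.linalg.inv(np.cov(X, rowvar=False))
print(mahalanobis(a, b, VI))
```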
Which statistical models disclaim that their output is insignificant if used with non-independent features? Naive Bayes, Linear Regression and Logistic Regression, LDA, PCA, and linear models in general are unreliable with non-independent features.
What are some of the hazards of L1 (Lasso) and L2 (Ridge) regularization? What are some of the worst cases with outliers? What does regularization do if applied to non-independent and/or non-orthogonal and/or non-linear data?
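A quick sketch of one hazard with non-independent features: two nearly duplicate columns make plain least-squares coefficients unstable, L1 (Lasso) tends to zero out one of the twins arbitrarily, and L2 (Ridge) splits the weight between them; predictions stay fine, but per-feature interpretation becomes unreliable:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
# Two nearly identical (non-independent) features carrying the same signal.
X = np.hstack([x, x + 1e-3 * rng.normal(size=(200, 1))])
y = 2.0 * x[:, 0] + 0.1 * rng.normal(size=200)

for model in (LinearRegression(), Lasso(alpha=0.01), Ridge(alpha=1.0)):
    model.fit(X, y)
    print(type(model).__name__, np.round(model.coef_, 2))

# Typical outcome: OLS coefficients wander far from (2, 0) with opposite signs
# (only their sum is stable), Lasso zeroes one twin, Ridge splits ~1 and ~1.
```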
Impressive but probably insufficient because [non-orthogonality] cannot be so compressed.
There is also the standing question of whether there can be simultaneous encoding in a fundamental gbit.
> Abstract: [...] We derive the perihelion precession of planetary orbits using quantum field theory extending the Standard Model to include gravity. Modeling the gravitational bound state of an electron via the Dirac equation of unified gravity [Rep. Prog. Phys. 88, 057802 (2025)], and taking the classical planetary state limit, we obtain orbital dynamics exhibiting a precession in agreement with general relativity. This demonstrates that key general relativistic effects in planetary motion can emerge directly from quantum field theory without invoking the geometric framework of general relativity.
Gravity of n-body planets from QFT, but not what else?
Where does a QFT-extended theory, SQR, SQG, or another alternative theory to GR fail to correspond to real observations or to GR itself?
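For reference, the general-relativistic perihelion advance that any alternative has to reproduce is Delta-phi = 6*pi*G*M / (c^2 * a * (1 - e^2)) per orbit; a quick numeric check with Mercury's standard orbital elements recovers the familiar ~43 arcseconds per century:

```python
import math

G, M_sun, c = 6.674e-11, 1.989e30, 2.998e8  # SI units
a, e = 5.791e10, 0.2056                     # Mercury: semi-major axis (m), eccentricity
period_days = 87.969

dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))  # radians per orbit
orbits_per_century = 36525 / period_days
arcsec = math.degrees(dphi * orbits_per_century) * 3600
print(f"{arcsec:.1f} arcsec/century")  # ~43.0, the classic GR value for Mercury
```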
> Fedi's [SQR Superfluid Quantum Relativity] also rejects a hard singularity boundary, describes curl and vorticity in fluids (with Gross-Pitaevskii), and rejects antimatter
...
Evidence for scale invariance and magnetohydrodynamics:
>> One of the most exciting aspects of this research is that magnetohydrodynamics, the theory of magnetized plasmas, turns out to be fantastically scalable.
There are a few methods to make rhombohedral graphene (which demonstrates superconductivity at room temperature).
Normal carbon (graphite) stacks into a hexagonal ABAB (Bernal) pattern.
For superconductivity, the stacking needs to be at least ABC (rhombohedral), because twisted bilayer graphene does not demonstrate the effects (superconductivity, quantum Hall effect) at room temperature FWIU.
Current process: CVD (chemical vapor deposition), then sorting and stacking graphene flakes.
Flash-heating plastic yields graphene and hydrogen, but you must capture the flue gas.
There are newer plastic recycling methods that intentionally don't produce graphene, which maybe could be adapted to produce more plastic along with graphene.
But graphene is hazardous, sort of like coal ash; so IIUC, if you can make graphene onsite (e.g. from unsorted 'recycled' plastics) and lock it into glass or another substrate, that avoids the transport risks.
> Abstract: [...] Here we have discerned the quantum critical universality in graphene transport by combining the electrical and thermal conductivities in very high-quality devices close to the Dirac point. We find that they are inversely related, as expected from relativistic hydrodynamics, and the characteristic conductivity converges to a quantized value. We also observe a giant violation of the Wiedemann–Franz law, where the Lorentz number exceeds the semiclassical value by more than 200 times close to the Dirac point at low temperatures. At high temperatures, the effective dynamic viscosity to entropy density ratio close to the Dirac point in the cleanest devices approaches that of a minimally viscous quantum fluid within a factor of four.
Wikipedia lists some limitations of the Wiedemann–Franz law[1], and also some previous violations in other materials.
Reading the Wikipedia page, I don't get the sense the law is quite as fundamental as the headline and summary make it sound.
Here's one of the previous violations:
In 2011, N. Wakeham et al. found that the ratio of the thermal and electrical Hall conductivities in the metallic phase of quasi-one-dimensional lithium molybdenum purple bronze Li0.9Mo6O17 diverges with decreasing temperature, reaching a value five orders of magnitude larger than that found in conventional metals obeying the Wiedemann–Franz law. This is attributed to spin–charge separation, with the material behaving as a Luttinger liquid.
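For scale, the semiclassical (Sommerfeld) Lorenz number that these violations are measured against is L0 = (pi^2/3)(k_B/e)^2; the abstract's "200 times" and Wakeham's "five orders of magnitude" are multiples of this value:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
e = 1.602176634e-19  # elementary charge, C

L0 = (math.pi**2 / 3) * (k_B / e) ** 2
print(f"L0 = {L0:.3e} W*ohm/K^2")           # ~2.44e-8
print(f"200x violation -> {200 * L0:.2e}")  # the graphene result near the Dirac point
```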
Still, graphene is cool and seems to be the gift that keeps on giving in terms of surprising results in solid state physics.
>> re: the fractional quantum hall effect, and decoherence: How are spin currents and vorticity in electron vortices related?
> [...] But the Standard Model Lagrangian doesn't describe n-body gravity, n-body quantum gravity, photons in Bose-Einstein Condensates; liquid light in superfluids and superconductors, black hole thermodynamics and external or internal topology, unreversibility or not, or even fluids with vortices or curl that certainly affect particles interacting in multiple fields.
This is probably wrong if the result above also holds: the abstract quoted earlier says that the Standard Model, extended to include gravity via QFT, actually does describe the n-body orbits of the planets.