The "small amounts is cardioprotective" is most likely false, the new recommendation simply reflects this. I recall reading about this in research more than 10 years ago. (Sorry for not citing, I'm on mobile rn)
Yes, the point is that the user only has to write an interpreter, not a compiler, the compiler is automatically generated. See also GraalVM's Truffle interpreter or Souffle Datalog.
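To make the idea concrete, here's a minimal, hypothetical sketch in Python (nothing to do with how Truffle or Souffle are actually implemented): a tiny stack-language interpreter, plus a hand-written stand-in for the first Futamura projection that specializes the interpreter to one fixed program, leaving straight-line "compiled" code.

    # Hypothetical sketch: interpreter + hand-written specializer for a toy stack language.

    def interpret(program, x):
        # Each instruction is ("push", n), ("add",) or ("mul",).
        stack = [x]
        for op, *args in program:
            if op == "push":
                stack.append(args[0])
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if op == "add" else a * b)
        return stack.pop()

    def specialize(program):
        # Partially evaluate `interpret` for a fixed program: the dispatch loop
        # disappears, leaving straight-line Python, i.e. "compiled" code.
        lines, stack, tmp = [], ["x"], 0
        for op, *args in program:
            if op == "push":
                stack.append(str(args[0]))
            else:
                b, a = stack.pop(), stack.pop()
                sym = "+" if op == "add" else "*"
                lines.append("t%d = %s %s %s" % (tmp, a, sym, b))
                stack.append("t%d" % tmp)
                tmp += 1
        src = "def compiled(x):\n    " + "\n    ".join(lines + ["return " + stack.pop()])
        ns = {}
        exec(src, ns)
        return ns["compiled"]

    prog = [("push", 3), ("add",), ("push", 2), ("mul",)]   # computes (x + 3) * 2
    assert interpret(prog, 5) == specialize(prog)(5) == 16

A real partial evaluator would derive `specialize` mechanically from `interpret` itself; automating exactly that derivation is what the first Futamura projection is about.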
Do you know if there are any disadvantages to using the Futamura projections to generate a compiler? Is it harder to maintain? Are some optimizations harder to express this way?
I have some experience writing compiler optimization passes and working on JITs, but I have not worked on an interpreter that uses Futamura's projections to generate a compiler.
You consider a few thousand satellites hundreds of miles above the surface of the earth to be litter, but the existing network of cables is beneath your notice? The internet is a web of copper, plastic, and glass that literally touches every place on Earth. It’s like a fungus that grows beneath our feet and over our heads.
I think you might be surprised just how many electronic devices have to be installed along that cable for it to function. All of those devices have to be replaced regularly. If you keep your eyes open, you can see them all over the city you live in. Big grey metal cabinets on every block, some of them under a metal cover that hundreds of people walk over every day without a second thought, others disguised with artwork. Anonymous concrete buildings scattered all over, some of them disguised as houses or businesses. Some of this infrastructure is needed for each customer, and some of it is needed per mile. Rural customers need a lot more of the latter type. And all of it is delivered and installed using gasoline or diesel. And it all has to be replaced far more often than the cables themselves. The modem you have in your house is functionally obsolete after just a few years, and will probably never be reused. The rest of the infrastructure will be replaced on the same timescale as well.
There is a reason why rural customers have such a hard time getting decent internet service, and why satellite internet is more economical for them. It’s not immediately obvious to me that the infrastructure cost of Starlink service is larger than the cost of your cable service. You cannot condemn satellite internet services simply because they use a lot of satellites without also condemning terrestrial internet services.
And if it is at all worthwhile for you to have fast internet access in the city, then it is equally worthwhile for folk living in rural areas to have fast internet access too.
Just to keep some perspective on the numbers: all the Starlink satellites launched so far, all of them put together, don't add up to the MTOW of three Boeing 747s or ten 737s.
If you count the fuel used to keep planes flying, I wouldn't be surprised if the satellite launches actually polluted a lot less. Don't forget that every launch, although it surely consumes a lot more fuel than an airplane, puts about 60 satellites into orbit, and there have been about 30 launches so far. If you compare these figures to the thousands of planes flying every day, it should give a very different perspective.
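Rough back-of-envelope, with figures I'm assuming rather than quoting (about 260 kg per Starlink v1.0 satellite, ~60 per launch, ~30 launches; 747 MTOW ~400 t, 737 MTOW ~80 t):

    satellites = 30 * 60                    # ~1800 satellites launched so far (assumed)
    sat_mass_t = satellites * 260 / 1000    # ~468 tonnes of satellites in orbit
    print(sat_mass_t, 3 * 400, 10 * 80)     # 468.0 vs 1200 t (three 747s) vs 800 t (ten 737s)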
Please note that I'm as against pollution as you are, and I'm not a Tesla or Musk fan either; they make great products, but that doesn't stop me from thinking he's an absolute ass.
Are you really comparing bridges that get replaced every 50 years, and conventional satellites expected to operate for two decades, with these SpaceX satellites, which are expected to last 5 years?
Residential broadband involves millions of items such as switches and transceivers, and billions of miles of cables. Much of this is replaced every 5 years by ISPs, and that's not even taking wireless routers into account, which are probably replaced much sooner than that.
I'd imagine Starlink at least has less impact than our current infrastructure.
So you're saying we shouldn't put this sort of networking equipment in LEO? I don't understand your position. Switches in datacenters and those deployed to provide internet to consumers are thrown away very frequently, and LEO satellites don't pollute their orbits with debris, since unpowered craft at these altitudes re-enter due to atmospheric drag (that's a desirable outcome). The timescales work out.
I prefer this to an alternative where a failed constellation would be left as garbage orbiting the planet for decades or centuries. And I'd bet a few thousand satellites in space (including the fuel used to put them up there) have an overall lower impact on the environment than the sprawling equipment and cabling needed to achieve the same coverage with ground technologies (including all the resources used to install and maintain said equipment, and the ecological damage that entails).
> We already have a perfectly good foundation of mathematics. It's called ZFC [1]
This completely misses a core point of category theory and type theory (which is about categories equipped with certain extra structure), which is to work in settings that can be flexibly tightened or loosened in order to make constructions as general as possible. A theorem in HoTT is far more widely applicable than a theorem in ZFC; HoTT can be interpreted in any higher Grothendieck topos, while ZFC can only be interpreted in a certain class of 1-toposes. In HoTT, we can simply assume choice and LEM to internally reproduce ZFC. In ZFC, univalence is false and cannot be assumed.
> It's not at all clear HoTT, or type theory in general, is the best solution for formalizing mathematics
The Xena project indeed isn't too hot on HoTT, but they will tell you that type theory in general is absolutely the best solution for formalizing mathematics.
> in particular denying the law of excluded middle
Neither MLTT nor HoTT denies LEM. The point is not to deny LEM, but to work in a more general system which does not necessarily assume it.
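As a tiny illustration, a sketch in Lean 4 syntax standing in for any MLTT-style system (Lean already ships Classical.em; postulating a `lem` axiom here is just to make the point explicit):

    -- The core theory neither proves nor refutes LEM, so we may postulate it
    -- and reason classically on top of the very same system.
    axiom lem : ∀ (P : Prop), P ∨ ¬P

    -- e.g. double-negation elimination becomes available:
    example (P : Prop) (h : ¬¬P) : P :=
      (lem P).elim id (fun np => absurd np h)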
Incorrect. Whether the language as a logic is capable of expressing undecidable statements is orthogonal to whether type checking is decidable. Type checking is analogous to checking proof validity in logics. Commonly used proof systems for first-order or higher-order logics admit decidable proof validity, and already many first-order theories allow expressing undecidable statements; ZFC is a rather obvious example. Likewise, intensional type theory is highly expressive as a logic and admits decidable proof validity.
Decidability of type checking is instead related to how explicit surface syntax is. In intensional type theory, the usage of equality proofs is marked explicitly, which is sufficient to retain decidable type checking. In contrast, in extensional type theory, the usage of equality proofs is left implicit, which causes type checking to be undecidable.
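A small sketch of the "explicit usage" point, in Lean 4 syntax as a stand-in for an intensional theory: the rewrite along an equality proof is a step written in the term itself, so the checker only verifies it rather than having to search for it.

    -- Transporting a proof of P x along h : x = y is explicitly marked (the ▸),
    -- which keeps checking mechanical.
    example (A : Type) (P : A → Prop) (x y : A) (h : x = y) (px : P x) : P y :=
      h ▸ px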
This is not true, as MLTT-style type theories have a different syntactic notion of proof than first-order logical theories like ZFC. A proof of termination of type checking for Idris is equivalent to a proof that all terms can be reduced to a normal form: for any given term, you can construct another term which, when type checked, requires fully normalizing the first term. It is well known that proving that normal-form evaluation of all terms in an MLTT-style type theory terminates is equivalent to the system's consistency (because the false type has no constructors). This is how relative consistency of these type theories is usually proved.
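For a concrete (if toy) instance of type checking forcing evaluation, in Lean 4 syntax rather than Idris:

    -- Accepting rfl requires the checker to reduce 2 + 2 to 4.
    example : (2 + 2 : Nat) = 4 := rfl

    -- With a computed type index, the index itself has to be normalized:
    example : Fin (2 + 2) := ⟨3, by decide⟩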
I don't see that "normalization implies consistency", which I'm aware of, relates to my previous comment in any relevant way.
ZFC and MLTT do not differ here: in both, decidability of proof validity is unrelated to logical expressiveness.
It's not even true that for type theories, decidability of proof validity implies normalization. Normalization is not logically equivalent to decidable type checking.
For example, we can have a term language for extensional type theory which is annotated with reduction traces. This is the kind of syntax we get when we embed ETT inside ITT. It has decidable type checking, as the type checker does not have to perform reduction; it only has to follow the traces. This kind of tracing is actually used to some extent in the GHC Haskell compiler. So we have decidable type checking for a surface syntax of a type theory which is a) consistent and b) not strongly normalizing.
You've actually hit on the difference here: in the type theory, the witness term plus the reduction trace is equivalent to the ZFC proof. It's true that checking this combination is decidable, regardless of consistency. The intensionality does not eliminate the limitations of this property; it only provides that the reduction trace is implicit given the witness term. The actual proofs of decidability of type checking (e.g. from Martin-Löf's original paper) are conditioned on the normalization theorem, which assumes consistency of the type theory.
Let’s come at this from the other direction. Suppose I find a non-normalizing term X of type false (like that produced by Girard’s paradox if we leave off stratification). What happens when I try to type check this?
id(X): ==(X, rec_0(0, X))
Where id is the constructor of the propositional equality type and rec_0 is the recursor for False.
What you say is kind of interesting but I get the impression that we are talking past each other.
> id(X): ==(X, rec_0(0, X))
Sorry, what's this supposed to mean? Is it a definition? The propositional equality type has two or three arguments; I only see one (X) here, if as you say "id" is propositional equality.
In any case, nothing peculiar happens when we check a trace-annotated proof of bottom. We are free to use any particular finite-step unfolding of the non-normalizing proof.
We also don't need any paradox or inconsistency for a non-normalizing term, it is enough to work in ETT in a typing context which implies bottom, or which contains an undecidable equational theory, e.g. the rules of the SK-calculus.
> The actual proofs of decidability of type checking (e.g. from Martin-Löf‘s original paper) are conditioned on the normalization theorem
Decidability of type checking is always relative to a particular surface/raw syntax. A trace-annotated raw syntax does not need a normalization assumption for type checking.
I left the first argument of the equality type implicit; written out fully it's ==(0, X, rec_0(0, X)), since those terms are both of type 0.
The overall expression is shorthand for any term whose type checking involves that type assertion. For example, type checking the application of an identity lambda, whose argument type is the term to the right of the colon, to the term to the left of the colon, i.e. (λ(x: ==(0, X, rec_0(0, X))).x)(id(X)), will pass iff the left term's type is the right term.
I'm not talking about extensional type theories here (as Idris/MLTT is not extensional), so I don't think I understand the relevance of the trace-annotated proof. Idris will let you write the term I have above without any trace annotations, so its type checker must do something with it.
You're right that a trace-annotated raw syntax doesn't need normalization assumptions, but Idris is not trace annotated; the type checking algorithm is required to perform whatever normalization is needed without further direction from the user.
I think you might be confusing the language and the type system. I'm not saying anything about the runtime characteristics of the language (which is obviously Turing complete), but about the type system itself, which is TC. If the type system allows you to encode a TM, then it is undecidable, regardless of whether it terminates due to an incomplete type checker. Since the Idris type system allows you to encode a TM, the type system is undecidable due to the halting problem.
A type system cannot be TC. What you seem to be talking about is that if a type checker can simulate arbitrary TMs through an encoding of its input, then the type checker is necessarily non-total. This is correct. But the Idris checker is total, and it is not possible to use it to simulate TMs.
The Idris type system does allow you to specify and formalize TMs internally, but that has no bearing on decidability of type checking, as I explained before.
A type system can be TC. A type system is a logical system comprising a set of rules that assigns a property called a type to the various constructs of a computer program, such as variables, expressions, functions or modules. If those rules can encode a Turing complete system, then the system is Turing complete. Type checking is the process of verifying and enforcing the constraints of types. Type assignment is different from verification.
OpenAI should never have been open, named "OpenAI", or advertised itself as being open. At the time of OpenAI's inception, much of the AI risk community deemed it harmful, although that wasn't said out loud very much, because it is a delicate affair to criticize misguided effort on AI safety when the status quo was almost no effort being spent on AI safety at all. It is a minor consolation that OpenAI turned out to be less open than initially advertised.
Somewhat random but this debate about the "Open" in OpenAI's name reminds me of OpenVMS which is not open source, but supports "open systems" like POSIX.