You've got the right idea. A program is a proof of the proposition expressed by its type. Only for the right languages, though, and not in the naive sense that `assert(1+1 == 2)` proves the assertion.
OK, not what I expected. Surely a spec is a spec, the program (hopefully) fulfils the spec, and the proof is that the program implements the spec. How do you see it? Thanks
Curry-Howard shows that for a logical proposition (A) with a corresponding constructive proof (B), there is a type (A') with a program (B'). It's not about proving desired properties of programs. However, you can use the CH principles for that too, as long as you can encode the proposition about the program in its type. This will often not look like the dataflow type of the naive program, though. Look at Software Foundations for real detail; I'm no expert at this stuff, just aware of the basics.
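A minimal sketch of the correspondence, in Lean (my own illustration, not from the referenced paper): the type is the proposition, and any program inhabiting that type is a constructive proof of it.

```lean
-- Proposition-as-type: A ∧ B → B ∧ A.
-- The function below inhabits this type, so it *is* a constructive proof.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩

-- The same construction read as an ordinary program: swapping a pair.
-- Under Curry-Howard, the connective ∧ corresponds to the product type ×.
def swap {α β : Type} : α × β → β × α :=
  fun p => (p.2, p.1)
```

Note this only shows the correspondence itself; encoding a property of an existing program (say, "this sort returns an ordered permutation of its input") needs a much richer type than the program's naive dataflow type, which is the point above.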
If someone can learn C's type syntax, this is MUCH simpler. That doesn't mean you don't have to spend a little time learning how it works, but it is not some kind of number-theory-level maths construction accessible only to savants.
Well, I interpret that as the assertion that it's more opaque than other kinds of programming. But I disagree: in terms of syntax and the amount of prior understanding required, it is actually simpler. My blunt reply is that asserting a particular example is inaccessible after dedicating only 15 minutes to it is silly. I'm sure most people who suggest this stuff is opaque had no problem learning PCRE or complicated SQL joins, and didn't complain that those took more than 15 minutes either. Of course type theory is a deep field with many complicated parts, but the syntax, the rules, and the example given are not opaque, and are understandable to anyone who can tackle programming languages and abstraction.
Okay then. Please give some links for the following.
- A lambda calculus tutorial that gets you up to what you need to understand the referenced paper, in 15 minutes
- A description of Hoare notation, to sufficient depth to understand the referenced paper, also in 15 minutes
- A description of classical logic etc. etc., 15 minutes
- A description of intuitionistic logic etc. etc., 15 minutes (presumably in terms of how it differs from classical)
These would be very useful. If you can also explain them in a way that would allow me to use what I've learnt in an hour to do something useful with programs (represent them, transform them, verify them against a specification), I would gladly spend a couple of weeks or a month on it, genuinely.
If you can't do any of these things, please stop posting about how easy it all is.
You're right, I should read more carefully. Nonetheless, my point still stands: if you can give me some resources to learn these things, with considerably more than 15 minutes allowed, bearing in mind that I have no one to ask when I get stuck and that my end aim is to actually use these things rather than merely find them interesting (which I already do), then I'd be grateful.
So if you lost the weights, how is that not killing the AI? Is it because it lacks the death experience? If so, what about bit-rotting the weights incrementally and degrading its inputs?