
There is an interesting phenomenon in polynomial interpolation, Runge's phenomenon, which I call "Runge Spikes". I think "Runge Spikes" offers a better metaphor than "hallucination", and I argue the point here: https://news.ycombinator.com/item?id=43612517
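
To make the metaphor concrete, here is a minimal sketch (my own illustration, not from the linked comment): interpolating Runge's function 1/(1+25x^2) at equally spaced points with a single high-degree polynomial fits every node exactly, yet oscillates wildly near the ends of the interval.

    import numpy as np

    def runge(x):
        return 1.0 / (1.0 + 25.0 * x**2)

    n = 12                                 # polynomial degree
    xs = np.linspace(-1.0, 1.0, n + 1)     # equally spaced interpolation nodes
    coeffs = np.polyfit(xs, runge(xs), n)  # fit the interpolating polynomial

    x_fine = np.linspace(-1.0, 1.0, 1001)
    err = np.abs(np.polyval(coeffs, x_fine) - runge(x_fine))
    print(err.max())  # large error near x = +/-1: the "spikes"

The error is not noise: it is a confident-looking answer that is badly wrong in a predictable place, which is what makes it a tempting metaphor.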


if it catches on, everyone will start applying it to humans. "he's got Runge spikes". you can't win against anthropomorphization


That's just overfitting: using too flexible a model without regularization.


That is interesting indeed, thanks!


I've suggested Runge Spiking as an alternative, non-anthropomorphic metaphor

https://news.ycombinator.com/item?id=43612517


I was expecting ancient history, the story of the Turing Institute in Glasgow. Established 1983. Dissolved 1994. https://en.wikipedia.org/wiki/Turing_Institute

The article is about the second try, commencing 2014.



One wants the money supply to expand to match a growing economy. One way is to permit fractional reserve banking. Bank deposits can grow beyond the amount of gold, both increasing the money supply and creating the possibility of a bank going bankrupt. An alternative is fiat money: the King prints a limited amount of extra money to match the money supply to the growing economy and avoid deflation.

This is nice for the King because he benefits from the seigniorage. (But also potentially fatal, as he gives in to temptation to print too much and then some more, leading to hyper-inflation and the guillotine.)

Fiat money can be combined with fractional reserve banking. Now the monetary authorities can create extra money to overcome banking crises. Notice that society has a trade-off to make. Perhaps high reserves, so that the banks do not create much money, in which case the King/President will have to print extra fiat currency. Perhaps low reserves and a small base of fiat currency; now the banks create most of the money in circulation and get to charge interest on it, as a kind of ongoing seigniorage.

Which is best for the country? One-off seigniorage accruing to the national treasury (due to printing), or the recurring seigniorage of interest paid to the banks on the money created by the fractional reserve system? There is something to be said for fractional reserve banking as a kind of automatic stabilizer. If the economy contracts, there is a credit squeeze and the money supply contracts. Pick the reserve ratios and minimum lending rates correctly and it contracts by the correct amount for price stability. And this works in reverse, expanding the money supply as the economy recovers, avoiding needless restrictions on growth due to deflation.

Experience shows that reserve ratios are too low, leading to economic instability. Why do we ignore this experience? Follow the money: low reserve ratios are profitable for bankers. We could have a much more stable economic system with a two-part strategy: 25% reserves, with the monetary contraction implied by raising the reserve ratio countered by printing the right amount of money.
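
A back-of-envelope sketch of the trade-off (my numbers, purely illustrative): under the textbook money-multiplier model, a reserve ratio r lets banks expand each unit of base money into at most 1/r units of deposits.

    def broad_money(base, reserve_ratio):
        # Maximum deposits supported by a given monetary base
        # under the simple textbook money-multiplier model.
        return base / reserve_ratio

    print(broad_money(100, 0.10))  # 10% reserves: up to 1000 in deposits
    print(broad_money(100, 0.25))  # 25% reserves: up to 400 in deposits

So moving from 10% to 25% reserves shrinks broad money by more than half unless the treasury prints roughly 2.5x the base money, which is the second half of the two-part strategy.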


As far as I understand it so far, the idea that the trees are "unlabeled" oversimplifies.

The most common kind of binary tree is defined as

binary-tree = nil | node of binary-tree label binary-tree

for example

nil

node nil Alice nil

node nil Bob (node nil Carol nil)

node (node nil David nil) Edward (node nil Fred nil)

node (node nil George nil) Harold nil

If we erase the labels, there remains an implicit label Alice = 00, Bob = 01, Carol = 00, David = 00, Edward = 11, Fred = 00, George = 00, Harold = 10 encoded by the pattern of empty trees.
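
Here is a small sketch of that encoding (the Python representation is mine, not the article's): trees are None or (left, name, right) triples, and each node's implicit label is the bit pair recording which children are non-empty.

    def implicit_labels(tree, out=None):
        # Walk the tree, recording for each node the bit pair
        # (left child present, right child present).
        if out is None:
            out = []
        if tree is None:
            return out
        left, name, right = tree
        bits = f"{int(left is not None)}{int(right is not None)}"
        out.append((name, bits))
        implicit_labels(left, out)
        implicit_labels(right, out)
        return out

    print(implicit_labels((None, "Alice", None)))
    # [('Alice', '00')]
    print(implicit_labels((None, "Bob", (None, "Carol", None))))
    # [('Bob', '01'), ('Carol', '00')]
    print(implicit_labels(((None, "David", None), "Edward", (None, "Fred", None))))
    # [('Edward', '11'), ('David', '00'), ('Fred', '00')]
    print(implicit_labels(((None, "George", None), "Harold", None)))
    # [('Harold', '10'), ('George', '00')]

Erase the names and the bit pairs survive in the shape.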

The trees in the article seem to be doing it slightly differently, with implicit labels 0,1, or 2. Edward is erased, leaving an implicit 2 and the erasure of both Bob and Harold leaves an implicit 1 for both of them, removing the distinction between 01 and 10.

Edited to add: I'm up to page 27 (the PDF reader says page 39 of 176) and some nodes have three children. 0, 1, or 2 children represent "values". "It follows that trees containing a node with three or more branches is a computation that is not a value, and so must be evaluated."


That is very helpful. I particularly appreciate the carefully written text.

I think the second figure, captioned "Stem with a single leaf child", has a mistake: the line down from the triangle descends to a square, but that square should be a circle.


Thank you for your kind words and also for noticing the mistake — fixed.


Statmist


Another example of "wasn't widely deployed because of its cost" is the Kalthoff 30-Shot Flintlock from 1659. Here is a video from Forgotten Weapons https://www.youtube.com/watch?v=ghKrbNpqQoY

I'm starting to see Henry Maudslay's screw-cutting lathe (1800) as a turning point. Before it, inventors could invent really cool devices and carefully hand-make one or a few, but they would be too expensive. Then machine tools made shaping metal cheaper. That included shaping metal to make machine tools. So costs fell and fell, and eventually all sorts of things became cheap enough for wide deployment.

That is scary, because the "right time" to invent something depends on the capabilities of the production machinery that sets the production cost. As an inventor, one likes to think that the success of an invention lies in one's own hands, but there is an ecosystem of production machinery that has an outsized say in how much your invention will cost to mass-produce. It can even veto an excellent invention by saying "Not yet!".


A lot of Maudslay's (and Brunel's) innovation came down to process, even though the idea of standardized parts probably dates back to the French general Jean-Baptiste Vaquette de Gribeauval a number of decades earlier.

As you say, this was subsequently applied in the case of flintlocks.

But, yeah, everything exists as part of an ecosystem and if you're not at the right time, your brilliant idea probably won't fly. I've been an IT analyst off and on for many years and I've seen this happen often.


The review distills the book's view of the difference between pure mathematics and applied mathematics: "applied" split from "pure" to meet the technical needs of the US military during WW2.

My best example of the split is https://en.wikipedia.org/wiki/Symmetry_of_second_derivatives where Wikipedia notes that "The list of unsuccessful proposed proofs started with Euler's, published in 1740,[3] although already in 1721 Bernoulli had implicitly assumed the result with no formal justification." The split between pure (Euler) and applied (Bernoulli) is already there.

The result is hard to prove because it isn't actually true. A simple proof will apply to a counter example, so cannot be correct. A correct proof will have to use the additional hypotheses needed to block the counter examples, so cannot be simple.
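
To see this concretely, a quick numerical check (my own sketch) of the standard counterexample f(x,y) = xy(x^2 - y^2)/(x^2 + y^2), with f(0,0) = 0, where the two mixed second partials at the origin disagree:

    def f(x, y):
        return x*y*(x*x - y*y)/(x*x + y*y) if (x, y) != (0.0, 0.0) else 0.0

    h, k = 1e-8, 1e-4  # inner and outer finite-difference steps, h << k
    fx = lambda y: (f(h, y) - f(-h, y)) / (2*h)  # approximates f_x(0, y)
    fy = lambda x: (f(x, h) - f(x, -h)) / (2*h)  # approximates f_y(x, 0)

    print((fx(k) - fx(-k)) / (2*k))  # ~ -1.0: differentiate f_x in y at 0
    print((fy(k) - fy(-k)) / (2*k))  # ~ +1.0: differentiate f_y in x at 0

Both mixed partials exist everywhere, but they are discontinuous at the origin, which is exactly the extra hypothesis the correct proofs need.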

Since the human life span is 70 years, I face an urgent dilemma. Do I master the technique needed to understand the proof (fun) or do I crack on and build things (satisfaction)? Pure mathematicians plan on constructing long and intricate chains of reasoning; a small error can get amplified into an error that matters. From a contradiction one can prove anything. Applied mathematics gets applied to engineering: build a prototype and discover problems with tolerances, material impurities, and annoying edge cases in the mathematical analysis. An error will likely show up in the prototype. Pure? Applied? It is really about the ticking of the clock.


I think that the problem is that theoretical real analysis is often presented like it's nothing but a validation of things people already knew to be true -- but maybe it's not?

The example you gave concerns differentiation. Differentiation is messy in real analysis because it's messy in numerical computing. How real analysis fixes this mess parallels how numerical computing must fix the mess. How do we make differentiation - or just derivatives, perhaps - computable?

The rock-bottom condition for computability is continuity: all discontinuous functions are uncomputable. It turns out that having the second partial derivatives f_{xy} and f_{yx} be continuous is sufficient to make your theorem hold. They wouldn't even be computable otherwise!

One of the proofs provided uses integration. In numerical contexts, it is integration that is considered "easy" and differentiation that is considered hard. This is totally backwards from symbolic calculus.

The article also mentions Distribution Theory. This is important in the theory of linear PDEs. I suspect it is implicit in the algorithmic theory as well, whether practitioners have spelled this out or not. This is a theory that makes the differentiation operator itself computable, but at the cost of making the derivatives weaker than ordinary functions. How so? On the one hand, it allows us to obtain things like the Dirac delta as derivatives, but those aren't even functions. On the other hand, these objects behave like functions - let's say f(x,y) - but we can't evaluate them at points; instead, we can take their inner product with test functions, which we can use to approximate evaluation. This is important because PDE solvers may only be able to provide solutions in the weak, distribution-theoretic sense.
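
A rough sketch of "approximate evaluation by pairing with test functions" (mine, not the article's): the distributional derivative of the Heaviside step H is the Dirac delta, and pairing it with a smooth bump phi recovers phi(0) through the defining rule <H', phi> = -<H, phi'>.

    import math

    def phi(x, a=1.0):
        # Standard smooth bump test function, supported on (-a, a).
        return math.exp(-1.0 / (1.0 - (x/a)**2)) if abs(x) < a else 0.0

    def phi_prime(x, eps=1e-6):
        return (phi(x + eps) - phi(x - eps)) / (2.0 * eps)

    # <H', phi> = -<H, phi'> = -(integral of phi' over x >= 0),
    # approximated by a Riemann sum on [0, 1].
    n = 100_000
    dx = 1.0 / n
    pairing = -sum(phi_prime(i * dx) * dx for i in range(n))
    print(pairing, phi(0.0))  # both ~ exp(-1) = 0.36787...

We never evaluate H' at a point (there is nothing to evaluate), yet the pairing extracts phi(0) in the limit.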


Do I understand properly that in a different universe distributions could have been called prefunctions?


A distribution is a function on the space of test functions.


OK, so if we have a distribution D (less nice than the average function) and a test function T (nicer than the average function), we have ⟨D,T⟩ = c: ℂ, so ⟨D,—⟩: test fn→ℂ and ⟨—,T⟩: distribution→ℂ ?


Wait i thought functions are predistributions..

[My bad, it was Matvei, not Manuel, no idea how i mixed that up..

Check out his children's books, as well as

https://archive.is/eaYRs

Note how the independent diagonals are what i consider interesting]


if there are no interiors (maybe edges, but no faces nor volumes) then the vertices on the diagonals are truly independent: e.g. QM on small scales, GR on large ones.

[I'm currently pondering how the "main diagonal" of a transition matrix provides objects, while all the off-diagonal elements are the arrows. This implies that by rotating into an eigenframe (diagonalising), we're reducing the diversion to -∞ (generalised eigenvectors have nothing to lose but their Jordan chains) and hence back in the world of classical boolean logic?]

EDIT: https://mmozgovoy.dev/posts/solar-matter/


[Righhht, maybe you can excite me even more by relating this to quantales?? Or maybe expand on fns vs distributions a bit more?]

L: quantal (quasiparticles)


Is this a sufficient relation: rel'ns (matrices which are particularly "irreducible"/"simple" in that they've forgotten their weights to the point where these are either identity or zero) are concrete models of abstract quantales?

Lagniappe: https://www.sciencedirect.com/science/article/pii/0022404993...

EDIT: I'm afraid I'm just learning fns vs distributions (curried fns?) myself.

I wonder how quasiparticles might relate to ideals (nuclei in quantale-speak I believe)? Note that something very much like quasiparticles is how regexen turn exponential searches into polynomial...


REDIT(s)

I ought to get overly emotional (in a bittersweet way) about all this, and i almost did, but Teddy reminded me to stay ataraxic (i.e. keeping his role in formulating key management policies purely in the cortex)

thank you for that blogpost about MPB (it's one small step for fuzzablekind!)

[as well as the nuclei hint, more tk]


thank you for EC ... as to thermidorian reactions, I haven't read tRB yet but it's on the slush pile now (and I have an ice pick —albeit full length— for set dressing while I read).


oops: Eispickel => ice axe, not ice pick


A distribution is not a function. It is a continuous linear functional on a space of functions.

Functions define distributions, but not all distributions are defined that way, like the Dirac delta or integration over a subset.


A functional is a function.


The term "function" sadly means different things in different contexts. I feel like this whole thread is evidence of a need for reform in maths education from calculus up. I wouldn't be surprised if you understood all of this, but I'm worried about students encountering this for the first time.


Don’t know if you are a mathematician or not, but mathematically speaking “function” has a definition that is valid in all mathematical contexts. A functional clearly meets the criteria to be a function, since being a function is part of the definition of being a functional.


The situation is worse than I thought. The term "function", as used in foundations of mathematics, includes functionals as a special case. By contrast, the term "function", as used in mathematical analysis, explicitly excludes functionals. The two definitions of the word "function" are both common, and directly contradict one another.


> By contrast, the term "function", as used in mathematical analysis, explicitly excludes functionals. The two definitions of the word "function" are both common, and directly contradict each other.

This is incorrect. In mathematics there is a single definition of function. There is no conflict or contradiction. In all cases a function is a subset of the Cartesian product of two spaces that satisfies a certain condition.

What changes from subject to subject is what the underlying spaces of interest are.


> What changes from subject to subject is what the underlying spaces of interest are.

I'm not sure I understand what you mean here. I need some clarification. How does this have any bearing on whether functionals count as functions or not? What are the "underlying spaces of interest" in this example?

In some trivial way, every mathematical object can be seen as a function. You can replace sets in axiomatic set theory with functions.


Everything I wrote was assuming set theory as the foundations for mathematics and applies only to that setup. At any rate, a functional is a function, since the definition starts with: a functional is a function from…

Some books will say: a functional is a linear map….

Note that a linear map is a function.


You genuinely don't know what you're talking about. The word "function" means different things in different areas. So does the word "map" or "mapping". In analysis, what you personally call a "function" instead falls under the term "mapping". In foundations - which is a different area with incompatible terminology - the terms "mapping" and "function" are defined to mean the same thing.

This situation is a consequence of how mathematicians haven't always been sure how to define certain concepts. See "generating function" for yet another usage of the word "function" that's in direct contradiction with the last two. So that's three incompatible usages of the term "function". All this terminology goes back to the 1700s when mathematics was done without the rigour it has today.

I find it aggravating how you're so confidently wrong. I hope it's not on purpose.

[edit] [edit 2: Removed insults]


I am looking at the whole development of this thread with amusement, but I also find it somewhat shocking.

I see that you are desperately trying to distinguish "foundational" and "analysis" contexts from each other. If you are writing a book about analysis, it might be helpful to clarify that in this context you reserve "function" for mappings into ℂ or ℝ, for example [1] defines "function" exclusively as a mapping from a set S to ℝ (without any further requirements on S such as being a subset of ℝⁿ). Note that even under this restricted definition of function, a distribution still is a function.

In a general mathematical context, "function" and "mapping" are usually used synonymously. It is just not the case that such use is restricted to "foundations" only.

It seems to me that squabbles about issues like this are becoming more frequent here on HN, and I am wondering why that is. One hypothesis I have is that there is an influx of people here who learn mathematics through the lens of programs and type theory, and that limits their exposure to "normal" mathematics.

[1] Undergraduate Analysis, Second Edition, by Serge Lang


I learned mathematics the regular way. So you're wrong - and not just about this.

> I see that you are desperately trying to distinguish "foundational" and "analysis" contexts from each other

They literally are different. The proof is all the people here saying that distributions aren't functions, while displaying a clear understanding of what a distribution is. Maybe no one's "wrong" as such, if they're defining the same word differently.

I think you're the naive one here. Terminology is used inconsistently, and I tried to simplify the dividing line between different uses of it. I agree it's inaccurate to say it's decided primarily by Foundations vs Analysis, but I'm not sure how else to slice the pie. It's like how the same word can mean slightly different things in French and English. I agree it's quibbling, but it's harder to teach maths to people if these False Friends exist but don't get pointed out.

I never expected some obsessive user to make 6 different replies to one of my comments. Wow. This whole thread was a bit silly, and someone's probably going to laugh at it. I need to take another break from this site.


> I never expected some obsessive user to make 6 different replies to one of my comments. Wow.

You have 6 posts in the thread started by my top comment. I had multiple replies to one of your posts because HN requires one to wait a while to reply and I was in a hurry. The order of posts doesn’t matter. At least not to me.

Insinuating I’m obsessive has a negative connotation. Along with outright insults, such comments make you look bad and unreasonable.


Terry Tao in one of his analysis books writes:

Functions are also referred to as maps or transformations, depending on the context.

This is after defining a function in essentially the same way I did.


Just to make clear, so you are saying Serge Lang is wrong, too? And as proof you cite various anonymous HN users, most of them heavily downvoted?

> I agree it's inaccurate to say it's decided primarily by Foundations vs Analysis, but I'm not sure how else to slice the pie.

Seems you agree with me after all.

> I agree it's quibbling, but it's harder to teach maths to people if these False Friends exist but don't get pointed out.

A distribution is a function, but considered on a different space.

It is even harder to teach math to people by insisting that the above fact is wrong. Schwartz got a Fields Medal for this insight.


It’s strange to hear a fellow mathematician say that if I’m in a set theory class then a functional is a function but isn’t one in functional analysis. In Rudin’s Functional Analysis book he proves that linear mappings between topological vector spaces are continuous if they are continuous at 0. I’ve never heard of someone believing that a continuous mapping is not a function.

Terry Tao writes in his analysis book:

Functions are also referred to as maps or transformations, depending on the context.

Tao certainly knows more about this than I ever will.


Yeah, the whole argument felt somewhat unhinged and silly. It is fine to point out that sometimes "function" is used in a more specific manner than "mapping", particularly in analysis, but I doubt any mathematician would think that a functional is not a function, in a general context such as a HN comment.


> You genuinely don't know what you're talking about. .... I find it aggravating how you're so confidently wrong.

This is a fine example of irony.

Let V be a vector space over the reals and L a functional. Let v be a particular element of V. L(v) is a real number. It is a single value. L(v) can't be 1.2 and also 3.4. Thus L is a function.

A function is simply a subset of the product of two sets with the property that if (a,b) and (a, c) are in this subset then b=c.
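
That condition is mechanical enough to check directly; a tiny illustrative sketch (mine) over finite relations:

    def is_function(rel):
        # rel is a set of (a, b) pairs; a function admits at most
        # one second component for each first component.
        seen = {}
        for a, b in rel:
            if a in seen and seen[a] != b:
                return False
            seen[a] = b
        return True

    print(is_function({(1, 2), (2, 3)}))  # True
    print(is_function({(1, 2), (1, 3)}))  # False: 1 maps to both 2 and 3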

Can you find a functional that does not meet this criterion? If so, then you have an object L such that L maps v to b and also maps v to c, with b and c being different elements.

Find me a linear map that does not meet the definition of function. Give an example of a functional in which the functional takes a given input to more than one element of the target set.

I think you are not a mathematician, and you also don't appear to understand that a word can have different meanings based on context. A "generating function" isn't the same thing as a "function". Notice that "generating" is paired with "function" in the first phrase.

Example: Jellyfish is not a jelly and not a fish. Biologists have got it all wrong!


I'll try one last time.

> I think you are not a mathematician

Guess again.

> Example: Jellyfish is not a jelly and not a fish. Biologists have got it all wrong!

You have a problem with reading comprehension. I never said any mathematician was wrong.

Think about namespaces for a moment, like in programming. There are two namespaces here: The analysis namespace and the foundations namespace.

In either of those two namespaces, the word "mapping" means what you're describing: an arbitrary subset F of A×B for which every element a ∈ A occurs as the first component of a unique element (x,y) ∈ F.

But the term "function" has a different meaning in each of the two namespaces.

The word "function" in the analysis namespace defines it to ONLY EVER be a mapping S -> R or S -> C, where S is a subset of C^n or R^n. The word "function" is not allowed to be used - within this namespace - to denote anything else.

The word "function" in the foundations namespace defines it to be any mapping whatsoever.

Hopefully, now you'll get it.


If one has a “thing” that “maps” elements of one set to another that satisfies the condition I previously gave, then that thing is a function. Every functional satisfies that definition. Therefore every functional is a function.


> [edit] I've finally blown it. You're a moron. Your definition of "function" as some subset of AxB is how it's defined in foundations. It's not how it's defined in analysis. In analysis, your definition would describe the term "mapping". What a crackpot and idiot. I'm done wasting time and sanity on this.

Interesting. So you think there are functions in real analysis that are studied that don't meet the definition I gave? Is there a functional that does not meet the definition I gave?

In all contexts a function is a subset of the product of two sets that meets a certain condition. Anything that does not meet this definition is not called a function.

Every functional meets the definition of function.


The word "function" in the analysis namespace defines it to ONLY EVER be a mapping S -> R or S -> C, where S is a subset of C^n or R^n. The word "function" is not allowed to be used - within this namespace - to denote anything else.

In real analysis one is interested in functions from R^n to R, but "function" is not defined to be exclusively a map from R^n to R; it's just that these are the types of functions they wish to study and care about.

No mathematician can possibly think function is anything other than a subset of the product of two spaces that meets a certain condition.


In general, instead of resorting to name-calling it's best to just walk away. It makes you look bad and unreasonable.


Try composing two distributions.


Try composing f : A -> B with g : A -> B, for A ≠ B. Still, f and g are functions. So, what exactly is your point?


What is a delta function at a composed with a delta function at b ≠ a?


I’m not sure if I am mathematically sophisticated enough to follow along but I’ll try. This chain of thought reminds me of the present state of cryptography, which is built on unproven assumptions about the computational hardness of certain problems. Meanwhile Satoshi Nakamoto hacks together some of these cryptographic components into a novel system for decentralized payments with a few hand-wavy arguments about why it will work and it grows into a $1+ trillion asset class.


The innovation in Bitcoin is not the cryptography but the game theory at work. For example, is it in a miner's interest to destroy the system or to continue mining? There are theoretical attacks at around 20%, not 51%. A state actor could also attack the system if it were willing to invest enough resources.


Genuinely curious since I’d only heard of the “51% attack” — what happens around 20%?


Please check "Selfish Mining: A 25% Attack Against the Bitcoin Network" [1] and scientific studies such as [2].

[1] https://www.reddit.com/r/Bitcoin/comments/1pv8ty/selfish_min...

[2] https://arxiv.org/pdf/1507.06183


yes the cool thing about tech is that you don't have to know why it will work or even how, just so long as it does.


I took a look at the book a while ago, and I like how it treats abstraction as its guiding theme. For my project Practal (https://practal.com), I've recently pivoted to a new tagline which now includes "Designing with Abstractions". And I think that points to how to resolve the dilemma you point out between pure and applied: we soon will not have to decide between pure and applied mathematics. Designing (≈ applied math) will be done the most efficient way by finding and using the right abstractions (≈ pure math).


The chains of reasoning are only long and intricate if you trace each result back to axiomatics. Most meaningful results are made up of a handful of higher-level building blocks -- similar to how software is crafted out of modules rather than implementing low-level functionality from scratch each time (yes, similar but also quite different).



That's a fantastic essay - I feel like it's the tip of a rich vein that I'm looking forward to exploring. Thanks for drawing it to my attention. I can't wait to get on to studying abstract algebra and categories properly for myself, which is probably about a year off at this point.


If you want to study categories from a relatively foundational point of view, the author, McLarty, also has a very readable book called Elementary Categories, Elementary Toposes.


Sounds perfect. Thank you very much for the tip.


Literally the same:

A type is a theorem and its implementation a proof, if you believe that Curry-Howard stuff.

We “prove” (implement) advanced “theorems” (types) using already “proven” (implemented) bodies of work rather than returning to “axioms” (machine code).
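
A toy rendering of the correspondence in Python type annotations (names are mine, purely illustrative): the type is the "theorem" that A and B implies B and A, and the body is its "proof".

    from typing import Tuple, TypeVar

    A = TypeVar("A")
    B = TypeVar("B")

    def and_comm(p: Tuple[A, B]) -> Tuple[B, A]:
        # The implementation is the proof: given evidence for A and B,
        # produce evidence for B and A.
        a, b = p
        return (b, a)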


No, it is not the same; CH is just a particular instance of it, much like "shape" is not the same thing as "triangle".

