movpasd's comments | Hacker News

This is covered in the article. The uncertainties brought about by zonal pricing are not really worth it, given that the main obstacle is the need for network reinforcement. The UK is just not that big! Introducing a complicated market reform that will be rendered obsolete within a few decades doesn't make sense.


Everyone's different. Some people genuinely thrive under the conditions you're describing, others don't like it but are able to put up with it no problem, and others can't stand it but are forced to.

The perspective I've found most useful is this. There is a constellation of correlated "autistic traits", which everyone has to a degree, but which, like most traits, become disabling if turned up too much. "Autism" is a term describing that state. So it is not so much a particular switch that can be turned on or off, nor even a slider between "not autistic" and "very autistic", but more a blurry region at the outskirts of the multidimensional bell curve of the human experience.

People on the furthermost reaches of this space are seriously, unambiguously disabled, by any definition. They're what people traditionally imagine as "autistic". But the movement in psychiatry has been to loosen diagnostic criteria to better capture the grey areas. Whether this is a good or a bad thing is a social question, not a scientific one, in my opinion. Most of us want to live in a society that supports disabled people, but how many resources to allocate to that is a difficult question where our human instincts seem to clash with the reality of living in a modern society.

On your last paragraph: I think this is a serious problem with the discourse around neuroatypicality today. My opinion is that the important thing is that we become more accepting and aware of the diversity of the human experience, and that this is a necessary social force to balance the constant regression to the mean imposed by modernity. If that's the case, then drawing a border around any category of person, staking a territorial claim to a pattern of difficulty the group experiences, and refusing to accept that the pattern exists beyond it is just unfair; it's giving in to defensiveness.


> They're what people traditionally imagine as "autistic". But the movement in psychiatry has been to loosen diagnostic criteria to better capture the grey areas.

There has also been a change that reclassified what we would previously have termed Asperger's Syndrome as Autism. To be clear, AS was always considered to be a form of, or closely related to, Autism, but that change in language does mean we've had a big shift in what counts as Autism medically and in what the public pictures when they think of Autism.


For myself, I have low energy if I don't eat breakfast, but there is essentially no hunger signal for me in the morning. Over time I've settled on eating the plainest breakfast I can.

I think this has a lot to do with the 9–5 and my natural sleep cycle being delayed compared to that.


I think this comes back to the idea of having a "UX model" that underlies the user interface, laying out its affordances clearly in code. In a modern application you're going to have complex UX logic and state that's distinct from the domain model as such and that deserves representation in the code.

In an MVC conception, the UX model becomes a top layer of abstraction over the domain model. It's a natural place for it to sit, because for modern apps users expect "more than forms", i.e.: different ways of cutting up the domain data, presented in different ways, ...

This is something that component-based frontend frameworks struggle with a bit: the hierarchical layout of the DOM doesn't always reflect the interrelations in data between parts of a user experience. Prop drilling is just a reflection of this fact, and perhaps it's why we're seeing a rise in the use of state stores. It's not really about state; that's just the technical symptom. It's really about providing a way of defining an (in-browser) data model based on the user experience itself rather than on the particularities of the UI substrate.
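
As a rough sketch of that idea (in Python for neutrality; the names Ticket and TicketBoardUX are hypothetical), the UX model holds user-experience state such as selection and filtering as its own layer over the domain model, instead of scattering it through the component tree:

    from dataclasses import dataclass, field

    # Domain model: what the data is.
    @dataclass
    class Ticket:
        ticket_id: int
        title: str
        closed: bool = False

    # UX model: state that belongs to the user experience itself
    # (selection, filtering), independent of how components are nested.
    @dataclass
    class TicketBoardUX:
        tickets: list[Ticket] = field(default_factory=list)
        selected_id: int | None = None
        show_closed: bool = False

        def visible_tickets(self) -> list[Ticket]:
            return [t for t in self.tickets if self.show_closed or not t.closed]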


At the risk of sounding woo, I find some parallels in how LLMs work to my experiences with meditation and writing. My subjective experience of it is that there is some unconscious part of my brain that supplies a scattered stream of words as the sentence forms --- without knowing the neuroscience of it, I could speculate it is a "neurological transformer", some statistical model that has memorised a combination of the grammar and contextual semantic meaning of language.

The difference is that the LLM is _only that part_. In producing language as a human, I filter these words, I go back and think of new phrasings, I iterate --- in writing consciously, in speech unconsciously. So rather than a sequence it is a scattered tree filled with rhetorical dead ends, pruned through interaction with my world-model and other intellectual faculties. You can pull on one thread of words as though it were fully-formed already as a kind of Surrealist exercise (like a one-person cadavre exquis), and the result feels similar to an LLM with the temperature turned up too high.

But if nothing else, this highlights to me how easily the process of word generation may be decoupled from meaning. And it serves to explain another kind of common human experience, which feels terribly similar to the phenomenon of LLM hallucination: the "word vomit" of social anxiety. In this process it suddenly becomes less important that the words you produce are anchored to truth, and instead the language-system becomes tuned to produce any socially plausible output at all. That seems to me to be the most apt analogy.


I agree with the overall message, but I will say that there is still a great deal of value in memorisation. Memorising things gives you more internal tools to think in broader chunks, so you can solve more complicated problems.

(I do mean memorisation fairly broadly, it doesn't have to mean reciting a meaningless list of items.)


That's also my view. It's clear that these models are more than pure language algorithms. Somewhere within the hidden layers are real, effective working models of how the world works. But the power of real humans is the ability to learn on-the-fly.

Disclaimer: These are my not-terribly-informed layperson's thoughts :^)

The attention mechanism does seem to give us a certain adaptability (especially in the context of research showing chain-of-thought "hidden reasoning") but I'm not sure that it's enough.

Thing is, earlier language models used recurrent units that would be able to store intermediate data, which would give more of a foothold for these kinds of on-the-fly adjustments. And here is where the theory hits the brick wall of engineering. Transformers are not just a pure machine learning innovation; the key is that they are massively scalable, and my understanding is that part of this comes from the _lack_ of recurrence.
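
To make the recurrence point concrete, here is a minimal numpy sketch (illustrative only, not a real model; all function and weight names are made up): the recurrent pass has to walk the sequence step by step carrying a hidden state, while the attention pass is a few matrix multiplications that parallelise across the whole sequence.

    import numpy as np

    def rnn_pass(xs, W_h, W_x):
        # Sequential: step t cannot start until step t-1 has produced h.
        h = np.zeros(W_h.shape[0])
        for x_t in xs:
            h = np.tanh(W_h @ h + W_x @ x_t)
        return h

    def attention_pass(X, W_q, W_k, W_v):
        # Parallel: every position attends to every other position at once;
        # no hidden state is carried between positions.
        Q, K, V = X @ W_q, X @ W_k, X @ W_v
        scores = Q @ K.T / np.sqrt(K.shape[1])
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        return weights @ V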

I guess this is where the interest in foundation models comes from. If you could take a codebase as a whole and turn it into effective training data, you could adjust the weights of an existing, more broadly-trained model. But is this possible with a single codebase's worth of data?
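
The data-preparation half of that is easy to sketch (hypothetical helper, no particular framework assumed): walk the repository and slice its files into fixed-length token windows. The open question is whether a single codebase produces enough such windows to meaningfully shift the weights.

    from pathlib import Path

    def codebase_to_chunks(repo_root, tokenize, window=2048, stride=2048):
        # `tokenize` is assumed to map a string to a list of token ids.
        chunks = []
        for path in Path(repo_root).rglob("*.py"):
            tokens = tokenize(path.read_text(errors="ignore"))
            for start in range(0, max(len(tokens) - window, 1), stride):
                chunks.append(tokens[start:start + window])
        return chunks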

Here again we see the power of human intelligence at work: the ability to quite consciously develop new mental models even given very little data. I imagine this is made possible by leaning on very general internal world-models that let us predict the outcomes of even quite complex unseen ("out-of-distribution") situations, and that gives us extra data. It's what we experience as the frustrations and difficulties of the learning process.


In Python 3.12 syntax, you can use

    from uuid import UUID
    type UserId = UUID


`type UserId = UUID` creates a TypeAlias, not the same thing (from a type checker's point of view) as a NewType [1].

[1] https://typing.python.org/en/latest/spec/aliases.html
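
A small sketch of the difference (hypothetical UserId; behaviour as specified for conforming type checkers): the alias is interchangeable with UUID, while the NewType is treated as a distinct type and must be wrapped explicitly.

    from typing import NewType
    from uuid import UUID, uuid4

    type UserIdAlias = UUID           # alias: interchangeable with UUID
    UserId = NewType("UserId", UUID)  # distinct type to the checker

    def fetch_user(user_id: UserId) -> None: ...

    raw: UUID = uuid4()
    fetch_user(raw)          # rejected by a type checker: UUID is not UserId
    fetch_user(UserId(raw))  # accepted: explicit wrapping required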


I think it can be useful to think of the parsing and logic parts both as modules, with the parsing part interfacing with the outside world via unstructured data, and the parsing and logic parts interfacing via structured data, i.e.: the validated types.

From that perspective, there is a clear trade-off on the size of the parsing–logic interface. Introducing more granular, safer validated types may give you better functionality, but it forces you to expand that interface and create coupling.

I think there is a middle ground, which is that these safe types should be chunked into larger structures that enforce a range of related invariants and hopefully have some kind of domain meaning. That way, you shrink the conceptual surface area of the interface so that working with it is less painful.
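
A minimal sketch of that middle ground (hypothetical names): one validated structure bundles several related invariants behind a single domain-meaningful type, so the parsing–logic interface stays narrow.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Recipient:
        # Bundles related invariants behind one domain-meaningful type
        # instead of many tiny wrapper types.
        name: str
        email: str

    def parse_recipient(raw: dict) -> Recipient:
        name = str(raw.get("name", "")).strip()
        email = str(raw.get("email", "")).strip()
        if not name:
            raise ValueError("name must be non-empty")
        if "@" not in email:
            raise ValueError("email must contain '@'")
        # Downstream logic can rely on these invariants without re-checking.
        return Recipient(name=name, email=email)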


I think you and the article are referring to abstractions over different concerns.

The concern you're talking about is the actual access to the data. My understanding of the article is that it's about how caching algorithms can abstract the concern of minimising retrieval cost.

So in some ways you're coming at it from opposite directions. You're talking about a prior of "disk by default" and saying that a good abstraction lets you insert cache layers above that, whereas for the author the base case is "manually managing the layers of storage".
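
A minimal sketch of that "insert a cache layer above it" reading (hypothetical class; it mirrors the get()/set() interface used in the snippets below):

    class CachedStorage:
        # Wraps slow storage behind the same get() interface,
        # keeping fast storage in front of it.
        def __init__(self, fast_storage, slow_storage):
            self.fast_storage = fast_storage
            self.slow_storage = slow_storage

        def get(self, data_id):
            if data_id in self.fast_storage:
                return self.fast_storage.get(data_id)
            data = self.slow_storage.get(data_id)
            self.fast_storage.set(data_id, data)
            return data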


The language used is seriously confusing here.

Algorithms can't really abstract anything since they are, well, just algorithms (formal descriptions of how a computation should be done).

Looking at the author's examples again, I think most everybody would say that caching is used in both:

  if data_id in fast_storage:
      return fast_storage.get(data_id)
  else:
      data = slow_storage.get(data_id)
      fast_storage.set(data_id, data)
      return data

and

  # Uses fast storage or slow storage just like above, but behind the get() method.
  return storage.get(data_id)

The first one does not build an abstraction over storage and the second one does, but both are "caching" data internally.

While there are generic implementations of caching algorithms and we can consider those abstractions, "caching" is a wider term than those implementations, and is specifically not an abstraction (the fact that there is a caching implementation that abstracts something does not make all caching an abstraction).

Edit: Let me also point out that "abstract the concern of minimising retrieval cost" is not caching — I can say that e.g. a simple interface with a FastGet(id) method does the former, and it need not use any caching if the underlying structure is fast enough and e.g. directly in memory.


This is correct, I appreciate you for putting it so coherently :). I think I didn’t make it clear enough in the piece that I’m coming from a stance of fast access being table stakes, and the question being about how that’s accomplished.


"Caching" is an idea of storing a result of an expensive computation in storage that is faster to get from than doing the original computation (in very generic computer terms, computation can be simply fetching from the network or slower local storage).

What you describe as "caching algorithms" are not really caching algorithms, but cached object lifetime management algorithms (LRU, LFU...).
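
For instance, a minimal LRU lifetime-management sketch (illustrative only) decides only what to keep, separately from where things are stored or how they are computed:

    from collections import OrderedDict

    class LRUCache:
        # Keeps at most `capacity` items, evicting the least recently used.
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()

        def get(self, key):
            if key not in self.items:
                return None
            self.items.move_to_end(key)         # mark as most recently used
            return self.items[key]

        def set(self, key, value):
            self.items[key] = value
            self.items.move_to_end(key)
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # evict least recently used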

"Abstraction" is a higher level, simplified view of a set of concepts, yet caching is a single concept. See eg. https://en.wikipedia.org/wiki/Abstraction_(computer_science)

It sounds like you are trying to redefine both what "caching" means (tying it to implementations of particular algorithms) and what "abstraction" means.

We should be very deliberate with the language we use, and our main goal should be to make it simpler to understand, not harder — I believe you are doing the latter here.

