The issue is that a percentage of a Celsius value doesn't mean what it seems to. For example, going from 1°C to 2°C is a "100% increase", but it is only 1 percentage point of the way from freezing to boiling.
You could say things like that with anything in percentages? 100% increase in your pension from 100k to 200k is only 10% (increase, to 20% total) of your target 1M, or whatever.
100k to 200k is a 100% increase in absolute, but a 10 percentage point increase to your target of 1M. The difference between the example you give and the one in the article is that 0 in the case of your pension meaningfully refers to its emptiness, but in the case of Celsius, it has no "emptiness" interpretation.
The equivalent would be saying that going from 600k to 700k was a 100% increase... compared to 500k.
It's not completely meaningless, to be fair. Saying 10°C to 20°C is a 100% increase has the meaning of "it's twice as far from freezing", which isn't totally meaningless (kind of like saying Everest is twice as high as Mont Blanc, which really means "its summit is twice as far from sea level").
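A quick sketch of the arithmetic in Go (my own illustration, using the 0-100°C freezing-to-boiling range from the example above):

```go
package main

import "fmt"

func main() {
	before, after := 10.0, 20.0

	// Relative change: (after-before)/before. For 10°C -> 20°C this is
	// "a 100% increase", i.e. twice as far from the 0°C freezing point.
	relative := (after - before) / before * 100

	// On the 0-100°C scale from freezing to boiling, the same change
	// moves only 10 percentage points along the range.
	points := (after - before) / (100.0 - 0.0) * 100

	fmt.Printf("relative: %.0f%%, points: %.0f\n", relative, points)
}
```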
Yes? I didn't see any link in the comment I replied to between 100% and 100°C besides them happening to be the same number - I already changed the target to 1M, and changing it to 50k would make no difference either.
If the argument was that there's something special about 100% corresponding to a quantity of 100, then... no? I don't really know where to go from there; what I said still holds with a 100k target, but I'm not going to be able to give 'another' example where the quantity 100 is meaningful, because it isn't for degrees either. It's the freezing point at 0 that makes it work better for centigrade than Fahrenheit, imo.
This is actually a deliberate design choice, which the breathtakingly short JSON standard explains quite well [0]. The designers deliberately didn't introduce any semantics and pushed all that to the implementors. I think this is a defensible design goal. If you introduce semantics, you're sure to annoy someone.
There's an element of "worse is better" here [1]. JSON overtook XML exactly because it's so simple and solves for the social element of communication between disparate projects with wildly different philosophies, like UNIX byte-oriented I/O streams, or like the C calling conventions.
It is the same struggle you can find in any language with private/public props. The stream he wants to read from is actually just a buffer that has been wrapped as a stream, and he’s having a hard time directly accessing the buffer through its wrapper. He could stream it into a new temporary buffer, but he’s trying to avoid that since it’s wasteful. I’ve had the same problem in C++.
But the other side of this is that there's a contract violation going on: []byte can be mutated, but io.Reader cannot.
When I pass io.Reader I don't expect anything underneath it to be changed except the position. When I pass []byte it might be mutated.
So really solving this requires a whole new type - []constbyte or something (in general Go really needs stronger immutability guarantees - I've taken to putting Copy methods on all my structs just so I can get guaranteed independent copies, but I have to do it manually).
there is also the specific vs general trade-off: the general (io.Reader) being more flexible, while providing fewer opportunities for optimization. vice versa with the specific—be it []byte, or even []constbyte. i think it is just an inherent struggle with all abstractions.
> But the other side of this is that there's a contract violation going on: []byte can be mutated, but io.Reader cannot.
> When I pass io.Reader I don't expect anything underneath it to be changed except the position. When I pass []byte it might be mutated.
Where is it written in the contract that the underlying data source of io.Reader is immutable? I do not believe that is true. Even when io.Reader is backed by os.File, the OS may allow the file to be modified while you are reading it, possibly by the same process, using the io.Writer interface.
So while it's true that you certainly wouldn't expect an io.Reader itself to be mutable, an object that implements io.Reader certainly can expose interfaces that allow the underlying data source to be mutated. After all, bytes.Buffer is explicitly one such io.Reader; it has a Bytes() method:
> "The slice aliases the buffer content at least until the next buffer modification, so immediate changes to the slice will affect the result of future reads."
It's documented and totally allowed. Doesn't mean you should do it, but if an io.Reader that's backed by a buffer wants to expose its underlying buffer it isn't an issue of breaking contracts at the very least.
Another alternative to exposing an interface that returns []byte is getting clever with interfaces. Go sometimes will try to turn an io.Reader into an io.WriterTo: this can also avoid unnecessary copies.
> So really solving this requires a whole new type - []constbyte or something (in general Go really needs stronger immutability guarantees - I've taken to putting Copy methods on all my structs just so I can get guaranteed independent copies, but I have to do it manually).
A concept of constness would be nice, but it's trickier than it seems. An immutable type like string is fairly straightforward. Immutable constant values of primitive types, also straightforward. Of course, Go has both. Constness as in const references to mutable values is weirder: "const" sounds like it means something is immutable, since it does mean that in many contexts, but what it really means is that *you* can't mutate it; it might still be mutated behind your back, depending on whether the actual value is const or not. I think that's at least a little unfortunate.
What I think I want is multiple concepts:
1. Variables that can't be re-assigned by default, akin to JavaScript `const` declarations.
2. Immutable const values of composite types, expanding the const primitives Go has today.
3. References that are immutable by default, mirroring Rust. Ideally you could take an immutable reference to typed constants, including composite const values, pushing them into rodata like you'd expect. To me this is better than C/C++'s concept of constness because it feels more intuitive when mutability is the exception. Marking references const explicitly looks too similar to defining constants, making it easy to confuse immutability of the value with merely holding an immutable reference to a mutable value; having a `mut` marker instead makes the distinction clearer.
Can this be done in a theoretical Go-like programming language? Maybe... Will any of it ever be done? Probably not. The most feasible is probably 2., but even that would be pretty awkward due to the lack of immutable references; you'd have to always take a copy to actually use the value.
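For reference, Rust's surface for points 1 and 3 looks roughly like this (a sketch for comparison, not a proposal for Go):

```rust
fn main() {
    // (1) Bindings can't be re-assigned unless declared with `mut`.
    let x = 1;
    // x = 2; // error: cannot assign twice to immutable variable

    let mut y = 1;
    y += 1; // fine: mutability was requested explicitly

    // (3) References are immutable by default; mutation through a
    // reference needs `&mut`, visible at both definition and call site.
    fn bump(n: &mut i32) {
        *n += 1;
    }
    bump(&mut y);

    println!("{x} {y}"); // prints "1 3"
}
```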
Not 100% sure what is meant here. io.Reader itself only exposes the ability to do read-only access, but if you upcast you are not limited to only io.Reader's functionality. Now of course if you had a function that upcasted e.g. io.Reader to io.Writer and wrote to the provided stream, that would be weird. On the other hand, there is no such issue with merely grabbing an interface { Bytes() []byte } out of an io.Reader, calling the Bytes() method, and then not modifying the returned buffer. I don't think that introduces any hazards or contract violations and as best as I can tell is idiomatic Go and not discouraged.
I think you articulated the actual point of the OP. It isn't so much about creating something better than anyone else; it's the feeling that your contribution to the world means something.
AI can somehow cause one to react with a feeling of futility.
Engaging in acts of creation, and responding to others' acts of creation, seems a way out of that feeling.
It's a surprising choice that Rust made to have the unit of compilation and unit of distribution coincide. I say surprising, because one of the tacit design principles I've seen and really appreciated in Rust is the disaggregation of orthogonal features.
For example, classical object-oriented programming uses classes both as an encapsulation boundary (where invariants are maintained and information is hidden) and a data boundary, whereas in Rust these are separated into the module system and structs separately. This allows for complex invariants cutting across types, whereas a private member of a class can only ever be accessed within that class, including by its siblings within a module.
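A small sketch of what that buys you (module and function names are invented for illustration):

```rust
// The module, not the struct, is the privacy boundary: `balance` is
// private outside `bank`, yet the sibling function `transfer` inside
// the module can touch both accounts' fields to maintain an invariant
// that cuts across values. In class-based OOP this access would have
// to live inside the class itself.
mod bank {
    pub struct Account {
        balance: i64, // private outside `bank`
    }

    pub fn open(balance: i64) -> Account {
        Account { balance }
    }

    pub fn transfer(from: &mut Account, to: &mut Account, amount: i64) {
        // Invariant maintained across two values at once.
        from.balance -= amount;
        to.balance += amount;
    }

    pub fn balance(a: &Account) -> i64 {
        a.balance
    }
}

fn main() {
    let mut a = bank::open(100);
    let mut b = bank::open(0);
    bank::transfer(&mut a, &mut b, 40);
    println!("{} {}", bank::balance(&a), bank::balance(&b)); // prints "60 40"
}
```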
Another example is the trait object (dyn Trait), which allows the client of a trait to decide whether dynamic dispatch is necessary, instead of baking it into the specification of the type with virtual functions.
Notice also the compositionality: if you do want to mandate dynamic dispatch, you can use the module system to either only ever issue trait objects, or opaquely hide one in a struct. So there is no loss of expressivity.
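A minimal sketch of that choice (trait and type names invented):

```rust
trait Greet {
    fn hello(&self) -> String;
}

struct English;

impl Greet for English {
    fn hello(&self) -> String {
        "hello".into()
    }
}

// Static dispatch: monomorphized per concrete type at each call site.
fn greet_static<T: Greet>(g: &T) -> String {
    g.hello()
}

// Dynamic dispatch: the *caller* opted into a vtable by passing &dyn.
fn greet_dyn(g: &dyn Greet) -> String {
    g.hello()
}

fn main() {
    // Same trait, same type: each call site chooses its dispatch
    // strategy, rather than the trait author mandating virtual calls.
    assert_eq!(greet_static(&English), greet_dyn(&English));
    println!("{}", greet_dyn(&English)); // prints "hello"
}
```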
The history here is very interesting, Rust went through a bunch of design iteration early, and then it just kinda sat around for a long time, and then made other choices that made modifying earlier choices harder. And then we did manage to have some significant change (for the good) in Rust 2018.
Rust's users find the module system even more difficult than the borrow checker. I've tried to figure out why, and figure out how to explain it better, for years now. Never really cracked that nut. The modules chapter of TRPL is historically the least liked, even though I re-wrote it many times. I wonder if they've tried again lately, I should look into that.
> Another example is the trait object (dyn Trait), which allows the client of a trait to decide whether dynamic dispatch is necessary, instead of baking it into the specification of the type with virtual functions.
Here I'd disagree: this is separating the two features cleanly. Baking it into the type means you only get one choice. This is also how you can implement traits on foreign types so easily, which matters a lot.
Sorry if my comment wasn't clear: I'm saying that I think in both the module and trait object case, Rust has done a good job of cleanly separating features, unlike in classic (Java or C++) style OOP.
I'm surprised the module system creates controversy. It's a bit confusing to get one's head around at first, especially when traits are involved, but the visibility rules make a ton of sense. It quite cleanly solves the problem of how submodules should interact with visibility. I've started using the Rust conventions in my Python projects.
I have only two criticisms:
First, the ergonomics aren't quite there when you do want an object-oriented approach (a "module-struct"), which is maybe the more common use case. However, I don't know if this is a solvable design problem, so I prefer the tradeoff Rust made.
Second, and perhaps a weaker criticism, the pub visibility qualifiers like pub(crate) seem extraneous when re-exports like pub use exist. I appreciate that these may be necessary for ergonomics, but they do complicate the design.
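To illustrate the overlap being described (module and function names hypothetical): pub(crate) restricts an item directly, while a curated pub use surface can achieve a similar effect by simply not re-exporting internals.

```rust
mod internals {
    // Visible anywhere in this crate, but never to downstream crates.
    pub(crate) fn helper() -> i32 {
        41
    }

    // Fully `pub`, but only reachable from outside the crate if some
    // path to it is itself public; the root below chooses what escapes.
    pub fn answer() -> i32 {
        helper() + 1
    }
}

// The re-export is the curated public surface: `answer` escapes,
// `helper` does not, even without its `pub(crate)` annotation.
pub use internals::answer;

fn main() {
    println!("{}", answer()); // prints "42"
    println!("{}", internals::helper()); // fine within the crate
}
```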
There is one other piece of historical Rust design I am curious about, which is the choice to include stack unwinding in thread panics. It seems at odds with the principal systems programming use case for Rust. But I don't understand the design problem well enough to have an opinion.
> Rust's users find the module system even more difficult than the borrow checker. I've tried to figure out why, and figure out how to explain it better, for years now.
The module system in Rust is conceptually huge, and I feel it needs a 'Rust modules: the good parts' resource to guide people.
(1) There are five different ways to use `pub`. That's pretty overwhelming, and in practice I almost never see `pub(in foo)` used.
(2) It's possible to have nested modules in a single file, or across multiple files. I almost never see modules with braces, except `mod tests`.
(3) It's possible to have either foo.rs or foo/mod.rs. It's also possible to have both foo.rs and foo/bar.rs, which feels inconsistent.
(4) `use` order doesn't matter, which can make imports hard to reason about. Here's a silly example:
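Something along these lines (module names hypothetical), where the second `use` resolves through the first:

```rust
mod foo {
    pub mod bar {
        pub mod foo {
            pub fn who() -> &'static str {
                "nested"
            }
        }
    }
}

// The second `use` resolves *through* the first: `bar` below is the
// module imported above, so these lines can't be understood one at a
// time, and a resolver may need several passes over them.
use foo::bar;
use bar::foo as inner;

fn main() {
    println!("{}", inner::who());
}
```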
Fully agree with 1. I do use 2 sometimes (if I'm making a tree of modules for organization, and a module only contains imports of other modules, I'll use the curly brace form to save making a file). And I'm not sure why 4 makes things harder - wouldn't it be more confusing if order mattered? Maybe I need to see a full example :)
In `use foo::bar; use bar::foo;`, am I importing an external crate called foo that has a submodule bar::foo, or vice versa?
This bit me when trying to write a static analysis tool for Rust that finds missing imports: you essentially need to loop over imports repeatedly until you reach a fixpoint. Maybe it bites users rarely in practice.
Hard agree. In retrospect, I think the model of Delphi, where you must manually assemble a `pkg` in order to export to the world, should have been used instead.
It would also have solved the problem where you end up making a lot of things `public` not because the logic dictates it, but because it's the only way to share across crates.
It should have been all modules (even main.rs, with a mandatory `lib.rs` or whatever), and `crate` should have been a re-exported interface.
> Hard agree. In retrospect, I think the model of Delphi, where you must manually assemble a `pkg` in order to export to the world, should have been used instead.
How would you compare that to, say, Go? I think the unit of distribution in Go is a module, and the unit of compilation is a package. That being said, by using `internal` packages and interfaces you can create the same sort of opaque encapsulation.
A lot of discussion here about where boundaries would be with free speech, how this would be implemented, specific details. But, as with any policy, this is not a binary "do it or don't". This is a dial that can be turned in a more libertarian or a more regulatory direction. (In fact, even this is simplistic: it's many hundreds of conceptually correlated dials.)
The interesting question is whether we're happy with where the dial is right now, which direction we want to push it, and how fast --- and the underlying meaning of the article is that maybe we should be pushing it in the regulatory direction very fast indeed.
Science is not ideas: new conceptual schemes must be invented, confounding variables must be controlled, dead-ends explored. This process takes years.
Engineering is not science: kinks must be worked out, confounding variables incorporated. This process also takes years.
Technology is not engineering: the working implementation must become widespread, beating social inertia and its competition; network effects must be established. Investors and consumers must be convinced for the long term. It must survive social and political repercussions. This process takes yet more years.
I can think of an argument for justifying the status quo.
The folder structure reflects the subdivision of code into modules. Each module may have submodules, and each module decides the visibility of its children to other modules at the same level as itself, and to its own supermodule. This is a naturally hierarchical structure, which file systems lend themselves well to. A code database would have to replicate this structure within it somehow anyway.
A non-hierarchical tag system would help model situations where you have multiple orthogonal axes along which to organise the code (as you point out). But in these cases, which axis gets the top-level hierarchy just doesn't matter. Pick one, maybe loosely informed by organisational factors or by your problem conceptualisation.
On the flipside, in situations where a stricter hierarchy would improve modularity, the tag system might _discourage_ clean crystallisation, and cause responsibilities to bleed into each other. IMO, it's more important for there to be modules at all than for their boundaries to be perfect.
There may be no conclusive proof, but it's a philosophically tough pill to swallow.
Non-locality means things synchronise instantly across the universe, can go back in time in some reference frames, and yet reality _just so happens_ to censor these secret unobservable wave function components, trading quantum for classical probability so that it is impossible for us to observe the difference between a collapsed and an uncollapsed state. Is this really tenable?
Strip back the metaphysical baggage and consider the basic purpose of science. We want a theoretical machine that is supplied a description about what is happening now and gives you a description of what will happen in the future. The "state" of a system is just that description. A good _scientific_ theory's description of state is minimal: it has no redundancy, and it has no extraneous unobservables.
This is called the anthropic principle. I personally have objections to it, specifically that due to emergence it is hard to make definitive statements about what complex phenomena may emerge in alternate universes. However, it's taken seriously by many philosophers of physics and certainly has merit.
My point is that it isn't possible to determine the emergent behaviour of a complex system from first principles. So arguments of the type "these physics don't result in atoms being produced, so life can't emerge" don't imply that other complex structures _like_ life don't emerge.
Technology is made iteratively by repeated trial and then observed error in the physical structures we've created (i.e. we build machines and then watch them fail to work properly in a particular way).
Technology that works in a different universe without atoms would require us to be able to experiment within that universe, at least if we wanted to produce it with our current innovation techniques.