I like the idea but find that an overstatement that trivializes the nature of technical debt: at some level one is talking about things that "pay off now" but require "interest payments" in the future, and while one can kind of squeeze the line `for (let i = 0; i < array.length; i++) {` into that framework, experience says that that line by itself almost never requires a maintenance debt payment; if you have a list then you almost certainly want to iterate through it, and refactors are driven by the size of the list, not by the bugginess of the looping construct...
I think what's really at stake is that code and data are a sort of inventory: they are not what you are selling, but they get turned into what you are selling. Inventory always has a carrying cost, and generally people underestimate that because they are only looking at the direct cost of storage, not how the presence of the inventory itself gets in the way, makes getting to other things harder, makes bottlenecks harder to see.
And that's where you see that debt is a wrong metaphor, because debt has the particular property that you can pay all of it off and that would be a good thing. By contrast inventory is a good thing in the right place: it means that if one thing stops working, the system can still continue for a while. Truly operating with zero inventory everywhere is possible, but it's not done because it would drive you out of business. Similarly, deleting all of your code is not desirable in the way that getting rid of all of your debts is.
Designing an API to have a separate messaging layer from its business layer from its data management layer from its data fetching layer is a technical debt; the fact that any change in the system now needs to be distributed across 10 different places in the code base is your interest payment. I would argue that you would like to derive all of these from some shared source of truth to remove those interest payments, and when you do, I no longer think that it's a bad thing for you to have a homebrew HTTP framework that has those separations in its internal functions.
Rabbi Hillel is famously quoted as saying "Don't say anything that ought not be heard -- not even in the strictest confidence -- for ultimately it will be heard." This is one of the ways that it will be heard, heh.
This is actually really interesting and mirrors a more general effort that I am pursuing[1] to build runtime representations of types in TypeScript, though my context is not so much GraphQL but rather just a more traditional HTTP API.
It looks like you try to keep it so that the runtime type object itself has the correct TypeScript type? That is, I would expect that under the hood you either have `types.number = 0`, `types.string = ''`, and so on and then you just use `typeof runtimeTypeObject` to derive this, or something more sinister like `types.number = ('number' as any) as number`. Either way that's fairly clever and I like it. I probably won't do it that way in `tasso` because one of my core design goals is that `tasso` should be able to validate that a given type object is a valid `tasso` schema, so there should be a metaschema. But it is really nice to have this thing saying just `typeof runtimeTypeObject` and not `ValueOfSingleType<typeof runtimeTypeObject>` to derive, from the schema's type, the instances' type.
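In other words, I imagine something like this minimal sketch (the names here are hypothetical, my guess at the mechanism, not your actual API):

    // Hypothetical sketch of the trick: runtime values whose TypeScript types
    // double as the schema's value types.
    const types = {
      number: 0 as number,
      string: '' as string,
      boolean: false as boolean,
    };

    // Deriving an instance type straight off the runtime object:
    type Person = {
      name: typeof types.string,   // i.e. string
      age: typeof types.number,    // i.e. number
    };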
I may steal one thing for `tasso` from this library, and that is the way your `constant()` works without specifying type metadata; I was under the impression that even with the `<t extends string>` TypeScript would still say "well `const x = constant('abc')` doesn't say anything else about `'abc'` so I am going to infer that it is a `string`, and then string does extend string, so `x` has type `string`," much like it does when you just write `let x = 'abc'`. I didn't realize that you can "hint" that you want the more specific literal type for the thing (sketched below, after the example). In tasso this manifests when you are writing self-recursive schemas, like
    // the following line is a sort of hack
    const refs = tasso.refs<'cell' | 'list'>();

    // but then we can write stuff like this
    const schema = {
      cell: tasso.number,
      list: tasso.maybe(tasso.object({
        head: refs.cell,
        rest: refs.list
      }))
    };
It will be nice to replace that with `tasso.refs('cell')` and `tasso.refs('list')` without that "refs object", I think.
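The `constant()` hint I have in mind, by the way, is roughly this (my own sketch, not your actual implementation):

    // Sketch: the <T extends string> parameter keeps TypeScript from widening
    // the literal type of the argument.
    function constant<T extends string>(value: T): { kind: 'constant'; value: T } {
      return { kind: 'constant', value };
    }

    const x = constant('abc');
    // x: { kind: 'constant'; value: 'abc' } -- the literal 'abc' is preserved
    // rather than being widened to string.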
Runtime type checking looks very useful when developing a system which communicates with other systems, because we cannot perfectly predict other systems' output.
The absolute most important thing to understand about models as they were originally intended is that they are not descriptions of a data model per se.
This is not to say that data modeling is not important, or that your model won’t have a similar shape as your data model; it will. But that is not the original point.
In MVC as it was originally understood, a model is a list of subscribers. You can add yourself to that list, you can remove yourself from that list, and whenever a “value” changes, if you are on that list, you get a notification that the value has changed. You have a couple of different designs of these, depending on whether you want to send a current-state message as a notification on subscription, or just give everybody read access to the current state. The latter commits you to initializing your models in ways that indicate the absence or staleness of data (someone loads your app, you need to send out a network request to get a new thing) but also allows you to use variable scope to augment how much multiplexing and state tracking you need.
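To make that concrete, here is a toy sketch of the kind of model I mean (my own minimal version, not any particular library's):

    // A model is just a current value plus a list of subscribers.
    class Model<T> {
      private subscribers: Array<(value: T) => void> = [];
      constructor(private value: T) {}

      subscribe(fn: (value: T) => void): void {
        // (the other design: immediately call fn(this.value) here)
        this.subscribers.push(fn);
      }
      unsubscribe(fn: (value: T) => void): void {
        this.subscribers = this.subscribers.filter(s => s !== fn);
      }
      get(): T {
        return this.value;
      }
      set(value: T): void {
        this.value = value;
        this.subscribers.forEach(subscriber => subscriber(value));
      }
    }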
Models differ from data models in having view-relevant data. So for example you switch from the type
    const urlList = new Model<UrlRow[]>([])
to the type
    type Fetching<x> =
      | { state: "init" }
      | { state: "loaded", data: x }
      | { state: "fetching", staleData: x }
      | { state: "fetch error", staleData: null | x }

    const urlList = new Model<Fetching<UrlRow[]>>({
      state: "init"
    })
Notice that these Fetching indicator statuses are a part of the data model for the UI, not the underlying data model that exists at the database level.
If this begins to feel a lot like React and Redux, that is because there is a shared lineage there. React originally made its splash as "the V in MVC," but its setState/state system, while it doesn't contain a subscriber list, effectively does something equivalent by insisting on destroying anything that holds the old values and then superficially identifying the things which appear to remain the same from moment to moment, with the basic model then being a "subscription tree."
Redux of course takes this from being a tree to being something more generic by making the store global... I think that models should not be app-global the way that Redux likes (it does not want to solve the problem of multiplexing models, which is understandable but it turns out to be a much simpler problem than you'd think) and that the pattern of reducers is more verbose than one generally needs, but I love the time travel browser features that Redux gives me. Somehow Redux has encouraged abominations like redux-thunk which complect separate concerns into the same dispatch function unnecessarily. But the fundamental workings involve that same basic structure: a subscriber list.
Could you clarify why you feel that redux-thunk is an "abomination"? What specific concerns do you have?
I wrote a post a while back that answered several concerns I'd seen expressed about using thunks [0], and my "Redux Fundamentals" workshop slides [1] have a section discussing why middleware like thunks are the right way to handle logic [2].
Hi! I would love to read your articles and discuss this further, but right now your blog is not viewable by Chrome, Firefox, or Edge. Firefox gives the most descriptive error: SSL_ERROR_RX_RECORD_TOO_LONG.
Multiplexing two models together, to my mind, just constructs a new model whose values are tuples of the existing models' values and which subscribes to both of those models in order to notify its own subscribers whenever either side of those tuples changes. If you do this, you can have a bunch of local stores and still say "this component's value changes whenever either of those values changes." The key is that "normal" models accept a set(value) message to set their value to something, and you might play around with "dict" models which accept insert(key, value) and deleteAt(key) messages, but a multiplex model would not easily be able to abstract over all the different possible messages to send upstream, and so the easiest approach is just to make multiplexes nonresponsive -- you can't "dispatch" to them, in Redux terms.
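A sketch of what I mean, in terms of the toy Model above (the `multiplex` name is mine):

    // Read-only multiplex: its value is a tuple of the two models' current values,
    // and it notifies its own subscribers whenever either side changes.
    function multiplex<A, B>(a: Model<A>, b: Model<B>): Model<[A, B]> {
      const combined = new Model<[A, B]>([a.get(), b.get()]);
      a.subscribe(valueA => combined.set([valueA, b.get()]));
      b.subscribe(valueB => combined.set([a.get(), valueB]));
      // nonresponsive by convention: callers subscribe to `combined` but never set() it.
      return combined;
    }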
redux-thunk is nice in that it helps keep people from making the mistake of sending otherwise-no-op actions to the store which then, inside a reducer as side-effects, do a bunch of async I/O to compute events that eventually make it back to the store. I would broadly agree with that.
My basic beef with redux-thunk is that it's unnecessary and complicates what would otherwise be a type signature that has no reference to I/O, which I regard as a good thing. Developers ought to know that, to quote one of the Austin Powers movies, "you had the mojo all along." It's a sort of talisman that you are using for purely psychological reasons to reassure developers and to coax them into doing updates outside of the reducers: it feels OK because "it's in `dispatch()` so it must be a Redux thing so we'll make it work." But such a talisman is unnecessary.
Erm.... that's bizarre. Should be a normal Let's Encrypt cert as far as I know. Are you accessing it through some kind of corporate proxy that's blocking it or something? Does that happen on any other machines?
Anyway. Reading the rest of your comment...
I'll be honest and say that you pretty much lost me in that discussion, as in, I genuinely am confused what you're trying to say. I'll try to give some thoughts here, but I don't know if this is going to answer things because I don't know what point you're actually trying to make.
The point of `redux-thunk` is to allow you to write logic that needs access to `dispatch` and `getState` when it runs, but without binding it to specific references of `dispatch` and `getState` ahead of time.
If you wanted to, you _could_ just directly `import {store} from "./myStore"`, and directly call `store.dispatch(someAction)`. But, A) that doesn't scale well in an actual app, and B) it ties you to that one store instance, making it harder to reuse any of the logic or test it.
In a typical React-Redux app, the actual specific store instance gets injected into the component tree by rendering `<Provider store={store}>` around the root component. As my slides point out, you _could_ still directly grab a reference to `props.dispatch` and do async work in the component, but that's also not generally a good pattern. By moving the async logic into its own function, and ultimately injecting `(dispatch, getState)` into that function, it's more portable and reusable.
Also, have you seen the actual implementation of `redux-thunk`? It's short enough that I'll paste it in here just for emphasis:
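    // redux-thunk, in essentially its entirety (modulo whitespace):
    function createThunkMiddleware(extraArgument) {
      return ({ dispatch, getState }) => next => action => {
        // if the dispatched "action" is actually a function, call it with
        // (dispatch, getState) instead of passing it along to the reducers
        if (typeof action === 'function') {
          return action(dispatch, getState, extraArgument);
        }
        return next(action);
      };
    }

    const thunk = createThunkMiddleware();
    thunk.withExtraArgument = createThunkMiddleware;

    export default thunk;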
If you can try to clarify what you're saying about "type signatures" and binding the methods from the store, I'd appreciate it. (Actually, that bit about binding the store methods doesn't make any sense, because a Redux store isn't a class - it's a closure, so there's no `this`.)
If you're available to discuss this in a venue that may be better suited for it, please come ping me @acemarke in the Reactiflux chat channels on Discord (invite link: https://reactiflux.com ).
Hm. I will have to retry. You're right that this is my work laptop and sometimes Sophos does weird crap. Sorry for alarming you without checking downforeveryoneorjustme first.
I have read the source; indeed, reading the source of redux-thunk was necessary for me to conclude it was pointless. I, like everyone else, thought that it was doing something more than `go = fn => fn()` does.
The code that I wrote you is logic which needs access to `dispatch` and `getState` when it runs, but it is not bound to specific references of `dispatch` and `getState`. It does not use your hack of importing a store from a global location, so it does not have problems with (A) or (B).
You cannot avoid grabbing that reference to props.dispatch either way. The crux of the argument that redux-thunk is just syntactic sugar is that dispatch is already in scope whenever it is used and can be passed as an argument or captured in a closure.
I agree somewhat with refactoring async logic into its own function when one wants to reuse it and make it portable. The question is just: should you pass `dispatch` and/or `getState` as an argument to that function? Or should you curry that dependency into a subfunction and pass that function as an argument to `dispatch`?
I opine that the latter is objectively worse than the former. You have `dispatch`: hand it directly to the function, let people know that this function is not an actual action but an asynchronous process. We are talking about a syntactic sugar, in other words, that doesn't make anything sweeter.
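To make the contrast concrete, here is a hypothetical loadUsers written both ways (the names and the endpoint are made up):

    // Hypothetical types, just enough for the example.
    type Action = { type: string; payload?: unknown };
    type Dispatch = (action: Action) => void;

    // Former: pass dispatch directly to the async function.
    async function loadUsers(dispatch: Dispatch) {
      dispatch({ type: 'users/loading' });
      const users = await fetch('/api/users').then(r => r.json());
      dispatch({ type: 'users/loaded', payload: users });
    }
    // caller (dispatch is already in scope): loadUsers(dispatch)

    // Latter: curry dispatch into a subfunction and hand that to dispatch itself.
    function loadUsersThunk() {
      return async (dispatch: Dispatch) => {
        dispatch({ type: 'users/loading' });
        const users = await fetch('/api/users').then(r => r.json());
        dispatch({ type: 'users/loaded', payload: users });
      };
    }
    // caller: dispatch(loadUsersThunk())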
It's actually refreshing to be reminded that the Redux store is a closure; I had forgotten since I first read the Redux code several months ago. So then it's even easier; one never has to bind anything.
I will try to ping you on Discord later tonight; there is a specific reason that I am preferring asynchronous messaging systems at the moment.
I mean, it is quite possible that as "originally" understood in the late 1970s the model did not have subscribers, but they were a part of the system as early as Smalltalk-80.
> The dependency (addDependent:, removeDependent:, etc.) and change broadcast mechanisms (self changed and variations) made their first appearance in support of MVC (and in fact were rarely used outside of MVC). View classes were expected to register themselves as dependents of their models and respond to change messages, either by entirely redisplaying the model or perhaps by doing a more intelligent selective redisplay.
> Because only the model can track all changes to its state, the model must have some communication link to the view. To fill this need, a global mechanism in Object is provided to keep track of dependencies such as those between a model and its view. This mechanism uses an IdentityDictionary called DependentFields (a class variable of Object) which simply records all existing dependencies. The keys in this dictionary are all the objects that have registered dependencies; the value associated with each key is a list of the objects which depend upon the key. In addition to this general mechanism, the class Model provides a more efficient mechanism for managing dependents. When you create new classes that are intended to function as active models in an MVC triad, you should make them subclasses of Model. Models in this hierarchy retain their dependents in an instance variable (dependents) which holds either nil, a single dependent object, or an instance of DependentsCollection. Views rely on these dependence mechanisms to notify them of changes in the model. When a new view is given its model, it registers itself as a dependent of that model. When the view is released, it removes itself as a dependent.
Like, I'm not getting this out of nowhere; at one point I inspected the code in the Model object and that's how it works...
Maybe somewhat. I think you can offer a simple explanation, but it depends a little on how you have already set up the problem.
Here's the problem setup: we want to share a secret byte (178) among Alice, Bob, and Carol, so that we need all 3 of them to contribute to recover it. Three points define a parabola, so we choose two more random bytes, [38, 68], and our polynomial is y = 178 + 38x + 68x². We then give Alice the point (1, 284), Bob the point (2, 526), and Carol the point (3, 904).
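In code, producing the shares is just evaluating that polynomial (ignoring, for the moment, the modular arithmetic discussed below):

    // y = 178 + 38x + 68x², evaluated at x = 1, 2, 3.
    const coefficients = [178, 38, 68];  // [secret, random, random]
    const shareFor = (x: number) =>
      coefficients.reduce((sum, coefficient, power) => sum + coefficient * x ** power, 0);

    const shares = [1, 2, 3].map(x => [x, shareFor(x)] as const);
    // => [[1, 284], [2, 526], [3, 904]]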
Now supposing that we have compromised both Bob's and Carol's points, we know that we have the two equations
9a + 3b + c = 904
4a + 2b + c = 526
We can then eliminate b (doubling the first equation and subtracting three times the second) to get:
6a - c = 230
which we can rearrange as
c = 6a - 230.
Since `a` cannot be a fraction (it is a whole byte), we can cut down the possibilities for c to just 42 candidates, {4, 10, 16, 22, 28, ...}, since they must be separated by sixes. I'm not 100% sure but I think this factor grows like n!/(n-k)! for "I have compromised k of n secrets, by what factor have I reduced the search space?"
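You can verify that count by brute force:

    // c = 6a - 230, with both a and c constrained to be bytes (0..255).
    const candidates: number[] = [];
    for (let a = 0; a <= 255; a++) {
      const c = 6 * a - 230;
      if (c >= 0 && c <= 255) candidates.push(c);
    }
    // candidates = [4, 10, 16, ..., 250], and candidates.length === 42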
Here's how modular arithmetic solves this: It turns out that modulo a prime, all fractions are also whole numbers. That is, if I am working modulo the prime 13, I will find that I can divide 7/5 to find 4. Remember what division means, it inverts multiplication: I can find that 5 × 4 = 20 and then that 20 = 13 + 7, so they are at the same place "on the clock". In fact it suffices to just find 1/5 and multiply by 7, so you can find that 1/5 is 8 in the mod-13 ring, 8 × 5 = 40 = 39 + 1. You can also find that 1/6 is 11, so 6 × 11 = 66 = 65 + 1.
The proof that this must be the case is that if you take

    [1, 2, ..., p-1].map(x => (x * n) % p)

this list cannot repeat itself: if two entries x1 and x2 gave the same result, then `p` would have to divide `(x1 - x2) * n` (by the distributive law of multiplication), and since `p` is prime and does not divide `n`, it would have to divide `x1 - x2`, which it can't, because 0 < |x1 - x2| < p. The list is also confined to only contain the numbers 1 through p-1, and so it must contain all of them exactly once: so if it doesn't repeat itself, it has to have a 1 in there somewhere.
That's kind of a brute force argument, so you may want to also mention that there are two efficient ways to find these. One is called the "extended Euclidean algorithm" (do a GCD computation to find that the GCD is 1, but keep the quotients that you would otherwise discard and cleverly assemble them to recover the constants from Bezout's identity, which in this case gives you the modular inverse). The other is "Fermat's little theorem" (since a^(p-1) % p == 1 for prime p, raise the number to the p-2 power; using exponentiation by squaring you only need ~log p multiplications that each take no more than ~log p time).
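A sketch of both approaches, working modulo a small prime (the function names are mine):

    // 1. Extended Euclidean algorithm: find s with s*n + t*p = gcd(n, p) = 1,
    //    so that s*n ≡ 1 (mod p), i.e. s is the inverse of n.
    function inverseEuclid(n: number, p: number): number {
      let [oldR, r] = [n, p];
      let [oldS, s] = [1, 0];
      while (r !== 0) {
        const q = Math.floor(oldR / r);
        [oldR, r] = [r, oldR - q * r];
        [oldS, s] = [s, oldS - q * s];
      }
      return ((oldS % p) + p) % p;
    }

    // 2. Fermat's little theorem: n^(p-1) ≡ 1 (mod p), so n^(p-2) is the inverse.
    //    Exponentiation by squaring keeps this to ~log p multiplications.
    function inverseFermat(n: number, p: number): number {
      let result = 1, base = n % p, exponent = p - 2;
      while (exponent > 0) {
        if (exponent % 2 === 1) result = (result * base) % p;
        base = (base * base) % p;
        exponent = Math.floor(exponent / 2);
      }
      return result;
    }

    // inverseEuclid(5, 13) === 8 and inverseFermat(6, 13) === 11,
    // matching the 1/5 and 1/6 examples above.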
MPL, for folks who aren't aware, is in the same family as the CDDL seen in Solaris etc. and is a per-file copyleft, thus having a nicer legal structure while getting the same rough benefits as the GPL, or at least the LGPL. The idea is "any modifications to this file must also be open-sourced under the MPL, but you can package this file with proprietary other files that are not MPL and integrate them into a larger proprietary thing, as long as your modifications to this file alone are open-sourced." The goal is to protect weakly from Microsoft-esque "embrace, extend, extinguish" as the GPL does, but enable commercial integration the way BSD does.
In practice nobody seems to be all that pissed at Mozilla because of their license; the MPLv2 added GPL compatibility so the GPLers are mostly able to use MPL software and it gives a nod towards the stronger copyleft they like; commercial applications which just use the software as a library don't mind.
You could tack on several more if the fragment being corrected is "...conjunctions like butandandandor..." so that they are missing spaces between but and and, and and and and, and and and and, and and and or, and commas would help immensely.
Right. The important thing about the Clay Mathematics Millennium Prize is that it is a set of well-chosen SMART-ish goals. They want to incentivize work on really hard fields of mathematics but they have tried to do this by choosing results that are each not too intimidating. The Navier-Stokes prize, for example, allows you to assume all of the nicest features of Navier-Stokes problems in practice -- basically all of the fluid flows are well below supersonic and are occurring in highly homogeneous, nice fluids -- and ask the most basic mathematical question: does a smooth solution to these equations always exist? That question by itself is not so important, but it's specific and measurable. But it's a bit of a "reach" -- you would have to have some cutting insight about turbulence in order to answer this question one way or the other. That cutting insight is what they're trying to incentivize.
P vs NP is the same: if solutions to a problem are easy to check, is there always some better way to analyze the mechanics of the checker to make those solutions easy to find, so that we aren't stuck with brute-forcing it? Whether the answer goes one way or the other, the point is that solving the problem would have to provide some insight to the effect of "here is a periodic table of elements for all of the 'easy' algorithms -- and here are the properties of all the 'molecules' made by combining those building blocks." And only once someone advances our understanding with those cutting insights can we say "yes we can always reach into the verification mechanism to understand it well enough to build a better-than-brute-force algorithm" or "no, here is such a thing that algorithms cannot do, it's not just that I am not smart enough to find a way for them to do this -- they are fundamentally incapable of doing it faster."