Windowing: a transparent NSWindow hosting SwiftUI views for the floating call UI, and an NSPopover for the menu bar dropdown. We also toggle NSApp.activationPolicy to hide the Dock icon when the app is just running in the background.
Networking: LiveKit handles the WebRTC video/audio (it's been rock solid), with Firebase Cloud Functions generating the tokens and Firestore handling the signaling and presence state.
Security: standard Firebase Auth for user management, and the native macOS Keychain (kSecClassGenericPassword) to securely store credentials so you don't have to log in every time.
Mac APIs: lots of AVFoundation for camera/mic permissions, NSStatusItem for the menu bar integration, and UNUserNotificationCenter to make sure you actually see incoming calls.
I always find it odd when the media (and others) treat consumerism as somehow "helping" the economy. The economy is entirely about the collective activity of humans serving humans. Everything we make or do is really about prioritizing that activity over others. Why would it be advantageous to prioritize barely-distinguishable "new" devices over the myriad other things human labor and capital could be put to?
Their audience is the capital class (the wealthiest 10% of Americans own 93% of stocks). Longer device ownership and service life is fiscally responsible but suboptimal for shareholders.
"Metaphorically, I think about any job given to Claude as having 3 dimensions. There's the breadth of the task (roughly how many lines of code it will touch), the depth of the task (the complexity, the layers of abstraction needed, the decision making involved, etc.), and the time spent working on it. Those three axes define a cube, and the size of the cube is how much entropy I'm shoving into the project."
That's an interesting conceptualization that tracks with my experience using CC. And they were able to get an impressive amount of work done:
"""
The specifics don't matter too much here, but for context, some of what I had it do:
- Research all the available on-device speech-to-text models with permissive licences
- Demo the transcription speed of each one on an Android device attached to the PC
- Write a C wrapper for the best one (Moonshine) and build an embeddable dynamic library
- Build this for iOS, Android, Linux, and macOS, and integrate it with my app code using the FFI
- Build a Nim wrapper for the fdk-aac library
- Integrate it with miniaudio, so I can play AAC audio and pipe the audio into Moonshine
"""
Thinking of bunnie Huang's open source hardware efforts, a 28nm no-MMU system seems like a good long-term open source target. How much complexity in the system is the MMU, and so how much complexity could we cut out while still having the ability to run third-party untrusted code?
> The last time I used [a doubly linked list] was in the nineties. I know the Linux kernel uses them, but that design was also laid down in the nineties; if you were designing a kernel from scratch today, you would probably not do it that way.
This and sibling comments about the supposed uselessness or outdatedness of linked lists taught me something about the disconnect between many systems engineers resistant to Rust and many members of the (evangelical) Rust community.
To address the technical topic first: a doubly linked list is what you reach for when you’re looking for an intrusive data structure with fast append and fast delete. You need to iterate over it, but don’t need random access. Queues where you may need to remove elements. Think a list of completions that can be cancelled.
Why an intrusive data structure? Because you don’t want to, or can’t allocate. Because you want only a non-owning reference to the object, and you don’t want to have to manage any other lifetimes or dynamic (potentially unbounded, or arbitrarily bounded) allocations. Intrusive data structures are a great tool to minimize allocations, and to keep allocations (and ownership) logically separated across modules.
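For readers who haven't worked with the pattern, here is a minimal C sketch of what's being described (names are illustrative, not from any particular codebase): the link lives inside the object, so appending never allocates, and unlinking is O(1) given only the object itself.

```c
#include <stddef.h>

struct list_node {
    struct list_node *prev, *next;
};

/* The link is embedded in the object: enqueueing a completion never allocates,
 * and the list holds only a non-owning reference to it. */
struct completion {
    int   id;
    void (*on_done)(struct completion *);
    struct list_node link;
};

static void list_init(struct list_node *head) {
    head->prev = head->next = head;     /* circular list with sentinel head */
}

static void list_append(struct list_node *head, struct list_node *n) {
    n->prev = head->prev;
    n->next = head;
    head->prev->next = n;
    head->prev = n;
}

/* O(1) removal given only the node itself, e.g. cancelling a completion. */
static void list_unlink(struct list_node *n) {
    n->prev->next = n->next;
    n->next->prev = n->prev;
    n->prev = n->next = n;
}

/* Recover the containing object from its embedded link. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))
```

Cancelling a queued completion `c` is then just `list_unlink(&c->link)`: no allocator involved and no ownership transferred.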
And now onto the cultural topic. That a member of the Rust community might not know this (which is of course totally fine) is what taught me something, which was maybe obvious to many: Rust brings safety to a C++ style language and audience. For the same reasons many dislike C++, many dislike Rust. For the same reasons many would opt for C over C++ when they have the choice (either across all their projects or for certain projects), many continue to opt for C over Rust.
Rust will not be taking over systems programming for the same reasons C++ did not. Well, by the numbers, it probably did (and Rust probably will too), so maybe better to say that it will not stamp out C for systems programming for the same reasons C++ did not. And it’s not just because of the stubbornness of C programmers.
Systems programming has 2 cultures and practice groups: the C camp, and the C++ camp. The C++ camp is a big tent. I’d argue it includes Rust and Swift (and maybe D). Zig belongs to the C camp.
Safety is important, and maybe Rust has the right model for it, but it is not a solution for the C camp. It likely did not intend to be, and again, by the numbers, replacing C++ is probably the more important task.
I'm aware of the reasons Linux uses doubly linked lists. I'm still of the opinion this design decision made a lot of sense in the 1990s, and would make less sense in a new project today. You may disagree, and that's fine! Engineering is constrained by hard truths, but within those constraints, is full of judgment calls.
I'm not a member of the evangelical Rust community. I'm not proclaiming 'thou shalt rewrite it all in Rust'. Maybe you have good reasons to use something else. That's fine.
But if you are considering Rust, and are concerned about its ability to handle cyclic data structures, there are ways to do it that don't involve throwing out all the benefits the language offers.
Linux's use of pointer-heavy data structures is largely justified even today. The low-level kernel bookkeeping requirements rule out many typical suggestions. Modern hardware suggests certain patterns in broad strokes, which are sound for almost any software. But the kernel has the unique role of multiplexing the hardware, instead of just running on the hardware. It is less concerned with, e.g., how to crunch numbers quickly than it is with how to facilitate other software crunching numbers quickly. As someone else noted elsewhere in the top-level thread, unsafe Rust is the appropriate compromise for things that must be done but aren't suitable for safe Rust. Unsafe Rust is one of the best realizations of engineering tradeoffs that I've seen, and neither C nor safe Rust justifies its absence. Rust is only a "systems" language with unsafe, and that doesn't drag Rust down but rather strengthens it. "Here is the ugliness of the world, and I will not shy from it."
Beautifully put, poetic even. The only problem is that, in (*actual).reality, unsafe Rust is difficult to read, difficult to write, difficult to maintain, difficult to learn, has zero features for ergonomics, and has poorly defined rules.
C and Zig have a big advantage for writing maintainable, easy to read unsafe code over unsafe Rust.
> The only problem is that, in (*actual).reality, unsafe Rust is difficult to read, difficult to write, difficult to maintain, difficult to learn, has zero features for ergonomics
All true.
> and has poorly defined rules.
The rules aren't so poorly documented. The options for working with/around them, like say using UnsafeCell, do seem to be scattered across the internet.
But you omitted a key point: unlike C and Zig, unsafe code in Rust is contained. In the Rust std library, for example, there are 35k functions, of which 7.5k are unsafe. In C or Zig, all 35k would be unsafe. If you are claiming those 35k unsafe functions in C or Zig would be easier to maintain safely than those 7.5k unsafe functions in Rust, I'd disagree.
I agree that unsafe Rust is not comfortable or simple, but I think it is usable when appropriately applied. It should only rarely be used as a performance optimization. The Rust devs are quite invested in minimizing unsoundness in practice, particularly with Miri. In the coming years, I expect unsafe to be further improved. Over the entire ecosystem, Rust with judicious use of unsafe is, in my opinion, vastly superior to C, unless the C is developed stringently, which is rare.
So, as an example, you'd be happily spending an extra allocation and extra pointers of space for each item in a list, even when that item type itself is only a couple of bytes, and you potentially need many millions of that type? Just so your design is not "from the nineties"?
An extrusive list needs at least 1 more pointer (to the item, separate from the link node), and possibly an additional backpointer to the link node when you want to unlink that node. It also adds allocation overhead and cache misses.
Intrusive lists are one of the few essential tools to achieve performance and low latency.
Or were you thinking of dynamically reallocating vectors? They are not an alternative, they are almost completely unusable in hardcore systems programming. Reallocating destroys pointer stability and adds latency, both very bad for concurrency.
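To make the pointer-stability point concrete, here's a small hedged C sketch (error handling omitted): any module that stashed a pointer into a growable array can be left dangling the moment the array reallocates, which is exactly the failure mode intrusive links avoid, since the objects themselves never move.

```c
#include <stdlib.h>

struct item { int x; };

int main(void) {
    size_t cap = 4;
    struct item *vec = malloc(cap * sizeof *vec);
    struct item *held = &vec[0];      /* some other module keeps this pointer */

    /* Growth: the whole block may move to a new address... */
    cap *= 2;
    vec = realloc(vec, cap * sizeof *vec);

    /* ...and `held` may now dangle; dereferencing it is undefined behavior.
     * With an intrusive list, the object stays put and `held` stays valid. */
    (void)held;
    free(vec);
    return 0;
}
```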
I’m sorry, I did not intend to accuse you of being part of the evangelical community. Your article only prompted the thought I shared.
On the technical point, I think I do disagree, but open to changing my mind. What would be better? I’m working on an async runtime currently, written in C, and I’m using several intrusive doubly linked lists because of their properties I mentioned.
As to what would be better - this is also a reply to your sibling comments above - I don't have a single across-the-board solution; the equivalent of std::vector everywhere is fine for some kinds of application code, but not necessarily for system code. Instead, I would start by asking questions.
What kinds of entities are you dealing with, what kinds of collections, and, critically, how many entities along each dimension, to an order of magnitude, p50 and p99? What are your typical access patterns? What are your use cases, so that you can figure out what figures of merit to optimize for? How unpredictable will be the adding of more use cases in the future?
In most kinds of application code, it's okay to just go for big-O, but for performance critical system code, you also need to care about constant factors. As an intuition primer, how many bytes can you memcpy in the time it takes for one cache miss? If your intuition for performance was trained in the eighties and nineties, as mine initially was, the answer may be larger than you expect.
Even if you just go for big-O, don't forget that a resizable array won't give you even amortized O(1) delete in many cases. This alone is likely prohibitive unless you can bound the elements in the container to a small number.
And if you're trying to trade away good big-O for better cache locality, don't forget that in many cases, you're dealing with stateful objects that need to be put into the list. That means you likely need to have a list or queue of pointers to these objects. And no matter how flat or cache-friendly the queue is, adding this indirection is similarly cache-unfriendly whenever you have to actually access the state inside the container.
Or unless delete is a rare operation. So yeah, to make the best decisions here, you need to know expected numbers as well as expected access patterns.
As far as I can see, you are indeed going to incur one extra memory access apart from the object itself, for any design other than just 'Temporarily flag the object deleted, sweep deleted objects in bulk later' (which would only be good if deleted objects are uncommon). It still matters how many extra memory accesses; deleting an object from a doubly linked list accesses two other objects.
It also matters somewhat how many cache lines each object takes up. I say 'somewhat' because even if an object is bulky, you might be able to arrange it so that the most commonly accessed fields fit in one or two cache lines at the beginning.
> To address the technical topic first: a doubly linked list is what you reach for when you’re looking for an intrusive data structure with fast append and fast delete. You need to iterate over it, but don’t need random access. Queues where you may need to remove elements. Think a list of completions that can be cancelled.
>
> Why an intrusive data structure? Because you don’t want to, or can’t allocate. Because you want only a non-owning reference to the object, and you don’t want to have to manage any other lifetimes or dynamic (potentially unbounded, or arbitrarily bounded) allocations. Intrusive data structures are a great tool to minimize allocations, and to keep allocations (and ownership) logically separated across modules.
If you really wanted this in Rust, you could probably get away with just initialising a Vec::with_capacity(), then have each element’s next and prev be indexes into the vec.
That would be the “arbitrarily bounded” allocation. Arbitrary because now you have to make a decision about how many items you’re willing to maintain despite that number being logically determined by a sum over an unknown set of modules.
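For concreteness, here is roughly what that index-linked approach looks like, sketched in C rather than Rust (the `Vec::with_capacity` version is analogous). The fixed CAPACITY constant below is exactly the "arbitrary bound" being objected to: it has to be chosen up front, even though the real bound depends on modules you may not control.

```c
#include <stdint.h>

#define CAPACITY 1024            /* arbitrary bound, fixed at init time */
#define NIL      UINT32_MAX

struct slot {
    uint32_t prev, next;         /* indexes into pool[], not pointers */
    int      payload;
};

static struct slot pool[CAPACITY];
static uint32_t head = NIL;

static void push_front(uint32_t i) {
    pool[i].prev = NIL;
    pool[i].next = head;
    if (head != NIL) pool[head].prev = i;
    head = i;
}

/* Unlink by index: still O(1), but every element must live inside pool[]. */
static void unlink_slot(uint32_t i) {
    uint32_t p = pool[i].prev, n = pool[i].next;
    if (p != NIL) pool[p].next = n; else head = n;
    if (n != NIL) pool[n].prev = p;
    pool[i].prev = pool[i].next = NIL;
}
```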
Rust won’t even succeed at replacing C++. There are technical and cultural issues at play. C++ has evolved a lot and still does some things better than Rust (dynamic linking, for example). Rust will be another popular systems programming language. There will be better ones. People will take their picks.
I find C++ friendlier for small hobby projects because it lets you get your code compiled even if it's a little messy or slightly unsafe.
As for "serious code", I admit that Rust is maybe better for low-level security-critical stuff. But for native applications it's on par thanks to smart pointers and also just -fsanitize=address.
(also default copy semantics just seem more intuitive i dunno why)
The Rust for Linux project has designed safe intrusive linked lists for use in the kernel, and while they're inconvenient to use, this and other pain points have led to a push to create language features that improve the situation.
As for the C camp, I agree it's different. The problem is that we don't know how to design a memory-safe, GC-free language without it being big and complex. Maybe it is possible. But until we figure out how, projects that need to be memory safe (which I believe is the vast majority of projects, although not all of them) will probably use Rust (and probably should use Rust), even if they would prefer to be pure C, because memory safety is just more important.
While Rust is certainly competing against C++ more so than C, I think Rust has qualities that make it suitable for rewriting old C programs or writing new programs that would otherwise be written in C. Drawbacks of Rust such as excessive dependency usage, simply feeling too complex to write and read, overuse of the type system, or the subpar low-level hardware correspondence are not huge issues in the right subculture. I think Rust is not quite C but offers a distinct path to do what C does. In any case, legacy C programs do need a pick-me-up (probably several, drawing on different technologies), and I think Zig is at most a small factor.
Certainly onto something but misses how much large organizations are actually controlled by small organizations operating in the “large complex system” environment. It is only individuals and small organizations that have agency at all. Large organizations and large complex systems are both emergent, one with hierarchical control, and one with distributed control. What has really changed is how unequal small organizations have become in their influence and power. The small cadres of people at the “top” (of organizations, media, government, tech, etc) control/influence more and more, not only at the expense of other small organizations (power is zero sum) but also at the expense of the decentralized mechanism, ie the large complex system becomes increasingly hierarchically/centrally controlled (vs distributed/decentralized control).
I think I basically agree with this perspective, but I might try to add some nuance. As organizations become larger, there is a tendency for them to become less and less efficient. This seems to be linked to the second law of thermodynamics, which applies to information the same way it applies to matter.
One way to address the relative inefficiency of a larger organization is to consume more energy and not worry about the waste on entropy. This works so long as the large organization is growing — i.e., so long as it is able to extract more energy from its environment than it is wasting (in a relative sense) on its internal processes.
The strategies for minimizing entropy within an organization — large or small — seem to boil down to two, which are intertwined: 1) what @pg called "Founder Mode" and 2) alignment around mission and vision. In both cases, the effect is to drive the organization towards a "critical state" in which small details of information picked up at the edges can be shared relatively quickly across the entire organization, allowing every part of the organization to react in alignment to that new information. In the case of 1), this is facilitated by a dictator (i.e., the founder) who everybody willingly submits decisions to when they themselves are unsure of how the founder would decide. In the case of 2), this is facilitated by a shared understanding of what the "right" decision is across the organization in view of the mission and vision, which are clear and crisp enough to answer most questions, even about relatively obscure issues or questions that arise.
The ability to operate at scale seems more or less to be derived from one or both of these. Coase's theory of the firm in The Nature of the Firm can be understood in these terms — that is, 1) and 2) are the mechanism whereby internal management outperforms spot markets in coordinating production.
There's an interesting parallel with ML compilation libraries (TensorFlow 1, JAX jit, PyTorch compile) where a tracing approach is taken to build up a graph of operations that are then essentially compiled (or otherwise lowered and executed by a specialized VM). We're often nowadays working in dynamic languages, so they become essentially the frontend to new DSLs, and instead of defining new syntax, we embed the AST construction into the scripting language.
For ML, we're delaying the execution of GPU/linalg kernels so that we can fuse them. For RPC, we're delaying the execution of network requests so that we can fuse them.
Of course, compiled languages themselves delay the execution of ops (add/mul/load/store/etc) so that we can fuse them, i.e. skip over the round-trip of the interpreter/VM loop.
The power of code as data in various guises.
Another angle on this is the importance of separating control plane (i.e. instructions) from data plane in distributed systems, which is any system where you can observe a "delay". When you zoom into a single CPU, it acknowledges its nature as a distributed system with memory far away by separating out the instruction pipeline and instruction cache from the data. In Cap'n Web, we've got the instructions as the RPC graph being built up.
I just thought these were some interesting patterns. I'm not sure I yet see all the way down to the bottom though. Feels like we go in circles, or rather, the stack is replicated (compiler built on interpreter built on compiler built on interpreter ...). In some respect this is the typical Lispy code is data, data is code, but I dunno, feels like there's something here to cut through...
Agree -- I think that's a powerful generalization you're making.
> We're often nowadays working in dynamic languages, so they become essentially the frontend to new DSLs, and instead of defining new syntax, we embed the AST construction into the scripting language.
And I'd say that TypeScript is the real game-changer here. You get the flexibility of the JavaScript runtime (e.g., how Cap'n Web cleverly uses `Proxy`s) while still being able to provide static types for the embedded DSL you're creating. It’s the best of both worlds.
I've been spending all of my time in the ORM-analog here. Most ORMs are severely lacking on composability because they're fundamentally imperative and eager. A call like `db.orders.findAll()` executes immediately and you're stuck without a way to add operations before it hits the database.
A truly composable ORM should act like the compilers you mentioned: use TypeScript to define a fully typed DSL over the entirety of SQL, build an AST from the query, and then only at the end compile the graph into the final SQL query. That's the core idea I'm working on with my project, Typegres.
But at the same time, something feels off about it (just conceptually, not trying to knock your money-making endeavor, godspeed). Some of the issues that all of these hit is:
- No printf debugging. Sometimes you want things to be eager so you can immediately see what's happening. If you print and what you see is <RPCResultTracingObject> that's not very helpful. But that's what you'll get when you're in a "tracing" context, i.e. you're treating the code as data at that point, so you just see the code as data. One way of getting around this is to make the tracing completely lazy, so no tracing context at all, but instead you just chain as you go, and something like `print(thing)` or `thing.execute()` actually then ships everything off. This seems like how much of Cap'n Web works except for the part where they embed the DSL, and then you're in a fundamentally different context.
- No "natural" control flow in the DSL/tracing context. You have to use special if/while/for/etc so that the object/context "sees" them. Though that's only the case if the control flow is data-dependent; if it's based on config values that's fine, as long as the context builder is aware.
- No side effects in the DSL/tracing context because that's not a real "running" context, it's only run once to build the AST and then never run again.
Of the various flavors of this I've seen, it's the ML usage I think that's pushed it the furthest out of necessity (for example, jax.jit https://docs.jax.dev/en/latest/_autosummary/jax.jit.html, note the "static*" arguments).
Is this all just necessary complexity? Or is it because we're missing something, not quite seeing it right?
I think this kind of tracing-caused complexity only arises when the language doesn't let you easily represent and manipulate code as data, or when the language doesn't have static type information.
Python does let you mess around with the AST, however, there is no static typing, and let's just say that the ML ecosystem will <witty example of extreme act> before they adopt static typing. So it's not possible to build these graphs without doing this kind of hacky nonsense.
For another example, torch.compile() works at the Python bytecode level. It basically monkey-patches the PyEval_EvalFrame function evaluator of CPython for all torch.compile-decorated functions. Inside that, it checks for any operators (e.g., BINARY_MULTIPLY) involving torch tensors and records them. Any if conditions in the path get translated to guards in the resulting graph. Later, when such a guard fails, it recomputes the subgraph with the complementary condition (and any additional conditions), stores this as an alternative JIT path, and muxes between these in the future depending on the guards in place.
JAX works by making the function arguments proxies and recording the operations like you mentioned. However, you cannot use a normal `if`; you use lax.cond(), lax.while_loop(), etc. As a result, it doesn't recompute the graph when different branches are encountered; it only computes the graph once.
In a language such as C#, Rust, or a statically typed Lisp, you wouldn't need to do any of this monkey business. There's probably already a way in the Rust toolchain to hook in at the MIR stage and have your own backend convert these to some Tensor IR.
Yes, being able to have compilers as libraries, inline in the same code and the same language. That feels like what all these call for. Which really is the Lisp core, I suppose. But with static types and heterogeneous backends. MLIR, I think, hoped (hopes?) to be something like this, but while C++ may be pragmatic, it's not elegant.
Maybe totally off but would dependent types be needed here? The runtime value of one “language” dictates the code of another. So you have some runtime compilation. Seems like dependent types may be the language of jit-compiled code.
Anyways, heady thoughts spurred by a most pragmatic of libraries. Cloudflare wants to sell more schlock to the javascripters and we continue our descent into madness. Einsteins building AI connected SaaS refrigerators. And yet there is beauty still within.
Really nice summary of the core challenges with this DSL/code-as-data pattern.
I've spent a lot of time thinking about this in the database context:
> No printf debugging
Yeah, spot on. The solution here would be something like a `toSQL` that lets you inspect the compiled output at any step in the AST construction.
Also, if the backend supports it, you could compile a `printf` function all the way to the backend (this isn't supported in SQL though)
> No "natural" control flow in the DSL/tracing context
Agreed -- that can be a source of confusion and subtle bugs.
You could have a build rule that actually compiles `if`/`while`/`for` into your AST (instead of evaluating them in the frontend DSL). Or you could have custom lint rules to forbid them in the DSL.
At the same time -- part of what makes query builders so powerful is the ability to dynamically construct queries. Runtime conditionals are what make that possible.
> No side effects in the DSL/tracing context because that's not a real "running" context
Agreed -- similar to the above: this is something that needs to be forbidden (e.g., by a lint rule) or clearly understood before using it.
> Is this all just necessary complexity? Or is it because we're missing something, not quite seeing it right?
My take is that, at least in the SQL case: 100% the complexity is justified.
Big reasons why:
1. A *huge* impediment to productive engineering is context switching. A DSL in the same language as your app (i.e., an ORM) makes the bridge to your application code also seamless. (This is similar to the argument of having your entire stack be a single language)
2. The additional layer of indirection (building an AST) allows you to dynamically construct expressions in a way that isn't possible in SQL. This is effectively adding a (very useful) macro system on top of SQL.
3. In the case of TypeScript, because its type system is so flexible, you can have stronger typing on your DSL than the backend target.
tl;dr is these DSLs can enable better ergonomics in practice and the indirection can unlock powerful new primitives
> I just thought these were some interesting patterns.
Reading this from TFA ...
> Alice and Bob each maintain some state about the connection. In particular, each maintains an "export table", describing all the pass-by-reference objects they have exposed to the other side, and an "import table", describing the references they have received.
>
> Alice's exports correspond to Bob's imports, and vice versa. Each entry in the export table has a signed integer ID, which is used to reference it. You can think of these IDs like file descriptors in a POSIX system. Unlike file descriptors, though, IDs can be negative, and an ID is never reused over the lifetime of a connection.
>
> At the start of the connection, Alice and Bob each populate their export tables with a single entry, numbered zero, representing their "main" interfaces.
>
> Typically, when one side is acting as the "server", they will export their main public RPC interface as ID zero, whereas the "client" will export an empty interface. However, this is up to the application: either side can export whatever they want.
... sounds very similar to how Binder IPC (and soon RPC) works on Android.
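Not Binder specifically, but for concreteness, a loose C sketch of the bookkeeping the quoted passage describes (hypothetical names, not Cap'n Web's or Binder's actual code): each side keeps an export table of objects it exposes and an import table of remote references, keyed by signed IDs that are never reused, with entry 0 as the "main" interface.

```c
#include <stddef.h>
#include <stdint.h>

struct export_entry {
    int64_t id;         /* signed, never reused over the connection's lifetime */
    void   *local_obj;  /* the pass-by-reference object exposed to the peer */
    int     refcount;
};

struct import_entry {
    int64_t id;         /* names an entry in the peer's export table */
};

struct connection {
    struct export_entry *exports;   /* the entry with id 0 is the "main" interface */
    struct import_entry *imports;
    size_t  n_exports, n_imports;
    int64_t next_export_id;         /* monotonically advanced, never recycled */
};
```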
I share the author's enthusiasm for coroutines. They're nice abstractions for all sorts of state-machine-like code and for concurrency (without parallelism).
> You could allocate a piece of memory for a coroutine stack; let the coroutines on it push and pop stack frames like ordinary function calls; and have a special ‘yield’ function that swaps out the stack pointer and switches over to executing on another stack. In fact, that’s not a bad way to add coroutines to a language that doesn’t already have them, because it doesn’t need the compiler to have any special knowledge of what’s going on. You could add coroutines to C in this way if you wanted to, and the approach would have several advantages over my preprocessor system.
Lua's stackful coroutines are awesome! The Lua C API even allows you to pass a continuation function when calling a Lua function from C, so you can yield across the C-call boundary (e.g. a Lua function calling a C function, calling a Lua function that yields). See also https://www.lua.org/manual/5.2/manual.html#4.7
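A minimal sketch of the stack-swapping approach the quoted passage describes, using the legacy but widely available POSIX ucontext API (stack size and names are arbitrary; some platforms need `-D_XOPEN_SOURCE` to expose it, and production libraries usually hand-roll the switch in assembly instead):

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static unsigned char co_stack[64 * 1024];   /* the coroutine's own call stack */

static void co_body(void) {
    for (int i = 0; i < 3; i++) {
        printf("coroutine step %d\n", i);
        swapcontext(&co_ctx, &main_ctx);    /* "yield" back to the caller */
    }
}

int main(void) {
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp   = co_stack;
    co_ctx.uc_stack.ss_size = sizeof co_stack;
    co_ctx.uc_link          = &main_ctx;    /* where to go if co_body returns */
    makecontext(&co_ctx, co_body, 0);

    for (int i = 0; i < 3; i++) {
        swapcontext(&main_ctx, &co_ctx);    /* "resume" the coroutine */
        printf("back in main\n");
    }
    return 0;
}
```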
I'd add that while I was already familiar with coroutines from a few different contexts and languages, I found this particular framing of them -- especially seeing them contrasted side by side with a state-machine -- enlightening and novel.
We implemented a low-power wireless sensing device on a microcontroller using asynchronous coroutines in C to replace eight state machines. It was a dream. Each operation read clearly "linearly" while overlapping radio frequency hopping, sensor power-up/down sequencing, sensing state, transmission, firmware updates, etc. All while going into low-power mode until the next event to process.
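The comment doesn't say which coroutine technique they used, but a common way to get this "linear" style in plain C on a microcontroller is a switch-based (protothreads-style) coroutine. A hedged sketch with hypothetical, stubbed hardware hooks; note that locals don't survive a yield in this style, so persistent state lives outside the function:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint16_t line; } coro_t;

#define CORO_BEGIN(c) switch ((c)->line) { case 0:
#define CORO_YIELD(c) do { (c)->line = __LINE__; return false; case __LINE__:; } while (0)
#define CORO_END(c)   } (c)->line = 0; return true

/* Stubbed hardware hooks so the sketch compiles standalone. */
static bool sensor_ready(void)               { return true; }
static void sensor_power_on(void)            { }
static int  sensor_read(void)                { return 42; }
static void radio_send(const void *b, int n) { (void)b; (void)n; }

static int sample;   /* survives yields, unlike locals */

/* One "linear" task that would otherwise be a hand-written state machine. */
static bool sense_and_transmit(coro_t *c) {
    CORO_BEGIN(c);

    sensor_power_on();
    while (!sensor_ready())
        CORO_YIELD(c);            /* give up the CPU until the sensor settles */

    sample = sensor_read();
    radio_send(&sample, sizeof sample);
    CORO_YIELD(c);                /* give up the CPU until, e.g., TX completes */

    CORO_END(c);
}

int main(void) {
    coro_t c = {0};
    while (!sense_and_transmit(&c))
        ;                         /* a real scheduler would run other tasks here */
    return 0;
}
```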
If we go a bit old-school on AI and reason in the "connectionist" framing:
Let's say neural memories are encoded in some high-dimensional vector space.
And so memory recall is an associative process that entails constructing a query vector and issuing it across the neural memory space.
And the brain is constantly learning, and that learning entails some changes in the structure of the high-dimensional memory space.
And let's say that re-encoding of a neural memory happens upon recall, and only upon recall.
Then it could be that all experience is in fact stored, but because of changes due to learning, those memories become inaccessible. The machinery constructing query vectors has updated its structure enough that its encoding of those query vectors is sufficiently dissimilar from the encoding of the stored memory vectors (which use the encoding from the last recall).
> The machinery constructing query vectors has updated its structure enough that its encoding of those query vectors is sufficiently dissimilar from the encoding of the stored memory vectors (which use the encoding from the last recall).
Wouldn't that result in very bizarre memories instead of no memories?
The brain is sufficiently complex that I'd expect gross distortions will get swept under the rug. You'd get either lightly distorted memories (dad is 18ft tall, mom's face is wrong, favorite toy lived in this spot instead of that one) or nothing at all. If a memory is totally corrupted, your brain won't give it to you because it doesn't pass your perceptive filters.
Children believe a lot of silly things that they "grow out" of thinking.
How sure are you that your childhood memories are accurate? How sure are you that you aren't simply conditioned to ignore distorted childhood memories?
Doesn't really explain why it happens universally and why this doesn't happen after other major changes in lifestyle (people who move to a radically different country don't lose all memories of their life beforehand).
My 2 year old went on a mental breakdown of a temper tantrum last night because she saw an apple on the tv, decided it meant she wanted an apple, and couldn't understand why she could not have an apple despite seeing one on the tv just then! A toddler is still trying to understand how reality itself works.
A 4 year old knows that jumping off of the stairs onto tile is going to hurt. A 4 year old understands the apple on the tv is an apple on the tv and is not a physical apple in the house.
Obviously a 4 year old is much more together than a 2 year old. But we're talking about a fundamental difference so great that no memories can be preserved. That's a high bar.
Age 2: Can point to their own body parts; hold something in one hand while doing something with the other hand
Age 4: Changes behavior based on where you are; can draw a person with more than 3 distinct body parts
There's a huuuge amount of learning that happens through this period. Your brain is learning things like 3-dimensional space, temperatures exist and I don't like some of them, I-have-two-arms, things fall when dropped, I must engage my big toe to stay upright while walking, other people appear to have feelings, other people appear to believe that I appear to have feelings.
And in any case, the difference between 2 and 4 is only relevant to the question of whether a 4 year old can remember being 2, not what this article is about, which is adults not remembering being <4.
>There's a huuuge amount of learning that happens through this period. Your brain is learning things like 3-dimensional space, temperatures exist and I don't like some of them, I-have-two-arms, things fall when dropped, I must engage my big toe to stay upright while walking, other people appear to have feelings, other people appear to believe that I appear to have feelings.
Many of those things are completely innate. Walking, for example, while people use the word "learn" in casual speech, is something that is innate. I just don't think the original comment is well-grounded in what we know about infants' cognition. And in any case, a 2 year old definitely understands 3D space.
... walking absolutely must be learned... They will automatically learn it without explicit teaching but indeed it must be learned. A child prevented from standing or walking for 5 years and then stood on their feet for the first time will not be able to walk.
That is simply not true. There are many cultures which greatly restrict infants' ability to move (e.g. traditional rural communities in Northern China or the Ache in Paraguay) and the children in these communities still learn how to walk. Not only that, but the basic neural mechanisms that are used in walking are innately specified (central pattern generators), not learnt (https://www.sciencedirect.com/science/article/abs/pii/S09594...). Now, there is a degree of "fine-tuning" that is learnt that makes the walking more fluent and precise, but the basic principles of walking are innate.
One only needs to see a foal walking less than an hour after birth to be convinced of this.
Part of the problem is that humans are born so premature that people confuse natural maturation with learning. Just as we don't learn puberty, we don't learn how to walk.
>Which cultures completely restrict their infants from attempting to walk?
The ones I mentioned in my comment.
>Did you read the paper you linked? It describes all the immense amount of learning that actually happens.
Again, as I said, there is a degree of fine-tuning but the core mechanisms are innate.
Some examples:
> In particular, the core premotor components of locomotor circuitry mainly derive from a set of embryonic interneurons that are remarkably conserved across different species
>Detailed EMG recordings in chick embryos during the final week of incubation showed that the profiles of EMG activity during repetitive limb movements resemble those of locomotion at hatching
> In addition, human fetuses exhibit a rich repertoire of leg movements that includes single leg kicks, symmetrical double legs kicks, and symmetrical inter-limb alternation with variable phase.
I don't think you read the article, or else you think that "development" means learning.
>Have you actually seen a foal walking? They are very visibly learning how to do it!
They can walk right away, but they get better at it. It's innate, but you can fine-tune it. Like I said.
I think your definition of innate is counter to the common definition of innate. The common definition of innate is that there is no thought behind full understanding and capacity to perform: for example, snakes do not generally need to learn how to move without legs or how to open the mouth wide enough to consume big food. There isn't a try/fail cycle while they work out the capacities of their body. I fed my pet snake a baby quail for the first time in its life and it clearly had to learn how to eat it (tried and spat out the leg, wing, etc.), even though the core mechanism of big-mouth-big-swallow was clearly innate in it. Just because there is a core mechanism for walking in babies doesn't mean the baby doesn't still need to learn how to perform the behavior voluntarily, on command, consciously, according to their own will.
What you just described for the baby applies equally to the snake. It's obviously difficult to neatly segment things into innate and non-innate, but the idea that walking is a matter of maturation rather than "learning" is the mainstream view among scientists and has been for a century.
Again, I conceded that you have to "fine-tune" to get good at walking. But contrast that with, say, playing golf. That's something that categorically has to be learnt; we don't see fetuses practicing their drives in utero.
No, that is actually exactly what I was describing. If it was innate, they wouldn't need to trial and error their way (i.e. learn) to proficiency. But indeed they do.
Your analogy was puberty, which in fact happens with development regardless of trial and error (i.e. learning).
The two developmental processes are clearly distinct. The distinction is that one is a process of learning and the other is not.
I'm talking about being able to walk, you're talking about being proficient. I've said repeatedly that fine-tuning to get better is not incompatible with innateness.