
> I would have guessed that any kind of forests have quite limited cap how much carbon it could retain in dead wood

The article says, "We found that a forest that's developing toward old-growth condition is accruing more wood in the stream than is being lost through decomposition" and "The effect will continue in coming decades, Keeton said, because many mature New England forests are only about halfway through their long recovery from 19th- and 20th-century clearing for timber and agriculture".


Ah, I overlooked that they actually acknowledge the "cap" directly in the preceding paragraph, and even put it in a "coming decades" time frame. Makes much more sense now, thanks for the pointer!

Still a bit confused about the emphasis on wood deposits in "streams" (reportedly much more effective, but I'd guess with very limited capacity to really "lock" the mass away) compared to regular humus (not that effective, but for a forest with a couple of centuries of growth ahead, I'd guess much more capacious). Good news either way, though!


Reading between the lines in the article (which is of course always subject to incorrect interpretation), I think the reason for the focus on streams is just that nobody else has looked at them before, so they are a factor not previously accounted for. Other sources have already been accounted for; they may be worth more than what is in streams, but since they are already known, the article didn't mention them.


“Coming decades” is an understatement. It depends on local conditions, but Douglas firs in the PNW take 200-300 years to decay completely, so that’s centuries more of carbon capture as long as we let our forests rewild. Realistically, a forest becomes old growth once there are at least three generations of trees in various states of decay. That may take decades in warmer climates but much longer in the north.


I skimmed https://en.wikipedia.org/wiki/Sinoatrial_node#Function. Here's my guess at what is going on:

In humans (and I guess many animals), the thing that controls the heartbeat is a structure in the heart called the sinoatrial node. Each cell in the SA node can generate its own rhythmic electrical impulse. I imagine that when one of these cells thaws out in a wood frog, it immediately starts producing its rhythmic pulse. It has to get in sync with the rest of the cells in the SA node before the heart will beat correctly, so the cells have a mechanism to communicate their rhythm to their neighbours. I guess that on each cycle, each cell adjusts its phase a little towards the average phase of its neighbours, and thus a consensus is reached.
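
Here's a toy model of that consensus mechanism (my own illustration, not from the Wikipedia article): a handful of oscillators that each nudge their phase toward the others every cycle, Kuramoto-style. With identical natural frequencies they settle on a common phase within a few dozen cycles. Compile with -lm.

    #include <math.h>
    #include <stdio.h>

    #define NUM_CELLS 8
    #define COUPLING  0.1               /* strength of each nudge */
    #define TWO_PI    6.28318530717959

    int main(void) {
        /* Arbitrary, unsynchronised starting phases (radians). */
        double phase[NUM_CELLS] = { 0.0, 1.3, 2.9, 4.1, 5.5, 0.7, 3.3, 6.0 };

        for (int cycle = 0; cycle < 100; cycle++) {
            double next[NUM_CELLS];
            for (int i = 0; i < NUM_CELLS; i++) {
                /* Nudge cell i towards the others. sin() makes the pull
                   wrap correctly across the 0/2-pi boundary. Real cells
                   would couple only to immediate neighbours. */
                double pull = 0.0;
                for (int j = 0; j < NUM_CELLS; j++)
                    pull += sin(phase[j] - phase[i]);
                next[i] = phase[i] + COUPLING * pull / NUM_CELLS;
            }
            for (int i = 0; i < NUM_CELLS; i++)
                phase[i] = fmod(next[i] + TWO_PI, TWO_PI);
        }

        /* After enough cycles, every cell reports (nearly) the same phase. */
        for (int i = 0; i < NUM_CELLS; i++)
            printf("cell %d: phase %.3f\n", i, phase[i]);
        return 0;
    }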


Fireflies eventually manage to more or less sync up and they’re completely separate organisms - tiny cells with physical connections inside the body should be able to make it work.


Yep. When I walk into a room containing my dog and the poo it did on the carpet two hours earlier, it is sorry about what it did.


Pavlovian response. The dog associates pooing on the carpet with you punishing it/being upset with it.


> When I walk into a room containing my dog and the poo it did on the carpet two hours earlier, it is sorry about what it did.

This shows that the dog knows it did something you didn't want it to do, yes. I'm not sure it shows "thinking about ourselves, scrutinising and evaluating what we do and why we do it", which is how the article this discussion is about described "mental reflection". Our dogs have made messes in our house and have shown evidence that they know they're not supposed to, but I see no evidence that they have done any reflection on why they did it.

Of course "mental reflection" is not an all or nothing thing, obviously there is a continuum of possibilities. So a more precise phrasing of my question might be: is there any evidence that dogs can perform mental reflection at a point on that continuum anywhere near the point where humans do it? Or are they only capable of it at a point on the continuum much, much closer to the other end, the "no reflection at all" end?


What would a metric for measuring distance on this continuum look like?


I'm not sure since we don't have any good way of quantifying "amount of mental reflection". But that doesn't mean there isn't a large difference between dogs and humans in this regard.


Another example is Low-Density Parity-Check codes [1]. Discovered in 1962 by Robert Gallager, but abandoned and forgotten about for decades because they were computationally impractical. It looks like there was a 38-year gap in the literature until they were rediscovered by David MacKay [2].

The first mainstream use was in 2003. It is now used in WiFi, Ethernet and 5G.

[1] https://en.wikipedia.org/wiki/Low-density_parity-check_code

[2] https://scholar.google.com/scholar?q=%22low+density+parity+c...


An article about that from 2000. https://www.theregister.com/2000/04/17/playstation_2_exports...

Brilliantly it says:

> Register readers with very long memories indeed will recall similar concerns being raised over Sir Clive Sinclair's ZX-81. The fear then was that the sneaky Sovs would try to buy heaps of ZX-81s for their Zilog Z80-A CPUs and mighty 1KB RAM to upgrade their nuclear missile guidance systems.


You missed out the input to the LLM, which would presumably be a requirements spec with all behaviour specified in exact detail, including all the tricky corner cases where someone has to think hard about which solution is most useful and least confusing to the customer. Natural language isn't great for expressing such things. A formal notation would be easier. Perhaps something that makes it easy to express if-this-then-that kinds of things. I wonder if a programming language would be good for that.


Indeed, that is why, based on offshoring experience, I see a future where the developers of tomorrow are mostly technical architects, working Star Trek style: "Computer, do XYZ".

This has been tried before with UML (see Rational, Together, or Enterprise Architect); however, LLMs bring an additional automation step to the whole thing.


> strict aliasing allows for optimizations that are actually worthwhile

I don't think there are many sensible, real world examples.

A nice explanation of the optimizations the strict-aliasing rule allows: https://stackoverflow.com/a/99010/66088

The example given is:

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct Msg {
        unsigned int a;
        unsigned int b;
    } Msg;

    void SendWord(uint32_t);

    int main(void) {
        // Get a 32-bit buffer from the system
        uint32_t* buff = malloc(sizeof(Msg));

        // Alias that buffer through message
        Msg* msg = (Msg*)(buff);

        // Send a bunch of messages
        for (int i = 0; i < 10; ++i) {
            msg->a = i;
            msg->b = i+1;
            SendWord(buff[0]);
            SendWord(buff[1]);
        }
    }
The explanation is that with strict aliasing, the compiler doesn't have to insert instructions to reload the contents of buff on every iteration of the loop.

The problem I have is that when we rewrite the example to use a union, the generated code is the same regardless of whether we pass -fno-strict-aliasing or not. So this isn't a working example of an optimization enabled by strict aliasing. It makes no difference whether I build it with clang or gcc, for x86-64 or ARMv7. I don't think I did it wrong. We still have a memory load instruction in the loop. See https://godbolt.org/z/9xzq87d1r

Knowing whether a C compiler will make an optimization or not is all but impossible. The simplest and most reliable solution in this case is to do the loop hoisting optimization manually:

        uint32_t buff0 = buff[0];
        uint32_t buff1 = buff[1];
        for (int i = 0; i < 10; ++i) {
            msg->a = i;
            msg->b = i+1;
            SendWord(buff0);
            SendWord(buff1);
        }
Doing so removes the load instruction from the loop. See https://godbolt.org/z/ecGrvb3se

Note 1: The first thing that goes wrong with the Stack Overflow example is that the compiler spots that malloc returns uninitialized data, so it can omit the reloading of buff in the loop anyway. In fact, it removes the malloc too. Here's clang 18 doing that: https://godbolt.org/z/97a8K73ss. I had to replace malloc with an undefined GetBuff() function, so the compiler couldn't assume the returned data was uninitialized.

Note 2: Once we're calling GetBuff() instead of malloc(), the compiler has to assume that SendWord(buff[0]) could change buff, and therefore it has to reload it in the loop even with strict-aliasing enabled.


The strict aliasing stuff allows you to do "optimisations" across translation units that are otherwise unsound.

Within a translation unit, the compiler's alias analysis is much more effective than what those rules give you, because what actually matters is whether one int* aliases another int*, and the type-based rules say nothing about that.

And then we have link time optimisation, at which point the much better alias analysis runs across the whole program.

What remains therefore is a language semantically compromised to help primitive compilers that no longer exist to emit slightly better code.

This is a deeply annoying state of affairs.
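
For what it's worth, C99 already gives you a way to state non-aliasing explicitly rather than having the compiler infer it from types: restrict. A minimal sketch (my own, not from the thread):

    void scale(float *restrict dst, const float *restrict src, int n) {
        // restrict promises the caller that dst and src don't overlap,
        // so the compiler can keep src values in registers and vectorise
        // the loop without worrying about stores to dst clobbering them.
        for (int i = 0; i < n; ++i)
            dst[i] = 2.0f * src[i];
    }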


Aliasing analysis is quite helpful for sophisticated compilers to generate good code.


Alias analysis is critical. Knowing what loads and stores can alias one another is a prerequisite for reordering them, hoisting operations out of loops and so forth. Therefore the compiler needs to do that work - but it needs to do it on values that are the same type as each other, not only on types that happen to differ.

Knowing that different types don't alias is a fast path in the analysis, or a crutch for a lack of link time optimisation. The price is being unable to write code that does things like initialise an array using normal stores and then operate on it with atomic operations, implement some floating point operations, access network packets as structs, mmap hashtables from disk into C structs, and so forth. An especially irritating one is the hostility to arrays that are sometimes a sequence of SIMD types and sometimes a sequence of uint64_ts.

Though C++ is slowly accumulating enough escape hatches to work around that (std::launder et al), C is distinctly lacking in the same.
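
The one portable escape hatch standard C does offer is memcpy-based type punning, which modern compilers optimise down to a plain register move. A sketch:

    #include <stdint.h>
    #include <string.h>

    // Well-defined in standard C, unlike *(uint32_t*)&f.
    // gcc and clang compile the memcpy away entirely.
    static uint32_t float_bits(float f) {
        uint32_t u;
        memcpy(&u, &f, sizeof u);
        return u;
    }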


Alias analysis is important. It's the C standard's type-based "strict aliasing" rules which are nonsense and should be disabled by default.

This is C. Here in these lands, we do things like cast float* to int* so that we can do evil bit level manipulation. The compiler is just gonna have to put that in its pipeline and compile it.
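
The canonical example (my reconstruction of the well-known Quake III trick, not something from this thread) type-puns a float through an int32_t to get a cheap approximate inverse square root:

    #include <stdint.h>
    #include <stdio.h>

    static float rsqrt(float x) {
        float half = 0.5f * x;
        int32_t i = *(int32_t *)&x;        // evil bit-level read of the float
        i = 0x5f3759df - (i >> 1);         // magic constant gives a first guess
        x = *(float *)&i;                  // evil bit-level write back
        return x * (1.5f - half * x * x);  // one Newton-Raphson refinement
    }

    int main(void) {
        printf("%f\n", rsqrt(4.0f));       // prints roughly 0.5
        return 0;
    }

The pointer casts here are exactly what the strict-aliasing rules forbid, which is why code like this wants -fno-strict-aliasing (or a memcpy/union pun) to stay well-defined.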


How does the version with buff0 and buff1 work? It looks like it always sends the same two values...


Hmmm, yes. I didn't understand what the code did.

Instead of creating those buff0 and buff1 variables before the loop, I should have done:

    for (int i = 0; i < 10; ++i) {
        unsigned a = i;
        unsigned b = i+1;
        msg->a = a;
        msg->b = b;
        SendWord(a);
        SendWord(b);   
    }
That gets rid of the load from the loop. https://godbolt.org/z/xsqWfxKzd



They come in normal aspect ratio jars in the UK. eg https://www.ocado.com/products/m-s-nonpareilles-capers-60903...


I doubt the surface area matters. The volume of air and therefore oxygen you introduce every time you open the jar is the important factor. I would have thought that oxygen will dissolve in the liquid over a day or so even with a small surface area between the air and the liquid.

