Hacker News — nmsmith's comments

You mean the positive impact of consuming statins. Consuming statins coincides with lower LDL, so I can imagine people conflating the two variables. I'm sure taking statins also has other effects on the body.


No, I mean the positive impact of lowering LDL. Statins of course show a positive impact, and yes, partly from also lowering inflammation. But so do other drugs that lower LDL through other mechanisms, including ezetimibe, bempedoic acid, and PCSK9 inhibitors. Mendelian randomization (MR) studies on genetics also show that lower LDL, even without other inflammation-reducing factors, significantly reduces the risk of negative ASCVD outcomes.

Even from the inflammation standpoint, we know that lowering LDL has a causal effect on lowering inflammation in your arteries: deposited plaque triggers foam cell activation and cytokine signaling, which directly increase localized inflammation, which in turn can drive additional plaque deposition.

This is also just extremely well understood mechanistically - to have plaque deposited in your arteries, you have to have something that deposits it. This comes primarily from LDL in most individuals - though Lp(a) is a largely genetically driven carrier of atherogenic particles as well, which is why ApoB is a better measure - and with less LDL there is simply less to be deposited.

The idea that LDL is not a directly causal factor for ASCVD is one that goes against mountains of evidence and the consensus of the absolutely overwhelming majority of experts in the field. That is not the same as them saying it is the only causal factor - but people trying to argue that LDL isn't causal have a huge burden of proof on them. This is some of the most studied science in health.


Hardware-based instruction reordering always preserves the behaviour of the original program. (Assuming the original program is valid.)

For example, an Intel CPU won't reorder `x += 1` and `x *= 2`.
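A quick sketch (in Rust) of why those two operations can't be swapped: they don't commute, so reordering them would change the observable result, which hardware reordering is never allowed to do.

```rust
fn main() {
    // Original order: increment, then double.
    let mut x = 3;
    x += 1;
    x *= 2;
    assert_eq!(x, 8); // (3 + 1) * 2

    // Swapped order: double, then increment — a different result,
    // so the CPU must respect the dependency between the two writes.
    let mut y = 3;
    y *= 2;
    y += 1;
    assert_eq!(y, 7); // 3 * 2 + 1
}
```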


Yep, that's an accurate summary! The model still features a form of "borrowing", it just happens at the granularity of groups.

I wrote a more detailed answer here: https://news.ycombinator.com/item?id=45057636


The "Group Borrowing" concept that we're discussing still imposes aliasing restrictions to prevent unsynchronized concurrent access, and also to prevent "unplanned" aliasing. For example, for the duration of a function call, the default restriction is that a mut argument can only be mutated through the argument's identifier. The caller may be holding other aliases, but the callee doesn't need to be concerned about that, because the mut argument's group is "borrowed" for the duration of the function call.

I suppose you could describe the differences from Rust as follows:

- Borrowing happens for the duration of a function call, rather than the lifetime of a reference.

- We borrow entire groups, rather than individual references.

The latter trick is what allows a function to receive mutably aliasing references. Although it receives multiple such references, it only receives one group parameter, and that is what it borrows.
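For contrast, here is the restriction in standard Rust that group borrowing is meant to lift. This is only a loose analogy, not the design being discussed: in Rust, a function can't receive two mutable references that alias, so the closest workaround is to borrow the whole container once (roughly, "one group") and pass the aliasing locations as plain data:

```rust
// Rust rejects two aliasing &mut passed separately:
//   fn bump_both(a: &mut i32, b: &mut i32) { ... }
//   bump_both(&mut v[0], &mut v[0]); // error: `v` borrowed as mutable twice

// Instead, the container is borrowed once — loosely analogous to borrowing
// a single "group" — and the (possibly aliasing) indices are passed as data.
fn bump_both(v: &mut Vec<i32>, i: usize, j: usize) {
    v[i] += 1;
    v[j] += 1; // fine even if i == j: only one &mut Vec exists
}

fn main() {
    let mut v = vec![10, 20];
    bump_both(&mut v, 0, 0); // "aliasing": both indices refer to v[0]
    assert_eq!(v, vec![12, 20]);
}
```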

Hope that makes sense!


> Borrowing happens for the duration of a function call

I don't understand this part. Cannot a function store a borrowed reference into a structure that outlives the function?


Yes, functions can return non-owning references. However, those references do not "borrow" their target, in the sense that they lock others out. That is the Rust model, and OP does a great job covering its limitations.

So, with the understanding that "borrowing" means "locking others out", a group parameter borrows the group for the duration of the function call. If it borrows the group as mutable, no other group parameters can borrow the group. If it borrows the group as immutable, other group parameters are limited to borrowing the group as immutable. This is reminiscent of the Rust model, but the XOR rule applies to group parameters rather than references, and borrowing lasts the duration of a function call, rather than the lifetime of a reference.


Hi there, I am the Nick whose design we're discussing. You raise some valid points: the blog post enumerates some limitations with Rust's model, but my design (as written) only resolves a subset of those limitations. We probably should have made that a bit clearer.

That said, there is still hope! I have been iterating on the design over the last 9 months and am fairly confident that I can model graphs, as long as a few restrictions are imposed, such as not being able to delete nodes. But I can't prove this will work yet, so we will need to wait and see what the next design iteration looks like. I'm fairly confident that the next iteration will be more powerful than the version presented in OP's blog post.


Good luck! I look forward to seeing the results.


Do you remember what language that was? Or at least, how I'd find it? I'd be interested in checking it out.


> building a tree in Rust will do a lot of reference counting

This isn't true in most cases. If every subtree is only referenced by its (unique) parent, then you can use a standard Rust "Box", which means that during compilation, the compiler inserts calls to malloc() (when the Box is created) and free() (when the Box goes out of scope). There will be no reference counting — or any other overhead — at runtime.
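A minimal sketch of such a tree: each node uniquely owns its children through `Box`, so allocation and deallocation compile down to plain malloc/free with no reference counting anywhere.

```rust
// Each node uniquely owns its children; dropping the root frees the
// whole tree. No Rc, no runtime reference counts.
struct Node {
    value: i32,
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}

fn sum(node: &Node) -> i32 {
    node.value
        + node.left.as_deref().map_or(0, sum)
        + node.right.as_deref().map_or(0, sum)
}

fn main() {
    let tree = Node {
        value: 1,
        left: Some(Box::new(Node { value: 2, left: None, right: None })),
        right: Some(Box::new(Node { value: 3, left: None, right: None })),
    };
    assert_eq!(sum(&tree), 6);
} // `tree` goes out of scope here; every Box is freed automatically
```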


Tree-based structures with root ownership are very constrained as general-purpose data structures.


The multidimensional version definitely looks exciting! Do you have any benchmarks for it yet? And will you be publishing a paper on it?

