I would say it is exactly the opposite. Most other universities have given up freedom of expression. You would be far freer at Hillsdale than most universities.
You need to understand that there are many Christians who are not young-earth creationists and don’t have an issue with evolution. It is pure materialism that they would have an issue with. Don’t stereotype an entire group of people.
> Because Amalthea and Thebe have slightly tilted orbits, Batygin and Adams analyzed these small orbital discrepancies to calculate Jupiter's original size…
This seems like a non-sequitur. What do tilted orbits have to do with size?
My mind has been blown using ChatGPT's o4-mini-high for coding and research (its knowledge of computer vision and tools like OpenCV is fantastic). Is it worth trying out all the shiny new AI coding agents when I need to get work done?
Kinda interesting, as I've found 4o better than o4-mini-high for most of my coding. And while it's mind blowing that they can do what they can do, the code itself coming out the other end has been pretty bad, but good enough for smaller snippets and extremely isolated features.
I would say yes. The jump in capability and reduction in hallucinations (at least in code) going from ChatGPT (even o3) to Claude 3.7 is immediately noticeable in my experience. Same goes for Gemini, which was even better in some ways until perhaps today.
My doctor tells me that PSA testing has now been shown not to be effective, so they don't do it anymore. I am 58 and my dad died of prostate cancer, so I am concerned.
You have a direct genetic history of prostate cancer, thus you are at higher risk than most men. At age 57 I had no family history and no symptoms, yet my primary care doc suggested I be tested anyway. My PSA was in fact elevated. I got a biopsy and found my prostate was 80% cancerous. I got it surgically removed just in time. 10 years later I'm still cancer free.
Every day I give thanks that my doctor did NOT follow the standard medical advice back then NOT to test. Forewarned is forearmed.
This may sound like a silly question, but are there men who just have the prostate removed as a preventative measure? Some women who have a high risk of breast cancer have their breasts removed.
That said, if nerve-sparing surgery were done early instead of doing NON-nerve-sparing surgery later (a standard radical prostatectomy), perhaps that might diminish some of the typical side-effects of the standard surgery like impotence or incontinence. But I'm only speculating.
It should be patient dependent. Screening everyone is not currently thought to be useful, but those with risk factors should be screened after a discussion of risks/benefits. Your father having prostate cancer (especially if he was diagnosed before age 65) is a risk factor, and I would advocate for it, especially if it is something you are worried about and you understand that a PSA can sometimes be falsely elevated in benign conditions, which may mean you get a biopsy that ultimately wasn't necessary, along with the potential risks that carries.
Think of how full of shit most software developers are. Now think of how much worse their advice would be if they could be sued for wrong answers, but were given all of ten minutes to look at a code base and come up with a recommendation. That's a doctor.
I agree with the sibling advice to insist on PSA labs. You are your own advocate. The primary job of a doctor is actually to be a bureaucrat, the first line of offense for the health management companies whose whole function is to deny healthcare. They can easily rubber stamp a few labs once you change their risk calculus of not doing it, by explicitly laying out your risk factors.
Sure, "in the US". Obviously you don't want to be as ham-fisted as to directly reference the liability dynamic, or to pop the doctor's ego by reminding them that most of their job is pushing paperwork. The point is to take the medical system off the pedestal in your own mind, such that there is one less thing holding you back as you have to repeatedly advocate for yourself. And I would think the need to advocate for yourself applies everywhere (Sturgeon's law), regardless of whether the system is as antagonistic as the one in the US or not. The US system just drastically increases the possible damage from failing to do so.
"the need to advocate for yourself" isn't the only thing you said. I was referring to "the first line of offense for the health management companies whose whole function is to deny healthcare" doesn't apply everywhere. I also don't think it particularly applies in the US, although I'm happy to see evidence of that.
Despite all that, as you say, you won't be sued for saying that stuff.
"Evidence" is a pretty high bar to clear, especially considering one of the reasons the healthcare industry was able to get so callous is exactly by focusing on top-down whole-cohort metrics while ignoring individual patients. I'm sure everything looks great from inside the system.
Anecdotally, healthcare management companies insist on individuals getting referrals from "primary care providers", who take several weeks to provide an appointment, a few weeks more to issue a referral, and will only do one referral at a time even for unknown problems despite it taking several months to get an appointment with a specialist. And finding an available new primary doctor is most certainly not easy, either. This has been my experience for myself and a handful of other people I've advocated for, across several different "insurance" companies. Obviously none of those requirements are necessary, except for expanding the bureaucracy to meet the needs of the ever expanding bureaucracy, but it has the net effect of constructively denying healthcare.
Might there be some regional healthcare system in the US where patients are seen promptly and where the bureaucratic procedures create efficiency rather than functioning as mechanisms to stonewall and run down the clock? Sure, of course. But given the terrible dynamics that are allowed to fester, it feels like a working system is the exception rather than the norm.
Sample size of 2 from my family but PSA tests led to biopsy and treatment with full recovery so far (knocks wood). It seems like low hanging fruit in the case where PSA levels spike.
This idea that everything must be initialized (i.e. no undefined or non-deterministic behavior) should never be forced upon a language like C++, which rightly assumes the programmer should have the final say. I don't want training wheels put on C++ -- I want C++ to do exactly and only what the programmer specifies and no more. If the programmer wants to have uninitialized memory -- that is her business.
It's so ironic hearing a comment like this. If what you really want is for C++ to do only what you strictly specified, then you'd always release your software with all optimizations disabled.
But I'm going to go out on a limb here and guess you don't do that. You actually do allow the C++ compiler to make assumptions that are not explicitly in your code, like reorder instructions, hoist invariants, eliminate redundant loads and stores, vectorize loops, inline functions, etc...
All of these things I listed are based on the compiler not doing strictly what you specified but rather reinterpreting the source code in service of speed... but when it comes to the compiler reinterpreting the source code in service of safety.... oh no... that's not allowed, those are training wheels that real programmers don't want...
Here's the deal... if you want uninitialized variables, then explicitly have a way to declare a variable to be uninitialized, like:
int x = void;
This way, for the very, very rare cases where it makes a performance difference, you can explicitly specify that you want this behavior... and for the overwhelming majority of cases where it makes no performance impact, we get safe and well-specified behavior.
This is what the D programming language does. Every variable declaration has a well-known value unless it is initialized with void. This is nice, and optimizing compilers are able to drop the useless assignments anyway.
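For what it's worth, C++ itself seems to be heading the same way: as I understand the C++26 "erroneous behaviour" work (P2795), reading an uninitialized automatic variable stops being full-blown UB, and an attribute opts back out for the cases where you really want raw stack garbage. A rough sketch from memory, so treat the exact spelling as approximate:

    void f()
    {
        int a;                   // C++26: reading this before assigning is "erroneous",
                                 // not UB; it holds some implementation-chosen value
        int b [[indeterminate]]; // explicit opt-out: reading b stays plain UB
        int c = 0;               // today's portable way to get a defined value
    }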
I guess nullptr was put in there because "#define NULL 0" had some bad consequences for C++ and they needed a replacement.
std::nullopt doesn't seem so different to what I was talking about; I guess it's just less verbose. When I wrote that, I was thinking of things like "std::is_same<T1, T2>::value" being there.
Safe defaults matter. If you're using x to index into an array, and it happens to be +-2,000,000,000 because that's what was in that RAM location when the program launched, and you use it before explicitly setting it, you're gonna have a bad time.
And if you used it with a default value of 0, you're going to end up operating on the 0th item in the array. That's probably a bug and it may even be a crasher if the array has length 0 and you end up corrupting something important, but the odds of it being disastrous are much lower.
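To make that concrete (a toy example, names made up):

    #include <cstdio>
    #include <vector>

    int main()
    {
        std::vector<int> v{10, 20, 30};

        int i;                        // indeterminate: whatever was in that stack slot
        (void)i;                      // (kept only for illustration)
        // std::printf("%d\n", v[i]); // wild out-of-bounds read: anything can happen

        int j{};                      // value-initialized to 0
        std::printf("%d\n", v[j]);    // possibly the wrong element, but a bounded,
                                      // reproducible mistake you can actually debug
    }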
The whole advantage of UB is that it places fewer constraints on what the optimizer can do. If I say something does not need to be initialized, I am giving the optimizer the freedom to do more!
So what's the issue with introducing explicit syntax to do exactly that if you want to? A safe default does not preclude you from opting out of safety with a bit of syntax or perhaps a compiler flag.
> It's so ironic hearing a comment like this. If what you really want is for C++ to do only what you strictly specified, then you'd always release your software with all optimizations disabled
The whole idea of optimizations is producing code that’s equivalent to the naive version you wrote. There is no inconsistency here.
Two points... the first is that if you want an optimization that violates the "as if" rule, sure... copy constructors are allowed to violate the "as if" rule, so here you go:
https://godbolt.org/z/jzWWTW85j
Compile that without optimizations and you get one set of output; compile it with optimizations and you get another. There is actually an entire suite of exceptions to the "as-if" rule.
The second point is that the whole reason for having an "as if" rule in the first place is to give permission for the compiler to discard the literal interpretation of the source code and instead only consider semantics that are defined to be observable, which the language standard defines not you the developer.
There would be no need for an "as if" rule if the compiler strictly did exactly what it was told. Its very existence should be a clue that the compiler is allowed to reinterpret the source code in ways that do not reflect its literal interpretation.
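For anyone who doesn't want to click through, the demonstration is along these lines (a simplified sketch, not the exact code in the link; whether "copy" prints can legitimately differ between -O0 and -O2, or with -fno-elide-constructors):

    #include <cstdio>

    struct Noisy {
        Noisy()             { std::puts("ctor"); }
        Noisy(const Noisy&) { std::puts("copy"); } // an observable side effect
    };

    Noisy make()
    {
        Noisy n;
        return n;   // NRVO: eliding this copy is permitted but not required
    }

    int main()
    {
        Noisy n = make();   // since C++17 this binding itself adds no extra copy
    }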
The standard says that "Copy elision is <...> one of the two allowed forms of optimization, alongside allocation elision and extension (since C++14), that can change observable side-effects"
I agree you have a valid point though. I'd be interested to know the committee's reasoning.
Nothing about my post has anything to do with assembly. You asked for, and I quote "Show me some O2 optimizations that will act contrary to code I wrote - meaning they violate the “as if” rule." and I provided just that. You can go on the link I provided, switch back and forth between O2 and O0 and see different observable behavior which violates the as-if rule.
I'm not sure why you're bringing up assembly but it suggests that you might not correctly understand the example I provided for you, which I reiterate has absolutely nothing to do with assembly.
clang notices that in the second loop I'm multiplying by 0, and thus the result is just 0, so it just returns that. Critically, this is not "exactly and only what the programmer specifies", since I very much told it to do all those additions and multiplications and it decided to optimize them away.
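Not the literal snippet, but it was shaped roughly like this (identifiers made up):

    int sum(const int* data, int n)
    {
        int acc = 0;
        for (int i = 0; i < n; ++i)
            acc += data[i] * 0;   // clang folds the multiply-by-zero, then the whole
                                  // loop, and compiles the function to `return 0;`
        return acc;
    }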
The discussion about what should be the default behavior and what should be the opt-in behavior is very different from the discussion about what should be possible. It is definitely clear that in C++, it must be possible to not initialize variables.
Would it really be that unreasonable to have initialisation be opt-out instead of opt-in? You'd still have just as much control, but it would be harder to shoot yourself in the foot by mistake. Instead it would be slightly more easy to get programs that can be optimised.
C++ is supposed to be an extension of C, so I wouldn't expect things to be initialized by default, even though personally I'm using C++ for things where it'd be nice.
I'm more annoyed that C++ has some way to default-zero-init but it's so confusing that you can accidentally do it wrong. There should be only one very clear way to do this, like you have to put "= 0" if you want an int member to init to 0. If you're still concerned about safety, enable warnings for uninitialized members.
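To illustrate how many spellings there already are (struct name made up):

    struct S {
        int a;     // indeterminate for a local `S s;`, zero for `S s{};`
        int b = 0; // always 0: default member initializer
        int c{};   // also always 0, just spelled differently
    };

    void demo()
    {
        S s1;   // s1.a is indeterminate; s1.b == 0; s1.c == 0
        S s2{}; // all three members are 0
    }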
As someone who has to work in C++ day in and day out: please, give me the fucking training wheels. I don't want UB if I declare an object `A a;` instead of `A a{};`. At least make it a compiler error I can enable!
Ideally, there would be a keyword for it. So ‘A a;’ would not compile. You’d need to do ‘A a{};’ or something like ‘noinit A a;’ to tell the compiler you’re sure you know what you are doing!
Can you identify a compiler released in the last, say, 20 years that does not give a warning (or error, if the compiler is instructed to turn warnings into errors) for uninitialized variables when warnings are enabled?
Most of them. They have to, because declaring an uninitialized variable that is later initialized by passing a reference or pointer to it to some initialization function is a rather common pattern in low-level C++.
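The pattern in question is roughly this (names made up), and warning on `v` here would flag a huge amount of existing, correct code:

    // Out-parameter initialization: the variable is left uninitialized on purpose
    // because the callee is expected to fill it in.
    void read_sensor(int* out) { *out = 42; }

    int poll()
    {
        int v;             // deliberately uninitialized
        read_sensor(&v);   // the compiler can't easily prove this writes to v
        return v;
    }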
In a sane language that would be distinguishable by having the direction explicit (i.e. things like in/out/ref in C#), and then compiler could complain for in/ref but not for out. But this is C++, so...
Not me. I want to give the optimizer the freedom to do its thing. If I say something does not need to be initialized, then the optimizer has one less constraint to worry about.
Appealing to intuition doesn't work. The way DHH thinks is the complete opposite of people like me who love TypeScript but could never make heads or tails of Rails.
The problem is that the initialization semantics are so complex in C++ that almost no programmer is actually empowered to exercise that final say, and none without significant effort.
And that's not just said out of unfamiliarity. I'm a professional C++ developer, and I often find I'm more familiar with C++'s more arcane semantics than many of my professional C++ developer co-workers.
Unfortunately C++ ended up with a set of defaults (i.e., the most ergonomic ways of doing things) that are almost always the least safe. During most of C++'s development, performance was king and so safety became opt-in.
Many of these can't be blamed on C holdover. For example Vector.at(i) versus Vector[i] – most people default to the latter and don't think twice about the safety implications. The irony is that most of the time when people use std::vector, performance is irrelevant and they'd be much better off with a safe default.
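Concretely (a toy example):

    #include <stdexcept>
    #include <vector>

    int main()
    {
        std::vector<int> v{1, 2, 3};

        // int x = v[10];      // the ergonomic default: no bounds check, UB
        try {
            int y = v.at(10);  // the safer spelling: throws instead of scribbling over memory
            (void)y;
        } catch (const std::out_of_range&) {
            // handle the bad index
        }
    }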
Alas, we made our bed and now we have to lie in it.
vector::at() is an interesting example. Most of the time, you don't use it because you don't index vectors in the first place - you use iterators, and there's no equivalent of at() for them that is guaranteed to throw when out of bounds.
By that logic, you'd have to dislike the situations where C++ does already initialize variables to defined values, like `int i;`, because they're removing your control and forcing training wheels upon you.
It's a gotcha to be sure. Sometimes it does, sometimes it doesn't. From a reference[0]:
    #include <string>

    struct T1 { int mem; };

    struct T2
    {
        int mem;
        T2() {} // “mem” is not in the initializer list
    };

    int n; // static non-class, a two-phase initialization is done:
           // 1) zero-initialization initializes n to zero
           // 2) default-initialization does nothing, leaving n being zero

    int main()
    {
        [[maybe_unused]]
        int n;            // non-class, the value is indeterminate
        std::string s;    // class, calls default constructor, the value is ""
        std::string a[2]; // array, default-initializes the elements, the value is {"", ""}
    //  int& r;           // Error: a reference
    //  const int n;      // Error: a const non-class
    //  const T1 t1;      // Error: const class with implicit default constructor
        [[maybe_unused]]
        T1 t1; // class, calls implicit default constructor

        const T2 t2; // const class, calls the user-provided default constructor
                     // t2.mem is default-initialized
    }
That `int n;` on the 11th line (at namespace scope) is initialized to 0 per the standard. The `int n;` on line 18, inside a function, is not. And `struct T1 { int mem; };` on line 3 will have `mem` initialized to 0 if `T1` is instantiated like `T1 t1{};`, but not if it's instantiated like `T1 t1;`. There's no way to tell from looking at `struct T1{...}` how the members will be initialized without knowing how the object is created.
> The results show that, in the cases we evaluated, the performance gains from exploiting UB are minimal. Furthermore, in the cases where performance regresses, it can often be recovered by either small to moderate changes to the compiler or by using link-time optimizations.
That's the whole point of UB -- it leaves open more possibilities for optimization. If everything is nailed down, then the options are more restricted.
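The textbook example is signed overflow: because it's UB, the optimizer is allowed to assume it never happens and simplify accordingly (a sketch):

    bool next_is_bigger(int i)
    {
        // Signed overflow is UB, so the compiler may assume i + 1 never wraps
        // and typically folds this to `return true;` at -O2.
        return i + 1 > i;
    }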
> I want C++ do exactly and only what the programmer specifies and no more.
Most programmers aren't that good, and you're mostly running other people's code. Bad defaults that lead to exploitable security bugs are... bad defaults. If you want something to be uninitialized because you know what you're doing, then you should be forced to scream it at the compiler.
The dev should have the option to turn it off, but I think that removing a lot of undefined and non-deterministic behavior would be a good thing. When I did C++ I initialized everything, and when there was a bug it could usually be reproduced. There are a few cases where it makes sense performance-wise to not initialize, but those cases are small compared to all the other code where undefined behavior causes a ton of intermittent bugs.
Not significantly, as I understand it. There's certainly variation in LLM abilities with different initializations but the volume and content of the data is a far bigger determinant of what an LLM will learn.
For LLMs (as with other models), many local optima appear to support roughly the same behavior. This is the idea of the problem being under-specified, i.e. many more unknowns than equations, so there are many ways to get the same result.
You end up with different weights when using different random initialization, but with modern techniques the behavior of the resulting model is not really distinct. Back in the image-recognition days it was like +/- 0.5% accuracy. If you imagine you're descending in a billion-parameter space, you will always have a negative gradient to follow in some dimension: local minima frequency goes down rapidly with (independent) dimension count.