Agree that this is a problem (if the programmer is not careful).

But serious question, why even bother with this one fix?

The only reason for the fix is to make it more difficult to make errors.

Fix arrays, then you'd fix null pointers, then you might add objects, templates/generics to support a good collections library, RTTI, and before you know it you're creating another C++, D, Go, or Java. And we already have those.

C paved the way. Why not let it be the end of it?


Because buffer overflows are probably the number 1 security bug in C programs.


I was wondering why you were championing this idea and agreeing with the posted link in almost every way. Then I went back to the link and figured it out :)

P.S. thank you for everything you have done with D. I read in another HN thread about Better C, and it convinced me that D is the language I should be investing my time in learning and using.


> I read in another HN thread about Better C

A good tool to check out, though it hasn't been promoted much because it's new, is dpp[1]. It lets you directly reference C header files in your D code. With that, betterC mode becomes a viable option for adding D to an existing C project.

[1] https://github.com/atilaneves/dpp
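
For a taste, here's a minimal sketch of what that looks like (hypothetical file name, and assuming dpp's d++ driver is installed; see the repo for the real details). A .dpp file can #include a C header directly:

    // hello.dpp -- translate and build with `d++ hello.dpp`
    #include <stdio.h>

    void main()
    {
        // printf comes straight from the translated C header,
        // no hand-written bindings needed.
        printf("hello from D\n");
    }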


Paved the way for what? Mainstream security exploits?

There were OSes being written in better languages outside Bell Labs. Had AT&T been allowed to sell UNIX, instead of giving it away to universities for a symbolic price, the historical outcome would have been completely different.


Yes, that looked to me like the relationship between the number of pirate ships and global warming.


I think you did the right thing posting this. If I were at Facebook, I'd like to learn about this.


I have always found the connection curious. In video games, for example, you do exactly this: you don't calculate something unless it is being observed.

Who is to say the quantum effects are not actually artifacts of some optimizations in the simulation we are in?


That's not what quantum mechanics really does. The wavefunction, which is the core of a quantum calculation, "runs" all the time whether any part of it is observed or not. A given observable property of the wavefunction may not be predictable without looking, but that doesn't make the wavefunction any easier to simulate. And without the wavefunction running, we wouldn't observe the probabilities which we observe in experiments.


> The wavefunction, which is the core of a quantum calculation, "runs" all the time whether any part of it is observed or not.

The wave function doesn't really have to "run" to exist, though... a wave doesn't actually have any influence on anything until it is collapsed. If nothing observes the wave, it won't have any effect at all. It will just be there, in some cosmic register, waiting for some dust cloud to inquire about it.

Consider a polygon in a game engine, which started at (0,0) and has a known velocity. You are at tick 4762, and that polygon is represented by a position function, but it doesn't actually "run" until you declare a tick and do the math.
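
A sketch of that idea in D (toy code, hypothetical names): because the position function has a closed form, tick 4762 costs exactly as much to evaluate as tick 1, and nothing "runs" in between.

    import std.stdio;

    struct Polygon
    {
        double x0 = 0, y0 = 0;  // started at (0, 0)
        double vx, vy;          // known, constant velocity per tick

        // Closed form: x(t) = x0 + vx*t. O(1) no matter how far ahead t is.
        double[2] positionAt(double tick) const
        {
            return [x0 + vx * tick, y0 + vy * tick];
        }
    }

    void main()
    {
        auto p = Polygon(0, 0, 1.5, -0.25);
        writeln(p.positionAt(4762));  // the math "runs" only when asked
    }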


I dispute whether a wavefunction can influence things without collapsing, but putting that aside...

Your example of a polygon with a constant velocity is carefully chosen: that equation has an analytical solution, so you can calculate x(t) and "skip" forward in time. This is not possible in general, even in classical mechanics. If it were a system of more than two interacting particles, you wouldn't be able to fast-forward; you would have to calculate all the intermediate timesteps even if you only wanted the last one.
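
To sketch the contrast (toy code with a made-up linear force law, not real physics): once particles interact, each step depends on the previous one, so reaching tick T means computing all T steps.

    import std.stdio;

    struct Particle { double x, v; }

    // Naive Euler step: every particle pulls on every other one.
    void step(Particle[] ps, double dt)
    {
        foreach (i, ref a; ps)
        {
            double force = 0;
            foreach (j, b; ps)
                if (i != j)
                    force += b.x - a.x;  // toy linear attraction
            a.v += force * dt;
        }
        foreach (ref a; ps)
            a.x += a.v * dt;
    }

    void main()
    {
        auto ps = [Particle(0, 0), Particle(1, 0), Particle(3, 0)];
        foreach (t; 0 .. 4762)  // no skipping ahead: each tick feeds the next
            step(ps, 0.001);
        writeln(ps);
    }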


That's only because, as a human being, you deal with closed-form solutions.

Who is to say that the "universe simulator" is only doing 21st century math?


If the universe simulator can solve iterative problems in O(1), then I question whether our concept of optimization is meaningful enough to it for this discussion to make sense.


This is a very interesting question. What if the universe is stateless and any state at time T can be calculated in O(1)?

What if we just lack the expressiveness or the initial conditions to model a stateless universe?

A fun consequence of a stateless universe would be that you could rewind, fast-forward, loop, speed up, or slow down time at no extra cost.


There are some physical systems (mathematically, "Hamiltonians") which have this "stateless" property. However, if the time/energy uncertainty principle is true, simulating time T in O(1) cannot be possible in general unless BQP=PSPACE (the unlikely idea that quantum computers can efficiently solve any problem that can be stored in polynomial amounts of memory). See this paper: https://arxiv.org/abs/1610.09619


> The wavefunction, which is the core of a quantum calculation, "runs" all the time whether any part of it is observed or not.

How can you tell?


See the last sentence of my comment: it's not consistent with experiments. The success of quantum mechanics as a predictive tool comes from acting as if the wavefunction is always present, no matter what aspect of it is measured.

There may be a whole different theory of physics which can replace quantum mechanics and doesn't have wavefunctions, and has completely different simulation requirements, but at that point you could postulate anything.


Or who's to say that they didn't try exact simulations and find it too boring?


In both cases you'd need to know some properties of the distribution to get a random bitstream.

Which pixel would you be looking at? Below what threshold would you call it a 0 rather than a 1? How long until you can confidently say the next reading will be uncorrelated?

I don't know how the astronomical imaging approach solved these (and TBH cosmic-ray scanning seems a bit overkill to me too).


USB webcams get hit by cosmic rays, too. This paper is interesting because it's not just looking at the low bits of a normal image, but rather at exceptional events. It's a different sort of random.


Actually, you can get pretty far with randomness extractors with scant assumptions on the properties of the bitstream.
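
The textbook starting point is von Neumann's trick, which assumes only that the raw bits are independent with a fixed unknown bias: each 01 or 10 pair is equally likely (p*(1-p) either way), so it yields one fair bit, while 00 and 11 pairs are thrown away. A sketch in D:

    import std.stdio;

    ubyte[] vonNeumann(const ubyte[] raw)
    {
        ubyte[] fair;
        for (size_t i = 0; i + 1 < raw.length; i += 2)
            if (raw[i] != raw[i + 1])
                fair ~= raw[i];  // 01 -> 0, 10 -> 1, equally likely
        return fair;
    }

    void main()
    {
        // A heavily biased stream still yields unbiased (if fewer) output bits.
        ubyte[] biased = [1,1, 0,1, 1,1, 1,0, 0,1, 1,1];
        writeln(vonNeumann(biased));  // [0, 1, 0]
    }

Modern seeded extractors need even less: just a lower bound on the min-entropy of the source.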


The only reason I use Windows is that most AAA games still need DirectX to run.


The only reason AAA games target Windows is because you rely on Windows for that.

It's a very stable economic arrangement /s.


Shifting the blame to the customers is pointless here. Most of them can't do anything about it. They are either incapable or just not interested, since the system came with the PC.

What would be the point for those people of switching to Linux if they can't do there what they bought the machine for?


And yet somehow the Innovator's Dilemma exists.


Given that Steam's Linux push didn't work out great, it's not entirely surprising it stays the way it has... for now.


Yes. I want Steam on Linux to work as well.


It is working. I sure didn't expect platform parity when they announced it, and yet I don't feel like a second-class citizen anymore. Linux games are now cross-platform, not terrible ports. And the three largest game engines now compile to Linux out of the box (more or less). The support burden has lessened as well. It is great all around.


Oh, I didn't mean to imply that it didn't work, or that there were shortcomings in the platform... only that sales haven't been great (think Steambox, etc.).


As a Mac user who loves to game, the situation can be incredibly frustrating. It seems about half the games I want to play are available (and great!) but half either remain PC-only forever, or take an age to get released on Mac — I've been waiting for The Witness for over a year now! Still, if the alternative is adverts in my file explorer, I'd take absolutely anything else instead.


Who would have thought building on proprietary standards would lead to vendor lock in?


Is "most" still true, or should it be "some" now?


something something vulkan something


Here is something that never took off, and never will, despite the popularity of this video: https://www.youtube.com/watch?v=YrtANPtnhyg

The difference? A selfie stick does exactly what it says it does, and isn't more awkward, low-res, latency-prone, or difficult to use than a normal person would expect.


The article tends to imply the p-value should not be used at all, rather than that it is being misused. The p-value definitely means something. For example, if the p-value is 1e-10 (which is often possible), you can be essentially certain that the hypothesis generating the null distribution has been disproved. So let me rephrase the title of the article: "It's time to use p-values correctly."
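
A toy illustration of that (my own, not from the article): suppose you see 90 heads in 100 flips and test the fair-coin hypothesis. The one-sided p-value P(X >= 90 | fair) comes out around 1.5e-17, and at that magnitude the hypothesis is dead for any practical purpose.

    import std.stdio;

    // P(X >= k) for X ~ Binomial(n, 1/2), built from the C(n,i) recurrence.
    double binomialTail(int n, int k)
    {
        double coeff = 1.0;  // C(n, 0)
        double total = 0.0;
        foreach (i; 0 .. n + 1)
        {
            if (i >= k)
                total += coeff;
            coeff = coeff * (n - i) / (i + 1);  // C(n, i+1) from C(n, i)
        }
        return total / 2.0 ^^ n;
    }

    void main()
    {
        writefln("p = %.3g", binomialTail(100, 90));  // ~1.5e-17
    }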


Take X = the set of all well-behaved continuous 1-1 mappings f from [0,1] to R^3 with f(0)=f(1).

A knot is an equivalence class of such functions, with equivalence defined as follows: given two functions f and g, you can "morph" one into the other continuously - i.e. you can find a parametrized well-behaved (i.e. continuous, differentiable, etc.) function h(x;t) s.t. h(x;0)=f(x), h(x;1)=g(x), and h(x;t) is in X for all t.

It's pretty much the definition you'd come up with too.
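
Spelled out in LaTeX (with one nitpick made explicit: since f(0)=f(1), "1-1" has to mean injective on [0,1); textbooks usually strengthen this morphing to ambient isotopy, but this is the intuitive version):

    X = \{\, f : [0,1] \to \mathbb{R}^3 \mid f \text{ well-behaved},\
        f \text{ injective on } [0,1),\ f(0) = f(1) \,\}

    f \sim g \iff \exists\, h(x;t) \text{ well-behaved, with }
        h(x;0) = f(x),\ h(x;1) = g(x),\ h(\cdot\,;t) \in X \ \forall t \in [0,1]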


If I knew Russian, I would translate that into the English Wikipedia page.

