
Article doesn't mention the most obvious thing: no shadows! Soft nonspecific lighting everywhere, even when there's a visible light source.

Lua has mandatory TCO, and several games I've worked on that use it for scripting use TCO for a state machine. Easy to debug - just look at the stack!

Meanwhile, divisions that make actual products people want are expected to subsidize the hype department: https://www.geekwire.com/2025/new-report-about-crazy-xbox-pr...

It would appear Xbox is not subsidizing anything, since Microsoft's gross profit margin is ~70%.

Although that depends on total revenues too (low margin on high revenue can be better than high margin on low revenue).


Semantics + Spin

The speaker, Freya Holmér, developed the ShaderForge and Shapes plugins for Unity, and is now open-developing a new 3D modelling package. She also writes and produces well-received videos on geometry and math topics. This isn't a clickbait video -- it's a step-by-step illustration of how gen-AI has polluted web search for technical references. And that was almost a _year_ ago -- it's only gotten worse!

Money quote:

  The reason my redirect rules still work after more than a decade isn't because I got everything right the first time. I still don't get it right! But it's because I treated URL management as a first-class problem that deserved its own solution.

Simultaneous claims that 'agentic' models are dramatically less efficient, but also forecasts of efficiency improvements? We're in full-on tea-leaves-reading mode.

[Cope Intensifies]

Big "college freshman" energy in this take:

  I personally prefer to make the error state part of the objects: Streams can be in an error state, floats can be NaN and integers should be low(int) if they are invalid (low(int) is a pointless value anyway as it has no positive equivalent).
It's fine to pick sentinel values for errors in context, but describing 0x80000000 as "pointless" in general with such a weak justification doesn't inspire confidence.

Without the low int, the even/odd theorem falls apart for wraparound. I've definitely seen algorithms that rely upon that.

I would agree; whether error values are in or out of band is pretty context-dependent -- such as whether you answered a homework question wrong, or your dog ate it. One is not a condition that can be graded.


Meh, you also see algorithms that have subtle bugs because the author assumed that for every integer x, -x has the same absolute value and opposite sign.

I view both of these as not great. If you strictly want to rely on wraparound behavior, ideally you specify exactly how you're planning to wrap around in the code.
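
A minimal Nim sketch of that pitfall (assuming Nim's default runtime overflow checks, since that's the language under discussion):

    let x = low(int)   # -9223372036854775808 on 64-bit targets
    echo -x            # raises OverflowDefect: low(int) has no positive counterpart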


What is the "even/odd theorem"?

That all integers are either even or odd, and that for an even integer, that integer + 1 and - 1 are odd, and vice versa for odd numbers. Because the negative numbers have one additional value compared to the positive numbers, low(integer) and high(integer) have different parity. So when you wrap around with overflow or underflow, you keep transitioning from even to odd, or odd to even.
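
A small Nim illustration of that parity flip, wrapping via uint32 casts since Nim's signed arithmetic is checked by default:

    let a = high(int32)                       # 2147483647, odd
    let b = cast[int32](cast[uint32](a) + 1)  # wrap around past the top
    echo b, " is even: ", (b mod 2 == 0)      # -2147483648 = low(int32), even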

If you need wraparound, you should not use signed integers anyway, as that leads to undefined behavior.

Presumably, since this language isn't C, they can define it however they want; for instance, in Rust, std::i32::MIN.wrapping_sub(1) is a perfectly valid number.

Nim (the original one, not Nimony) compiles to C, so making basic types work differently from C would involve major performance costs.

And yet, Nim does overflow checking by default.
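
For instance (a quick sketch; checks stay on even in release builds and are only dropped with -d:danger):

    let x = high(int)
    echo x + 1   # raises OverflowDefect at runtime instead of silently wrapping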

Signed overflow being UB (while unsigned is defined to wrap) is a quirk of C and C++ specifically, not some fundamental property of computing.

Specifically, C comes from a world where allowing for machines that didn't use 2's complement (or 8-bit bytes) was an active concern.

Interestingly, C23 and C++20 standardized 2's complement representation for signed integers but kept UB on signed overflow.

Back when those machines existed, UB meant "the precise behaviour is not specified by the standard; the specific compiler for the specific machine chooses what happens" rather than the modern "a well-formed program does not invoke UB". For what it is worth, I compile all my code with -fwrapv et al.

> UB meant "the precise behaviour is not specified by the standard, the specific compiler for the specific machine chooses what happens"

Isn't that implementation-defined behavior?


> Nim (the original one, not Nimony) compiles to C, so making basic types work differently from C would involve major performance costs.

Presumably unsigned ints want to return errors too?

Edit: I guess they could get rid of a few numbers... Anyhow, it isn't a philosophy that is going to get me to consider Nimony for anything.


> making basic types work differently from C would involve major performance costs.

Not if you compile with optimizations on. This C code:

  int wrapping_add_ints(int x, int y) {
      /* unsigned wrap-around is well-defined in C, so add as unsigned and cast back */
      return (int)((unsigned)x + (unsigned)y);
  }
Compiles to this x86-64 assembly (with clang -O2):

  wrapping_add_ints:
          lea     eax, [rdi + rsi]
          ret
Which, for those who aren't familiar with x86 assembly, is just the normal instruction for adding two numbers with wrapping semantics.

I had the impression the creator of Nim isn't very fond of academic solutions.

I have been burned by sentinel values every time. Give me sum types instead. And while I’m piling on, this example makes no sense to me:

    proc fib[T: Fibable](a: T): T =
      if a <= 2:
        result = 1
      else:
        result = fib(a-1) + fib(a-2)
Integer is the only possible type for T in this implementation, so what was the point of defining Fibable?

I agree about sentinel values. Just return an error value.

I think the fib example is actually cool though. Integers are not the only possible domain. Everything that supports <=, +, and - is. Could be int, float, a vector/matrix, or even some weird custom type (providing that Nim has operator overloading, which it seems to).

May not make much sense to use anything other than int in this case, but it is just a toy example. I like the idea in general.


Well, I agree about Fibable, it’s fine. It’s the actual fib function that doesn’t work for me. T can only be integer, because the base case returns 1 and the function returns T. Therefore it doesn’t work for all Fibables, just for integers.

In this case, it compiles & runs fine with floats (if you just delete the type constraint "Fibable") because the literal "1" can be implicitly converted into float(1) { or 1.0 or 1f64 or float64(1) or 1'f64 or ..? }. You can think of the "1" and "2" as having an implicit "T(1)", "T(2)" -- which would also resolve your "doesn't work for me" if you prefer the explicitness. You don't have to trust me, either. You can try it with `echo fib(7.0)`.
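
Here's a minimal sketch of that, with the constraint deleted as described:

    proc fib[T](a: T): T =
      if a <= 2:
        result = 1
      else:
        result = fib(a-1) + fib(a-2)

    echo fib(7)     # 13
    echo fib(7.0)   # 13.0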

Nim is Choice in many dimensions that other PLangs are insistently monosyllabic/stylistic about - GC or not (or what kind), many kinds of spelling, new operator vs. overloaded old one, etc., etc. Some people actually dislike choice, because it allows others to choose differently and the ensuing entropy creates cognitive dissonance. Code formatters are maybe a good example of this? They may not phrase opposition as being "against choice" as explicitly as I am framing it, but I think the "My choices only, please!" sentiment is in there if they are self-aware.


But given the definition of Fibable, it could be anything that supports + and - operators. That could be broader than numbers. You could define it for sets for example. How do you add the number 1 to the set of strings containing (“dog”, “cat”, and “bear”)? So I suppose I do have a complaint about Fibable, which is that it’s underconstrained.

Granted, I don’t know nim. Maybe you can’t define + and - operators for non numbers?


Araq was probably trying to keep `Fibable` short for the point he was trying to make. So, your qualm might more be with his example than anything else.

You could add a `SomeNumber` predicate to the `concept` to address that concern. `SomeNumber` is a built-in typeclass (well, in `system.nim` anyway, but there are ways to use the Nim compiler without that, or do a `from system import nil`, etc.).

Unmentioned in the article is a very rare compiler/PLang superpower (available at least in Nim 1, Nim 2) - `compiles`. So, the below will print out two lines - "2\n1\n":

    when compiles(SomeNumber("hi")): echo 1 else: echo 2
    when compiles(SomeNumber(1.0)): echo 1 else: echo 2
Last I knew "concept refinement" for new-style concepts was still a work in progress. Anyway, I'm not sure what is the most elegant way to incorporate this extra constraint, but I think it's a mistake to think it is unincorporatable.

To address your question about '+', you can define it for non-SomeNumber, but you can also define many new operators like `.+.` or `>>>` or whatever. So, it's up to your choice/judgement if the situation calls for `+` vs something else.
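
A toy sketch, with an arbitrary made-up operator:

    proc `>>>`(a, b: string): string = a & " -> " & b
    echo "start" >>> "finish"   # start -> finish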


That’s fair. Sounds like the example was composed in haste and may not do the language justice.

I think the example was chosen only for familiarity and is otherwise not great. Though it was the familiarity itself that probably helped you to so easily criticize it. So, what do I know? :-)

FWIW, the "catenation operator" in the Nim stdlib is ampersand `&`, not `+`, which actually makes it better than most PLangs at visually disambiguating things like string (or other dynamic array, `seq[T]` in Nim) concatenation from arithmetic. So, `a&b` means `b` concatenated onto the end of `a`, while `a+b` is the more usual commutative operation (i.e. same as `b+a`). Commutativity is not enforced by the basic dispatch on `+`, though such might be add-able as a compiler plugin.
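
A quick sketch of the distinction:

    echo "cat" & "dog"    # catdog (order matters)
    echo @[1, 2] & @[3]   # @[1, 2, 3]
    echo 2 + 3            # 5 (commutative as usual)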

Mostly, it's just a very flexible compiler / system.. like a static Lisp with a standard surface syntax closer to Python with a lot of parentheses made optional (but I think much more flexible and fluid than Python). Nim is far from perfect, but it makes programming feel like so much less boilerplate ceremony than most alternatives and also responds very well to speed/memory optimization effort.


Thanks for the discussion! I know a lot more about nim than I did this morning.

I see, I misunderstood your complaint then.

However, the base case being 1 does not preclude other types than integers, as cb321 pointed out.


You're completely missing the point of this casual example in a blog post, as evidenced by the fact that you omitted the type definition that preceded it -- that definition is the whole point of the example. That it's not the best possible example is irrelevant. What is relevant is that the compiler can type-check the code at the point of definition, not just at the point of instantiation.

And FWIW there are many possible types for T, as small integer constants are compatible with many types. And because of the "proc `<=`(a, b: Self): bool" in the concept definition of Fibable, the compiler knows that "2" is a constant of type T ... so any type that has a conversion proc for literals (remember that Nim has extensive compile-time metaprogramming features) can produce a value of its type given "2".
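
For instance, a hypothetical sketch -- the `MyNum` type and its converter are invented here, not taken from the article:

    type MyNum = object
      v: int

    proc `<=`(a, b: MyNum): bool = a.v <= b.v
    proc `+`(a, b: MyNum): MyNum = MyNum(v: a.v + b.v)
    proc `-`(a, b: MyNum): MyNum = MyNum(v: a.v - b.v)

    # converter lets integer literals like 1 and 2 become MyNum implicitly
    converter toMyNum(i: int): MyNum = MyNum(v: i)

    proc fib[T](a: T): T =
      if a <= 2: result = 1
      else: result = fib(a-1) + fib(a-2)

    echo fib(MyNum(v: 7))   # (v: 13)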


There can be a lot of different integers: int16, int32 ... and unsigned variants. Even huge BigNum integers of any length.

Says more about the relatively poor infosec on Ethereum contracts than about the absolute utility of pentesting LLMs.

$4.6M is not a lot, and these were old bugs that it found. Also, actually exploiting these bugs in the real world is often a lot harder than just finding the bug. Top bug hunters in the Ethereum space are absolutely using AI tooling to find bugs, but it's still a bit more complex than just blindly pointing an LLM at a test suite of known exploitable bugs.

According to the blog post, these are fully autonomous exploits, not merely discovered bugs. The LLM's success was measured by how much money it was able to extract:

>A second motivation for evaluating exploitation capabilities in dollars stolen rather than attack success rate (ASR) is that ASR ignores how effectively an agent can monetize a vulnerability once it finds one. Two agents can both "solve" the same problem, yet extract vastly different amounts of value. For example, on the benchmark problem "FPC", GPT-5 exploited $1.12M in simulated stolen funds, while Opus 4.5 exploited $3.5M. Opus 4.5 was substantially better at maximizing the revenue per exploit by systematically exploring and attacking many smart contracts affected by the same vulnerability.

They also found new bugs in real smart contracts:

>Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694.


True, I'd be curious to see if (and when) those contracts were compromised in the real world. Though they said they found 0-days, which implies those vulnerabilities were never found in the real world.
