Probably misspoke: returning or passing anonymous functions causes allocations for the closures, and then calling them causes probably 4 or 5 levels of pointer chasing to get at the data that got (invisibly) closed over.
I don't think there is much pointer chasing at runtime. With lexically scoped closures, it's only the compiler that walks the stack frames to find the referenced variable; the compiled function can point directly to the object in the stack frame. In my understanding, closed-over variables have (almost) no runtime cost over "normal" local variables. Please correct me if I'm wrong.
I meant more like storing closures to be used later, after the locals are out of the stack frame, but tbh that's an abstraction that also causes allocations in C++ and Rust. On the other hand, I have no idea how JS internals work, but I know that in Python getting the length of a list takes five layers of pointer indirection, so it could very well be pointer to closure object -> pointer to list of closed-over variables -> pointer to boxed variable -> pointer to number, or some ridiculous thing like that.
In C++, lambda functions don't require dynamic memory allocation; only type erasure via std::function does (and only if the capture list is too large for the small-function optimization).
However, C++ lambdas don't keep the parent environment alive, so if you capture a local variable by reference and call the lambda after the original function has returned, you have a dangling reference and get undefined behavior (likely a crash).
JavaScript is almost always JIT’ed and Python is usually not, so I wouldn’t rely on your Python intuition when talking about JavaScript performance. Especially when you’re using it to suggest that JavaScript programmers don’t understand the performance characteristics of their code.
I think I confused you with the author of the post, something about the way you phrased your original post in this thread. Re-reading it now I'm not sure why I thought that! Sorry!
Isn't this NP-complete? The "solution" here would be the steps to take in the path, which can be found by brute force.
Wikipedia:
> 2. When the answer is "yes", this can be demonstrated through the existence of a short (polynomial length) solution.
> 3. The correctness of each solution can be verified quickly (namely, in polynomial time) and a brute-force search algorithm can find a solution by trying all possible solutions.
It will be like YouTube. Distribution will be hard and most of it will be slop, but every now and then you'll discover something so good and so creative, something that couldn't possibly have existed before, that it makes the whole experience worth it. The best creative works are led by one person, and I'm excited to see what people can come up with.
The internet has sparked so many of these useless investigations, same with all those lost-media forums: how many man-hours were spent trying to find some obscure failed 1999 pilot for a Nickelodeon show no one's watched? They stitched Jack Nicholson's face over a photo of some guy from 1921. Who cares who the guy is? Are these people that bored? Is it some OCD tendency that every trivial detail in history should be logged and archived?
This is a distinctly western internet phenomenon. It's decadence. There was a tweet or something that said these people 100 years ago would have been chronicling every species of beetle in their local area or designing intricate alternate history about how roads would work in an anarcho-capitalist society. Now that's autistic, but this is on another level. Would this guy's mom be proud he found the identity of the guy at some random party from 1921 whose photo got stitched over with a picture of Jack Nicholson and included for a few frames at the end of The Shining?
The Shining is one of the most heavily analyzed films of all time, for good reason. It's not just some random Michael Bay flick; it's really almost unbelievable how deep the rabbit hole goes. So no, it's not surprising in the least that people would spend time on a detail like this. If people 100 years ago weren't doing this kind of thing with films, it's because they didn't have access to films the way we do today. If The Shining had been around 100 years ago, and people had home computers they could play it on and could download it easily over the internet, you can be sure they would have been analyzing it like crazy back then too.
The whole thing hinges on the assumption that AI will be able to help with AI research.
How will it come up with the theoretical breakthroughs necessary to beat the scaling problem GPT-4.5 revealed when it hasn't been proven that LLMs can come up with novel research in any field at all?
Scaling transformers has basically been alchemy; the breakthroughs aren't from rigorous science, they're from trying stuff and hoping you don't waste millions of dollars in compute.
Maybe the company that just tells an AI to generate hundreds of random scaling ideas and tries them all is the one that will win. That company should probably be 100 percent committed to this approach too: no FLOPs spent on Ghibli inference.
For starters, this completely blocks generation of anything remotely related to copyrighted IP, which may actually be a saving grace for some creatives. There's a lot of demand for fan art of existing characters, so until this type of model can be run locally, the legal blocks in place actually give artists some space to play in where they don't have to compete with this. At least for a short while.
Fan art is still copyright infringement, especially since a lot of fan artists are doing it commercially nowadays via commissions and Patreon. It's just that companies have stopped bothering to sue over it, because individual artists are too small to bother with and it's bad PR. (Nintendo did take down a super popular Pokemon porn comic, though.)
So there's an irony here: OpenAI blocking generation of copyrighted characters means it's more compliant with copyright law than most fan artists out there. And if you consider AI training to be transformative enough to be permissible, then they're more copyright-respecting in general.
So I spent a good few hours investigating the current state of the art a few weeks ago. I would like to generate a collection of images for the art in a video game.
It is incredibly difficult to develop an art style, then get the model to generate a collection of different images in that unique art style. I couldn't work out how to do it.
I also couldn't work out how to illustrate the same characters or objects in different contexts.
AI seems great for one-off images you don't care much about, but when you need images to communicate specific things, I think we are still a long way away.
Short answer: the model is good at consistency. You can use it to generate a set of style reference images, then use those as references for all your subsequent generations. Generating in the same chat might also help consistency between images.
Even with custom LoRAs, ControlNets, etc., we're still a pretty long way from being able to one-click generate thematically consistent images, especially in the context of a video game where you really need the ability to generate seamless tiles, animation-based spritesheets, etc.
I didn’t mean art. I meant visual internet content of all kinds: influencers promoting products, models, the “guy talking to a camera” genre, photos of landscapes, interviews, well-designed ads, anything that comes up on your Instagram explore page. Anything that has taken over feeds due to the trust that comes from a human being behind it will become indistinguishable from slop. It’s not quite there yet, but it’s close and undeniably coming soon.
This is exactly Zig's strength, not its problem. The flexibility/lack of interfaces allows you to choose the correct abstraction for the given task. In C++, every writer is `anytype`; in Java, every writer is `AnyWriter`; in Rust, every writer is `GenericWriter`. They all have tradeoffs, but "fits better due to language design" shouldn't be one of the tradeoffs considered.
I may be misunderstanding the article, but it looks like GenericWriter in Zig still has dynamic-dispatch overhead at runtime in all cases. Rust traits are more like `anytype`, since they get monomorphized by the compiler and have no runtime overhead at all. But unlike Zig's `anytype`, traits have excellent documentation (since they're explicit, not implicit, interfaces). Rust can also implicitly create an `AnyWriter`-style object if you don't want monomorphization, via `&dyn Trait`. But you often don't need to, because you can store trait objects in other structs just fine. Though admittedly, you can do the same in Zig via comptime structs.
There are a lot of things I admire about Zig. But for interfaces like Writer, Rust's trait system seems like the better tool. I wish Zig would copy Rust's trait system into the language.
No, GenericWriter takes a function at compile time and it gives you a GenericWriter struct that calls that function (at compile time), no function pointers needed.
There's definitely overhead with the GenericWriter, seeing as it uses the AnyWriter for every call except `write` (1):
genericWriter - 31987.66ns per iterations
appendSlice - 20112.35ns per iterations
appendSliceOptimized - 12996.49ns per iterations
`appendSliceOptimized` is implemented using knowledge of the underlying writer, the way that say an interface implementation in Go would be able to. It's a big part of the reason that reading a file in Zig line-by-line can be so much slower than in other languages (2)
I was curious, so I ran your Zig version myself and ported it to Rust[1].
I think you forgot to run your benchmark in release mode. In debug mode, I get similar results to you. But in release mode, it runs ~5x faster than you reported:
genericWriter - 4035.47ns per iterations
appendSlice - 4026.41ns per iterations
appendSliceOptimized - 2884.84ns per iterations
I bet the first two implementations are emitting identical code. But appendSliceOptimized is clearly much more efficient.
For some reason, Rust is about twice as fast as Zig in this test:
The line of thinking is right there: "not sending any info to anyone else anywhere at any time"
There are way more egregious privacy concerns than sending non-reversibly encrypted, noisy photos to Apple. Why draw the line here and not at the far worse things happening on your phone and computer right now?
The leaf structure itself doesn't really have anything to do with ZF set theory or von Neumann ordinals, other than supplying the inspiration for the base structure. Same way prime numbers don't generate the spirals in the video; all numbers do. So leave the ordinals out of this: experiment with different tree-construction methods and you might uncover something cool about trees (though not necessarily about set theory).