They only did that for image generation. The more interesting part is that an LLM can approach or find the correct caption for an image, video, or audio clip at test time, with no training, using only the score as a guide. It's essentially working blind, much like the game Marco Polo, where the scorer says "warmer" or "colder" while the LLM feels its way towards the goal. This is an example of emergent capabilities, since there are no examples of this in the training data.
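For intuition, here's a toy sketch of that "Marco Polo" loop (mine, not from the paper): the searcher never sees the target, only a scalar score, and keeps whichever random edit makes the score go up. In the real setting the proposals would come from an LLM and the score from something like CLIP similarity; here both are replaced by a hidden target string and a string-similarity stand-in, purely for illustration.

```python
import difflib
import random
import string

TARGET = "a cat sleeping on a sofa"   # hidden "ground truth"; the searcher never sees it

def score(caption: str) -> float:
    # Toy stand-in for an image-text scorer such as CLIP similarity.
    return difflib.SequenceMatcher(None, caption, TARGET).ratio()

def marco_polo_search(steps: int = 20000) -> str:
    vocab = string.ascii_lowercase + " "
    best_guess, best_score = "", score("")
    for _ in range(steps):
        chars = list(best_guess)
        # Propose one random edit: insert, delete, or replace a character.
        op = random.choice(["insert", "delete", "replace"]) if chars else "insert"
        if op == "insert":
            chars.insert(random.randint(0, len(chars)), random.choice(vocab))
        elif op == "delete":
            chars.pop(random.randrange(len(chars)))
        else:
            chars[random.randrange(len(chars))] = random.choice(vocab)
        candidate = "".join(chars)
        candidate_score = score(candidate)
        if candidate_score > best_score:   # "warmer": keep the edit, else discard it
            best_guess, best_score = candidate, candidate_score
    return best_guess

print(marco_polo_search())   # usually ends up close to the hidden target
```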
Actually, it's the name of the paper. And while the team also developed and released a system to elicit the behavior by doing what you described, it's entirely possible that the researchers considered the title to describe the most important finding of their work.
In many cases the build output also has hardcoded paths, unfortunately, so doing `brew install` inside a container with the proper volumes is not sufficient to fix the issue. Everything would have to run from within the container as well.
“Fill in the gaps by using context” is the hard part.
You can’t pre-bake the context into an LLM because it doesn’t exist yet. It gets created through the endless back-and-forth between programmers, designers, users, etc.
But the end result should be a fully-specced design document. That might theoretically be recoverable from a complete program given a sufficiently powerful transformer.
Peter Naur would disagree with you. From "Programming as Theory Building":
A very important consequence of the Theory Building View is that program revival, that is reestablishing the theory of a program merely from the documentation, is strictly impossible. Lest this consequence may seem unreasonable it may be noted that the need for revival of an entirely dead program probably will rarely arise, since it is hardly conceivable that the revival would be assigned to new programmers without at least some knowledge of the theory had by the original team. Even so the Theory Building View suggests strongly that program revival should only be attempted in exceptional situations and with full awareness that it is at best costly, and may lead to a revived theory that differs from the one originally had by the program authors and so may contain discrepancies with the program text.
The definition of theory used in the article:
a person who has or possesses a theory in this sense knows how to do certain things and in addition can support the actual doing with explanations, justifications, and answers to queries, about the activity of concern.
And the main point on how this relates to programming:
- 1 The programmer having the theory of the program can explain how the solution relates to the affairs of the world that it helps to handle. Such an explanation will have to be concerned with the manner in which the affairs of the world, both in their overall characteristics and their details, are, in some sense, mapped into the program text and into any additional documentation.
- 2 The programmer having the theory of the program can explain why each part of the program is what it is, in other words is able to support the actual program text with a justification of some sort. The final basis of the justification is and must always remain the programmer's direct, intuitive knowledge or estimate.
- 3 The programmer having the theory of the program is able to respond constructively to any demand for a modification of the program so as to support the affairs of the world in a new manner. Designing how a modification is best incorporated into an established program depends on the perception of the similarity of the new demand with the operational facilities already built into the program. The kind of similarity that has to be perceived is one between aspects of the world.
From my understanding, the big bang requires that the proto-universe was in a completely homogeneous state that was then pushed out of that equilibrium for some reason. But that reason doesn't require non-zero angular momentum. It only requires that the proto-universe was homogeneous and now the universe isn't, and that is what separates pre and post big bang. I could be wrong, I am not a cosmologist. Would be happy to hear from one, though.
What causes a perfectly symmetric ball on top of a perfectly symmetric hill to roll down one particular side? (Probably quantum randomness, if everything else is perfectly symmetric.)
If the base models already have the “reasoning” capability, as they claim, then it’s not surprising that they were able to get to SOTA using a relatively negligible amount of compute for RL fine-tuning.
I love this sort of “anti-hype” research. We need more of it.
Once rebuilding your venv takes negligible time, it opens up all kinds of new ways to develop. For example, I now always run my tests in a clean environment, just to make sure I haven't added anything that only happens to work in my dev venv.
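Concretely, that's roughly the following (just a stdlib sketch; it assumes the project is pip-installable with a `test` extra and uses pytest, so adjust to your setup):

```python
# Rebuild a throwaway venv and run the test suite inside it, so tests can't
# accidentally depend on packages that only exist in my dev venv.
import subprocess
import tempfile
import venv
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    env_dir = Path(tmp) / "venv"
    venv.create(env_dir, with_pip=True)            # fresh environment every run
    python = str(env_dir / "bin" / "python")       # use Scripts\python.exe on Windows
    # ".[test]" assumes a "test" extra in the project metadata; purely illustrative.
    subprocess.run([python, "-m", "pip", "install", "-e", ".[test]"], check=True)
    subprocess.run([python, "-m", "pytest"], check=True)
```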
The HN submission title is editorialized in a non-helpful way. Why beat a dead horse instead of focusing on what’s actually new in TFA?
The linked paper proposes an obvious-in-retrospect form of data augmentation: shuffle the order of the premises, so that the model can’t rely on spurious patterns. That’s kinda neat.
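Mechanically it's about as simple as it sounds; a minimal sketch of the augmentation (the field names are hypothetical, not the paper's schema):

```python
import random

def shuffle_premises(example: dict) -> dict:
    """Return a copy of the example with its premises in a random order,
    so the model can't rely on the premises appearing in proof order."""
    premises = list(example["premises"])
    random.shuffle(premises)
    return {**example, "premises": premises}

# Hypothetical example record; the real dataset schema will differ.
example = {"premises": ["All men are mortal.", "Socrates is a man."],
           "conclusion": "Socrates is mortal."}
print(shuffle_premises(example))
```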
Definitely curious, this looks very similar to Coconut, even down to the CoT encoding process in Figure 2. They go into a lot more detail though, seems like parallel innovation.
I wonder whether even the models that do emit thinking tokens actually do most of the work in latent space, so the difference is only superficial.
Similar to how we ended up with the huggingface/tokenizers library for text-only Transformers.