>> There are quite a few publications that explore the concept of generating programs, either using typed or untyped functional languages.

That's Inductive Functional Programming (IFP), a kind of Inductive Programming; Inductive Programming also includes Inductive Logic Programming (ILP). The canonical example of IFP is Magic Haskeller:

https://nautilus.cs.miyazaki-u.ac.jp/~skata/MagicHaskeller.h...

As an example of a modern ILP system I suggest Popper:

https://github.com/logic-and-learning-lab/Popper/

Or Louise (mine):

https://github.com/stassa/louise

One of the DreamCoder papers describes Inductive Programming as a form of weakly supervised learning, in the sense that such systems learn to generate programs not from examples of programs, but from examples of the target programs' behaviours, i.e. their inputs and outputs. By contrast, LLMs and slightly older neural program synthesis systems are trained on examples that consist of pairs of (programming task, program solving the task).

Another way to see the difference between Inductive Programming systems and conventional machine learning systems used for program synthesis is that Inductive Programming systems learn by solving problems rather than from observing solutions.

The advantage is that, this way, we can learn programs that we don't know how to write (because we don't have to generate examples of such programs), whereas with conventional machine learning we can only generate programs like the ones the system has been trained on before.

Another advantage is that it's much easier to generate examples. For instance, if I want to learn a program that reverses a list, I give some examples of lists and their reverses, e.g. reverse([a,b,c],[c,b,a]) and reverse([1,2,3],[3,2,1]), whereas an LLM must be trained on explicit examples of list-reversing programs, i.e. their source code.
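To make that concrete, here is roughly what the input to an ILP learner and a learned hypothesis look like. The exact syntax for examples and background knowledge differs between systems (Popper and Louise each have their own formats), so treat this as a generic Prolog sketch rather than any one system's actual input:

    % Positive examples: the target program's behaviour, not its source code.
    reverse([a,b,c],[c,b,a]).
    reverse([1,2,3],[3,2,1]).

    % Background knowledge the learner is allowed to use.
    append([],Ys,Ys).
    append([X|Xs],Ys,[X|Zs]) :- append(Xs,Ys,Zs).

    % A hypothesis the learner could induce from the above (naive reverse).
    reverse([],[]).
    reverse([X|Xs],Ys) :- reverse(Xs,Zs), append(Zs,[X],Ys).

The point is that the learner only ever sees the input/output pairs and the background predicates; the two clauses of reverse/2 are its output.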

IFP and ILP systems are also very sample-efficient, so they only need a handful of examples, often just one, whereas neural net-based systems may need millions (no exaggeration - I can give a ref if needed).

The disadvantage is that learning a program usually (but not always - see Louise, above) implies searching a very large combinatorial space, and that can get very expensive, very fast. But there are ways around that.




Thanks, I know ILP quite well, and also your research, Muggleton et al., etc.

It is a very interesting field which I hope makes a comeback once systems become neurosymbolic.


Knowing ILP at all, let alone well, is so rare that I totally misread your comment.

I'm one of the last purely symbolic hold-outs in ILP, I guess. But if you're interested in recent neurosymbolic work using MIL, here is some work from my colleagues:

Abductive Knowledge Induction from Raw Data

https://www.ijcai.org/proceedings/2021/254


>> once systems become neurosymbolic

What does this mean?


There's a whole research area that tries to combine some ideas from symbolic AI with neural architectures.

IMHO, this might eventually become mainstream. For example, see all the work merging theorem provers with RL and NNs.


What are examples of "symbolic AI", "neural architecture", and a system that is "neurosymbolic"?


Almost everything that comes out of DeepMind.


What makes it symbolic?


There's usually a DL part (the neural network), welded together with a classical AI algorithm, e.g. Monte Carlo Tree Search, or some kind of theorem prover.


What's the symbolic part? What does 'DL' mean here?


The symbolic part is the MCTS and/or theorem prover. DL is deep learning, i.e. the multilayered neural network. Neurosymbolic approaches form a significant branch of current machine learning. You can read up on all of it if you're actually curious and asking all this in good faith, but I suspect you're not.



