Exactly! A neat property of matryoshka embeddings is that you can compute a low-dimensional similarity really fast and then refine afterwards.
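I don't know how any particular site wires this up, but here's a minimal sketch of that coarse-then-refine idea with numpy; the function name and the `coarse_dims` / `shortlist` values are just placeholders:

```python
import numpy as np

def two_stage_search(query_emb, corpus_embs, coarse_dims=256, shortlist=200, top_k=10):
    """Coarse-to-fine search over matryoshka embeddings.

    query_emb:   (d,) full query embedding
    corpus_embs: (n, d) full corpus embeddings
    """
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    # Stage 1: cheap cosine similarity on the truncated prefix dimensions
    coarse = norm(corpus_embs[:, :coarse_dims]) @ norm(query_emb[:coarse_dims])
    candidates = np.argsort(-coarse)[:shortlist]

    # Stage 2: rerank only the shortlist with the full-dimensional vectors
    fine = norm(corpus_embs[candidates]) @ norm(query_emb)
    order = np.argsort(-fine)[:top_k]
    return candidates[order], fine[order]
```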
Looks cool!
You can input either a search query or a paper URL on arxiv xplorer. You can even combine papers to search for combinations of ideas by putting + or - before each URL or arXiv ID, like `+ 2501.12948 + 1712.01815`
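Conceptually that amounts to adding and subtracting the papers' embedding vectors before searching, something like this simplified sketch (`paper_embedding` is a stand-in for the actual lookup, not the real code):

```python
import numpy as np

def combined_query(terms, paper_embedding):
    """Build a single query vector from signed paper terms.

    terms: list of (sign, arxiv_id), e.g. [(+1, "2501.12948"), (+1, "1712.01815")]
    paper_embedding: lookup returning a paper's embedding vector
    """
    combo = sum(sign * paper_embedding(arxiv_id) for sign, arxiv_id in terms)
    return combo / np.linalg.norm(combo)  # renormalize so cosine search still works
```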
Sure! I first used OpenAI embeddings on all the paper titles, abstracts and authors. When a user submits a search query, I embed the query, find the closest matching papers and return those results. Nothing too fancy involved!
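Roughly, the flow looks like this simplified sketch (the model name and helper names here are just illustrative, and a real index would hold many more papers):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()
MODEL = "text-embedding-3-small"  # illustrative; the exact model isn't stated above

def embed(texts):
    """Embed a batch of strings with the OpenAI embeddings endpoint."""
    resp = client.embeddings.create(model=MODEL, input=texts)
    vecs = np.array([d.embedding for d in resp.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

# Index step: one vector per paper, built from title + authors + abstract
papers = [{"id": "2501.12948", "title": "...", "authors": "...", "abstract": "..."}]
corpus = embed([f"{p['title']}\n{p['authors']}\n{p['abstract']}" for p in papers])

def search(query, top_k=10):
    """Embed the query and return the closest papers by cosine similarity."""
    q = embed([query])[0]
    scores = corpus @ q
    best = np.argsort(-scores)[:top_k]
    return [(papers[i]["id"], float(scores[i])) for i in best]
```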
Impressive!
Will you parse the papers themselves in the future? Without citation data this isn't that useful for professors or scientists in general, since relevance ranking largely depends on surfacing those older, prominent papers.
(speaking from our lab's experience building decentralised search using transformers)
True, but similarly, if your embeddings are any good they'll capture interesting associations between authors, topics and your search query. If you find any notable author-overlap results, I'd be very interested!
Yeah, I think it's important to be completely open about it since browser mining has a bad rep. Everyone trying it should know as much as possible about what's happening before just clicking 'mine'.