
There is also a self-hosted solution called Wallabag: https://wallabag.org/

Same concept: it's about archiving the content rather than just the link. Given how quickly links tend to die, that's often what you want, depending on why you bookmarked something.



> Same concept

Unfortunately, it looks like Wallabag has the same fundamental issue I described in the first article linked in the parent comment: it treats links as the primary entities and scraped content as additional metadata.

Especially for long-form articles that cover multiple topics or are by nature interdisciplinary, it is essential for highlights or slices of content to exist independently of their source, retaining that source as metadata, and to be linkable (via tags, collections, feeds, titles, etc.) to other slices of content (e.g. commentary on the same article).
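To make the distinction concrete, here's a hypothetical sketch (not Wallabag's or Notado's actual schema) where the slice of content is the primary entity and the source URL is just one more piece of metadata hanging off it:

    from dataclasses import dataclass, field

    @dataclass
    class Slice:
        """A highlight/slice of content stored as a first-class entity."""
        id: str
        body: str                       # the highlighted text itself
        source_url: str | None = None   # origin kept purely as metadata
        source_title: str | None = None
        tags: set[str] = field(default_factory=set)

    # Slices from different articles can be linked to each other purely
    # through shared tags/collections, independently of their sources.
    a = Slice("1", "a highlighted passage", source_url="https://example.com/essay", tags={"federation"})
    b = Slice("2", "commentary on the same idea", source_url="https://example.net/post", tags={"federation"})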

Archiving is an important step forward though, especially for a self-hosted solution, and especially after so many people were burned by Pinboard's failure to deliver on the archiving promises of its paid product. I ultimately took a different approach: instead of maintaining my own scraping/archiving pipeline, I built an integration with the Wayback Machine[1].

[1]: https://lgug2z.com/articles/notado-07-2023-update/
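For anyone curious about the Wayback Machine side, a minimal sketch of the kind of lookup such an integration can lean on, using the public availability API (the function name and structure here are illustrative, not Notado's actual code):

    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url: str) -> str | None:
        """Return the closest archived snapshot URL for `url`, if one exists."""
        query = urllib.parse.urlencode({"url": url})
        with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
            data = json.load(resp)
        snapshot = data.get("archived_snapshots", {}).get("closest")
        return snapshot["url"] if snapshot and snapshot.get("available") else None

    print(closest_snapshot("https://example.com"))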



