Adding comments to a static blog with Mastodon (carlschwan.eu)
297 points by ognarb on Dec 29, 2020 | 76 comments



I achieve this using https://brid.gy/ (or https://fed.brid.gy/ if you want your blog to appear as a first class member of the Fediverse), though in my case I just syndicate out to Twitter. I then collect the webmentions using https://webmention.io/.

The nice thing about this is I get a breadth of methods for receiving comments/reactions across multiple platforms, including anything that directly supports webmention (such as https://micro.blog/, where my posts are also syndicated), without having to do any of the platform-specific wiring myself.


This looks fantastic. Does it support plain RSS? My blog is a static site, but I'd like it to be exposed to the fediverse.


(another) self plug - if you use FastComments, you can get an RSS feed of your comments: https://blog.fastcomments.com/(7-08-2020)-create-an-rss-feed...


Just check the site for details. The short version is you need an Atom feed.


I did; it looks like it needs webmentions, which need server-side code.


That's what webmention.io is for. It'll receive webmentions on your behalf and then expose an API to pull them down. You hit that API during your static site build or do it dynamically with some client-side JS. Either way, it lets you avoid hosting any services yourself.
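
For the curious, pulling them down client-side is a small fetch against webmention.io's `mentions.jf2` endpoint; a minimal sketch (the target URL is a placeholder for your own post):

    fetch('https://webmention.io/api/mentions.jf2?target=' +
          encodeURIComponent('https://example.com/my-post/'))
      .then(function (res) { return res.json(); })
      .then(function (feed) {
        // feed.children holds the individual mentions (replies, likes, reposts, ...)
        feed.children.forEach(function (mention) {
          console.log(mention['wm-property'], mention.url);
        });
      });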


Oh I see, thank you.


I keep hearing about webmentions in my fediverse bubble; maybe one day I should take a look at them.


Honestly, I set up webmentions in order to use brid.gy, not for the sake of using webmentions themselves. I wanted a two-way integration between my static blog and Twitter, and this solution has done the job nicely.

That I ended up with direct webmention support was just a bonus as far as I'm concerned.


So, would brid.gy automatically post your blog post to twitter? That's what you mean by two-way integration?


Correct. Using the right microformats, it'll auto-syndicate notes as tweets and full posts as tweets with a summary and a link. All my site has to do is shoot out a webmention to brid.gy and it does the rest.

Then when people react (reply, like, etc.), it'll proxy those interactions back as webmentions.
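
A webmention itself is just a form-encoded POST with a `source` and a `target`, so the "shoot out a webmention to brid.gy" step can be as small as the sketch below, typically run from the build step rather than the browser. The post URL is a placeholder, and the brid.gy endpoints are the ones its docs describe for Twitter publishing; double-check them before relying on this.

    // Ask brid.gy to syndicate a post to Twitter by sending it a webmention:
    // source = the post to syndicate, target = brid.gy's publish target.
    fetch('https://brid.gy/publish/webmention', {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: new URLSearchParams({
        source: 'https://example.com/my-post/',
        target: 'https://brid.gy/publish/twitter'
      })
    }).then(function (res) { return res.json(); })
      .then(function (result) { console.log(result); });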


For the unaware, they're essentially identical to WordPress' Pingbacks. You configure your website so that other websites let you know whenever they link to you.


Thanks a lot for the tip. Did that last night and it was very easy to set up. Now I just need to display them on my blog :)


Great, thanks! I was just about to write something similar, and now I think I don't need to


Nice! How much does this approach cost (ballpark)?


Other than my time to set it up? Zero. All the services I mentioned are operated for free (or can be self-hosted).


Are the interactions valuable enough that if it weren’t available for free, you would be willing to pay for it each month?


I'm not doing anything to build a readership, so my content doesn't get a lot of interactions. So for me, no.

If I had a real twitter following, though, I might have a different answer.


Minor (and probably not intentional) thing I like - comments are loaded manually rather than by default on page load. For more popular blogs/sites that have comments on posts like Slate Star Codex or Less Wrong, there can be literally hundreds or thousands of comments on a single post. This makes scrolling through the post or judging the amount of content in the post via scroll bar basically impossible, but just loading comments with a button click fixes that in such a simple way.


Hiding the comments behind a button was mostly because I wanted to keep the number of requests going to the Mastodon server to a minimum. But this also has the nice side effect of only loading them if you need them.
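
The core of it is roughly the sketch below: the page knows the id of the toot announcing the post, and the button fetches the public context endpoint, whose `descendants` array is what the snippet discussed further down iterates over. Instance URL, toot id, and element id are placeholders.

    var instance = 'https://mastodon.example';   // placeholder: your instance
    var tootId = '123456789';                    // placeholder: id of the announcement toot

    document.getElementById('load-comments').addEventListener('click', function () {
      fetch(instance + '/api/v1/statuses/' + tootId + '/context')
        .then(function (res) { return res.json(); })
        .then(function (data) {
          // data.descendants is the list of replies to the toot
          console.log(data.descendants.length + ' comments loaded');
        });
    });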


It does take quite a while to load, though. Is there a way to improve it, maybe with some caching on the server, since the query is static?


I use GitHub Pages for my blog and just found this little gem for implementing comments using issues: https://danyow.net/using-github-issues-for-blog-comments/
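
For the curious, one way to pull issue comments client-side is a plain fetch against the GitHub REST API; a rough sketch (the repo, issue number, and media type here are assumptions, the linked post has the real details):

    fetch('https://api.github.com/repos/someuser/somerepo/issues/42/comments', {
      headers: { 'Accept': 'application/vnd.github.v3.html+json' }   // ask for pre-rendered HTML
    })
      .then(function (res) { return res.json(); })
      .then(function (comments) {
        comments.forEach(function (c) {
          console.log(c.user.login, c.body_html);
        });
      });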


Interesting. GitHub recently added a "discussions" feature. I wonder if it would be more suitable for this use.


I switched over from a hacked together bbPress site for MaraDNS support to using GitHub discussions (which I also link to for blog comments). It’s one less site for me to maintain.


This is the strategy I'm using for https://Schwartz.world/blog

It makes sense for a coding blog since you can assume most of your users have a GitHub account.


I like the use of the fediverse for adding comments to a blog, but this is another example of conflating Mastodon with the fediverse. I'm glad OP found a solution that works for them, but I hope not too many people start using this technique.

A user could comment using any fediverse account, including Pleroma, Friendica, PixelFed, etc. Summing that up as "a Mastodon account" is damaging to the whole idea of the fediverse. Non-technical people might not realize that they can choose to use the other platforms.

Also, I hate that the mastodon API has become the de facto standard for interacting with any fediverse platform. The other platforms implemented the API to maintain compatibility with mastodon clients. Now this not-quite-standard API is controlled by a single person who doesn't care about compatibility with other platforms.


  document.getElementById('mastodon-comments-list').innerHTML = 
    data['descendants'].reduce(function(prev, reply) {
      mastodonComment = `
        ...
      `;
      return prev + DOMPurify.sanitize(mastodonComment);
    }, '');
Why is it implemented like this? It looks quadratic, while it's trivial (and more intuitive IMO) to make it linear:

  document.getElementById('mastodon-comments-list').innerHTML =
    data['descendants'].map(function(reply) {
      return DOMPurify.sanitize(`
      ...
      `);
    }).join('');


It's not quadratic if the way reduce() and JS inlining work together causes this:

    return prev + DOMPurify.sanitize(mastodonComment);
to end up behaving like this:

    reduceValue = reduceValue + DOMPurify.sanitize(mastodonComment);
which ends up behaving like this:

    reduceValue += DOMPurify.sanitize(mastodonComment);
which ends up being a sequence of in-place string appends.

In any good implementation, a sequence of repeated appends takes linear time in the total length of all the appended strings, like join('').

I've just tested this theory in Safari 14.0.2 with a JS console one-liner. On this browser, both versions take linear time not quadratic, and the test corresponding to the first version even runs slightly faster.

The time difference is small though, and you're right, the second version is explicitly linear time. If I had tested with a much older browser (maybe IE8), I think the first version would have taken quadratic time.
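
For anyone who wants to repeat the experiment, a console micro-benchmark along these lines (not the exact one-liner used above) makes the comparison easy:

    var parts = Array.from({ length: 20000 }, function (_, i) { return 'comment ' + i + ' '; });

    var t0 = performance.now();
    var viaReduce = parts.reduce(function (prev, s) { return prev + s; }, '');
    var t1 = performance.now();
    var viaJoin = parts.map(function (s) { return s; }).join('');
    var t2 = performance.now();

    console.log('reduce+concat:', (t1 - t0).toFixed(1), 'ms',
                'map+join:', (t2 - t1).toFixed(1), 'ms',
                'equal:', viaReduce === viaJoin);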


It could be an issue for websites with huge audiences and many comments, but I'd guess this isn't a big performance issue for most personal static websites. The audience size and number of comments are probably not big enough to cause any trouble.

Sometimes, a hacky solution is just enough ;)


That's another issue I didn't mention, so as not to be too negative: the script relies entirely on the Mastodon instance to filter out spam.


Your alternative is also pretty bad as it will generate lots of intermediate Strings before joining them up... Just use a forEach instead and append directly to the DOM:

    let root = document.getElementById('mastodon-comments-list');
    data['descendants'].forEach(reply => {
        let div = document.createElement('div');
        div.innerHTML = DOMPurify.sanitize( ... );
        root.append(div);
    });


"Pretty bad" is unwarranted in my opinion. Your alternative adds overhead of creating an extra div for each intermediate string in the code it replaces. It's trivial overhead, and may be a small number, but exactly the same applies to the overhead of temporary strings - they are comparable.

Intermediate string length doesn't matter for time complexity here. If they are short they will be fast. If they are long, the time to scan and parse in DOMPurify and innerHTML will dominate.

After building the DOM, your extra divs are processed every time that section of the DOM is styled and rendered, not just once. If the number of extra divs is low enough that this is negligible, so is the number of intermediate strings in the alternative code.

So I wouldn't assume your version is faster at setting up, and it may be marginally slower later on repeated renderings. I'd profile both versions, maybe with Browserscope (which probably has a test for this particular question already).

However if I couldn't profile and was asked my guess at the fastest version, my guess is the string-join version. I'd be more concerned with whether concatenating DOMPurify.sanitize() strings is guaranteed to maintain the same security properties.


If anyone wanted to provide half-useful bikeshedding, they could make the "Load Comments" button disable itself and change to "Loading..." while the comments are being fetched. Right now there's no feedback, and the user is likely to click multiple times wondering if it's working on a slower connection like mine.

They could also probably just html-escape the few foreign values rather than bringing an HTML parser to the entire template.
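
A sketch of that loading-state idea, assuming a hypothetical loadComments() that fetches and renders the comments and returns a promise (the element id is also made up):

    var button = document.getElementById('load-comments-button');   // hypothetical id
    button.addEventListener('click', function () {
      button.disabled = true;
      button.textContent = 'Loading...';
      loadComments()
        .then(function () { button.hidden = true; })
        .catch(function () {
          button.disabled = false;
          button.textContent = 'Load Comments';
        });
    });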


Are you serious? The intermediate strings are comments to a blog post... putting them all into Strings and then joining them up means you'll have a huge amount of RAM being consumed for no reason. Updating the DOM as soon as you get the information is optimal for memory utilisation and for user feedback, which are more important than the total time (even if the total time is longer, which I doubt, because if there's a large number of replies, most of them won't be visible in the viewport and hence should be very cheap too).


Yes I'm serious. The intermediate string's RAM is automatically much smaller than the RAM consumed by the DOM and the calculated box model and text layout graph, so its size is insignificant. If it's huge, you have much bigger problems from the DOM. Also, the intermediate string is temporary, and will be freed after the DOM is created but before the box model and text layout graph is allocated and calculated, so it might not end up using any extra RAM at all.

In this example, updating the DOM as soon as you get the information is not optimal for user feedback, because rendering is paused until after all the DOM updates anyway. DOM rendering in all modern browsers is lazy. It only starts after JavaScript returns from handling the current event queue.


Generating intermediate strings is much less expensive than painting to the DOM multiple times.


I doubt it very much. The user will see nothing until all your Strings are in memory (taking up unnecessary RAM). Update the DOM and it will be instantaneous (most of it probably won't be visible in the viewport, so likely to be very cheap as well).


Do people like comments on blogs? I can't find the links right now, but I've seen a couple of people talking about how they used to have comments on their blog and decided to get rid of them, since they would occasionally become a curation/moderation nightmare.


Depends on the content, but in good cases comments add crowd-sourced fact-checking to blogs. That's why I love reading HN comments on hot-topic articles.


I guess I go to different places for that. The blog itself doesn't feel like a discussion platform to me like HN, reddit, etc. Those platforms have systems, rules, moderators, etc. for dealing with more free-form discussions.

I personally never read comments on blogs, and I find having a "Contact Me" link at the bottom is better. On my blog I use a Gmail account and a template for new articles, so each post gets `[Contact Me](mailto:my.email+shorttitle@gmail.com)` and the emails are easy to categorize.


Sure but not all blogs are posted on HN.

But yeah these days you could technically have a reddit thread on your user page for each of your blog posts.

But how is that different than mastodon? At least you can self-host mastodon.


Personally, no. They distract me from content and I often find myself jumping to comments directly if I know there are some.

I'm from a region where they're pretty common on news sites, so I've created a Firefox add-on that hides the comments from about 40 local websites that I chose via Alexa ranks.


It all looked great until this: "You can use your Mastodon account to reply to this post", and then you lost 99% of your possible commenters.


Keeping your commenter-base small and elitist seems like a good thing. That's the point of moderation, right?


This seems like a feature to me


Is it just my impression, or is it basically impossible to do comment moderation when using Mastodon in this way?

Or at least harder than it should be.

I'd be happy to be proven wrong btw.

EDIT: don't get me wrong, this is cool :)


As avery42 already mentioned, this is intended to be used on your own self-hosted Mastodon server. I have used my own off and on, and for admins, it actually has pretty powerful moderation tools.

So the author is proposing tooting a link to the blog post, and then leveraging your server's REST endpoints to retrieve the comments.

This actually seems quite powerful, and elegant to me. If you run your own server, this could be a very nice & simple solution.


Yeah, fortunately I haven't yet received any comments that I needed to remove. If you self-host your Mastodon server, it should be easy to remove comments, but if you don't, you basically have two choices: report them and hope your admin removes them, or filter them out in the small JS script.
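
That filtering option can be as small as a blocklist applied to the replies before rendering; a sketch (the handles are made up, and data.descendants is the array returned by the context endpoint):

    // Hypothetical blocklist of fediverse handles whose replies should be hidden.
    var blockedAccts = ['spammer@bad.example', 'troll@other.example'];

    var visibleReplies = data.descendants.filter(function (reply) {
      return blockedAccts.indexOf(reply.account.acct) === -1;
    });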


If you point the comments to be fetched from an instance you host, I assume you could silence/suspend users or domains on your instance and they would no longer show up in the comments (although that's probably not an ideal solution for most cases).


What I do with Hugo is:

- Add reddit and twitter attributes to the frontmatter

- After publishing and sharing to both services, I paste the reddit link/tweet id into the frontmatter and rebuild

- In the blog's footer, I embed both posts, so people can see how many replies there are and can click through to reply on each respective service.

Although I would love something more automated (and preferably with in-site response), at least this is a good flow for the readers.


This seems like a good approach (acknowledging it limits you to only comments from reddit/twitter users). How do you embed the posts so it shows # of replies?


> acknowledging it limits you to only comments from reddit/twitter users

As of now, my only visitors are a few anime bloggers from my circle, all of them using WordPress. As it's not possible to use WordPress comments, I'm not losing anything besides that. I could use Disqus, but I hate it, and I think they would just log in with Twitter to leave a Disqus comment anyway, so...

As for embedding, I use Hugo's Twitter shortcode and reddit's standard embed code. As all the embeds are from my own subreddit, I left that hardcoded, with a placeholder title, and just change the URL in the code.

Seems that the Twitter embed changed recently to not show the number of replies anymore, but that's alright for now. They have to click on the embedded tweet and go to Twitter to reply anyway :/

I might add the webmentions thing mentioned in other comments here to show the actual replies.

This is how it looks right now: https://geekosaur.com/post/anime-controversies-controversed/


I have absolutely no idea why that was downvoted. Anyone care to explain?


How about https://webmention.io/ ? Fediverse-friendly and Twitter-friendly.


Seems like a cool idea, but without any demo to try, it's hard to go for it.



Does syndicating third-party content onto your blog in this way expose you to legal liability for publishing that content? Are deletions and takedowns synced back from Mastodon, such as when someone is taken offline for breaking local law with their Mastodon instance, or would their content remain published on your blog?


I can technically filter comments pretty easily in the JS script. When a comment is removed from Mastodon, it is also removed from the blog.


"One of the biggest disadvantages of static site generators is that they are static and can’t include comments." At first, I misread this as "One of the biggest advantages...", and I still think that is the correct conclusion.


I guess it depends on whether you want to interact with your readers.


I once made the mistake of putting an unguarded comment section on my now defunct blog. Within days, it was stuffed to the gills with tentacle porn links. There were no readers, just spam. I dropped the comments table and never went back.


If you leave the door of your home wide open while you're away for a few days, that can also end badly.


Yes, it was exactly that irresponsible.


Self plug. I run https://fastcomments.com, which would also solve this problem, for example: https://blog.fastcomments.com/(6-26-2020)-embedding-comments...


Your privacy policy is way better than the one from Disqus and the like, but for my needs I still prefer something decentralized and open source. Still a nice project.


Thanks! A good bit of the source is on Github, but not the backend yet.


You can embed a comments API using Micro (m3o.com). Sign up, run the comments service, copy/paste some JS code, and there you have it.


I've gathered some more alternatives for static site comments on my blog [1] (ironically, I didn't include comments on my own blog)

[1] https://darekkay.com/blog/static-site-comments/


The technique demonstrated here for producing the markup is bad:

1. The use of DOMPurify is either unnecessary or insufficient. Specifically consider reply.content, which is provided from the server as HTML. (I believe it’s the only one that’s supposed to be serialised HTML; all the other fields are text.) If the backend guarantees that you get a clean DOM, DOMPurify is unnecessary†. But if it’s possible for a user to control the markup in that field completely, then DOMPurify as configured is insufficient, because although it blocks XSS and JavaScript execution, it doesn’t filter out remote resource loading and CSS, which can be almost as harmful. Trivial example, <a href=//malicious.example style=position:fixed;inset:0>pwned</a>. Given the type of content, you probably want to either blacklist quite a few things (e.g. the style attribute, and the img, picture, source, audio and video tags), or whitelist a small set of things.

2. Various fields that could hypothetically contain magic characters are dropped in with no escaping. If they do contain magic characters like < and &, you’ve just messed up the display and opened the way for malicious resource loading and problem № 3 below. Even if they are supposed to be unable to contain a magic character (e.g. I’m going to guess that reply.account.username is safe), it’s probably a good idea to escape them anyway just in case (perhaps an API change later makes it possible), and to guard against errant copy–pasters and editors of the code that don’t know what they’re doing. Perhaps at some point you’ll add or switch to reply.account.display_name, which probably can contain < and &.

3. The markup is produced by mixing static templating with user-provided input, and sanitisation is performed on the whole thing. It’s important when doing this sort of templating that each user-provided input be escaped or sanitised by itself in isolation, not as part of a whole that has been concatenated. Otherwise you can mess with the DOM tree accidentally or deliberately. Suppose, for example, that reply.content could contain `</div></div></div><img src=//ad.example alt="Legitimate-looking in-stream ad"><div class="mastodon-comment">…<div class="content">…<div class="mastodon-comment-content">…`. So this means:

• Apply attributes and text nodes to a real DOM (e.g. `img = new Image(); img.src = reply.account.avatar_static; avatar.append(img)`), or escape them in the HTML serialisation (e.g. `<img src="${reply.account.avatar_static.replace(/&/g, "&amp;").replace(/"/g, "&quot;")}">`).

• Do HTML sanitisation on just the user input, e.g. DOMPurify.sanitize(reply.content).
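
Putting those two bullets together, a sketch of per-field handling (escapeHtml is a hand-rolled helper, not a DOMPurify function; the field names are the ones from the Mastodon status object, and scheme-checking the URLs is left out):

    function escapeHtml(text) {
      return String(text)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;');
    }

    var commentHtml =
      '<div class="mastodon-comment">' +
        '<img src="' + escapeHtml(reply.account.avatar_static) + '">' +
        '<a href="' + escapeHtml(reply.account.url) + '">' +
          escapeHtml(reply.account.display_name) +
        '</a>' +
        // Only the field that is genuinely HTML goes through the sanitiser, in isolation.
        '<div class="content">' + DOMPurify.sanitize(reply.content) + '</div>' +
      '</div>';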

A part of my problem with the code as written is that, purely from looking at the code, I can see that there may well be various security holes. I require knowledge of the backend also before I can judge whether there are security holes. Where possible, it’s best to write the code in such a way that you know that it’s safe, or that it’s not—try not to depend on subtle things like “the particular fields that we access happen to be unable to contain angle brackets, ampersands and quotes”, because they’re fragile.

Incidentally, it would also be more efficient to run DOMPurify with the RETURN_DOM_FRAGMENT option, and append that, rather than concatenating it to a string and setting innerHTML. Saves a pointless serialisation/deserialisation round trip, and avoids any possibility of new mXSS vulnerabilities that might be discovered in the future. (I don’t really understand why DOMPurify defaults to emitting a string. I can’t remember seeing a single non-demo use of DOMPurify where the string is preferable to a DOM fragment.)
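
That option looks roughly like this (RETURN_DOM_FRAGMENT is a documented DOMPurify config flag; contentDiv stands for whatever element should hold the comment body):

    var fragment = DOMPurify.sanitize(reply.content, { RETURN_DOM_FRAGMENT: true });
    contentDiv.append(fragment);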

† Though if the server sanitised arbitrary user-provided HTML/MathML/SVG, I probably don’t trust it as much as I trust DOMPurify, for things that end up in the DOM. There are some pretty crazy subtleties in things like HTML serialisation round-tripping of HTML, SVG and MathML content. There’s fun reading in the changelogs and security patches.


I'm laughing a little about "reply.account.display_name" being safe. Back in the early days of Slashdot, I registered a super l33t username with "|<" in it to replace a "k". Well... turns out that this broke things all over the place, and I was satisfied with my username of "|" showing up everywhere :)


When I wrote my comment, it was using reply.account.username, which I would guess is safe. Now it’s using a properly escaped reply.account.display_name (good job there, ognarb), which I expect could otherwise contain < and &.


Thanks a lot for all these nice tips. I applied most of them to my blog post.


I'm building a very simple solution for this. Check out https://blogstreak.com. I recently got featured on BetaList.


The idea behind using Mastodon was explicitly to not depend on another closed-source project and not depend on another silo. Your solution looks like it will be closed source and a silo.


Or your static page could have a link to a comments page. Which is also static.

It is pretty trivial to generate a new static page which includes the new comment. You do want some kind of database. A SQLite file works fine.
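
A sketch of the "SQLite file plus regeneration" idea in Node, assuming the better-sqlite3 package (any embedded store would do):

    const Database = require('better-sqlite3');   // assumed dependency
    const db = new Database('comments.db');

    db.exec(`CREATE TABLE IF NOT EXISTS comments (
      post   TEXT NOT NULL,
      author TEXT NOT NULL,
      body   TEXT NOT NULL,
      ts     INTEGER NOT NULL
    )`);

    // Called by whatever receives a new comment (form handler, email hook, ...).
    function addComment(post, author, body) {
      db.prepare('INSERT INTO comments (post, author, body, ts) VALUES (?, ?, ?, ?)')
        .run(post, author, body, Date.now());
    }

    // Called by the static site build to render the per-post comments page.
    function commentsFor(post) {
      return db.prepare('SELECT author, body, ts FROM comments WHERE post = ? ORDER BY ts')
               .all(post);
    }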


> manually escaping every field.

> Some fields don't look escaped.

Hmm, so the future of social media is everyone becoming software engineers, huh. Good luck with those toots.



