Anyone for Webmention?

Not sure if related, but you may want to put quotation marks around your HTML attribute values :slight_smile:

<link href="" rel="me">
<link href="" rel=me>
<link rel="pingback" href="">
<link rel="webmention" href="">
<link rel="authorization_endpoint" href="">

Also, in order to work properly, it needs to have your linked accounts registered as rel=me on your homepage. So if you want to gather feedback from more networks than Twitter and GitHub, you’ll need to add them there :slight_smile:

I had the double quotes originally, but it looks like the HTML minification step removes them where they can safely be omitted. For example, attributes whose values don’t contain spaces don’t need to be quoted.

Correct. But as you quoted, I already have those there.

Well, those are the only two networks that make sense to me to associate with that blog. :slight_smile: I see that it sent a lot more webmentions than I derive from the JSON object I receive. An easy way to compare is to count the likes; the numbers are not the same.

I’ll open an issue on the repo to understand this better.

1 Like

Are you sure it’s not a caching issue? Have you tried calling the API from some other place than Hugo? (for example from curl)

Because if you don’t run hugo with the --ignoreCache param, it will cache remote content “forever” (unless it’s stored in a tmpfs you clean regularly). So you could get a first batch of webmentions from the API, but this list would actually never be refetched for updates :slight_smile:

That’s typically the kind of situation that got me wondering about configurable cache TTL.
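If your Hugo version supports the [caches] configuration, a TTL for getJSON fetches can already be expressed there; a sketch of what a config.toml entry could look like (the 10-minute maxAge is just an example value — check your version’s docs):

```toml
[caches]
  [caches.getjson]
    # How long fetched remote JSON stays cached before being refetched.
    # Use "-1" to cache forever, "0" to disable caching entirely.
    maxAge = "10m"
```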

1 Like

I can totally understand that. I couldn’t start writing stuff on my blog before I had my theme in a presentable state to my liking AND had the whole blogging flow set up as I needed it to work through Emacs Org mode.

TIL, thanks.

That’s a good summary, I will include that in a blog post about intro to indiewebbing (there are already many such posts by others, but why not…) :slight_smile:

Now that I have added the microformats2 classes to my template, it makes more sense.

For starting steps, the direct integration seems to work fine. I will have to wait for someone to set up and explain the webmention moderation flow for static sites. But yes, spam moderation will be a concern as webmentions become more widespread.

That’s another unknown for me… exploring that for some other day maybe…

Yes (as I recently learned). I just add a “Send Webmention” form to my posts.
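For anyone curious what such a form looks like: per the Webmention spec, an endpoint accepts a form-encoded POST with `source` and `target` URLs, so the form can be plain HTML. A sketch (the action URL and target value here are placeholders — use the endpoint your own `<link rel="webmention">` advertises):

```html
<!-- Hypothetical "Send Webmention" form for a single post. -->
<form action="https://webmention.example/endpoint" method="post">
  <label>URL of your reply:
    <input type="url" name="source" required>
  </label>
  <!-- The post being mentioned; bake this in per page. -->
  <input type="hidden" name="target" value="https://example.com/current-post/">
  <button type="submit">Send Webmention</button>
</form>
```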

Your help was critical in indiewebifying my site. Thank you! I believe I am almost there in getting rid of Disqus.

Next steps:

  • Refine the webmentions integration
  • Download existing Disqus comments, convert that to a JSON format and present them somehow under the posts. Done!
1 Like

No, it was something else… the API sends a JSON object with only 20 links by default… I just had to increase that count:

{{ $domain := "" }} <!-- Hard-code the domain during testing on localhost, branches -->
{{ $num_mentions_max := 200 }}
{{ $webmentions_rcv := getJSON (printf "" $domain .RelPermalink $num_mentions_max) }}
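For anyone consuming the same API outside Hugo (e.g. to debug caching with curl), here is a minimal Python sketch of building such a request URL. The base URL and the `per-page` parameter follow webmention.io’s API as I understand it, so double-check against their docs; the target URL below is just an example:

```python
from urllib.parse import urlencode

# Base of webmention.io's mentions API (assumption: verify against their docs).
API_BASE = "https://webmention.io/api/mentions"

def mentions_url(target: str, per_page: int = 200) -> str:
    """Build the API URL that fetches up to `per_page` mentions of `target`.

    `per-page` raises the default cap of 20 results per response.
    """
    query = urlencode({"target": target, "per-page": per_page})
    return f"{API_BASE}?{query}"

print(mentions_url("https://example.com/posts/hello/"))
```

You can then fetch that URL with curl to compare against what Hugo's getJSON has cached.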


That --ignoreCache tip works for me when running hugo server locally. Thanks. Though, yes, it would be nice to get your configurable cache TTL feature baked in.

1 Like

They have this limit for a reason.

I read quite a bit about Webmentions since you mentioned it in the Disqus thread and I think this thread is a better place to reply.

I really liked the concept of using Webmentions to render comments and likes from various sources in a Hugo site.

But what stopped me from reading further is the DDoS attack vector:

And from what I’ve seen in this link, the measures one needs to take to prevent such an attack are quite complex.

@cmal @kaushalmodi What sort of measures are you taking to prevent abuse? Care to share?

That just limits the size of the JSON object returned by the API. I believe it’s 20 because webmentions are commonly displayed using JavaScript, with N mentions per page. As I am creating a static page of Webmentions, I don’t need to paginate stuff… I just get every WM out there for that post.

Hmm. I haven’t yet thought much about that. I believe I will have to deal with that when it happens. But your concern is legit if you are thinking about implementing this for a commercial site. In my case, it’s a personal blog which receives a handful of comments per month. If DDoS ever becomes an issue, I might just need to filter those out of the JSON object returned by getJSON.

In summary, the Webmentions commenting approach is much much nicer than using Disqus for me :slight_smile:

1 Like

Fair enough. I’m not a SysAdmin or an InfoSec expert, but this much I know: never play with things that have potential attack vectors.

Anyway I’ve been looking at your site’s source and how you implemented Webmention. Very clever approach. :+1:

But I was particularly intrigued with how you made those Twitter interaction buttons @kaushalmodi . It got me thinking and I’m going to implement something similar on my blog but with Mastodon (that I joined today).

I’ve been looking into the Mastodon API and it’s amazing! Did you know that you can make a POST request to favorite something on Mastodon through your Hugo site? Also you should be able to fetch Mastodon replies pretty much like you do through Webmention for the Twitter ones.

True Mastodon is small compared to Twitter etc. But… It feels a bit like the web used to be in the late 90s and early 00s and I really like it.

It’s not clear though if any kind of attack is easily possible on my site… it’s static, with a smaller attack surface than even most other static sites. I use and enforce HTTPS, have a pretty strict Content Security Policy (no inline scripts allowed, so even if someone injects inline scripts, they won’t run), disallow frames, and a lot more (search for Security Headers).

So the worst case “attack” that I can foresee because of Webmentions is a static page with thousands of spam Webmentions (which can be easily taken care of).

Thanks :blush:! It was fun to implement something unique.

Welcome to Mastodon! It has a unique kind of crowd, and it’s a breath of fresh air… no ads! The only reason I haven’t integrated comments directly with Mastodon is that the Mastodon dev recently declined to integrate auto-sending of Webmentions, and also there’s no mechanism to back-feed Webmentions from Mastodon (as there are so many instances out there). So discussions on Mastodon will stay stuck there… and never show up in the Webmentions feed below each post.

In the case of Twitter, discussions happening on the Twitter thread get back-fed as Webmentions via a service, and so they show up below the respective post.

Once there’s a good Webmentions integration with Mastodon, I am switching to that as a backup interaction method for folks unfamiliar with sending Webmentions.


A DDoS attack may not be easy on your static site per se, especially since you have a strict CSP as you pointed out. But it may slow down the server where your site resides through a multitude of HTTP requests. If I understood it right, you have an automated process to trigger a build of your site whenever a new Webmention is received. Now imagine a scenario where Netlify receives thousands of build requests for your site and others at the exact same time. They probably already have precautions to mitigate such an attack, but I for one don’t feel comfortable with the Webmention (i.e. the old Pingback) vulnerability.

No. But with Hugo’s getJSON you can render everything from a Mastodon status on your site. Also, if you give your application read & write access, you can have direct interactions such as favoriting a status from your site (if the visitor is an already authenticated Mastodon user). Their API exposes everything since it’s not a silo.
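To make the idea concrete, here is a hedged Python sketch of the two calls involved. The instance URL and status id are placeholders; the endpoints follow Mastodon’s REST API as documented (GET /api/v1/statuses/:id/context for the reply thread, POST /api/v1/statuses/:id/favourite with an OAuth bearer token for write access). This only constructs the requests and does not send them:

```python
from urllib.request import Request

# Placeholder instance; substitute the server your account lives on.
INSTANCE = "https://mastodon.example"

def replies_request(status_id: str) -> Request:
    """Unauthenticated GET that fetches a status's reply thread (context)."""
    return Request(f"{INSTANCE}/api/v1/statuses/{status_id}/context")

def favourite_request(status_id: str, token: str) -> Request:
    """Authenticated POST that favourites a status (needs write scope)."""
    return Request(
        f"{INSTANCE}/api/v1/statuses/{status_id}/favourite",
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )

req = favourite_request("123", "YOUR_TOKEN")
print(req.get_method(), req.full_url)
```

Sending either request would be a matter of passing it to urllib.request.urlopen (or any HTTP client) and parsing the JSON response.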

I saw Bridgy and I tested this feature on your site out of curiosity.

In this post I pressed the Twitter heart, went to Twitter and liked your status, but the count hasn’t updated on your site and it’s been like 20 hours already. Just thought to let you know.

Thanks! So far I like what I’ve seen a lot and also I really like the fact that there is a dedicated Mastodon instance for art.

Many thanks for sharing your configuration. Very useful source of inspiration.

1 Like

@kaushalmodi You have my thanks also for having made your Content Security Policy configuration available. It is especially great since you document the various options.

Also, if you have the time, maybe you could write a Tips & Tricks forum post, or even better include a page in the Hugo Docs about implementing a Content Security Policy for a Hugo site with Netlify.

And I don’t think it would be out of place in the Hugo Docs at all. We need this kind of info available for Hugo sites because security is a very important topic. And I think @bep and others would agree on this.

1 Like

Totally agree!

Love that documented netlify.toml - man after my own heart! And you’ve pointed the way for me to simplify my own CSP which (only implemented for reporting so far) is threatening to be bigger than some content!!

And don’t get me started on the size of metadata I’m serving now just to satisfy all the incompatible microformats that different search engines and social media sites require.

I love the thought that Netlify have put into their service as well. All the right things either thought about in advance or allowed for because of the flexible design.

Once I’ve finally got to the bottom of all of the weird and wonderful headers and meta tags you are supposed to use these days, I’ll be documenting it in the hugo section of my blog at

When did web development get so complicated!! It wasn’t like that when I started nearly two decades ago :slight_smile:

When IE reigned supreme?

Personally I have no nostalgia for the old days from a web design perspective. I have too many hair pulling memories from back then.

And I much prefer how things are now. It may be more complex but at least these days browsers tend to respect specs not ignore them.

Haha - I don’t disagree particularly. Things were simple, but ugly.

But the more serious point is that my Hugo generated content is vastly larger than just the content due to the myriad of headers and meta tags that I have to send in order to meet all the “standards” now in use. It’s a mess.

Sorry, probably a bit off-topic.

1 Like

Webmention, like many federated protocols, is indeed vulnerable to some forms of DDoS attacks, although no such practical attack has been spotted so far. Only annoying but harmless spam has been spread throughout the indieweb.

I would say the risk is more for the webmention endpoint than for your static website. The endpoints have to parse remote web pages to understand the semantics behind them. In this regard, ActivityPub is more appropriate because parsing JSON is waaaaay faster and less error-prone than parsing raw HTML. ⁽¹⁾

But in any case nothing is wrong with the protocol in itself, we just need crypto auth (and encryption) plus moderation built on top. Some people within the indieweb movement are already working on this. We just need more pioneers to get on board and challenge the status quo :wink:

Well, you could write your own ActivityPub endpoint (it’s really not hard), but if you need a prepackaged solution there’s a bridgy ActivityPub endpoint that will turn ActivityPub + AS 2.0 → Webmention + Microformats 2.0 :slight_smile:

Wow, that sounds nice. How does that work? How can you guess the Mastodon API address (i.e. the instance) when you generate your site, if it changes with every user? Do you have to use some JavaScript voodoo to let users enter their instance address, and from there update the form action?

On a slightly different topic, i had started writing some content plugin system for my build script. I’m currently thinking on refining it and working on proper integration. Are other people interested in such pre-packaged solutions? Is it worth spending my time working on? :slight_smile:

⁽¹⁾ How a webmention endpoint works: it receives a request saying “page A linked to page B”. From there, it will try to load pages A and B, ensure it’s responsible for page B’s webmentions, and check that the endpoint that sent the request is indeed page A’s webmention endpoint. This requires parsing both HTML pages to find rel links. Then, if those links are correctly set up (i.e. we’re not receiving webmentions for a website we don’t manage, or from a fishy website), we can parse page A’s body to interpret the interaction that took place on that page. So that’s a lot of parsing and guessing for every interaction, which makes it more vulnerable to DoS/DDoS.
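As a toy illustration of just one of the steps above — checking that the source page really links to the target — here is a stdlib-only Python sketch. It is not a real endpoint: it skips fetching, endpoint discovery, and microformats parsing, and only shows the link-verification idea:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.hrefs.extend(v for k, v in attrs if k == "href" and v)

def source_links_to_target(source_html: str, target: str) -> bool:
    """Accept the webmention only if the source page actually links to target."""
    parser = LinkCollector()
    parser.feed(source_html)
    return target in parser.hrefs

sample = '<p>Nice post! <a href="https://example.com/post/">original</a></p>'
print(source_links_to_target(sample, "https://example.com/post/"))  # True
```

A real endpoint would run this (and the rel=webmention checks) on freshly fetched copies of both pages, which is exactly the per-interaction parsing cost described above.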

I am only interested in my published statuses, it’s easy to get these by entering the status id in the frontmatter of a Hugo content file. I haven’t had the time to test the POST request yet. It might not work out, but if it does I’ll let you know.

BTW you can read the Mastodon API docs over here if you want:

Well, I’m interested in the build workflow and how to integrate other dynamic sources. Mentions and comments are two things that might be updated at build time but things like twitter feeds and RSS feeds are a couple of other things I’m interested in.

This really falls under the banner of data-driven sites. We currently have a simple form of this with Hugo’s data folder which is great. But there are a number of use-cases where more open, dynamic data might be used to drive new or updated content.

Having a standard process for this, using well recognised and supported tools, would in my opinion be a real boon to Hugo. However, I recognise that this probably isn’t a core Hugo feature.

Anyway this is just me musing about the art of the possible - I’m still learning the basics.

Sorry if I’m a little late, but I saw this topic some time ago and now it has a lot of useful content. I was looking for a Hugo site with webmentions to have as an example, as I want to implement them on my site.

But first I want to ask you about your Twitter interaction buttons, since I was looking to have something similar (showing a tweet conversation thread) to replace comments, and your solution is way better and cleaner. I went to look at your site’s code, but your posting process still isn’t clear to me: do you have to tweet, then copy the tweet ID and paste it into the post front matter to make it work, or is it a more automatic process?

Maybe as a webmention it gets the ID, or am I totally confused?

1 Like

Sorry to say, but that’s what it is.

There is no automatic process involved. If you look at the git blame for a post, you will find the secret – I post, I tweet, I add the tweet ID, and I re-publish :slight_smile:.

1 Like