After only a single month of development (content included, thanks to all our writers), we are excited to finally let you learn about TheFreeBundle.
Thanks to everyone who endlessly contributes to the Hugo project.
We danced the Mamushka while Nero fiddled… we danced the Mamushka at Waterloo… we danced the Mamushka for Jack the Ripper. And now, Hugo contributors, this Mamushka is for you!
Yeah, what a great looking site. I’d be interested in hearing about any “lessons learned” experiences or interesting techniques that went into making it.
The main thing I’m fighting against right now is SEO. As you might know, single-page sites are a big no-no for Google. I lost all future SEO points by doing this type of layout.
On a regular website, you can simply have different sections, and each section is a new opportunity for SEO. Google works very well when you do that. The more content you add, the more of your site gets indexed.
Here, while my site will be indexed (meaning its address), its content won’t be. So no matter how much I put into it, sadly, Google won’t notice any changes in the content.
I’m trying to get my head around that. I’m using tags and everything I can, HTML-wise, to help Google index my site, but at the end of the day there’s little I can do.
That’s one of the big lessons I learned.
That said, I am not going to change the layout. It stays this way.
The main problem was the loading time for the videos. I had to add a lazy-load script for that alone. The problem with adding lazy loading for all the site content is that you can’t browse it through the main menu if you do, so it has to stay a very long, very heavy page.
Thankfully, CloudFront came to the rescue, and while my pocket will bleed, at least it won’t bleed a lot.
I’m getting some comments from people saying “the site is too long, like 4 or 5 A4 blocks of text, who will read that!”
I think there must be a way to use indexable sections with content files, and still pull those files’ content onto the top index page. Maybe an issue number could be put in the config.toml to make the index page grab the relevant {{ .Content }} bit from several files…
That way, the SEO-poor top page, https://www.thefreebundle.com, pulls content from the current /issue-N, and previous issues (/issue-1, /issue-2, etc.) can be indexed by Google.
Just spitballing; I’ve never set up a site like this. It’s an interesting challenge.
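Something like this, maybe (totally untested, and the param and section names are just placeholders, not anything from your site):

{{/* layouts/index.html — rough sketch: render every article of the current issue inline */}}
{{/* assumes something like `issue = 3` under [params] in config.toml */}}
{{ $section := printf "issue-%v" .Site.Params.issue }}
{{ range where .Site.RegularPages "Section" $section }}
<article id="{{ .Title | anchorize }}">
  <h2>{{ .Title }}</h2>
  {{ .Content }}
</article>
{{ end }}

That way bumping the issue number in config.toml would be enough to make the homepage pull in the new issue’s files, while the older sections keep their own pages.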
My mom told me there would be days like this. But I stick to my guns. I’m a programmer, dammit! (Well, not really.)
As it is, I’m creating /issue-1/, /issue-2/, etc. inside content, and it creates the folders and the index.html of each post too, but I don’t use those because the main index.html in issue-1 has everything compiled into one single page (as you can see online).
You can keep the layout as is but at the same time you can configure your site so that each article has its own permalink for Google to index.
It would be a shame for all this content not to be indexed by search engines.
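Something along these lines, for example (a bare-bones sketch, not your theme’s markup): a minimal per-article layout that serves each piece at its own URL with its real content, plus a link back to its anchor on the long homepage.

{{/* layouts/_default/single.html — hypothetical sketch of a per-article page */}}
<!DOCTYPE html>
<html lang="{{ .Site.Language.Lang }}">
<head>
  <meta charset="utf-8">
  <title>{{ .Title }} | {{ .Site.Title }}</title>
  <meta name="description" content="{{ .Summary | plainify }}">
</head>
<body>
  <article>
    <h1>{{ .Title }}</h1>
    {{ .Content }}
  </article>
  {{/* send visitors to the article's spot on the long homepage */}}
  <a href="{{ .Site.BaseURL }}#{{ .Title | anchorize }}">Read it in the full issue</a>
</body>
</html>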
Also, I think you need some kind of loading animation script, or a web host with a CDN. It took a while for the website to load on my end, and it didn’t look that great while loading.
Other than that I’m impressed with the long format of the page and the amount of content. It’s not tiring to the eye. It’s one of the better sites I’ve seen when it comes to comic books and games.
Each article has its own permalink, actually. It even has its own folder, but I did not upload those because their headers point back to the index (meta refresh).
I believe it would be worse to have all these redirects? Not sure, really.
The one thing I do have is an RSS feed (https://thefreebundle.com/index.xml), and each article has its own link to its anchor, plus the content’s summary. Not sure if that’s enough for Google, or if I should just leave the folders with the posts and the meta refresh for SEO purposes.
I added a loading script to the footer. Since the page is long, I first show that loading text, then hide it five seconds later (which gives the site content enough time to load and push it down, after which it becomes the footer).
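For reference, it’s roughly this (a simplified sketch, the id and wording here are made up):

{{/* layouts/partials/loading.html — sketch of the footer loading notice */}}
<div id="loading-notice">Loading the issue…</div>
<script>
  // Hide the loading text after five seconds; by then the content above
  // has loaded and pushed this block down to the footer position.
  setTimeout(function () {
    document.getElementById('loading-notice').style.display = 'none';
  }, 5000);
</script>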
CDN = check
Loading = kinda. Did you mean what I did, or the whole “load the entire page first” approach? I tried not to go that route because, if the site gets slow, people might get tired of waiting for the loading message to finish its thing.
Maybe lazy loading (I’m using it for the looping videos), but I’m not sure how well that plays with the anchor menu (it will definitely speed up the site, though).
Meta refresh redirects are not good at all for SEO.
Googlebot does look into RSS feeds as far as I know, but what it actually indexes are web pages.
A good indication of what Google Bot will see when visiting your site is the sitemap: https://thefreebundle.com/sitemap.xml
Basically, it will only see the two versions of the homepage (EN + ES), and that’s it.
I suggest that you remove the meta refresh tags and upload the permalink pages to your server. You don’t have to add links from your homepage to the permalink pages, but at least they will then be indexable by Google, and you could also share individual article pages on social media.
Also, I would make sure to add links back to the homepage on those permalink pages.
You really need to use Lazy Loading for images and videos that are below the fold of your homepage to improve the loading time.
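A quick sketch of what I mean, assuming the media is rendered from a partial or shortcode (the names and parameters here are made up, not your actual markup):

{{/* layouts/partials/lazy-media.html — hypothetical sketch */}}
{{/* images below the fold: let the browser defer fetching them */}}
<img src="{{ .image }}" alt="{{ .alt }}" loading="lazy">
{{/* videos: preload="none" keeps the browser from fetching the data up front;
     it only loads once the user hits play */}}
<video src="{{ .video }}" preload="none" controls loop muted></video>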
P.S. All the stuff I wrote about is off-topic in this forum (since it’s only about Hugo); if you have more questions, try Stack Overflow and other forums meant for general web development. Also, you may want to use Google Search Console so that you have some control over how your page appears in the search results.
Glad to come across a fan of comic books! I consider it literature; that’s why it’s under Literature, not simply “comics”. Some of the greatest authors have seen their work featured in comic books.
About the refresh issue, I did some research and I believe Google doesn’t really mind it; they just don’t recommend it.
Upon further research, I believe that’s the desired result for multilingual sites? @bep might be able to confirm. Hugo basically uses that root sitemap.xml to point at the two versions of the sitemap.xml file (one in English, the other in Spanish), which are the ones pointing to the page content in both languages.
What I believe is that Google (through Webmaster Tools) reads that “main” sitemap.xml, which points to the other two (the bilingual sitemap.xml files) with the articles.
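In other words, the root file is just an index pointing at the per-language sitemaps, roughly like this (simplified from memory, not the exact file):

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://thefreebundle.com/en/sitemap.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://thefreebundle.com/es/sitemap.xml</loc>
  </sitemap>
</sitemapindex>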
Also, that sitemap (view-source:https://thefreebundle.com/en/sitemap.xml) looks kind of strange (there’s only one rel=“alternate” link when all the articles exist in both Spanish and English).
My sitemap is kind of regular.
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xhtml="http://www.w3.org/1999/xhtml">
{{ range .Data.Pages -}}
{{ if .Content -}}
<url>
<loc>{{ .Site.BaseURL }}issue-{{ .Site.Data.issue_data.issue.current }}/index.html#{{ .Title | anchorize }}</loc>{{ if not .Lastmod.IsZero }}
<lastmod>{{ safeHTML ( .Lastmod.Format "2006-01-02T15:04:05-07:00" ) }}</lastmod>{{ end }}{{ with .Sitemap.ChangeFreq }}
<changefreq>{{ . }}</changefreq>{{ end }}{{ if ge .Sitemap.Priority 0.0 }}
<priority>{{ .Sitemap.Priority }}</priority>{{ end }}{{ if .IsTranslated }}{{ range .Translations }}
<xhtml:link
rel="alternate"
hreflang="{{ .Lang }}"
href="{{ .Site.BaseURL }}es/issue-{{ .Site.Data.issue_data.issue.current }}/index.html#{{ .Title | anchorize }}"
/>{{ end }}
<xhtml:link
rel="alternate"
hreflang="{{ .Lang }}"
href="{{ .Site.BaseURL }}issue-{{ .Site.Data.issue_data.issue.current }}/index.html#{{ .Title | anchorize }}"
/>{{ end }}
</url>
{{ end -}}
{{ end -}}
</urlset>
UPDATE: ohhh, there we go. My files were in both Spanish and English but under different filenames, so the sitemap.xml shows the rel=“alternate” only for those with the same filename.
Example:
What I have
Hello.en.md
Hola.es.md
So the rel=“alternate” didn’t show up for those.
What I should have:
Hello.en.md
Hello.es.md
That’s the only way the sitemap.xml is going to show the rel=“alternate”.
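(Side note for anyone landing here later: apparently newer Hugo versions also accept a translationKey field in the front matter, which pairs translations even when the filenames differ; something like this, if I’m reading the docs right.)

+++
# content/issue-1/Hola.es.md — hypothetical front matter
title = "Hola"
translationKey = "hello"
+++

+++
# content/issue-1/Hello.en.md — paired with the Spanish file via the same key
title = "Hello"
translationKey = "hello"
+++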
@Javier_Cabrera of the Cabrera Brothers, the creators of Cypher!?
What a surprise to find you here! Your new magazine is awesome… even though it’s really big, it loaded really fast.
c ya