My blog is new, and I mostly discuss SEO, which is a very competitive space to begin with. So, in order to start ranking and have any chance at valuable traffic, I kept wondering what I could do right now to reduce the time it takes to start seeing traffic — and, in particular, to stand apart.
Realistically speaking, I have a long road ahead, but in the background, I wanted to try something I really haven’t seen used — at least in my experience.
We all know that SEO takes time (especially if you’re a new website), and how well, and for what, you rank primarily depends on:
- Content.
- Links (external + internal).
- Crawlability and indexability.

I am sure you can come up with more, like CRO, keyword research, etc., but at the highest level, that’s how I think about it.
In any case, content and links involve a lot of hard, manual work (which I am clearly executing on), but at the same time, I wanted to focus on something that could increase my chances of ranking quickly (even if only by 0.000001%). The obvious candidate became crawlability — and, by extension, indexability.
What I was essentially thinking was: how can I get Google to easily discover all of my URLs, as many times as possible? To that end, I did the following.
Tactic 1: Creation of HTML Sitemap
This concept is actually not new, but I believe mine differs slightly in two ways:
- I included links to all my XML sitemaps — including the index sitemap.
- I also included a link to my robots.txt. (screenshot to follow).
The sitemap does, and always will, contain internal links to all of my blog posts, but I list them from oldest to newest. In other words, the oldest post is number one, and the most recent one is always last. There were two reasons for this approach:
- On the front end, the order is reversed. That is, the latest blog post always sits at the top of my homepage.
- Crawlers typically process a page from top to bottom (and left to right), so given how my homepage is structured, I wanted another path for them to reach my older posts on every pass through my site.
The side benefit of creating the HTML sitemap is that I will never have orphaned URLs, a.k.a. URLs with no internal links pointing to them whatsoever.
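The generation logic is simple enough to sketch. Below is a minimal, hypothetical example of building such a page — the sitemap filenames, post URLs, and dates are placeholders I made up, not my actual setup:

```python
from datetime import date

SITE = "https://feedthecuriosity.com"

# Links to the XML sitemaps (including the index sitemap) and robots.txt.
# The sitemap filenames below are hypothetical placeholders.
crawl_links = [
    f"{SITE}/sitemap_index.xml",
    f"{SITE}/post-sitemap.xml",
    f"{SITE}/robots.txt",
]

# (publish_date, url) pairs; hypothetical example posts.
posts = [
    (date(2020, 6, 1), f"{SITE}/newest-post/"),
    (date(2020, 1, 15), f"{SITE}/older-post/"),
    (date(2019, 11, 3), f"{SITE}/oldest-post/"),
]

def html_sitemap(posts, crawl_links):
    """Render the sitemap list: crawl links first, then posts oldest-to-newest."""
    # sorted() on (date, url) pairs orders by publish date, oldest first.
    items = crawl_links + [url for _, url in sorted(posts)]
    lis = "\n".join(f'  <li><a href="{u}">{u}</a></li>' for u in items)
    return f"<ul>\n{lis}\n</ul>"

print(html_sitemap(posts, crawl_links))
```

The key design choice is that `sorted(posts)` puts the oldest post first, which is the opposite of the homepage order — exactly the point of the tactic.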
Tactic 2: Adding the HTML Sitemap, All XML Sitemaps & Robots.txt in the Header Menu
As a follow-up to tactic 1, I ended up adding links to all of them in my header menu. In other words, I created site-wide links to everything mentioned above, and any new page that gets created will follow suit.
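In markup terms, the header menu ends up looking something like this — the paths are hypothetical placeholders, and your CMS or theme will generate its own structure:

```html
<!-- Sketch of a site-wide header menu; URL paths are assumptions -->
<nav>
  <ul>
    <li><a href="/html-sitemap/">HTML Sitemap</a></li>
    <li><a href="/sitemap_index.xml">XML Sitemap Index</a></li>
    <li><a href="/robots.txt">Robots.txt</a></li>
  </ul>
</nav>
```

Because the header is rendered on every page, each of these destinations picks up an internal link from every URL on the site.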
The Cherry on Top
Okay, so what if I did those things? What will that achieve? Great questions. Here is what I imagine will happen:
- Google will crawl my robots.txt, and then crawl my sitemaps (iteration 1 for all URLs).
- Google will crawl individual pages of my site, which means it will crawl my header menu. When it goes through it, it will crawl my XML sitemaps (iteration 2 for all URLs).
- It will crawl my HTML sitemap (iteration 3 for all URLs).
- A subset of that would be crawling the XML sitemaps and robots.txt again (iterations 4 and 5 [perhaps?] for all my URLs).
Imagine step 2 happening multiple times, site-wide. I doubt it will literally end up behaving that way — my logical thinking here is probably naive and overly simplistic — but I am fairly certain I am nudging the crawlers to re-encounter my XML sitemaps and my HTML sitemap multiple times.
Google crawls XML sitemaps on its own schedule (nobody knows the exact nature of it), but I believe doing this will help Google help you with better organic search visibility. Time will tell how this performs for me, but so far, I am seeing positive signs.
And it goes without saying, feedthecuriosity.com is a new website, so it’s not really an apples-to-apples comparison.
I want to add this to my footer menu too; I just need to figure out my custom code before I go ahead and implement it.
Now, does this seem spammy? I think not. I believe it’s considered perfectly normal for site owners to link to their important pages in the header and the footer. I also know that having an HTML sitemap is very typical. I mean, come on, we all know it’s more for crawlers than for users. Do you think users are going to rummage through thousands and thousands of URLs just to find what they’re looking for? What are the chances, eh?
The other rationale for executing on this methodology was that XML sitemaps and robots.txt can typically be accessed publicly (if nothing else, by going straight to the URLs). What I mean is that some of my XML sitemaps carry a noindex directive (I don’t really have a purpose for them to be indexed), but crawling them is still allowed.
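For context, since an XML file can’t carry a meta robots tag, the usual way to do this is the `X-Robots-Tag` response header. A sketch for an Apache server with mod_headers enabled — the filename pattern is an assumption, and your server stack may differ:

```apache
# Hypothetical example: mark XML sitemap files noindex via a response header.
# Crawling is unaffected; only indexing of the sitemap URLs themselves is discouraged.
<FilesMatch "sitemap.*\.xml$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```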
Plus — and it would be weird — what if other websites are linking to your XML sitemaps and/or your robots.txt? There are probably some automated aggregators already doing this, so wouldn’t Google crawl those backlinks anyway?
Should You Take a Similar Approach?
I think so! Admittedly, it’s not my place to tell you what you should and should not do, but if you’re reading this and maintain a big website, I believe the benefits of these tactics (if any) will come to fruition quite quickly!
I sincerely hope you try. We SEOs might be onto something here!