Google, as one of the most popular search engines, is the target of every website owner when it comes to page indexing. Sure, there are other search engines on the market, such as Bing and Yahoo, but Google is generally everyone's go-to when they want to look something up.

This is precisely why web developers try to build into their websites the SEO tactics that Google will actually recognize.

Aside from indexing web pages for user queries, Google is also quick to share updates about how its search works. The latest Google I/O was one such event, where the SEO strategies Google does and doesn't support came to light.

Explore Basics Of Adult SEO: How Adult SEO Works

SEO strategies not supported by Google

Generally, Google declines to support a wide range of SEO strategies, each for its own reasons. Below, however, are five of the most unexpected ones, which you might have assumed Google supports:

1. Language tags

The implementation of language tags varies quite a lot across the web: they can appear in a request header, in a meta tag, or as a lang attribute on the HTML element itself. Because of this inconsistency, Google ignores them outright. Instead, it determines a page's language with its own algorithms, by examining the page text itself.
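For illustration, here is a rough sketch of the three common forms mentioned above (the language code de is just an example value); all of them are ignored by Google in favour of its own text-based detection:

```html
<!-- 1. As a lang attribute on the root HTML element -->
<html lang="de">

<!-- 2. As a meta tag in the document head -->
<meta http-equiv="content-language" content="de">

<!-- 3. As an HTTP response header (shown as a comment here for illustration):
     Content-Language: de -->
```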

2. Cookies

Cookies can potentially alter the content of a web page because they make the page stateful. Google, on the other hand, aims to view pages in their original, stateless form, just as any new visitor would see them. As a result, Googlebot doesn't use cookies while crawling pages. The only time it will fall back to cookies is when the page content doesn't function without them.

3. Sitemap frequency and priority

The original concept behind Sitemap priority was to set a value between 0 and 1, inclusive, for every URL to signal how often that URL should be crawled relative to others. Likewise, the Sitemap frequency attribute was meant to indicate how often a page is updated: daily, weekly, monthly and so on. However, web developers ended up setting every URL's priority to 1 and its frequency to daily, for obvious reasons, rendering the whole system meaningless. In light of this poor use of the sitemap attributes, Google now discards these values and relies on its own algorithms to determine a page's crawl frequency. Essentially, the algorithm increases the crawl rate for pages that change dramatically between crawls.
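As a reminder of what these attributes look like in practice, here is a minimal sitemap sketch (the URLs and values are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <!-- Hints Google now ignores in favour of its own crawl scheduling -->
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/old-archive/</loc>
    <changefreq>monthly</changefreq>
    <priority>0.3</priority>
  </url>
</urlset>
```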


4. HTTP/2

Although HTTP/2 can make a significant difference to website speed, Google currently overlooks it, since Googlebot still crawls over HTTP/1.1. That said, Google's John Mueller has announced that Googlebot may eventually switch to HTTP/2 when crawling websites, as this would let them change the way elements are cached. Even then, Google wouldn't see the same speed gains that a web browser does.


5. Crawl-delay directive in Robots.txt

Since servers in the early days weren't capable of handling intensive traffic, specifying a crawl-delay directive in the Robots.txt file was helpful. As the name suggests, this directive indicated the number of seconds a crawler should wait between requesting pages.
Because today's servers are much more powerful and can handle far more traffic, Google now ignores this directive. That said, if you still want to change the rate at which Google crawls your website, the Site Settings tab in Google Search Console is the place to do it. If that isn't enough, you can also submit a special request to Google describing the problems you're encountering with how Googlebot crawls your site.
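For reference, this is roughly what the directive looks like in a robots.txt file (the ten-second value is just an example); Googlebot ignores it, though some other crawlers still honour it:

```
# Applies to all crawlers that choose to respect it
User-agent: *
# Wait 10 seconds between successive requests (ignored by Googlebot)
Crawl-delay: 10
```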

Those were some of the unexpected SEO strategies that Google simply doesn't support, and for good reasons. Below, however, are four SEO strategies that Google does support, but only unofficially. Although they're not guaranteed to work 100% of the time, they're still partially effective.

Check out my other post: Mastering SEO – Optimizing Page Speed and Why It’s So Important For Your Site


SEO strategies unofficially supported by Google

1. JS-injected canonical tags

Canonical tags are helpful in telling the search engine which URL version you want to appear in search results (to prevent the duplicate-content problems that arise when the same content lives at multiple URLs).
To much dismay, Google's Tom Greenaway said earlier that Google only processes the rel=canonical tag when the page is first fetched, in its non-rendered form, so the tag is likely to be missed if you rely on the client to render it. Injecting canonical tags into rendered pages isn't the ideal scenario either: tests conducted by searchViu's team and Eoghan Henn found that it took about three weeks for a JS-injected canonical's target URL to be picked up. In the end, Mueller admitted that JS-injected tags do work but should not be relied on.
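As a minimal sketch (the URL is hypothetical), here is the difference between a canonical tag served in the initial HTML and one injected with JavaScript after the fact:

```html
<!-- Served in the initial HTML: seen on the first, non-rendered fetch -->
<link rel="canonical" href="https://www.example.com/preferred-page/">

<!-- Injected client-side: only visible once the page is rendered,
     so it may be picked up late or not at all -->
<script>
  var canonical = document.createElement('link');
  canonical.rel = 'canonical';
  canonical.href = 'https://www.example.com/preferred-page/';
  document.head.appendChild(canonical);
</script>
```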

2. Robots.txt Noindex directive

Generally, a noindex directive keeps certain areas of a website out of the search index, often for privacy or security reasons. Although Mueller has stated that the Robots.txt version of this directive should not be relied on, it still works according to many web developers, including a tester from DeepCrawl.

To clear up any confusion, the noindex directive in Robots.txt differs from the standard meta noindex in that the former is easier to manage: Robots.txt rules overrule URL-level directives, and you can noindex a whole group of URLs by specifying URL patterns instead of adding a tag to every page you don't want indexed. To get hands-on with this, you can test the noindex directive in the Robots.txt tester within Google's Search Console.
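A rough sketch of the pattern-based approach (the paths are hypothetical); note that this directive was never officially documented by Google, which is exactly why it shouldn't be relied on:

```
User-agent: Googlebot
# Unofficial: keep these URL patterns out of the index
Noindex: /internal-reports/
Noindex: /*?sessionid=
```

The standard, officially supported per-page alternative remains the meta tag: <meta name="robots" content="noindex">.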

3. AJAX Escaped fragment solution

Typically, the escaped fragment solution makes dynamic web pages that use AJAX more accessible to the web crawler. It works by using a hashbang (#!) in the URL, which signals to search engines that they can rewrite the URL (via an escaped-fragment parameter) to request the content from a static snapshot of the page.
For a considerable time, Google has talked about deprecating the escaped fragment solution. Although it has moved towards rendering the #! versions directly, Mueller has announced that escaped fragments won't be abandoned entirely, given the large volume of URLs that still point directly to them.
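For reference, this is the kind of URL rewriting the scheme involves (the site and fragment are hypothetical):

```
# Hashbang URL served to users
https://www.example.com/#!/products/blue-widget

# "Escaped fragment" URL the crawler requests to get a static snapshot
https://www.example.com/?_escaped_fragment_=/products/blue-widget
```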

4. Hreflang attributes within anchor links

Generally, hreflang attributes are added to sitemap files, the HTTP response header, or the page's head. One might wonder whether Google supports them in anchor links as well. Mueller has declared that it doesn't. However, some notable web developers and testers have found that hreflang attributes in anchor links are partially supported by Google: some, but not all, of them were picked up in Google's Search Console.
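A minimal sketch of the difference (the URLs are hypothetical): the first form is the officially supported one, while the second, anchor-link form is the one tests suggest is only partially picked up:

```html
<!-- Officially supported: hreflang annotation in the page <head> -->
<link rel="alternate" hreflang="de" href="https://www.example.com/de/">

<!-- Not officially supported: hreflang on a regular anchor link in the body -->
<a href="https://www.example.com/de/" hreflang="de">Deutsch</a>
```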

All in all, the major takeaway is that although Google announces a lot of things over time, you shouldn't always take their word for it. Do some research of your own and see whether what they say actually holds up. They may have good reasons for their announcements, but it does no harm to test things on your end and validate (or debunk) their statements.

Also check out my Web Security Guide for SEO