10 Common Technical SEO Mistakes to Avoid

Most companies associate SEO mainly with keyword research, blogging and content optimization, while the technical side often gets overlooked. Developing a content marketing strategy takes time to implement and patience before it brings results. On-page SEO efforts remain a must, but they are often pointless if the technical elements are not in place.

Technical mistakes can be a silent killer for your SEO. Preventing them is usually simple: all you need to do is be aware of the right thing at the right time. In this article I will share some of the most common technical SEO mistakes I have come across during the SEO audits I have conducted. Read on and prepare to give your website the love it deserves.

1. You don’t set up a canonical tag on pages with multiple URLs

We often see multiple URLs opening the same page. What’s more, one page can generate different URLs through syndication, backlinking, or user paths. Therefore, it’s always a good idea to determine a “preferred” URL and help search engines identify the master page to index. A canonical tag does exactly that.

Rel=”canonical” is an HTML attribute applied to a link element in the page’s <head>. Its purpose is to prevent issues with duplicate content by telling search engines which URL is the canonical version of the page. Once crawlers index the master (canonical) page, all other versions are treated as copies of it. This consolidation of content by search engines eventually helps with page ranking.
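
In practice this is a single line in the page’s <head>; a minimal example, with a placeholder URL, looks like this:

    <head>
      <!-- Tells search engines which URL is the preferred (canonical) version of this page -->
      <link rel="canonical" href="https://www.example.com/preferred-page/">
    </head>

Every duplicate or parameterized version of the page should point to that same preferred URL.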

2. You don’t apply the nofollow attribute to unnaturally earned outbound links

Linking to an external page means passing SEO juice to that source of information. It tells search engines that credit for your content should be granted to the page you are linking to.

A nofollow link is a link with the rel=”nofollow” attribute applied. It keeps SEO juice on your page and prevents passing SEO credit to another website. Typically, you want to nofollow all unnaturally earned outbound links (paid links) and links generated outside your control (links within comments or any other form of user-generated content). But keep in mind that this only applies as far as search engines are concerned. External links still convey trust and authority in the eyes of your readers, so choose wisely who you link to.
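
For instance, a paid or user-submitted link can be marked up like this (the destination and anchor text are placeholders):

    <!-- rel="nofollow" asks crawlers not to pass ranking credit to the linked site -->
    <a href="https://www.example.com/sponsor/" rel="nofollow">Our sponsor</a>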

3. You are not aware that you are cloaking

Have you ever searched for something on Google, clicked on a decently ranked link, and (what?!) the page didn’t show anything related to what you were hoping to find? Any difference between the content presented to crawlers and to visitors is called cloaking and is considered a violation of Google’s webmaster guidelines. Search engines easily identify this practice today, so it does you no good; worse, it often results in a page becoming completely invisible in search engine results pages.

Cloaking is not always done intentionally; sometimes developers and the web design team are not aware they are doing it, while others don’t know the consequences cloaking has on SEO, so they don’t pay attention to it. Examples of cloaking include serving HTML text to crawlers while serving visuals to page visitors, or matching the text color with the background, which Google interprets as hiding something from users. Keep in mind that cloaking can do severe damage to your website’s ranking, so try to avoid it at all costs.
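
To illustrate the second example, text styled like this is readable by crawlers but effectively invisible to visitors, which is exactly the kind of mismatch to avoid (the colors and wording are invented for the illustration):

    <!-- White text on a white background: present for crawlers, hidden from users -->
    <p style="color: #ffffff; background-color: #ffffff;">keyword-stuffed text visitors never see</p>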

4. You are not sure how to use H tags

Headings help search engines identify important content. They are also a suitable place for LSI (latent semantic indexing) keywords, that is, related variations of your keywords expressed in different ways. There is a lot of debate online about heading tags: whether multiple H1 tags per page are allowed, how many H2s are optimal, whether H3–H6 tags matter at all, and so on.

What you should do:

  • Use the H1 tag for the top-level heading;
  • Apply H2 tags for main categories;
  • Use H3–H6 for subcategories and important links (if applicable), in descending order.

What you should not do:

  • Repeat the same keywords across multiple headings;
  • Write the entire page content in an H tag;
  • Skip H tags entirely (where headings should be used);
  • Reverse the order of H tags (e.g. using H3 for the top-level heading);
  • Use H1 for all headings on a single page.

However, if one page covers multiple equally important topics (targeting multiple keywords on one page is hard to rank for due to inconsistency, but sometimes it is unavoidable), it can make sense to use multiple H1 tags. Likewise, if your content covers categories with five or more levels of subcategories, it is logical to apply one lower H tag level for each lower subcategory, which means using all H tags down to H6.

To conclude: when it comes to headings, the best practice is to follow the logic of your content and use your own common sense, as far as SEO is concerned.
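
To make the recommended structure concrete, a page outline that follows the “do” list above might look like this (the topic and headings are invented for the example):

    <h1>Technical SEO Basics</h1>         <!-- one top-level heading -->
    <h2>Crawling and Indexing</h2>        <!-- main category -->
    <h3>Robots.txt</h3>                   <!-- subcategory -->
    <h3>XML Sitemaps</h3>
    <h2>On-Page Elements</h2>
    <h3>Canonical Tags</h3>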

5. You don’t update/clean up the XML sitemap regularly

A sitemap is an XML file that lists a website’s URLs, helps with page indexation, and protects the webmaster against duplicate content. The XML sitemap is the website’s skeleton. Any inconsistency between the sitemap and the website’s URL hierarchy is like a table of contents that doesn’t match the actual structure of the book. Search engines get confused if what is served in the sitemap doesn’t reflect the actual website content.

What this means in practice: if you don’t update your XML sitemap regularly to reflect any URL changes you make, you confuse search engines and prevent them from accessing your pages properly. Eventually, errors in the sitemap drag your rankings down.
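
For reference, a minimal sitemap with a single entry looks like this (the URL and date are placeholders); whenever a page’s URL changes, its entry should change with it:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/blog/technical-seo-mistakes/</loc>
        <lastmod>2020-01-15</lastmod>
      </url>
    </urlset>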

6. You don’t optimize for mobile searches

Smartphone traffic has exceeded desktop searches for a long time now. In today’s abundance of online content, users don’t have the patience to browse non-mobile-friendly pages. Having a mobile-friendly website is a critical part of your online presence. If you haven’t made your website mobile-friendly, you risk losing a majority of your potential traffic.

Webmasters and development teams sometimes forget or underestimate the importance of mobile-friendly design. Responsive design, load time, conciseness, font size, touch elements, localization – these are some of the main components to think about when optimizing for mobile searches.
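
As one small example, the viewport meta tag in the page’s <head> is a common building block of responsive design; it tells mobile browsers to scale the layout to the device width (a starting point, not a complete mobile strategy):

    <!-- Scale the layout to the device's screen width instead of a fixed desktop width -->
    <meta name="viewport" content="width=device-width, initial-scale=1">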

7. You are in doubt whether to go with subdomains or subfolders

I see a lot of websites still using subdomains for their blogs. Remember that Google treats subdomains as separate, unique websites. The links and authority a subdomain earns don’t necessarily benefit the main website. Although search engines will probably “figure out” that blog.yourwebsite.com is related to yourwebsite.com, you’ll need to invest a lot of time and effort in link building between the two sites, still with no guarantee of passing SEO credit. Even worse, covering similar topics or targeting the same keywords on both the domain and the subdomain may result in a ranking battle between your own pages.

Even though adding a subfolder such as yourwebsite.com/blog might be more complicated from a development perspective than setting up a subdomain, avoid subdomains whenever possible if you care about SEO. It may also seem easier to localize or adapt content and design by creating subdomains, but be aware of the consequences this has on your SEO efforts.

Cases where subdomains are a better option:

  • focusing your marketing strategy on advertising different products of your company;
  • having unique content for different franchises in multiple locations;
  • targeting multiple countries by offering content in various languages with no intention of bringing leads from one domain to another;
  • promoting separate branding.

Conclusion:

→ If you want your main domain to benefit the most and generate leads, use subfolders.

→ If you don’t need to spread SEO juice from one domain to another and only wish to present pages separately, use subdomains.

In any case, both subdomains and subfolders should be integrated into the overall SEO strategy, which will depend on your business strategy.

8. You forget to unblock search engine crawlers after a website redesign

Robots.txt is a file that gives bots instructions on how to access a website; it lets you control what you want search engines to find and index. While the website is still in the dev environment, developers usually block crawlers in robots.txt to avoid indexing test pages. The problem happens if they don’t unblock it before launch.

Don’t forget to double-check the robots.txt file before your website goes live. Overlooking this is often the main reason why pages don’t rank properly or don’t even get indexed at all. Search engines sometimes “figure out” this oversight and still index those pages, but don’t rely on it; set crawl directives for search engines and avoid the risk of ending up invisible to crawlers and users.
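
The difference between a blocked and an unblocked site is only a couple of lines in robots.txt, which is exactly why it is so easy to miss (the domain and paths below are placeholders):

    # Dev/staging version: blocks all crawlers from the entire site
    User-agent: *
    Disallow: /

    # Live version: allows crawling, optionally keeping private sections blocked
    User-agent: *
    Disallow: /admin/
    Sitemap: https://www.example.com/sitemap.xml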

9. You allow search engines to index test pages unintentionally

As opposed to the above scenario, developers sometimes forget to block crawlers in robots.txt in the dev environment, which often happens during a website rebuild.

If this happens, bots have complete access to your test pages. There are three major consequences:

  • search engines index incomplete, non-optimized content that is not ready to go live yet;
  • search engines treat your original, live page (once completed) as a duplicate of the already indexed test page;
  • the new, original page won’t rank properly due to duplicate-content issues with its test page.

Don’t forget to set robots.txt directives during the website redesign to avoid indexing test pages.
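
Beyond robots.txt, one additional safeguard worth mentioning (it is not covered above, so take it as a complementary suggestion) is a noindex directive on test pages, which asks search engines not to index a page even if they reach it:

    <!-- Asks search engines not to add this page to their index -->
    <meta name="robots" content="noindex">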

10. You don’t think about local searches

Imagine the following scenarios:

  1. You’ve heard of a fancy new cafe that opened in town a while ago, with cool new coffee flavors everyone’s talking about. You have some free time during lunch and decide to check it out. You open Google Maps and start searching for it, but can’t seem to find any results.
  2. You are walking around the area where you visited a nice bar with your colleagues some time ago. You feel like grabbing a beer with your friends while in the neighborhood. You remember the name but forgot the exact location, so you go to Google Maps. After typing the bar’s name, the map takes you to another part of the city, with photos showing a completely different ambience.
  3. You liked the blow dry you got at a salon on your way to work, and you’d like to book an appointment again. You google the name expecting to find a phone number to call, but Google shows some random places on the map. You somehow manage to find your salon by swiping and zooming the map manually (already frustrated), and finally click on the icon only to realize there is no phone number listed.

How did you feel imagining these scenarios? That’s what your potential customers experience if your local SEO is not in place. I won’t go into detail here on how to optimize for local searches, since a localization strategy is unique to every business, but here are some local SEO elements to consider:

  • Google My Business account;
  • on-page contact details;
  • city/country in title/meta/alt/H tags;
  • local landing pages;
  • product/service customer reviews;
  • location & contact details on social accounts.
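
To illustrate the title/meta point above, location details can be worked into the page’s <head> like this (the business name, city, and wording are invented for the example):

    <title>Fresh Blowout Salon | Blow Dry and Styling in Austin, TX</title>
    <meta name="description" content="Book a blow dry at Fresh Blowout Salon in downtown Austin, TX. Phone number, directions and opening hours below.">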

Remember that leveraging local online presence is a vital part of every SEO strategy and affects domain authority and rankings.

Final thought

Technical SEO is not as scary as it sounds. You already do most of these things; you just need to do them the right way. Once you learn how certain elements affect the technical side of SEO, you can anticipate and prevent possible mistakes by optimizing your website properly. Help search engines crawl and index your pages, and enable users to find your content.