Technical SEO

This document contains a comprehensive guide to technical SEO.

This brand-new guide will teach you everything you need to know about:

    • Crawling and indexing
    • XML sitemaps
    • Duplicate content
    • Structured data
    • Hreflang
    • Plenty more

If you want to make sure your technical SEO is up to date and effective, this guide should prove extremely useful.

CHAPTER 1

Technical SEO Fundamentals
Let’s get things started with a chapter on the fundamentals.

In this chapter, I’ll go over why technical SEO is still extremely important in 2022, and how you can use it to improve your rankings. I’ll also show you what is (and is not) considered “technical SEO.”

Let’s get started.

What Is Technical Search Engine Optimization?


Technical SEO is the process of ensuring that a website meets the technical requirements of modern search engines, with the goal of higher organic rankings. Important elements of technical SEO include crawling, indexing, rendering, and website architecture.

Why Is Technical SEO Important?

You could have the best website with the best content.
But if your technical SEO is messed up?
You’re not going to rank.
At the most basic level, Google and other search engines need to be able to find, crawl, render, and index the pages on your website.

But that’s just scratching the surface. Even if Google does index all of your site’s content, your job isn’t done.
That’s because, for your site to be fully optimised for technical SEO, your site’s pages need to be secure, mobile-friendly, free of duplicate content, fast-loading… and a thousand other things that go into technical optimisation.

That is not to say that your technical SEO must be flawless in order to rank well. It doesn’t work like that.
However, the more accessible your content is to Google, the better chance you have of ranking high in search results.

What Can You Do to Boost Your Technical Search Engine Optimization?

As previously stated, “technical SEO” encompasses more than just crawling and indexing.
In order to improve the technical optimization of your website, you must consider the following factors:

  • JavaScript
  • XML sitemaps
  • Website architecture
  • URL structure
  • Structured data
  • Thin content
  • Duplicate content
  • Hreflang
  • Canonical tags
  • 404 pages
  • 301 redirects

And I’m sure I’m forgetting a few things.
Fortunately, I’ll go over all of these topics (and more) in greater detail throughout the rest of this guide.

CHAPTER 2

Site Structure and Navigation
In my opinion, the structure of your website is the “first step” in any technical SEO campaign.
(Yes, it is even more important than crawling and indexing.)

Why?

  • For starters, a messy site structure is a common cause of crawling and indexing issues. Get this step right, and you won’t need to worry about Google finding and indexing all of the pages on your website.
  • Second, your site structure has an impact on everything else you do to optimise your site, from URLs to your sitemap to the use of robots.txt to prevent search engines from indexing certain pages on your website.

To summarise, having a strong structure makes every other technical SEO task MUCH easier to complete successfully.
With that said, let’s get into the specifics of the procedure.

Use a site structure that is flat and well-organized

Your site structure is the way all of the pages on your website are organised.
In general, you want a “flat” structure. In other words, the pages on your site should all be only a few clicks away from one another.

What is the significance of this?

A flat structure makes it easier for Google and other search engines to crawl every page on your site.
This isn’t a big deal for a blog or a local pizza shop website. But for an ecommerce site with 250k product pages? A flat architecture is a big deal.
In addition, you want your structure to be extremely well organised.

In other words, you don’t want a messy, jumbled site structure with pages scattered all over the place.
“Orphan pages” are frequently created as a result of this jumbled structure (pages without any internal links pointing to them).
It also makes it more difficult to identify and correct indexing problems.

If you want a bird’s-eye view of your site structure, you can use the “Site Audit” feature in Ahrefs.
It’s extremely helpful, but not particularly visual.
For a more visual picture of how your pages are connected to one another, try Visual Site Mapper: a free tool that gives you an interactive look at your site’s architecture.

 


Use a Consistent URL Structure

There is no need to overthink the structure of your URLs. This is especially true if you operate a small website (like a blog).
That said, you do want your URLs to follow a consistent, logical structure. This actually helps users understand “where” they are on your site.

Additionally, categorising your pages under different categories provides Google with additional context about each page within each category.
For example, the pages in our SEO Marketing Hub all use the “/hub/seo” subfolder, which tells Google that they’re all part of the “SEO Marketing Hub” category.

This seems to be working. If you search for “SEO Marketing Hub,” you’ll notice that Google shows sitelinks for it in the search results.
As you might expect, all of the pages that are linked to from these sitelinks are located within the hub as well.
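Pulling this together, a consistent, category-based URL structure might look something like this (the domain and paths are hypothetical, just to illustrate the idea):

    https://example.com/hats/
    https://example.com/hats/cowboy-hats/
    https://example.com/hats/straw-hats/
    https://example.com/blog/
    https://example.com/blog/how-to-size-a-hat/

Every page sits in a predictable subfolder, so users and Google can tell at a glance which category it belongs to.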

Breadcrumbs Navigation

Breadcrumbs navigation is widely recognised as an SEO-friendly site feature.

That’s because breadcrumbs automatically add internal links to the categories and subpages on your site, which helps solidify your site’s architecture.
Plus, Google has turned URLs into breadcrumb-style navigation in the SERPs.
As a result, I recommend that you use breadcrumbs navigation when it makes sense.
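If you do add breadcrumbs, you can also describe the trail to Google with BreadcrumbList structured data. Here’s a minimal sketch; the page names and URLs are made up for illustration:

    <!-- Minimal BreadcrumbList markup (hypothetical names and URLs) -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "BreadcrumbList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
        { "@type": "ListItem", "position": 2, "name": "Hats", "item": "https://example.com/hats/" },
        { "@type": "ListItem", "position": 3, "name": "Cowboy Hats", "item": "https://example.com/hats/cowboy-hats/" }
      ]
    }
    </script>

This markup simply mirrors the breadcrumb links that are already visible on the page, so Google can display the same trail in the SERPs.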

CHAPTER 3

Crawling, Rendering, and Indexing
In this chapter, you’ll learn how to make it incredibly simple for search engines to find and index your entire website.

You’ll also learn how to identify and fix crawl errors… and how to direct search engine spiders to the most important pages on your website.

Spot Indexing Problems

The first step is to identify any pages on your site that are having difficulty being crawled by search engine spiders.

Here are three different approaches to accomplishing this.

Coverage Report

The first place to look is the “Coverage Report” in Google Search Console.
This report informs you if Google is unable to fully index or render pages that you wish to be indexed due to technical difficulties.

Screaming Frog

Screaming Frog is the most well-known crawler in the world for a good reason: it’s extremely effective.
Following the resolution of any issues identified in the Coverage Report, I recommend performing a full crawl with Screaming Frog.

Ahrefs Site Audit

Ahrefs has a sneakily good SEO site audit tool that you should check out.
What I like most about this feature is that it gives you data on the overall technical SEO health of your site…
…your overall site performance in terms of page loading speed…
…and any issues with the HTML tags on your pages.

Each of these three tools has its own set of strengths and weaknesses. So if you have a large site with more than 10,000 pages, I recommend using all three approaches. That way, nothing falls through the cracks.

Internal Link to “Deep” Pages

The vast majority of people have no trouble getting their homepage indexed.
Deep pages (pages that are several links away from the homepage) are the ones that are most likely to cause issues.

A flat architecture, on the other hand, usually prevents this problem from occurring in the first place. After all, your “deepest” page will only be three or four clicks away from your homepage when you use this method.

Either way, if there’s a specific deep page (or set of pages) that you want indexed, nothing beats a good old-fashioned internal link to that page.

That’s especially true if the page you’re linking from is highly authoritative and gets crawled frequently.

Use an XML Sitemap

Is an XML sitemap still required for Google to find the URLs on your website in this day and age of mobile-first indexing and AMP?

Yup.
In fact, a Google representative recently stated that XML sitemaps are the “second most important source” for finding URLs.
(They didn’t say what the most important source is, but I assume it’s external and internal links.)
You can use the “Sitemaps” feature in Search Console to double-check that your sitemap is up to date and complete.
This shows you the sitemap that Google currently sees for your website.
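For reference, a bare-bones XML sitemap is just a list of the URLs you want crawled, plus optional metadata like a last-modified date. Here’s a minimal sketch (the URLs are placeholders; most CMSs and SEO plugins will generate this file for you automatically):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal sitemap.xml sketch with hypothetical URLs -->
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/</loc>
        <lastmod>2022-01-15</lastmod>
      </url>
      <url>
        <loc>https://example.com/hats/cowboy-hats/</loc>
        <lastmod>2022-01-10</lastmod>
      </url>
    </urlset>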

GSC “Inspect”

Is there a URL on your website that is not being indexed?

The Inspect feature in GSC can help you get to the bottom of it.
It will tell you why a particular page isn’t being indexed.
And for pages that are indexed, you can see how Google renders the page.
That way, you can double-check that Google is able to crawl and index all of the content on that page.

CHAPTER 4

Thin and Duplicate Content

If you write unique, original content for each and every page on your site, you shouldn’t have to worry about duplicate content.

Having said that:

  • Duplicate content can sneak onto your site even if you never copy anything… especially if your CMS creates multiple versions of the same page on different URLs.
  • It’s the same story with thin content: it isn’t a problem for most sites, but it can hurt your site’s overall rankings, so it’s worth finding and fixing.

And in this chapter, I’ll show you how to prevent duplicate and thin content issues on your website before they become a problem.

Find Duplicate Content with an SEO Audit Tool

Finding duplicate and thin content is a difficult task, but there are two tools that are extremely effective.

These tools are specifically designed to detect duplicate content on your own website.
That said, “duplicate content” also includes pages that copy content from other websites.
I recommend using Copyscape’s “Batch Search” feature to double-check that the content on your website is original.
It lets you upload a list of URLs and see where that content appears elsewhere on the web.

If you find a chunk of your content on another website, search Google for a snippet of that text wrapped in quotation marks.

If Google shows your page as the first result, it considers you the original author of that content.

After that, you’re good to go.

Keep in mind that if other people copy your content and post it on their site, the duplicate content problem is theirs, not yours. The only thing you need to worry about is content on your own site that’s copied from (or extremely similar to) content on other websites.

Noindex Pages Without Unique Content

The majority of websites will have pages that contain some duplicate content.
And that’s usually fine. It only becomes a problem when those duplicate content pages get indexed.

What is the solution?

Add the “noindex” tag to those pages.
The noindex tag tells Google and other search engines not to include the page in their index.
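In practice, that’s usually a single meta tag in the page’s <head>:

    <!-- Tells Google and other search engines not to index this page -->
    <meta name="robots" content="noindex">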

If you’re using GSC, you can double-check that your noindex tag is set up correctly with the “Inspect URL” feature.
Enter your URL and click “Test Live URL.”
If Google is still indexing the page, you’ll see a “URL is available to Google” message, which means your noindex tag isn’t set up properly.

However, if you see an “Excluded by ‘noindex’ tag” message, the noindex tag is doing its job.
(This is one of the few instances in which it is desirable to see a red error message in the GSC.)
It may take several days or weeks for Google to re-crawl the pages you don’t want indexed, depending on your crawl budget.
As a result, I recommend that you check the “Excluded” tab in the Coverage report to ensure that your noindexed pages are being removed from the search index.

For example, some posts here at Gordon Melling have paginated comments, and every comment page links back to the original post.
If Google indexed those comment pages, we’d have a slew of duplicate content problems on our hands. So we add a noindex tag to every single one of them.
Keep in mind that you can also stop search engine spiders from crawling those pages altogether by blocking their crawlers in your robots.txt file.
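As a rough sketch, a robots.txt rule along those lines might look like this (the path is made up for illustration; Disallow matches any URL path that starts with the value you give it):

    # Hypothetical robots.txt: keep all crawlers away from the paginated comment URLs
    User-agent: *
    Disallow: /example-post/comment-page-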

Use Canonical URLs

For most pages with duplicate content, I recommend adding the noindex tag or replacing the duplicate content with unique content. But there’s a third option: canonical URLs.

Canonical URLs are ideal for pages with very similar content… with only minor differences between the pages and their URLs.

Consider the following scenario: you run an ecommerce site that sells hats.
And you have a product page dedicated to cowboy hats.
Depending on how your site is set up, every size, colour, and variation can end up with its own URL. That’s not good.
Fortunately, you can use the canonical tag to tell Google that the “vanilla” version of your product page is the main version… and that all of the others are variations of it.
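On each variation page, that boils down to a single link element in the <head> pointing back at the main product URL. A minimal sketch with hypothetical URLs:

    <!-- Placed in the <head> of every size/colour variation page -->
    <!-- Tells Google that the main cowboy hat page is the canonical version -->
    <link rel="canonical" href="https://example.com/hats/cowboy-hat/">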

CHAPTER 5

PageSpeed

Pagespeed optimisation is one of the few technical SEO strategies that can directly impact your site’s rankings.
That’s not to say that a fast-loading website will automatically propel you to the top of Google’s first page. (You’ll need backlinks for that.)
However, increasing the speed with which your site loads can have a significant impact on your organic traffic.

And in this chapter, I’ll show you three simple ways to improve your site’s loading speed.

1. Reduce the Size of Your Web Pages

CDNs. Caching. Lazy loading. Minifying CSS.
I’m sure you’ve heard or read about these strategies a thousand times already.
But there’s one pagespeed factor that’s just as important and doesn’t get discussed nearly as much: total page size.

In fact, when we conducted our large-scale pagespeed study, we discovered that the total size of a page was more closely associated with load times than any other factor.
The key takeaway from this is as follows:

  • There’s no such thing as a free lunch when it comes to pagespeed optimisation.
  • You can compress images and cache your site all you want.

But if your pages are huge, they’re going to take a while to load. This is something we struggle with at Gordon Melling: because we use a lot of high-resolution images, our pages tend to be extremely large. It’s a conscious trade-off. Given the choice, I’d rather have a slow, awesome-looking page than a fast page with grainy images.
This has a negative impact on our Google PageSpeed Insights scores.
If, on the other hand, increasing the speed of your site is a top priority, you should do everything you can to reduce the overall size of your page.
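If you do go down that road, images are usually the biggest lever. As a rough sketch (the file names and dimensions are made up), serving smaller files to smaller screens and lazy-loading offscreen images both cut total page weight:

    <!-- Hypothetical example: responsive image sizes plus native lazy loading -->
    <img
      src="cowboy-hat-800.jpg"
      srcset="cowboy-hat-400.jpg 400w, cowboy-hat-800.jpg 800w, cowboy-hat-1600.jpg 1600w"
      sizes="(max-width: 600px) 400px, 800px"
      width="800" height="600"
      loading="lazy"
      alt="Cowboy hat product photo">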

2. Test Load Times With and Without a CDN

One of the most unexpected findings from our pagespeed study was that content delivery networks (CDNs) were associated with slower load times.
This is most likely due to the fact that many CDNs are not configured correctly.
So if your site uses a content delivery network (CDN), I recommend testing your site’s speed on webpagetest.org with the CDN turned on and off.

3. Eliminate Third-Party Scripts

Each third-party script that is included on a page increases the load time by an average of 34 milliseconds.

Some of these scripts (such as Google Analytics) are almost certainly required.
However, it never hurts to take a look at your website’s scripts to see if there are any that you can get rid of or deactivate.
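And for scripts you can’t remove outright, deferring them is a common middle ground (that’s a standard HTML option, not something specific to this guide). A quick sketch with made-up script URLs:

    <!-- Hypothetical third-party widget: deleted (or commented out) if nobody uses it -->
    <!-- <script src="https://widgets.example.com/chat.js"></script> -->

    <!-- A script that has to stay: defer it so it doesn't block page rendering -->
    <script src="https://analytics.example.com/tracker.js" defer></script>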

CHAPTER 6

Additional Technical SEO Suggestions

Now it’s time to go over some quick technical SEO tips with you.
Redirects, structured data, Hreflang, and other topics will be covered in this chapter.

Technical SEO Tips