3 Common Technical SEO Problems and How to Solve Them
As SEO marketers, we all face challenges in our marketing efforts. This article goes over some of the common, and not so common, technical SEO problems and how to solve them. Chances are you will come across these issues yourself at some point.
1. Uppercase vs Lowercase URLs
This problem is most common on websites that use .NET. It stems from servers that are configured to respond to URLs containing uppercase letters without redirecting them to the lowercase version. This is becoming less of an issue, since most search engines are getting better at choosing the canonical version and ignoring duplicates. However, search engines are not perfect and should not be relied on to solve the issue for you.
How to solve:
On IIS 7 and later, the URL Rewrite module can fix this. The module's interface includes an option to enforce lowercase URLs; enabling it adds a rule to the web.config file that solves the problem.
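As a sketch, the rule the module writes into web.config looks roughly like this (the rule name is arbitrary, and this assumes the URL Rewrite module is installed):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- Match any URL containing an uppercase letter -->
      <rule name="LowercaseURLs" stopProcessing="true">
        <match url="[A-Z]" ignoreCase="false" />
        <!-- 301-redirect it to the all-lowercase version -->
        <action type="Redirect" url="{ToLower:{URL}}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

In practice you may want to add conditions excluding static assets or querystrings, so only page URLs get redirected.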
2. Multiple Versions of the Homepage
This is found most frequently on .NET sites, but it can happen on a variety of platforms. There are many reasons why this issue can occur, but the solution is straightforward.
How to solve:
Crawl your website and export the crawl to a CSV. Filter by the META column and search for your homepage title to locate duplicate versions of the homepage. Once found, add a 301 redirect from each duplicate to the correct page, or use the rel=canonical tag. Another option is to crawl the site with Screaming Frog, or a similar tool, to find the internal links that point to the duplicate page. You can then edit those links to point directly at the proper URL, rather than relying on the 301 and losing link equity through the redirect.
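For example, on IIS the 301 can be added as another rewrite rule in web.config, inside the same <rewrite><rules> section. The duplicate path /default.aspx here is only an illustration; substitute whatever duplicate URL your crawl turned up:

```xml
<!-- 301 a duplicate homepage URL (hypothetical path) to the root -->
<rule name="HomepageDuplicate301" stopProcessing="true">
  <match url="^default\.aspx$" />
  <action type="Redirect" url="/" redirectType="Permanent" />
</rule>
```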
3. Query parameters added to the end of URLs
This is most common on database-driven eCommerce sites, but it can happen on any site. Faceted filtering makes a site user friendly, but the resulting parameterized URLs are not great for search, especially when users do not search for that specific combination of keywords. Combined parameters can also use up a lot of crawl budget.
How to solve:
Decide which pages you want Google to crawl and index, based on your keyword research, by cross-referencing the database attributes with your main target keywords. After determining which attributes shoppers use to find products, work out which combinations of those words your customers will actually search for, and take the time to ensure each of those attribute pages has an SEO-friendly URL. Once you know which attributes need to be indexed, the next step depends entirely on whether the URL is already indexed or not.
If the URL is not indexed, simply add the parameter structure to your robots.txt file. If the URL is already indexed, you will need to use the rel=canonical tag as a plaster until you can get to the core of the problem: add rel=canonical to the URLs you do not want indexed, pointing to the proper relevant URL.
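As an illustration, assuming hypothetical filter parameters named colour and size, the robots.txt entries for parameter URLs that are not yet indexed might look like:

```
User-agent: *
Disallow: /*?*colour=
Disallow: /*?*size=
```

And for parameter URLs that are already indexed, the plaster is a canonical tag in the <head> of the filtered page, pointing at the clean category URL (the paths and domain here are made up):

```html
<!-- On /shoes?colour=red&size=9 -->
<link rel="canonical" href="https://www.example.com/shoes" />
```

Note that you should not apply both fixes to the same URL: if robots.txt blocks a page from being crawled, search engines cannot see the canonical tag on it.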
Solomon Thimothy is the co-founder of Clickx, a Chicago-based white label digital marketing platform. He has been in the agency space for over a decade and has helped hundreds of entrepreneurs build seven- and eight-figure agencies. He works with agency owners on a 1:1 basis to scale sales and fulfillment. Follow him on Twitter @sthimothy.