Gary Illyes, Analyst at Google, has highlighted a major issue for crawlers: URL parameters.
During a recent episode of Google’s Search Off The Record podcast, Illyes explained how parameters can create endless URLs for a single page, causing crawl inefficiencies.
Illyes covered the technical aspects, SEO impact, and potential solutions. He also discussed Google’s past approaches and hinted at future fixes.
This info is especially relevant for large or e-commerce sites.
The Infinite URL Problem
Illyes explained that URL parameters can create what amounts to an infinite number of URLs for a single page.
He explained:
“Technically, you can add that in one almost infinite–well, de facto infinite–number of parameters to any URL, and the server will just ignore those that don’t alter the response.”
This creates a problem for search engine crawlers.
While these variations might lead to the same content, crawlers can’t know this without visiting each URL. This can lead to inefficient use of crawl resources and indexing issues.
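A crawler (or a site's own tooling) can only collapse such variations if it knows which parameters are inert. A minimal sketch in Python, assuming a hypothetical list of tracking parameters known not to alter the server's response:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical set of parameters that don't change the page content.
# Which parameters are inert varies per site; these names are assumptions.
IGNORED_PARAMS = {"utm_source", "utm_medium", "ref", "sessionid"}

def normalize(url: str) -> str:
    """Strip known-inert parameters and sort the rest, so parameter
    variations of the same page collapse to one URL."""
    parts = urlparse(url)
    kept = sorted(
        (k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS
    )
    return urlunparse(parts._replace(query=urlencode(kept)))

urls = [
    "https://example.com/shoes?color=red&utm_source=news",
    "https://example.com/shoes?ref=home&color=red",
    "https://example.com/shoes?color=red",
]
# All three variations collapse to the same normalized form.
print({normalize(u) for u in urls})
```

The hard part, as Illyes notes, is that only the site owner knows which parameters actually alter the response; a crawler would have to fetch the URLs to find out.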
E-commerce Sites Most Affected
The problem is prevalent among e-commerce websites, which often use URL parameters to track, filter, and sort products.
For instance, a single product page might have multiple URL variations for different color options, sizes, or referral sources.
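For illustration, a single hypothetical product page might be reachable at all of these addresses (the domain and parameter names are invented for the example):

```
https://example.com/product/shoe-123
https://example.com/product/shoe-123?color=blue
https://example.com/product/shoe-123?color=blue&size=10
https://example.com/product/shoe-123?ref=homepage_banner
https://example.com/product/shoe-123?sessionid=abc&color=blue
```

Each of these may return the same, or nearly the same, page, but a crawler has to fetch them all to find out.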
Illyes pointed out:
“Because you can just add URL parameters to it… it also means that when you are crawling, and crawling in the proper sense like ‘following links,’ then everything– everything becomes much more complicated.”
Historical Context
Google has grappled with this issue for years. In the past, Google offered a URL Parameters tool in Search Console to help webmasters indicate which parameters were important and which could be ignored.
However, this tool was deprecated in 2022, leaving some SEOs concerned about how to manage this issue.
Potential Solutions
While Illyes didn’t offer a definitive solution, he hinted at potential approaches:
- Google is exploring ways to handle URL parameters, potentially by developing algorithms to identify redundant URLs.
- Illyes suggested that clearer communication from website owners about their URL structure could help. “We could just tell them that, ‘Okay, use this method to block that URL space,’” he noted.
- Illyes mentioned that robots.txt files could potentially be used more to guide crawlers. “With robots.txt, it’s surprisingly flexible what you can do with it,” he said.
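For example, a site could use pattern rules in robots.txt to keep crawlers out of a parameterized URL space. A minimal sketch, assuming the site wants to block URLs carrying session or sort parameters (the parameter names are illustrative):

```
User-agent: *
# Block any URL whose query string contains these parameters
Disallow: /*sessionid=
Disallow: /*?sort=
```

Note that Disallow prevents crawling, not indexing: a blocked URL can still appear in search results if other pages link to it.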
Implications For SEO
This discussion has several implications for SEO:
- Crawl Budget: For large sites, managing URL parameters can help conserve crawl budget, ensuring that important pages are crawled and indexed.
- Site Architecture: Developers may need to reconsider how they structure URLs, particularly for large e-commerce sites with numerous product variations.
- Faceted Navigation: E-commerce sites using faceted navigation should be mindful of how this impacts URL structure and crawlability.
- Canonical Tags: Using canonical tags can help Google understand which URL version should be considered primary.
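As an illustration of the canonical-tag approach, a parameterized variation of a page can point back to its primary URL (the URL here is invented for the example):

```
<!-- Served on https://example.com/shoes?color=red&ref=home -->
<link rel="canonical" href="https://example.com/shoes" />
```

Google treats rel="canonical" as a strong hint rather than a directive, so it works best alongside consistent internal linking to the primary URL.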
In Summary
URL parameter handling remains tricky for search engines.
Google is working on it, but you should still monitor URL structures and use tools to guide crawlers.
Hear the full discussion in the podcast episode below: