Why is a page not indexed by Google?
To determine whether a page is indexed, type the site: command followed by the URL of the page into the search engine's query bar.
The page may be present but buried deep in the rankings, or completely absent. This can have several causes...
- The HTML code may be invalid and therefore not recognized by the robots' parsers...
Check the syntax with the W3C validator.
- If the page is new, it takes several days or weeks for it to be taken into account.
- The page is not known to search engines because no link allows them to access it. Verify that it is reachable from another page.
- Indexing is prohibited by a meta-tag inside the page.
<meta name="robots" content="none">
<meta name="robots" content="noindex">
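A quick way to test for these tags is to scan the page's HTML for a robots meta directive. The sketch below does this with Python's standard library parser; the helper name is my own, and in practice you would fetch the HTML with an HTTP client first.

```python
# Sketch: detect a "noindex" (or "none") robots meta tag in HTML.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the directives of any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "robots":
                self.directives.extend(
                    d.strip().lower()
                    for d in attrs.get("content", "").split(","))

def is_indexable(html):
    parser = RobotsMetaParser()
    parser.feed(html)
    # "none" is shorthand for "noindex, nofollow".
    return not ({"noindex", "none"} & set(parser.directives))

print(is_indexable('<meta name="robots" content="noindex">'))  # False
print(is_indexable('<html><head><title>ok</title></head></html>'))  # True
```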
- A canonical tag (<link rel="canonical" href="..."> in the head of the page) indicates that this page's content is located at another address, so search engines index that address instead.
- The robots.txt file prohibits crawlers from accessing the page, so it cannot be indexed.
It may take the form:
User-Agent: *
Disallow: /dir/filename.html
- It is also possible that Google or another search engine decides not to index your website because the robots.txt file is malformed. See what a robots.txt file is.
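You can check whether such a rule actually blocks a given page with Python's standard robots.txt parser; the rules and URLs below are illustrative.

```python
# Sketch: test whether a robots.txt rule blocks a crawler from a URL.
from urllib.robotparser import RobotFileParser

rules = """
User-Agent: *
Disallow: /dir/filename.html
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A URL matched by a Disallow rule cannot be crawled, so crawlers
# will never index it from a crawl of the site.
print(rp.can_fetch("*", "https://example.com/dir/filename.html"))  # False
print(rp.can_fetch("*", "https://example.com/other.html"))         # True
```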
How to get a page indexed
The only way is to have a link to this page from another page that is already indexed. This may be an internal link or a link from another site.
It may also be a link in a sitemap.
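A sitemap is just an XML file listing the URLs of the site. As a minimal sketch, one can be generated with the standard library; the URLs and the function name are placeholders, and a real sitemap would be uploaded to the site root and declared to search engines.

```python
# Sketch: build a minimal XML sitemap with the standard library.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    # <urlset> is the root element; each URL gets a <url><loc> entry.
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

print(build_sitemap(["https://example.com/", "https://example.com/page.html"]))
```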
If a site is not indexed as a whole, the first thing to check is whether the robots.txt file blocks robots. All robots are blocked by a directive of the form Disallow: /
They are not blocked if nothing follows Disallow:
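The difference between the two forms can be verified with the standard robots.txt parser; the hostnames are placeholders.

```python
# Sketch: contrast "Disallow: /" (blocks everything) with an empty
# "Disallow:" (blocks nothing) using the standard library parser.
from urllib.robotparser import RobotFileParser

def allowed(rules, url, agent="*"):
    rp = RobotFileParser()
    rp.parse(rules.splitlines())
    return rp.can_fetch(agent, url)

blocking = "User-Agent: *\nDisallow: /"
open_rules = "User-Agent: *\nDisallow:"

print(allowed(blocking, "https://example.com/any/page.html"))    # False
print(allowed(open_rules, "https://example.com/any/page.html"))  # True
```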
- How to exit the sandbox. When a page is penalized, it remains in the index, but far down in the result pages.
- How to build a sitemap and let search engines know about it.
- Then see the list of errors not to commit in SEO. If your site is in none of these cases, you must wait until it is inserted into the index again.