I am hoping that someone in this group can help me out. For the past few
months I have been spotting errors for odd variations of the filename
robots.txt (among others).
Putting mistaken bots aside, there is maybe one error for every ~100
visits, so I still check the error logs very frequently (trying to
identify internal broken links), but I sometimes get unexplained errors,
e.g. so far this month:
/robots1.txt: requested 8 times
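
For reference, I tally these variants with something like the following
rough Python sketch (the log path and the Apache combined log format are
assumptions; adjust for your server):

    import re
    from collections import Counter

    LOG_PATH = "access.log"  # hypothetical path to the access log

    # Pull the request path out of a combined-log line: ... "GET /path HTTP/1.1" ...
    request_re = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[\d.]+"')

    counts = Counter()
    with open(LOG_PATH) as log:
        for line in log:
            m = request_re.search(line)
            # Tally any robots.txt-like path that is not the real /robots.txt
            if m and "robots" in m.group(1).lower() and m.group(1) != "/robots.txt":
                counts[m.group(1)] += 1

    for path, n in counts.most_common():
        print(path + ": " + str(n))
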
The rest might be human errors:
Is it possible that some crawlers have 'extended' the robots.txt protocol in this way?
Even /sitemap.rdf has been requested twice, even though I haven't signed up
with Google Sitemaps. Could all of the above just be visitors tampering
with the server? The requests seem to come from hostnames rather than bare
IP addresses, but the domains are obscure.
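
Would a forward-confirmed reverse DNS check settle whether these hosts are
real crawlers? Something like this minimal Python sketch is what I had in
mind (the IP below is just a placeholder; substitute one from the log):

    import socket

    # Forward-confirmed reverse DNS: reverse-resolve the IP, then check that
    # the resulting hostname resolves back to the same IP. Genuine crawlers
    # (e.g. Googlebot) pass this round trip; spoofed ones usually do not.
    def verify_host(ip):
        try:
            host = socket.gethostbyaddr(ip)[0]              # IP -> hostname
            forward_ips = socket.gethostbyname_ex(host)[2]  # hostname -> IPs
        except (socket.herror, socket.gaierror):
            return ip + ": no consistent DNS entry"
        status = "verified" if ip in forward_ips else "MISMATCH"
        return ip + " -> " + host + " (" + status + ")"

    print(verify_host("192.0.2.15"))  # placeholder address
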
Many thanks in advance,