__/ [Big Bill] on Friday 28 October 2005 20:46 \__
> On Fri, 28 Oct 2005 14:42:28 +0100, Roy Schestowitz
> <newsgroups@xxxxxxxxxxxxxxx> wrote:
>>__/ [John Bokma] on Friday 28 October 2005 14:33 \__
>>> Roy Schestowitz <newsgroups@xxxxxxxxxxxxxxx> wrote:
>>>> first things to crop in my mind. You see, search engine algorithms are
>>>> not mission-critical, so they are likely to be poorly tested.
>>> Are you serious?
>>Actually, yes. Let's think about it...
>>Search engine engineers write code which will analyse millions of sites.
>>They may also embed some junk code in the trunk, for whatever
>>reason. When the refined algorithm is finally ready for 'prime time'
>>(e.g. Bourbon), would it make much difference if debugging information were
>>included in the compilation and resulted in a 1% slowdown? Would it have
>>just a slight effect on performance, or would it unleash the thunder of
>>death upon the search engine?
>>In search engines, there are no right and wrong answers. There are many
>>pointers and their ordering (relevance) is a 'fluffy' art. It doesn't make
>>much difference if one domain among 80 million gets 8,000 links. It's
>>peanuts. It's affordable. There are bigger issues to address, but
>>nonetheless such mistakes give the SE a bad name and are embarrassing.
>>They should be high enough up the agenda.
>> This makes you wonder if there are 'test set' Web sites that SEs are
>>using to test their spiders on. Under such circumstances, there is unfair
>>or unbalanced treatment of the World Wide Web.
> I always assumed they had a mini-web set up they could run test algos
I guess they could use their cache, which may be out of date, but nonetheless
serves as 'good enough' data to test premises on.
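To make the cached-snapshot idea concrete, here's a toy sketch (the domains, numbers and scoring formulas are all invented, not anything a real SE uses) of regression-testing a ranking change against a frozen crawl, including the '8,000 links to one domain' pathology mentioned above:

```python
import math

# Hypothetical frozen snapshot of crawled data (the 'cache'):
# page -> inbound link count and term frequency for some query term.
CACHED_SNAPSHOT = {
    "example.com":     {"inlinks": 120,  "tf": 3},
    "schestowitz.com": {"inlinks": 45,   "tf": 7},
    "spam-farm.net":   {"inlinks": 8000, "tf": 1},  # one domain with 8,000 links
}

def rank_v1(pages):
    """Old algorithm: order purely by raw inbound-link count."""
    return sorted(pages, key=lambda p: pages[p]["inlinks"], reverse=True)

def rank_v2(pages):
    """Refined algorithm: damp link counts logarithmically so a single
    heavily-linked domain cannot swamp relevance."""
    return sorted(pages,
                  key=lambda p: math.log1p(pages[p]["inlinks"]) * pages[p]["tf"],
                  reverse=True)

# Run both versions against the same cached snapshot and compare rankings.
before = rank_v1(CACHED_SNAPSHOT)
after = rank_v2(CACHED_SNAPSHOT)
print("v1:", before)  # the 8,000-link domain comes out on top
print("v2:", after)   # damping pushes it to the bottom
```

The point is only that a frozen snapshot lets you diff the two orderings deterministically, which a live, ever-changing Web doesn't.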
Roy S. Schestowitz | Useless fact: 111111 X 111111 = 12345654321
http://Schestowitz.com | SuSE Linux | PGP-Key: 74572E8E
2:40am up 64 days 11:55, 5 users, load average: 0.97, 0.61, 0.50
http://iuron.com - next generation of search paradigms