__/ [www.1-script.com] on Thursday 13 October 2005 17:31 \__
> Roy Schestowitz wrote:
>> Quite frankly, I only use it for falsified ranks
> Like so many other people do, which makes the whole concept of the Alexa
> rank pretty much useless. Hey, I got an idea: you could add your site's
> Alexa rank to your list of useless facts in the sig ;-)
"Useless facts" in my signature are often both meaningful and true. Alexa
ranks are utter garbage. Five co-workers with the Alexa toolbar in a
consulting firm and... voila! Ranked 30,000th in the world. I think not!
*smile*
>> I do, however, use the Netcraft
>> toolbar quite heavily (i.e. roll my eyes towards it). It is spying
> Not familiar, gotta take a look. Is it Netcraft for Firefox you are
> talking about?
Yes. Once you get used to it, that becomes the first thing you look at when
you reach a page. A picture is worth a thousand words:
> Got your Iuron.com link up on my homepage. Needed new image alt tags
> (this is a search engines group, after all ;-) in the link. Check it out;
> let me know if you OK it.
Wow!! Thanks for that. I can add you to my blogroll (~400 pages ranging
from PR0-5). Let me know your desired anchor text...
> Speaking of Iuron: the concept diagram is a bit confusing. This is a
> knowledge engine, right? So why aren't some of the facts (28.3 deg.
> elongation or 40% mapped) harvested from the page?
I imposed a limit on the scale to keep the diagram simple. As I reflect, I
always find flaws and things I wish to change... but I probably never will.
> I'm assuming the underscored
> ones are the ones that make it into the "fact storage (base)".
No, these are the Wikipedia links, from which I nicked the screenshot.
> Is it
> interactive, like whatever gets asked about, is stored? In this case it
> would need to be re-indexed upon every request. I'm sure this is not
> something you'd envisioned...
I think there is a misconception here; the draft proposal is needed as a
verbal complement. Facts are learned off-line by fetching plenty of data
from the Net. Facts that repeat themselves are encouraged, whereas negations
or rare facts are discouraged. It's a genetic algorithms/machine learning
approach.
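The reinforcement idea (repeated facts encouraged, negated facts
discouraged) could be caricatured in a few lines of Python. The function
name, weights, and threshold below are all my own illustration, not
anything from the actual Iuron design:

```python
from collections import defaultdict

def score_facts(observations):
    """Toy off-line fact scoring: observations is a list of
    (fact, is_negation) pairs harvested from fetched pages."""
    scores = defaultdict(float)
    for fact, is_negation in observations:
        if is_negation:
            scores[fact] -= 1.0   # a negation discourages the fact
        else:
            scores[fact] += 1.0   # a repetition encourages it
    # keep only facts with enough positive support (threshold is arbitrary)
    return {f: s for f, s in scores.items() if s >= 2.0}

obs = [
    ("mercury max elongation 28.3 deg", False),
    ("mercury max elongation 28.3 deg", False),
    ("mercury max elongation 28.3 deg", False),
    ("moon is made of cheese", False),
    ("moon is made of cheese", True),
]
print(score_facts(obs))  # only the well-supported fact survives
```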
An interesting aspect that you led me to think about is learning from user
queries and responses. I suspect that existing search engines are doing this
already (they ought to have figured it out by now). You can monitor which
pages are followed from the SERPs to understand expectations and re-shuffle
the SERPs (indices) in accordance with the users' selection of results.
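In toy form, that click-driven re-shuffling is just a re-sort by observed
selections; the URLs and counts here are invented, and a real engine would
obviously blend many more signals than raw clicks:

```python
def rerank(serp, click_counts):
    """Stable re-sort of a result list by observed click counts;
    unclicked results keep their original relative order."""
    return sorted(serp, key=lambda url: -click_counts.get(url, 0))

serp = ["a.example", "b.example", "c.example"]
clicks = {"c.example": 9, "b.example": 4}
print(rerank(serp, clicks))
```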
Another implication that I am inclined to ponder: Google have so many users
wandering around their SERPs. It would be very easy for them to automate
re-ordering of SERPs based on the behaviour of that vast number of users,
some of whom will run rare and obscure search strings. This essentially
means that they have an element of momentum that money cannot buy. Even if
Yahoo bought more machines for crawling, that would not give them as much
information as becomes available from users (spying). Advertising likewise.
likewise. Why do you think the NYT wants you to register to read articles?
Many newspapers do the same these days. They want to bind a name to the IP
address or have a cookie that binds an address, occupation, etc. to
readers. Learn your readers and you will know how to serve them. Tailored
(on-the-fly) content comes to mind. Imagine a front page that is based on
articles you have read. Amazon/Alexa/A9 do the same with books, but
WORSE... they filed a patent for that immensely ingenious, unprecedented
idea.
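A toy version of such a tailored front page: order the sections by the
topics of articles the reader has already opened (tracked via a cookie,
say). The section names and history format are made up for illustration:

```python
from collections import Counter

def personalise(sections, reading_history):
    """Sort front-page sections by how often the reader has opened
    articles in each topic; untouched topics sink to the bottom."""
    topic_counts = Counter(reading_history)
    return sorted(sections, key=lambda s: -topic_counts[s])

sections = ["politics", "technology", "sport"]
history = ["technology", "technology", "sport"]
print(personalise(sections, history))
```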
As for learning from users, this reminds me of:
Roy S. Schestowitz | UNIX: Because a PC is a terrible thing to waste
http://Schestowitz.com | SuSE Linux | PGP-Key: 74572E8E
5:30pm up 49 days 5:44, 3 users, load average: 0.44, 0.53, 0.63
http://iuron.com - next generation of search paradigms