Rather than "search engines" we ought to be talking about "knowledge engines", or at least that is my contention. Googlism is worth citing as an initiative that took a similar approach, although it remained static and was bound to a search engine that indexed rather than learned. Googlism used indexing as a 'bridge' to the formation of knowledge. Much of the data was already lost or diluted by the time it reached the underlying learning method.
There is currently no barrier to implementing large-scale knowledge bases apart from computing power, bandwidth (for data quantity) and the volume of code with intricate dependencies. With Open Source software (making use of public and Open sites, e.g. Freshmeat and SourceForge), prototypes ought to be achievable within a realistic timescale.
Imagine a neural network out there which, rather than merely containing text that mentions your name, holds complex knowledge about who you are. Moreover, it can answer questions that involve you and may eventually become too complex to be explored or administered by a human. To many bodies, including governments, this would be invaluable. Real incentive and desire are certainly there, yet privacy can be jeopardised, as always (cf. §B). Iuron aspires to be a beginning - a seminal manifesto - of knowledge engines as opposed to search engines.
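The distinction above can be made concrete with a toy sketch. It is not how Iuron is implemented; all names, facts and data structures below are hypothetical illustrations. A search engine keeps an inverted index that can only locate text mentioning a name, whereas a knowledge engine stores facts and answers questions directly:

```python
# Hypothetical contrast between indexing (search engine) and
# stored knowledge (knowledge engine). All data is illustrative.

documents = [
    "Alice gave a talk at the conference.",
    "The weather in Manchester was mild.",
]

# Search-engine view: an inverted index mapping words to the
# documents that contain them.
index = {}
for doc_id, text in enumerate(documents):
    for word in text.lower().strip(".").split():
        index.setdefault(word, set()).add(doc_id)

# Knowledge-engine view: facts stored as (subject, relation) -> object
# pairs, so questions can be answered rather than documents retrieved.
facts = {
    ("alice", "spoke_at"): "the conference",
    ("alice", "works_in"): "Manchester",
}

def search(word):
    """Return the documents that merely mention a word."""
    return [documents[i] for i in sorted(index.get(word.lower(), []))]

def ask(subject, relation):
    """Answer a question from stored facts, or None if unknown."""
    return facts.get((subject.lower(), relation))

print(search("alice"))           # documents mentioning the name
print(ask("Alice", "spoke_at"))  # a direct answer, not a document
```

The point of the sketch is the second half: once knowledge is represented as facts rather than indexed text, the engine can respond to a question about a person instead of pointing at pages where the name happens to occur.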