Let us think of the Internet as a collection of complex, inter-related information. Taken as a whole, it embodies an immense number of hypotheses and can thus contain valid, consistent knowledge. Although we can process (scan) all of this information, higher-level knowledge, which is derived from collections of pages, is still missing. There is enough knowledge across the World Wide Web to answer more or less any question, assuming it is not subjective. Yet all that is done at present is word indexing with some notion of word proximity.
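To make this limitation concrete, below is a minimal sketch (in Python, over a made-up two-document corpus) of what ``word indexing with proximity'' amounts to: a positional inverted index that records where each word occurs, so a query can favour documents whose terms appear close together. This is illustrative only, not any particular engine's implementation; note how purely lexical it is: positions are stored, meaning is not.

\begin{verbatim}
from collections import defaultdict

def build_positional_index(docs):
    # word -> {doc_id: [positions]}; pure bookkeeping, no semantics
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, text in docs.items():
        for pos, word in enumerate(text.lower().split()):
            index[word][doc_id].append(pos)
    return index

def proximity(index, doc_id, term_a, term_b):
    # smallest positional gap between the two terms; None if either is absent
    gaps = [abs(p - q)
            for p in index[term_a].get(doc_id, [])
            for q in index[term_b].get(doc_id, [])]
    return min(gaps) if gaps else None

# Hypothetical corpus: the index knows word positions, not what words mean.
docs = {"d1": "the engine ranks pages by counting links",
        "d2": "counting links is how the engine ranks pages"}
index = build_positional_index(docs)
print(proximity(index, "d1", "engine", "ranks"))  # 1: adjacent, hence 'relevant'
\end{verbatim}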
Let us face the fact that among the more popular uses of search engines are searches for commercial companies, which provide products or services. Results returned by the engines sometimes correspond to the most valid and relevant authority in a given niche. This may be fine for gauging the magnitude and breadth of companies (or their Web sites), but it equally often misleads the user.
Search engines at present fail to extend beyond a potentially morbid state of ``dominance prevails''. Rather than providing users with the most reasonable answer and/or a reference to a site, they provide a Web link to whatever is most cited, typically owing to fraudulent practices or self-serving search engine optimisation.
All in all, search engines at present encourage link-related and content-related spam. In worse scenarios, their backlink-based algorithms lead to a rise in sponsored listings, whereas our natural incentive is to prefer what would ``work best for us'', not what is recommended by automated tools. These tools, which operate at a shallow level and without understanding, tend to prioritise large corporations with money to spend on prominent listings and inbound links.
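The ``dominance prevails'' behaviour follows naturally from ranking by inbound links. As an illustration, the sketch below counts backlinks over a hypothetical link graph; real engines use far more refined variants (e.g. PageRank), but the underlying weakness is the same: a page's rank reflects who links to it, not whether its content is true or useful.

\begin{verbatim}
from collections import Counter

# Hypothetical link graph: page -> pages it links to.
links = {
    "corp.example":  ["corp.example/buy"],
    "blog.example":  ["corp.example"],
    "forum.example": ["corp.example", "blog.example"],
    "spam1.example": ["corp.example"],  # paid or fraudulent inbound links
    "spam2.example": ["corp.example"],  # inflate the count just as well
}

# Naive backlink ranking: count inbound links, ignore content entirely.
backlinks = Counter(t for targets in links.values() for t in targets)
for page, count in backlinks.most_common():
    print(page, count)
# corp.example wins purely on link volume; merit never enters the picture.
\end{verbatim}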
Iuron is a project that addresses the issues above. First and foremost, it converts the vast amount of information on the World Wide Web into facts. Moreover, it serves as an impartial source of answers and is not highly susceptible to deceit, as it can distinguish true from false.
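Iuron's internals are not detailed at this point, so the following is purely an assumed sketch rather than its actual design: facts harvested from pages could, for instance, be stored as subject-predicate-object triples with provenance, and contradictory claims weighed by corroboration across independent sources rather than by link volume. All names and the simple agreement rule below are illustrative assumptions.

\begin{verbatim}
from collections import defaultdict

class FactStore:
    """Toy triple store: (subject, predicate) -> {object: sources}."""

    def __init__(self):
        self.claims = defaultdict(lambda: defaultdict(set))

    def assert_fact(self, subject, predicate, obj, source):
        # Record who claims what, keeping provenance per claim.
        self.claims[(subject, predicate)][obj].add(source)

    def answer(self, subject, predicate):
        # Return the object asserted by the most independent sources.
        candidates = self.claims.get((subject, predicate))
        if not candidates:
            return None
        return max(candidates, key=lambda obj: len(candidates[obj]))

store = FactStore()
store.assert_fact("water", "boils_at", "100C", "page-a.example")
store.assert_fact("water", "boils_at", "100C", "page-b.example")
store.assert_fact("water", "boils_at", "50C",  "spam.example")  # false claim
print(store.answer("water", "boils_at"))  # '100C': corroboration beats the lie
\end{verbatim}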