
Re: We've been duped!

"Roy Schestowitz" <newsgroups@xxxxxxxxxxxxxxx> wrote in message news:1996801.fmZO6o33aF@xxxxxxxxxxxxxxxxxx
__/ [ Stephen Fairchild ] on Tuesday 26 September 2006 23:10 \__

Roy Schestowitz wrote:

__/ [ Oliver Wong ] on Tuesday 26 September 2006 22:06 \__

"B Gruff" <bbgruff@xxxxxxxxxxx> wrote in message
news:4nsq7pFbu0fqU1@xxxxxxxxxxxxxxxxx

In Manchester, where Turing made his flawed philosophical assumption that set academic AI haring down the wrong path for forty years.

This author is the first person I've heard phrase it so strongly
that the Turing Test is the wrong approach for detecting intelligence. Most
others either agree with Turing's approach, or are unsure but have no
better suggestion. Because the author just dismissed the Turing Test
without explaining what is wrong with it, I'm not sure whether he has
an educated opinion, has completely misunderstood the test, or is just
trying to write something provocative to garner more attention.

First off, a bit of clarification: the Turing Test is basically to get people to chat with computers, and if the computer can convince them that it is intelligent, then for all intents and purposes, the computer IS intelligent (and thus has passed the test). I think Turing was a believer in Strong AI, which means that as long as a device does exactly what the human brain does, then it is intelligent. It doesn't matter whether this brain is implemented via neurons and synapses, or cogs and gears. In other words, there is nothing "magical" about the human brain. The alternative view is that there IS something special about the human brain, and that if you happen to build a device that mimics it exactly (e.g. Isaac Asimov's "positronic brain"), then you are merely "faking" intelligence.
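To make the setup concrete, here's a minimal sketch of the imitation game as a program. The function names, labels, and five-round structure are my own illustration, not anything Turing specified:

```python
import random

def imitation_game(judge_ask, judge_verdict, human, machine, rounds=5):
    """Simplified sketch of Turing's imitation game.

    human and machine each map a question string to a reply string.
    judge_ask(i) yields the i-th question; judge_verdict(transcript)
    returns "A" or "B", the label the judge believes is the machine.
    """
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:               # hide who is who
        labels = {"A": machine, "B": human}

    transcript = []
    for i in range(rounds):
        q = judge_ask(i)
        # The judge sees only the labelled replies, never the contestants.
        transcript.append((q, {k: f(q) for k, f in labels.items()}))

    guess = judge_verdict(transcript)
    # The machine "passes" if the judge fingers the human as the machine.
    return labels[guess] is human
```

The point the paragraph makes is baked into the return value: passing is defined entirely by the judge's verdict, not by peeking inside either contestant.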



He merely offers an alternative approach, which is perhaps more complex. The mind doesn't quite work in a simple imperative-like manner. Neural networks work (pseudo-)simultaneously and drive towards an outcome.

I don't think Turing specified a particular implementation for beating the Turing Test. You would not be disqualified, for example, for using a neural net as the implementation of your chatter program.
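For example, even a toy hand-wired net counts as an "implementation" in this sense. Here's a sketch of a two-layer network computing XOR, with weights I picked by hand (this is just to show a neural net doing computation, not anything from Turing):

```python
def step(x):
    """Threshold activation: the neuron fires iff its input exceeds zero."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs, then threshold."""
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor_net(x1, x2):
    """Two-layer net computing XOR with hand-picked weights."""
    h_or  = neuron((x1, x2), (1, 1), -0.5)       # fires if either input is on
    h_and = neuron((x1, x2), (1, 1), -1.5)       # fires only if both are on
    return neuron((h_or, h_and), (1, -1), -0.5)  # OR and not AND
```

Whether the weights come from hand-tuning, gears, or training makes no difference to the test; only the behaviour is judged.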



He works on chip design, so he wouldn't just dismiss Turing's work (Turing
is among the greatest sources of pride for CS/Maths at the University). And
he wouldn't provoke as you suggest, trust me. He's a gentleman who keeps a
low profile; and he is a Fellow of the Royal Society.

You would need to create a machine that genuinely thinks it is living a human
life before you could stand a realistic chance of creating a Turing Test
winner. Those are much longer odds than creating merely a true machine
intelligence IMO.

This begs the question, though. How do you know whether the machine "genuinely thinks" it is living a human life, as opposed to merely pretending it genuinely thinks it's living a human life (e.g. via stock phrases and brute force)?


If we encountered an intelligent machine, I don't think we'd have to have it believe itself to be human. We could explain to it the rules of the competition.

Us: "So you and a human contestant are going to be chatting with a judge, and the judge has to guess which of the two is the human, and which is the computer. If you can trick the judge into believing you are a human, you win."
It: "But I'm not a human."
Us: "We know. So you'll have to lie to win this game."
It: "Okay, I think I get it now."



The current attempts at beating the Turing Test are going down the avenue
of stock replies and syntactic and semantic analysis which in the end just
gives you a better human language parser.
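That avenue is essentially Weizenbaum's ELIZA from the 1960s. As a sketch (the patterns below are made up, not taken from any real contest entry), the whole trick is shallow pattern matching plus canned reply templates:

```python
import re

# Hypothetical stock rules in the spirit of ELIZA: match surface syntax
# and reflect it back, with no grasp of what the words mean.
RULES = [
    (re.compile(r"\bI am (.*)", re.I),    "How long have you been {0}?"),
    (re.compile(r"\bI feel (.*)", re.I),  "Why do you feel {0}?"),
    (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
]
FALLBACK = "Tell me more."

def reply(sentence):
    """Return a canned response chosen by shallow pattern matching."""
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(*m.groups())
    return FALLBACK
```

No matter how many rules you add, the program never models meaning; as the post says, you just end up with a better and better human language parser.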


The only intelligence being demonstrated in the Turing Test is that of the
programming teams.

The problem is that I think we are several years, if not decades, away from having AI sophisticated enough to process natural language without smoke and mirrors. But people want to win the cash prize now. So they use smoke and mirrors.


Note that so far, the Turing Test has been successful in screening out these "fake" entries. That is, there isn't yet an entry about which the judges have commented "Oh wow... I think we may seriously have a candidate for actual, self-aware AI here..." But of course, due to the way that Turing devised the test, by definition it will always be accurate. Because by definition, anything which passes the test is intelligent, and anything which doesn't is not.

In other words, people misunderstand and believe that there exists some objective concept of intelligence out there, and that this test is an attempt to measure it. Wrong. Turing argues that there is NO objective concept of intelligence. And so he designed a test so that we can classify devices as "intelligent" or "not intelligent" on the basis of whether they pass that test. To make an analogy, it's not as if there's an objective concept of "one meter" out there, and we've been devising meter sticks as an attempt to measure it. Rather, we as humans wanted to standardize a measurement for length, so we built a meter stick, and defined the meter to be the length of that stick. (Since then we've refined the definition, e.g. to be the distance light travels in a certain amount of time.)

Programming team members, multiplied by the joint intelligence and experience
of a few. Plus brute force, which is where all the so-called power actually
lies...


I think it would be interesting to develop processors that work in parallel in
a collaborative, neural-type fashion. It's too ambitious a goal to even
suggest, but we might get there one day. At the moment, neural network code
is being translated to simple machine code and run in inefficient
ways... it would be kind of funky if companies continued to develop
architectures that better suit machine learning. Back in the days when clock
speeds weren't as high as they are today (permitting some fun stuff in 2-D
and sometimes real-time 3-D as well), one had to build machines that were
optimised for machine vision. That's actually what my Supervisor worked on
for many years. So he tells those stories about the times when you had to
build your computers and physically design experiments rather than use
high-level programming languages.

I believe that they DO have dedicated hardware for running neural networks. They're not available to the public, of course, but they exist.


- Oliver

