EuroTitToss All American 4790 Posts
http://news.bbc.co.uk/2/hi/uk_news/magazine/7670050.stm
So this program Elbot fooled 25% (3 out of 12) of people into thinking it was human. The Turing test requires 30%. My thoughts on this:
1. This robot is pretty bad. I have no clue how stupid these people had to be to get fooled. This quote pretty much sums up my feelings here:
Quote : | ""We have not had a single step forward since the 1960s. We should stop spending money in this direction," he tells me. He thinks we should stop trying to imitate human intelligence." |
2. Therefore, I'm not going to be very impressed when the Turing test is passed. It's not going to mean much. I think a much larger percentage would be a better test, but perhaps it's a poor test of intelligence to begin with.
3. I'm not trying to shoot down AI. I find the topic interesting and I hope we make a lot of progress. But I want to tell all these fucking futurists like Kurzweil to go eat a dick. He thinks we're going to have a "singularity" in 40 years, and we haven't made a lot of significant progress in AI in the last 50. All in all, I think a different approach is needed. If you look at the transcript, I swear it looks like most of what they're doing is programming in responses to certain questions. And that isn't going to push the field forward. 10/17/2008 10:03:45 PM
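The "programming in responses to certain questions" approach can be sketched as a keyword-to-canned-reply lookup. Everything below is hypothetical, not Elbot's actual rule set; the Turing-test reply is borrowed from the line quoted later in this thread:

```python
import re

# Hypothetical keyword -> canned reply table, in the spirit of
# scripted chatbots. None of these rules come from Elbot itself.
RULES = [
    (re.compile(r"\bturing test\b", re.I),
     "If a machine could pass the Turing test, are you sure it would really want to?"),
    (re.compile(r"\bhello\b|\bhi\b", re.I),
     "Greetings, human."),
    (re.compile(r"\byou\b.*\brobot\b", re.I),
     "Why do you assume I am a robot?"),
]
# Stock dodge when nothing matches -- the tell-tale sign of this approach.
DEFAULT = "That is an interesting point. Tell me more."

def reply(message: str) -> str:
    """Return the first canned reply whose pattern matches, else a stock dodge."""
    for pattern, canned in RULES:
        if pattern.search(message):
            return canned
    return DEFAULT

print(reply("I don't think you'll ever pass the turing test"))
print(reply("What did you have for breakfast?"))
```

The weakness is obvious from the sketch: any input outside the pattern table falls through to the same generic dodge, which is why transcripts of these bots read so poorly off-topic.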
CalledToArms All American 22025 Posts
looks like we won't be needing the Voight-Kampff Empathy Test anytime soon 10/17/2008 10:11:17 PM
EuroTitToss All American 4790 Posts
I admit, I had to look it up.
10/17/2008 10:14:23 PM
qntmfred retired 40726 Posts
it's gonna happen
last i heard, InsaneMan was this close to developing strong AI 10/17/2008 10:16:21 PM
agentlion All American 13936 Posts
The test in question is further diluted because it is artificially restricted to one subject, rather than covering general knowledge or general conversation.
Until and unless our computers reach a singularity-like level, true AI will not occur with normal algorithmic programming. I think real AI will come, instead, from reproducing and simulating the actual functionality of the brain, and trying to figure out how consciousness arises from the physical structure of the brain. 10/17/2008 10:17:39 PM
CalledToArms All American 22025 Posts
^^^ i just love the book and the movie a lot so it's what i first thought of heh (honestly i like the book more but the movie is dope) 10/17/2008 10:18:50 PM
EuroTitToss All American 4790 Posts
Quote : | "Until and unless our computers reach a singularity-like level, true AI will not occur with normal algorithmic programming." |
Isn't that circular logic or am I misunderstanding you?
Quote : | "I think real AI will come, instead, from reproducing and simulating the actual functionality of the brain, and trying to figure out how consciousness arises from the physical structure of the brain" |
I agree, except when you say "real AI" I'd change that to AI with human likeness. Who says AI needs to behave similarly to human intelligence? That is, of course, what the Turing test is testing though. 10/17/2008 10:28:09 PM
agentlion All American 13936 Posts
i should have said "consciousness" will arise from reproducing a human brain. I think consciousness is probably a prerequisite for true AI. 10/17/2008 10:35:33 PM
EuroTitToss All American 4790 Posts
I've been thinking about this a lot lately. If we actually do wind up going that route, wouldn't there be some implications no one has thought about yet? Such as:
- that kind of AI would make mistakes fairly often
- the speed and recall might not be much better than a human
- you'd have to "teach" this AI. As in, it would take years once you've developed it to get it to an "adult" state.
Point being, if we recreate the human brain, especially without first fully understanding it, it'll be an amazing accomplishment, but what advantage will it have over a human? 10/17/2008 10:47:32 PM
Charybdisjim All American 5486 Posts
Maybe the program seems crappy because of translation issues between German and English, but I find it hard to believe it fooled anyone.
It did give me a great response to one thing I said:
"I don't think you'll ever pass the turing test"
"If a machine could pass the Turing test, are you sure it would really want to?" 10/17/2008 11:06:49 PM
agentlion All American 13936 Posts
Quote : | "-you'd have to "teach" this AI. As in, it would take years once you've developed it to get it to an "adult" state." |
a machine that learns on its own would also be required for "true AI", i believe. Although, even if a human-like brain is reproduced, i'm not sure a machine version of it would be constrained to the same limitations humans have, as far as physical growth and development. Sure, early AI machines will be relative "babies", but I think machines would have several advantages: they could learn 24/7/365 without sleep or rest, they could have near-perfect memory instead of a human's faulty memory, and I'm sure they could figure out some kind of parallelism between multiple AI machines, or ways to feed already-learned knowledge into other machines. So after the first generation, machines could be created already in an "adult" phase, again unlike humans. 10/17/2008 11:18:46 PM
agentlion All American 13936 Posts
Quote : | "Maybe the program seems crappy because of translation issues between German and English, but I find it hard to believe it fooled anyone." |
also have to keep in mind that the testers were getting two sets of answers: one from a machine, and one from a human. So how well they picked out the machine also depended on how the human answered the questions, not just how well the computer answered. If the human was trying to trick the tester by "talking like a machine", giving nonsensical answers, making jokes that didn't make sense, or wasn't a native English speaker, then all of that could influence how the tester viewed the real computer's responses.
------
also, with all this talk about AI and Turing, we have to realize that the Turing Test was just one suggestion, set out by one person, on how to judge AI. That doesn't mean that definition is gospel, or that meeting those criteria would give us anything close to real AI, again depending on how you define it.
[Edited on October 17, 2008 at 11:26 PM. Reason : .] 10/17/2008 11:23:38 PM
CharlesHF All American 5543 Posts
After reading the conversation in that article, how could anyone think that what they were talking to was a human? 10/18/2008 12:28:43 AM
moron All American 34142 Posts
^ I think they're being too generous. But I remember, back in the day, more than one person got fooled by those AIM bots (based on ELIZA, I think) there used to be. 10/18/2008 12:35:05 AM
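Those AIM bots descended from Weizenbaum's ELIZA, whose core trick is little more than pronoun "reflection" plus a few response templates. A minimal sketch of the technique, with an illustrative rule rather than ELIZA's original script:

```python
import re

# Pronoun "reflection" table: swap first and second person so the
# bot can echo the user's own statement back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(text: str) -> str:
    """Swap pronouns word by word, ELIZA-style."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(message: str) -> str:
    """Echo 'I feel/think/believe X' statements back; otherwise deflect."""
    m = re.match(r"i (feel|think|believe) (.+)", message.strip().lower())
    if m:
        verb, rest = m.group(1), m.group(2)
        return f"Why do you {verb} {reflect(rest)}?"
    return "Please, go on."

print(eliza_reply("I feel my computer hates me"))
# -> Why do you feel your computer hates you?
```

Because the reply is built mechanically from the user's own words, it can feel eerily attentive for a turn or two, which goes some way toward explaining why people got fooled over AIM.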
OmarBadu zidik 25071 Posts
Quote : | "last i heard, InsaneMan was this close to developing strong AI" |
someone find his AI thread 10/18/2008 9:30:26 AM
EuroTitToss All American 4790 Posts
holy shit. just did a search for his topics. 95% are in feedback forum.
that must be some disappointing shit right there.
[Edited on October 18, 2008 at 10:15 AM. Reason : nvm. that must have been back when the owners gave a shit.] 10/18/2008 9:47:38 AM
jaZon All American 27048 Posts
Quote : | "last i heard, InsaneMan was this close to developing strong AI" |
LOL, blast from the past 10/18/2008 10:16:23 AM