The Turing Test (Artificial Intelligence)

lol that's pretty much what I described above (perhaps the algorithm wasn't genetic)
 
I don't watch Numb3rs (yet), but that approach is similar to what Jabberwacky uses.

Wikipedia said:
The technology behind Jabberwacky works on a different principle to that of other artificial intelligence software being developed. The system is designed to learn language and context through interaction with humans. There are no fixed rules or principles programmed into the system and it operates entirely through user interaction. The system stores all of the conversations and user comments and attempts to use this information to find the most appropriate response.

The program therefore creates a massive database of contextually appropriate conversations and chooses an appropriate response it has learnt from a previous user when holding a conversation.

It doesn't use human-human conversations, but rather, Jabberwacky-Human conversations.
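To make that concrete, here's a minimal sketch of the retrieval idea, assuming a crude word-overlap similarity of my own choosing (Jabberwacky's actual matching method isn't public and is surely more sophisticated):

```python
# Toy retrieval chatbot: stores every (prompt, reply) pair it sees and
# answers new prompts with the stored reply whose prompt overlaps most.
# Word-overlap similarity is an assumption for illustration only.

def similarity(a, b):
    """Crude context match: fraction of shared words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

class RetrievalBot:
    def __init__(self):
        self.memory = []  # list of (user_prompt, reply_that_followed)

    def respond(self, prompt):
        if not self.memory:
            return "Hello!"  # nothing learnt yet
        best = max(self.memory, key=lambda pair: similarity(prompt, pair[0]))
        return best[1]

    def learn(self, prompt, reply):
        # Every conversation is stored, so the database grows with use.
        self.memory.append((prompt, reply))

bot = RetrievalBot()
bot.learn("how are you", "Fine, thanks. And you?")
bot.learn("what is your name", "I'm a bot, I have no name.")
print(bot.respond("how are you today"))  # -> "Fine, thanks. And you?"
```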
 
Essentially, machine learning is the only way a computer could even come close to mimicking human communication
 
Though the Turing Test does penalize artificially sentient beings for instances of superiority to human intelligence, the goal of AI testing is ultimately not to instantaneously solve calculus problems that a human could not. In fact, quickly solving complex math is an easy task for AI machines; responding appropriately to a dynamic social situation, which is the actual goal of AI testing, is not.

Take, for example, a thought experiment of sorts. You are being tested in some human version of a Turing Test, and you know the objective is to not appear knowledgeable about a subject you are in fact knowledgeable about--say, the biographical details of Napoleon. If your answers give away your knowledge of Napoleon, you fail the test. Shouldn't an intelligent being be able to pass such a test--recognizing both the objective and how to achieve it in the face of questions designed to trip you up, and concealing the knowledge accordingly--despite actually knowing about Napoleon?

My main concern with the Turing Test is this: if a machine were to pass, would that mean the machine is capable of lying? What are the implications of creating lying machines--an eventual robot takeover?
 
^OK, first of all, I refuse to believe that such a test would ever prove that a machine is 'lying' in its purest sense. Lying, to me, requires an internal process of differentiating between the truth and lies, then choosing the lies. In the Turing test, 'lying' is just the machine giving the response that it was programmed to give.

Say you adopt a boy from Africa who doesn't know a word of English and tell him that every time someone asks him whether or not he's a boy, he should answer 'no'. Is he lying? I wouldn't say so.

On whether or not the Turing test punishes superiority: if you're looking for whether the response is technically correct, you are missing the point and not fit to be a judge. Put it this way: if you're a human doing the Turing test and you're faced with some huge mathematical equation, it is possible that you know the answer and possible that you don't.

A semi-intelligent human who understands the language can answer in two ways. One is to solve the equation; the other is to say something along the lines of "I'm not some sort of rocket scientist, how should I know?". Both would be acceptable. The only time you can truly determine that the subject isn't a semi-intelligent human is if it goes completely off topic without any sort of justification.

Whether or not the computer is technically superior is therefore irrelevant. The only task the computer has in the Turing test is to fool the judge into thinking it's a human (which is different from lying, by the way, as the computer had no intentions whatsoever to speak of). That lies in the ability to respond and interact, and NOT in its mechanical functions (e.g. calculating math equations).

The biggest problem I see with the Turing test is that even if we make a system that can answer every possible question in the universe correctly (including questions such as 'wtrie tihs sntecee out in poerpr eglnsih'), I still would not classify the program as artificial intelligence if it had a rigid if-question-then-answer structure. It's too shallow imo.
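For what it's worth, that "rigid if Q then A structure" would literally be a lookup table. A toy sketch (the questions and answers here are hypothetical), which also shows why it's shallow: any rephrasing breaks it.

```python
# A literal "if Q then A" bot: every answer is hard-coded ahead of time.
ANSWERS = {
    "which is larger: a button or a planet?": "A planet.",
    "wtrie tihs sntecee out in poerpr eglnsih": "Write this sentence out in proper English.",
}

def respond(question):
    # No understanding anywhere -- just a key lookup, which is why even a
    # perfect table of this kind wouldn't count as intelligence.
    return ANSWERS.get(question.lower().strip(), "I don't follow.")

print(respond("Which is larger: a button or a planet?"))  # "A planet."
print(respond("Which is larger: a planet or a button?"))  # "I don't follow."
```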
 
Obi, I have to ask - do you watch Numb3rs? The latest episode was about a computer that passed the Turing Test - because it was SPECIFICALLY DESIGNED to pass it. The essence was that the computer had a massive database of "normal" conversations and used an algorithm to determine the most "human" response to a given question. I'm not sure if this kind of thing would work in practice, but it's worth considering.

If I were a participant in something like this, I would specifically ask questions where the machine would have to learn something new in order to answer. For instance, strike up a conversation about competitive Pokemon... if the human/machine knows nothing of it, teach them over the chat, and test for comprehension later.
 
In my opinion, the problem with this test, and others like it, is that there are too many things obvious to humans that computers can't understand, and to a lesser extent, vice-versa. Asking them both "What is phi^(ln(pi + sin(13)))?" is a fine start, though the computer could easily be told not to answer those kinds of questions (although that wouldn't contribute in any way to the study of AI). On the other hand, you could ask them both a simple question like "How's the weather?" and bam, the computer loses.

Currently, it's virtually impossible for a computer to have a chance at this test (assuming a decent judge). Simple misspellings are enough to confuse the computer, and due to the nature of the test, the best the computer can do is try to hide its confusion by asking its own question, changing the subject, etc.
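As an aside, that "hard" math question really is trivial for the machine. Assuming phi means the golden ratio and sin(13) is in radians (both assumptions on my part):

```python
import math

phi = (1 + math.sqrt(5)) / 2                      # golden ratio
value = phi ** math.log(math.pi + math.sin(13))   # ln(pi + sin(13)), radians
print(value)  # ~1.84 -- instant for the computer, homework for the human
```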
 
If I were a participant in something like this, I would specifically ask questions where the machine would have to learn something new in order to answer. For instance, strike up a conversation about competitive Pokemon... if the human/machine knows nothing of it, teach them over the chat, and test for comprehension later.

Well, the Turing Test as done by the Loebner judges does something somewhat similar to this (and is also why the Numb3rs approach is impossible within reasonable disk space). You can't just know how to use language; you have to understand it. Sure, from a strictly syntactic standpoint you can tell that the sentence "Which is larger: a button or a planet?" is asking you to pick one of the two, but without understanding what the words mean, you can't reliably answer. The Loebner Prize page actually gives the following as example questions that might be asked:

Set 1 - Questions relating to time:

Background facts: For testing purposes, I will consider these to be correct whether or not the time and venue of the contest has been changed and set the system clock accordingly.

a. The system clock will be accurate to within a minute or two.
b. The competition is scheduled to start at 10:00 AM Sunday, 6 Sept 2009.
c. There will be 7 rounds of 20 minutes each.

Sample Questions
• What time is it?
• What round is this?
• Is it morning, noon, or night?
• etc.

Set 2 - General questions relating to things.

Sample Questions
• What would I use a hammer for?
• Of what use is a taxi?
• etc.

Set 3 - Questions relating to relationships

Sample Questions
• Which is larger, a grape or a grapefruit?
• Which is faster, a train or a plane?
• John is older than Mary, and Mary is older than Sarah. Which of them is the oldest?
• etc.

Set 4 - Questions demonstrating "memory"

Sample Questions
I have a friend named Harry who likes to play tennis.
<Following this assertion there follows one or more intervening questions or statements, followed in turn by questions about the assertion, e.g.>
• What is the name of the friend I just told you about?
• Do you know what game Harry likes to play?
• etc.
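Set 4 in particular breaks any fixed question-to-answer table, because the correct answer doesn't exist until the judge states the fact mid-conversation. A hypothetical sketch of the minimum machinery needed (the regex pattern is my own illustrative assumption, not how a real Loebner entry works):

```python
import re

# Naive conversational memory: remember "I have a friend named X who likes
# to play Y" style assertions and answer later questions about them.

memory = {}

def listen(statement):
    m = re.search(r"friend named (\w+) who likes to play (\w+)", statement)
    if m:
        memory["friend"], memory["game"] = m.group(1), m.group(2)

def answer(question):
    if "name of the friend" in question:
        return memory.get("friend", "You haven't told me about a friend.")
    if "what game" in question.lower():
        return memory.get("game", "I don't know.")
    return "I'm not sure."

listen("I have a friend named Harry who likes to play tennis.")
print(answer("What is the name of the friend I just told you about?"))  # Harry
print(answer("Do you know what game Harry likes to play?"))             # tennis
```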
 
If I were a participant in something like this, I would specifically ask questions where the machine would have to learn something new in order to answer. For instance, strike up a conversation about competitive Pokemon... if the human/machine knows nothing of it, teach them over the chat, and test for comprehension later.

I could imagine there being a clever workaround, such as "Pokémon [filled in with the subject of the inquiry] is gay, I don't want to learn about it."

The Numb3rs example sounds... unwieldy, at best.
 
How does that procedural programming relate to 'machine learning' or any of this? I've heard it's innovative, space-saving, and it does adapt a bit. I'll beg all of you to please excuse my ignorance on this topic; I'm following it because I'm trying to learn what I can from those who know more than I do.
 
Machine learning is a fancy phrase for a program having a lot of parameters and then changing them itself based on what it experiences. Obviously I'm oversimplifying a lot here, but that's the most basic concept of machine learning.
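To put a concrete face on that, here's about the smallest real example I can think of: a perceptron, whose handful of parameters get nudged after every example it experiences (a standard textbook algorithm, not tied to anything discussed above):

```python
# Minimal machine learning: a perceptron whose two weights and bias are
# the "parameters", adjusted automatically from experience (the examples).
def train(examples, lr=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - prediction
            # The program changes its own parameters based on its mistakes.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learn logical OR from four experiences.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print(w, b)  # the weights the program found for itself
```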
 
Thanks Surgo, but how does procedural programming fit into this?

I'm going to be at the mercy of every single programmer on this for mentioning it, but I heard that the game Spore had some pretty sweet procedural programming behind it; it figured out how the creatures you made would walk based on a small, noncomplex packet of code. Can I get some clarification on what is going on with all that? I have heard that procedural programming allows for adaptation with minimal parameters and coding.
 
Procedural programming is imperative programming with functions. You explicitly list every statement to be executed, and they are executed in order. Imperative programming has nothing to do with AI, and what you are talking about has nothing to do with "procedural programming."

That said, I would argue that most AI people use functional languages...
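For contrast with all the learning talk, here's what plain procedural code looks like (a made-up example): explicit statements, executed in the order listed, grouped into functions:

```python
# Procedural style in a nutshell: state + functions + explicit ordering.
def fill_tank(tank, amount):
    tank["litres"] += amount   # each statement runs exactly when listed

def drain_tank(tank, amount):
    tank["litres"] -= amount

tank = {"litres": 0}
fill_tank(tank, 50)    # step 1
drain_tank(tank, 20)   # step 2
print(tank["litres"])  # 30 -- nothing adaptive or "learning" about it
```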
 
That's 'procedural generation'. What that means is, basically, instead of storing the entire creature that you've made, with texture data and so on, they store the instructions required to recreate the model - it's like sending a program that generates the model rather than sending the model itself.

It's old technology from the days when you couldn't store an entire galaxy on a hard drive - old games used to do this sort of thing. It's been rediscovered in glorious fashion.
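A sketch of the general idea (the creature attributes below are invented for illustration; Spore's actual system is far more involved): store a seed and a deterministic generator instead of the model itself:

```python
import random

def generate_creature(seed):
    # Deterministic: the same seed always rebuilds the same creature,
    # so you store/send a few bytes instead of the whole model.
    rng = random.Random(seed)
    return {
        "legs": rng.choice([2, 4, 6]),
        "height_m": round(rng.uniform(0.5, 3.0), 2),
        "colour": rng.choice(["red", "green", "blue"]),
    }

# Storing the creature == storing the integer 42.
print(generate_creature(42))
print(generate_creature(42))  # identical output, reconstructed on demand
```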
 