

Good question - I don't know.
That makes two of us, then - I'll see if I can find the answer.

I would also imagine that there is probably a requirement that the (definitely) human participant in the Test has 'reasonable' ('average'?) intelligence and literacy - since I imagine that a pretty dumb machine could look quite human in comparison with a human being of very limited intelligence and/or literacy?
 
With the Turing Test, as originally described, is the (definitely) human participant meant to 'behave naturally', or is it acceptable for them to deliberately do everything they can to confuse/expose a machine (should that be what they are conversing with)?
I haven't yet found an explicit statement of this, but it seems pretty likely that the Turing Test (as originally conceived/proposed) quite probably not only allowed "deliberate attempts to confuse" but may well have more-or-less required that ....

... in devising his test, Turing was apparently inspired by a party game called "The imitation game". In that game an interrogator communicates with two people, one male and one female, by the passing of written notes, and has to work out which is male and which is female on the basis of their written responses to his written questions, but both the man and the woman aim to convince the interrogator that they are the other.
 
I haven't yet found an explicit statement of this, but it seems pretty likely that the Turing Test (as originally conceived/proposed) quite probably not only allowed "deliberate attempts to confuse" but may well have more-or-less required that ....
.... [... both the man and the woman aim to convince the interrogator that they are the other.]
On reflection, I suspect that there's something not quite right here, otherwise I doubt that any machine would ever be able to pass the test...

.... I would have thought that most reasonably intelligent human beings would be able to do quite a good job of deliberately making themselves appear more 'stupid' (more 'machine-like') than a machine that was trying to appear human, in which case the machine, no matter how good/'intelligent', would nearly always fail the test?
 
So the questioner is interacting with 2 entities, one human, the other machine, and the human has to convince the questioner that he's a machine, and the machine has to do the opposite?

Interesting, and as you say, weighted in favour of the human. Unless you imagine the scenario where a machine, knowing that humankind is concerned about the abilities of AI, deliberately fails the test in order to mask its abilities.

I was thinking about problems arising from the questioner posing questions along the lines of "tell me what you know about X" and determining that a machine is answering because of a too extensive knowledge of too many Xs.
 
So the questioner is interacting with 2 entities, one human, the other machine, and the human has to convince the questioner that he's a machine, and the machine has to do the opposite?
Yes, that is my understanding.
Interesting, and as you say, weighted in favour of the human. Unless you imagine the scenario where a machine, knowing that humankind is concerned about the abilities of AI, deliberately fails the test in order to mask its abilities.
True, but that would still constitute a 'failure' of the machine to pass the test, even if it were a deliberately contrived failure.

It seems to me that if the test does work by the above-postulated rules, it would probably be fundamentally flawed - since, in the final analysis, there won't be any discernible difference between a pretty 'non-intelligent' machine and a human who is (or is pretending to be) as unintelligent as the machine? That flaw would presumably disappear if the human was required to behave 'as intelligently as they could' - so perhaps that's how it's actually done?
I was thinking about problems arising from the questioner posing questions along the lines of "tell me what you know about X" and determining that a machine is answering because of a too extensive knowledge of too many Xs.
Perhaps, but, depending upon the nature of the X and the human concerned, they too might possibly have an extremely extensive knowledge of Xs?
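
To make the above-postulated setup a bit more concrete, here is a very rough sketch in Python of how a single round might work - purely illustrative, with hypothetical stand-in functions rather than real chatbot code: the questioner puts a question to two unlabelled respondents, one human and one machine, and then has to guess which is which.

import random

def human_respondent(question):
    # Stand-in for the human participant typing a reply at a keyboard.
    return input(f"(you are the human respondent) {question}\n> ")

def machine_respondent(question):
    # Stand-in for the machine; a real system would generate a reply here.
    return "I'm not entirely sure - it probably depends on what you mean."

def run_round(question):
    # Shuffle the two respondents before labelling them, so the questioner
    # cannot tell which is which from the order of the answers.
    respondents = [human_respondent, machine_respondent]
    random.shuffle(respondents)
    assignment = {"A": respondents[0], "B": respondents[1]}
    for label in ("A", "B"):
        print(f"{label}: {assignment[label](question)}")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    actual = "A" if assignment["A"] is machine_respondent else "B"
    if guess == actual:
        print("Correctly identified - the machine 'failed' this round.")
    else:
        print("Misidentified - the machine 'passed' this round.")

run_round("Tell me what you know about steam engines.")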
 
Perhaps, but, depending upon the nature of the X and the human concerned, they too might possibly have an extremely extensive knowledge of Xs?

I had a play about with some such software back in the '80s. It was quite limited, due to its very small repertoire, but a modern machine would have much more knowledge accessible to it, which it could tap into via the internet.
 
I had a play about with some such software back in the '80s. It was quite limited, due to its very small repertoire, but a modern machine would have much more knowledge accessible to it, which it could tap into via the internet.
That's obviously true but, as I replied to morqthana, it's also possible for a human being to have a lot of knowledge about some particular topic/discipline, so I don't think one could necessarily take the fact that someone/something "knew a lot about a particular topic" to indicate that he/she/it was most likely to be a machine :-)

Of course, if he/she/it appeared to know an awful lot about virtually any topic one asked about, that would raise 'suspicions'.

However, we're now talking about 'access to information', rather than 'intelligence', per se. If we are going to allow a machine to make use of the Internet when answering questions, I think that we would also have to allow the human participant to do so as well. "Looking up information", whether from the Internet, an encyclopedia, or whatever, does not really require any significant amount of 'intelligence', since it's a pretty 'mechanical' task.
 
 
True, but that would still constitute a 'failure' of the machine to pass the test, even if it were a deliberately contrived failure.
On the contrary - given the spirit of the test, I think it would constitute quite a scary pass.


Perhaps, but, depending upon the nature of the X and the human concerned, they too might possibly have an extremely extensive knowledge of Xs?
But how many Xs?

That's obviously true but, as I replied to morqthana, it's also possible for a human being to have a lot of knowledge about some particular topic/discipline, so I don't think one could necessarily take the fact that someone/something "knew a lot about a particular topic" to indicate that he/she/it was most likely to be a machine :)

Of course, if he/she/it appeared to know an awful lot about virtually any topic one asked about, that would raise 'suspicions'.

Which is why I said "a too extensive knowledge of too many Xs."


However, we're now talking about 'access to information', rather than 'intelligence', per se. If we are going to allow a machine to make use of the Internet when answering questions, I think that we would also have to allow the human participant to do so as well. "Looking up information", whether from the Internet, an encyclopedia, or whatever, does not really require any significant amount of 'intelligence', since it's a pretty 'mechanical' task.

But if you look at what tools like ChatGPT excel at, such a contest could not be run in real time. The human would have to be given days, weeks, even months, to craft answers & explanations from reading and agglomerating/summarising info from the Internet, whereas an AI system could do it immensely faster than any human ever could.
 
On the contrary - given the spirit of the test, I think it would constitute quite a scary pass.
It would certainly be a potentially pretty scary situation, but still a failure of the test, since the machine had been correctly identified - even if that failure had been deliberately contrived by the machine.
But how many Xs?
.....Which is why I said "a too extensive knowledge of too many Xs."
...But if you look at what tools like ChatGPT excel at, such a contest could not be run in real time. The human would have to be given days, weeks, even months, to craft answers & explanations from reading and agglomerating/summarising info from the Internet, whereas an AI system could do it immensely faster than any human ever could.
Yes, but don't forget that an 'intelligent' machine would know all that just as well as we do - so if (per what I've suggested) it was trying hard to conceal the fact that it was a machine, it would presumably ensure that it didn't reveal extensive knowledge of "many Xs", nor would it reveal the speed at which it could do things.

Hence, regardless of what time scale it was done over, a machine would presumably attempt to make the extent of its knowledge, and the speed at which it could do things, appear not to be markedly greater than would be expected to be the case with a human - i.e. it would probably try to appear only slightly more 'capable' than the average human?
 
