We are being replaced

I've never really understood how the Turing Test (at least, one of the more common variants) can reliably work - since I would have thought that most reasonably intelligent human beings would be able to 'act' at least as 'machine-like' as an actual machine could?

The point is the other way round - that a machine can act like a human being, and cannot be distinguished from one.


In the context we're talking about, for something to justify the "I" of "AI", I think I would want to see clear evidence that it had done, or was able to do, something beyond what it had been programmed to do.

Any AI system smart enough to pass the Turing Test is smart enough to fail it.
 
The point is the other way round - that a machine can act like a human being, and cannot be distinguished from one.
The variant of the Turing test I was talking about requires both machine and human to do their best to appear as the other - i.e. the machine tries to emulate a human AND the human tries to emulate a machine?
Any AI system smart enough to pass the Turing Test is smart enough to fail it.
That may well be true, but I can't see why a machine being tested would want to fail the test?
 
The variant of the Turing test I was talking about requires both machine and human to do their best to appear as the other - i.e. the machine tries to emulate a human AND the human tries to emulate a machine?

Not a variant I've heard of, and not what AT proposed.


That may well be true, but I can't see why a machine being tested would want to fail the test?

I think the comment is meant to induce a frisson of fear about an autonomous, self-aware, AI system which seeks to conceal itself from humans.
 
Not a variant I've heard of, and not what AT proposed.
Mr Wikipedia, he say (with my emboldening) ....
To demonstrate this approach Turing proposes a test inspired by a party game, known as the "imitation game", in which a man and a woman go into separate rooms and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back. In this game, both the man and the woman aim to convince the guests that they are the other.

I think the comment is meant to induce a frisson of fear about an autonomous, self-aware, AI system which seeks to conceal itself from humans.
Right, that would make sense, and could well satisfy me that there was an "I" in the "AI", should that scenario arise - since I doubt that a machine would be programmed to exhibit such (potentially worrying) behaviour.
 
That's the "Imitation Game" in which the aim is as you say.

It inspired his test in which a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart.

I guess the human could try to skew the results in favour of the machine by behaving "machine-like" so that the evaluator could not tell them apart, but to what end?
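As an aside, one way to make "cannot reliably tell them apart" concrete is statistical: the evaluator's identifications would have to beat chance by more than luck alone could explain. The little sketch below is purely my own illustration (nothing Turing specified), and the 60-conversation example and 5% threshold are arbitrary assumptions:

```python
# My own illustration of the pass criterion, not anything from Turing's paper.
from math import comb

def p_at_least(k, n, p=0.5):
    """Chance of k or more correct identifications out of n if the evaluator is only guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def machine_passes(correct, total, alpha=0.05):
    # The machine 'passes' if the evaluator's hit rate could plausibly be pure chance.
    return p_at_least(correct, total) >= alpha

# Example: the evaluator correctly picks out the machine in 34 of 60 conversations.
# That is better than 50%, but not *reliably* better (the chance of doing at least
# that well by guessing is about 0.18), so by this criterion the machine would pass.
print(machine_passes(34, 60))   # True
```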
 
That's the "Imitation Game" in which the aim is as you say. .... It inspired his test in which a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart.
Maybe I have over-interpreted what was meant by 'inspired by', but I have certainly seen a good few others suggesting something similar to what I did.
I guess the human could try to skew the results in favour of the machine by behaving "machine-like" so that the evaluator could not tell them apart, but to what end?
Yes, that is what I was thinking of but, as your question highlights, on reflection my thinking was probably back-to-front, since if the human behaved in that fashion that would increase the likelihood that the evaluator could not tell them apart, or maybe even cause the evaluator to conclude that the machine was the human one.
 
I am not at all convinced that the computer is exhibiting anything that I would regard as autonomous 'intelligence' (whatever 'intelligence' may be, even in humans).
Are humans intelligent? Hmm, not sure we are, but in comparison then just maybe ;)
 
I remember years ago my cousin told me about a TV prog about a man who sang a song at the start of every show telling how his family married, divorced, intermingled etc. etc. He tried to follow what was what and came to the conclusion that he was in fact his own Grandad. I think it was called the Redneck Family Tree or something, and it was American (probably Canadian).

Might need AI to sort that one out perhaps?
 
Are humans intelligent? Hmm, not sure we are, but in comparison then just maybe ;)
As I wrote somewhere early in this discussion, one cannot sensibly talk about "What is AI" without first deciding/defining what we mean by "intelligence" - and that question, which is almost a philosophical one, is very far from straightforward.

If one goes back a long way in time, I think that many/most people would have said that to undertake highly complex mathematical calculations rapidly and accurately required an appreciable degree of 'intelligence' - but the advent and evolution of computers (even calculators) rather put paid to that sort of thinking.

Much more recently, tasks such as playing chess at a high level, translating from one language to another, rapidly determining the best route by road from A to B, improving the quality of images, undertaking welding accurately etc. etc. etc. would have been regarded as requiring considerable degrees of 'intelligence' - but, again, computers are now able to do all those things at least as well/quickly as humans, by approaching the tasks in a purely ('programmed') mechanical ('algorithmic') fashion.

Much more generally, the behaviour of some very lowly members of the animal kingdom (e.g. ants, bees etc. etc. etc.) could be thought to demonstrate what we might traditionally regard as 'intelligence'.

The common feature of all of these things that traditionally would be regarded as 'requiring intelligence' is that we are gradually coming to realise that they can be achieved by 'dumb' mechanical/algorithmic processes - and if one believes in a 'deterministic' world/universe, that would eventually come to apply to everything that happens in the world/universe - so the relevant question might actually not be so much "what is intelligence?" but, rather, "is there such a thing as intelligence?".

Whatever, I feel sure that we are going to continue to see tasks which we have traditionally regarded as "requiring human intelligence" being done 'algorithmically' by machines that are doing no more than what they have been designed/programmed to do.
 
As I wrote somewhere early in this discussion, one cannot sensibly talk about "What is AI" without first deciding/defining what we mean by "intelligence" - and that question, which is almost a philosophical one, is very far from straightforward.
It might be just me but I was more concerned with the meaning of 'artificial'.

I have always taken it to mean 'not real' and was quite surprised when I actually looked it up and found the fundamental meaning to simply be 'man made'.

Therefore, surely, whatever intelligence is, any 'man made intelligence' cannot be more intelligent than those designing the programmes. No one can be more intelligent than they are.

Isn't the description of anything new as AI just another scare tactic intended to worry the population?
 
A significant "algorithmic" difference though (which I would still not class as "intelligence", as it's fundamentally still programmed in), is self-learning, e.g. AlphaZero and MuZero learning to play chess, and go, and other games, without being given any knowledge of the rules, or opening or endgame strategies.
 
It might be just me but I was more concerned with the meaning of 'artificial'.
Well, yes, that's the next question, but there's not much point in wondering about the meaning of an adjective before defining the meaning of the noun that it's qualifying - so I would say that one first has to define 'Intelligence', and then move on to consider what 'Artificial Intelligence' may mean.
I have always taken it to mean 'not real' and was quite surprised when I actually looked it up and found the fundamental meaning to simply be 'man made'.
I would have thought the same as you. Many dictionaries give 'not real' (or similar) as one (but usually not 'the first') of the definitions, but there are some, like the Britannica one, that give 'not real' as the first definition. The crux of most of the definitions seems to be 'not natural' or 'not occurring in nature'. However, given your apparent belief that present-day dictionaries do not necessarily give 'correct' definitions, I'm not sure how one is meant to determine what the 'correct' definition actually is!
Therefore, surely, whatever intelligence is, any 'man made intelligence' cannot be more intelligent than those designing the programmes. No one can be more intelligent than they are.
As I've said, I don't think one can really talk about anything to do with 'intelligence' until one has defined what (if anything) it actually means. As I've also said, one even has to consider the question of whether there is actually such a thing as 'intelligence' - or whether the word is merely used to refer to thinking/behaviour of which we do not yet understand the ('algorithmic') mechanism.
Isn't the description of anything new as AI just another scare tactic intended to worry the population?
I'm sure that it is in some cases. However, as I think I've suggested, I think it's primarily a matter of 'ignorance' or 'lack of understanding', with very many people now seemingly describing anything done by a computer, or controlled by a computer, as "AI".
 
A significant "algorithmic" difference though (which I would still not class as "intelligence", as it's fundamentally still programmed in), is self-learning, e.g. AlphaZero and MuZero learning to play chess, and go, and other games, without being given any knowledge of the rules, or opening or endgame strategies.
True - but, like you, I would not really call that "intelligence", since it is essentially programmed behaviour. There's a lot of software about which 'learns by experience', but it's been programmed to do that, and how to do it.

I've used analytical software which is 'data-driven' - one tells it what data one has, and leaves the software to decide how best to analyse it. However, it makes that decision, choosing from a finite number of programmed alternatives, on the basis of programmed ('algorithmic') decision criteria - so, again, I would not call that 'intelligence'.
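To give a flavour of the sort of thing I mean (a made-up sketch, not the actual package I used - the rules and thresholds here are just illustrative assumptions), the 'decision' is nothing more than a hard-coded chain of checks choosing from a fixed menu of methods:

```python
# Made-up illustration of 'data-driven' analysis selection: a fixed menu of
# methods and hard-coded decision criteria - entirely algorithmic, no 'intelligence'.
from scipy import stats

def choose_analysis(group_a, group_b, alpha=0.05):
    """Pick a two-sample comparison from a finite set of programmed alternatives."""
    # programmed criterion 1: does each sample look normally distributed?
    normal = (stats.shapiro(group_a).pvalue > alpha and
              stats.shapiro(group_b).pvalue > alpha)
    if normal:
        # programmed criterion 2: do the two samples have similar variances?
        equal_var = stats.levene(group_a, group_b).pvalue > alpha
        name = "Student's t-test" if equal_var else "Welch's t-test"
        result = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
    else:
        name, result = "Mann-Whitney U test", stats.mannwhitneyu(group_a, group_b)
    return name, result.pvalue

# The 'choice' of method is fully determined by the rules above.
a = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8, 5.3, 5.1]
b = [5.6, 5.8, 5.5, 5.9, 5.7, 5.6, 6.0, 5.8]
print(choose_analysis(a, b))
```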

As I've said, I can't really talk very sensibly about this unless we first define what (if anything) is meant by 'intelligence' - but my personal view would be that for a computer to do something which corresponded with what most people seem to regard as 'intelligence' would require that it did something which it definitely had not been explicitly programmed to do.
 
