
We are being replaced

I run an annual quiz, and occasionally more during the course of the year. As it happens, this year I've run six, and I commented to a friend about the difficulty of preparing 600 questions. He suggested ChatGPT, so I gave it a try and asked it to produce a geography round. Apparently the county town of Derbyshire is Derpy (yes, I have spelt that correctly) and Kiev is in Russia.

Quite honestly, I don't bother clicking on the AI results in Google, as much of it is simply copied from Wikipedia or wrong.
 
Quite honestly, I don't bother clicking on the AI results in Google, as much of it is simply copied from Wikipedia or wrong.
It seems that the "AI Overview" resulting from a Google search is very often a copy, or near-copy, of material from one of the other (non-AI) Google 'hits' - so it may well be 'no worse' than what one would be looking at if the 'AI Overview' were not there!
 
I suppose if we investigate properly how humans become so intelligent, then we might be able to think about creating computers that have AI. Have we achieved that yet? I am starting to think NO. Do we imagine a number of possible answers to our own questions, then trawl through our answers to work out which result looks best? Do we then store the answer(s) in the library in our brains, ready for next time?

Do we truly get a computer to do all of that, or do we really imagine it ourselves and get the computer to calculate it very much faster than we can?
Tell a computer to send out signals to connected servos to build a wall, and we have to tell it where to place the first brick, then the second, when to stop horizontally, then lift vertically, then reverse the procedure, then stop again, go up, reverse, etc., and finally when to stop. Quite a complicated little prog, cos we have to be quite exact, but it is something we easily do each time.
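That brick-by-brick procedure can be sketched as a short program. Everything here is purely illustrative (the function name, the millimetre brick sizes, and the coordinate "servo signal" stand-in are all made up for the example), but it shows how every stop, lift and reversal has to be stated explicitly:

```python
# Illustrative sketch of the wall-building routine described above.
# The programmer must spell out every step; the machine merely executes them.
def build_wall(rows, bricks_per_row, brick_len=215, brick_h=65):
    """Yield (x, y) placement coordinates (in mm) for each brick, row by row."""
    for row in range(rows):                    # lift vertically, one course at a time
        cols = range(bricks_per_row)
        if row % 2 == 1:                       # reverse the horizontal procedure
            cols = reversed(range(bricks_per_row))
        for i in cols:
            # stand-in for "send signal to connected servo"
            yield (i * brick_len, row * brick_h)

placements = list(build_wall(rows=3, bricks_per_row=4))  # 12 bricks, then stop
```

Even this toy version makes the point: the exactness all comes from the programmer, and the machine contributes no judgement of its own.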

We can watch a "Big Dipper" in motion, travelling up/down and back/forth, and train our eye to centre and focus on each individual occupant in any order we choose, and we can "imagine" the next turn or even an impossible action. How do we do that? Is it intelligence?

AI is great for Arnie's Terminator films, but can it actually become reality?
 
It seems that the "AI Overview" resulting from a Google search is very often a copy, or near-copy, of material from one of the other (non-AI) Google 'hits' - so it may well be 'no worse' than what one would be looking at if the 'AI Overview' were not there!
Precisely - but requesting something specific from it seems to result in so many errors that it makes it unusable.
 
I suppose if we investigate properly how humans become so intelligent, then we might be able to think about creating computers that have AI. Have we achieved that yet?
As I've said, I think we first have to decide what we mean by 'intelligent', in human beings or anything else.

As I've said, it's not that long ago that most people would have agreed that the ability to play chess at the highest level required considerable 'intelligence' - but that idea has really gone out of the window since computers have been able to do the same, solely on the basis of 'programming' (i.e. 'algorithmically').

I suppose one could say that the humans doing the programming have to be intelligent - but I don't see how/why the machine that merely executes the programmed algorithms can be said to be exhibiting any intelligence, do you?

Kind Regards, John
 
Can I ask you to define "explicitly" in your context please?
Really? That's interesting. Could you perhaps give an example of the sort of thing you're talking about?

Sorry - it was meant to be a joke about writing code which did not do what it was supposed to do, what I would have sworn blind I had written it to do, but instead did things I'd swear blind I had not written it to do.

But on a more general note, there are always problems arising from unintended consequences of "writing code" which is "supposed" to do something, and does indeed do it, but in doing so causes things to happen which were "not supposed to". The fault lying with the designers/writers of the code, not within the code itself. I can see the potential for massive cock-ups with AI from that.

Probably a very good example wasn't actually software (which is why I put terms in " " above), but was nevertheless an algorithm, with programmed rules.

In the (1974?) oil crisis in the USA the government was looking for ways to reduce consumption, and one idea they had was to impose maxima and minima on what temperatures govt offices were allowed to have their thermostats set to (minima because of the use of AC).

So to save energy, people were "programmed" to implement the rule that in summer you could not have the thermostat set to less than 70°F.

In Alaska the heating came on.
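The rule in that story can be written out as a few lines of code, which makes the loophole obvious. Only the 70°F minimum comes from the anecdote; the function itself is just an illustration of the programmed rule:

```python
def thermostat_action(season, room_temp_f, setpoint_f):
    """Apply the 'energy-saving' rule: in summer the setpoint may not be below 70 F."""
    if season == "summer":
        setpoint_f = max(setpoint_f, 70)   # enforce the minimum, aimed at over-cooling AC
    if room_temp_f < setpoint_f:
        return "heating on"                # nothing in the rule forbids this
    if room_temp_f > setpoint_f:
        return "cooling on"
    return "idle"

# A 55 F summer day in Alaska: the rule forces the setpoint up to 70,
# so the heating comes on and *more* energy is used, not less.
```

The rule does exactly what it was written to do; the unintended consequence lies with its designers, not the machine executing it.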
 
Sorry - it was meant to be a joke about writing code which did not do what it was supposed to do, what I would have sworn blind I had written it to do, but instead did things I'd swear blind I had not written it to do.
Ah! Sorry from this side, too, because, like ebee, I had not twigged that that was what you meant :-)

Yes, of course, we've all written code which doesn't (completely or partially) do what we intended it to do and/or which does things that we didn't intend it to do - but, as you go on to say, that's absolutely nothing to do with (the presence or absence of) intelligence on the part of the machine running that (imperfect) code!

Indeed, on the contrary, if the machine were intelligent enough, and if it knew what one was trying/intending to achieve, it might 'point out' that there was something wrong with the code!
But on a more general note, there are always problems arising from unintended consequences of "writing code" which is "supposed" to do something, and does indeed do it, but in doing so causes things to happen which were "not supposed to". The fault lying with the designers/writers of the code, not within the code itself. I can see the potential for massive cock-ups with AI from that.
Well, that can happen if a machine is functioning 'incorrectly' (in relation to 'intentions'), whether that be the result of an error in programming or an error in some intrinsic/autonomous behaviour of the machine - and I'm not sure that is necessarily any greater a risk with "AI" ...
... but, again, we first have to decide what we mean by "I", hence decide what we would mean by "AI", and then decide whether anything we currently have even approaches qualifying as "AI" by that definition - and I'm far from convinced that we have yet done (or, at least, 'completed') any of that deciding, or that we yet have anything (or anything much) that I would be happy to call "AI".
 
Another example of unintended consequences - although again not really flagged as AI:

Various "driver assist" functions are now mandated in new cars, and one is "lane assist", which on the face of it has the laudable aim of stopping dozy drivers from drifting out of their lane.

But when implemented by a tw@ who doesn't realise that sometimes it's quite legitimate to drive on the "wrong" side of the road for miles, i.e. way outside any allowed exception for overtaking, drivers can end up having to fight the car's attempt to pitch them into the roadworks or Armco when they are in a contraflow.


I heard, a year or so ago, about bias issues with generative AI - if you asked for images of men, most of them came fully clothed, but images of women didn't, because of the enormous amount of porn/underwear/swimwear images out there.


These sorts of things don't highlight problems with semi-autonomous cars, or AI per se - they highlight problems with people not implementing the tools properly, not thinking about what might happen if you just turn something loose.
 
These sorts of things don't highlight problems with semi-autonomous cars, or AI per se - they highlight problems with people not implementing the tools properly, not thinking about what might happen if you just turn something loose.
Quite so- but, as you say, not really anything to do with "AI".
 
I reckon that if we try to describe "What is intelligence?" then we might be here for quite some time - this thread might still be running in a few thousand years' time, or till the end of the universe even!
 
I don't think it actually matters what 'intelligence' specifically is.

The worry is that 'man' will design machines which are more intelligent than he is.

Is this possible? I would say obviously not.

That this is being promoted now as if it has already happened is the really worrying aspect - just more brain-washing to dupe the masses.
 
I reckon that if we try to describe "What is intelligence?" then we might be here for quite some time - this thread might still be running in a few thousand years' time, or till the end of the universe even!
Quite so. Although it has wide-ranging relevance and implications, it's essentially a philosophical question.

However, I would also say that it's a crucial question. It really has no meaning, and makes no sense, to be thinking/talking about "artificial X", "more/less X" or whatever until we have decided what "X" is, does it?
 
