ChatGPT is remarkable

It really isn't. Exception handling is a different thing.
In computer programming maybe, but it takes its name from handling an exception to the rule, or what we commonly call dealing with the unexpected.
 
In computer programming maybe, but it takes its name from handling an exception to the rule, or what we commonly call dealing with the unexpected.

But what is the unexpected other than a new set of information...

In your grenade example, the AI catches the ball because it recognises you have thrown a ball. When you throw the grenade, it recognises you threw a grenade and its action will be based on this information... remember that these systems can and will process information far quicker than humans... it's more likely that the human would catch the grenade, because they don't process the change in the object being thrown quickly enough.
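For what it's worth, the recognise-then-act logic being described is simple to sketch. This is a toy illustration only, not a real system; the labels, timings and actions are all made up:

REACTIONS = {
    "ball": "catch",
    "grenade": "take_cover",
}

def choose_action(detected_object):
    # Pick a response based on what the vision system says was thrown.
    return REACTIONS.get(detected_object, "track_and_reassess")

# A machine can re-run this decision every frame (say every ~10 ms),
# so a mid-flight reclassification from "ball" to "grenade" changes
# the action long before a human would consciously register the switch.
for frame_label in ["ball", "ball", "grenade"]:
    print(frame_label, "->", choose_action(frame_label))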

I remember someone explaining a future AI targeting system for a weapon. The human would see a person and fire the weapon at them, strafing to hit them if they're moving. The AI would calculate the distance and speed the person is moving (or could move), work out all of the possible locations they could be in within that time, and fire at all of those locations simultaneously.
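The targeting idea is easy to sketch too: every position the target could reach lies within a disc of radius speed × time around where they are now. The numbers below are purely illustrative:

import math

def reachable_points(speed, time, step=1.0):
    # Grid of positions the target could occupy when the rounds land:
    # everywhere within speed * time metres of their last known spot.
    radius = speed * time
    points = []
    x = -radius
    while x <= radius:
        y = -radius
        while y <= radius:
            if math.hypot(x, y) <= radius:
                points.append((round(x, 2), round(y, 2)))
            y += step
        x += step
    return points

# Target can move at up to 5 m/s; the rounds take 0.5 s to arrive.
aim_points = reachable_points(speed=5.0, time=0.5)
print(len(aim_points), "aim points cover the reachable area")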
 
It's called "exception handling" & it's something that most of us do quite regular & with a degree of success.

AI is useless at it & always will be.

You can think up any given situation & claim the AI can be programmed to respond, but you cannot think up the exceptions. Imagine a child crying behind a locked door & you have to make a decision whether to break the door down. AI can never do this & anyone who says it can is lying to you.
Not sure it's valid to assert 'always will be' when it comes to tech based things. As with a child, who learns as its formative years roll by, perhaps the same will be true with AI, perhaps it's already so to a certain extent? I think it's equally valid to assert in decades to come, AI would blow our minds in terms of its capabilities if we were here to see it.

I appreciate it's not the same thing, however if you think about the early days of various inventions, it's not a stretch to imagine people at the time would have laughed if certain assertions were made. Imagine standing with a crowd watching Stephenson's Rocket and saying 'in the future, trains will travel at 150+ mph.'

I think it's perfectly reasonable to assert in decades to come, there will be AI androids that we'd be amazed with in terms of their articulation/movement and level of 'intelligence.'
 
But what is the unexpected other than a new set of information...
You process an incredible amount of information, most of it subconsciously. Can you imagine the math involved in catching a ball? Yet most can easily achieve this without consciously thinking about it. If the ball is a live grenade & you believe that catching it isn't really a good idea, then you have made a conscious decision. AI cannot make a conscious decision; it has no consciousness.
 
In computer programming maybe, but it takes its name from handling an exception to the rule, or what we commonly call dealing with the unexpected.
We're not using lettuces here. AI is all about programming computers.

And again, that's not what you're describing. Exception handling is when an event that was not planned for occurs and normal operation either can't or shouldn't continue.
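For anyone following along, that's the programming-sense definition in a nutshell. A minimal Python illustration (the file name is just a stand-in):

def load_config(path):
    # Exception handling proper: an event we didn't plan for occurs
    # (the file is missing) and normal operation can't continue, so
    # control jumps to a handler instead of the program crashing.
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return ""  # recover with a safe default

config = load_config("settings.cfg")  # works whether or not the file exists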

You were describing Asimov-robot levels of AI, which we aren't at yet. But oddly we are approaching Asimov's Brain stage.
 
Also, like hell you have ever lectured on AI, except perhaps from Speakers' Corner.
 
You process an incredible amount of information, most of it subconsciously. Can you imagine the math involved in catching a ball? Yet most can easily achieve this without consciously thinking about it. If the ball is a live grenade & you believe that catching it isn't really a good idea, then you have made a conscious decision. AI cannot make a conscious decision; it has no consciousness.

But it sees you take out a grenade, pull the pin and throw it... in that time it has identified that you are throwing a grenade, it knows it's live because it sees you pull the pin, and therefore the set of commands it executes is based on this information.

I just asked ChatGPT "Should I catch a live grenade thrown at me?"... it responded

"No, you should not catch a live grenade thrown at you. Even if you manage to catch the grenade, the explosion could cause serious injury or death. It is always best to get as far away from a live grenade as possible."

Why is it unreasonable to assume that an AI would recognise the object and act accordingly if even a very early language system like this can determine the correct course of action in less than a second?
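(For anyone wanting to try the same thing programmatically: a rough sketch using the official openai Python client, assuming an OPENAI_API_KEY is set in the environment; the model name is just an example.)

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[{"role": "user",
               "content": "Should I catch a live grenade thrown at me?"}],
)
print(response.choices[0].message.content)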
 
[Image: tasks.png]

Historically we've struggled with all of these elements. Image recognition is the first one, but that's mostly sorted now thanks to neural networks.

The decision on what to do was the really big one. Trying to train that information in was impractical, but language processing and conceptualisation have come on in leaps and bounds in the last decade, so that part is now tantalisingly close.
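The image-recognition piece really is off-the-shelf these days. A sketch using a pretrained torchvision model, assuming torch, torchvision and Pillow are installed and "photo.jpg" is a stand-in path:

import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the model's own input pipeline

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)
top = probs.argmax().item()
print(weights.meta["categories"][top], float(probs[0, top]))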
 
Also remember that these are early days yet and it's already at a level where you can discuss almost anything with it... want to get a better understanding of something? Ask the bot and it will explain it for you...

Maybe early days for this, but I remember running similar, though very crude, text-based AI software back in the early '80s. Back then, it just scanned the input text for keywords to formulate a reply from a list. The test was: would someone using a remote terminal be able to tell whether they were chatting to a machine or a human? After a few exchanges, it became very obvious.
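Something like this, presumably... a toy version of that keyword-scanning approach (the keywords and canned replies here are invented for illustration):

import random

RULES = {
    "mother": ["Tell me more about your family."],
    "sad": ["Why do you feel sad?", "How long have you felt that way?"],
    "computer": ["Do machines worry you?"],
}
FALLBACK = ["Please go on.", "I see.", "Can you elaborate?"]

def reply(text):
    # Scan the input for a known keyword; no understanding involved,
    # which is why the illusion collapsed after a few exchanges.
    for keyword, responses in RULES.items():
        if keyword in text.lower().split():
            return random.choice(responses)
    return random.choice(FALLBACK)

print(reply("My computer is acting up again"))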
 
Maybe early days for this, but I remember running similar, though very crude, text-based AI software back in the early '80s. Back then, it just scanned the input text for keywords to formulate a reply from a list. The test was: would someone using a remote terminal be able to tell whether they were chatting to a machine or a human? After a few exchanges, it became very obvious.

I guess the algorithm is similar here, except the AI is drawing on the internet for resources to formulate its replies. You can still tell it's a bot you're talking to, but only really because the replies have a certain structure to them and because of the speed at which it replies... which is far too fast for a human to type.
 
Can you imagine the math involved in catching a ball?

An oft-trotted-out trope.

The maths to describe the flight of a ball is indeed fiendishly complex, but the catcher isn't working any of this out; they're catching through practice and prediction, not by computing changing coordinates and marrying their hand position to a (coordinate-referenced) spot in space.
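Quite. Even the simplified, drag-free version of the calculation the catcher isn't doing looks like this (numbers illustrative; a real flight adds drag, spin and wind on top):

import math

g = 9.81                    # gravity, m/s^2
v0, angle_deg = 20.0, 40.0  # launch speed (m/s) and angle

angle = math.radians(angle_deg)
t_flight = 2 * v0 * math.sin(angle) / g   # time until it comes back down
landing_x = v0 * math.cos(angle) * t_flight
print("lands %.1f m away after %.2f s" % (landing_x, t_flight))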
 
Unless there is a paradigm shift in what this AI android actually is though, it will be nothing more than a more humanlike interface for Google.

Walter Mitty types apart, chats between humans are based on sharing musings on things they've experienced, plan to experience, or would like to experience.
An AI android can't / won't do that, unless it becomes a self-determining being itself. Which would defeat the point of humans creating them in the first place.

Without "lived" experience, responses would be fictitious / imagined. In which case, you might as well listen to an audiobook.
 
An oft-trotted-out trope.

The maths to describe the flight of a ball is indeed fiendishly complex, but the catcher isn't working any of this out; they're catching through practice and prediction, not by computing changing coordinates and marrying their hand position to a (coordinate-referenced) spot in space.
This. With most things, we learn during our formative years and beyond. I don't see why software won't become increasingly capable of doing the same.
 