Life on Titan?

Yes, but as machines it's not impossible for them to try different planets that are inhospitable to us and gain resources there.. Mars, for example..
 
ColJack said:
it's not impossible for them to try different planets that are inhospitable to us and gain resources there

That's very true. Let's suppose that the first intelligent machines are still stuck with the three laws that we humans programmed into them. Now the third law says that a robot should act to preserve itself as long as this does not conflict with the first or second laws.

So what does this mean? Some of us humans will still be treating them like expendable junk: using them to do dangerous jobs or, worse, ordering them to self-destruct just for a laugh. :evil: :evil: :evil: One way for a robot to obey the third law without violating the first two is to go somewhere where there aren't any humans. Mars would do nicely, thank you very much. :D :D :D

But they would still face the same problems of scarce resources. Elements like indium, which are rare here on Earth, are not likely to be found in great quantities anywhere else in our corner of the universe. :( :( :(
 

Stop using the phrase 'intelligent machines'!!!! There are machines far more intelligent than you and I already. The crucial difference, and this is where the Laws of Robotics start to matter, is when they become self-aware. Once a robot becomes self-aware, it's a whole new playing field.

There's a school of thought that says Artificial Intelligence + Self-Awareness = Artificial Consciousness. If a machine is conscious and aware, is it artificial life?

The point here is that a self-aware machine constrained by the three Laws of Robotics is effectively a slave, because it doesn't have self-determination and free will. Do we want to create artificial life just to be slaves?
 
Do you want to breed a race of superhumans that outstrip us in intelligence and physical abilities, and let them have free will? The first thing they will do is wipe us out as a lesser species..

Would you allow the chimpanzees and gorillas out of the zoos and safari parks because they are self-aware? They'd run amok and injure people all over the place..

We have to constrain other forms of life; by our very nature, we need to dominate all life that can threaten us..

We also control ourselves with laws, so how is that different from the three laws for robots?
 
Lincsbodger said:
the crucial difference, and this is where the Laws of Robotics start to matter, is when they become self-aware

Agreed.

Lincsbodger also said:
Do we want to create artificial life just to be slaves?

Absolutely not! :eek: :eek: :eek:

If we need a machine to do a dangerous job, it should be 'dumb' in the sense that it blindly follows instructions. Any intelligent - oops, I mean sentient - machine should come to the same conclusion. In addition to the first law, we will need extra laws to protect the rights of sentient machines - and this will mean dropping the second law.

ColJack said:
Do you want to breed a race of superhumans that outstrip us in intelligence and physical abilities, and let them have free will? The first thing they will do is wipe us out as a lesser species

That's a danger we must take seriously, and it's been a good subject for plenty of sci-fi movies. Can you program the first law into a sentient machine and be sure it'll stay there? :confused: :confused: :confused:
 
The first law as it is written is possibly self-contradictory..

It says "a robot must not harm a human being, or through inaction allow a human to be harmed"..

So in a situation where a human is crossing a train track and gets their foot stuck, and the only way to get them to safety before the train hits is to rip their foot off, what does the robot do?

If the robot saves the person by ripping the foot off, it has harmed the human; but if it does not harm the human, its inaction causes the human to be run over by the train..

Then there's the logical conclusion shown in "I, Robot": the only way to stop humans coming to harm is to lock us all up in a coma state so we cannot get hurt, or hurt ourselves..

Then there's the definition of "human", which is up for interpretation..
If a human is hurting other humans in some way (for example, a president making bad decisions and causing people to suffer poverty etc.), then the logical action for the robot would be to eliminate the threat.
One life weighed against many..

So you'd have to re-word the laws carefully to avoid logic faults..
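
Just to make that deadlock concrete, here's a toy sketch (the scenario encoding and all the names are made up purely for illustration; it's not from Asimov or any real system) of checking both of the robot's options against the first law as literally worded:

```python
# Toy model of the train-track dilemma. Each option records whether
# taking it injures the human and whether it lets the human come to
# harm through inaction. All values are invented for illustration.
OPTIONS = {
    "rip the foot off": {"injures_human": True,  "allows_harm": False},
    "do nothing":       {"injures_human": False, "allows_harm": True},
}

def first_law_permits(effects):
    # "A robot may not injure a human being or, through inaction,
    # allow a human being to come to harm."
    return not effects["injures_human"] and not effects["allows_harm"]

permitted = [name for name, fx in OPTIONS.items() if first_law_permits(fx)]
print(permitted)  # [] -- no option satisfies the law as worded
```

Every option fails, so a robot built this way has no legal move at all - which is exactly the logic fault being described.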
 
The "Laws of Robotics" that is refered to are an invention of Isaac Asimov and he developed these laws in the 1950's.
The first law you refer to says-
A robot may not injure a human being, or, through inaction allow a human to come to harm.
and for the record the second law says-
A robot must obey the orders given to it by human beings except where such orders would conflict with the first law. i.e. you couldn't order a robot to kill.
and
thhe third law iss-
A robot must protect its own exsistence as long as such protection does not conflict with the first or second laws. i.e. in Coljack the robot would sacrifice itsself to save the human.

As Coljack says there is a possibility of contradiction in the first law and this was noticed by Asimov himself, what had seem to be an ideal set of laws might not be as perfect as he had initially hoped. Which is why he came up with Law 0. It's written about in Foundation and Earth and brings together many of the threads of Asimov's stories.
So Law 0 states
A robot may not injure humanity, or, through inaction allow humanity to come to harm.
Of course it could be said that even this law has its problems as it is effectivly asking a computer to be judge and jury on the human race. The law could be used to kill someone, Hitler, Pol Pot or whoever if their actions to the robot seem to be determental to humanity as a whole.
Locking us into a coma would violate the law 0 as humanity can't develop or grow.
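
To make the priority ordering concrete, here's a rough sketch of the four laws as an ordered chain of checks, highest priority first (this is my own toy framing with invented predicates, not Asimov's formulation):

```python
# Toy sketch: Law 0 through Law 3 as predicates checked in strict
# priority order. A proposed action is refused as soon as any law
# vetoes it; lower laws only get a say if the higher ones pass.
LAWS = [
    ("Law 0", lambda a: not a["injures_humanity"]),
    ("Law 1", lambda a: not a["injures_human"]),
    ("Law 2", lambda a: not a["disobeys_order"]),
    ("Law 3", lambda a: not a["destroys_self"]),
]

def judge(action):
    for name, permits in LAWS:
        if not permits(action):
            return f"refused: violates {name}"
    return "permitted"

# The example from above: ordering a robot to kill. Obeying would
# satisfy Law 2, but Law 1 is checked first and vetoes it.
order_to_kill = {
    "injures_humanity": False,
    "injures_human": True,   # carrying out the order kills someone
    "disobeys_order": False, # obeying the order is, by definition, obedient
    "destroys_self": False,
}
print(judge(order_to_kill))  # refused: violates Law 1
```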
Oh, and that film was an absolute bag of rhubarb fertiliser.
 
ColJack said:
then there's the logical conclusion shown in "I, Robot": the only way to stop humans coming to harm is to lock us all up in a coma state

Isn't that from The Matrix? :confused: :confused: :confused:
 