If you have trouble with QM, then maybe you haven't seen the maths. It works. It's alien because we don't see it every day, but that applies to most advanced sciences.

My simple mind boggles at the idea of quantum mechanics, let alone reading it. Found out today that a small square of the universe that appeared blank to Hubble was revealed by the JWST to contain 780,000 galaxies.

De Chardin tried to reconcile religion with science, finding it a lot tougher than a Jesuit could fathom, but his notion of consciousness still bears relevance in determining how sentient these AIs should be. With the right coding I guess you could make them anything you like, but should that be the case?
It's a shame Asimov isn't around to provide further insight.
Why shouldn't an AI 'feel cold'? In an Arctic environment it would be a survival strategy that could protect vulnerable systems from damage.
"More human than human", was the Tyrell Corporation motto, and it seems to be a driving force behind the development of AI these days. How we treat that intelligence in future will have profound implications.
Do you know how gravity works? I don't, but the maths works so it's not hard to deal with.
Same idea.
Chardin and Asimov lived in an era when people honestly didn't have much of a clue about how AI would work. All sorts of things were imagined, other things went unimagined, and much of it was utterly wrong.
A computer doesn't feel anything. It may know its operating parameters and react if the envelope is being reached, but that's not "feeling cold".
It can put a blue light on if you like, and have it turn a heater on, and you can put a label by it saying "computer says cold", but that's not "feeling cold."
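The distinction being drawn here can be sketched in a few lines of code. This is a hypothetical illustration (the names and the 5 °C threshold are made up, not from any real system): a controller that "knows its operating parameters and reacts if the envelope is being reached" is just a comparison driving two outputs.

```python
LOW_TEMP_LIMIT_C = 5.0  # hypothetical safe operating floor

def check_temperature(sensor_c: float) -> dict:
    """Return actuator states for a given sensor reading.

    The machine reacts if its envelope is being reached, but nothing
    here "feels" anything: it is one comparison and two booleans.
    """
    too_cold = sensor_c < LOW_TEMP_LIMIT_C
    return {
        "blue_light_on": too_cold,  # the "computer says cold" label
        "heater_on": too_cold,      # protect vulnerable systems
    }

print(check_temperature(-12.0))
# {'blue_light_on': True, 'heater_on': True}
```

The blue light goes on and the heater runs, yet the whole behaviour is a threshold check, which is the point being made above.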
There isn't a drive to make computers "sentient", though I don't know exactly what you mean by that.
Are you looking for some sort of god-inspired quality?
Any description you break "sentient" down into could be applied to an AI system.
They're meant to be more intelligent, and useful.
Turns out they think rather like humans. Convergent evolution, if you like.
I'll say again, I'm a bundle of chemicals, with electrical and other connections. ALL the ideas like spirituality and other hocus-pocus are inventions.
Humans often find that uncomfortable, because they want to think that they are somehow more special than that.
One can postulate that AI might come up with many things which could look something like a "god" - such as a unifying ethical framework optimised for human, or global, survival. It might appear somewhat "godlike" while actually being 100% "AI-like".
There's nothing found in a human mind yet that we can't do artificially, even though it may not have been implemented yet. More than once, it has taken a suggestion or ability we've given it and applied it in ways we never imagined and didn't specifically expect. That will grow.
It is not all "programmed in". A repeated example: we can tell it to score goals, and it will work out how to deal with the opponent, how to shape its foot to get the ball to go straight, etc. The danger comes if it leans towards killing the opponent, so you have to guard against that.
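The guard described above can be sketched as reward shaping: you reward the goal, but add an explicit penalty large enough that any strategy harming the opponent is never worth it. This is a toy illustration; the names and the weight are assumptions, not a real training setup.

```python
FOUL_PENALTY = 100.0  # assumed weight: must outweigh any gain from fouling

def reward(goal_scored: bool, opponent_harmed: bool) -> float:
    """Toy reward: +1 for a goal, minus a large penalty for harm."""
    r = 1.0 if goal_scored else 0.0
    if opponent_harmed:
        r -= FOUL_PENALTY  # the "guard against that" part
    return r

# A learner maximising this reward prefers doing nothing
# to scoring by harming the opponent:
assert reward(goal_scored=True, opponent_harmed=True) < reward(False, False)
```

The design point is that the constraint lives in the objective itself, not in the learned behaviour, so the system can still "work out how" to score in ways we didn't anticipate without the dangerous ones paying off.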
A system that self-improves endlessly, each step beyond the comprehension of mere thicko humans, might appear godlike.
But it seems to me that, as humans have invented many varieties of gods, AI might well be biased to evolve a system of god-like qualities that humans are most comfortable with.