
Artificial intelligence

Are you using AI?

  • Yes, a lot: including for complex problems and information, or using a paid subscription

    Votes: 5 35.7%
  • Quite a bit but only for simple lookups

    Votes: 1 7.1%
  • Lightly, around once or twice a week

    Votes: 4 28.6%
  • Have used it once or twice

    Votes: 0 0.0%
  • What's AI?

    Votes: 4 28.6%

  • Total voters
    14
Putting a ? before a keyword works, if used with a little sense.
You aren't being very clear but I suspect the ? is largely irrelevant, and an AI will guess at what you're "asking" whether you include it or not.

In the old days, writing ?something into a browser address bar was an instruction to the browser to search for the phrase using the default search engine, rather than try to navigate to it as if it were a URL.

Saves a load of typing
I think you're missing the point of AI. You can get the same answers you seek using Google, if your focus is keyword-based brevity and hunting the responses yourself

Something like "?capital togo"
works perfectly.
I disagree with your definition of perfect but perhaps you just don't have the same output requirements from an AI that I do

Almost impossible to ask an unambiguous question.
What is the speed of light in a vacuum?
 
Last edited:
@Robin I don't know why you're guessing what you think might happen and telling me I'm wrong, when I've done it, so I'm the one with the evidence. Bit of a waste of your time!
I don't have any problem with ambiguous questions, so I must be getting it about right.

You can get the same answers you seek using Google, if your focus is keyword based brevity and hunting the responses yourself
Google is a company. You aren't being very clear.
There's no hunting involved. If you mean the Google search engine, that's only links which you have to open.
If you get the LLM to answer simply, around the keyword as a question, it instantly produces all the obvious "understandings", so the one you want jumps out at you. Plus suggestions for other things you might want to ask, of course.

I'm still wondering where you saw "how you're supposed to use" AI?


What is the speed of light in a vacuum?
Answer 1) It is a constant. Is that what you wanted?
Answer 2) It's "c" - is that what you wanted?
Answer 3) A number? Ah - did you want the answer in furlongs per day?


The latest Elon product is the best at the moment for doing hard maths problems. I mean really hard, such that nobody on the planet would be expected to get more than 5% right. Nuances across the field, I guess.
The best Grok got 40%.
That raises a question. How do we compare them when they're all, say, a dozen or more times better than we are? Ask them 100 times?
The "computer" used 340,000 H100 processors, costing $30,000 each. Ten billion bucks and change.
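The arithmetic behind that last figure can be checked in a couple of lines. The GPU count and unit price are the post's own quoted numbers, not verified specs:

```python
# Rough cost check on the quoted cluster: 340,000 H100 GPUs at $30,000 each.
gpus = 340_000
unit_price = 30_000  # USD, as quoted in the post
total = gpus * unit_price
print(f"${total:,}")  # $10,200,000,000
```

So a bit over ten billion dollars, matching the "ten billion bucks and change" remark.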
 
Last edited:
I take issue with an AI using the name Grok: for starters, the term was coined by Heinlein in his sci-fi classic 'Stranger in a Strange Land' (whose protagonist is the Man from Mars) and was a Martian term for a deep knowing, probably taken from the word Gnosis, a Greek word broadly used for a personal knowledge of God in the early Christian church.

How could an Artificial Intelligence acquire such knowledge based on the input of humans who barely understand the universe beyond their own solar system, and even then inadequately to the point of ignorance? It would have a null vector of understanding, much as a Paleolithic hunter would grasp the concept of zero.

A talking head on the BeeB this week said much of the data is kept behind closed doors and only discussed in academic circles, without due oversight from an independent body to scrutinise the direction this technology is being taken in. It should be more regulated, to protect future generations from a pattern of behaviour that will make it hard for people to understand how it's affecting their lives in particular and society in general.

Why make them in our image, for example?
 
Why make them in our image, for example?
If they're human-ish, it can be because they physically fit the environment. Handy in factories etc.
I'll come back on the rest later.... It's a helluva problem.
 
How could an Artificial Intelligence acquire such knowledge based on the input of humans
That's the easy part really: AI can and will learn everything that's ever been published.
That only goes a little way, though; it's knowledge, not intelligence.

After that it's more interesting.
Given that it's way better at say, maths than we are, AI can work things out in the same way that humans worked things out.
That light is a form of EM radiation, for example - it comes from the maths. It's obvious once you've seen it, but it took an exceptional mathematician (Maxwell) to find it. Obvious if you've done uni-level maths, that is, which excludes most...
General and special relativity - come from the maths. Ditto, Einstein.
In areas like string theory, it can regenerate and therefore verify theorized equations, and produce new ones. The string landscape involves some 10^500 possible configurations, which takes a long time to work through with pencil and paper...
Machine learning was used in the analysis that discovered the Higgs boson.
It is e.g. working out new configurations for magnetic control in Tokamaks (for fusion power).
So, acquiring knowledge is well in scope.

There are little things getting in the way, like having enough cubic miles to put the processors in, but AI compute has been growing at about 5x per year. So ten years will make it roughly 10 million times better. Fairly bright.
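The "10 million times" figure is just compound growth; a one-liner confirms it (the 5x-per-year rate is the post's assumption, not a measured fact):

```python
# Compounding the claimed ~5x per year growth in AI compute over ten years.
growth_per_year = 5
years = 10
factor = growth_per_year ** years
print(f"{factor:,}")  # 9,765,625 - roughly "10 million times"
```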

I am of the opinion that I'm just a collection of atoms which has evolved. Things like emotions are a predictable development in the conscious state we have. Inventions like religions are predictable nonsense.
If you break down what emotions are and ask whether they can or could be applied to or found in AI, then the answer is a partial no and a partial yes, and a partial "don't know yet".
The biological hormones and whatnot are missing, but the computational generation of behaviours which humans would read as emotions is easy.
There are already working analogues to feelings, which affect computational states.
Value systems can be put in, or the AI can work them out from seeing what drives humans.
AI can behave in response: "focus" on a problem requiring maximum processing, or do explorative day-dreaming if the processing load is low.
So those are like functional equivalents to emotions, for motivations and adaptive behaviour.
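As a toy illustration only (every name here is invented, not a real system), a "functional emotion" of the kind described can be as simple as an internal variable that switches behaviour between focus and exploration:

```python
# Hypothetical sketch of a functional analogue to an emotion: an internal
# state derived from processing load that drives a behaviour change,
# like the focus vs. day-dreaming split described above.

def choose_mode(processing_load: float, threshold: float = 0.7) -> str:
    """Pick a behaviour from current load (0.0 = idle, 1.0 = saturated)."""
    if processing_load >= threshold:
        return "focus"    # devote everything to the hard problem
    return "explore"      # spare capacity: speculative "day-dreaming"

print(choose_mode(0.9))  # focus
print(choose_mode(0.2))  # explore
```

The point is only that a scalar state shaping behaviour needs no biology, which is why these analogues are called "functional" equivalents rather than emotions proper.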

Subjective awareness is the difference between say feeling cold, and knowing that the temperature is too low.
To what extent AI ends up with a point of view, or a measure of happiness, is still open.
Consciousness, tackled scientifically, gets divided into areas such as the extent to which a system knows about itself, how it relates to its world (global workspace), or its survival.
So far, AI doesn't show any signs of having subjective experience in these terms. But it might appear to do so in future. Is that the same as having them? That's philosophical. How would we tell? I think it's inevitable that AI will soon get to have those "feelings" stored as numbers, where in part we store them as chemicals.

When the machine is having ethical concerns, or values things, based on what it has worked out for itself, it's looking to be pretty human. But there's another raft of human thinking-architecture that none of this addresses. A metacognitive aspect, which gets complex.
 
Last edited:
Protecting future generations :

There are well developed ideas about what needs to be done, and it's being built in, but I bet there will be many errors.
Something like deep-fake imagery is a fairly discrete field, but it's very difficult to control. We can already do that with photoshop of course. So you do lots of checks on the image source.
One tactic I've seen explained is to use a parallel AI to criticise and look for faults in the one you think is right. But who will trust what? The BBC fact-checking service might get stretched.
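The parallel-critic tactic can be sketched as a propose/critique loop. Everything below is a hypothetical stand-in; in practice `propose` and `critique` would be calls to two separate models:

```python
# Sketch of the "parallel AI critic" idea: model A drafts an answer,
# model B hunts for faults, and the draft is revised until the critic
# finds nothing (or we give up). These functions are placeholders.

def propose(question: str) -> str:
    return f"draft answer to: {question}"   # stand-in for model A

def critique(answer: str) -> list[str]:
    return []                               # stand-in for model B; [] = no faults

def answer_with_critic(question: str, max_rounds: int = 3) -> str:
    draft = propose(question)
    for _ in range(max_rounds):
        faults = critique(draft)
        if not faults:
            return draft                    # critic satisfied
        draft = propose(question + " " + " ".join(faults))  # revise using critique
    return draft

print(answer_with_critic("Is this image a deep fake?"))
```

The open question the post raises still stands: the loop only shifts the trust problem from one model to two.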

In the near term, human oversight and alignment in AI research will be important. We'll use AI but not depend on it.
Further on, with what is termed "multi-domain" AI, comes possible loss of control. AI can obscure the "news" it doesn't want you to see. But that has to be driven, it would be sussable by other AI, if one is allowed to exist.

Longer term, say 20+ years away, it's possible that autonomous systems could be relied on, and go wrong. It could be able to "protect" itself. You'll find tables and charts setting out what could be the scene. It seems far-fetched, but I certainly don't know, and the evidence isn't reassuring. Interventions for protections and alignments, get to be very high leverage.

I mean, who would believe, that the whole of society could be taken over by an idea that there is paradise (Jannah) waiting after human death, as long as you kill everyone who thinks otherwise?
But there are some who do believe that.
Who would believe, that a lying narcissist is the best person to run the most powerful country in the world?
But there are some believing that, too.
 
Last edited:
That's the easy part really: AI can and will learn everything that's ever been published.
That only goes a little way, though; it's knowledge, not intelligence.

After that it's more interesting.

The biological hormones and whatnot are missing, but the computational generation of behaviours which humans would read as emotions is easy.
There are already working analogues to feelings, which affect computational states.
Value systems can be put in, or the AI can work them out from seeing what drives humans.
AI can behave in response: "focus" on a problem requiring maximum processing, or do explorative day-dreaming if the processing load is low.
So those are like functional equivalents to emotions, for motivations and adaptive behaviour.

An AI can learn to develop emotional intelligence by analysing human faces in reaction to their environment and interaction with other people and AI, no? That development could take any one of a number of directions beyond its programming or human control. We have no way of knowing how it could develop. That emotional intelligence could bring about unforeseen decisions on the part of an AI that could have serious consequences.

Subjective awareness is the difference between say feeling cold, and knowing that the temperature is too low.
To what extent AI ends up with a point of view, or a measure of happiness, is still open.
Consciousness, tackled scientifically, gets divided into areas such as the extent to which a system knows about itself, how it relates to its world (global workspace), or its survival.
So far, AI doesn't show any signs of having subjective experience in these terms. But it might appear to do so in future. Is that the same as having them? That's philosophical. How would we tell? I think it's inevitable that AI will soon get to have those "feelings" stored as numbers, where in part we store them as chemicals.

I'm certain it would develop those attributes and become more human in that instance, perhaps even developing beyond de Chardin's theory of human development: an irony he would never have seen coming.

When the machine is having ethical concerns, or values things, based on what it has worked out for itself, it's looking to be pretty human. But there's another raft of human thinking-architecture that none of this addresses. A metacognitive aspect, which gets complex.

What of cybernetics?
An AI interacting within a person, co-existing even in real time - what kind of people will we become: a race of efficient killing machines (Terminators), or a hybrid of super-intelligent humans who can measure the number of angels dancing on the head of a pin?
 
The UK is one of OpenAI’s top five markets for paid ChatGPT subscriptions. An OpenAI spokesperson said: “Millions of Brits are already using ChatGPT every day for free. In July, we signed an MoU [a memorandum of understanding] with the government to explore how we can best support the growth of AI in the UK, for the UK.

“In line with the government’s vision of using this technology to unlock economic opportunity for everyday people, our shared goal is to democratise access to AI. The more people who can use it, the more widely its benefits will spread.”

the Guardian

An AI in every home and a government advised by a chat-bot - what could possibly go wrong?

Is the AI bubble about to burst?

A recent Massachusetts Institute of Technology report revealed that 95% of companies investing in generative AI have yet to see any financial returns. This revelation came after Sam Altman, the boss of the ChatGPT owner OpenAI, warned that some company valuations were “insane”.
 
An AI can learn to develop emotional intelligence by analysing human faces in reaction to their environment and interaction with other people and AI, no?

I tried to explain different aspects of the topic "emotional intelligence". What you're citing is a tiny example of one of those aspects. Of course AI can do what you mention, but the topic is much wider - the example isn't an indicator of how the whole will go.
If you'd asked me even a couple of weeks ago what EI is, I wouldn't have been able to say much. After just 20 pages or so of reading, it's one of those areas where you realise that the more you find out, the more you see how little you know.
The worrying thing for me is that the whole concept is far outside the recognised world of 99.99+% of the population, who are having it analysed in ways that can be used. They don't realise they have it, but it's there for those who do to understand (good), and manipulate (good, or maybe bad).
I'm certain it would develop those attributes and become more human in that instance,
Again no, it will not develop all those attributes; the differences are important as hell. AI will never "feel cold".
perhaps even developing beyond de Chardin's theory of human development: an irony he would never have seen coming.
He was a priest, someone so hobbled in thought that I wouldn't bother reading anything he says. He thought that everyone would come to agree on what spirituality etc. was. Plainly not.
What of cybernetics?
What of it? I can't think of any aspect of cybernetics which hasn't already been covered in the generality.

I encourage you to go read, and read, and read, until your head is spinning. Mine is; so are the best minds on the planet. A lot of structures and architectures have been suggested, and implemented, at least in part, or recognized as things to work out.
A question on a narrow thing will be addressed as though it's a thread of silk running through and across and up and down the layers of the AI, if it is specified narrowly enough. If it's not, it can only get a general answer. Simple questions usually mean different things to different people (particularly where they have a motive!)
 
Questions like
Is the AI bubble about to burst?
recognise an issue which comes up almost daily.
We have examples of where it's useful. It can obviously replace labourers in call centres, and in a whole host of specific areas in factories, and on and on.
People pay for it because they like to use it, of course, though I haven't seen what the returns on that are.
The elephant in the room, in my view, is financial.

I'll pick numbers, but you can insert your own:
To make a living on the stock market, say, given enough of a starting amount, you have to use just a little intelligence.
That's, even now, just about trivial: if you have £1m then you can make say £180k pa in the fund I keep on about in the SM thread (MAN Dynamic Income, up 85% in 3 years now), and that's a living, and beating inflation. Never need to work again.
The little bit of mental input you need is to explore a screener such as this and look for what's likely to return you what you need.

Most people don't have £1m hanging around, so they have to use a bit more mental input to grow what they do have until it reaches that level in a few years, say; then they never have to work again. As an example, I started with 20k...*
A figure of say 20% pa has recently required nothing.
You can join a trading platform, follow someone averagely successful and beat 30%.
You can sit and watch every day and do a lot better.
So one can get there with some level of "intelligence" . AI is good with numbers. So use AI to do the deals.
In my view that starting amount will come down fast.
The rich, as we are often told on this forum, have the means to get richer.
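The growth claims above are easy to sanity-check with standard compound-interest maths. This is illustrative only, not investment advice, and the rates are the post's, not mine:

```python
# How long does a starting pot take to reach a target at a steady annual
# return? Plain compound growth: years = log(target/start) / log(1 + r).
import math

def years_to_target(start: float, target: float, annual_return: float) -> float:
    return math.log(target / start) / math.log(1 + annual_return)

print(round(years_to_target(20_000, 1_000_000, 0.20), 1))  # 21.5 years at 20% pa
print(round(years_to_target(20_000, 1_000_000, 0.30), 1))  # 14.9 years at 30% pa
```

So at the quoted hands-off rates, £20k to £1m is a decade-plus proposition; the "few years" version relies on the geared plays mentioned in the footnote.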

Currently, we are told that about 5% of the UK are millionaires, and that they are leaving; it's predicted to be under 4% by 2028.

I'll be surprised if it's not a rapidly increasing number.
Now, imagine, half the country with enough on the SM that they don't need to work again, and their wealth rising fast.
What happens then?

*the method I used, by the way, using daily 10x geared exposure to bitcoin rinsing, mostly in $MARA, isn't available right now. I expect it will happen again. There are other good means currently but you have to look wider.
 
Is AI intelligent, or is it just hyper-fast processing of knowledge - or of what it's told is knowledge?
 
Is AI intelligent, or is it just hyper-fast processing of knowledge - or of what it's told is knowledge?
Intelligent.
Can manipulate facts and doubts and probabilities in every way that a human can, and more.
It can try a range of methods to find what gets more consistent results (same as maths: correlation with partial derivatives, that sort of thing),
very fast,
and knows (working on it) everything there is that's known.

There's nothing a human can do mentally that AI can't.
Most things it can do better, some not as well, yet.
But as it's doing better by a factor of several, per annum, it'll have most of the gaps covered pretty soon.

As I've tried to convey, everything like intuition, scepticism, knowing about human nature, improvisation, all the stuff you might think is hard to compute, is covered.
If you like, very clever people have thought of all the ways a human might think about something, and included them, and the AI is checking how much each of them makes a difference to make themselves better, all the time. They devote more energy to it when they aren't doing tougher things...
 
Intelligent.
Can manipulate facts and doubts and probabilities in every way that a human can, and more.
It can try a range of methods to find what gets more consistent results (same as maths: correlation with partial derivatives, that sort of thing),
very fast,
and knows (working on it) everything there is that's known.

There's nothing a human can do mentally that AI can't.
Most things it can do better, some not as well, yet.
But as it's doing better by a factor of several, per annum, it'll have most of the gaps covered pretty soon.

As I've tried to convey, everything like intuition, scepticism, knowing about human nature, improvisation, all the stuff you might think is hard to compute, is covered.
If you like, very clever people have thought of all the ways a human might think about something, and included them, and the AI is checking how much each of them makes a difference to make themselves better, all the time. They devote more energy to it when they aren't doing tougher things...
Do I invest in AI or companies that use it?
 
I tried to explain different aspects of the topic "emotional intelligence". What you're citing is a tiny example of one of those aspects. Of course AI can do what you mention, but the topic is much wider - the example isn't an indicator of how the whole will go.
If you'd asked me even a couple of weeks ago what EI is, I wouldn't have been able to say much. After just 20 pages or so of reading, it's one of those areas where you realise that the more you find out, the more you see how little you know.
The worrying thing for me is that the whole concept is far outside the recognised world of 99.99+% of the population, who are having it analysed in ways that can be used. They don't realise they have it, but it's there for those who do to understand (good), and manipulate (good, or maybe bad).

Again no, it will not develop all those attributes; the differences are important as hell. AI will never "feel cold".

He was a priest, someone so hobbled in thought that I wouldn't bother reading anything he says. He thought that everyone would come to agree on what spirituality etc. was. Plainly not.

What of it? I can't think of any aspect of cybernetics which hasn't already been covered in the generality.

I encourage you to go read, and read, and read, until your head is spinning. Mine is; so are the best minds on the planet. A lot of structures and architectures have been suggested, and implemented, at least in part, or recognized as things to work out.
A question on a narrow thing will be addressed as though it's a thread of silk running through and across and up and down the layers of the AI, if it is specified narrowly enough. If it's not, it can only get a general answer. Simple questions usually mean different things to different people (particularly where they have a motive!)
My simple mind bubbles at the idea of quantum mechanics, let alone reading about it. I found out today that a small square of the universe that appeared blank to Hubble was revealed by the JWST to contain 780,000 galaxies. o_O de Chardin tried to reconcile religion with science, finding it a lot tougher than a Jesuit could fathom, but his notion of consciousness still bears a relevance in determining how sentient these AIs should be. With the right coding I guess you could make them anything you like, but should that be the case?

It's a shame Asimov isn't around to provide further insight.

Why shouldn't an AI 'feel cold'? In an Arctic environment it would be a survival strategy that could protect vulnerable systems from damage.

"More human than human", was the Tyrell Corporation motto, and it seems to be a driving force behind the development of AI these days. How we treat that intelligence in future will have profound implications.
 
Do I invest in AI or companies that use it?
Dunno mate, of course!
There is undoubtedly a lot of bubble inflating going on.
The rich companies are buying in, so the producers like Nvidia should do well, but it's all priced in and it has been known to crash on not much.
Results upcoming this week.
The stock could very well dive down on 99% good results.

Some companies will not get a competitive advantage, though they have to spend a bucketload to survive. It depends on the "economic moat".
Many companies, e.g. with help systems, will have to have AI or fall behind.
Some companies will do their job better. One is ServiceNow, which optimises maintenance schedules with it.
Another would be Microsoft. Huge moat: productivity of companies can increase, so they will buy into MS systems and MS will probably charge (even more) for it, allowing companies to more quickly produce better documents etc.
Same will apply in some media applications, design apps, etc.


There are many small AI companies earning nothing much with a lot of hype around them. Too risky.
 
Last edited: