
Artificial intelligence

Are you using AI?

  • Yes, a lot, including for complex problems and information, or using a paid subscription - Votes: 5 (35.7%)
  • Quite a bit, but only for simple lookups - Votes: 1 (7.1%)
  • Lightly, around once or twice a week - Votes: 4 (28.6%)
  • Have used it once or twice - Votes: 0 (0.0%)
  • What's AI? - Votes: 4 (28.6%)

  Total voters: 14
I'm putting it under Hobbies because I use it for interest, not for work.

It hasn't been mentioned in the forum much, which is perhaps odd for something that is about to take over our lives.

Its rate of improvement is extraordinary. There are significant steps forward every month. ChatGPT v5 is currently out in front, particularly as the excellent base version is FREE. Sam Altman's OpenAI produced it, so it's a competitor to products from Google, Microsoft, Apple (few), Tesla etc.

Anywhere that facts are king, it is better than humans, or will be soon. Like medicine: it's better at diagnosis than a GP, and it knows more about pharmacy, drug interactions, operations, and all the other fields where a GP would call on a specialist.

For anything computational, no contest at all.
In law, it knows every statute and case, and can analyse and compare their relevance to a situation you suggest.
If you want an Excel sheet with data you provide or pull off the net, you can have it analysed and plotted, no problem.
You want a computer program? Done in a flash. Coders will be redundant.
Jobs are already vanishing, in predictable areas.

We are entering the age where there will always be a general superintelligence to consult, and we'll produce kids who will never have known life without it.

It's astonishing that the vast majority of the population have almost no idea what's going on, or have never used it.

What do you use it for? I have a few things, but that's enough to read.
 
I wouldn't know what AI was even if it jumped up and bit me on the 'arris ;) .. :LOL: With the few years I probably have left, I can't see AI bothering me.
 
I think you may be in the majority - let's see - if you'd like to vote?
I think for almost any hobby or interest, it could give you interesting info.

I'm pretty sure you will have unwittingly used it.



Perhaps this would be useful:

The easiest way to access it is possibly your phone.
Download ChatGPT from wherever you get apps - the App Store, Play Store, etc.
You can type in questions and hit the up-arrow to send them, or you can press the button with differently sized vertical lines to talk to it.
It'll reply by the same method.

It says it doesn't store your questions or "use" them, though I can see a list of what I've asked. Perhaps there's a setting for that. The list is useful if a further question occurs to you and you don't want a full repeat.

I have turned off the human-like interjections like "ah", which I found annoying.
 
There was a time when people used to say Wikipedia is going to take over.

Has it? No.

Will ChatGPT be any different? Probably not.

However, a problem with AI is using it to do exams or assignments. You would be surprised at how many students use AI and then paraphrase it.
 
It'll be totally different, I'm sure you'll find.
The spending on AI now is about the same as the GDP of Sweden or Poland. This year it'll probably match that of Brazil or Italy.

You won't be able to do anything without hitting it. Do you ever use a chatbot on a website? AI.
Use a GPS? AI. Used a face-changing "filter" on a phone camera? etc.

A friend's a teacher. You can submit the answers kids give, for AI to tell you if they're likely to be plagiarised. It has inside knowledge!

I know some stuff about AI and how it works. But for the most technical of discussions, there are loads of terms etc I have no clue about.
One you have to understand is "inferencing". Ask AI about it.



For intros and more, Geoffrey Hinton was an early designer. His YouTube videos are very helpful.
A Google search on "geoffrey hinton youtube" will show a lot of different lengths. I recommend the 90-minute one with the Dragon's Den guy. In it, he describes how one of the most able developers has now devoted himself to limiting the damaging effects. We don't know much about how to do that, yet. The F1 car needs brakes...
In a few years, AI's intelligence will stand in relation to ours as ours now does to a rabbit's.
There's nothing spooky about the brain. We could replace synapses and the rest with bits of electronics and it would behave in exactly the same way.
We have discovered a model of how it works, which seems to be close enough to support valid predictions of how to make it better - that's an important test.
Everyone on the planet without prior knowledge will have trouble grasping how it works. I'm only a little way up the learning curve. Some people will not be able to grasp much at all - that's just the way it is, and why the best people are getting $1bn salaries.
 
ChatGPT (probably other AI tools) has a problem in that it cannot reliably tell fact from fiction. It admits it if you ask it. So its responses aren't guaranteed to be accurate.
 
ChatGPT (probably other AI tools) has a problem in that it cannot reliably tell fact from fiction. It admits it if you ask it. So its responses aren't guaranteed to be accurate.
True. They're putting a lot of effort into that. It's supposed to be much improved in the latest ChatGPT.
Using a previous version, I asked which were the 6 biggest companies reporting their results after the end of trading today.
It gave 6 names.
I asked about Apple, which it had missed out.
Yes, that was at the same time. It admitted it had got it wrong.
Humans get things very wrong, too, of course. They "hallucinate" as it's called, as well. They can come up with crazy ideas like "god".

If you get down to the limit of its knowledge, it starts repeating itself and waffling. It's hard to get there, though.


Google Lens, or identifying a song: AI?
Both very much so, yes.

It's worth reading/listening to a bit about how recognition works.
Everything is "layers". One layer would reconise shapes in the image, like angles, like cat's ears. The collection of recognised shapes goes to a layer which checks for what might have those shapes. So possibly cat, possibly yacht. It chases down the possibilities , matching with maybe millions of images it knows were "cat".
Tasks can go through hundreds or thousands of layers, at the moment. That will grow.. The more it has learned, the more it's sure it's looking at a cat not a yacht.
If there's a furry yacht out there with 4 outriggers, it would have to look closer.
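
If a few lines of code help, here's a toy Python sketch of the layers idea - the shapes, scores and labels are all invented for illustration; a real network learns its own, and has millions of them:

# Toy sketch of the "layers" idea. The shape counts, weights and labels
# are made up; real networks learn these, nobody types them in.

def shape_layer(pixels):
    # pretend this layer has spotted some low-level shapes in the image
    return {"triangle": 2, "curve": 5, "mast": 0}

def object_layer(shapes):
    # combine the shape evidence into scores for candidate objects
    return {
        "cat":   3 * shapes["triangle"] + 2 * shapes["curve"],
        "yacht": 4 * shapes["mast"] + 1 * shapes["curve"],
    }

def classify(pixels):
    shapes = shape_layer(pixels)
    scores = object_layer(shapes)
    return max(scores, key=scores.get)   # pick the best-scoring label

print(classify("imaginary image data"))  # -> cat (score 16 beats yacht's 5)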

The Training and Inferencing processes are different. I heard a few explanations before I'd really got it. Then you can match those to how a baby learns.
The easiest way to learn a bit, is to ask AI. You can easily make it clarify things.
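
A minimal sketch of the difference, if code is your thing - the "model" here is a single weight and the numbers are made up, nothing like the real scale:

# Training: loop over examples and nudge the weight to shrink the error.
# Inferencing: the weight is frozen, just apply it to new input.

examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # pretend (input, correct answer) pairs

weight = 0.0
for _ in range(200):                   # training
    for x, y in examples:
        error = weight * x - y
        weight -= 0.01 * error * x     # small step in the direction that reduces the error

def infer(x):                          # inferencing: nothing changes any more
    return weight * x

print(round(weight, 2))                # ends up close to 2
print(round(infer(10.0), 1))           # so roughly 20 for a new input of 10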

I asked it how a single human cell, a fertilized egg, gets to be all the different cells in a body via the copying cell reproduction process (mitosis) which you would have done at O level. I knew a fair bit about how, but it soon went beyond that. I kept asking deeper questions, and eventually it was waffling.
One of the AIs on the phone, Perplexity perhaps, shows you illustrations. The others don't. Dunno why. Bandwidth, I think.

The thing to bear in mind is that these things aren't improving x2 in 2 years (Moore's law for computer memory); x2 might be next month's iteration. They teach themselves. Every now and again a brilliant person has an idea which shifts gear (DeepSeek did that), which is where the $billion salaries come in. DeepSeek did it by altering the number of categories used in the inferencing layers to be more efficient.

x2 every month is x4,096 in a year, x16.7 million in 2 years, x2.8 × 10^14 in 4 years... and in ten years...
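
If you want to check that arithmetic, it's just powers of two:

# Doubling every month: the factor after n months is 2**n
for months, label in [(12, "1 year"), (24, "2 years"), (48, "4 years"), (120, "10 years")]:
    print(label, f"x{2 ** months:.3g}")
# 1 year   x4.1e+03    (4,096)
# 2 years  x1.68e+07   (about 16.7 million)
# 4 years  x2.81e+14
# 10 years x1.33e+36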
 
Both very much so, yes.
Not necessarily. Song and image recognition services existed for years before what we refer to as AI was developed enough to be useful. Unless you're flexing the definition of AI to include Shazam's original algorithm from the pre-smartphone days.
 
Not necessarily. Song and image recognition services existed for years before what we refer to as AI was developed enough to be useful. Unless you're flexing the definition of AI to include Shazam's original algorithm from the pre-smartphone days.
That may well be, but is it not an early form of AI?
 
They would be using AI now, but when that description started, and for what, is a bit fuzzy.
You get definitions like: AI is "used for that which would typically require human intelligence".
Where do you draw the line? I'm not going to worry about it!
It has hit us like a brick with the release of the Large Language Models, so we can tell it what we want and have the answer in English. Before that they would have used numbers, spat the results out as graphs, etc. Same processing, but missing the English layers, I guess.
I think it coincided with the advent of neural nets and parallel processing, but someone may be along to say different.
 
That may well be, but is it not an early form of AI?
You're flexing your definition to suit your argument there, I think.

I wouldn't describe the original Shazam as AI, no. It reduced audio data to very short clips and picked out salient frequencies, using the order of the observations to form a signature for sections of every song. Any song fed in to be identified would have the same process applied and, to some extent regardless of how terrible the input audio was, it would end up with the same set of salient frequencies, which could be matched as a number-counting exercise.
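
To show the flavour of that, here's a toy Python version of the fingerprint-and-count idea - to be clear, not Shazam's actual method, and the frequency data is invented:

# Each song is reduced to short ordered runs of "salient frequencies".
# An unknown clip gets the same treatment; the best overlap count wins.

database = {
    "Song A": [440, 523, 440, 659, 587],   # made-up frequency sequences
    "Song B": [220, 330, 220, 494, 392],
}

def fingerprint(freqs, run=3):
    return {tuple(freqs[i:i + run]) for i in range(len(freqs) - run + 1)}

def identify(clip_freqs):
    clip = fingerprint(clip_freqs)
    scores = {name: len(clip & fingerprint(freqs)) for name, freqs in database.items()}
    return max(scores, key=scores.get)

print(identify([523, 440, 659, 587]))   # -> Song A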
 
Yeah, the U and the Y ....

The AI that we have (to suffer) today is "intelligent" because the hardware is extremely fast and the "intelligent" computer can test many millions of "ideas" until it finds an "idea" that matches the question that has been asked.
Not really Bernard, it's not just that it's fast. If that were all, it would always come out with the same answer. It doesn't. It does things in a qualitatively different way. It does things in a way it wasn't specifically intended to.
If you put two AI-driven human-like bots on a football pitch and say the objective is to get the ball in the net and stop your opponent doing so, they spend time doing nothing much that works. After a while they're playing football. They don't have a "play football" program.
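
A very crude sketch of that trial-and-reward idea, nowhere near a real football simulation - one made-up number stands in for the whole "policy", and the score is all the agent ever sees:

# The agent tries random tweaks to its policy and keeps whatever scores better.
# Nobody programs the answer in; it only ever sees the reward.

import random

TARGET = 7.3                         # invented goal the reward function likes

def reward(policy):
    return -abs(policy - TARGET)     # closer to the goal = higher reward

policy = 0.0
for _ in range(2000):
    candidate = policy + random.uniform(-0.5, 0.5)   # try a random variation
    if reward(candidate) > reward(policy):           # keep it only if it scored better
        policy = candidate

print(round(policy, 1))   # ends up near 7.3, found purely through the reward signal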


You're flexing your definition
What definition is that?!
I asked a couple of AIs what the definition was - very woolly answers.
A lot is about matching patterns, for sure. When I was a kid (ok, sub 25) we were writing programs which looked clever because they'd ignore the less significant and concentrate on the more so, to get a result. They were just going round loops and using IF statements. They were working out what they needed to do. But they did things like that because they were programmed to.
People would ask "how can it know?".
The answer then was "how would YOU work it out?". We have more of that now - it's like it's on steroids - but it's not just bigger and stronger.
The difference is that AI finds out in ways we would not imagine.
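
For contrast, the sort of thing we were writing back then - all hand-typed rules, example invented:

# A loop and hand-written IF rules that look clever because they weight
# the significant readings and ignore the rest. Every rule came from a human.

readings = {"temperature": 31, "humidity": 82, "wind": 4}

def advice(readings):
    if readings["temperature"] > 30:      # rules a person decided were the important ones
        return "Open a window"
    if readings["humidity"] > 90:
        return "Run the dehumidifier"
    return "Do nothing"                   # everything else gets ignored

print(advice(readings))   # -> Open a window, but only because we wrote that exact rule
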
Now, if you watch some YouTube videos - Geoffrey Hinton's interviews are good - it becomes clear that we can't expect to know what the computer is doing.

Jensen Huang (Nvidia boss) is a massive intellect. He describes what the Nvidia system is, which takes him half an hour, and you realise what a whole ecosystem it is; then he describes how it works, and I start to get lost. As an aside, it explains more about the chips-to-China thing. Once they are using the ecosystem, if the new chips fit it, it's a hell of a commitment to turn to Huawei, whatever the basic speed is.
They're monsters, in many ways.
 