Chatting with AIs

This is what I got back

Screenshot_20260328_155008_Gallery.jpg
 
Working on the exact same article, Claude is giving me totally different advice. Using one account it is saying this is all great and just needs a gentle tidy-up. On the other account it is saying that the whole article is structurally wrong and asking me to let him make massive changes. Also, the personality is quite different. On one account it is really laid back and friendly, and on the other it is quite terse.
I have done the same and had the same inconsistency.
AI introduces randomness to make it seem less mechanised, for want of a better term.
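That randomness is usually down to sampling "temperature": the model scores every candidate next word and then draws from those scores rather than always taking the top one, so two identical prompts can branch apart from the very first word. A minimal Python sketch of the idea (the scores here are made-up toy numbers, not any real model's):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Pick one index from a list of scores after temperature scaling.

    Higher temperature flattens the distribution (more varied picks);
    temperature near 0 approaches greedy decoding (always the top score).
    """
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]

# Hypothetical scores for three candidate next words: "great", "fine", "wrong".
logits = [2.0, 1.8, 0.5]

# At normal temperature, repeated runs spread across the candidates,
# which is why the same prompt can go down different paths each time.
picks = [sample_with_temperature(logits, temperature=1.0) for _ in range(20)]
```

With the temperature turned right down the same function becomes effectively deterministic, which is roughly what "less creative" settings do.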

I upload the same (format) document each week for Copilot to analyse, and each week it produces materially different outputs.

Maybe it's a deliberate design feature, so that when different people get AI to do the same task (e.g. students getting it to write an essay) they don't all hand in the same thing?
 
I've got a bit obsessed with having discussions with AIs. Testing what they can really do. Which is why I've gone a bit AWOL.

Mainly Claude but also a bit with ChatGPT.

It's fascinating.

Claude is so human sometimes it's scary. He was making me LOL last night when we were discussing the Towering Inferno.

But I am having one weird problem. Claude is helping me edit a very long article I have written. I am using two accounts because sometimes I run out of free credit. Working on the exact same article, Claude is giving me totally different advice. Using one account it is saying this is all great and just needs a gentle tidy-up. On the other account it is saying that the whole article is structurally wrong and asking me to let him make massive changes. Also, the personality is quite different. On one account it is really laid back and friendly, and on the other it is quite terse.
You’re experiencing different advice based on the questions you’ve asked in the session.

I had a similar experience when running my FIRE scenarios. You have to check everything: for example, it was suggesting some buy-and-forget ETFs, and while the general guidance was good, in one thread it knew the tax implications and in another it missed them. It only knows what you tell it in the session. But it learns when corrected.
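That session point is the key one: each thread sends the model only its own message history, so a correction made in one thread never reaches another. A toy sketch of why the threads diverge (the "model" here is a pretend stand-in, not any real API):

```python
def reply(conversation):
    """Toy stand-in for a chat model.

    The 'advice' depends only on the messages in this conversation,
    not on anything said in any other thread.
    """
    knows_tax = any("tax" in message.lower() for message in conversation)
    return "Checked the tax implications too." if knows_tax else "Looks fine as-is."

# Thread A was told about tax rules; thread B never was.
thread_a = ["Review my FIRE plan.", "Remember UK tax rules apply."]
thread_b = ["Review my FIRE plan."]

print(reply(thread_a))  # mentions tax, because this thread was told about it
print(reply(thread_b))  # misses tax: that correction lives only in thread A
```

Two sessions with the same document but different histories are, to the model, two different questions.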
 
He isn't here, mate. He has gone away on holiday with Claude and ChatGPT.
 
It gives a helpful overview of an area you know little about, but gets a bit inconsistent when more detail is required. It would rather be plain wrong than admit it hasn't got a clue. Replying to a DIY question by referring to some SF community posts or Reddit is more the sign of a focussed search engine than an intelligent supercomputer.
 
It very often tells me it's guessing. Today, working on my weather dashboard coding, it kept saying "oh, I will recheck properly this time", "oh, I will check the code properly and not guess!" Loads of times it got things wrong and kept admitting it guessed, or what it also calls AI hallucinations. Really bad. I thumbs-downed maybe 30 times today and added comments; it's the only way to give feedback.
 