Sorry - it was meant to be a joke about writing code which did not do what it was supposed to do (what I would have sworn blind I had written it to do) but instead did things I'd swear blind I had not written it to do.
Ah! Sorry from this side, too, because, like ebee, I had not twigged that that was what you meant.
Yes, of course, we've all written code which doesn't (completely or partially) do what we intended it to do and/or which does things that we didn't intend it to do - but, as you go on to say, that's absolutely nothing to do with (the presence or absence of) intelligence on the part of the machine running that (imperfect) code!
Indeed, on the contrary, if the machine were intelligent enough, and if it knew what one was trying/intending to achieve, it might 'point out' that there was something wrong with the code!
But on a more general note, there are always problems arising from unintended consequences of "writing code" which is "supposed" to do something, and does indeed do it, but in doing so causes things to happen which were "not supposed to". The fault lies with the designers/writers of the code, not within the code itself. I can see the potential for massive cock-ups with AI from that.
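A small-scale illustration of that kind of thing (a hypothetical Python sketch of my own, not anything from the code being discussed): a function that really does do what it was "supposed" to do, yet quietly causes something that was "not supposed to" happen as a side effect.

```python
def top_three(scores):
    """Intended behaviour: return the three highest scores."""
    scores.sort(reverse=True)  # does the job, but sorts the CALLER's list in place
    return scores[:3]

data = [10, 50, 30, 20]
best = top_three(data)
# 'best' is correct, exactly as intended...
# ...but 'data' has now been reordered too: a side effect nobody asked for,
# and the fault lies with the writer of top_three, not with the machine.
```

The function passes any test that only checks its return value, which is precisely why such unintended consequences slip through.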
Well, that can happen if a machine is functioning 'incorrectly' (in relation to 'intentions'), whether that be the result of an error in programming or an error in some intrinsic/autonomous behaviour of the machine - and I'm not sure that is necessarily any greater a risk with "AI" ...
... but, again, we first have to decide what we mean by "I", hence decide what we would mean by "AI", and then decide whether anything we currently have even approaches qualifying as "AI" by that definition - and I'm far from convinced that we have yet done (or, at least, 'completed') any of that deciding, or that we yet have anything (or anything much) that I would be happy to call "AI".