Got it!
Right...
Let a & b be equal to each other, both with a value of 1.
Since a & b are equal:
b^2 = ab - equation 1 (just multiply both sides of b = a by b; for non-mathsy people, 'ab' means 'a' multiplied by 'b', and 'b^2' means 'b squared', i.e. 'b' multiplied by 'b')
Since 'a' equals itself, it is obvious that
a^2 = a^2 - equation 2
Subtract equation 1 from equation 2. This gives
(a^2) - (b^2) = (a^2) - ab - equation 3
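(If you like, here's a quick Python sanity check of equation 3 with our values a = b = 1. Purely illustrative, not part of the proof:)

    a = b = 1
    # equation 3: (a^2) - (b^2) should equal (a^2) - ab when a = b
    print(a**2 - b**2 == a**2 - a*b)   # prints: True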
We can factor both sides of the equation:
(a^2) - ab equals a(a - b)
Likewise:
(a^2) - (b^2) equals (a + b)(a - b) - non-mathsy people might struggle with this jump, but it is true: it's the 'difference of two squares' (expand the brackets and the ab terms cancel; try plugging in numbers if you aren't sure, or see the quick check below)
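Here's a quick Python check of that difference-of-two-squares identity using a few arbitrary number pairs (any values will do; again, this is just an illustration):

    # difference of two squares: a^2 - b^2 = (a + b)(a - b)
    for a, b in [(5, 3), (10, 7), (2, 2), (1, 1)]:
        assert a**2 - b**2 == (a + b) * (a - b)
    print("a^2 - b^2 == (a + b)(a - b) for every pair tested")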
Substituting these into equation 3, we get
(a + b)(a - b) = a(a - b) - equation 4
So far, so good....?
Now divide both sides of equation 4 by (a - b) and we get
a + b = a - equation 5
Which means that:
b = 0 - equation 6
But we set b to 1 at the very beginning of this proof, so this means that
1 = 0 - equation 7 (oh dear!!)
So if 0 = 1, then 2 + 2 = 2 + 2 + 0 = 2 + 2 + 1 = 5, i.e. 2 + 2 = 5 (as we have already 'proved' that 0 = 1, we can sneak an extra 1 in anywhere we like)
And by multiplying both sides of 1 = 0 by any number you like, 'conceptually' any number is equal to zero.
That's also the reason why dividing by zero is 'undefined': allowing it lets you 'prove' nonsense like this. And a division by zero is exactly what is happening in this proof... can you spot where?
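(Spoiler: if you'd rather let the computer find the culprit, here's a minimal Python sketch. With a = b = 1, the value of (a - b) is zero, so the step from equation 4 to equation 5 is a division by zero:)

    a = b = 1
    print(a - b)   # prints 0, so dividing by (a - b) means dividing by zero
    try:
        (a + b) * (a - b) / (a - b)   # the step from equation 4 to equation 5
    except ZeroDivisionError:
        print("Can't do it: (a - b) is zero, so equation 5 was never valid")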
So, as I say, it's a flawed proof, but kind of interesting I suppose...