I asked ChatGPT to give a proof of the Theorem of Pythagoras. Here is what it came up with.

The Pythagorean Theorem states that in a right triangle, the square of the length of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the lengths of the other two sides. In mathematical notation:

a² + b² = c²

where "a" and "b" are the lengths of the two legs of the right triangle and "c" is the length of the hypotenuse.

To prove the Pythagorean Theorem, we can use the following method:

Draw a right triangle and label the lengths of the legs "a" and "b" and the hypotenuse "c".

Draw squares on each side of the triangle such that the area of the square on the hypotenuse is equal to the sum of the areas of the squares on the other two sides.

The area of a square is equal to the square of its side length, so we can write:

Area of square on hypotenuse = c²
Area of square on leg a = a²
Area of square on leg b = b²

According to our construction, the area of the square on the hypotenuse is equal to the sum of the areas of the other two squares:
c² = a² + b²

This is the Pythagorean Theorem, which has been proven.
Therefore, we have proven that in a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides.
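To see concretely what is missing, here is the logical content that a real proof supplies — the classic rearrangement argument, sketched in LaTeX (my addition for contrast, not part of ChatGPT's output):

```latex
% Place four copies of the right triangle (legs a, b, hypotenuse c)
% inside a square of side a + b, each triangle in one corner.
% The uncovered region in the middle is a square of side c.
% Comparing the two ways of computing the big square's area:
\begin{align*}
(a + b)^2 &= 4 \cdot \tfrac{1}{2}ab + c^2 \\
a^2 + 2ab + b^2 &= 2ab + c^2 \\
a^2 + b^2 &= c^2
\end{align*}
```

Note that the equality of areas is *derived* from the construction here, whereas ChatGPT's version simply asserts it in step 2 and then reads it back off as the conclusion.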

To understand how this could happen, we should be aware that ChatGPT is just a transformer: it transforms patterns of language into other patterns. This makes it possible to express the content of a different, possibly more elaborate text in a shorter form, or in a form that fits the current context. In this case, it seems to me that the pattern came from a simple argument for some trivial fact. That pattern was then used to condense the proof of the Pythagorean Theorem, killing all logical content along the way.

Before we look down on this approach, we should study human learning and find out if it is really different. I am not qualified to dive into this topic. But it occurs to me that toddlers produce similar "arguments" at some point in their development. They soon learn, mainly through correction by their parents, that simple rephrasing is not a coherent way to express their wishes and ideas, and they begin to incorporate logical reasoning and real arguments into their speech. AI is currently using the net to educate itself, which does not sound like the best idea to me. But I would be willing to bet that AI will take huge steps forward soon.

And then it will be able to surpass the similar "proofs" I have found in the work of some first-semester math students, who essentially use arguments like the one above, albeit more involved ones. Logical reasoning is always the most difficult skill to master in math, and the one students struggle with the longest. It is also the main asset they take away from university.
