Through elementary school and usually most if not all of middle school, mathematics in America consists of real numbers and the basic mathematical functions we can apply to them (addition, subtraction, etc.). Then, algebra steps on stage, bringing with it the familiar numbers and symbols of elementary mathematics… and the alphabet.

In most cases in algebra, the letters stand for variables (e.g. the x in y = 2x). However, two noteworthy letters do not – e and i. These letters serve not as place-holders for numbers, but as numbers themselves! For the present, we will concern ourselves with i, also called the imaginary number, and, by extension, complex numbers.

First, for those unfamiliar with the number i and complex numbers, definitions are in order.

Put simply, i is defined as the square root of -1.

i = √-1

Paired with a real number in the form a + bi, we have a complex number, where a is the real part and b is the imaginary part.
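For readers who like to experiment, Python happens to have complex numbers built in (it uses j instead of i, an engineering convention), which makes the definitions above easy to poke at. A minimal sketch:

```python
# Python's built-in complex type; j plays the role of i.
z = 3 + 4j          # a = 3 (real part), b = 4 (imaginary part)
print(z.real)       # 3.0
print(z.imag)       # 4.0

# The defining property: i squared equals -1.
i = 1j
print(i * i)        # (-1+0j)
```

Nothing here is special to Python; any language or calculator with complex-number support behaves the same way.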

This post has two parts. In this first part, we will explore the origin of imaginary and complex numbers.

Mathematics began with the geometric constructions of the ancient Greeks and the ever-familiar length, width, area, and volume that those constructions measured. As a negative measurement could not possibly exist, no one meddled in the idea of negative numbers except to deem their existence absurd and, according to particularly vehement opponents of the concept, a stain upon mathematics.

Image found on Pinterest

Mathematicians in the East had toyed for centuries with these funny negatives, though, like European scholars, they did not regard them as numbers in themselves. As late as the 18th and even 19th centuries, mathematicians held this ancient opinion of negative numbers. Still, in the 15th century negatives came more into use, and calculations for 3rd- and 4th-degree polynomials revealed a need for negative radicals.

Thus the imaginary number, named as an insult to its perceived ludicrousness, emerged at the same time that mathematicians were just acclimating to negative numbers.

Gerolamo Cardano (Photo from Encyclopedia Britannica)

Italian intellectual (and gambler – mathematicians are human, too) Gerolamo Cardano, a Renaissance thinker who contributed to the emerging study of probability, published his 1545 Ars Magna, which included solutions to the cubic and quartic equations. One calculation of his yielded a negative radical which, though he acknowledged it as the answer, he did not accept as a useful or meaningful mathematical construct.

Some brave hearts after Cardano promoted the legitimacy of imaginary numbers. In his 1572 treatise L'Algebra, Rafael Bombelli (1526-72) established rules for the arithmetic of imaginary numbers, chiefly their multiplication, including the rule that i squared equals -1. However, not until Leonhard Euler (1707-83), who worked extensively with complex numbers, and Carl Friedrich Gauss (1777-1855), who presented a clear investigation of complex numbers and of the connection between imaginary numbers and real numbers, did mathematicians really begin to accept the concept.
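Bombelli-style multiplication follows from ordinary algebra plus the single rule i² = -1: expanding (a + bi)(c + di) and collecting terms gives (ac - bd) + (ad + bc)i. A small sketch of that rule (the function name is just for illustration):

```python
def complex_mult(a, b, c, d):
    """Multiply (a + bi)(c + di) using the rule i*i = -1.

    Expanding: ac + adi + bci + bd(i*i) = (ac - bd) + (ad + bc)i.
    Returns the pair (real part, imaginary part).
    """
    return (a * c - b * d, a * d + b * c)

# (2 + 3i)(1 + 4i) = 2 + 8i + 3i + 12(i*i) = -10 + 11i
re, im = complex_mult(2, 3, 1, 4)
print(re, im)  # -10 11

# Cross-check against Python's built-in complex arithmetic
assert complex(re, im) == (2 + 3j) * (1 + 4j)
```

The cross-check at the end confirms that the hand-derived rule matches what a modern language does internally.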

Gauss and Euler (Image from Gene Expression)

Gauss popularized the graphical interpretation of complex numbers that others before him, Jean-Robert Argand in particular, had discovered. Now called an Argand diagram, this graph plots complex numbers as points on the complex plane, where the x-axis represents real numbers and the y-axis represents imaginary numbers.
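The idea behind the diagram can be stated in one line: the complex number a + bi corresponds to the point (a, b). A tiny sketch of that correspondence (the helper name is just for illustration):

```python
# Map a complex number a + bi to its Argand-diagram point (a, b):
# real part on the x-axis, imaginary part on the y-axis.
def to_point(z):
    return (z.real, z.imag)

print(to_point(3 + 4j))   # (3.0, 4.0)
print(to_point(-1j))      # sits below the real axis
```

So every complex number is just a point in the plane, which is exactly the picture the Argand diagram draws.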

If you don’t understand the diagram now, no worries. The next part will explain it.

In Part 2, we will try to understand how the complex plane works and learn about the importance and application of complex numbers.