Development

Sir Isaac Newton and Gottfried Leibniz, mathematicians and philosophers of the 17th century, both contributed to our modern understanding of calculus but, remarkably, did not collaborate on the development of the idea, as Newton lived in England and Leibniz in Germany. (For much of their lives, Newton, Leibniz, and the mathematics community feuded over who truly discovered calculus.)

The invention of calculus marked the culmination of centuries of mathematical studies dating as far back as the ancient Egyptians. A few prominent figures before Newton and Leibniz, including Newton’s own teacher Isaac Barrow, drew remarkably close to defining the methods of calculus, but did not quite reach the goal. Preceded by mathematical visionaries such as Rene Descartes and Blaise Pascal, Newton and Leibniz had the tools to initiate this new study, of which the derivative was the base.

Newton viewed calculus in terms of motion – appropriate, considering that he investigated the mathematics as part of his studies in physics. He began exploring what would become calculus during his years at Cambridge University, but engaged particularly with it when the college closed in the wake of the Great Plague of London (1665-1666). He partially published his findings in 1693, and in full in 1736 in his Method of Fluxions (written, in fact, in 1671), where “fluxion” was the term he used for “derivative.” As he considered calculus a representation of motion, in his research the fluxion of a variable such as x stood for its velocity.

By contrast, Leibniz considered calculus a representation of sums and differences. From 1661 to 1663 Leibniz studied philosophy and mathematics at the University of Leipzig, but his dedicated mathematics research did not begin until 1672 when, visiting Paris, he met Christiaan Huygens, a Dutch mathematician who exposed him to work on summing series, areas under curves, infinitesimals, and other ideas. Beginning in 1673, Leibniz attempted to develop calculations and notation for his differential calculus. His Nova Methodus pro Maximis et Minimis, itemque Tangentibus, published in 1684, presented his mature findings.

Though they explored similar ideas, Newton’s and Leibniz’s methods and notations differed. It would take some time to describe the specific methods they applied, so suffice it to say that both approaches involved infinitesimals, as discussed in Part 1, which mathematicians eventually developed into limits. As far as notation goes, students generally use Leibniz’s more often than Newton’s, both for its ease of reading and for its applicability to more advanced concepts of differentiation.

While Newton denoted the derivative of the function f with a dot over the variable, as in $\dot{f}$, Leibniz wrote it as a ratio of differentials, $\frac{df}{dx}$.

Definition

I mentioned in Part 1 that a study of calculus leans on a study of limits because we define the derivative and integral in terms of limits. Why is this?

A derivative is, in essence, the slope of a tangent line to a curve. In fact, Leibniz’s notation suggests just that, as the change in y over the change in x is the equation for the slope of a line. To find the slope of a curve, we could start by drawing a secant, a line that crosses two points on the curve, as shown below:

This is the graph of the function y = f(x). Say we wanted to find the slope of the curve at x = -1.5, the point at the left green dot. The slope found by dividing the change in y by the change in x of the two green points on the blue secant line would give us an approximation of the desired slope, but to find a more exact slope we would need less change in our x values; that is, we don’t want the second green dot as far away. Moving that second dot closer to the point of interest (x = -1.5) would decrease the change in x, notated Δx (“delta x”). When Δx reaches 0, the secant becomes the tangent to the curve at the point x = -1.5.
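This convergence of secant slopes can be checked numerically. The curve in the graph isn’t specified here, so as an illustration take f(x) = x² (an assumed stand-in) and shrink Δx at x = -1.5:

```python
# Secant slopes approaching the tangent slope as Δx shrinks.
# f(x) = x**2 is an illustrative stand-in for the curve in the figure.
def f(x):
    return x ** 2

x0 = -1.5  # point of interest (the left green dot)

for dx in [1.0, 0.1, 0.01, 0.001]:
    slope = (f(x0 + dx) - f(x0)) / dx  # Δy / Δx for the secant
    print(f"Δx = {dx:>6}: secant slope = {slope}")
```

As Δx shrinks, the printed slopes approach -3, the exact slope of the tangent to x² at x = -1.5.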

However, simply plugging 0 in for Δx in the slope equation Δy/Δx would give us an undefined value. Thus, to calculate the slope of the tangent line in this fashion, we want to find the value the slope takes as Δx approaches 0, but does not reach it. Thus arises the limit definition of the derivative:

$$f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}$$

This works given that y = f(x), so f(x+Δx) – f(x) is ${y}_{2} - {y}_{1}$, which is also Δy. Basically, the derivative is Δy/Δx of some point on a curve, evaluated, in the limit, at the point itself.
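In code, this limit can be approximated by evaluating the difference quotient at a small but nonzero Δx. A minimal sketch (the function name is my own):

```python
import math

def difference_quotient(f, x, dx=1e-6):
    """Approximate f'(x) by the slope Δy/Δx of a short secant."""
    return (f(x + dx) - f(x)) / dx

# The approximation tracks known derivatives:
# d/dx sin(x) = cos(x), so at x = 0 the slope should be near cos(0) = 1.
approx = difference_quotient(math.sin, 0.0)
print(approx)  # close to 1.0
```

Shrinking dx further improves the approximation, up to the point where floating-point round-off in f(x + dx) - f(x) begins to dominate.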
