In differential calculus, the product rule is both simple in form and high in utility. As such, it is typically presented early on in calculus courses — soon after the linearity of the derivative, in fact. Moreover, the product rule is easy to derive from first principles:

**Theorem (Product Rule):** *Let $f$ and $g$ be differentiable on the open set $U \subseteq \mathbb{R}$. Then $fg$ is differentiable on $U$, and we have*

$$(fg)'(x) = f'(x)\,g(x) + f(x)\,g'(x)$$

*for all $x \in U$.*

*Proof:* For $x \in U$, we have (by definition of the derivative)

$$\begin{aligned}
(fg)'(x) &= \lim_{h \to 0} \frac{f(x+h)g(x+h) - f(x)g(x)}{h} \\
&= \lim_{h \to 0} \frac{\big(f(x+h) - f(x)\big)g(x+h) + f(x)\big(g(x+h) - g(x)\big)}{h} \\
&= \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}\,g(x+h) + \lim_{h \to 0} f(x)\,\frac{g(x+h) - g(x)}{h},
\end{aligned}$$

under the assumption that each of these last two limits exists. This of course holds (using the continuity of $g$ at $x$), as these limits are $f'(x)g(x)$ and $f(x)g'(x)$, respectively. $\square$
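As a quick sanity check (no substitute for the proof above), one can compare a centered difference quotient for $(fg)'$ against $f'g + fg'$; the sample functions, test point, and step size below are arbitrary choices of mine.

```python
import math

def product_rule_check(f, fp, g, gp, x, h=1e-6):
    """Compare a centered difference quotient for (fg)' at x
    against the product-rule prediction f'(x)g(x) + f(x)g'(x)."""
    numeric = (f(x + h) * g(x + h) - f(x - h) * g(x - h)) / (2 * h)
    predicted = fp(x) * g(x) + f(x) * gp(x)
    return numeric, predicted

# Example: f(x) = sin(x), g(x) = x^2 at x = 1.3 (an arbitrary test point)
numeric, predicted = product_rule_check(
    math.sin, math.cos, lambda t: t**2, lambda t: 2 * t, 1.3)
assert abs(numeric - predicted) < 1e-6
```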

All in all, then, the product rule is easy to prove and easy to use. But — and this is of utmost pedagogical importance — *is the product rule intuitive?* By this proof alone, I would argue not; the manipulation of the numerator is weakly motivated, and our result falls out without reference to more general phenomena.

In this post, we’ll explore the merits of a second proof of the product rule, one that I hope presents a motivated and compelling argument as to **why** the product rule should look the way it does.

**— PART I (PRODUCTS AND CHAINS) —**

In what sense, if any, should the product rule be natural? As suggested in the introduction, the derivative is — fundamentally — a linear operator. What business, then, does the derivative have in respecting products?

In one sense, very little. To hash this thought out more fully, let $A$ be an algebra over the ring $R$. An $R$-linear operator $D \colon A \to A$ is called a *derivation* if $D$ satisfies the product rule, i.e.

$$D(ab) = D(a)\,b + a\,D(b)$$

for all $a, b \in A$. Derivations can be thought of as formal counterparts of the derivative, and in this light we make two observations: firstly, that the product rule shines as the *characteristic property* of derivations; secondly, that this holds because (and only because!) we have prescribed it.
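To make this concrete, here is a minimal sketch of a classical derivation: the formal derivative on a polynomial ring. The coefficient-list representation and helper names below are my own, not part of any standard library.

```python
def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists (index = degree)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def D(a):
    """Formal derivative: D(sum c_i x^i) = sum i*c_i x^(i-1)."""
    return [i * c for i, c in enumerate(a)][1:] or [0]

# Leibniz rule D(ab) = D(a)b + aD(b), checked on a = 1 + 2x, b = 3x^2:
a, b = [1, 2], [0, 0, 3]
lhs = D(poly_mul(a, b))
rhs = poly_add(poly_mul(D(a), b), poly_mul(a, D(b)))
assert lhs == rhs
```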

To put some of this into perspective, I’d like to compare the product rule to a second mainstay of differential calculus: the familiar chain rule. Frequently, this is taught long after the product rule, in part for the following reasons:

- While the quotient rule is perhaps most naturally a corollary of the product and chain rules, it can be (and often is) derived independently. By circumventing the chain rule, one can differentiate the trigonometric functions sooner (i.e. before returning to the chain rule). This is done in Stewart, for example.
- In a curriculum that focuses on differentiating each of the so-called “elementary functions”, the chain rule is only required insofar as it is used to derive the differentiation laws for inverse functions (e.g. the inverse trigonometric functions and either the logarithm or the exponential).

There’s also the question of proof: on a moral level, the chain rule follows from the factorization

$$\frac{f(g(x+h)) - f(g(x))}{h} = \frac{f(g(x+h)) - f(g(x))}{g(x+h) - g(x)} \cdot \frac{g(x+h) - g(x)}{h},$$

in which the first term is recognized as $f'(g(x))$ and the latter as $g'(x)$ (in the limit $h \to 0$). Unfortunately, it may be the case that $g$ fails to inject in any neighborhood of $x$, in which case our “moral proof” falls short.

*Remark: This is no more than a technical obstruction: for $h$ such that $g(x+h) = g(x)$, we simply replace our left-most difference quotient by $f'(g(x))$. (This all works by continuity of $g$.)*

Despite this obstruction, our moral proof of the chain rule is elegant in form and obvious in execution. As one might expect, this simplicity has categorical significance: the chain rule encodes precisely the fact that the derivative (and its generalizations) gives a functor from the category of differentiable manifolds to the category of vector bundles.
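The factorization above can be illustrated numerically. In the stdlib-only sketch below (my own choices of $f$, $g$, and test point), $g'(x) \ne 0$, so $g$ injects near $x$ and the moral proof applies directly:

```python
import math

x, h = 0.5, 1e-6
f, g = math.sin, lambda t: t**3 + t   # g'(x) != 0, so g injects near x

# Factor the difference quotient for f∘g as in the moral proof:
dq_fg = (f(g(x + h)) - f(g(x))) / h
first = (f(g(x + h)) - f(g(x))) / (g(x + h) - g(x))   # -> f'(g(x))
second = (g(x + h) - g(x)) / h                         # -> g'(x)

assert abs(dq_fg - first * second) < 1e-12   # the factorization is exact
assert abs(first - math.cos(g(x))) < 1e-5    # first factor near f'(g(x))
assert abs(second - (3 * x**2 + 1)) < 1e-4   # second factor near g'(x)
```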

**— PART II (LOGARITHMS) —**

By now, I hope that this post has made two opinions clear: that the derivative is fundamentally a *linear *object, and that the chain rule respects this linearity in ways that the product rule does not. This motivates our present interest in logarithms, as a method to turn products into sums. As it turns out, we’ll need just one Lemma:

**Lemma:** *Let $\log$ be defined on the open set $(0, \infty)$. Then $\frac{d}{dx}\log(x) = \frac{1}{x}$ for all $x > 0$, and $\log(xy) = \log(x) + \log(y)$ for $x, y$ (provided that $x, y > 0$).*

*Remark:* In most usual definitions of the logarithm, one of these statements will be obvious. If the logarithm is defined as an anti-derivative of $1/x$, for example (making our first assertion tautological), then a result due to Saint-Vincent (1647) implies that $\log(xy) = \log(x) + \log(y)$. On the other hand, it is also common to first define the logarithm as inverse to the exponential (which gives the stated functional equation), and prove that $\exp$ equals its own derivative. *(This, in turn, can be used to define $\exp$.)*
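Both halves of the Lemma are easy to spot-check numerically; this is a sanity check only, and the test points and step size are arbitrary choices of mine.

```python
import math

# Functional equation: log(xy) = log(x) + log(y)
x, y = 2.0, 3.5
assert abs(math.log(x * y) - (math.log(x) + math.log(y))) < 1e-12

# Derivative: (log)'(x) = 1/x, via a centered difference quotient
h = 1e-6
dq = (math.log(x + h) - math.log(x - h)) / (2 * h)
assert abs(dq - 1 / x) < 1e-9
```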

We are now primed to present a second proof of the product rule. Regrettably, we must finally break the symmetry we’ve created between the product rules for differentiable (resp. complex-differentiable, i.e. holomorphic) functions defined between subsets of $\mathbb{R}$ (resp. subsets of $\mathbb{C}$).

**Proposition:** *Suppose that $f$ and $g$ are differentiable and non-vanishing on the open set $U$. Then $fg$ is differentiable on $U$, and we have*

$$(fg)'(x) = f'(x)\,g(x) + f(x)\,g'(x)$$

for all $x \in U$.

*Proof:* Let $V$ be a connected component of $U$. If $f$ and $g$ are functions of a real variable, we may assume by continuity that $f, g > 0$ on $V$ (negating $f$ or $g$ if necessary; the general case then follows by linearity). Then $\log(fg) = \log(f) + \log(g)$ on $V$, and implicit differentiation gives

$$\frac{(fg)'}{fg} = \frac{f'}{f} + \frac{g'}{g},$$

in which we have used the chain rule and our Lemma. Our result follows by clearing denominators.

In the complex case, the fact that $f$ and $g$ are non-vanishing throughout $U$ gives the existence of local branches of the logarithm. With these branches, our proof carries through as in the real case. $\square$
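Back in the real case, the logarithmic identity driving this proof can be checked numerically for positive $f$ and $g$; the sample functions and test point in this stdlib-only sketch are arbitrary.

```python
import math

# f(x) = e^x, g(x) = x^2 + 1: both positive everywhere
x, h = 1.2, 1e-6
f, fp = math.exp, math.exp
g, gp = lambda t: t**2 + 1, lambda t: 2 * t

# d/dx log(fg) via a centered difference quotient ...
lhs = (math.log(f(x + h) * g(x + h))
       - math.log(f(x - h) * g(x - h))) / (2 * h)
# ... versus f'/f + g'/g, as given by the chain rule and the Lemma
rhs = fp(x) / f(x) + gp(x) / g(x)
assert abs(lhs - rhs) < 1e-8
```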

As it stands, this version of the product rule has been artificially weakened by the hypothesis that $f$ and $g$ be non-vanishing on $U$. In this sense, I would compare it to our (somewhat incomplete) proof of the chain rule — an elegant proof with some technical holes swept under the rug.

On the other hand, this gap is not so hard to fill: borrowing some intuition from perturbation theory, we are led to consider functions of the form $f + \varepsilon$ and $g + \varepsilon$, in which the perturbation $\varepsilon$ (a non-zero constant) is chosen such that $f + \varepsilon$ and $g + \varepsilon$ become locally non-vanishing (about a fixed point in the domain of differentiability of $f$ and $g$). Then

$$\big((f+\varepsilon)(g+\varepsilon)\big)' = (f+\varepsilon)'(g+\varepsilon) + (f+\varepsilon)(g+\varepsilon)' = f'g + fg' + \varepsilon f' + \varepsilon g'$$

by our Proposition. On the other hand, linearity of the differential gives

$$\big((f+\varepsilon)(g+\varepsilon)\big)' = \big(fg + \varepsilon f + \varepsilon g + \varepsilon^2\big)' = (fg)' + \varepsilon f' + \varepsilon g'.$$

It follows that $(fg)' = f'g + fg'$, after cancellation, i.e. the product rule.
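The perturbation trick can be seen in action at a point where $f$ and $g$ both vanish (so the Proposition does not apply directly). In the sketch below, $f(x) = x$ and $g(x) = \sin x$ at $x = 0$ with $\varepsilon = 1$; all choices are arbitrary.

```python
import math

eps, x, h = 1.0, 0.0, 1e-6
f, fp = lambda t: t, lambda t: 1.0
g, gp = math.sin, math.cos

# f + eps and g + eps are non-vanishing near x = 0, so the Proposition
# applies to their product; difference quotient for ((f+eps)(g+eps))':
pert = ((f(x + h) + eps) * (g(x + h) + eps)
        - (f(x - h) + eps) * (g(x - h) + eps)) / (2 * h)

# Subtract the linear terms eps*f' + eps*g' to recover (fg)'(x):
recovered = pert - eps * (fp(x) + gp(x))
expected = fp(x) * g(x) + f(x) * gp(x)   # = 0 at x = 0
assert abs(recovered - expected) < 1e-6
```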

**— PART III (THE PROBLEM WITH RINGS) —**

And now, for some last-minute abstract nonsense:

Having seen these two proofs, it’s obvious why our first proof dominates the classroom, despite the haunting simplicity of line (1). Less obvious — and far more troubling — is the inherent difficulty in relating additive and multiplicative constructs (cf. the Goldbach and *abc* Conjectures), a thorn in the side of number theorists and algebraists the world over.

When multiplication and addition *do* behave (in some predetermined context), it is frequently because there exist sufficiently well-behaved analogues of the logarithm and exponential functions. In the case at hand (asking how a certain *linear* operator respects *multiplication* of functions), it has been enough to know that the logarithm satisfies a characteristic functional equation and has a well-understood derivative.

In the case of formal group laws over a ring of positive characteristic, for example, the non-existence of logarithms/exponentials is central to the field’s depth. In certain cases, these formal group laws give rise to actual group laws, e.g. on the completion of the base ring with respect to the $\mathfrak{m}$-adic topology. In particular, $p$-adic convergence of the logarithm affords us — in a small but tangible way — a better understanding of the group structure on an elliptic curve.

**— EXERCISES —**

**Exercise:** The product rule trivializes if we assume some multivariable calculus. Let $m(u, v) = uv$ and $\gamma(x) = (f(x), g(x))$, and define $h = m \circ \gamma$. Calculate

$$h'(x)$$

using the multivariate chain rule.

**Exercise:** Given a matrix Lie group $G$, let $\mathfrak{g}$ denote the set of matrices $X$ such that $e^{tX} \in G$ for all $t \in \mathbb{R}$, where $e^X$ denotes the matrix exponential. Then $\mathfrak{g}$ is a *Lie algebra*, known as the Lie algebra associated to $G$. Find the Lie algebras associated to $\mathrm{GL}_n(\mathbb{R})$ and $\mathrm{SL}_n(\mathbb{R})$.

**Exercise:** If $k$ is a field of characteristic zero, prove that the additive and multiplicative formal group laws are isomorphic over $k$.