# Notes on Riemann Sums

In this note I’d like to talk about one of the consistent sources of confusion for students learning calculus: Riemann sums.  For many students, Riemann sums are those abstract irritations that crop up in the middle of proofs and can be safely ignored until they go away. (Of course, this is the natural consequence of teaching integrals before sums.) In this article, I hope to give some additional context for Riemann sums, so that their role in mathematics (and in the history of mathematics) becomes a bit clearer.

As a quick reminder, suppose that $f(x)$ is a continuous function on the interval $[0,1]$. (Or any finite interval $[a,b]$, but the case $[0,1]$ is the easiest to talk about.)  To estimate the area under the curve $f(x)$, we approximate the region by a sum of non-overlapping areas that are easy to compute. There are a few options for this (rectangles, triangles, trapezoids, sectors, etc.), but for simplicity one often uses rectangles of uniform width; a typical picture shows the region under $f(x)$ covered by, say, 20 rectangles of width 1/20.

In general, an approximation to the (signed) area between $f(x)$ and the $x$-axis on the interval $[0,1]$ using $n$ rectangles of constant width $1/n$ takes the form

$\displaystyle\text{area} \approx \sum_{k=0}^{n-1} \left(\text{area of the rectangle with } I_k:=\left[\frac{k}{n},\frac{k+1}{n}\right] \text{ as a base}\right)$

$\displaystyle\approx \sum_{k=0}^{n-1} \bigg(\text{height} \approx f(x), \text{ some } x \in I_k \bigg) \cdot \left(\text{width} = \frac{1}{n}\right) \approx \frac{1}{n} \sum_{k=0}^{n-1} f(x_k),$

in which $x_k$ is chosen so that $f(x_k)$ gives a “good” approximation to $f(x)$ on the interval $[\frac{k}{n},\frac{k+1}{n}]$.

Here, “good” depends on the application at hand.  If you’re trying to show that the Riemann sum converges to the definite integral, you might care about upper and lower bounds, in which case $x_k$ would be chosen so that $f(x_k)$ is the maximum/minimum (resp.) of $f(x)$ on the interval $[\frac{k}{n},\frac{k+1}{n}]$.

In other cases, “good” might mean taking $x_k$ such that some mean value property holds.  (This is a typical step when proving the fundamental theorem of calculus or defining the differentials associated to different coordinate systems, like polar coordinates.)
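To make this concrete, here is a minimal numerical sketch (my own addition, not part of the original discussion) of the uniform-width Riemann sum above, where the sample point $x_k$ is chosen by a `rule` argument:

```python
# A minimal sketch of the uniform-width Riemann sum on [0, 1].
# The name `riemann_sum` and the `rule` parameter are my own
# illustrative choices, not anything from the post.

def riemann_sum(f, n, rule="left"):
    """Approximate the integral of f over [0, 1] with n rectangles of width 1/n."""
    total = 0.0
    for k in range(n):
        if rule == "left":
            x_k = k / n              # left endpoint of [k/n, (k+1)/n]
        elif rule == "right":
            x_k = (k + 1) / n        # right endpoint
        elif rule == "mid":
            x_k = (k + 0.5) / n      # midpoint
        else:
            raise ValueError(f"unknown rule: {rule}")
        total += f(x_k)              # height f(x_k)...
    return total / n                 # ...times the common width 1/n

# For an increasing f, left endpoints underestimate and right endpoints
# overestimate; both converge to the integral as n grows.
f = lambda x: x * x
print(riemann_sum(f, 1000, "left"), riemann_sum(f, 1000, "right"))
```

For monotone $f$, the "left" and "right" rules recover exactly the upper and lower bounds described above.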

## Riemann Sums in Proto-Calculus

The most honest application of Riemann sums is undoubtedly numerical integration. This is a topic that will show up in any course in numerical analysis, but it’s often rushed in calculus classes that focus on closed-form integration instead.

And yet, when a closed-form antiderivative $F(x)$ is known for $f(x)$, the (second) fundamental theorem of calculus gives $\displaystyle F(b)-F(a) = \int_a^b f(t) dt,$

which means that Riemann sums are by now the “wrong” way to study simple definite integrals. (Riemann sums are a good way to motivate the integral $\leftrightarrow$ area analogy, however.)

Stepping into our time machine, we can forget the fundamental theorem of calculus and go back to a simpler time when Riemann sums were used to compute definite integrals. (In these examples I’ll skip some of the details that justify convergence to the stated result.)

Example 1: Find the area under the curve $f(x) =x^2$ over $x \in [0,1]$.

Solution: The Riemann sum for an $n$-term approximation to this integral using rectangles of width $1/n$ is $\displaystyle \frac{1}{n}\sum_{k=0}^{n-1} f(x_k).$

Since $f(x)$ is continuous (hence integrable), the limit does not depend on which $x_k \in [\frac{k}{n},\frac{k+1}{n}]$ we choose, so we take a right endpoint approximation for each $x_k$. In other words, we choose $x_k = (k+1)/n$. The limit of our Riemann sums is then $\displaystyle\lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \left(\frac{k+1}{n}\right)^2 = \lim_{n \to \infty} \frac{1}{n^3} \sum_{k=1}^n k^2 = \lim_{n \to \infty} \left(\frac{1}{n^3} \cdot \frac{2n^3+3n^2+n}{6}\right),$

in which we’ve used a closed-form expression for the partial sum of the first $n$ squares to execute the sum. The limit (and our answer for the area under the curve) is $1/3$. $\square$
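As a sanity check (my own sketch, not part of the original solution), we can confirm numerically that the right-endpoint sum matches its closed form and approaches $1/3$:

```python
# Numerical check of Example 1: the right-endpoint Riemann sum for x^2
# on [0, 1] versus its closed form.

def right_sum_squares(n):
    """(1/n) * sum_{k=0}^{n-1} ((k+1)/n)^2."""
    return sum(((k + 1) / n) ** 2 for k in range(n)) / n

def squares_closed_form(n):
    """(2n^3 + 3n^2 + n) / (6n^3), via the sum of the first n squares."""
    return (2 * n**3 + 3 * n**2 + n) / (6 * n**3)

for n in (10, 100, 1000):
    assert abs(right_sum_squares(n) - squares_closed_form(n)) < 1e-12
print(squares_closed_form(10**6))  # just above 1/3
```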

A similar calculation (omitting limits and other justification) was performed as early as 1629 by the Italian mathematician Bonaventura Cavalieri (1598-1647).  In particular, this result predated the fundamental theorem of calculus (in Newton’s form, say) by several decades.

Example 2: Compute the area under the curve $f(x)=e^x$ over $x \in [0,1]$.

Solution: Here, a right endpoint approximation and the geometric series gives $\displaystyle \lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n} e^{k/n} = \lim_{n \to \infty} \frac{1}{n} \frac{(e-1) e^{1/n}}{e^{1/n}-1}=(e-1) \lim_{z \to 0^+} \frac{z}{e^z-1},$

in which we’ve set $z=1/n$. The limit is then $(e-1)$ by l’Hôpital’s rule. $\square$
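Example 2 admits the same kind of numerical check (again my own sketch): the right-endpoint sum agrees with the geometric-series closed form, and both tend to $e-1$.

```python
import math

# Numerical check of Example 2: the right-endpoint Riemann sum for e^x
# on [0, 1] versus the geometric-series closed form.

def right_sum_exp(n):
    """(1/n) * sum_{k=1}^{n} e^{k/n}."""
    return sum(math.exp(k / n) for k in range(1, n + 1)) / n

def exp_closed_form(n):
    """(1/n) * (e - 1) * e^{1/n} / (e^{1/n} - 1), by summing the geometric series."""
    r = math.exp(1 / n)
    return (math.e - 1) * r / (n * (r - 1))

for n in (10, 1000):
    assert abs(right_sum_exp(n) - exp_closed_form(n)) < 1e-9
print(abs(exp_closed_form(10**6) - (math.e - 1)))  # near 0
```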

Example 3: Compute the area under the curve $f(x)=1/x^2$ for $x \geq 1$.

Solution: Rather than approach the entire range $x \geq 1$ at once, we fix $t>1$ and consider $f(x)$ over the range $x \in [1,t]$.  Furthermore, since left/right endpoint approximations using rectangles of uniform width give intractable sums, we are forced to embrace rectangles of variable width.

Choosing an $n$-term subdivision with intervals of the form $[t^{k/n},t^{(k+1)/n}]$ gives the left endpoint approximation $\displaystyle \lim_{n \to \infty} \sum_{k=0}^{n-1} \frac{1}{(t^{k/n})^2} (t^{(k+1)/n}-t^{k/n}) = \lim_{n \to \infty} \sum_{k=0}^{n-1} \left( t^{\frac{1-k}{n}}-t^{-\frac{k}{n}}\right),$

which is a simple difference of geometric series! Summing and simplifying gives the explicit form $\displaystyle \lim_{n \to \infty}\frac{t^\frac{2}{n}(1-t^{-1})}{t^\frac{1}{n}-1}-\frac{t^\frac{1}{n}(1-t^{-1})}{t^\frac{1}{n}-1}=\lim_{n \to \infty} t^\frac{1}{n}(1-t^{-1})=1-t^{-1}.$

Taking $t \to \infty$ gives the full range $x \geq 1$ and the final answer $1$. $\square$
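The variable-width partition in Example 3 can also be checked numerically; the following sketch (mine, not the post's) implements the left-endpoint sum over the geometric subdivision points $t^{k/n}$.

```python
# Example 3's variable-width partition: the left-endpoint sum for 1/x^2
# over [1, t], subdividing at the geometric points t^(k/n).

def left_sum_inverse_square(t, n):
    """sum_{k=0}^{n-1} (t^(k/n))^(-2) * (t^((k+1)/n) - t^(k/n))."""
    total = 0.0
    for k in range(n):
        left = t ** (k / n)           # left endpoint of the k-th interval
        right = t ** ((k + 1) / n)    # right endpoint
        total += (right - left) / left**2
    return total

# As derived above, the sum collapses to t^(1/n) * (1 - 1/t) -> 1 - 1/t.
t = 10.0
print(left_sum_inverse_square(t, 10**5))  # close to 1 - 1/t = 0.9
```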

Similar techniques can be used to compute areas under curves of the form $f(x)=1/x^\alpha$ for arbitrary $\alpha \neq 1$ (as was done with minimal rigor by John Wallis), but some care needs to be taken in the case $f(x)=1/x$. There, our result simplifies to $\displaystyle \lim_{n \to \infty} n(t^\frac{1}{n}-1) = \lim_{z \to 0^+} \frac{t^z-1}{z} = \log t,$ so the area over $[1,t]$ is $\log t$ (and the total area over $x \geq 1$ diverges).
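The $1/x$ case is easy to verify as well (my own check): with the same geometric subdivision of $[1,t]$, the left-endpoint sum collapses to $n(t^{1/n}-1)$, which approaches $\log t$.

```python
import math

# With the geometric subdivision of [1, t], the left-endpoint sum for
# 1/x collapses to n * (t^(1/n) - 1), which tends to log t.

def log_via_riemann(t, n):
    return n * (t ** (1 / n) - 1)

print(abs(log_via_riemann(5.0, 10**6) - math.log(5.0)))  # near 0
```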

This last case was “proven” by Grégoire de Saint-Vincent (1647) in the sense that Saint-Vincent discovered and proved some basic properties of what we now call the logarithm (originally defined as an area). This result was put on more solid analytic footing by Nicholas Mercator in 1668.  (Not to be confused with the cartographer Gerardus Mercator.)

Note: There is some general anachronism in these examples in that logarithms and exponentials were not well understood until after the first proofs of the fundamental theorem of calculus. Moreover, l’Hôpital’s rule (due to Johann Bernoulli in 1694) would not be published until 1696.

## From Riemann Sums to Integrals

While Riemann sums do a good job at motivating integrals, they do a terrible job at computing with them.  (This is why we should be happy that the fundamental theorem of calculus exists.)

Once we realize this, we recognize that the utility of Riemann sums lies not in converting integrals to sums, but in converting sums to integrals. Perhaps this is why Riemann sums show up sparingly enough to be forgotten by students each time — they’ll be used to introduce the definite integral, the arc length formula, and little else.

It’s really only after calculus that Riemann sums creep back into mathematics, as a way to simplify sum calculations using integrals.  For an example that’s appeared on this very blog, consider the following:

Theorem: The set of positive integers $n$ with a prime factor greater than $\sqrt{n}$ has natural density $\log 2$.

Proof (sketch): The proof is given in full on my post concerning admissible orders for simple groups. By a simple counting argument and the prime number theorem, we reduce our problem to that of estimating $\displaystyle \lim_{n \to \infty} \sum_{k \leq \sqrt{n}} \frac{1}{k \log(n/k)}.$

We divide the range $k \leq \sqrt{n}$ into the $m$ subintervals $[n^{i/(2m)},n^{(i+1)/(2m)}]$ and apply upper and lower bounds over these shorter intervals to show that $\displaystyle \frac{1}{2m}\sum_{i=0}^{m-1} \left(1-\frac{i+1}{2m}\right)^{-1} \gtrsim \sum_{k \leq \sqrt{n}} \frac{1}{k \log(n/k)} \gtrsim \frac{1}{2m}\sum_{i=0}^{m-1} \left(1-\frac{i}{2m}\right)^{-1}.$

In the limit as $m \to \infty$, we evaluate these sums as left and right endpoint Riemann sums for the same integral. For example, the lower bound (and left endpoint approximation) gives $\displaystyle \lim_{m \to \infty} \frac{1}{m} \sum_{i=0}^{m-1} \frac{1}{2 -i/m} = \int_0^1 \frac{dt}{2-t} = \log 2,$

as desired. $\square$
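As a quick sanity check (my own, not part of the proof), the final left-endpoint sum can be evaluated numerically and compared against $\log 2$:

```python
import math

# The left-endpoint sum (1/m) * sum_{i=0}^{m-1} 1/(2 - i/m) should
# approach the integral of 1/(2 - t) over [0, 1], i.e. log 2.

def density_left_sum(m):
    return sum(1.0 / (2 - i / m) for i in range(m)) / m

print(abs(density_left_sum(10**6) - math.log(2)))  # near 0
```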

You’ll also see Riemann sums as a way to translate discrete inequalities into integral inequalities.  For a cute example of this, I present the following:

Example 4: Let $f(x)$ be a continuous function on $[0,1]$. Prove that $\displaystyle \exp \int_0^1 f(t) dt \leq \int_0^1 e^{f(t)} dt.$

Proof: Fix $n \geq 1$, and define $\{a_k\}_{k=1}^n$ by $a_k = e^{f(k/n)}$.  A right endpoint approximation to the integral of $e^{f(t)}$ using rectangles of width $1/n$ gives $\displaystyle \int_0^1 e^{f(t)} dt = \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n a_k.$

By the AM-GM inequality, we have $\displaystyle \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n a_k \geq \lim_{n \to \infty} \left(a_1 \cdots a_n \right)^\frac{1}{n}=\exp \bigg( \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n \log a_k \bigg).$

Simplifying, we recognize the exponent as a Riemann sum using rectangles of width $1/n$: $\displaystyle =\exp\bigg(\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n f\bigg(\frac{k}{n}\bigg)\bigg) = \exp\int_0^1 f(t) dt,$

which completes the proof. $\square$
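An empirical check of Example 4 is also straightforward; the sketch below (my own, with an arbitrarily chosen test function) estimates both sides of the inequality by midpoint Riemann sums.

```python
import math

# Empirical check of Example 4's inequality exp(int_0^1 f) <= int_0^1 e^f,
# estimating both integrals by midpoint Riemann sums.

def midpoint_integral(g, n=10**5):
    """Midpoint Riemann sum for g over [0, 1] with n uniform rectangles."""
    return sum(g((k + 0.5) / n) for k in range(n)) / n

f = lambda x: math.sin(3 * x)  # an arbitrary continuous function on [0, 1]
lhs = math.exp(midpoint_integral(f))
rhs = midpoint_integral(lambda x: math.exp(f(x)))
print(lhs <= rhs)  # True, since exp is convex (Jensen's inequality)
```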

## Concluding Remarks

I hope that this handful of examples has hinted at the enduring role of Riemann sums beyond the scope of a typical calculus course. For a few more examples and applications, check out the exercises!

## Exercises

Exercise 1: Find the definite integral of $\sin x$ over $x \in [0,\pi]$ using Riemann sums and the angle addition formula for $\sin x$.  (Or by using the complex exponential.)

Exercise 2: Compute the following limit: $\displaystyle \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n \frac{2k+1}{n+k}.$

Hint: This problem uses a non-uniform division of the interval $[0,1]$. The term $(2k+1)$ might tell you what partition was used.

Exercise 3: Show that the statement of Example 4 does not hold when $[0,1]$ is replaced by a general interval $[a,b]$.

Riemann sums are a mainstay of math contests as well, often in the guise of integral estimations. For example, the 1996 Putnam Exam includes the following problem:

Exercise 4 (Putnam B2, 1996): Show that for every positive integer $n$, $\displaystyle \left(\frac{2n-1}{e}\right)^{\frac{2n-1}{2}} < 1 \cdot 3 \cdot 5 \cdots (2n-1) < \left(\frac{2n+1}{e}\right)^{\frac{2n+1}{2}}.$

Hint: If you remember Stirling’s estimate for the factorial, it’s of little surprise that these bounds come from an integral estimate of the logarithm.