# What is the Best Proof of Cauchy’s Integral Theorem?

My book on Complex Analysis is now available! You can find it at xtothepowerofn.com. The material below is there along with other sample chapters on Common Mistakes and on Improving Understanding.

Today’s post may look as though I’m going all Terry Tao on you with a long post with lots of mathematical symbols. It’s really about the learning and teaching of Cauchy’s integral theorem from undergraduate complex analysis, so isn’t for everyone. If it’s not your cup of tea/coffee, then pop over here for some entertainment.

Cauchy’s Integral Theorem

Cauchy’s Integral Theorem is one of the greatest theorems in mathematics. There are many ways of stating it. Here’s just one:

Cauchy’s Integral Theorem: Let $D\subseteq \mathbb{C}$ be a domain, and $f:D\to \mathbb{C}$ be a differentiable complex function. Let $\gamma$ be a closed contour such that $\gamma$ and its interior points are in $D$.
Then, $\displaystyle \int_\gamma f =0$.

Here, contour means a piecewise smooth map $\gamma :[a,b]\to D$.
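Although the theorem is pure analysis, it is easy to sanity-check numerically. The sketch below is purely illustrative (the functions and the contour are my choices, not part of the theorem): it approximates $\int_\gamma f$ for the entire function $f(z)=z^2$ around the unit circle, and contrasts it with $f(z)=1/z$, whose domain does not contain the interior point $0$, so the theorem does not apply.

```python
import numpy as np

def contour_integral(f, gamma, dgamma, n=4000):
    """Approximate the contour integral of f via the parametrisation
    gamma(t), t in [0, 2*pi), using a Riemann sum."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dt = 2.0 * np.pi / n
    return np.sum(f(gamma(t)) * dgamma(t)) * dt

# Unit circle: gamma(t) = e^{it}, gamma'(t) = i e^{it}.
gamma = lambda t: np.exp(1j * t)
dgamma = lambda t: 1j * np.exp(1j * t)

# f(z) = z^2 is differentiable on all of C, so the integral vanishes.
print(abs(contour_integral(lambda z: z**2, gamma, dgamma)))  # ~ 0

# f(z) = 1/z is not differentiable at the interior point 0, so the
# theorem's hypothesis fails; here the integral is 2*pi*i, not 0.
print(abs(contour_integral(lambda z: 1.0 / z, gamma, dgamma) - 2j * np.pi))  # ~ 0
```

This also shows why the hypothesis that the interior points lie in $D$ cannot be dropped.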

In my years lecturing Complex Analysis I have been searching for a good version and proof of the theorem. My definition of good is that the statement and proof should be short, clear, and as applicable as possible, so that I can maintain rigour when proving Cauchy’s Integral Formula and the major applications of complex analysis, such as evaluating definite integrals. Many of the proofs in the literature are rather complicated, and so time is lost in lectures proving lemmas that are never needed again.

Here’s a version which I think has a good balance between simplicity and applicability. I’ve highlighted the difference with the version above.

Cauchy’s Integral Theorem (Simple version): Let $D\subseteq \mathbb{C}$ be a domain, and $f:D\to \mathbb{C}$ be a differentiable complex function. Let $\gamma$ be a simple closed contour made of a finite number of lines and arcs such that $\gamma$ and its interior points are in $D$.
Then, $\displaystyle \int_\gamma f =0$.

Here an important point is that the curve is simple, i.e., $\gamma$ is injective except at the start and end points. This means that we have a Jordan curve and so the curve has well-defined interior and exterior and both are connected sets.

With this version I believe one can prove all the major theorems in an introductory course. I would be interested to hear from anyone who knows a simpler proof or has some thoughts on this one.

Proof of Simple Version of Cauchy’s Integral Theorem
Let $\text{Int}(\gamma )$ denote the interior of $\gamma$, i.e., the set of points with non-zero winding number, and for any contour $\alpha$ let $\alpha ^*$ denote its image. First we need a lemma.
Lemma
Let $\gamma$ be a simple closed contour made of a finite number of lines and arcs in the domain $D$ with $\widetilde{D} = \gamma^* \cup \text{Int}(\gamma)\subseteq D$. Let $Q$ be a square in $\mathbb{C}$ containing $\widetilde{D}$ and let $f : D \to\mathbb{C}$ be analytic. Then for any $\varepsilon > 0$ there exists a subdivision of $Q$ into a grid of squares so that for each square $Q_j$ in the grid with $Q_j \cap\widetilde{D} \neq \emptyset$ there exists a $z_j \in Q_j \cap \widetilde{D}$ such that

$\left| \dfrac{f(z) - f(z_j)}{z - z_j} - f'(z_j) \right| < \varepsilon$
for all $z \in Q_j \cap \widetilde{D}.$

Proof of Lemma
The set-up looks like the following.

[Figure: square covering of $\widetilde{D}$]

For a contradiction we will assume the statement is false. Let $Q_1 = Q$ and divide $Q_1$ into 4 equal-sized squares. At least one of these squares will not satisfy the required condition in the lemma. Let $Q_2$ be such a square. Repeat the process to produce an infinite sequence of squares with $Q_1 \supset Q_2 \supset Q_3 \supset \dots$. By the Nested Squares Lemma (which is just a generalization of the Nested Interval Theorem) there exists $z_j \in \bigcap\limits_{n=1}^\infty Q_n$. Since each $Q_n$ meets the closed set $\widetilde{D}$, we have $z_j \in \widetilde{D} \subseteq D$.

As $f$ is differentiable there exists $\delta > 0$ such that

$\left| \dfrac{f(z) - f(z_j)}{z - z_j} - f'(z_j) \right| < \varepsilon$
for $|z - z_j| < \delta$. But as the size of the squares becomes arbitrarily small, there must exist $Q_N$ contained in the disc $|z - z_j| < \delta$. Then the required condition holds on $Q_N$ with distinguished point $z_j$, contradicting our choice of $Q_N$.
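The quadrisection step in the proof can be sketched in code. This is only an illustration of the mechanism, not the proof itself: the `choose` function below is a stand-in for "pick a quarter on which the condition fails".

```python
def nested_squares(choose, centre=0.0 + 0.0j, side=1.0, steps=50):
    """Repeatedly replace a square (given by its centre and side length)
    by one of its four quarter squares; the squares shrink to a point."""
    for _ in range(steps):
        side /= 2.0  # a quarter square has half the side length
        # Centres of the four quarter squares of the current square.
        quarters = [centre + (side / 2.0) * (dx + 1j * dy)
                    for dx in (-1, 1) for dy in (-1, 1)]
        centre = choose(quarters)
    return centre, side

# Any rule for choosing quarters gives a convergent sequence of centres.
z_limit, final_side = nested_squares(lambda qs: qs[0])
print(final_side)  # 2**-50: the nested squares collapse to a single point
```

The sides halve at every step, which is exactly why the squares eventually fit inside the disc $|z - z_j| < \delta$ in the proof.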

Main part of proof
Given $\varepsilon > 0$, by the lemma there exists a grid of squares covering $\gamma^* \cup\text{Int}(\gamma)$. Let $\{S_j\}_{j=1}^n$ be the set of squares such that $S_j \cap ( \gamma^* \cup \text{Int}( \gamma )) \neq \emptyset$ and let $\{ z_j \}_{j=1}^n$ be the set of distinguished points in the lemma.

Define $g_j : D \to \mathbb{C}$ by

$g_j(z) = \left\{ \begin{array}{cl} \dfrac{f(z) - f(z_j)}{z - z_j} - f'(z_j), & z \neq z_j \\ 0, & z = z_j \end{array} \right.$

Then as $f$ is differentiable, $g_j$ is continuous (and hence integrable).
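For instance, with the illustrative choices $f(z) = e^z$ and $z_j = 0$ (so $f'(z_j) = 1$), one can watch $g_j(z) \to 0 = g_j(z_j)$ as $z \to z_j$:

```python
import numpy as np

# g_j for f(z) = exp(z) and z_j = 0 (illustrative choices only).
def g(z, z_j=0.0 + 0.0j):
    if z == z_j:
        return 0.0 + 0.0j
    # Difference quotient minus the derivative f'(z_j) = exp(z_j).
    return (np.exp(z) - np.exp(z_j)) / (z - z_j) - np.exp(z_j)

# As z -> z_j the difference quotient tends to f'(z_j), so g(z) -> 0.
for h in [1e-1, 1e-3, 1e-5]:
    print(h, abs(g(h + h * 1j)))
```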

Without loss of generality we can assume that $\gamma$ is positively oriented. Let $C_j$ be the union of positively oriented contours giving the boundary of $S_j \, \cap \, ( \gamma^* \cup \, \text{Int}( \gamma ))$. Since $\gamma$ is made of a finite number of lines and arcs $C_j$ will itself be the union of a finite number of lines and arcs. For $S_j$ such that $S_j \cap \gamma^* =\emptyset$, $C_j^*$ is just the boundary of a square.

[Figure: $C_j^*$ for a square]

On $S_j$ we have

$f(z) = f(z_j) + (z - z_j) f'(z_j) + (z - z_j) g_j(z).\qquad (1)$

As
$f(z_j) + (z - z_j)f'(z_j)$
is the derivative of
$(z - z_j)f(z_j) + \dfrac{(z - z_j)^2}{2} f'(z_j),$
the Fundamental Theorem of Calculus (for contour integrals) and the fact that $C_j$ is closed give

$\displaystyle \int_{C_j} \left( f(z_j) + (z - z_j)f'(z_j) \right) dz = 0. \qquad (2)$

Now,
$\displaystyle \int_\gamma f(z) dz = \sum_{j=1}^n \int_{C_j} f(z) dz$

because each edge shared by two touching squares is traversed twice, once in each direction, so the contributions of these edges cancel.

[Figure: cancelling of edges]

So

$\begin{array}{rcl}\left| \int_\gamma f(z) dz \right| &=& \left| \sum_{j=1}^n \int_{C_j} f(z) dz\right| \\ &\leq &\sum_{j=1}^n \left| \int_{C_j} f(z) dz \right| \\ &=& \sum_{j=1}^n \left| \int_{C_j} (z - z_j) g_j(z) dz \right| \end{array}$

by (1) and (2). We now estimate each of the integrals in the sum.

Let $s$ be the length of the side of the squares. For $z \in S_j$ we have $\left| (z - z_j)g_j(z) \right| < \sqrt{2} s \varepsilon$ because $|z - z_j| \leq \sqrt{2} s$ (the length of the diagonal of the square $S_j$) and $| g_j(z) | < \varepsilon$ as the grid of squares satisfies the conclusion of the lemma.

Let $l_j$ be the length of the curve(s) in $S_j \cap \gamma^*$ (the length may be zero). Then $L(C_j) \leq l_j + 4s.$ Hence, by the Estimation Lemma

$\displaystyle \left| \int_{C_j} (z - z_j) g_j(z) dz \right | < \sqrt{2} s \varepsilon (l_j + 4s).$

Therefore,
$\displaystyle \begin{array}{rcl} \displaystyle \left| \int_\gamma f(z) dz \right| & < & \sum_{j=1}^n \sqrt{2} s \varepsilon (l_j + 4s) \\&=& \sqrt{2} \varepsilon \sum_{j=1}^n (sl_j + 4s^2) \\&=& \sqrt{2} \varepsilon (sL(\gamma) + 4A) \end{array}$

where $A$ is the total area of the squares $\{S_j\}_{j=1}^n$. Now $s$ is less than or equal to the length $S$ of the side of the original square $Q$ enclosing $\widetilde{D}$, and, as the squares $S_j$ have disjoint interiors and lie in $Q$, we have $A \leq S^2$.
Hence,
$\left| \int_\gamma f(z) dz \right| < \sqrt{2} \varepsilon (SL(\gamma) + 4S^2).$
As $\varepsilon$ was arbitrary and $S$ and $L(\gamma)$ are fixed we have $\displaystyle \int_\gamma f(z) dz = 0.$

This ends the proof.
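As an aside, the Estimation Lemma used in the final step, $\left| \int_\gamma f \right| \leq \max_{z \in \gamma^*} |f(z)| \cdot L(\gamma)$, can itself be sanity-checked numerically. The contour and function below are illustrative choices; $f(z) = \bar{z}$ is deliberately non-analytic so that the integral does not vanish and the bound does some work:

```python
import numpy as np

# Estimation Lemma check on the unit circle, which has length L = 2*pi.
n = 4000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dt = 2.0 * np.pi / n
z = np.exp(1j * t)                     # gamma(t) = e^{it}, gamma'(t) = i e^{it}

f = lambda w: np.conj(w)               # non-analytic, so the integral is not 0
integral = np.sum(f(z) * 1j * z) * dt  # Riemann sum for the contour integral
bound = np.max(np.abs(f(z))) * 2.0 * np.pi  # max|f| on gamma* times L(gamma)

print(abs(integral), bound)  # both ~ 2*pi: here the estimate is attained
```

For $f(z) = \bar{z}$ on the unit circle the integrand is constantly $i$, so $\left|\int_\gamma f\right| = 2\pi$ and the bound is sharp.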

Some thoughts on teaching:

For a teacher, what’s good about this way of proving it? Well, it means you have rigorously proved a version that will cope with the main applications of the theorem, such as the use of Cauchy’s Residue Theorem to evaluate improper real integrals. For these, and for proofs of theorems such as the Fundamental Theorem of Algebra or Liouville’s Theorem, you never need more than a finite number of arcs and lines (or a circle, which is just a complete arc).

Also, the proof is divided into distinct sections rather than being mixed up. The standard proof, involving proving the statement first for a triangle or square, requires a nesting argument during which one has to keep track of an estimation. In the proof above the nesting is separated from the estimation and hence, I believe, is easier to understand and follow. Furthermore, standard proofs then have to move to a more general setting. Usually this is achieved by applying the triangle result to show that on a star-shaped/convex domain an analytic function has an antiderivative. This can then be used to prove a version of the theorem involving simple contours or more general domains such as simply connected ones.

The proof above can also be followed with a generalization to more complicated contours and domains, but I think for an introductory course, without much time to give all the details, this is unnecessary. Anyone who is interested will be able to find a proof of the more general version.

One flaw in almost all proofs of the theorem is that you have to make some assumption about Jordan curves or some similar property of contours. I think this is unavoidable but at least the Jordan Curve Theorem is intuitively obvious so I feel justified in not proving it.

Acknowledgements
Thanks to Matt Daws for conversations about this proof and to Steve Trotter for typing the original LaTeX.