Fundamental theorem of calculus

The fundamental theorem of calculus specifies the relationship between the two central operations of calculus, differentiation and integration.

The first part of the theorem, sometimes called the first fundamental theorem of calculus, shows that indefinite integration can be reversed by differentiation.

The second part, sometimes called the second fundamental theorem of calculus, allows one to compute the definite integral of a function by using any one of its infinitely many antiderivatives. This part of the theorem has invaluable practical applications, because it markedly simplifies the computation of definite integrals.

The first published statement and proof of a restricted version of the fundamental theorem was by James Gregory (1638–1675). Isaac Barrow proved the first completely general version of the theorem, while Barrow's student Isaac Newton (1643–1727) completed the development of the surrounding mathematical theory. Gottfried Leibniz (1646–1716) systematized the knowledge into a calculus for infinitesimal quantities.

Intuition

Intuitively, the theorem simply states that the infinitesimal changes in a quantity over time (or over some other variable) add up to the net change in the quantity.

To understand this statement, we will start with an example. Suppose a particle travels in a straight line with its position given by x(t), where t is time and x(t) means that x is a function of t. The derivative of this function is equal to the infinitesimal change in position, dx, per infinitesimal change in time, dt (the derivative itself is, of course, dependent on time). Let us define this change in position per change in time as the velocity v of the particle. In Leibniz's notation:

\frac{\mathrm dx}{\mathrm dt} = v(t).

Rearranging this equation, it follows that:

\mathrm dx = v(t)\,\mathrm dt.

By the logic above, a change in x, denoted Δx, is the sum of the infinitesimal changes dx. It is also equal to the sum of the infinitesimal products v(t) dt of the derivative and the infinitesimal changes in time. This infinite summation is integration; hence, the integration operation allows the recovery of the original position function from its derivative. As one can reasonably infer, the operation also works in reverse: differentiating the result of the integral recovers the original derivative.
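As an illustrative numerical sketch (not part of the original article; the velocity v(t) = 2t, the interval, and the step size are assumptions chosen for demonstration), the following Python snippet sums the small products v(t) dt and compares the total with the net change in position x(b) − x(a), where x(t) = t^2:

# Illustrative sketch (assumed example): summing v(t)*dt over many small steps
# should approximate the net change in position x(b) - x(a).

def v(t):          # assumed velocity: v(t) = 2t
    return 2.0 * t

def x(t):          # corresponding position: x(t) = t**2
    return t * t

a, b, n = 0.0, 3.0, 100_000
dt = (b - a) / n

total = 0.0
t = a
for _ in range(n):
    total += v(t) * dt     # accumulate the small changes dx = v(t) dt
    t += dt

print(total)        # approximately 9.0, up to a small discretization error
print(x(b) - x(a))  # exactly 9.0

The smaller the steps, the closer the accumulated sum comes to the exact net change.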

Formal statements

There are two parts to the Fundamental Theorem of Calculus. Loosely put, the first part deals with the derivative of an antiderivative, while the second part deals with the relationship between antiderivatives and definite integrals.

First part

This part is sometimes referred to as the first fundamental theorem of calculus.

Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by

F(x) = \int_a^x f(t)\, \mathrm dt.

Then, F is differentiable on [a, b], and for every x in [a, b],

F'(x) = f(x)\,.

The operation \int_a^x f(t)\, \mathrm dt is a definite integral with variable upper limit, and its result F(x) is one of the infinitely many antiderivatives of f.
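As a numerical sketch of the first part (not part of the original statement; the integrand f(t) = cos t, the base point a = 0, and the step sizes are assumptions for illustration), one can approximate F(x), the integral of f from 0 to x, by a Riemann sum and check that its difference quotient is close to f(x):

import math

def f(t):                      # assumed integrand: f(t) = cos(t)
    return math.cos(t)

def F(x, n=100_000):           # F(x) = integral of f from 0 to x, midpoint rule
    h = x / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

x, dx = 1.0, 1e-5
difference_quotient = (F(x + dx) - F(x)) / dx

print(difference_quotient)     # approximately cos(1) = 0.5403...
print(f(x))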

Second part

This part is sometimes referred to as the second fundamental theorem of calculus.

Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be an antiderivative of f, that is, one of the infinitely many functions such that, for all x in [a, b],

f(x) = F'(x)\,.

Then

\int_a^b f(x)\,\mathrm dx = F(b) - F(a).
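A minimal numerical sketch of the second part (the integrand f(x) = e^x and the interval are assumptions, not from the original text): a Riemann sum over [a, b] should be close to F(b) − F(a), where F(x) = e^x is an antiderivative of f:

import math

def f(x):                       # assumed integrand: f(x) = exp(x)
    return math.exp(x)

def F(x):                       # an antiderivative of f
    return math.exp(x)

a, b, n = 0.0, 2.0, 200_000
h = (b - a) / n
riemann_sum = sum(f(a + (i + 0.5) * h) for i in range(n)) * h  # midpoint rule

print(riemann_sum)              # approximately e^2 - 1 = 6.389056...
print(F(b) - F(a))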

Corollary

Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be a function such that, for all x in [a, b],

f(x) = F'(x)\,.

Then, for all x in [a, b],

F(x) = \int_a^x f(t)\,\mathrm dt + F(a)

and

f(x) = \frac{\mathrm d}{\mathrm dx} \int_a^x f(t)\,\mathrm dt.

Examples

As an example, suppose you need to calculate

\int_2^5 x^2\, \mathrm dx.

Here, f(x) = x^2 and we can use F(x) = {x^3\over 3} as an antiderivative. Therefore:

\int_2^5 x^2\, \mathrm dx = F(5) - F(2) = {125 \over 3} - {8 \over 3} = {117 \over 3} = 39.
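The same value can be checked numerically (an illustrative sketch, not part of the original example; the number of subintervals is an arbitrary choice) by approximating the integral with a Riemann sum:

a, b, n = 2.0, 5.0, 300_000
h = (b - a) / n

# midpoint Riemann sum for f(x) = x^2 on [2, 5]
approx = sum((a + (i + 0.5) * h) ** 2 for i in range(n)) * h

print(approx)   # approximately 39.0, matching F(5) - F(2) = 117/3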

Or, more generally, suppose you need to calculate

{\mathrm d \over \mathrm dx} \int_0^x t^3\, \mathrm dt.

Here, f(t) = t^3 and we can use F(t) = {t^4 \over 4} as an antiderivative. Therefore:

{\mathrm d \over \mathrm dx} \int_0^x t^3\, \mathrm dt = {\mathrm d \over \mathrm dx} F(x) - {\mathrm d \over \mathrm dx} F(0) = {\mathrm d \over \mathrm dx} {x^4 \over 4} = x^3.

But this result could have been found much more easily as

{\mathrm d \over \mathrm dx} \int_0^x t^3\, \mathrm dt = f(x) \cdot {\mathrm dx \over \mathrm dx} - f(0) \cdot {\mathrm d \over \mathrm dx}(0) = x^3 \cdot 1 - 0 = x^3.
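This too can be checked numerically (an assumed illustration, not in the original; the sample point x = 2 and the step sizes are arbitrary choices): estimate the integral of t^3 from 0 to x at points slightly above and below x = 2 and form a difference quotient, which should be close to 2^3 = 8:

def integral_of_cubes(x, n=200_000):
    # midpoint-rule estimate of the integral of t**3 from 0 to x
    h = x / n
    return sum(((i + 0.5) * h) ** 3 for i in range(n)) * h

x, dx = 2.0, 1e-4
estimate = (integral_of_cubes(x + dx) - integral_of_cubes(x - dx)) / (2 * dx)

print(estimate)   # approximately 8.0, i.e. x**3 at x = 2
print(x ** 3)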

Proof

Suppose that

F(x) = \int_{a}^{x} f(t) \mathrm dt.

Let there be two numbers x₁ and x₁ + Δx in [a, b]. So we have

F(x_1) = \int_{a}^{x_1} f(t) \mathrm dt

and

F(x_1 + \Delta x) = \int_{a}^{x_1 + \Delta x} f(t) \mathrm dt.

Subtracting the two equations gives

F(x_1 + \Delta x) - F(x_1) = \int_{a}^{x_1 + \Delta x} f(t) \mathrm dt - \int_{a}^{x_1} f(t) \mathrm dt. \qquad (1)

It can be shown that

\int_{a}^{x_1} f(t) \mathrm dt + \int_{x_1}^{x_1 + \Delta x} f(t) \mathrm dt = \int_{a}^{x_1 + \Delta x} f(t) \mathrm dt.
(The sum of the areas of two adjacent regions is equal to the area of both regions combined.)

Manipulating this equation gives

\int_{a}^{x_1 + \Delta x} f(t) \mathrm dt - \int_{a}^{x_1} f(t) \mathrm dt = \int_{x_1}^{x_1 + \Delta x} f(t) \mathrm dt.

Substituting the above into (1) results in

F(x_1 + \Delta x) - F(x_1) = \int_{x_1}^{x_1 + \Delta x} f(t)\mathrm dt. \qquad (2)

According to the mean value theorem for integration, there exists a c in [x₁, x₁ + Δx] such that

\int_{x_1}^{x_1 + \Delta x} f(t) \mathrm dt = f(c) \Delta x .

Substituting the above into (2) we get

F(x_1 + \Delta x) - F(x_1) = f(c) \Delta x \,.

Dividing both sides by Δx gives

\frac{F(x_1 + \Delta x) - F(x_1)}{\Delta x} = f(c).
Notice that the expression on the left side of the equation is Newton's difference quotient for F at x₁.

Take the limit as Δx → 0 on both sides of the equation.

\lim_{\Delta x \to 0} \frac{F(x_1 + \Delta x) - F(x_1)}{\Delta x} = \lim_{\Delta x \to 0} f(c).

The expression on the left side of the equation is the definition of the derivative of F at x₁.

F'(x_1) = \lim_{\Delta x \to 0} f(c). \qquad (3)

To find the other limit, we will use the squeeze theorem. The number c is in the interval [x₁, x₁ + Δx], so x₁ ≤ c ≤ x₁ + Δx.

Also, \lim_{\Delta x \to 0} x_1 = x_1 and \lim_{\Delta x \to 0} (x_1 + \Delta x) = x_1.

Therefore, according to the squeeze theorem,

\lim_{\Delta x \to 0} c = x_1.

Substituting into (3), we get

F'(x_1) = \lim_{c \to x_1} f(c).

The function f is continuous at x₁, so the limit can be taken inside the function. Therefore, we get

F'(x_1) = f(x_1) \,.

This completes the proof.

(Leithold et al., 1996)
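The key step of the proof, that the difference quotient of F tends to f(x₁) as Δx → 0, can also be observed numerically (a sketch with an assumed integrand, not part of the cited proof): with f(t) = 1/(1 + t^2) and x₁ = 1, shrinking Δx drives the quotient toward f(1) = 0.5:

def f(t):                       # assumed integrand: f(t) = 1 / (1 + t^2)
    return 1.0 / (1.0 + t * t)

def F(x, n=100_000):            # F(x) = integral of f from 0 to x, midpoint rule
    h = x / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

x1 = 1.0
for dx in (1e-1, 1e-2, 1e-3):
    quotient = (F(x1 + dx) - F(x1)) / dx
    print(dx, quotient)         # approaches f(1) = 0.5 as dx shrinks

print(f(x1))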

Alternative proof

This is a proof by limits of Riemann sums.

Let f be continuous on the interval [a, b], and let F be an antiderivative of f. Begin with the quantity

F(b) - F(a)\,.

Let there be numbers

x₀, x₁, ..., xₙ

such that

a = x_0 < x_1 < x_2 < \ldots < x_{n-1} < x_n = b.

It follows that

F(b) - F(a) = F(x_n) - F(x_0) \,.

Now, we add each F(xᵢ), for i = 1, ..., n − 1, together with its additive inverse, so that the resulting quantity is unchanged:

\begin{matrix} F(b) - F(a) & = & F(x_n)\,+\,[-F(x_{n-1})\,+\,F(x_{n-1})]\,+\,\ldots\,+\,[-F(x_1)\,+\,F(x_1)]\,-\,F(x_0) \, \\
& = & [F(x_n)\,-\,F(x_{n-1})]\,+\,[F(x_{n-1})\,-\,F(x_{n-2})]\,+\,\ldots\,+\,[F(x_2)\,-\,F(x_1)]\,+\,[F(x_1)\,-\,F(x_0)]. \, \end{matrix}

The above quantity can be written as the following sum:

F(b) - F(a) = \sum_{i=1}^n [F(x_i) - F(x_{i-1})] \qquad (1)

Next we will employ the mean value theorem. Stated briefly,

Let F be continuous on the closed interval [a, b] and differentiable on the open interval (a, b). Then there exists some c in (a, b) such that

F'(c) = \frac{F(b) - F(a)}{b - a}.

It follows that

F'(c)(b - a) = F(b) - F(a). \,

The function F is differentiable on the interval [a, b]; therefore, it is also differentiable and continuous on each subinterval [xᵢ₋₁, xᵢ]. According to the mean value theorem (above), for each i there exists a cᵢ in (xᵢ₋₁, xᵢ) such that

F(x_i) - F(x_{i-1}) = F'(c_i)(x_i - x_{i-1}). \,

Substituting the above into (1), we get

F(b) - F(a) = \sum_{i=1}^n [F'(c_i)(x_i - x_{i-1})].

Because F is an antiderivative of f, we have F'(cᵢ) = f(cᵢ). Also, xᵢ − xᵢ₋₁ can be written as Δxᵢ, the width of subinterval i.

F(b) - F(a) = \sum_{i=1}^n [f(c_i)(\Delta x_i)] \qquad (2)
A converging sequence of Riemann sums. The numbers in the upper right are the areas of the grey rectangles; they converge to the integral of the function.

Notice that each term describes the area of a rectangle, width times height, and that we are adding these areas together. By virtue of the mean value theorem, each rectangle approximates the section of the curve it is drawn over. Also notice that Δxᵢ need not be the same for every value of i; in other words, the widths of the rectangles can differ. What we are doing is approximating the curve with n rectangles. Now, as the subintervals become smaller and n increases, so that more rectangles cover the region, we get closer and closer to the actual area under the curve.
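This convergence can be illustrated numerically (a sketch with assumed choices of integrand, interval, and random partition points, none of which come from the original article): as the number of subintervals grows, Riemann sums built on unequal subintervals, with a sample point cᵢ chosen inside each one, approach F(b) − F(a):

import random

def f(x):                       # assumed integrand: f(x) = x**2
    return x * x

def F(x):                       # an antiderivative of f
    return x ** 3 / 3.0

a, b = 0.0, 3.0
exact = F(b) - F(a)             # = 9.0

for n in (10, 100, 1000, 10_000):
    # a partition with unequal subintervals: sorted random points in (a, b)
    points = [a] + sorted(random.uniform(a, b) for _ in range(n - 1)) + [b]
    total = 0.0
    for left, right in zip(points[:-1], points[1:]):
        c = 0.5 * (left + right)          # a sample point c_i in the subinterval
        total += f(c) * (right - left)    # f(c_i) * delta_x_i
    print(n, total, exact)                # the sums approach 9.0 as n grows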

By taking the limit of the expression as the norm of the partition approaches zero, we arrive at the Riemann integral. That is, we take the limit as the width of the largest subinterval approaches zero, so that all the other subintervals are smaller still and the number of subintervals approaches infinity.

So, we take the limit on both sides of (2). This gives us

\lim_{\| \Delta \| \to 0} \left[ F(b) - F(a) \right] = \lim_{\| \Delta \| \to 0} \sum_{i=1}^n [f(c_i)(\Delta x_i)]\,.

Neither F(b) nor F(a) is dependent on ||Δ||, so the limit on the left side remains F(b) - F(a).

F(b) - F(a) = \lim_{\| \Delta \| \to 0} \sum_{i=1}^n [f(c_i)(\Delta x_i)]

The expression on the right side of the equation is the definition of the Riemann integral of f from a to b. Therefore, we obtain

F(b) - F(a) = \int_{a}^{b} f(x)\,\mathrm dx

which completes the proof.

Generalizations

We do not need to assume continuity of f on the whole interval. Part I of the theorem then says: if f is any Lebesgue integrable function on [a, b] and x₀ is a number in [a, b] such that f is continuous at x₀, then

F(x) = \int_a^x f(t)\, \mathrm dt

is differentiable at x = x₀, with F'(x₀) = f(x₀). We can relax the conditions on f still further and suppose that it is merely locally integrable. In that case, we can conclude that the function F is differentiable almost everywhere and F'(x) = f(x) almost everywhere. This result is sometimes known as the Lebesgue differentiation theorem.

Part II of the theorem is true for any Lebesgue integrable function f which has an antiderivative F (not all integrable functions do, though).

The version of Taylor's theorem which expresses the error term as an integral can be seen as a generalization of the Fundamental Theorem.

There is a version of the theorem for complex functions: suppose U is an open set in C and f : U → C is a function which has a holomorphic antiderivative F on U. Then for every curve γ : [a, b] → U, the curve integral can be computed as

\int_{\gamma} f(z) \,\mathrm dz = F(\gamma(b)) - F(\gamma(a)).
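As a numerical sketch of this complex version (the particular f, F, and curve are assumptions chosen for illustration, not part of the original statement): take f(z) = z^2, F(z) = z^3/3, and the half-circle γ(t) = exp(iπt) for t in [0, 1]; a discretized curve integral should be close to F(γ(1)) − F(γ(0)) = −2/3:

import cmath

def f(z):                        # assumed holomorphic integrand: f(z) = z^2
    return z * z

def F(z):                        # its antiderivative: F(z) = z^3 / 3
    return z ** 3 / 3.0

def gamma(t):                    # assumed curve: half-circle from 1 to -1
    return cmath.exp(1j * cmath.pi * t)

n = 100_000
total = 0 + 0j
for i in range(n):
    t0, t1 = i / n, (i + 1) / n
    z0, z1 = gamma(t0), gamma(t1)
    total += f(0.5 * (z0 + z1)) * (z1 - z0)   # approximate f(z) dz along the curve

print(total)                                  # approximately -2/3 + 0j
print(F(gamma(1.0)) - F(gamma(0.0)))          # = -1/3 - 1/3 = -2/3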

The fundamental theorem can be generalized to curve and surface integrals in higher dimensions and on manifolds.

One of the most powerful statements in this direction is Stokes' theorem.
