




1. Newton integral
2. Riemann integral
3. Properties of the Riemann integral
4. The Fundamental Theorem of Calculus
   a. The Newton-Leibniz-Barrow formula
5. More properties of the definite integral
6. Other kinds of definite integral



Newton integral (antiderivative)

In this section we will introduce the Newton integral. The idea is quite simple: the derivative tells us how to change functions into different ones. The Newton integral attempts to undo this procedure: given a function, we will try to find its antiderivative.



Let f be a function defined on an interval I. We say that a function F on I is an antiderivative of f if F is continuous on I and F '(x) = f(x) for all x from the interior Int(I ) of I.

If such an antiderivative on I exists, then we say that f is (Newton-)integrable on I.

The definition is done in this way because we use the notion of antiderivative in two settings. In order to have a derivative at a certain point x, a function must be defined on a neighborhood of this point. So the most natural situation is when we look for an antiderivative of f on an open interval (a,b). Every point in such an interval has a neighborhood there, so the hoped-for antiderivative F can be differentiated everywhere and we have the condition that F '(x) = f(x) on (a,b). Since differentiability implies continuity (see Derivative in Derivatives - Theory - Introduction), such a function F is automatically also continuous on (a,b) = Int(a,b).

However, often we need an antiderivative F on a closed interval [a,b]. The definition then specifies how to extend the above natural situation. We still want F ' = f on (a,b), since we cannot have a derivative at the endpoints (endpoints have no neighborhood on which F would be guaranteed to exist). To describe the required behaviour of F at a and b we use the second condition, that F be continuous on [a,b]. By the previous remark we know that, due to differentiability, F would be continuous on (a,b) anyway, so the continuity condition actually just adds the requirement of one-sided continuity at the endpoints of our closed interval.

Example: The function F(x) = 3x - 1 is an antiderivative of f(x) = 3 on the interval (0,13].



In the same way we check that G(x) = 3x + 7 is also an antiderivative of f(x) = 3 on (0,13]. As a matter of fact, these two functions are antiderivatives of 3 on any interval; typically we would use open or closed ones, and the half-open interval in this example was used just to show how it would work.
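The claim that both functions have derivative 3 can also be checked numerically with a central difference quotient; a small sketch (the helper name is ours, not part of the original text):

```python
# Check numerically that F(x) = 3x - 1 and G(x) = 3x + 7 both have
# derivative 3, i.e. both are antiderivatives of f(x) = 3.

def numerical_derivative(F, x, h=1e-6):
    """Central-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

F = lambda x: 3 * x - 1
G = lambda x: 3 * x + 7

for x in [0.5, 2.0, 10.0]:
    # both values stay close to 3 at every sample point
    print(numerical_derivative(F, x), numerical_derivative(G, x))
```

Shifting a function vertically does not change the slopes of its tangent lines, which is exactly why both checks succeed.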

This example shows several interesting things. First of all, how did we get these antiderivatives? We guessed, using our experience with derivatives. Unlike the procedure of differentiation, where we had a reliable algorithm and could differentiate any function that came our way, finding an antiderivative is a different story. We will get to it later.

The second interesting thing is that we can have several antiderivatives of the same given function. Again, based on our experience, this should not be surprising; for any constant C, the function 3x + C is an antiderivative of f(x) = 3 on any given interval.

Are there also any other kinds of antiderivatives of f(x) = 3? No. The following theorem says that adding a constant to an already known antiderivative is the only way to get other antiderivatives:


Theorem.
Let F be an antiderivative of f on some interval I.
(i) For any constant C, the function G(x) = F(x) + C is an antiderivative of f on I.
(ii) If H is another antiderivative of f on I, then there is a constant C such that H(x) = F(x) + C for all x from I.

The first statement is easy, since G is continuous where F was and

G'(x) = [F(x) + C]' = F '(x) + [C]' = f(x) + 0 = f(x).

We can also consider this situation from a geometric standpoint. The antiderivative condition F ' = f means that we are looking for a function whose graph has tangent lines with prescribed slopes, namely the slope at a point x is equal to f(x). If we find such an F and then add C to it, we simply move the graph up or down; therefore the slopes stay the same and the shifted function also satisfies the condition.



Thus the graphs have to have the same shape, just shifted up or down.

Newton integral

There is no notation to express directly the fact that some F is an antiderivative of f. However, there is a notation for the set of all antiderivatives on a given interval. We denote it by ∫ f(x) dx. This is called the Newton integral of f on I.

Since the above theorem tells us exactly what the set of all antiderivatives looks like, we usually describe this set - the Newton integral - in the following way: If F is some antiderivative of f on an interval I, we write the set of all antiderivatives of f on I as

∫ f(x) dx = F(x) + C.

The process of finding an antiderivative is called integration.

Note that the notation is actually wrong. Since the antiderivatives form a set, the proper notation should be

∫ f(x) dx = { F(x) + C;  C a real number }.

However, this seems like too much writing, so people prefer the incorrect but easier notation. As long as we remember that the answer on the right-hand side is a set (or, to put it another way, any function of the given form, where for C we can substitute an arbitrary constant), we should be fine. One frequent mistake is forgetting to put the "+C" there. This is a serious error; in simple problems it may look like a mere formality or nitpicking, but in applied problems it can be quite serious.

Example: The first example above can be written like this:

∫ 3 dx = 3x + C.

Unless the interval is somehow determined by the problem, we always try to put the largest interval possible. This is called the domain of the integral. We determine it by intersecting the domain of the integrated function with the domain of the antiderivative we find, perhaps removing some points where the antiderivative is defined but has some problem with its derivative. In the above example, the domain would be the whole set of real numbers.



Integration as the "opposite" of differentiation

The fact that there are many antiderivatives also has another consequence that one has to keep in mind. Although we call the resulting function an antiderivative, the procedure does not undo the differentiation process. Indeed, we know that the derivative of 3x + 7 is 3, but when we try to find an antiderivative of 3, we pick some function of the form 3x + C, and there is no guarantee that it will be the 3x + 7 that we started with. So if we start with a function F, find its derivative f and then find an antiderivative of f, we need not obtain F again. The best we can say is that the antiderivative we found is equal to F + D for some constant D.

If we use the Newton integral notation, this question of reversing does not even make sense. The outcome of the integral procedure is not one function but a set of functions, so it cannot be equal to the one given function that we started with and differentiated.

On the other hand, if we start with a function f, find an antiderivative F and then differentiate it, we end up with f again. This follows directly from the definition of an antiderivative. With the Newton integral notation this becomes a bit unclear: How do you differentiate a set of functions? But if we adopt a convention for a moment that this means differentiating all functions from the given set and making them into a new set, we can write (not precisely, but it captures the spirit):

[∫ f(x) dx]' = f(x).
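The one-way nature of this reversal can be seen numerically: differentiate F(x) = 3x + 7 to get f = 3, then rebuild an antiderivative of f by integrating from 0. A sketch (the trapezoid helper and the base point 0 are our choices):

```python
def trapezoid(f, a, b, n=1000):
    """Composite trapezoid rule approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))) * h

F = lambda x: 3 * x + 7   # the function we start from
f = lambda x: 3.0         # its derivative

# the antiderivative of f that vanishes at 0 (one particular choice of C)
recovered = lambda x: trapezoid(f, 0, x)

for x in [1.0, 2.0, 5.0]:
    print(F(x) - recovered(x))   # the difference is always close to 7
```

The recovered function is 3x, not 3x + 7: the constant is lost in differentiation and cannot be guessed back, exactly as the text says.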

Properties and elementary integrals

Recall that when we introduced the integral notation, we started like this: "If F is some antiderivative of f on I..." How do we get this F? In fact, as we will see later, some functions do not have an antiderivative; in other words, the integration procedure can fail! But before we get to that, we look at some properties of antiderivatives:

Theorem (linearity of integral).

(i) Let F be an antiderivative of f on some interval I. If k is a real number, then (kF ) is an antiderivative of (kf ) on I.

(ii) Let F be an antiderivative of f on some interval I, let G be an antiderivative of g on I. Then (F + G ) is an antiderivative of (f + g ) on I.

This theorem is actually very easy to prove. For instance, the function (kF ) is continuous where F was and the differentiation rules immediately yield [kF]' = k[F]' = kf. The second statement is equally obvious.



This notation is easier to use than the language of antiderivatives when it comes to actual integration. For instance, using our experience with derivatives we guess and easily write that

In the problem we used a different variable. Just like in the definite integral, the variable is not really important here; what matters is the formula. We just have to be careful not to change it by mistake during calculations. A good habit is to check at the end that our answer has the same variable as the question. Note also that a sum of two constants whose values are arbitrary numbers is just one number, again arbitrary. When people integrate using the rules from the linearity theorem, they usually do not bother with writing several constants and put just one "+C" right away.

Another good habit is to check that we got the right answer. This is one and perhaps the only nice thing about integration. We can easily check that we got the right answer by differentiating it and comparing with the given function:

In the same way we check that the following list of elementary integrals, based on the derivatives that we remember, is correct:

∫ x^α dx = x^(α+1)/(α+1) + C   (for x > 0, α ≠ -1)
∫ (1/x) dx = ln|x| + C   (for x ≠ 0)
∫ e^x dx = e^x + C
∫ sin(x) dx = -cos(x) + C
∫ cos(x) dx = sin(x) + C
∫ 1/cos²(x) dx = tan(x) + C   (for x ≠ π/2 + kπ)
∫ 1/(1 + x²) dx = arctan(x) + C



Some of the functions in the list have complicated sets as domains. However, in some cases we have a choice between several possible intervals.

Therefore we introduce an important convention. When we put some conditions instead of an interval as the integral domain, we can use this integral on any interval that satisfies these conditions. In the first integral we actually did not have a reason to avoid the proper notation, but writing x > 0 is easier than writing that x belongs to (0,∞), and people use it a lot. It should be noted that for special values of α, the domain may be larger, depending on where the corresponding power exists. If the power α is a negative integer (apart from -1, which is a special case), the integral can be used on any interval not containing 0. If α is a natural number, this integral is true on the whole real line.

The second integral is a good example of our convention. We can use the antiderivative ln|x| for instance on (1,13) or [-7,-2]. The largest possible intervals are (0,∞) and (-∞,0).


It is more complicated with the integral leading to tangent. The result is true on any interval not containing the specified points. The largest possible intervals are of the form (-π/2 + kπ, π/2 + kπ) for integers k.

The list can surely be made longer. Every derivative result that you remember can be put into this table in integral form. However, the integrals above are the most important; you cannot hope to integrate successfully without knowing them by heart, and at the same time they should be enough for most problems. Of course, if you remember more, your chances of success increase.

This may be a good place to remark that when we integrate a fraction, people often put the dx (or whatever differential we have) into the numerator to make it shorter. For instance, we write ∫ dx/x instead of ∫ (1/x) dx.

Examples, integrability



Check by differentiation that the answers are correct.

The procedure for integration that we just saw resembled differentiation in the way it works: you remember some elementary integrals, you remember some rules, and the rest is algebra. This impression of an algebraic approach is quite correct; but unfortunately, and this is perhaps the main disadvantage of integration, the linearity theorem shows the only two rules available. There is no product rule, no quotient rule, no chain rule for composition of functions! Since most functions are somehow composed using products, ratios and compositions, this means that we would not know how to integrate them directly.

There are some specific rules that can help with special types of integrals. This is not as good news as it sounds, because it means that the whole procedure becomes quite messy. Given an integral, one has to guess which procedure to apply; often some tricks are involved, sometimes simple, sometimes very difficult. Even a very simple function can take several pages to integrate. There is no algorithm that would tell you what to do next; the only guide is experience. If you want to become a successful integrator, a lot of practice is essential. We focus on this problem in the sections on Methods of Integration. In Theory - Integration methods we cover the specific rules and methods; in Methods Survey - Integration we try a practical approach.

We close this section with a brief look at the question of integrability. Since the process of integration is rather difficult, it is probably not surprising that for some functions it is impossible. This is the case of the step function that we had as an example in the section on Riemann integral. Here we will prove that this function has no antiderivative on the interval [0,2].

One piece of good news is this:


Theorem.
If f is a continuous function on a closed interval I, then it has an antiderivative there.



Sometimes, however, the antiderivative is only a theoretical function, which exists and one can draw its graph, but the graph is so strange that its shape cannot be expressed using any algebraic combination of elementary functions.

Such is the case of the function

f(x) = e^(-x²).

This is a very nice function given by a simple expression, and it is continuous on the whole real line. In fact, it is the famous "bell-shaped curve" that is used a lot in probability (the Gaussian curve):

By the above theorem it has an antiderivative, this picture shows one of them:

However, this shape cannot be expressed using elementary functions. No matter what kind of an expression you write for F, you will never get F ' = f. So despite the fact that this function is so simple and nice, we cannot write its antiderivative.

There are more functions of this kind; we list some of the nicer ones. The following integrals exist, but we cannot write the antiderivatives using elementary functions:

∫ e^(-x²) dx,   ∫ sin(x)/x dx,   ∫ cos(x)/x dx,   ∫ sin(x²) dx,   ∫ e^x/x dx,   ∫ dx/ln(x).
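Even without an elementary formula, values of such an antiderivative can be computed numerically. A sketch for f(x) = e^(-x²) using Simpson's rule (the helper name and the choice of base point 0 are ours); the antiderivative chosen here vanishes at 0 and tends to √π/2, the Gaussian integral:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule for the integral of f over [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + k * h) for k in range(1, n, 2))
    s += 2 * sum(f(a + k * h) for k in range(2, n, 2))
    return s * h / 3

f = lambda x: math.exp(-x * x)
F = lambda x: simpson(f, 0, x)   # the antiderivative with F(0) = 0

for x in [0.5, 1.0, 2.0, 3.0]:
    print(x, F(x))
print(math.sqrt(math.pi) / 2)    # the limit of F(x) as x grows
```

So the antiderivative is a perfectly concrete function whose values we can tabulate and plot; it just has no formula built from elementary functions.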





2. Riemann integral

Motivation: Consider a function f defined on a closed interval [a,b]. For simplicity, imagine that f is continuous and positive. Then it makes sense to look at the region between the x-axis and the graph of f.

If we can somehow determine the area of this region, we will call this number the definite integral of f from a to b.

There are many ways to try to determine the area. Depending on the properties of the function f, it may be difficult or even impossible to do so. Here we will try the approach of Riemann. It is based on a simple observation that the area of a rectangle is easy to calculate. Therefore we try to approximate the region under the graph of f by suitable rectangles.



Now we will make this procedure precise.

The widths of rectangles are determined by splitting the interval [a,b] into smaller segments. For this we use the notion of partition:


Consider a closed interval [a,b]. By a partition of [a,b] we mean any finite set P = {x_0, x_1, ..., x_N} of points from [a,b] such that a = x_0 < x_1 < ... < x_N = b.

Assume that we have a bounded function f on an interval [a,b]. Given a partition, the interval [a,b] is split into N segments that determine the sides of the approximating rectangles:

Now we have to decide on their heights. There are several methods, here we use the one that is easiest to handle. To be on the safe side, we look at the largest and smallest possible (and reasonable) rectangles, obtaining the upper sum and the lower sum:


Let f be a bounded function defined on a closed interval [a,b]. Given a partition P of [a,b], for k = 1,...,N, set

M_k = sup{ f(x);  x from [x_(k-1), x_k] },   m_k = inf{ f(x);  x from [x_(k-1), x_k] }.

We define the upper sum associated with P by

U(f,P) = M_1·(x_1 - x_0) + M_2·(x_2 - x_1) + ... + M_N·(x_N - x_(N-1))

and the lower sum associated with P by

L(f,P) = m_1·(x_1 - x_0) + m_2·(x_2 - x_1) + ... + m_N·(x_N - x_(N-1)).



Note that since the function f is bounded, the suprema and infima in the definition are always finite, so the sums make sense. In both sums we are adding areas of rectangles: their bases are given by the partition, their heights by the supremum or infimum of f over each segment.

The upper and lower sum is shown in the following pictures. On the left, the area of the shaded region is the upper sum; on the right, the area of the shaded region is the lower sum. We also indicated expressions connected with calculating the area of the third rectangle.

If we denote the area under the graph of f by A (hoping that it makes sense), then from the picture it seems clear that

L(f,P) ≤ A ≤ U(f,P).

To determine the area A we will try to manipulate the rectangles so that the upper sum gets smaller and the lower sum gets larger, until they are almost equal. Since the area always lies between the upper and lower sum, equality of the two sums means that we have determined A. This manipulation takes the form of using narrower rectangles: the error of approximation is then smaller, which means that the upper sum gets smaller (and therefore closer to A) and the lower sum gets larger (and closer to A).



The advantage of the upper/lower sum approach is that we do not have to worry about the mechanics of this procedure; all the details are hidden in the definition below.

Unfortunately, the Riemann approach using rectangles succeeds only if the function f is nice enough; we then say that f is Riemann integrable. Precisely:


Let f be a bounded function defined on a closed interval [a,b]. We say that f is Riemann integrable on [a,b] if the infimum of the upper sums over all partitions of [a,b] is equal to the supremum of the lower sums over all partitions of [a,b].

Then we define the Riemann definite integral of f from a to b by

∫_a^b f(x) dx = inf{ U(f,P);  P a partition of [a,b] }.

We usually just say Riemann integral; it is understood that we mean the definite integral. Since for Riemann integrable functions the infimum of upper sums is equal to the supremum of lower sums, we could also use the latter to determine the Riemann integral.



The value of the integral depends only on the region under the graph. So if we decide to use a different variable in the same formula, the shape and therefore the integral stay the same. Thus, for instance,

∫_a^b x² dx = ∫_a^b t² dt.

Indeed, the area under the same piece of the given parabola is always the same, regardless of what letter we write next to the horizontal axis.

The symbol dx is the differential of x (see for instance Derivatives - Theory - Introduction - Leibniz notation) and here it has only a symbolic role. It is a part of the notation of the Riemann integral, so it is important not to forget it. This is especially important if the integrated function has several variables (or some parameters); the differential then makes clear which of the letters in the formula is used as the variable of integration. While leaving out the differential is not recommended, in general expressions people often write just f instead of f(x) to save time; the variable is clear from the differential anyway. We will do it here, too, but in important statements we will try to write things properly.

For a more thorough explanation of the meaning of the integral notation (not mathematically correct, but very useful for understanding the concept), click here.

In our definition, we put the smaller limit (the left endpoint) as the lower limit. Sometimes we may want to "integrate backward", from b to a. Often we want to be able to simply write the integral without worrying about the order, so we need a more general definition. This is done as follows: Let a < b. We define

∫_b^a f(x) dx = - ∫_a^b f(x) dx   and   ∫_a^a f(x) dx = 0.

Now that we can integrate with any order of limits, the above equation becomes a general rule: we can switch the limits in the integral, provided we also put a minus sign in front.



Example: We try to determine the Riemann integral of f(x) = x + 1 on [2,4] directly from the definition. We need to decide on some partitions that involve smaller and smaller segments, hoping that the corresponding upper and lower sums will get closer until they agree. Unless there is a good reason to do otherwise, it is usually a good idea to try a regular partition: given a natural number N, split the interval [2,4] into N equal segments. Thus we have the following partition (check):

x_k = 2 + 2k/N   for k = 0, 1, ..., N.

Now we need to determine the suprema and infima for the sums, but this should be easy just by looking at a picture: the function f(x) = x + 1 is increasing, so on each segment [x_(k-1), x_k] we have M_k = f(x_k) = 3 + 2k/N and m_k = f(x_(k-1)) = 3 + 2(k-1)/N.

We can calculate the upper and lower sum:

U(f,P) = (2/N)·[3N + (2/N)(1 + 2 + ... + N)] = 6 + (4/N²)·N(N+1)/2 = 8 + 2/N,

L(f,P) = (2/N)·[3N + (2/N)(0 + 1 + ... + (N-1))] = 6 + (4/N²)·N(N-1)/2 = 8 - 2/N.



Thus

inf U(f,P) ≤ 8 + 2/N   and   sup L(f,P) ≥ 8 - 2/N   for every N,

that is, inf U(f,P) ≤ 8 ≤ sup L(f,P). Since the infimum of upper sums can never be smaller than the supremum of lower sums, both must be equal to 8. Hence the function f(x) = x + 1 is Riemann integrable on [2,4] and

∫_2^4 (x + 1) dx = 8.

We can check that this answer is correct by direct calculation from the picture using the formula for the area of a trapezoid.

The above calculation was not easy, even though we were lucky that we remembered the formula for adding the first N natural numbers. For more complicated functions, it may be impossible to find an explicit formula for the upper and lower sums. This is the reason why we usually use other means than the definition for evaluating Riemann integrals (see The Fundamental Theorem of Calculus).
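The squeeze above can be replayed numerically. Since f(x) = x + 1 is increasing, on each segment the supremum sits at the right endpoint and the infimum at the left one; a sketch (valid as written only for monotone functions):

```python
# Upper and lower sums for f(x) = x + 1 on [2, 4] with a regular
# partition into N segments; for an increasing f the supremum of each
# segment is at its right endpoint, the infimum at its left endpoint.

def upper_lower_sums(f, a, b, N):
    h = (b - a) / N
    xs = [a + k * h for k in range(N + 1)]
    upper = sum(f(xs[k]) * h for k in range(1, N + 1))       # right endpoints
    lower = sum(f(xs[k - 1]) * h for k in range(1, N + 1))   # left endpoints
    return upper, lower

f = lambda x: x + 1
for N in [10, 100, 1000]:
    U, L = upper_lower_sums(f, 2, 4, N)
    print(N, U, L)   # U = 8 + 2/N, L = 8 - 2/N, squeezing the integral 8
```

Running it shows the two sums closing in on 8 from above and below, matching the hand computation.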

What is the Riemann integral?

In our pictures we always had a positive function; the Riemann integral is then equal to the geometric area of the region between the graph of f and the x-axis. What if we have a negative function? Since the value of f determines the height of rectangles, we get areas with negative sign. Therefore, for negative functions, the Riemann integral is equal to minus the geometric area of the region between the graph of f and the x-axis.




Which functions are Riemann integrable?

This is a very important question. For our purposes, the most useful fact is this:


Theorem.
Every continuous function on a closed interval is Riemann integrable on this interval.

For example, a function with a point of jump discontinuity may still be Riemann integrable. On the other hand, the example of the Dirichlet function shows that if there are too many points of discontinuity, the function is not Riemann integrable. In fact, a function defined on a closed interval is Riemann integrable there exactly if it does not have too many points of discontinuity. For more information, click here.

The fact that Riemann integrability is not hurt by a finite number of discontinuities is related to the fact that the value of the Riemann integral is not influenced by changing the integrated function at a finite number of points. Precisely, assume that f is Riemann integrable on an interval [a,b]. If g is a function that is equal to f on [a,b] with the exception of a finite number of points, then g is also Riemann integrable on [a,b] and the integrals of f and g agree.

To see why this could be true, look at the following picture, where we obtained g by adding 1 to the function f at the point c.

The region under the graph of g is the same as the region under the graph of f, plus an extra vertical segment at the point where we changed f into g. Since this segment has the thickness of one point, which is zero, its area is also zero and therefore there is no extra area under g.



Theorem.
Every monotone function on a closed interval is Riemann integrable on this interval.

3. Properties of the Riemann integral

We start by listing some properties that should seem obvious when the Riemann integral is understood as a mathematical (signed) area.


Theorem.
(i) Let f be a Riemann integrable function on [a,b]. If f ≥ 0 on [a,b], then

∫_a^b f(x) dx ≥ 0.

If f is continuous on [a,b] and f > 0 on [a,b], then ∫_a^b f(x) dx > 0.

(ii) Let f be a Riemann integrable function on [-a,a]. If f is an odd function, then

∫_-a^a f(x) dx = 0.

If f is an even function, then ∫_-a^a f(x) dx = 2 ∫_0^a f(x) dx.

(iii) Let f be a Riemann integrable function on [a,b]. If m, M are real numbers such that m ≤ f ≤ M on [a,b], then

m·(b - a) ≤ ∫_a^b f(x) dx ≤ M·(b - a).

(iv) Let f and g be Riemann integrable functions on [a,b]. If f ≤ g on [a,b], then

∫_a^b f(x) dx ≤ ∫_a^b g(x) dx.



The two properties in (ii) follow from the symmetry of the graph just by looking at a picture.


Other kinds of symmetry can also be useful at times; for instance, the following result should be hardly surprising:

∫_0^2π sin(x) dx = 0.

Indeed, since the graph of the sine function is symmetric, the areas above and below the x-axis cancel each other out.

The comparison properties (iii) and (iv) should be clear from this picture:

While the above facts were basically just observations, the following properties are quite important:




Theorem (linearity and additivity of the Riemann integral).
(i) Let f be a Riemann integrable function on [a,b], let a < c < b. Then f is Riemann integrable on [a,c] and on [c,b], and

∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx.

(ii) Let f be a Riemann integrable function on [a,b], let k be a real number. Then the function kf is Riemann integrable on [a,b] and

∫_a^b kf(x) dx = k ∫_a^b f(x) dx.

(iii) Let f and g be Riemann integrable functions on [a,b]. Then the function f + g is Riemann integrable on [a,b] and

∫_a^b (f(x) + g(x)) dx = ∫_a^b f(x) dx + ∫_a^b g(x) dx.

The first property should be again clear from a picture:

The second property follows algebraically from the definition of the Riemann integral. Indeed, the constant k can be factored out of the supremum and infimum of f over individual segments, then out of the upper and lower sums, and finally out of the infima and suprema defining the integral.

The third property is less obvious, since the supremum of a sum of two functions is definitely not equal to the sum of individual suprema. However, one has a suitable inequality there and it can be worked out. There is also a geometric argument, a curious reader may find the outline (along with some interesting tidbits concerning areas) here.

Example: Evaluate ∫_0^2 f(x) dx, where

f(x) = 1 for 0 ≤ x ≤ 1,   f(x) = 2 for 1 < x ≤ 2.



We have to decide on partitions. Note that on segments that do not involve the point x = 1, the function f is constant. This means that the supremum and infimum are equal there; in other words, the upper and lower sums agree on those parts. The only difference between the upper and lower sum will happen around x = 1, so we will choose partitions that ignore the constant parts and focus on x = 1:

We look at the picture and see that, for a partition {0, 1 - d, 1 + d, 2} with a small d > 0, both the supremum and the infimum of f are 1 on [0, 1 - d] and 2 on [1 + d, 2], while on the middle segment [1 - d, 1 + d] the supremum is 2 and the infimum is 1.




We therefore get bounds for the infimum and supremum: the upper sums can be made arbitrarily close to 3 from above, and the lower sums arbitrarily close to 3 from below, and so

∫_0^2 f(x) dx = 3.



This answer seems clear from the picture: the region consists of three squares of side one.
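The squeeze can be written out explicitly. Assuming, as in the pictures, that f equals 1 on [0,1] and 2 on (1,2] and taking the partition {0, 1 - d, 1 + d, 2}, the upper and lower sums come out as 3 + d and 3 - d; a sketch of the computation:

```python
# Step function assumed from the example: f = 1 on [0, 1], f = 2 on (1, 2].
# Partition {0, 1 - d, 1 + d, 2}: f is constant on the two outer segments,
# so sup = inf there; only the middle segment around the jump differs.

def sums_for(d):
    upper = 1 * (1 - d) + 2 * (2 * d) + 2 * (1 - d)   # sup on middle segment is 2
    lower = 1 * (1 - d) + 1 * (2 * d) + 2 * (1 - d)   # inf on middle segment is 1
    return upper, lower

for d in [0.1, 0.01, 0.001]:
    U, L = sums_for(d)
    print(d, U, L)   # U = 3 + d, L = 3 - d: both squeeze to 3
```

Shrinking d squeezes both sums to 3, so the jump at x = 1 does not hurt integrability, only the thin strip around it.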

4. The Fundamental Theorem of Calculus



So far, the antiderivative and the Riemann integral seem to have nothing in common. It may therefore come as a surprise that, in fact, there is a deep connection between the two. This is the topic of this section. As one of the consequences we will find a convenient way of evaluating definite integrals.

We start with a definition. Let f be a function that is Riemann integrable on an interval [a,b]. Pick any c from [a,b]. Then f is also Riemann integrable on [c,x] for all x from [c,b] and on [x,c] for all x from [a,c]. Therefore for all x from [a,b] we can define

F(x) = ∫_c^x f(t) dt.

Note that since we used x as an upper limit, we cannot use it as a variable in the integral and had to choose another letter. The value of F(x) is the shaded area:

Since we can define this number F(x) for all x from [a,b], we obtained a function on [a,b] in this way. For a possible interpretation of this integral, click here. We have the following:

Theorem (The Fundamental Theorem of Calculus I, TFC 1).

Let f be a function that is Riemann integrable on [a,b], let c belong to [a,b]. For x from [a,b], define

F(x) = ∫_c^x f(t) dt.

Then F is a continuous function on [a,b]. Moreover, for x from (a,b), if f is continuous at x, then F is differentiable at x and F '(x) = f(x).
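The theorem can be tested numerically: build F as a numeric area function and compare its difference quotient with f. A sketch for f(t) = t + 1 with base point c = 0 (the helper names are ours):

```python
# TFC 1 numerically: F(x) = integral of f from 0 to x should satisfy
# F'(x) = f(x) wherever f is continuous.

def trapezoid(f, a, b, n=2000):
    """Composite trapezoid rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))) * h

f = lambda t: t + 1
F = lambda x: trapezoid(f, 0, x)   # area function with c = 0

for x in [0.5, 1.5, 2.5]:
    h = 1e-4
    dF = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    print(x, dF, f(x))                     # dF is close to f(x)
```

At every sample point the slope of the area function matches the integrated function, which is exactly the statement of the theorem.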

Example: Consider the function f(x) = x + 1 on the interval [0,3].



If x ≤ 1, then similarly. Thus on [0,3] we have

F(x) = x²/2 + x - 3/2

(taking c = 1). Indeed, we now see that F is continuous on [0,3] and on (0,3) we have F '(x) = x + 1 = f(x), exactly as the theorem claimed.



Theorem (The Fundamental Theorem of Calculus I, TFC 1).
Let f be a continuous function on [a,b], let c belong to [a,b]. For x from [a,b] we define

F(x) = ∫_c^x f(t) dt.

Then F is an antiderivative of f on [a,b].

We now have the first connection. We see that a continuous function is both Riemann integrable and Newton integrable, and we can get an antiderivative (the Newton integral) using the Riemann integral. There is a connection the other way, too:

Theorem (The Fundamental Theorem of Calculus II, TFC 2).

Let f be a continuous function on [a,b]. If F is an antiderivative of f on [a,b], then

∫_a^b f(x) dx = F(b) - F(a).

This is also called the Newton-Leibniz Formula. Since finding an antiderivative is usually easier than working with partitions, this will be our preferred way of evaluating Riemann integrals. Since it is used so often, in calculations we will also be using the convenient notation

[F(x)]_a^b = F(b) - F(a).

Example: We know that F(x) = 3x² + x - 3 is an antiderivative of f(x) = 6x + 1 on [0,3] (check that F ' = f ) and f is continuous. Therefore we can evaluate

∫_0^3 (6x + 1) dx = F(3) - F(0) = 27 - (-3) = 30.

Recall that when we find a Newton integral, we express it as F(x) + C. It is easy to see that when we use such an antiderivative to evaluate a definite integral via the Newton-Leibniz formula, the constant C cancels. This is the reason why we simply ignore constants in antiderivatives when evaluating definite integrals. It is easier; for instance, in the above calculation we would prefer to write

∫_0^3 (6x + 1) dx = [3x² + x]_0^3 = (27 + 3) - 0 = 30.
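The Newton-Leibniz evaluation can be cross-checked against a Riemann sum computed directly; a sketch (the midpoint sum is our choice of independent check):

```python
# Newton-Leibniz: evaluate the integral of f(x) = 6x + 1 from 0 to 3
# using the antiderivative F(x) = 3x^2 + x - 3, then compare with a
# midpoint Riemann sum.

f = lambda x: 6 * x + 1
F = lambda x: 3 * x * x + x - 3

newton_leibniz = F(3) - F(0)   # 27 - (-3) = 30

N = 10000
h = 3 / N
riemann = sum(f((k + 0.5) * h) for k in range(N)) * h

print(newton_leibniz, riemann)   # both are (approximately) 30
```

One antiderivative evaluation replaces thousands of rectangle areas, which is why the Newton-Leibniz formula is the preferred way to evaluate definite integrals.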



The two Fundamental Theorems show that for a continuous function, the Riemann and Newton integrals are somehow connected. A more general statement is also true.

That strange integral at the beginning did not just come out of the blue; in fact, it has a natural interpretation in physics. The Fundamental Theorem then follows from elementary physical reasoning; it is a very useful take on this topic and we offer it in this note. Here we will show a useful mathematical interpretation.

Recall that given a function, if we first integrate it and then differentiate, we arrive at the same function as in the beginning. However, if we first differentiate and then integrate, this no longer works (see Newton integral in Integrals - Theory - Introduction). However, when we look at our results here not from the point of view of f but focus on F, we get a very interesting formula:

F(x) = F(a) + ∫_a^x F '(t) dt.

So if we differentiate a function, we can recover it by integrating, provided we use the definite integral in the right way.

5. More properties of the definite integral

Here we will look at some properties of the Riemann integral that are not directly related to its evaluation, but are more of a theoretical interest.


Theorem.
Let f be a Riemann integrable function on [a,b]. Then the following equalities hold:

∫_a^b f(x) dx = lim_{B→b-} ∫_a^B f(x) dx = lim_{A→a+} ∫_A^b f(x) dx.




Note that the integrals in the limits make sense. Consider the first equality: we assume that f is Riemann integrable on [a,b]. If we pick some B between a and b (note that in the limit we approach b from the left; now we can see why), then by the Theorem here, part (i), f is also Riemann integrable on [a,B] and so we can integrate inside the limit. The situation is shown in the following picture.

If we cut away a part of the region along the right edge and make this cut-away part smaller and smaller, the resulting areas should converge to the whole area.

Similarly, f is Riemann integrable on [A,b], and in the second equality we cut away along the left edge.

The second property we will cover here is a modification of the standard Mean Value Theorem (see Derivatives - Theory - MVT) for the function F defined in the Fundamental Theorem of Calculus:

F(b) - F(a) = F '(c)·(b - a) for some c from (a,b).

The Fundamental Theorem then says that F ' = f and that F(b) - F(a) = ∫_a^b f(x) dx. When we substitute for F, we get the following theorem.

Theorem (The Mean Value Theorem for integrals, Lagrange theorem for integrals).
Let f be a continuous function on [a,b]. Then there is a number c in (a,b) such that

∫_a^b f(x) dx = f(c)·(b - a).

If we recall the definition of the average of f (see Applications - Average), we can restate the Mean Value Theorem as follows: a continuous function on [a,b] attains its average, that is,

f(c) = (1/(b - a)) ∫_a^b f(x) dx for some c in (a,b).




Note that the continuity assumption in the Mean Value Theorem is crucial. Indeed, recall the example of a jump function we saw before: its average over [0,2] is 3/2, but the function is never equal to 3/2.

The Mean Value Theorem for integrals has many versions. The one we stated above is probably the most popular, but also the weakest, being merely a reformulation of the good old MVT. There are much stronger statements; we will show one of the more popular ones here.

Theorem (The Mean Value Theorem for integrals).

Let f be a continuous function on [a,b] and g an integrable function on [a,b] that is positive there. Then there is a number c in (a,b) such that

∫_a^b f(x)g(x) dx = f(c) ∫_a^b g(x) dx.

Note that if you use this theorem with the constant function g(x) = 1, you get the first, weaker statement.
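A numerical illustration of the stronger statement, under assumed test functions f(x) = x² and g(x) = 1 on [0,1]: then the integral of f·g is 1/3 and the integral of g is 1, so the promised c satisfies f(c) = 1/3, i.e. c = 1/sqrt(3). A sketch locating c by bisection (helper names are ours):

```python
import math

def trapezoid(f, a, b, n=10000):
    """Composite trapezoid rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))) * h

f = lambda x: x * x
g = lambda x: 1.0

target = trapezoid(lambda x: f(x) * g(x), 0, 1)   # integral of f*g, about 1/3
total = trapezoid(g, 0, 1)                        # integral of g, exactly 1

# f is increasing on [0, 1], so the c with f(c) * total = target
# can be found by bisection.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) * total < target:
        lo = mid
    else:
        hi = mid

print(lo, 1 / math.sqrt(3))   # both are close to 0.5774
```

With g = 1 this reduces to the weaker statement: the located c is exactly the point where f attains its average.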


6. Other kinds of definite integral

The fact that many functions are not Riemann integrable inspired development of different kinds of definite integrals. Today this represents a vast body of knowledge. Here we will just briefly outline two popular and intuitively simple definitions of definite integrals.

The Newton definite integral


Let f have an antiderivative F on an interval I, where I is an interval of arbitrary type with endpoints a and b. We then define the Newton definite integral of f from a to b by

N∫_a^b f(x) dx = lim_{x→b-} F(x) - lim_{x→a+} F(x).

If the limits exist finite, we say that the integral converges.

This definition is done in this general way so that it can be also applied to other intervals than closed ones, it will even apply to intervals whose endpoints may be infinite, in which case we actually obtain the improper integral (cf. here).

At this moment, the most natural setting is when F is an antiderivative of f on [a,b]. The definition then reads (thanks to the continuity of F)

N∫_a^b f(x) dx = F(b) - F(a).



Corollary.

If f is Riemann integrable and Newton integrable on [a,b], then the two definite integrals agree.

Indeed, we then have exactly the Newton-Leibniz formula.

Note that in general, these two notions need not agree. The example of jump function shows that there are functions that are Riemann integrable without being Newton integrable. There are also examples of functions that are Newton integrable without being Riemann integrable.

The Lebesgue definite integral

We will not present a precise definition here, because this would involve some fairly advanced mathematics. We will just try to show the main idea.

When we defined the Riemann integral of a function f, we split the area under the graph of f into vertical strips given by partitioning the x-axis. The Lebesgue integral does it in a different way: it splits the area into horizontal strips given by partitioning the y-axis.

Choose some value y and consider a very thin horizontal slice at the level of y. If it intersects the graph of f, look at the set B of all points x such that f(x) lies in this strip.

Now consider the region under the parts of the graph of f that lie in this strip (the shaded area). If we can somehow find a way to measure the size of the set B, then the area of the shaded part is equal to y times the size of B. There are many ways to measure the size of a set. The Lebesgue integral uses one that is called Lebesgue measure and is too difficult to describe here. The Lebesgue measure is quite natural, for instance, the Lebesgue measure of an interval [a,b] is exactly its size b-a. It is denoted by the Greek letter mu. By the way, if we choose a different way to measure sets, we get a different kind of integral.



From the picture it would seem that if you consider those shaded regions as above for different values of y, the regions are disjoint:

Therefore, the total area under the graph is the sum of areas of all of these regions:

This is called the Lebesgue integral of f.

Since the definition is very complicated, one can expect that the notion of the Lebesgue integral is very powerful. Indeed, it can be applied to some very strange functions, even to the Dirichlet function, which is neither Riemann nor Newton integrable. One can prove that the Lebesgue integral of this function over any interval is equal to zero. Using the Lebesgue integral we can also integrate over sets other than just intervals.