errata & addenda for TOPOLOGICAL VECTOR SPACES (chapters 22-30) in HAF

• Additional remarks for Chapters 21 and 24 -- about Henstock versus Lebesgue integrals. For many decades, the Lebesgue integral has been the "standard" integral of research mathematicians -- i.e., it is the integral understood to be in use when the word "integral" is used without further specification. The Henstock integral is a later invention which is not yet widely used. Each of these integrals has its advantages:
• The Henstock integral is more concrete. It does not require complicated machinery (sigma-algebras, measures, etc.); it can be defined immediately. Its computations yield much intuition and insight into measures, particularly on intervals. (This is why I chose to construct Lebesgue measure and other Borel measures via the Henstock integral, in 24.35.) Finally, the Henstock integral can integrate more functions, and so ultimately it may be useful in more applications.
• The Lebesgue approach is simpler when the domain of the functions is anything more complicated than an interval. Also, although the space L1[0,1] is smaller than the space of Henstock integrable functions, it is also simpler, and easier to understand and work with -- e.g., it has a complete norm, and many good convergence theorems and other elegant results associated with that norm. (The analogous results for Henstock integrals are still being worked out today; that is a current research area. So far, though the Henstock results generalize the Lebesgue results, the Henstock results are rather complicated.)
Because each approach has its advantages, my opinion is that the student would benefit from studying both approaches (as in my book).

• Additional remark for Chapter 22. In Chapter 19 we saw that every metric space has a unique (up to isometry) metric completion. Somewhere in Chapter 22 (perhaps in 22.8 or after 22.13) it should be remarked that the completion of a normed space is a normed space -- i.e., there is a natural way to put a linear structure on that larger metric space, so that the complete metric is given by a complete norm. This is a bit difficult to prove using 19.33.a, but it is very easy to prove using 19.33.b -- if X is a normed space, then all the other (pseudo)metric spaces mentioned in 19.33.b are easily seen to be (semi)normed spaces. You might add a cross-referencing sentence in 19.33.b about this too. --- Here is another method for proving that the completion of a normed space is a normed space; this method is easier but makes the result a corollary of much more advanced results: As noted in 23.20, X is isometrically and linearly embedded in X**. But X** is a complete normed space, by 23.8. Thus the closure of X in X** is a completion, which is easily seen to be linear.

• Remarks for 22.17. This lemma illustrates a general, imprecise principle in analysis: In analysis we often consider a space F of functions f:W-->X, from a set W into a metric space X. In most cases of interest, the function space F is a complete metric space, provided that the codomain X is complete. We don't need the domain W to be complete; it might not even be a metric space. Other examples of this principle (besides the applications of 22.17) are in 19.12, 19.13, 21.35, 22.31, 23.2.d, 23.8, and several examples after 26.6.

• In 22.22's first sentence, "for each real number r" should be "for each real number x."

• In 22.23.a, delete "non"; it should be "only finitely many ... are zero." [JI]

• Remarks for 22.24. The power series for sin, cos, and exp are important examples that should be mentioned here. How do we know that the function given by the power series is the same as the function studied in elementary calculus? One proof is via the formula for the nth coefficient in terms of the nth derivative -- see Section 25.27. Another proof is via the uniqueness of the solutions of the appropriate differential equations; see Section 30.9.
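As a quick numerical sanity check (not a substitute for either of the proofs just mentioned), one can compare partial sums of the power series against the calculus functions:

```python
import math

def sin_series(x, terms=20):
    # Partial sum of the power series sin(x) = sum (-1)^k x^(2k+1)/(2k+1)!
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

def exp_series(x, terms=30):
    # Partial sum of exp(x) = sum x^k / k!
    return sum(x**k / math.factorial(k) for k in range(terms))

for x in [0.5, 1.0, 2.0, 3.0]:
    assert abs(sin_series(x) - math.sin(x)) < 1e-12
    assert abs(exp_series(x) - math.exp(x)) < 1e-12
```

Of course this only checks a few points to machine precision; the identity of the two functions is what the proofs in 25.27 or 30.9 establish.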

• Additional material for 22.27.e. Apparently this question is no longer an open one. I'm reading about the results now. When I've understood them a bit better (hopefully within a few more weeks) I'll post a brief summary here.

• Additional material for 22.28. Minkowski's inequality is the statement that ||.||p is subadditive. That inequality is implicit in the discussion of 22.28, but I should have mentioned it explicitly. (I did mention it explicitly in the more specialized context of sequence spaces, in 22.25.) The proof that I used in 22.28, via Minkowski functionals and 12.29.g, is a bit abstract and unconventional; it's the sort of thing you might find in a functional analysis book. In books on measure and integration, it is customary to first prove Holder's inequality by a computation like that in 22.33, and then use Holder's inequality plus another computation of the same sort to prove Minkowski's inequality, and then use Minkowski's inequality to prove that the Lp space is a linear space. That conventional approach is more concrete, and may seem less mysterious. The concrete computations are not terribly long, and recently some people have found ways to shorten the computations a bit further; you can find two such shortenings in
L. Maligranda, A simple proof of the Hölder and the Minkowski inequality, Amer. Math. Monthly 102 (1995), 256-259.

H. König, A simple proof of the Minkowski inequality, General Inequalities 6 (Oberwolfach, 1990), 469, Internat. Ser. Numer. Math. 103, Birkhäuser, Basel, 1992.

At present I'm undecided about which approach I prefer. (Maybe it would be best to show the students both approaches.) The concrete computations are actually a bit unmotivated; they're based on clever tricks that are produced like a magician pulling rabbits out of hats. The abstract, functional analytic approach (i.e., via 12.29.g) appears more natural and more general, but it's actually not more general: If you try to replace the function t^p with an arbitrary convex function of t and then try to carry through the proof of Minkowski's inequality based on 12.29.g, you'll actually find that many conditions get imposed on that convex function, and ultimately (after much computation) those conditions force you to use t^p.

• In 22.29, Lp(mu,X) and Lp(mu;X) mean the same thing. [JT]

• Additional material after 22.29. Here is an interesting result: Scheffe's Theorem. Suppose p1, p2, p3, ... and p are nonnegative integrable functions. Suppose that pn converges to p pointwise, and the integral of pn converges to the integral of p. Then pn converges to p in L1-norm. (We emphasize that the convergence is not assumed to be monotone or dominated.) A very short proof (using the Dominated Convergence Theorem) can be found in the appendix of P. Billingsley, Convergence of Probability Measures, Wiley, New York, 1968.
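A numerical illustration of Scheffé's Theorem, and of why the convergence of the integrals cannot be dropped (a Python sketch; the particular densities below are made up for illustration):

```python
import numpy as np

N = 100000
x = (np.arange(N) + 0.5) / N          # midpoint nodes in [0, 1]
dx = 1.0 / N

def integral(f):
    # Midpoint-rule approximation of the integral over [0, 1]
    return float(np.sum(f) * dx)

p = 2 * x                              # limit density, integral 1
for n in [1, 10, 100]:
    pn = 2 * x + np.sin(np.pi * x)**2 / n   # nonnegative, -> p pointwise
    # The integrals converge: integral(pn) = 1 + 1/(2n) -> 1 = integral(p).
    assert abs(integral(pn) - (1 + 1/(2*n))) < 1e-6
    # Scheffe's conclusion: L1 convergence; here ||pn - p||_1 = 1/(2n).
    assert abs(integral(np.abs(pn - p)) - 1/(2*n)) < 1e-6

# Without convergence of the integrals the conclusion fails:
# qn = n on [0, 1/n], 0 elsewhere, tends to 0 pointwise a.e., yet
# integral(qn) = 1 for every n, and ||qn - 0||_1 = 1 does not tend to 0.
qn = np.where(x <= 0.1, 10.0, 0.0)     # the case n = 10
assert abs(integral(qn) - 1.0) < 1e-3
```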

• Further remarks after 22.30.b: Thus we could instead define the Lp space to be the completion of the (pseudo)metric space of integrable simple functions, with the appropriate norm or G-norm. That definition is used in some books; it is equivalent to the one given in 22.28 of HAF. (Mention of it could be added to 22.28, as well.) It has the advantage that it does not require 21.36 and related results. It has the disadvantage that it merely defines the Lp "functions" as equivalence classes of Cauchy sequences of integrable simple functions; we still have to go through some additional work to show that the Lp "functions" are indeed (equivalence classes of) measurable functions.

• 22.31.a is called the Riesz-Fischer Theorem.

• 22.40 is wrong. The proof that I gave for (C) implies (B) is erroneous: its sequence (pn) does not necessarily satisfy the hypotheses of (C). Conditions (A), (B), and (D) are indeed equivalent to one another, and should be called uniform convexity; but they are strictly stronger than condition (C), which is known as "full 2-convexity." That fact can be seen from the reference below. The l2-direct sum of the Banach spaces l2, l3, ..., is fully 2-convex but not uniformly convex. [SS2]
Ky Fan and Irving Glicksberg, Fully convex normed linear spaces, Proc. Nat. Acad. Sci. USA 41 (11) (Nov. 15, 1955), pp. 947-953.

• In 22.47, "the bar over g(phi) may be omitted" should say "the bar over g(omega) may be omitted". [JT]

• In 22.51, "there is a among the members" should be "there is among the members". [JT]

• 22.52. The theorem is right but the proof is missing a step. I gave a long proof of clsp(S) subsetof S++, when a much shorter proof would have sufficed: We know that S++ is a closed linear space that contains S, so it must contain clsp(S), QED. On the other hand, I gave no proof of S++ subsetof clsp(S). Here is such a proof: Let any u in S++ be given. Let z be the nearest point to u in clsp(S). By 22.51, u-z is in [clsp(S)]+. Since clsp(S) contains S, we know that [clsp(S)]+ is contained in S+. Thus u-z is in S+. Now, z is in clsp(S), which is contained in S++; and u is in S++. Thus u-z is in S++, and also in S+, so u-z=0. Thus the given u, an arbitrary member of S++, is equal to z, a member of clsp(S).
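In finite dimensions the corrected statement can be checked concretely: clsp(S) is just span(S), S+ is the orthogonal complement, and complementing twice recovers span(S). A small numpy sketch (the vectors below are made up for illustration):

```python
import numpy as np

def orth_complement(vectors, dim=4):
    """Orthonormal basis (as rows) for the orthogonal complement, in R^dim,
    of the span of `vectors` -- i.e., the set S+ when S = vectors."""
    A = np.array(vectors, dtype=float).reshape(-1, dim)
    # Null space of A via SVD: the rows of Vt beyond the rank span it.
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))
    return Vt[rank:]

S = [[1, 0, 2, 0],
     [0, 1, 0, 0]]

S_plus = orth_complement(S)             # S+
S_plus_plus = orth_complement(S_plus)   # S++

# S++ should equal span(S): same dimension, and each s in S lies in S++.
assert S_plus_plus.shape[0] == 2
P = S_plus_plus.T @ S_plus_plus         # orthogonal projection onto S++
for s in np.array(S, dtype=float):
    assert np.allclose(P @ s, s)
```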

• 22.54.e is called Bessel's Inequality.

• Remarks for 22.55. The sequence space l2 has an orthonormal basis given by the coordinate vectors; that's rather trivial. A deeper and more interesting example of an orthonormal basis is as follows: The functions sin(nx) (n=1,2,3,...) and cos(nx) (n=0,1,2,3,...), multiplied by appropriate constants, make an orthonormal basis for the Hilbert space L2[0,2pi]. The proof is not trivial. One elegant way to prove this is by using the Stone-Weierstrass Theorem -- but that theorem is not in my book.
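Orthonormality itself (though not completeness, which is the deep part) is easy to check numerically. A Python sketch, using the midpoint rule, which on a full period is exact for trigonometric polynomials of low degree:

```python
import numpy as np

# Check orthonormality in L2[0, 2*pi]: the normalized system is
# 1/sqrt(2*pi), sin(n x)/sqrt(pi), cos(n x)/sqrt(pi), n = 1, 2, 3, ...
N = 10000
x = (np.arange(N) + 0.5) * (2 * np.pi / N)   # midpoint nodes
dx = 2 * np.pi / N

def ip(f, g):                     # inner product in L2[0, 2*pi]
    return float(np.sum(f * g) * dx)

funcs = [np.ones(N) / np.sqrt(2 * np.pi)]
for n in range(1, 4):
    funcs.append(np.sin(n * x) / np.sqrt(np.pi))
    funcs.append(np.cos(n * x) / np.sqrt(np.pi))

for i, f in enumerate(funcs):
    for j, g in enumerate(funcs):
        target = 1.0 if i == j else 0.0
        assert abs(ip(f, g) - target) < 1e-8
```

Completeness -- that nothing nonzero is orthogonal to all of these -- is what requires the Stone-Weierstrass Theorem (or some other nontrivial argument).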

• In 23.6.a, in the definition of C[0,1], that should be continuous functions from [0,1] into F. [JT]

• Additional remarks for 23.7 (or perhaps 9.55). What is the dual space, really? We may view X* as a method for studying X. For a particularly simple example, let X be some finite-dimensional vector space -- e.g., R^3. What happens in X can be described in terms of its coordinate projections into one-dimensional spaces. That description may or may not be useful, depending on what we're studying in X. In general, if X is any normed space (or any object in a category of the sort described in 9.55), then the members of X* play a role similar to that of coordinate projections. The members of X may be infinite dimensional, but their projections via members of X* are only one-dimensional -- i.e., numbers -- and, presumably, easier to understand in some respects.

• In 23.9's first equation, the X* should be a subscript of the norm. [JT]

• 23.25 is a form of Pettis's Theorem.

• Add to beginning of Chapter 24: The definitions of the various integrals are admittedly somewhat complicated. For motivation, the reader is invited to glance ahead to the chapter on derivatives; the definition of derivative is actually simpler and more natural than the definition of integral. Our integrals are justified by the fact that, in various ways, they are antiderivatives --- see sections 25.15 through 25.18. The biggest advantage of the gauge integral over the Riemann and Lebesgue integrals is given by 25.18.

• Additional remarks for 24.8. Instead of taking one of the two functions to be vector-valued and taking the other to be scalar-valued, it might be nicer to take BOTH functions to be vector-valued, and use a bilinear product, as mentioned in 11.41. Most of the proofs about Stieltjes integrals are essentially the same whether we take both, one, or neither of the functions to be scalar-valued.

• Further remarks for 24.9: In Section 24.9 I said that the refinement integral "deserves further study." What I meant was that I was not very familiar with that integral, and I wondered about it. I have now done a little reading and thinking about the refinement integral, and I now regret having mentioned it at all in my book --- the reader's time is limited, and I now feel that the refinement integral is not a very good integral, and it does not belong in an introduction to integration. It integrates far fewer functions than does the Henstock or Lebesgue integral.

• 24.10 is due to Cousin.

• In 24.19, second paragraph of proof, p's should be q's.

• In 24.23(ii), the integral should end in d phi(t), not just in dt.

• Section 24.23 (the Henstock-Saks Lemma) is correct (except for one minor typographical error), but it is complicated and may be confusing to beginners. I've thought of a few ways to make it easier to understand. I'm putting that material on a separate page.

• Addition after 24.25. The "improper integral" mentioned on page 630 of my book is not needed for Henstock integrals. Any function that has an improper Henstock integral, also has a proper Henstock integral. That is Hake's Theorem, which really ought to be added to this treatment. It is not true for the Riemann integral (for instance, consider the function on page 630 of my book), nor for the Lebesgue integral (see the example in section 25.20 of my book).

• Additions for 24.26.(ii). Actually, the result is also true (by a different proof) if f is scalar-valued and phi is vector-valued. In fact, the result is true if both functions are vector-valued. See Lee and Rey. That paper also shows an integration-by-parts formula, and shows that the class of Henstock-integrable functions is closed under the operation of multiplying by a function that has bounded variation.

• Possible modifications for 24.28 and 29.34. I'm not happy with my book's treatment of the Riesz Representation Theorem; I think that the material in 24.28 and 29.34 does not deliver enough insight to justify all its effort. If I could do it over, I would probably follow a much shorter treatment, proving this much simpler statement: Any continuous linear functional on C[a,b] is given by the Riemann-Stieltjes integral with respect to some integrator which has bounded variation. That result is very easy to prove, using the Hahn-Banach Theorem; a short proof can be found (for instance) in Groetsch's book. It makes 24.28 entirely unnecessary.

• 24.35. The theorem is right but the last paragraph of the proof is wrong. Indeed, from 24.22.b we find that
• mu-sub-phi( [a,p) ) = phi(p-) - phi(a),
• mu-sub-phi( [a,p] ) = phi(p+) - phi(a).
In particular, in the latter formula, taking p=a, we find
• mu-sub-phi( {a} ) = phi(a+) - phi(a).
That last expression vanishes if phi is right-continuous, so a right-continuous phi cannot yield a measure that is positive on the singleton {a}. For a correct proof of 24.35, modify the last paragraph slightly: Define
phi(t) = mu( [a,t) ),
which is left-continuous. (By the way, the earlier part of the proof of 24.35 can be shortened slightly: Use 24.32.b to show that K is closed under finite union; this shortens the proof that K is a sigma-algebra.)
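A concrete sanity check of the formulas above, taking mu to be a unit point mass (a Python sketch; one-sided limits phi(p-) and phi(p+) are approximated with a small epsilon):

```python
# mu = unit point mass at c, on the interval [0, 1].
c = 0.5
def mu(s):                         # s: a set, given as a membership test
    return 1.0 if s(c) else 0.0

def phi(t):                        # phi(t) = mu([0, t)): left-continuous
    return mu(lambda x: 0 <= x < t)

eps = 1e-9
# The formulas from 24.22.b, checked at several endpoints p:
for p in [0.25, 0.5, 0.75, 1.0]:
    half_open = mu(lambda x: 0 <= x < p)      # mu([0, p))
    closed    = mu(lambda x: 0 <= x <= p)     # mu([0, p])
    assert half_open == phi(p - eps) - phi(0)    # = phi(p-) - phi(0)
    assert closed    == phi(p + eps) - phi(0)    # = phi(p+) - phi(0)

# phi is left-continuous at c, yet the singleton {c} has positive measure:
assert phi(c) == phi(c - eps)                    # left-continuity at c
assert mu(lambda x: x == c) == phi(c + eps) - phi(c - eps) == 1.0
```

This matches the correction: with the left-continuous convention phi(t) = mu([a,t)), the mass of a singleton appears as the jump of phi there.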

• Remarks for 24.41.b. It turns out that the Riemann-Lebesgue Lemma is not true for Henstock integrals -- in fact, it is not even true for improper Riemann integrals, though a counterexample is rather complicated to produce. One example was given by Riemann: let g(x) = x^(1/3) cos(1/x), and let f(x) = g'(x); then f has an improper Riemann integral, but its Fourier coefficients do not tend to 0. A more general example was produced by Titchmarsh and is presented with proof in section 2.22 of the old edition of Zygmund's book Trigonometric Series. My thanks to Bob Bartle for bringing this to my attention.

• In 24.44, the proof contains some single vertical bars |..| which should be double bars ||..||. [DJ]

• Additional result for 24.46. That theorem can be improved slightly: If a function is continuous except on a set of measure 0, then that function is measurable. (This is proved, for instance, in Theorem 2.5 of Russell Gordon's book.) Thus, one of the hypotheses of 24.46 can be dropped.

• Additional material for sections 25.1-25.6. Students who are only familiar with the one-dimensional case are surprised when they learn how complicated derivatives can be in higher dimensions -- e.g., that if f maps from X into Y, then f ' maps from X into BL(X,Y), and f '' is more complicated still. We can't make the higher-dimensional derivatives less complicated, but we may be able to provide additional insight by making the one-dimensional derivatives more complicated, as follows. Let's take a very simple example: Say f(x) is the function x-squared. Then f '(x) = 2x, and in particular f '(3) = 6. Now, instead of thinking of f '(3) as the number 6 (a member of R), let us think of f '(3) as the operation "multiply by 6" (a map from R into R). Of course, we generally represent that operation by the number 6, but we can say that the derivative is "really" the operation, not the number. This viewpoint is not mentioned in freshman calculus courses because it is merely distracting if one is only considering one-dimensional problems. But this viewpoint may be helpful in making the conceptual jump to derivatives in higher dimensions.
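The viewpoint just described can be made concrete in code (a sketch; the helper name `derivative_as_map` is made up for illustration):

```python
# View f'(a) as a linear map rather than a number.

def f(x):
    return x * x          # the example above: f(x) = x^2

def derivative_as_map(f, a, h=1e-6):
    """Return f'(a) as the linear map t -> f'(a) * t."""
    slope = (f(a + h) - f(a - h)) / (2 * h)   # central difference
    return lambda t: slope * t

D = derivative_as_map(f, 3.0)
# The "number" f'(3) = 6 is recovered by applying the map to 1:
assert abs(D(1.0) - 6.0) < 1e-6
# Linearity -- the property that survives in higher dimensions,
# where f'(a) is a member of BL(X,Y):
assert abs(D(2.0 + 5.0) - (D(2.0) + D(5.0))) < 1e-9
```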

• Additional material for 25.14. It is interesting to note that when C[0,1] is equipped with its usual sup norm, then the set of all nowhere differentiable functions is not a Borel subset of C[0,1]. This was proved by Mauldin:
R. D. Mauldin, The set of continuous nowhere differentiable functions, Pacific J. Math. 83 (1979), 199-205.
By the way, here is a related question for which I haven't yet found a good answer. Does anyone know of an explicitly constructible example of a subset of the reals that is Lebesgue measurable but not Borel measurable? (I haven't finished investigating this; I suppose I should look at some books on descriptive set theory.)

• Improvements for 25.16 and related material. A nice theorem which I wish I'd included is this: The indefinite integral of any real-valued Henstock integrable function is differentiable almost everywhere, and the derivative is equal to the integrand. That can be proved using the Henstock-Saks Lemma and the Vitali Covering Theorem (which I also omitted from my book); a proof can be found in P. Y. Lee's book. Then the theorem in 25.16 follows as an easy corollary, using a proof similar to that in Dunford and Schwartz. (This approach would also replace 24.43.) By the way, the differentiation result is false for Henstock integrals taking values in an arbitrary Banach space. For a pathological example, modify the example of 24.47: Let H be a non-separable Hilbert space, with orthonormal basis {e_t : t in [0,1]}. Let f(t) = e_t. The argument given in 24.47 shows that f(t) has Riemann integral 0 on every subinterval of [0,1]. Hence F(t) = 0, and F'(t) = 0 which is nowhere equal to f(t). (This example uses a nonseparable Banach space; I don't yet know whether the differentiation result is valid in separable Banach spaces.)

• Additional note for 25.20 (and for 24.36). In 25.20 we used derivatives to give an example of a Henstock integrable function f (real-valued, on a bounded interval) that is not Lebesgue integrable. But here is another method which could be given earlier in the book -- it doesn't require derivatives; it just requires unconditionally convergent series (covered in 10.41-10.43 and 23.26-23.27). Subdivide a bounded interval into countably many subintervals, which pile up at one point -- e.g., subdivide [0,1] by using the powers of 1/2 for endpoints. Define f(t) to be constant on each subinterval, alternating between positive values and negative values. Choose those values so that the areas of the resulting rectangles in the graph of f form an alternating series that is conditionally convergent.
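The areas of those rectangles can be chosen to be the alternating harmonic series: on the n-th subinterval [2^-n, 2^-(n-1)), of length 2^-n, let f take the constant value (-1)^(n+1) 2^n / n, so the n-th area is (-1)^(n+1)/n. A Python sketch of why this works -- the signed sums converge while the absolute sums diverge, so f is Henstock- but not Lebesgue-integrable:

```python
import math

def area(n):
    # Area of the n-th rectangle: value (-1)^(n+1) * 2^n / n
    # on an interval of length 2^-n.
    return (-1)**(n + 1) / n

# Signed sums converge (to log 2, the alternating harmonic series):
signed = sum(area(n) for n in range(1, 100001))
assert abs(signed - math.log(2)) < 1e-4

# Absolute sums are the harmonic series, which diverges (~ log N):
absolute = sum(abs(area(n)) for n in range(1, 100001))
assert absolute > 10
```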

• Additional result related to 25.25. Any monotone function (from an interval of R, into R) is differentiable almost everywhere. This theorem deserves to be mentioned, at least, but I'm undecided about whether the proof should be included in an introductory treatment of real analysis; the proof is fairly long. It uses the Vitali Covering Lemma. A proof of this classical result can be found in many books; for instance, it is Theorem 4.9 in Russell Gordon's book.

• Additional remark for 26.13. The F-seminorm rho-sub-phi defined in this section is a slight variant of the norm defined by Luxemburg for Banach spaces of Orlicz type in his thesis. Such norms have subsequently been called "Luxemburg norms"; this terminology apparently was first used in Krasnoselski's book on Orlicz spaces.

• 26.16. The proof is correct for p strictly between 0 and 1, but a slight adjustment is needed for the case of p=0. In that case, use Gamma(x) = |x|/(1+|x|); thus Gamma(x) is always less than 1. Choose an integer n large enough so that 1/n is less than r. Choose the partition to be just n subintervals of equal length. Then rho(g-sub-j) is less than 1/n, hence less than r, hence g-sub-j is a member of V.

• 26.17. To show that the sequence (y_j) is bounded, we cannot reason in terms of |||phi|||, because phi doesn't have an operator norm, because l_p is not a normed space. Instead, reason as follows: Suppose (y_j) is not bounded. Choose some subsequence (y_j_k) with absolute values tending to infinity. Let v_k = (e_j_k)/(y_j_k). Then phi(v_k) = 1, by linearity of phi. However, ||v_k|| --> 0 as k --> infinity, so v_k --> 0 in the topology of l_p. By continuity of phi, we have phi(v_k) --> 0, a contradiction.

• Additional remark for 26.59. An example: For p in [1,oo], the Lebesgue spaces Lp are Dedekind complete Banach lattices. The lattice supremum is given by the essential supremum of measure theory, defined in 21.42. The order convergence is the same as convergence pointwise almost everywhere (see 21.42 and 21.43). Thus, the norm convergence should not be confused with the order convergence; neither of those convergences implies the other.

• Additional remarks for 27.25. A related result deserves mention: A topological vector space X is a Baire space if and only if every closed, balanced, absorbing subset of X is a neighborhood of some point. That is Theorem 1 of S. A. Saxon, Two characterizations of linear Baire spaces, Proc. AMS 45 (1974), 204-208.

• Additional material after 27.28. My book omitted the Open Mapping Theorem. Most functional analysis books make the Closed Graph Theorem into a corollary of the Open Mapping Theorem. But things can be done the other way too -- the Open Mapping Theorem can be made into a corollary of the Closed Graph Theorem. That's the way it's done, for instance, in Kelley and Namioka's book Linear Topological Spaces. I'm posting my own variant of that argument on a separate web page.

• Additional remarks for 28.8, 28.9.c, and 28.27.
• In 28.8, we do not lose any generality if we also require that each member of S be weakly closed, convex, and balanced; that will be evident from (newly added) remarks after 28.27.
• We can strengthen the statement in 28.9.c: Let S1 be the set of all weakly relatively compact subsets of Y; then we still get the two collections yielding the same S-topology. That will be evident from the remarks below.
• Add these observations after 28.27: Let the "bipolar" of a set mean its weakly-closed, convex, balanced hull. Let S be a collection of sets satisfying the requirements of 28.8. Then it is easy to verify that the collection of all bipolars of members of S also satisfies 28.8. Moreover, that collection yields the same uniform convergence topology, since a neighborhood base for the uniform convergence topology is given by the polars of members of S, and the polar of any set equals the polar of its bipolar (in symbols, S° = S°°°).

• Correction for 28.29. The proof of (UF28) ⇒ (UF1) has inadvertently used AC in choosing nets. A correction was sent to me by Renan Mezabarba. I'm going to rewrite the entire proof, using notation that works better on a web page. I'll use lowercase for points, uppercase for sets, and boldface for sets of sets; Greek for Ω, unstarred Latin for X, and starred Latin for the dual of X.
• Let Ω be any nonempty set, and let Φ be a proper filter of subsets of Ω; we wish to show that Φ is contained in an ultrafilter.
• Let X = {bounded functions from Ω into R}; this is a real Banach space when equipped with the sup norm || ||. Note that for each set Λ ⊆ Ω, the characteristic function 1Λ: Ω → {0,1} is a member of X.
• Let X* be its dual space, {bounded linear maps from X into R}. We use w* to indicate the weak-star topology on X* -- that is, the topology generated by finite subsets of X. Let U* be its closed unit ball {x*∈X*: ||x*||≤1}. Then U* is w*-compact, by (UF28).
• For each ω∈Ω define the evaluation map e*ω: X→R by taking e*ω(x) = x(ω) for x∈X. It is easy to see that e*ω is linear and ||e*ω|| ≤ 1, so e*ω ∈ U*. For sets Λ ⊆ Ω, we shall also denote e*(Λ) = {e*ω: ω∈Λ} ⊆ U*.
• The collection Φ has the finite intersection property (i.e., the intersection of finitely many of its members is nonempty). Consequently the collection {w*-cl(e*(Λ)): Λ∈Φ} also has the finite intersection property. By 17.2(B), therefore, that collection has nonempty intersection. Fix some
h0*   ∈   ∩{w*-cl(e*(Λ)): Λ∈Φ}.
• Define μ: P(Ω) → R by μ(Λ) = h0*(1Λ) for each Λ⊆Ω, where P(Ω) denotes the collection of all subsets of Ω. Note that μ is finitely additive; that is, if Λ and Γ are disjoint subsets of Ω, then μ(Λ∪Γ) = μ(Λ) + μ(Γ).
• (Up to this point, I've been following the book, except for change of notation; the changes in the content begin here.)
• Our goal is to show that μ takes only the values 0 and 1, and that μ takes the value 1 on every member of Φ. By 21.12.a, it then will follow that μ is a {0,1}-valued probability charge, and thus the characteristic function of an ultrafilter that extends the given filter Φ, which will complete the proof.
• Fix any set Γ⊆Ω; we shall investigate the value of μ(Γ). Temporarily fix any Λ∈Φ and ε>0. Since h0* ∈ w*-cl(e*(Λ)), we know e*(Λ) = {e*λ: λ∈Λ} meets every w*-neighborhood of h0* in X*. In particular,
{e*λ: λ∈Λ} ∩ {g*∈X*: |g*(1Γ) - h0*(1Γ)| < ε}
is a nonempty subset of X*. Therefore
{λ∈Λ   :   |e*λ(1Γ) - h0*(1Γ)| < ε}
is a nonempty subset of Λ. That set is equal to
{λ∈Λ   :   |1Γ(λ) - μ(Γ)| < ε}
which therefore is also a nonempty subset of Λ. That can be restated as:
{1Γ(λ) : λ∈Λ} ∩ (μ(Γ) - ε, μ(Γ) + ε)
is a nonempty subset of R. That set is also bounded, so it has a supremum, which we shall now denote by μ^(Γ). Observe that μ^(Γ)∈{0,1}, since 1Γ(λ) can only take the values 0 or 1. Also |μ^(Γ) - μ(Γ)| ≤ ε; since ε>0 was arbitrary and μ^(Γ)∈{0,1}, letting ε↓0 yields μ(Γ)∈{0,1}. Finally, if Γ∈Φ, we may take Λ=Γ; then 1Γ(λ)=1 for every λ∈Λ, so μ^(Γ)=1 and hence μ(Γ)=1. This shows μ takes the value 1 on every member of Φ.

• Additional material for 28.29. Here are a couple more variants of the Alaoglu Theorems which perhaps deserve to be added. First,
In any topological vector space, any weakly bounded set is weakly totally bounded.
This follows from 19.15.g, without using the Axiom of Choice or any weak form of choice. (Here "totally bounded" is defined as in 19.14.) Next, using (UF24), we obtain this result:
(UF25.5) In any topological vector space, any weakly bounded set is weakly precompact.
I'm calling that "(UF25.5)" because it would be inserted right before (UF26). We can prove without much difficulty that (UF25.5) implies (UF26). Proof: Let X* have the weak-star topology; note that that is the relative topology induced by the product topology on FX, where F is the scalar field. Let V be an equicontinuous subset of X*, and let C be its closure in FX. Then C is also equicontinuous, by 18.33.a, and we easily verify that C is a subset of X* and that C is complete (in the weak-star topology). We also verify that C is bounded; hence by (UF25.5) it is precompact; hence it is compact; hence V is relatively compact.

• 28.36. In the proof that (B) implies (C) and (E) in normed spaces, the last part of the argument is missing a few steps. More explanation is needed, to show that the limit actually lies in X rather than in X**.

• Comment on 28.37: Perhaps a more intuitive proof would be by this route: First prove the theorem for Banach spaces; the general case then follows easily from 28.35. (Question: What is the nicest or most intuitive proof in Banach spaces? I'm still trying to decide; I'd be interested in your comments or opinions on this.)

• Additional material for 28.41. It would be good to give examples showing that condition (G) is satisfied when X is reflexive, but not when X is not reflexive. Here are some simple examples. Consider the sequence spaces X=lp. When p is strictly between 1 and infinity, then X is reflexive and X*=lq (where q is the conjugate exponent). For any f=(f1,f2,f3,...) in X*, we can define a corresponding x=(x1,x2,x3,...) in X by taking x_j = |f_j|^r f_j for an appropriate exponent r. On the other hand, when X=l1 then X is not reflexive, and we get a counterexample to (G) by taking f=(1/2, 2/3, 3/4, 4/5, ...) in X*.
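A numerical sketch of the l1 counterexample (the coefficients f_j = j/(j+1) are those of the f above, regarded as a member of l-infinity = (l1)*):

```python
# f = (1/2, 2/3, 3/4, ...): ||f|| = sup_j f_j = 1, but the sup is not
# attained by any x in l1 with ||x||_1 = 1, since every |f_j| < 1.
def f_coef(j):          # j = 1, 2, 3, ...
    return j / (j + 1)

# The norm ||f|| = 1 is approached:
assert f_coef(10**6) > 0.999999
# ...but on each unit vector e_j of l1, |f(e_j)| = f_j < 1:
for j in range(1, 1000):
    assert f_coef(j) < 1
# More generally |f(x)| <= sum_j f_j |x_j| < ||x||_1 for nonzero x in l1,
# so the sup is never attained: condition (G) fails for X = l1.
```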

• Additional remark after 29.2. Any weak measure is a measure. In other words: Suppose mu is a map from some sigma-algebra G, into a Banach space X. Suppose that whenever S1, S2, S3, ... are disjoint members of G, then mu of the union of the Sj's is equal to the sum of the mu's of the Sj's, where the series converges in the weak topology. Then the series also converges in the norm topology, and so mu is a measure. Proof: 28.31.

• In 29.26 on page 802, in the 5th line, put a period after 29.24.e.
• Additional material for 29.26: Theorem 29.26 says that any reflexive Banach space has the RNP. That result can be improved slightly. A separable dual is a Banach space that is separable and that is the dual of some Banach space. For example, l1 is a separable dual, since it is separable and (c0)*=l1. Then Example 29.22 (l1 has the RNP) is a corollary of this strengthened version of 29.26:
If a Banach space is either (i) reflexive or (ii) a separable dual, then it has the RNP.
This can be proved by slight modifications of the proof that I gave in 29.26. Throughout the proof, replace the weak topology with the weak-star topology. (That's no change at all in the reflexive case.) A couple of other changes must be noted:
• The set B is weak-star compact, by (UF28) in 28.29. Then it is weak-star sequentially compact, by 28.36(E) in case (i), or by 23.23 and 28.24.a and 17.33(B) in case (ii).
• Let X0 be the weak-star-closed linear span of the union of the ranges of the gn's. Then X0 is also norm-separable, by 28.14 in case (i), or by 15.13.d in case (ii).

• 29.34. The proof needs some repair. The approximating functions u converge to f uniformly on the half-open interval (a,b], but in general u(a) doesn't converge to f(a) -- we have u(a)=0 but f(a) could be anything. This minor error is fairly common in the literature; see remarks below. To correct it, change the definition of u so that u(a)=f(a). This also requires changing the definition of phi. Leave it unchanged on (a,b], but do not define phi(a) to be 0. Rather, define phi(a) to be equal to minus one times lambda-hat of the characteristic function of the singleton {a}. By the way, I think that sections 24.28 and 29.34 are unnecessarily complicated; it would be more compatible with the rest of this book if we replaced those results with a slightly weaker but much simpler result:
Every continuous linear functional on C[a,b] can be represented as the Riemann-Stieltjes integral with respect to some function of bounded variation. (Moreover, the function in BV can be chosen so that its variation is equal to the operator norm of the continuous linear functional.)
A relatively short proof of that result can be given; it is similar to the 2nd, 3rd, 4th, and 5th paragraphs of the proof of 29.34 (with the correction noted above). The theorem apparently is due to F. Riesz (1909). The proof using the Hahn-Banach Theorem apparently first appeared in Banach's book (1932). Banach made the error about not dealing with the endpoint properly; I guess it got past the referees because they already knew Riesz's theorem was true. Banach's proof, along with the minor error, subsequently found its way into many other books -- for instance,
• A. Friedman, Foundations of Modern Analysis, Holt, Rinehart, Winston, 1970, reprinted by Dover.
• C. W. Groetsch, Elements of Applicable Functional Analysis, Marcel Dekker, 1980.
• W. H. Ruckle, Modern Analysis, PWS-Kent, 1991.
and, of course, Limaye's book (cited in my bibliography) and my own book. However, a few mathematicians have been more careful. For instance,
• E. Kreyszig, Introductory Functional Analysis with Applications, Wiley, 1978
deals with the endpoint correctly. By the way, here's another reference that I can't resist mentioning:
• J. Mikusinski, A remark on functionals in the space of continuous functions, Bull. Polish Acad. Sci. Math. 33 (1985), 623-626.
gives a variant of Banach's proof which does not use the Hahn-Banach Theorem; thus it is constructive (I think). It is only slightly longer than Banach's proof. However, I think it is also a little less intuitively appealing than Banach's proof.
• In 29.36(B), "phi" should be the Greek lower-case letter φ, rather than the name of that letter. (Those who know TeX will understand that I typed $phi$ when I meant to type $\phi$.)
• Remarks for 30.6: The proof in the book uses Zorn's Lemma, but actually it would suffice to use Dependent Choice. (My thanks to Ralph McKenzie for helping with this proof.) Let A be the initial time. For each non-maximal solution u, choose an extension v by the following criteria. Say u is defined on [A,B) or [A,B]. Let T be the supremum of all numbers S such that an extension can be found on [A,B+S) or [A,B+S]. Then there exists an extension v on [A,B+(T/2)] --- or, if T is infinity, there exists an extension v on [A,B+1]. After choosing a sequence of extensions in this fashion, take the union of their graphs; it is the graph of a maximal extension.

• In 30.17 I said "we mention a couple of examples for concreteness," but in fact I only gave one example. (There used to be another, but it was too complicated so I took it out.)

• In 30.26, in the first displayed line, in the last fraction, the alpha on the top of the fraction should be a beta.

• Bibliography errata: The bibliography is not quite in alphabetical order:
• Brunner should go before, not after, Bruns and Schmidt.
• I'm undecided: Kothe after Kopperman, but Koethe before Kopperman.
• Kuo should go after Kunen.
• Pettis should go after Peressini and Pervin.
• J.B.Rosser 1939/1965 should go before J.B.Rosser 1953/1978.