Jet Wimp*
A Course on Integral Equations by Allen C. Pipkin. Springer Texts in Applied Mathematics, Vol. 9. New York: Springer-Verlag, 1991. xiii + 268 pp. US $39.00. ISBN 0-387-97557-8.
Integral Equations: A Practical Treatment from Spectral Theory to Applications by David Porter and David G. Stirling. New York: Cambridge University Press, 1990. xi + 372 pp. US $29.95 (paper). ISBN 0-521-33742-9.
Linear Integral Equations by Rainer Kress. Applied Mathematical Sciences, Vol. 82. New York: Springer-Verlag, 1989. xi + 299 pp. US $49.00. ISBN 0-387-50616-0.
Integral Equations and Applications by Constantin Corduneanu. New York: Cambridge University Press, 1991. ix + 366 pp. US $94.95. ISBN 0-521-34050-0.
* Column Editor's address: Department of Mathematics, Drexel University, Philadelphia, PA 19104 USA.

Reviewed by Thomas S. Angell

In the May 1697 issue of the Acta Eruditorum appear articles by Johannes and Jakob Bernoulli and also a note by Leibnitz, all discussing the solution of the famous problem of the brachistochrone: What is the plane curve having the property that the time necessary for a particle to slide down a path to the lowest point on the curve is a minimum? Many of us are familiar with the history of this "challenge problem" posed to the mathematicians of Europe by Johannes Bernoulli in June of 1696. The note of Leibnitz appears there simply because, as is perhaps not so well known, he had already solved the problem. In response to a private letter sent to him on June 9th, Leibnitz had communicated his solution to Bernoulli on June 16, 1696. The note of Leibnitz is remarkable not only for its reticence but also for Leibnitz's assertion that Huygens too, had he been alive, could have solved the problem. Huygens, who died in 1695, had not seen Bernoulli's challenge. Not that the problem was new: Galilei in 1638 had posed it and suggested, incorrectly, that the curve of steepest descent was an arc of a circle. The Bernoullis and Leibnitz showed 59 years later that the brachistochrone was instead a cycloidal arc. Huygens was probably ignorant of Galilei's problem. On the other hand, both Leibnitz and the Bernoullis were well aware of the investigations of Huygens into the properties of the cycloid and, in particular, its isochronous property, which Huygens had described in 1673. Indeed, the Bernoullis expressed amazement that the solution of the problem of the curve of quickest descent was the same as Huygens's tautochrone, the plane curve having the property that the time necessary for a particle to slide down a path to the lowest point on the curve is independent of the point of origin. I enjoy talking to the students of elementary calculus about Huygens's study of the pendulum clock and the tautochrone's importance to navigation, geography, astronomy, and the development of a clock with a period
THE MATHEMATICAL INTELLIGENCER VOL. 16, NO. 1 ©1994 Springer-Verlag New York
independent of the length of the pendulum. Of course, sometimes a curious student catches me up by asking how Huygens came to try the cycloid in his search for an isochronous curve. My answer is that good intuition always characterizes the work of a master, an assertion that hardly sits well with students. I wish that I could tell them about the work of Abel in 1823. Abel developed a method, starting from first principles, for finding the tautochrone simply from its defining property. Unfortunately, Abel's work is a bit distant from the subject matter of the course, and my students are not sophisticated in matters of mechanics. The problem, as stated by Abel, was a generalization of the problem of Huygens in that he asked for the path along which a particle, confined to a vertical plane and subject only to the force of gravity, should fall so that its time of transit would be equal to a given function of the vertical distance fallen. Abel proceeded from the relationship between potential and kinetic energy and, integrating with respect to arc length, derived the relation

$$\int_{y=h}^{y=0} \frac{ds}{\sqrt{2g(h - y)}} = T(h),$$
where T is the given function of transit time depending on the vertical distance h. If one sets ds/dy = -u(y), his equation becomes
$$\frac{1}{\sqrt{2g}} \int_0^h \frac{u(y)}{\sqrt{h - y}}\, dy = T(h).$$

We obtain Huygens's problem by setting T(0) = 0 and T(h) = c for h > 0. In modern terminology, this last equation, Abel's equation, is a Volterra integral equation of the first kind for the unknown function u. What is particularly fascinating about this equation, apart from its mathematical character, is that the physical problem leads ab initio to an integral equation rather than to a differential equation that can be rewritten in integral form. Also, the equation seems to be ubiquitous, occurring, for example, in mechanics, geophysics (in the description of earthquake shocks), and tomography. The equation is still an object of lively investigation. Today we write Abel's equation in symbolic form as $\lambda Au = T$, the symbol $A$ denoting the map
$$u \mapsto \int_0^h \frac{u(y)}{\sqrt{h - y}}\, dy,$$

with $\lambda = 1/\sqrt{2g}$. The function $A(h, y) = (h - y)^{-1/2}$ is the kernel of the integral equation. This equation and Volterra equations of the second kind were both first studied systematically by Vito Volterra in 1896. He summarized his work in his book of 1913 [19]. Equations of the second kind have the form
$$(I + \lambda K)f = g,$$
the operator $K$ being generated by

$$f \mapsto \int_a^x K(x, y) f(y)\, dy,$$
with variable upper limit of integration. Although Volterra used the theory of algebraic equations primarily as a guide, he did advocate the use of infinite determinants. Other eminent mathematicians soon became interested in integral equations. Du Bois-Reymond in 1888 was the first to suggest the name "integral equations," whereas Poincaré, in 1894, used integral equations in his study of a three-dimensional version of Liouville's problem concerning the cooling of a bar. Unlike Abel's equation, the equations considered by Poincaré arose from differential equations, in this case the heat equation. In Poincaré's time, derivations of integral equations from differential equations were common. George Green had proposed them in his Essay of 1828. Green had studied Laplace's equation, was the first to pose what we now call the Dirichlet problem, and had derived what we now call Fredholm integral equations of the first kind. It remained for Beer [2] in 1865 to derive an integral equation of the second kind for the Dirichlet problem. Integral equations and partial differential equations have been intimately linked ever since. This was the heritage that Fredholm enjoyed at the turn of the century when he developed his theory of integral equations. Fredholm exploited the analogy with linear algebraic equations in a way that Volterra did not, and he laid the foundation for much of what was to come, particularly the work of Hilbert and his school. The theory flourished in the first half of our century with significant contributions made by Gevrey, Tamarkin, Tonelli, Carleman, Weyl, F. Riesz, Wiener, and Hopf. For those who enjoy history, I recommend the introduction to Corduneanu's book under review.
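As a small numerical aside (my own construction, not taken from any of the books under review), Huygens's tautochrone property can be checked directly from Abel's integral: for the cycloid the slope density $u(y)$ is proportional to $y^{-1/2}$, and the transit time $T(h)$ then comes out independent of the starting height $h$. A sketch in Python, with all names my own:

```python
import math

def abel_transform(u, h, n=20000):
    """Approximate ∫_0^h u(y)/sqrt(h - y) dy.

    The substitution y = h*sin^2(theta) turns the integral into
    ∫_0^{pi/2} u(h*sin^2 θ) * 2*sqrt(h)*sin(θ) dθ, removing the
    square-root singularity at y = h, so a midpoint rule suffices."""
    dtheta = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * dtheta
        y = h * math.sin(theta) ** 2
        total += u(y) * 2.0 * math.sqrt(h) * math.sin(theta) * dtheta
    return total

g = 9.81
# Slope density of a cycloid: u(y) proportional to y^(-1/2).
u = lambda y: 1.0 / math.sqrt(y)

# Transit time T(h) = (1/sqrt(2g)) * ∫_0^h u(y)/sqrt(h - y) dy.
times = [abel_transform(u, h) / math.sqrt(2 * g) for h in (0.1, 0.5, 1.0, 2.0)]
print(times)  # all ≈ pi/sqrt(2g): the transit time is independent of h
```

The computed times agree to many digits, reflecting the identity $\int_0^h y^{-1/2}(h-y)^{-1/2}\,dy = \pi$ for every $h > 0$.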
Is There a Field of Integral Equations?

The subject should not be relegated to mere history; it is quick, not dead, as a glance at Mathematical Reviews or Zentralblatt will confirm. Even its oldest manifestations retain vitality, as the current interest in Abel's equation shows [5]. The application to population dynamics, envisioned by Volterra and d'Ancona in the late 1920s, is another example. The books reviewed here are full of current and classical applications: thermostatic regulators, acoustic scattering, composite materials, population genetics, automatic control, input-output systems. Yet it is a rare undergraduate who has ever encountered the subject. I maintain that most graduate students have only a passing acquaintance with integral equations, one gleaned either from examples of compact operators in a course in functional analysis or from applications, numerical or otherwise, in a course in partial differential equations or in mathematical physics. A re-
cent article in the SIAM News [11], which discusses the training of the next generation of applied mathematicians, fails to suggest integral equations as a study topic. As the classical tools of applied mathematics the author lists complex variables, linear algebra, advanced calculus, ordinary and partial differential equations, perturbation, and asymptotic methods. He mentions modern tools as well, but absent from either list is integral equations. This omission, this gap in the knowledge of our students, is simultaneously understandable and lamentable. As a result of the basic simplicity of textbook models and the weak background of students in physical applications, most lecturers present their applications in the language of differential equations. There may even be a psychological barrier. I know colleagues who in the 1950s were exposed to integral equations, but whose recollections are of massive systems of linear algebraic equations and large, complicated Fredholm determinants. Their recollections are not happy ones. There is also the understandable view that modern functional analysis is so rich, and the compact operators form such a small chapter, that one can afford to mention concrete realizations of the operators only in passing. Nonlinear matters, e.g., monotone operators and Hammerstein's equation, are best left to advanced seminars. Perhaps the appearance of these books, all "based on lectures," heralds a change from curricular neglect.

The curricular neglect is not for the want of suitable texts. When I began to teach the subject, the four books of which I had some knowledge were Tricomi [18], Hochstadt [7], Muskhelishvili [13], and Krasnosel'skii [9]. Add to that list the book of Smithies [16]. These were the books called to my attention as being "top notch." Those of Muskhelishvili and Krasnosel'skii were monographs suitable for research students; the others were texts. None of these books has lost its luster. The books reviewed here are all described by their authors as textbooks. As one might expect, there is considerable overlap, but the books differ in style, approach, audience of choice, and emphasis on topics. They fit at different levels and even into different branches of the mathematics curriculum. Of the four, the text of Pipkin is the most elementary. The author intends the book as an advanced undergraduate or perhaps a beginning graduate text. Its greatest strength--as is appropriate for such an audience--is the breadth of examples and applications. Pipkin emphasizes analogies with linear algebra, and in the latter half of the book he uses complex variables. For example,
to work through Chapter 9 on principal value integrals, one must be proficient at contour integration. Pipkin's discussion is somewhat informal, but I think appropriately so. He offers us a "techniques" book, which allows the great breadth of topics, though it has the disadvantage that, without an experienced instructor, students will find this book difficult. The first five chapters present the basic Fredholm theory, Hilbert-Schmidt equations, and Volterra equations. In particular, the fourth chapter contains a brief account of the Laplace transform. After a chapter entitled Reciprocal Kernels that deals with integral representations of solutions of convolution equations of the first kind, Pipkin introduces Wiener-Hopf equations (suitable for problems defined only on a half-space) and principal value integrals. He devotes the final three chapters to integral equations whose kernels are of the form $K(x, y) = (x - y)^{-1}$. Woven into the development throughout are applications involving fiber-reinforced materials, input-output systems, lateral vision, airfoils, and viscoelastic materials. One omission, curious in a modern text, is a discussion of numerical methods. However, one cannot cover everything. There are many good exercises emphasizing computation, with some answers provided in an appendix. The author provides precise statements of theorems but few proofs. I recommend the book to experienced instructors with a good knowledge of integral equations who, along with their students (with a seasoning of engineers), are interested in applications. The book of Porter and Stirling also will require an experienced instructor, but for a different reason. The authors direct the book toward final-year Honors Mathematics students or M.Sc. students studying in the British educational system. Unfortunately, the book will not fit into a niche in the standard curricula of U.S. graduate schools.
Though the benefits might be considerable, an instructor in the United States should exercise great care in using the book. What are the problems? Problems of necessary background, mostly. Few beginning graduate students in the United States have sufficient acquaintance with complex variables and linear algebra; few have the background sufficient for this book, though in all other respects the book is really introductory. The authors thoughtfully supply two appendices, one on the facts of Lebesgue integration through Fubini's theorem, and one on the facts about operators on Hilbert space, which, with certain sections of the text, provide a whirlwind tour of the spectral theorem for compact operators. The authors admit from the start that they are addressing students with a first-level graduate course in functional analysis. Those with such exposure will find the text an excellent way to gain acquaintance with integral equations and will enjoy a review of the functional analysis recently learned. At my suggestion, a colleague used this book in just this way with great success, no doubt partly due to the large number of exercises.
The book begins with two chapters of motivation presenting problems in ordinary and partial differential equations and showing how integral equations arise in each. Here we meet, for example, Tricomi's equation, Laplace's equation (as a special case of the former), and Sturm-Liouville problems. There follow chapters on Fredholm theory, compact operators, the spectral theorem for these operators in the self-adjoint case, and one on positive operators. The emphasis is on eigenfunction expansions, and this naturally leads to a discussion of the topic of approximation methods for eigenvalues and eigenfunctions. The authors close with chapters on variational techniques and Galerkin's method, and on singular equations and the Hilbert transform. This last chapter is particularly well done. There are nice things scattered throughout the text. For example, Porter and Stirling present a variant of Galerkin's method, called the Iterated Galerkin Method, which apparently goes back to a 1975 article of Sloan, Burn, and Datyner [15]. This variant yields better approximations to the solution of second-kind equations. The method can be explained simply as follows: One replaces the equation $(I - \lambda K)x = f$ with a fixed point equation $x = Tx$ with the affine map $x \mapsto \lambda Kx + f$. A solution will then lie in the range of $T$. Although the usual Galerkin procedure computes an approximate solution $x_n$ of the fixed point equation, the iterated Galerkin procedure replaces this approximate solution $x_n$ with $\hat{x}_n = Tx_n$, which is both an element of $R(T)$ and of the fiber over $x_n$. Examples show the increased accuracy obtained. The table on page 282 compares the exact solution, the Galerkin approximation, and the iterated Galerkin approximation for a specific example. The cautious reader will note that the last two rows of the table on page 282 are interchanged.
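The gain from the iterated procedure is easy to observe numerically. The following sketch (my own toy construction, not an example from Porter and Stirling) applies piecewise-constant Galerkin to a second-kind equation $x = f + \lambda Kx$ with kernel $K(s,t) = s + t$ and a manufactured exact solution, then compares the Galerkin error with the iterated (Sloan) error:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fct = M[r][col] / M[col][col]
            for c2 in range(col, n + 1):
                M[r][c2] -= fct * M[col][c2]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c2] * x[c2] for c2 in range(r + 1, n))) / M[r][r]
    return x

lam = 0.5
x_exact = lambda s: math.exp(s)                 # manufactured solution
# f is chosen so that x_exact solves x = f + lam*Kx with K(s,t) = s + t:
# (K x_exact)(s) = ∫_0^1 (s+t) e^t dt = s(e-1) + 1.
f = lambda s: math.exp(s) - lam * (s * (math.e - 1) + 1.0)

n = 8                                           # piecewise-constant basis on n cells
w = 1.0 / n
mid = [(i + 0.5) * w for i in range(n)]

# Galerkin system: <phi_i, x_n> - lam*<phi_i, K x_n> = <phi_i, f>.
# For K(s,t) = s + t, <phi_i, K phi_j> = w^2 * (mid[i] + mid[j]) exactly.
A = [[(w if i == j else 0.0) - lam * w * w * (mid[i] + mid[j]) for j in range(n)]
     for i in range(n)]
b = [w * f(mid[i]) for i in range(n)]           # midpoint rule for <phi_i, f>
c = solve(A, b)                                 # coefficients of x_n

def x_galerkin(s):
    return c[min(int(s * n), n - 1)]

def x_iterated(s):                              # Sloan's iterate: x_hat = f + lam*K x_n
    return f(s) + lam * sum(cj * w * (s + mj) for cj, mj in zip(c, mid))

pts = [i / 100.0 for i in range(101)]
err_g = max(abs(x_galerkin(s) - x_exact(s)) for s in pts)
err_i = max(abs(x_iterated(s) - x_exact(s)) for s in pts)
print(err_g, err_i)  # the iterated error is markedly smaller
```

The iterate costs one extra application of $K$ to the already-computed $x_n$, yet its sup-norm error is an order of magnitude below the Galerkin error here, the superconvergence the authors' page-282 table illustrates.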
I really don't want to argue matters of taste, but I find it curious that, even though the authors emphasize the subject's origins in differential equations, they do not include an account of potential theory. One does not find single- or double-layer potentials or any systematic account of Laplace's equation. Considering the role played by potential theory in the history of the subject, the authors have missed a great opportunity.
Applying Functional Analysis

Kress doesn't miss the opportunity. He devotes a chapter to the subject and another to potential-theoretic methods for the heat equation. He uses similar methods in the final chapter of his book on inverse acoustic scattering. As the author remarks in the preface, his book is based on lectures given in Göttingen in which he presents not only theory, but also applications and numerical methods. There is no question here about the need of a firm foundation in functional analysis. Kress employs it heavily throughout, including in the error and convergence analysis of numerical methods. Most American students
will need a course in functional analysis before they achieve any facility with the ideas of compact operators, dual systems in Banach spaces, and Sobolev spaces. On the other hand, if you wish to teach a course that combines functional analysis and integral equations (not a bad combination), then this book would be an excellent choice. For someone with a grounding in functional analysis, this book provides a well-constructed, self-contained presentation. It is clear, appropriate details are filled in, and a connected thread runs throughout. This thread is, of course, the story of how linear boundary value problems in partial differential equations can be approached using integral equations. Fortunately, the author does not allow the theme to overwhelm the development: The subject remains linear integral equations. Kress's approach to the Fredholm alternative and the Riesz-Schauder theory, using general dual pairings of Banach spaces, deserves mention, as does his treatment, in the same setting, of normal solvability. To my knowledge, this approach has appeared only in the books of Jörgens [8] and Heuser [6]. Here, the use of more general pairings--rather than that of a Banach space with its dual--allows a proof of the Fredholm theorems which avoids the use of the Hahn-Banach Theorem, and therefore of Zorn's Lemma. The tactic of circumventing the axiom of choice is familiar in analysis, although perhaps not so much pursued in the recent past. My opinion, which I pass on to my students, is that such niceties, if they are the only rationale, hardly justify the effort for an applied mathematician. In Kress's book, however, it is not just a nicety; the rationale is much more practical. If one studies integral equations with square-integrable kernels, say in the space $L^2[0, 1]$, then the adjoint equation is again an integral equation in $L^2[0, 1]$. However, in case the function space is $C[0, 1]$, the adjoint equation lives in an inherently different dual space.
The use of dual systems avoids the even more difficult question of how to interpret the adjoint equation if we have no concrete knowledge of the dual space. Kress's treatment of Sobolev spaces is straightforward and tailored to the numerical methods presented later in the text. In the final chapters, Kress carefully works out the material on ill-posed problems and their regularization; in the final chapter, on inverse scattering theory, we can see all the theory in action. What really sets this book apart, and makes it so valuable in the present climate, is its emphasis on numerical methods. These are worked out in detail in five chapters. Although there are books available (e.g., [4]) treating such methods, this is the only general text I know of that contains this material. I can well imagine the book becoming a standard text for second-year graduate students. My suggestion to Springer-Verlag is that it persuade the author to include a diskette containing basic codes. They have done that with other texts. It would make a stunning addition to Kress's book.
Altogether different from the previous books is the one by Corduneanu. After the historical introduction mentioned above, written with G. Bantam, it discusses briefly some applications in which integral equations arise and summarizes the basic theory of linear Volterra and Fredholm equations. The author reveals the true nature of the book in the last section of the first chapter, devoted to nonlinear equations of Hammerstein type. He articulates the hope that the "topics featured in the book will convince the reader that integral equations are a very useful and successful tool in contemporary research...." I think he has succeeded admirably with his emphasis on nonlinear problems and equations of Volterra type, an invaluable bibliography, and sets of bibliographical notes. Apparently only the few of us working in areas like control theory or dynamical systems know that Tonelli in 1928 [17] introduced an extensive generalization of Volterra equations, what are now called abstract Volterra operators. In the engineering control literature, the term used is "causal operator." Such an operator $V$ defined on some function space $E([t_0, T], \mathbb{R})$ is characterized by the property that if $x(s) = y(s)$ for $t_0 \le s \le t \le T$, then $(Vx)(t) = (Vy)(t)$. I first learned of such operators from a close friend and fellow graduate student, an electrical engineer, who wrote his Ph.D. thesis on nonlinear feedback systems with causal operators. Working on my dissertation, I was naturally excited to find the same idea, and the reference to Tonelli, in the book of Oğuztöreli [14]. The idea shows up later in Warga's book [20] under the name p-hereditary operator, and most recently in Krasnosel'skii and Pokrovskii [10] as a physically realizable transducer.
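The defining property of a causal operator is easy to illustrate. In the sketch below (my own toy example, not from Corduneanu's book), $V$ is the running integral $(Vx)(t) = \int_0^t x(s)\,ds$; two inputs that agree on $[0, 1]$ produce outputs that agree there, and separate only afterwards:

```python
def running_integral(x, T, dt=0.001):
    """A causal (abstract Volterra) operator: (Vx)(t) = ∫_0^t x(s) ds,
    approximated on a uniform grid by left Riemann sums."""
    n = int(round(T / dt))
    out, acc = [], 0.0
    for k in range(n):
        out.append(acc)            # out[k] ≈ (Vx)(k*dt)
        acc += x(k * dt) * dt
    return out

# Two inputs that agree on [0, 1] but differ afterwards:
x = lambda s: s * s
y = lambda s: s * s if s <= 1.0 else 42.0

vx = running_integral(x, 2.0)
vy = running_integral(y, 2.0)
half = len(vx) // 2                # grid index of t = 1
print(vx[:half] == vy[:half])      # True: outputs agree wherever inputs have agreed
print(vx[half:] == vy[half:])      # False: outputs separate once the inputs do
```

A non-causal operator, such as one involving $x(T)$ or a convolution over all of $[0, T]$, would fail the first check: its output before $t = 1$ would already feel the difference in the inputs.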
As Corduneanu so well documents, the use of Volterra operators can be a unifying tool for a range of equations including integrodifferential equations of Volterra type, functional differential equations, and linear and nonlinear Volterra integral equations in one or several variables. As the last three references show, they are important for the description and analysis of control systems that involve hereditary dependence. It is fortunate that Corduneanu treats these ideas and amply illustrates their usefulness. I contend that much could be done to develop a systematic approach to hereditary systems by using Volterra operators. To my knowledge, such a synthesis has yet to be accomplished. Also of growing applied interest are equations with multivalued right-hand sides, sometimes called contingent equations. Differential inclusions, i.e., relations of the form $\dot{x}(t) \in Q(x, t)$, were known in the 1930s. They appeared explicitly in the 1937 paper of McShane [12], which anticipated many of Filippov's results. A systematic development of the subject, driven largely by the demands of control theory, occurred only in the 1960s. Two recent references are [1, 3]. Of still more recent vintage are integral inclusions. Corduneanu gives us a brief introduction to the subject with a treatment of Volterra-
Hammerstein integral inclusions. His discussion, and the material in the additional references, affords an entry into what should become a fertile field of research. Considering that much of the motivation for the development of the theory of differential inclusions was their occurrence in control theory, it is a pity that Corduneanu does not illustrate their use in his discussion of control problems monitored by Volterra equations. Much is to be discovered in this field of applications, particularly in the investigation of multidimensional problems, and those who read this book will find it easy to identify promising research problems.
References

1. J. P. Aubin and A. Cellina, Differential Inclusions, Springer-Verlag, Berlin, Heidelberg, New York, 1984.
2. A. Beer, Einleitung in die Electrostatik, die Lehre vom Magnetismus und die Elektrodynamik, Vieweg, Braunschweig, 1865.
3. K. Deimling, Multivalued Differential Equations, de Gruyter, Berlin, New York, 1992.
4. L. M. Delves and J. L. Mohamed, Computational Methods for Integral Equations, Cambridge University Press, Cambridge, New York, 1985.
5. R. Gorenflo, Abel Integral Equations: Analysis and Applications, Springer-Verlag, Berlin, New York, 1991.
6. H. Heuser, Funktionalanalysis, Teubner, Stuttgart, 1975.
7. H. Hochstadt, Integral Equations, Wiley, New York, 1973.
8. K. Jörgens, Linear Integral Operators (G. Roach, trans.), Pitman, London, 1982.
9. M. A. Krasnosel'skii, Topological Methods in the Theory of Nonlinear Integral Equations, Pergamon Press, Oxford, New York, 1964.
10. M. A. Krasnosel'skii and A. V. Pokrovskii, Systems with Hysteresis, Springer-Verlag, Berlin, Heidelberg, New York, 1989.
11. C. D. Levermore, Training a new generation of applied math faculty, SIAM News 25(6) (1992), 17.
12. E. J. McShane, A navigation problem in the calculus of variations, Amer. J. Math. 59 (1937), 327-334.
13. N. I. Muskhelishvili, Singular Integral Equations; Boundary Problems of Function Theory and Their Application to Mathematical Physics, P. Noordhoff, Groningen, The Netherlands, 1953.
14. M. N. Oğuztöreli, Time-Lag Control Systems, Academic Press, New York, London, 1966.
15. I. Sloan, B. Burn, and N. Datyner, A new approach to the numerical solution of integral equations, J. Comp. Physics 18 (1975), 92-103.
16. F. Smithies, Integral Equations, Cambridge University Press, Cambridge, 1958.
17. L. Tonelli, Sulle equazioni funzionali di Volterra, Bull. Calcutta Math. Soc. 20 (1929), 31-48; Opere Scelte 4, 198-212, Edizioni Cremonese, Rome, 1960.
18. F. G. Tricomi, Integral Equations, Interscience Publishers, New York, 1957.
19. V.
Department of Mathematical Sciences
University of Delaware
Newark, DE 19716 USA
Exact Constants in Approximation Theory by N. Korneichuk (translated by K. Ivanov). Cambridge: Cambridge University Press, 1991. xii + 452 pp. US $89.50.

Reviewed by T. M. Mills

In teaching undergraduate calculus, we try to ensure that our students learn the qualitative fact that

$$S := \sum_{n=1}^{\infty} n^{-2} < \infty.$$

It is easy to prove this by comparing partial sums with integrals. If we note that, for n > 1,
$$\frac{1}{n^2} < \frac{1}{n(n-1)} = \frac{1}{n-1} - \frac{1}{n},$$
then, by estimating the partial sums again, we can prove the quantitative fact that $S < 2$. However, I never cease to enjoy telling my students that $S = \pi^2/6$. We have found an "exact constant." If you agree with me that there is something final and something beautiful about the statement that $S = \pi^2/6$, then you will appreciate the subject matter of this new book by Korneichuk. It deals with best-constant problems in the context of approximation theory. Specifically, approximation theory is the study of methods for approximating functions by using simple
functions such as polynomials, or splines, or rational functions. In this study, one is concerned with estimating errors that are committed by approximation methods. Estimating these errors as precisely as possible often comes down to finding a best constant in some inequality. This usually gives us the last word in estimating an error. Exact Constants in Approximation Theory deals with finding these constants across a broad range of problems in approximation theory. To give readers some flavour of both the subject and the book, I have described below a typical problem. Many mathematicians think that approximation theory began and ended with Weierstrass's theorem; perhaps the description below will help to dispel this myth.
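The two estimates mentioned above are easy to check numerically; this small script (mine, not from the book) computes partial sums of $S$, which stay below the telescoping bound 2 and creep toward $\pi^2/6$:

```python
import math

def partial_sum(N):
    """Partial sum of S = sum of 1/n^2 for n = 1..N."""
    return sum(1.0 / (n * n) for n in range(1, N + 1))

# Telescoping bound: for n >= 2, 1/n^2 < 1/(n-1) - 1/n, so every
# partial sum is below 1 + (1 - 1/N) < 2.
for N in (10, 1000, 100000):
    print(N, partial_sum(N))
print(math.pi ** 2 / 6)  # the exact constant 1.6449..., approached from below
```

The tail of the series behaves like $1/N$, so even a hundred thousand terms pin down only the first few digits; the charm of $\pi^2/6$ is precisely that it is exact.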
Typical Problem

Let

$$X = C_{2\pi} = \{f : \mathbb{R} \to \mathbb{R} \mid f \text{ continuous everywhere, } 2\pi\text{-periodic}\}$$
and $V_n$ be the set of trigonometric polynomials of order $n - 1$ with real coefficients. When endowed with the uniform norm $\|f\|_X = \|f\|_\infty = \sup\{|f(x)| : x \in \mathbb{R}\}$, $X$ becomes a Banach space and $V_n$ is a finite-dimensional subspace of $X$. Obviously, $V_1 \subset V_2 \subset V_3 \subset \cdots$ and, by Weierstrass's approximation theorem, $\bigcup_{n=1}^{\infty} V_n$ is dense in $X$. One of the principal developments in approximation theory is the study of best approximation, which is an attempt to quantify Weierstrass's theorem. Dunham Jackson's doctoral thesis [2] is the starting point of this development. Let $f \in X$. Then, the best approximation to $f$ by elements of $V_n$ is defined to be $E_n(f) = \inf\{\|f - \tau\|_X : \tau \in V_n\}$. Jackson estimated $E_n(f)$ in terms of $\omega(f; \cdot)$, the modulus of continuity of $f$. This is defined by

$$\omega(f; \delta) = \sup\{|f(x) - f(y)| : |x - y| \le \delta;\ x, y \in \mathbb{R}\}.$$
If $f : \mathbb{R} \to \mathbb{R}$ is a $2\pi$-periodic function, then the following statements are equivalent:

• $f \in C_{2\pi}$;
• $f$ is uniformly continuous on $\mathbb{R}$;
• $\lim_{\delta \to 0^+} \omega(f; \delta) = 0$.

Jackson proved the following result.
THEOREM 1 (D. Jackson). There is a finite constant $c$ such that

$$E_n(f) \le c\,\omega(f; \pi/n) \quad \text{for all } f \in X,\ n \in \mathbb{N}. \tag{1}$$

Figure 1. $E_n(f)$ is the best approximation to $f$ by elements of $V_n$.
So as $n \to \infty$ we have $1/n \to 0$, $\omega(f; \pi/n) \to 0$, and, hence, $E_n(f) \to 0$. Thus, Jackson's theorem is a quantitative version of Weierstrass's theorem. For example, suppose that $f$ is not only continuous but has a bounded derivative. Then, for some $M > 0$, $|f'(x)| \le M$ for all $x \in \mathbb{R}$. Hence, by the mean value theorem,
$$\omega(f; \delta) = \sup\{|f(x) - f(y)| : |x - y| \le \delta;\ x, y \in \mathbb{R}\} \le M\delta,$$

and so Jackson's theorem implies that, for $n = 1, 2, \ldots$,
$$E_n(f) \le cM/n.$$

We can now ask whether inequality (1) can be improved. Jackson showed that there is a function $f_0 \in X$ and a positive constant $c_0$ such that

$$E_n(f_0) > c_0\,\omega(f_0; \pi/n) \quad (n = 1, 2, 3, \ldots).$$
Hence, we cannot replace $1/n$ in Equation (1) by a smaller function of $n$, such as $1/n^2$ for example. It remains to find the smallest $c$ that can be used in inequality (1). This is a typical problem in the book under review. The solution to the problem is given in the following result (p. 270, Theorem 6.2.2):

THEOREM 2. The assertion
$$E_n(f) \le c\,\omega(f; \pi/n) \quad \text{for all } f \in X,\ n \in \mathbb{N}$$

is valid for $c = 1$ but not for any $c < 1$.

Thus, we have solved a best-constant problem for Jackson's theorem. Jackson's theorem has been a rich source of ideas for generalisation. One of my favourite pieces of reading is the dissertation by James Case [1], which is an exposition of the generalisations of Jackson's theorem. Some entrepreneurial mathematician should organise a conference on Jackson's Theorem: The First Hundred Years, to be held in 2011 in some exotic location. Another worthwhile project would be to publish the collected works of Dunham Jackson. His papers (listed in [3]) are beautiful to read, and they provide fine examples of style and clarity for graduate students to emulate. Readers of the Mathematical Intelligencer enjoy photos: Does anyone out there have a photo of Dunham Jackson?

Other Problems

The problem above has certain key ingredients: the space $X$, the norm $\|\cdot\|_X$, and the increasing sequence of finite-dimensional subspaces $V_1 \subset V_2 \subset \cdots$.
We can create new settings for our best-constant problem by varying these ingredients. For example, we could let $X$ be

1. $L_p$, the space of $2\pi$-periodic functions whose $p$th power is Lebesgue integrable over $[0, 2\pi)$;
2. $C_{2\pi}^{(m)} = \{f \in C_{2\pi} \mid f^{(m)} \in C_{2\pi}\}$, the space of $2\pi$-periodic functions which are $m$ times continuously differentiable;
3. $C([a, b])$, the space of continuous real-valued functions defined on $[a, b]$ (with no restrictions about periodicity).

Also we could let $V_n$ be

1. a finite-dimensional space of splines, or
2. [in the case of $X = C([a, b])$ mentioned above] the space of ordinary polynomials of degree at most $n$.

As you vary the scenario of the problem, you generate new best-constant problems to be solved. The author does this throughout the book.
General Remarks

1. Title: I do not like the title, because all constants are exact. "Best Constants in Approximation Theory" would be a better title, as most English-speaking mathematicians would prefer the phrase "best constant" to "exact constant."
2. Introduction: The author does not give an introduction to the subject matter which would give a nonspecialist reader some idea of what the book is about. In fact, he does not define the topic of the book at all, unless you accept his claim that the title is self-explanatory. The Preface suggests that the material may be useful to "the applied scientist who is looking for mathematical approaches to the solution of practical problems...." As a mathematician who works side-by-side with applied scientists every day, I reckon that most "applied scientists" could not read this book. The book will be useful to mathematicians who work in approximation theory and are interested in best-constant questions. Perhaps some people who work in numerical analysis or computational mathematics also may be interested in it.
3. Price: Like all books in this series, this book is expensive. Graduate students could not afford it; academic mathematicians who could afford it may think twice about buying it; many libraries will baulk at spending this amount of money without strong justification from faculty.
4. Printing: The book is up to the usual high standard set by Cambridge University Press for the series.
5. Exercises: There are 90 exercises at the ends of chapters. They appear to be fairly difficult (at least to this reviewer!), as many of them are based on results in research papers. This is common in monographs from the (former) USSR.
6. Translation: The quality of translation is satisfactory, particularly considering that the translation was done by Professor K. Ivanov (Bulgarian Academy of Sciences), whose native language is not English. One sentence, though, tickled my fancy: "This fact is far from trivial unlike the other results in this section" (p. 285). However, it is irritating to see that certain book references have not been translated. For example, it would be more helpful to refer to an English version of a book where it exists rather than the Russian version. Of the 45 Russian books given in the bibliography, 19 have been published in English. Many were originally published in English: some were even published by Cambridge University Press!
7. Bibliography: The bibliography is extensive and heavily biased towards Soviet work. This may not be a very unfair bias, as Soviet mathematicians have been the most active in the field of best constants for a long time; but the bias is a little unfair. Still, an extensive guide to the Soviet literature is useful to non-Soviet best-constant hunters.
8. Index: Brief.
9. Notation: I often find the notation in this area of mathematics difficult to remember, especially in the description of function classes. For example, we are told on p. 212 that it is clear that the class W^m KH_1[a, b], which, incidentally, is the same as the class KW^{m+1}[a, b], is contained in W^m K^∞[a, b]. This problem is aggravated by the fact that mathematicians do not read mathematics books from cover to cover. We tend to dive in at the middle.
References
1. James Robert Case, Extensions and Generalizations of Jackson's Theorem, Ph.D. Dissertation, Syracuse University, 1970.
2. Dunham Jackson, Über die Genauigkeit der Annäherung stetiger Funktionen durch ganze rationale Funktionen gegebenen Grades und trigonometrische Summen gegebener Ordnung, Gekrönte Preisschrift und Inaugural-Dissertation, Göttingen University, 1911.
3. William L. Hart, Dunham Jackson 1888-1946, Bull. Amer. Math. Soc. 54 (1948), 847-860.
Department of Mathematics La Trobe University College of Northern Victoria PO Box 199, Bendigo, Victoria 3550 Australia
Leonardo, Special Issue, Visual Mathematics Guest Editor: Michele Emmer Journal of the International Society for the Arts, Sciences, and Technology Volume 25, Numbers 3 and 4 New York: Pergamon Press (1992) Reviewed by Harold L. Dorwart

This will be an unconventional review of an unconventional but beautifully produced publication that is worthy of high praise. First some facts, by way of stage setting. The following one-sentence biography appears in Webster's Dictionary: "Vinci, Leonardo da, 1452-1519; Italian painter, sculptor, architect, scientist, musician, and natural philosopher." Therefore it is easy to see why Frank J. Malina (1912-1981), who was an "aeronautical engineer, pioneer in rocketry, research administrator, promoter of international cooperation, artist, and editor," chose the name Leonardo when, in 1967, he founded a "professional journal for working artists to write about their own work." Concerning the founding publisher, I. R. Maxwell (1923-1991), who was then chair of Pergamon Press, it has been stated that "his vision of the future of publishing was instrumental to the establishment of contemporary scientific and scholarly publications and resulted in a major contribution to the development of modern science." The International Society for the Arts, Sciences, and Technology was founded in 1981 as a "non-profit organization that seeks to encourage the interaction of art, science, and technology." Thus it is not surprising that the bimonthly publication Leonardo was chosen as the official journal of the organization, which would then become known as Leonardo/ISAST. Headquarters were in Europe until recently, when the organization moved to San Francisco.¹ Now that the stage is set, it is time for some general comments concerning the publication to be reviewed.
Visual Mathematics is apparently the most elaborate of a series of special issues and of theme packs "of frequently requested articles that have appeared in the Journal over the years." Its face size is the same as that of The Intelligencer, but its thickness is that of a book. It weighs one pound, six ounces. It was printed in Great Britain on very good, heavy paper, the binding is excellent, and the cover is strong enough to hold up under considerable use. The price to a non-member of the Society or to an institution/library -- not mentioned on the cover but discovered in the classified advertisements section -- is $45. The Society "gratefully acknowledges donations from Research on Demand, the Malina Trust and CRSS Architects, Inc.," for "partial support of this volume."

¹ Beginning in 1993, Leonardo was published by the MIT Press. -- Editor's Note.
The editors have organized the contributions under 7 headings: Geometry, Computer Graphics and Geometry, Computer Graphics and Art, Topology, Tessellations, Perspective, Art and Mathematics. The difficult part of the review now has to be faced! How do you do justice to a guest editor and to 29 individuals who have written an Introduction and 22 articles, when your feelings -- after thumbing through the book a few times -- are those of a small child of long ago who was given some money and told to spend it at a candy store? After gazing for some minutes at the plates of delectables, the child was completely bewildered. For the first few days after receiving the publication, this reviewer merely looked (and looked) at the 179 black-and-white and 28 beautifully colored figures, all with carefully written captions, and then read the helpful abstracts that preceded each article. Approximately half the authors describe themselves as mathematicians or as working in closely related fields. The other half are sculptors, composers, writers, etc. Of the 30 article writers, exactly half give the USA as their home base, and the others are from Italy, United Kingdom, Germany, Switzerland, Canada, and Chile. The articles are heavily referenced; it is interesting to look at some of the bibliographic entries -- in particular, at the books that the present authors have considered important in helping their ideas come to fruition. Among the older reference books, one finds D'Arcy Thompson's On Growth and Form, J. Hambidge's The Elements of Dynamic Symmetry, and H. M. Cundy and A. P. Rollett's Mathematical Models. Hilbert and Cohn-Vossen's Geometry and the Imagination comes up frequently, as do the early and later works of Coxeter, including Escher and the Visual Representation of Mathematical Ideas, of which Coxeter, M. Emmer, R. Penrose, and M. L. Teubner are editors.
Also, there are references to modern books on fashionable subjects, such as Mandelbrot's The Fractal Geometry of Nature, Peitgen and Richter's The Beauty of Fractals, Peitgen and Saupe's The Science of Fractals, and Gleick's Chaos: Making a New Science. In his Introduction, entitled "Visual Mathematics: a New Era?", guest editor Michele Emmer mentions one of his favorite paintings, "The Flagellation," by the Italian Renaissance artist Piero della Francesca, and then quotes Morris Kline's description of this artist as a mathematician (from Kline's Mathematics in Western Culture): The artist who perfected the science of perspective was Piero della Francesca. This highly intellectual painter had a passion for geometry and planned all his work mathematically to the last detail. The placement of each figure was calculated so as to be correct in relation to other figures and to the organization of the painting as a whole. He even used geometric forms for parts of the body and objects of dress and he loved curved surfaces and solidity. He later quotes from The Mathematical Experience, by P. J. Davis and R. Hersh, concerning the use of high-speed computers in mathematics, and briefly describes
Figure 1. An impossible figure, the tribar, drawn in perspective.
the seminal work at Brown University (and elsewhere) in computer graphics, concluding, "it is time to propose a comparison between the research of mathematicians and the works of artists, in order to gain an understanding of what interesting results can be expected from the field of visual mathematics." I will select just a few of the articles and describe them. In "Portraits of a Family of Complex Polytopes," H. S. M. Coxeter and G. C. Shephard exploit recent developments in computer graphics to represent regular complex polytopes on the real plane. The intricate and very beautiful figures include, for example, four projections of the Möbius-Kantor polygon 3{3}3 showing how its appearance gradually changes as it is rotated. Roger Penrose, in "On the Cohomology of Impossible Figures," explores the close relationship between certain types of impossible figures, including the tribar (see Fig. 1), and the mathematical idea of cohomology. He first gives a fascinating definition of an impossible figure: It is a two-dimensional representation meant to be reconstructed by the observer as a three-dimensional figure but in which it is impossible for the viewer to decide the distance from his mind's eye to any given point in the reconstruction. In the abstract, such figures are said to possess an ambiguity group. Weirdly, an impossible figure may be decomposed into possible figures, as is the case with the tribar, which shows that the lack of ambiguity can be a local phenomenon (see Fig. 2). In "On the Edge of Science: The Role of the Artist's Intuition in Science," the remarkable sculptor Charles O. Perry describes his highly geometric sculptures, and the intuitive processes that led to their creation. He credits D'Arcy W. Thompson's On Growth and Form as being his bridge from real life to art.
The Physical Basis of the Direction of Time by H. D. Zeh New York: Springer-Verlag, 1992. x + 188 pp. US $39.95 (paper). ISBN 0-387-54884-X
Reviewed by John C. Baez
Figure 2. The tribar shown pieced together out of overlapping smaller drawings, each of which depicts a possible structure. In "Visualization in Art and Science," the educator Harriet E. Brisson addresses the puzzle of how humans obtain a cognitive grasp of what in reality are not visualizable structures, such as four-dimensional objects or space-filling curves. Obviously to some extent we can learn to do this, because we can utilize in mathematical proofs the salient properties of such objects. Recent advances in computer graphics and sculptural forms have shed some light on the process by means of which we create internal models for such objects. In "Visualization of Soap Bubble Geometries," Fred Almgren and John Sullivan discuss startling new techniques for generating computer graphics of bubble clusters. The software, based on Fresnel's equations, produces both the colored interference pattern and the Fresnel effect of varying transparency. Several articles deal with patterns and tilings. Branko Grünbaum and G. C. Shephard, in "Interlace Patterns in Islamic and Moorish Art," explore the rather deep mathematical structures inherent in these very old Moorish decorative patterns. Instead of discussing all the many fascinating articles in detail, which is what the writer would like to do, he will tell you how the child solved the candy selection problem. After counting pennies and seeing that there was a one-to-one matching with candy containers, the child announced, "I'll take a penny's worth of each kind." I hope the reader does the same, and purchases this wonderful journal. Happy reading!

17 Cobble Road Salisbury, CT 06068 USA
In this book Zeh brings some clarity to a very murky problem: Why are the future and the past so different? One need only read the physics journals to see that this is a multifaceted and very real issue that still vexes the experts. To understand its seriousness, it is first necessary to see how similar the future and the past are. They don't seem so in everyday life: We remember the past but not the future, our actions affect the future but not the past, and so on. From this standpoint it is really surprising that the dynamical laws of physics -- with one small exception -- seem symmetrical under time reversal. Before we go further, it's important to get a clear idea of what time-reversal symmetry really means. At the simplest level we may think of the laws of physics as equations involving a time variable t, and say that they are symmetric under time reversal if, given any solution, and making the substitution t → -t, we obtain another solution. To take an easy example, consider a point particle of mass m in three-dimensional space with no forces acting on it. If we write its position as a function of time, r(t), Newton's second law says that
m d²r/dt² = 0.
If r(t) satisfies this equation, then so does r(-t). Typically, the laws are more complicated, and one may have to be more careful in defining time-reversal symmetry. Different laws of physics involve very different mathematical structures, but they are usually separated into two components, the "kinematics" and the "dynamics." The kinematics consists of the description of the set S of states that the system can be in at any given time. For example, in classical mechanics, we can specify the state of a point particle in three-dimensional space by giving its position r and its velocity v, so S = ℝ³ × ℝ³. The dynamics tells how states change with time. In a theory where we can predict both the future and the past from the present, and where there are no time-dependent external influences, we usually describe dynamics with a family of maps U(t): S → S, where t ∈ ℝ. If the state of the system is ψ at some time t₀, then the state is U(t)ψ at time t₀ + t. The maps {U(t)} should form a "one-parameter group," that is,

U(t) U(s) = U(t + s) for all t, s ∈ ℝ.

We say that the physical system given by (S, U) has "time-reversal symmetry" if there is a map T: S → S, called time reversal, such that

U(-t) = T⁻¹ U(t) T.
For example, our point particle with no forces on it moves with constant velocity, so U(t)(r, v) = (r + tv, v). It's easy to check that U is a one-parameter group and that the system has time-reversal symmetry, where T(r, v) = (r, -v). Note, by the way, that time-reversal symmetry in the sense described is different from requiring that a given state ψ be invariant under time reversal: Tψ = ψ. Our world is evidently in a state that is not even approximately invariant under time reversal; there are many processes going on whose time-reversed versions never seem to happen. This is logically independent of the question of whether the dynamical laws of physics admit time-reversal symmetry. Keeping this distinction straight is crucial for thinking clearly about the direction of time. Even people who claim to understand the distinction often slip. When reading about time-reversal symmetry, I become infuriated when authors confuse symmetry of the laws with symmetry of the state, and I am happy to report that not once did I hurl Zeh's book to the floor in anger.
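The free-particle example is simple enough to machine-check. The following sketch (my own illustration, not from the book) verifies both the one-parameter-group law and the time-reversal identity numerically:

```python
import numpy as np

# State of a free particle: (r, v) in R^3 x R^3.
def U(t, state):
    """Time evolution for a force-free particle: r -> r + t v, v -> v."""
    r, v = state
    return (r + t * v, v)

def T(state):
    """Time reversal: keep the position, flip the velocity."""
    r, v = state
    return (r, -v)

rng = np.random.default_rng(0)
state = (rng.standard_normal(3), rng.standard_normal(3))
t, s = 1.7, -0.4

# One-parameter group law: U(t) U(s) = U(t + s)
lhs = U(t, U(s, state))
rhs = U(t + s, state)
assert all(np.allclose(a, b) for a, b in zip(lhs, rhs))

# Time-reversal symmetry: U(-t) = T^{-1} U(t) T (here T is its own inverse)
lhs = U(-t, state)
rhs = T(U(t, T(state)))
assert all(np.allclose(a, b) for a, b in zip(lhs, rhs))
print("group law and time-reversal symmetry hold for the free particle")
```

Working through the second check by hand is equally quick: T(U(t, T(r, v))) = T(U(t, (r, -v))) = T((r - tv, -v)) = (r - tv, v) = U(-t)(r, v).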
At this point, we could go through all theories of physics and check to see whether they have time-reversal symmetry. However, let us simply turn to the most up-to-date and complete laws of physics we know: the standard model and general relativity. The "standard model" is a complicated theory of quantum fields that describes the most fundamental particles we know (mainly leptons and quarks) and the forces (electromagnetism and the weak and strong nuclear forces) by means of which they interact. In other words, it treats everything except gravity. The standard model has time-reversal symmetry except for effects involving the weak force. This is the force that permits a proton and electron to turn into a neutron and a neutrino, as happens in some radioactive atoms, or vice versa, as in some others. In quantum field theory, time reversal, or T, is one of a trio of possible symmetries, the others being charge conjugation, or C, which amounts to interchanging particles with their antiparticles, and parity, or P, which is related to spatial inversion
(t, x, y, z) → (t, -x, -y, -z)

in somewhat the same way that time reversal relates to the map

(t, x, y, z) → (-t, x, y, z).
In a quantum field theory, states are given by unit vectors in a Hilbert space H. The symmetries C and P are given by unitary operators on H -- if the theory in question admits these symmetries -- whereas T is given by an antiunitary operator, that is, a conjugate-linear one-to-one and onto norm-preserving map from H to itself. I will restrain myself from explaining why T must be antiunitary rather than unitary, fascinating though this is. The key point here is that in the standard model the weak force violates C, P, and T symmetry, whereas electromagnetism and the strong force admit all these symmetries. Moreover, although violation of P symmetry is common and blatant -- neutrinos, for example, which only interact by the weak force, come in a left-handed form but not a right-handed one -- violation of T symmetry has so far been seen only in the decays of a single particle, the neutral kaon, and the amount of violation is minute. Most physicists believe that this small T symmetry violation is not particularly related to the gross time asymmetry of the state of the universe. However, there is something very curious about this, as in the elaborate Islamic designs that are perfectly symmetrical except for one tiny flaw put in to avoid the wrath of Allah. Zeh's book does not treat the T asymmetry of the weak interaction very thoroughly, but luckily there is already a good book that does just this [1]. On the other hand, general relativity treats gravity, which is a great puzzle in its own right. It seems very difficult to unify gravity with the rest of the forces. Unlike all the other forces, it is not at all natural to formulate its dynamics in terms of a one-parameter time evolution group. Essentially, this is because it treats the geometry of space-time itself and describes how it wiggles around.
Although the dynamics of general relativity is by now moderately well understood, the modifications required for a quantum theory of gravity are still very poorly understood and seem to require a radical rethinking of the very notion of time. In his last chapter, "The Quantization of Time," Zeh tours this fascinating subject. Although a quantum theory of gravity would be likely to have profound implications for the study of time reversal, one can fairly say that, so far, the dynamics of gravity seems to admit time-reversal symmetry. It's worth noting that there are some cases where at first glance it looks as if the laws of physics are asymmetric under time reversal, but on closer inspection it turns out to be the fault of the particular state of the universe in which we are. The two most famous examples are the "time arrow of radiation" and the "time arrow of thermodynamics." Here an "arrow of time" is used loosely to denote something that is not symmetric under time reversal. The time arrow of radiation refers simply to the fact that when we shake an electrically charged object, it emits waves of radiation that ripple outward as time progresses into the future, rather than the past. We express this concept mathematically in terms of what are
called Green's functions. To understand these, it's easier to consider the scalar wave equation rather than Maxwell's equations of electromagnetism in their full glory. We have a "field" φ: ℝ⁴ → ℝ being produced by a "source" f: ℝ⁴ → ℝ; we assume that both are smooth functions and that

□φ = f, where □ = ∂²/∂t² - ∂²/∂x² - ∂²/∂y² - ∂²/∂z².
The source does not uniquely determine the field, but it is possible to write down formulas that give us for any source f a field φ with □φ = f. In particular, we say that G(t, x, y, z) is a Green's function (actually a distribution) for the scalar wave equation if

φ(t, r) = ∫ G(t - t', r - r') f(t', r') dt' dr'

(where r is short for (x, y, z)) implies that □φ = f. Two Green's functions are the "advanced" one,

G_adv(t, r) = δ(t + |r|) / |r|,

and the "retarded" one,

G_ret(t, r) = δ(t - |r|) / |r|,
where δ is the Dirac delta distribution. In electromagnetism, one typically uses the retarded Green's function, so that if f(t, r) is nonzero only for times t ∈ [t₀, t₁], then φ is typically nonzero after t₁, but is zero before t₀. It may seem odd that although the equation □φ = f is preserved by the transformation (t, x, y, z) → (-t, x, y, z), we are solving it in a way that doesn't respect this symmetry. There are, however, two things that help resolve this puzzle. First, working with the retarded rather than the advanced Green's function is, at least for f vanishing outside a bounded set, equivalent to an assumption about the nature of the field φ, namely, that it vanish as t → -∞. In short, we are making a time-asymmetric assumption about the state of the system in choosing the retarded Green's function. Why do we make this assumption? For a quite interesting reason: because it's dark at night. Light radiates out from the sun and from our flashlights, rather than coming into them from the distance, because the universe is a dark and cold place. The very fact that space is mostly dark and empty, with a speckling of hot bright stars that radiate outward, is blatantly time asymmetric, so the time arrow of radiation appears to be cosmological in origin. This fact about the universe is crucial to life as we know it, as all life on earth is powered by the outgoing radiation of the sun, and the earth, in turn, dumps its waste heat into the blackness of space. A second, subtler point is that the equation □φ = f
does not fit into the general framework of one-parameter groups because the field φ is subject to an arbitrary time-dependent external influence, the source f. Here one wants to imagine oneself, the experimenter, as being able to do whatever one wants with the source f, and to see what it does to the field φ. This is related to the notion of free will: We like to think that the laws of physics govern the behavior of everything else, but that we are free to do whatever we want. In the most fundamental laws of physics we know -- the standard model and general relativity -- no "arbitrary external influences" appear. In these laws, there is no need to choose between a retarded and advanced Green's function (or some other Green's function, for that matter). There is only the need to choose the state that best matches what we observe. The time arrow of thermodynamics is perhaps the most famous aspect of time-reversal symmetry -- so I will treat it very briefly here. Why is it so much more likely that a porcelain cup will fall to the floor and smash to smithereens, than it is for a pile of porcelain smithereens to form into a cup and jump into one's hand? Disorder seems always on the increase. In fact, in thermodynamics there is a quantity called entropy, S, which is a measure of disorder -- although one must be very careful not to fall for the negative connotations of "disorder," which here has a very precise and sometimes counterintuitive sense. The second law of thermodynamics states that dS/dt ≥ 0. This law appears utterly time-asymmetric, and reconciling it with the (almost) time-symmetric fundamental laws of physics has exercised the minds of many physicists for many years. The final resolution seems a simple one: This law is not true except for certain states. That is, it expresses a time asymmetry of the state of a system, rather than an asymmetry in the dynamical laws.
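A classic toy illustration of this point is Kac's ring model (my own sketch, not an example taken from Zeh): the microscopic rule is exactly reversible, yet starting from a specially prepared state a coarse-grained measure of disorder climbs toward its maximum.

```python
import numpy as np

# Kac's ring: N balls on a ring, each black (1) or white (0). A fixed
# random subset of sites is "marked"; each step, every ball flips colour
# if it sits on a marked site, then all balls shift one site clockwise.
# The rule is exactly invertible, yet coarse-grained disorder increases
# from the special all-white start.
rng = np.random.default_rng(1)
N = 10000
marked = rng.random(N) < 0.1        # which sites flip the colour
balls = np.zeros(N, dtype=int)      # special "low-entropy" start: all white

def step(b, forward=True):
    if forward:
        return np.roll(b ^ marked, 1)      # flip, then move clockwise
    return np.roll(b, -1) ^ marked         # the exact inverse

def disorder(b):
    """Binary entropy of the black fraction: 0 when all one colour."""
    p = b.mean()
    if p == 0.0 or p == 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

traj = [disorder(balls)]
b = balls.copy()
for _ in range(50):
    b = step(b)
    traj.append(disorder(b))
print("disorder initially:", traj[0], " after 50 steps:", traj[-1])

# Yet the dynamics is perfectly reversible: undoing 50 steps
# recovers the special initial state exactly.
for _ in range(50):
    b = step(b, forward=False)
assert np.array_equal(b, balls)
```

The "entropy" here rises not because the rule prefers a time direction but because the all-white state is special; running the film backwards from the disordered state is just as lawful, exactly the distinction the review draws.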
As long as the dynamical laws admit symmetry under time reversal, for every state with dS/dt > 0 there is a time-reversed state with dS/dt < 0. It is also worth noting that the vast majority of states typically have S quite large and dS/dt ≈ 0. As with the time arrow of radiation, in the last analysis it appears to be nothing but a raw experimental fact that the entropy of our universe is increasing. In a sense this is not surprising, because chemistry and biology convince us that life as we know it requires entropy to be changing monotonically, rather than staying about the same. One might ask why dS/dt > 0 rather than dS/dt < 0, but this is essentially a matter of convention. Processes like remembering and planning, which define the psychological notions of past and future, can occur only in the direction of increasing entropy. That is, a memory at time t can only be of an event at time t' for which S(t') < S(t), whereas a plan at time t can only be for an action at time t' for which S(t') > S(t). Because we have settled on using calendars for which the number of
the years increases in the direction of plans rather than memories, we have chosen a time coordinate t for which t < t' implies S(t) < S(t'). The main remaining mystery, then, is why the state of the universe is grossly asymmetric under time reversal, though the dynamical laws of physics are almost -- but not quite! -- symmetric. If readers wish to puzzle over this some more, or want supporting evidence for some of the (perhaps upsetting) claims I've made above, they could not do better than to read Zeh's book.
References 1. Robert G. Sachs, The Physics of Time Reversal, Chicago, University of Chicago Press (1987).
Department of Mathematics University of California Riverside, CA 92521 USA
Corrections
The authors of the first book reviewed in vol. 15, no. 3, pp. 69-71 are Yu. V. Egorov and M. A. Shubin. We regret having misspelled one of these names. In the review by David M. Bressoud in vol. 15, no. 4, 71-73, both formulas on p. 73 are wrong due to an editing error. In each case, the summation should be a product.