Advanced Engineering Analysis
The Calculus of Variations and Functional Analysis with Applications in Mechanics
Leonid P. Lebedev
Department of Mathematics,
National University of Colombia, Colombia
Michael J. Cloud
Department of Electrical and Computer Engineering,
Lawrence Technological University, USA
Victor A. Eremeyev
Institute of Mechanics, Otto von Guericke University Magdeburg, Germany
South Scientific Center of RASci
and South Federal University, Rostov on Don, Russia
World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI
Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
For photocopying of material in this volume, please pay a copying fee through the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to
photocopy is not required from the publisher.
ISBN-13 978-981-4390-47-7
ISBN-10 981-4390-47-X
Printed in Singapore.
Preface
A little over half a century ago, it was said that even an ingenious per-
son could not be an engineer unless he had nearly perfect skills with the
logarithmic slide rule. The advent of the computer changed this situa-
tion crucially; at present, many young engineers have never heard of the
slide rule. The computer has profoundly changed the mathematical side
of the engineering profession. Symbolic manipulation programs can cal-
culate integrals and solve ordinary differential equations better and faster
than professional mathematicians can. Computers also provide solutions
to differential equations in numerical form. The easy availability of mod-
ern graphics packages means that many engineers prefer such approximate
solutions even when exact analytical solutions are available.
Because engineering courses must provide an understanding of the fun-
damentals, they continue to focus on simple equations and formulas that
are easy to explain and understand. Moreover, it is still true that stu-
dents must develop some analytical abilities. But the practicing engineer,
armed with a powerful computer and sophisticated canned programs, em-
ploys models of processes and objects that are mathematically well beyond
the traditional engineering background. The mathematical methods used
by engineers have become quite sophisticated. With insufficient base knowl-
edge to understand these methods, engineers may come to believe that the
computer is capable of solving any problem. Worse yet, they may decide
to accept nearly any formal result provided by a computer as long as it was
generated by a program bearing a well-known trademark.
But mathematical methods have their limits. Certain problems may appear to fall within the nominal solution capabilities of a computer program
and yet lie well beyond those capabilities. Nowadays, the properties of so-
phisticated models and numerical methods are explained using terminology
September 30, 2011 8:42 World Scientific Book - 9in x 6in aea
Contents

Preface v
Bibliography 483
Index 485
Chapter 1

1.1 Introduction
A function in n variables
Consider the minimization of a function y = f (x) with x = (x1 , . . . , xn ).
More cannot be expected from this theory than from the theory of functions
in a single variable.
Definition 1.2. A function f(x) has a global minimum at the point x^* if the inequality

    f(x^*) \le f(x^* + h)    (1.9)

holds for all nonzero h = (h_1, \ldots, h_n) \in \mathbb{R}^n. The point x^* is a local minimum if there exists \rho > 0 such that (1.9) holds whenever |h| = (h_1^2 + \cdots + h_n^2)^{1/2} < \rho.
Let x^* be a minimum point of a continuously differentiable function f(x). Then f(x_1, x_2^*, \ldots, x_n^*) is a function in the single variable x_1 and takes its minimum at x_1^*. It follows that \partial f/\partial x_1 = 0 at x_1 = x_1^*. Similarly, the rest of the partial derivatives of f are zero at x^*:

    \frac{\partial f}{\partial x_i}\bigg|_{x=x^*} = 0, \quad i = 1, \ldots, n.    (1.10)
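As a quick numerical illustration of condition (1.10), the partial derivatives can be checked by central differences at a known minimum point. The function below is a made-up example, not one from the text:

```python
# Check the stationarity condition (1.10) at a known minimum point.
# Hypothetical example: f(x1, x2) = (x1 - 1)^2 + 3*(x2 + 2)^2,
# whose global minimum is clearly at x* = (1, -2).

def f(x):
    return (x[0] - 1.0)**2 + 3.0*(x[1] + 2.0)**2

def partial(f, x, i, h=1e-6):
    """Central-difference approximation of df/dx_i at x."""
    xp = list(x); xm = list(x)
    xp[i] += h; xm[i] -= h
    return (f(xp) - f(xm)) / (2.0*h)

x_star = (1.0, -2.0)
grads = [partial(f, x_star, i) for i in range(2)]
print(grads)  # both partial derivatives vanish at the minimum
```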
by which the constraints g_j are adjoined to f. Then the x_i and \lambda_j are all treated as independent, unconstrained variables. The resulting necessary conditions form a system of n + m equations in the n + m unknowns x_i, \lambda_j:

    \frac{\partial f(x)}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j(x)}{\partial x_i} = 0, \quad i = 1, \ldots, n,

    g_j(x) = 0, \quad j = 1, \ldots, m.    (1.17)
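For a concrete feel of system (1.17), consider a hypothetical quadratic problem (my example, not the book's): minimize f = x_1^2 + x_2^2 subject to g = x_1 + x_2 - 1 = 0, with n = 2, m = 1. The necessary conditions happen to be linear, so they can be solved directly:

```python
import numpy as np

# System (1.17) for the hypothetical problem
#   minimize f(x1, x2) = x1^2 + x2^2  subject to  g(x1, x2) = x1 + x2 - 1 = 0.
# The n + m = 3 necessary conditions are linear here:
#   2*x1 + lam = 0
#   2*x2 + lam = 0
#   x1 + x2    = 1
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)  # 0.5 0.5 -1.0
```

The solution x_1 = x_2 = 1/2 is indeed the closest point of the line x_1 + x_2 = 1 to the origin.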
Functionals
The kind of dependence in which a real number corresponds to another
(or to a finite set) is not enough to describe many natural processes. Ar-
eas such as physics and biology spawn formulations not amenable to such
description. Consider the deformations of an airplane in flight. At some
point near an engine, the deformation is not merely a function of the force
produced by the engine — it also depends on the other engines, air resis-
tance, and passenger positions and movements (hence the admonition that
everyone remain seated during potentially dangerous parts of the flight).
In general, many real processes in a body are described by the dependence
of the displacement field (e.g., the field of strains, stresses, heat, voltage)
on other fields (e.g., loads, heat radiation) in the same body. Each field is
described by one or more functions, so the dependence is that of a func-
tion uniquely defined by a set of other functions acting as whole objects
(arguments). A dependence of this type, provided we specify the classes to
which all functions belong, is called an operator (or map, or sometimes just
a “function” again). Problems of finding such dependences are often formu-
lated as boundary or initial-boundary value problems for partial differential
equations. These and their analysis form the main content of any course
in a particular science. Since a full description of any process is complex,
we usually work with simplified models that retain only essential features.
However, even these can be quite challenging when we seek solutions.
Humans often try to optimize their actions through an intuitive — not
mathematical — approach to fuzzily-posed problems on minimization or
maximization. This is because our nature reflects the laws of nature in
total. In physics there are quantities, like energy and enthalpy, whose val-
ues in the state of equilibrium or real motion are minimal or maximal in
comparison with other “nearby admissible” states. Younger sciences like
mathematical biology attempt to follow suit: when possible they seek to
describe system behavior through the states of certain fields of parameters,
on which functions of energy type attain maxima or minima. The energy
of a system (e.g., body or set of interacting bodies) is characterized by a
number which depends on the fields of parameters inside the system. Thus
the dependence described by quantities of energy type is such that a numer-
ical value E is uniquely defined by the distribution of fields of parameters
characterizing the system. We call this sort of dependence a functional. Of
course, in mathematics we must also specify the classes to which the above
fields may belong. The notion of functional generalizes that of function so
that the minimization problem remains sensible. Hence we come to the
object of investigation of our main subject: the calculus of variations. In
actuality we shall consider a somewhat restricted class of functionals. (Op-
timization of general functionals belongs to mathematical programming, a
younger science that contains the calculus of variations — a subject some
300 years old — as a special case.) In the calculus of variations we min-
yields the length of the plane curve y = y(x) from (a, y(a)) to (b, y(b)).
The obvious minimizer is a straight line y = kx + d. Without boundary
conditions (i.e., with y(a) or y(b) unspecified), k and d are arbitrary and
the solution is not unique. We can impose no more than two restrictions
on y(x) at the ends a and b, because y = kx + d has only two arbitrary
constants. However, the problem without boundary conditions also makes
sense; its solution is the set of horizontal segments y = d starting at the
vertical line x = a and ending at x = b.
Problem setup is a tough yet important issue in mathematics. We shall
eventually face the question of how to pose the main problems of the cal-
culus of variations in a sensible fashion.
Let us consider the problem of minimum of (1.20) without additional
restrictions, and attempt to solve it using calculus. Discretization, in this
case the approximation of the integral by a Riemann sum, will reduce the
functional to a multivariable function. In the calculus of variations other
methods of investigation are customary; however, the current approach
is instructive because it leads to some central results of the calculus of
variations and shows that certain important ideas are extensions of ordinary
calculus.
We begin by subdividing [a, b] into n subintervals, each of length

    h = \frac{b-a}{n}.

Denote x_i = a + ih and y_i = y(x_i), so y_0 = y(a) and y_n = y(b). Take an approximate value of y'(x_i) as

    y'(x_i) \approx \frac{y_{i+1} - y_i}{h}.
Approximation of (1.20) by the Riemann sum

    \int_a^b f(x, y, y')\,dx \approx h \sum_{k=0}^{n-1} f(x_k, y_k, y'(x_k))    (1.21)

gives

    \int_a^b f(x, y, y')\,dx \approx h \sum_{k=0}^{n-1} f\!\left(x_k, y_k, \frac{y_{k+1} - y_k}{h}\right) = \Phi(y_0, \ldots, y_n).    (1.22)
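To see the discretization at work, one can minimize the sum Φ directly for the arclength integrand f = \sqrt{1 + y'^2} of the earlier example. This is a sketch using scipy's general-purpose minimizer (an assumption of convenience, not a method from the text); with free endpoints the discrete minimizer flattens to a horizontal segment, and Φ approaches the interval length b − a = 1:

```python
import numpy as np
from scipy.optimize import minimize

# Discretize the arclength functional F(y) = ∫_0^1 sqrt(1 + y'^2) dx
# as in (1.22): Phi(y_0, ..., y_n) with y'(x_k) ≈ (y_{k+1} - y_k)/h.
n = 10
h = 1.0 / n

def Phi(y):
    slopes = np.diff(y) / h
    return h * np.sum(np.sqrt(1.0 + slopes**2))

rng = np.random.default_rng(0)
y_start = rng.uniform(-1.0, 1.0, n + 1)   # arbitrary starting curve
res = minimize(Phi, y_start, method="BFGS")

print(res.fun)        # close to 1.0, the length of a horizontal segment
print(np.ptp(res.x))  # spread of y-values near 0: the curve is flat
```

Note that Φ is convex here, so the minimizer is reliable; the constant level of the flat curve is arbitrary, exactly as in the continuous problem without boundary conditions.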
or

    f_{y'}\!\left(x_0, y_0, \frac{y_1 - y_0}{h}\right) - h\, f_y\!\left(x_0, y_0, \frac{y_1 - y_0}{h}\right) = 0.    (1.27)

For i = n we obtain

    f_{y'}\!\left(x_{n-1}, y_{n-1}, \frac{y_n - y_{n-1}}{h}\right) = 0.    (1.28)

In the limit as h → 0, (1.27) and (1.28) give, respectively,

    f_{y'}(x, y(x), y'(x))\big|_{x=a} = 0, \qquad f_{y'}(x, y(x), y'(x))\big|_{x=b} = 0.
Finally, considering the first two terms in (1.26) for 0 < i < n,

    -\frac{f_{y'}\!\left(x_i, y_i, \dfrac{y_{i+1}-y_i}{h}\right) - f_{y'}\!\left(x_{i-1}, y_{i-1}, \dfrac{y_i - y_{i-1}}{h}\right)}{h},

we recognize an approximation for the total derivative -df_{y'}/dx at x_{i-1}. Hence (1.26), after h → 0 in such a way that x_{i-1} remains a fixed value c, reduces to

    f_y - \frac{d}{dx} f_{y'} = 0    (1.29)

at x = c. A nonuniform partitioning will yield this equation similarly for any x = c ∈ (a, b). In expanded form (1.29) is

    f_y - f_{y'x} - f_{y'y}\, y' - f_{y'y'}\, y'' = 0.    (1.30)
The limit passage has given us this second-order ordinary differential equa-
tion and two boundary conditions
    f_{y'}\big|_{x=a} = 0, \qquad f_{y'}\big|_{x=b} = 0.    (1.31)

Equations (1.29) and (1.31) play the same role for the functional (1.20) as equation (1.10) plays for a function of many variables. In the absence of boundary conditions on y(x), we nevertheless obtain two boundary conditions that must hold for a function on which (1.20) attains a minimum.
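The Euler equation (1.29) can also be produced symbolically. The sketch below uses sympy's `euler_equations` helper (an assumption of convenience, not the book's approach) with the arclength integrand, and confirms that every straight line y = kx + d satisfies the resulting equation:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, k, d = sp.symbols('x k d')
y = sp.Function('y')

# Arclength integrand f(x, y, y') = sqrt(1 + y'^2).
f = sp.sqrt(1 + y(x).diff(x)**2)

# euler_equations returns f_y - d/dx f_{y'} = 0 for the functional ∫ f dx.
eq = euler_equations(f, [y(x)], [x])[0]
print(eq)

# A straight line y = k*x + d must satisfy the Euler equation identically.
residual = sp.simplify((eq.lhs - eq.rhs).subs(y(x), k*x + d).doit())
print(residual)  # 0
```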
Since the resulting equation is of second order, no more than two bound-
ary conditions can be imposed on its solution (see, however, Remark 1.20).
We could, say, fix the ends of the curve y = y(x) by putting
If we repeat the above process under this restriction we get (1.26) and cor-
respondingly (1.29), whereas (1.31) is replaced by (1.32). We can consider
the problem of minimum of this functional on the set of functions satisfying
(1.32). Then the necessary condition which a minimizer should satisfy is
the boundary value problem consisting of (1.29) and (1.32).
Conditions such as y(a) = 0 and y'(a) = 0 are normally posed for a Cauchy problem involving a second-order differential equation. In the present case, however, a repetition of the above steps implies the additional restriction f_{y'}|_{x=b} = 0. A problem for (1.29) with three boundary conditions is, in general, inconsistent.
We have obtained some possible ways to set up the problem of minimum
of the functional (1.20).
1. Suppose
The total derivative with respect to x, denoted d/dx, arises when we differentiate while considering y(x) and y'(x) to be functions of x. The total derivative of the partial derivative f_{y'} is, by the chain rule,

    \frac{d}{dx} f_{y'} \equiv \frac{d}{dx} f_{y'}(x, y(x), y'(x)) = f_{y'x} + f_{y'y}\, y' + f_{y'y'}\, y'',

where, for example,

    f_{y'y} = \frac{\partial}{\partial p} \frac{\partial}{\partial q} f(x, p, q)\bigg|_{p=y(x),\ q=y'(x)}.

For a function u(x, y) of two variables and an integrand f(x, y, p, q, r) evaluated at p = u(x, y), q = u_x(x, y), r = u_y(x, y), we write

    f_y = \frac{\partial}{\partial y} f(x, y, p, q, r)\bigg|_{p=u(x,y),\ q=u_x(x,y),\ r=u_y(x,y)},

    f_u = \frac{\partial}{\partial p} f(x, y, p, q, r)\bigg|_{p=u(x,y),\ q=u_x(x,y),\ r=u_y(x,y)},

    f_{u_x} = \frac{\partial}{\partial q} f(x, y, p, q, r)\bigg|_{p=u(x,y),\ q=u_x(x,y),\ r=u_y(x,y)},

and

    f_{u_y} = \frac{\partial}{\partial r} f(x, y, p, q, r)\bigg|_{p=u(x,y),\ q=u_x(x,y),\ r=u_y(x,y)}.

Finally, let us display the notation for the total derivative d/dx of f_{u_x}, where f = f(x, y, p, q, r):

    \frac{d}{dx} f_{u_x} = \big( f_{qx} + f_{qp}\, u_x + f_{qq}\, u_{xx} + f_{qr}\, u_{yx} \big)\Big|_{p=u(x,y),\ q=u_x(x,y),\ r=u_y(x,y)}.
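These chain-rule formulas are easy to sanity-check symbolically. The following sketch (using sympy and an arbitrary made-up integrand, both assumptions of mine) verifies d/dx f_{y'} = f_{y'x} + f_{y'y} y' + f_{y'y'} y'' for a concrete choice of y(x):

```python
import sympy as sp

x, p, q = sp.symbols('x p q')    # p stands for y, q for y'
# A made-up integrand f(x, y, y') = x*y^2*y' + sin(y') for the check.
f = x*p**2*q + sp.sin(q)

y_expr = x**3                    # a concrete choice of y(x)
yp, ypp = sp.diff(y_expr, x), sp.diff(y_expr, x, 2)
at_curve = {p: y_expr, q: yp}

f_q = sp.diff(f, q)              # the partial derivative f_{y'}
# Left side: total derivative d/dx of f_{y'}(x, y(x), y'(x)).
lhs = sp.diff(f_q.subs(at_curve), x)
# Right side: f_{y'x} + f_{y'y} y' + f_{y'y'} y''.
rhs = (sp.diff(f_q, x) + sp.diff(f_q, p)*yp + sp.diff(f_q, q)*ypp).subs(at_curve)

print(sp.simplify(lhs - rhs))  # 0
```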
|F (y + ϕ) − F (y)| < ε.
Definition 1.3. The space C^{(1)}(a, b) is the normed space consisting of the set of all functions ϕ(x) that are continuously differentiable on [a, b], supplied with the norm (1.37). Its subspace of functions satisfying (1.35) is denoted C_0^{(1)}(a, b). The set of all functions having k continuous derivatives on [a, b] is denoted C^{(k)}(a, b).
In many books these spaces are denoted by C^{(k)}([a, b]) to emphasize that [a, b] is closed. To keep our notation reasonable throughout the book, we introduce
For example, the notation \|\cdot\|_{C^{(1)}(a,b)} (where the dot stands for the argument of the norm operation) is shortened to \|\cdot\| in the present section. At times, only some aspect of the full label can be suppressed. For example, we may use the notation \|\cdot\|_{C^{(1)}} if only the domain [a, b] is understood. With this convention in mind let us proceed to
Historically this type of minimum is called “weak” and we shall use only this
type and simply call it a minimum. Those who pioneered the calculus of
variations also considered “strong” local minima, defining these as values of
y for which there is a δ such that F (y+ϕ) ≥ F (y) whenever ϕ(a) = ϕ(b) = 0
and max |ϕ| < δ on [a, b]. Here the modified condition on ϕ brings "strong variations" into consideration: i.e., functions ϕ for which ϕ' may be large even though ϕ itself is small. Note that when we "weaken" the condition on ϕ by changing the norm from the norm of C_0^{(1)}(a, b) to the norm of C_0(a, b), which involves only ϕ and not ϕ', we simultaneously strengthen the statement made regarding y when we assert the inequality F(y + ϕ) ≥ F(y).
Let us turn to a rigorous justification of (1.29). We restrict the class of possible integrands f(x, y, z) of (1.33) to the set of functions that are continuous in (x, y, z) when x ∈ [a, b] and |y − y(x)| + |z − y'(x)| < δ. Suppose the existence of a minimizer y(x) for F(y) (see, however, Remark 1.13 on page 21). Consider F(y + tϕ) for an arbitrary but fixed ϕ(x) ∈ C_0^{(1)}(a, b). It
It is a function in the single variable t, taking its minimum at t = 0. If it
is differentiable then
    \frac{dF(y + t\varphi)}{dt}\bigg|_{t=0} = 0.    (1.38)
Definition 1.7. The right member of (1.39) is denoted δF (y, ϕ) and called
the first variation of the functional (1.33).
In the integrand we see the left side of (1.29). To deduce (1.29) from (1.40)
we need the fundamental lemma of the calculus of variations.
hold for every function ϕ(x) that is differentiable on [a, b] and vanishes in
some neighborhoods of a and b. Then g(x) ≡ 0.
Proof. Suppose to the contrary that (1.41) holds while g(x_0) ≠ 0 for some x_0 ∈ (a, b). Without loss of generality we may assume g(x_0) > 0. By continuity we have g(x) > 0 in a neighborhood [x_0 − ε, x_0 + ε] ⊂ (a, b). It is
See Fig. 1.1. The product g(x)ϕ_0(x) is nonnegative everywhere and positive near x_0. Hence \int_a^b g(x)\varphi_0(x)\,dx > 0, a contradiction.
(Fig. 1.1: the bump function ϕ_0(x), supported on [x_0 − ε, x_0 + ε].)
Lemma 1.9. Let g(x) be continuous on [a, b], and let (1.41) hold for any
function ϕ(x) that is infinitely differentiable on [a, b] and vanishes in some
neighborhoods of a and b. Then g(x) ≡ 0.
The proof is the same as that for Lemma 1.8: one need only construct a similar bell-shaped function ϕ(x) that is infinitely differentiable.
This form of the fundamental lemma provides a basis for the theory of gen-
eralized functions or distributions. These are linear functionals on the sets
of infinitely differentiable functions, and arise as elements of the Sobolev
spaces to be discussed later.
Now we can formulate the main result of this section.
Theorem 1.10. Suppose y = y(x) ∈ C (2) (a, b) locally minimizes the func-
tional (1.33) on the subset of C (1) (a, b) consisting of those functions satis-
fying (1.34). Then y(x) is a solution of the equation
    f_y - \frac{d}{dx} f_{y'} = 0.    (1.42)
so the global minimum of F (y) really does occur at ȳ(x). Although such
direct verification is not always straightforward, a large class of important
problems in mechanics (e.g., problems of equilibrium for linearly elastic
structures under conservative loads) yield single extremals that minimize
their corresponding total energy functionals. This happens because of the
quadratic structure of the functional, as in the present example.
Lemma 1.14. Let g(x) be a continuous function on [a, b] for which the following equality holds for every ϕ(x) ∈ C_0^{(1)}(a, b):

    \int_a^b g(x)\varphi'(x)\,dx = 0.    (1.43)
This holds for all ϕ(x) ∈ C_0^{(1)}(a, b). So by Lemma 1.14 we have (1.45).
The integro-differential equation (1.45) has been called the Euler equa-
tion in integrated form.
Corollary 1.16. If
along a minimizer y = y(x) ∈ C (1) (a, b) of (1.33), then y(x) ∈ C (2) (a, b).
Considering the term with y''(x) on the left, we prove the claim.
It follows that under the condition of the corollary, equations (1.42) and (1.45) are equivalent; however, this is not the case when f_{y'y'}(x, y(x), y'(x)) can be equal to zero on a minimizer y = y(x). Since y''(x) does not appear in (1.45), it can be considered as defining a generalized solution of (1.42).
At times it becomes clear that we should change variables and consider a
problem in another coordinate frame. For example, if we consider geodesic
lines on a surface of revolution, then cylindrical coordinates may seem more
appropriate than Cartesian coordinates. For the problem of minimum of a
functional we have two objects: the functional itself, and the Euler equation
for this functional. Let y = y(x) satisfy the Euler equation in the original
frame. Let us change variables, for example from (x, y) to (u, v):
x = x(u, v), y = y(u, v). (1.46)
The forms of the functional and its Euler equation both change. Next we
change variables for the extremal y = y(x) and get a curve v = v(u) in the
new variables. Is v = v(u) an extremal for the transformed functional? It
is, provided the transformation does not degenerate in some neighborhood
of the curve y = y(x): that is, if the Jacobian
    J = \begin{vmatrix} x_u & x_v \\ y_u & y_v \end{vmatrix} \ne 0
there. This property is called the invariance of the Euler equation. Roughly
speaking, we can change all the variables of the problem at any stage of
the solution and get the same solutions in the original coordinates. This
invariance is frequently used in practice. We shall not stop to consider the
issue of invariance for each type of functional we treat, but the results are
roughly the same.
We have derived a necessary condition for a function to be a point
of minimum or maximum of (1.33). Other functionals will be treated in
the sequel. An Euler equation is the starting point for any variational
investigation of a physical problem, and in practice its solution is often
approached numerically. Let us consider some methods relevant to (1.33).
Here ϕ0 (x) satisfies (1.34); a common choice is the linear function ϕ0 (x) =
αx + β with
    \alpha = \frac{d_1 - d_0}{b - a}, \qquad \beta = \frac{b d_0 - a d_1}{b - a}.    (1.48)
The remaining functions, called basis functions, satisfy the homogeneous
conditions
ϕk (a) = ϕk (b) = 0, k = 1, . . . , n.
The ck are constants.
Definition 1.17. The function y_n^*(x) that minimizes (1.33) on the set of all functions of the form (1.47) is called the nth Ritz approximation.
The system {ϕ_k(x)} is linearly independent:

    \sum_{k=1}^{n} c_k \varphi_k(x) \equiv 0 \quad \text{only if } c_k = 0 \text{ for } k = 1, \ldots, n,

and complete: for any admissible g(x) and any ε > 0 there are coefficients c_k such that

    \bigg\| g(x) - \sum_{k=1}^{n} c_k \varphi_k(x) \bigg\| < \varepsilon.
    \int_a^b f(x, y_n, y_n')\,dx,

where y_n(x) is given by (1.47). The unknowns are the c_k, so the functional becomes a function in n real variables:

    \Phi(c_1, \ldots, c_n) = \int_a^b f(x, y_n, y_n')\,dx.

The necessary condition for a minimum is

    \frac{\partial \Phi(c_1, \ldots, c_n)}{\partial c_k} = 0, \quad k = 1, \ldots, n.    (1.49)
Denoting c_0 = 1, we have

    \frac{\partial \Phi(c_1, \ldots, c_n)}{\partial c_k}
    = \frac{\partial}{\partial c_k} \int_a^b f\!\left(x, \sum_{i=0}^{n} c_i \varphi_i(x), \sum_{i=0}^{n} c_i \varphi_i'(x)\right) dx

    = \int_a^b f_y\!\left(x, \sum_{i=0}^{n} c_i \varphi_i(x), \sum_{i=0}^{n} c_i \varphi_i'(x)\right) \varphi_k(x)\,dx

    \quad + \int_a^b f_{y'}\!\left(x, \sum_{i=0}^{n} c_i \varphi_i(x), \sum_{i=0}^{n} c_i \varphi_i'(x)\right) \varphi_k'(x)\,dx,
subject to y(0) = 0 and y(1) = 10. Find the Ritz approximations for
n = 1, 3, 5 using ϕ0 (x) = 10x and the following basis sets:
Solution. Note that ϕ0 (x) was chosen to satisfy the given boundary con-
ditions. We find the expansion coefficients ck by solving the system
    \frac{\partial}{\partial c_k} \Psi\!\left(\varphi_0(x) + \sum_{i=1}^{n} c_i \varphi_i(x)\right) = 0, \quad k = 1, \ldots, n.
For brevity let us denote

    \langle y, z \rangle = \int_0^1 \{ y'(x) z'(x) + [1 + 0.1 \sin x]\, y(x) z(x) \}\,dx

so that

    \Psi(y) = \langle y, y \rangle - 2 \int_0^1 x\, y(x)\,dx.
Using the symmetry of the form \langle y, z \rangle we write out Ritz's equations:

    c_1 \langle \varphi_1, \varphi_1 \rangle + c_2 \langle \varphi_2, \varphi_1 \rangle + \cdots + c_n \langle \varphi_n, \varphi_1 \rangle = -\langle \varphi_0, \varphi_1 \rangle + \int_0^1 x\, \varphi_1(x)\,dx,

    c_1 \langle \varphi_1, \varphi_2 \rangle + c_2 \langle \varphi_2, \varphi_2 \rangle + \cdots + c_n \langle \varphi_n, \varphi_2 \rangle = -\langle \varphi_0, \varphi_2 \rangle + \int_0^1 x\, \varphi_2(x)\,dx,

    \vdots

    c_1 \langle \varphi_1, \varphi_n \rangle + c_2 \langle \varphi_2, \varphi_n \rangle + \cdots + c_n \langle \varphi_n, \varphi_n \rangle = -\langle \varphi_0, \varphi_n \rangle + \int_0^1 x\, \varphi_n(x)\,dx.    (1.52)
For small n this system can be solved by hand, otherwise computer solution
is required. In the present case we find that for the first basis set the Ritz
approximations are
    y_1(x) = 10x − 2.162\,x(1 − x),
    y_3(x) = 10x + (−1.409x − 1.356x^2 − 0.246x^3)(1 − x),
    y_5(x) = 10x + (−1.404x − 1.404x^2 − 0.140x^3 − 0.063x^4 − 0.007x^5)(1 − x).

For the second basis set we obtain the Ritz approximations

    z_1(x) = 10x − 0.289 \sin \pi x,
    z_3(x) = 10x − 0.289 \sin \pi x + 0.063 \sin 2\pi x − 0.017 \sin 3\pi x,
    z_5(x) = 10x − 0.289 \sin \pi x + 0.063 \sin 2\pi x − 0.017 \sin 3\pi x + 0.008 \sin 4\pi x − 0.004 \sin 5\pi x,
as required.
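The first Ritz coefficient above can be reproduced numerically. The sketch below (numpy with a simple trapezoid rule; the grid size is an arbitrary choice of mine) evaluates the form ⟨·,·⟩ and solves the n = 1 case of (1.52), recovering c_1 ≈ −2.162:

```python
import numpy as np

# Reproduce the n = 1 Ritz coefficient for
#   Psi(y) = <y, y> - 2 ∫_0^1 x y(x) dx,
#   <y, z> = ∫_0^1 { y'z' + [1 + 0.1 sin x] y z } dx,
# with phi0(x) = 10x and phi1(x) = x(1 - x).
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
w = 1.0 + 0.1*np.sin(x)

def integrate(v):
    # composite trapezoid rule on the uniform grid
    return (v[0]/2 + v[1:-1].sum() + v[-1]/2) * dx

phi0, dphi0 = 10.0*x, 10.0*np.ones_like(x)
phi1, dphi1 = x*(1.0 - x), 1.0 - 2.0*x

def inner(u, du, v, dv):
    return integrate(du*dv + w*u*v)

# Single Ritz equation: c1 <phi1, phi1> = -<phi0, phi1> + ∫ x phi1 dx.
c1 = (-inner(phi0, dphi0, phi1, dphi1) + integrate(x*phi1)) / \
     inner(phi1, dphi1, phi1, dphi1)
print(c1)  # ≈ -2.162, matching y1(x) = 10x - 2.162 x(1 - x)
```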
The polynomial

    P_n(x) = f(0) + \int_0^x Q_n(t)\,dt

approximates f(x):

    |f(x) - P_n(x)| = \left| f(0) + \int_0^x f'(t)\,dt - f(0) - \int_0^x Q_n(t)\,dt \right|
    \le \int_0^x |f'(t) - Q_n(t)|\,dt
    \le \varepsilon/2 \quad \text{for } x \in [0, 1],
In the same way it can be shown that a function n-times continuously dif-
ferentiable on [0, 1] can be approximated to within any prescribed accuracy
by a polynomial together with all n of its derivatives on [0, 1]. The set of
monomials {x^k} constitutes a complete system of functions in C^{(n)}[0, 1] for any n.
Note that Weierstrass' theorem guarantees nothing more than the existence of an approximating polynomial. When we decrease ε we get a new polynomial whose coefficient of each term x^k may differ significantly from the corresponding coefficient of the previous approximating polynomial. This is because the set {x^k} does not have the uniqueness property required of a true basis. Moreover, in mathematical analysis it is shown that we can remove infinitely many members of the family {x^k} and still have a complete system {x^{k_r}}. It is necessary only to retain members for which the series \sum_{r=1}^{\infty} 1/k_r diverges. So the system {x^k} contains more members than we need. Although any finite set of monomials x^k is linearly independent, as we take more and
more elements the set gets closer to becoming linearly dependent; that is,
given any ε > 0 we can find infinitely many polynomials approximating the
zero function to within ε-accuracy on [0, 1]. This leads to numerical insta-
bility. The difficulty can be avoided by using other families of polynomials
for approximation: namely, orthogonal polynomials for which numerical
instability shows itself only in higher degrees of approximation.
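The near-dependence of the monomials can be quantified: their Gram matrix in L^2(0, 1), G_{ij} = \int_0^1 x^i x^j\,dx = 1/(i + j + 1), is the notoriously ill-conditioned Hilbert matrix. A short numerical check (the sizes chosen are arbitrary):

```python
import numpy as np

# Gram matrix of the monomials 1, x, ..., x^(n-1) in L2(0, 1):
# G[i, j] = ∫_0^1 x^i x^j dx = 1/(i + j + 1)  -- the Hilbert matrix.
def gram(n):
    return np.array([[1.0/(i + j + 1) for j in range(n)] for i in range(n)])

conds = [np.linalg.cond(gram(n)) for n in (3, 6, 9)]
print(conds)  # condition number grows roughly exponentially with n
```

Already for nine monomials the condition number exceeds 10^10, which is the numerical instability described above; orthogonal polynomial families keep the Gram matrix well conditioned.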
As we know from the theory of Fourier expansion, the second system of functions {sin kπx} is orthonormal. It is, moreover, a basis (but not of C_0^{(1)}(0, π)) as we shall discuss later. This provides greater numerical stability at higher orders of approximation. However, in low-order Ritz approximations it can be worse than a polynomial approximation of the same problem, at least for many problems whose solutions do not oscillate.
One more aspect of the approximation is seen in the above results. We compared the values of the Ritz approximations; comparing their derivatives instead, we find much better agreement for the approximating functions themselves than for their derivatives. The same clearly holds for the difference between an exact solution and the approximating functions. This property is common to all projection methods. So, for example, in solving problems of elasticity we get comparatively good results in low-order approximations for the field of displacements, whereas the fields of stresses, which are expressed through the derivatives of the displacement fields, are approximated significantly worse.
Theorem 1.19. Let y = y(x) ∈ C^{(2)}(a, b) be a minimizer of the functional \int_a^b f(x, y, y')\,dx over the space C^{(1)}(a, b). Then for y = y(x) the Euler equation

    f_y - \frac{d}{dx} f_{y'} = 0 \quad \text{for all } x \in (a, b)    (1.53)
Proof. We can repeat the initial steps of § 1.2. Namely, consider the values of the functional on the bundle of functions y = y(x) + tϕ(x), where ϕ(x) ∈ C^{(1)}(a, b) is arbitrary but fixed. Here, however, there are no restrictions on ϕ(x) at the endpoints of [a, b].

For fixed y(x) and ϕ(x) the functional \int_a^b f(x, y + t\varphi, y' + t\varphi')\,dx becomes a function of the real variable t, and attains its minimum at t = 0. Differentiating with respect to t we get

    \int_a^b [f_y(x, y, y')\varphi + f_{y'}(x, y, y')\varphi']\,dx = 0.
Integration by parts gives

    \int_a^b \left[ f_y(x, y, y') - \frac{d}{dx} f_{y'}(x, y, y') \right] \varphi\,dx + f_{y'}(x, y(x), y'(x))\,\varphi(x)\Big|_{x=a}^{x=b} = 0.    (1.55)
From this we shall derive the Euler equation for y(x) and the natural bound-
ary conditions. The procedure is as follows. We limit the set of all continu-
ously differentiable functions ϕ(x) to those satisfying ϕ(a) = ϕ(b) = 0. For
these functions we have

    \int_a^b \left[ f_y(x, y, y') - \frac{d}{dx} f_{y'}(x, y, y') \right] \varphi\,dx = 0.    (1.56)
This equation holds for all functions ϕ(x) that participate in the formulation
of Lemma 1.8. Hence the continuous multiplier of ϕ(x) in the integrand of
(1.56) is zero, and the Euler equation (1.53) holds in (a, b).
Now let us return to (1.55). The equality (1.56), because of the Euler
equation, holds for all ϕ(x). From (1.55) it follows that
x=b
fy (x, y(x), y (x))ϕ(x)
=0 (1.57)
x=a
for any ϕ(x). Taking ϕ(x) = x − b we find that fy |x=a = 0; taking
ϕ(x) = x − a we find that fy |x=b = 0.
Let us call attention to the way this result was obtained. First we re-
stricted the set of admissible functions to those for which we could get a
certain intermediate result (the Euler equation); using this result, we ob-
tained some simplification in the first variation. We finished the argument
by considering the simplified first variation on all the admissible functions.
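As an illustration of Theorem 1.19, take the made-up quadratic functional \int_0^1 (y'^2 + y^2 - 2xy)\,dx (my example, not one from the text). The Euler equation is y'' = y - x and the natural conditions are f_{y'} = 2y' = 0 at both ends; with the general solution y = x + A e^x + B e^{-x}, the natural conditions determine A and B:

```python
import numpy as np

# Hypothetical functional: ∫_0^1 (y'^2 + y^2 - 2 x y) dx.
# Euler equation: y'' = y - x; general solution y = x + A e^x + B e^(-x).
# Natural boundary conditions f_{y'} = 2 y' = 0 at x = 0 and x = 1:
#   y'(0) = 1 + A - B           = 0
#   y'(1) = 1 + A e - B e^(-1)  = 0
e = np.e
M = np.array([[1.0, -1.0],
              [e, -1.0/e]])
rhs = np.array([-1.0, -1.0])
A, B = np.linalg.solve(M, rhs)

yprime = lambda t: 1.0 + A*np.exp(t) - B*np.exp(-t)
print(A, B)                      # A = -1/(1+e), B = e/(1+e)
print(yprime(0.0), yprime(1.0))  # both ≈ 0: natural conditions hold
```

Here no boundary values of y were imposed, yet the minimizer is completely determined, exactly as the theorem predicts.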
The functional \int_a^b f(x, y, y')\,dx for vector functions

Let us replace y(x) in (1.33) by a vector function y(x) = (y_1(x), \ldots, y_n(x)), so the integrand takes the form

    f(x, y(x), y'(x)) \quad \text{or} \quad f(x, y_1(x), \ldots, y_n(x), y_1'(x), \ldots, y_n'(x)).

First consider the problem of minimizing (1.59) when y(x) takes boundary values