Computational Physics
Peter Hertel
Fachbereich Physik
Universität Osnabrück
Numerical, or computational, physics is a branch of physics with an age-old tradition. If you can't produce a number, you have achieved nothing, says Feynman, and right he is. More often than not, producing a number is possible only by computing. This series of lectures introduces standard methods of computational physics. The computational power available to the average student or researcher has grown by a factor of a thousand or so within the last twenty years, and with it the availability of high-quality software. Therefore, these lectures concentrate on how to use software, not how to develop it. Quite naturally, this boils down to Matlab.
January 14, 2008
Contents
1 Introduction
2 Matrices
3 Numbers
4 Functions
5 Ordinary differential equations
6 Fast Fourier transform
7 Fitting data
8 Simulated Annealing
9 Finite difference method
10 Propagation
A Figures Makefile
List of Figures
1 Introduction
This series of lectures is about standard numerical techniques for computing
in physics. Although teachers prefer examples which can be solved by pencil
and paper, or by chalk on the blackboard, most problems cannot be solved
analytically. They require computations. Computational physics has a long
tradition. However, computational physics cannot be taught any more in
the traditional way.
In 1981, the first IBM personal computer could address 64 kB of memory, its mass storage was a 360 kB floppy disk, and its Intel 8088 processor ran at 4.77 MHz. At 15,000 EUR it was cheap compared to mainframes. Today, a computer such as my laptop costs 1500 EUR. Its 32-bit processor runs at 1.2 GHz, supported by 512 MB of fast storage and 40 GB of mass storage, with Ethernet and modem controllers on board, and so forth. This amounts to a many-hundred-fold increase in computational power within the last two decades.
Moreover, high-quality software to run on these cheap and highly efficient computers has become cheap as well. Just study the MathWorks price list for Matlab.
Today, a course on computational physics, in particular an introduction,
should concentrate on how to use available and cheap high quality software,
and not how to develop it.
I have decided on using Matlab for various reasons.
First, because it is relatively cheap.
Second, because it comes with an integrated development environment: a command window, a language-sensitive editor, a debugger, a workspace inspector, and so forth.
Third, because it includes the software treasures of the last 50 years. In particular, the LINPACK and EISPACK packages1 of linear algebra, which form the backbone of computational physics, are the core of Matlab. In fact, Matlab was conceived as a meta-language for using such FORTRAN code without getting lost in unnecessary details.
Fourth, Matlab now provides sparse-matrix technology, which is essential for partial differential equations.
Fifth, Matlab makes you think in terms of objects; it is not a procedural language. After a short while, you will think of x as a variable, not as an array of equally spaced representative values.
And last but not least, Matlab is a powerful graphics machine for visualizing data, both two- and three-dimensional. In particular, Matlab allows for the export of pictures in Encapsulated PostScript format.
1Cleve Moler is one of the authors of the LINPACK and EISPACK documentation and one of their main contributors. Today he is chief mathematician of MathWorks.
In these lectures on Computational Physics we cannot cover a number of
other important fields, such as data acquisition, graphical user interfaces or
fine points of graphics programming. Instead, we will restrict ourselves to
standard techniques of doing analysis on a computer.
This text addresses students of physics who are used to learning by carefully chosen examples. It is neither systematic nor complete.
2 Matrices
The basic objects of Matlab are matrices. A number is a 1 × 1 matrix. A row vector is a 1 × N matrix, a column vector an N × 1 matrix. A polynomial is represented by the row vector of its coefficients. A string is a row vector of 2-byte numbers which represent characters. And so forth.
The most important operators are the square brackets for aggregation. The comma separates objects which are to be aggregated horizontally; the semicolon separates objects which are to be aggregated vertically. A semicolon at the end of an expression suppresses echoing.
Here are a few examples:
>> A=[1,2,3;4,5,6];
>> B=[A;[7,8,9]];
The size function takes in a matrix and returns the numbers of rows and
columns. Try
>> [r,c]=size(A);
Note that output is always aggregated horizontally.
By saying2
>> M=3; N=5;
>> a=zeros(M,N);
you create an M × N matrix a of zeros. Likewise, ones creates matrices of
ones.
>> B=eye(N);
2Matlab distinguishes between lower and upper case
generates a square 5 × 5 unit matrix. Check it by typing
>> B(2,1)
>> B(2,2)
The prime (single quote) operator stands for Hermitian conjugation. Compare
>> x=[1,2+3i,4;0,i,2]
and
>> xc=x’
One of the highlights of Matlab is the colon operator. It stands for from:to. The colon by itself stands for all indices. Thus we could have written
>> B=[1:3;4:6;7:9];
We now extract the first and the third row:
>> C=B([1,3],:);
Note that matrices are indexed just like functions: arguments are in parentheses and separated by commas. The first argument is a vector of row indices, the second a vector of column indices.
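The ordering of the index arguments is quickly checked at the command window. In the following sketch, B is the 3 × 3 matrix from above:

>> B=[1:3;4:6;7:9];
>> D=B([1,2],[2,3])

The result consists of rows 1 and 2 and columns 2 and 3 of B, namely [2,3;5,6].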
Another often used function for creating vectors is linspace. This function
has three arguments: the lower value of an interval, the upper value, and
the number of representative points.
>> x=linspace(0,10,128);
produces a row vector of size 1 × 128. Its first element is 0, its last is 10, with linearly spaced values in between.
>> x=[0:10/127:10];
does the same.
Here the from:step:to construct is used. If the difference between to and
from is not an integer multiple of step, then to is an upper bound. If step
is omitted, a value of 1 is assumed.
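The upper-bound behaviour is easily demonstrated. In the following sketch, the difference between 1 and 0 is not an integer multiple of 0.3, so the last element falls short of to:

>> v=[0:0.3:1]

The result is the row vector 0, 0.3000, 0.6000, 0.9000; the value 1 is never attained.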
From the 10 × 10 matrix
>> x=rand(10,10);
of randomly chosen numbers we may extract the sub-matrix
>> y=x(1:2:10,1:2:10);
of random numbers with odd indices.
Constructs like
>> k=[1,4,9:2:17,31];
are allowed as well. Matlab’s syntax rules, if strictly applied, would require
>> k=[[1],[4],[9:2:17],[31]];
They are, however, sensibly relaxed. In many cases the comma operator for horizontal aggregation may also be omitted:
>> k=[1 4 9:2:17 31];
works as well. We shall try to avoid this kind of syntax relaxation, although you may encounter it in the help system.
3 Numbers
Matlab is rather strict with numbers. It adheres to the IEEE3 convention for representing real numbers and for algebraic operations with them. Obviously, real numbers must be approximated. The approximation, however, should be reliable and as good as possible.
There is a predefined number eps. It is the smallest positive number such that 1 and 1+eps can be distinguished. On my computer eps=2.2204e-016. Roughly speaking, the inherent accuracy is 16 digits. This is sufficient for physics, where not more than 6 digits are meaningful.
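This property of eps can be verified at the command window. The first comparison below should yield 0 (false), the second 1 (true), since 1+eps/2 rounds back to 1:

>> 1+eps==1
>> 1+eps/2==1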
Rounding errors are randomly distributed. With N operations, the error grows proportionally to √N. Hence, about 10^20 operations on real numbers are allowed before the statistical error exceeds the 6-digit limit. If your computer runs at 1.0 GHz, approximately 10^11 seconds are required to perform that many operations, which is more than 1000 years. Put otherwise: rounding errors, unless crucial, will not pose a problem.
Rounding errors become crucial if a result is the difference between two
large, but almost equal numbers. Try
>> x=1e-15;
>> y=((1+x)-1)/x
3Institute of Electrical and Electronics Engineers
The result is 1.1102, but should be 1.
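A standard remedy is to rewrite the expression so that the cancellation is performed analytically. For small x, the expression 1−cos(x) suffers from the same problem, while the mathematically equivalent form 2 sin²(x/2) does not (a sketch):

>> x=1e-8;
>> (1-cos(x))/x^2
>> 2*sin(x/2)^2/x^2

The first expression returns 0, the second the correct value 0.5000.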
Assume you want to calculate f(x) = sin(x)/x for x ∈ [0, 2π]. The following code does it:
>> x=linspace(0,2*pi,1024);
>> y=sin(x)./x;
Typing
>> y(1)
produces NaN, not a number. And this should be so. After all, we have
divided zero by zero, and the result is neither 1 nor infinite, it is not defined.
The IEEE standard of floating point representation, which is realized by
all Intel processors and supported by Matlab software, provides for a bit
pattern which is not a valid number. Any operation with NaN results in NaN.
NaNs tend to spread out. For example,
>> z=y/sum(y);
results in a vector filled with NaNs.
A NaN can be detected by the function isnan. isnan(y(1)) returns 1, which stands for true. isnan(y(2)) results in 0, or false. isnan(y) returns a vector whose first component is 1 while the remaining ones are 0.
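Logical indexing allows one to repair NaNs wholesale. The following sketch replaces every NaN component of y by 1:

>> y(isnan(y))=1;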
Another special number of the IEEE standard is Inf which stands for posi-
tive infinity. 1/0 results in Inf, -1/0 in -Inf, just as log(0). Inf/Inf or
Inf-Inf are ill defined, i. e. NaN. However, Inf+Inf gives Inf.
The pseudo-numbers Inf and NaN solve a long-standing problem. In the old days, you could either demand program abort when an ill-defined arithmetic operation was performed, or the result was arbitrary. With the IEEE standard and high-quality software, the program is not halted, but errors are recognized and documented.
Let us return to f(x) = sin(x)/x. By declaring f(0) = 1, this function can be made continuous everywhere.
>> y=sin(x)./x;
>> y(1)=1;
is a plausible and obvious solution.
However, it makes use of the fact that it is the first component which is exceptional. Much better is the following solution:
>> y=ones(size(x));
>> k=find(x~=0);
>> y(k)=sin(x(k))./x(k);
find returns a vector of indices for which x is not equal to zero.
A third possibility4 is old-style procedural programming:
1 for k=1:1024
2 if x(k)==0
3 y(k)=1;
4 else
5 y(k)=sin(x(k))/x(k);
6 end
7 end
This is not recommended: loops in Matlab are slow and should be avoided whenever possible.
A fourth possibility would be
>> x=linspace(eps,2*pi,1024);
Although it works, this is a dirty trick. The problem is avoided, not solved.
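Why avoid loops? The speed difference between the vectorized and the procedural version can be measured with the stopwatch functions tic and toc. This is a sketch; the actual timings depend on the machine, but the vectorized version should win by a large margin:

>> x=linspace(eps,2*pi,1e6);
>> tic; y=sin(x)./x; toc
>> tic; for k=1:length(x); y(k)=sin(x(k))/x(k); end; toc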
4 Functions
Functions map one or more real or complex variables to real or complex numbers. Functions are essential for physics: they describe relationships between measurable quantities.
Let us study an example. The spectral intensity of black body radiation is described by Planck's famous formula

   S(x) = (15/π^4) x^3/(e^x − 1) ,   (1)

where x is short for ħω/k_B T.
Equation (1) is good practice: computational physics has to deal with numbers, i. e. dimensionless quantities. The spectral density is a probability distribution, itself dimensionless. In fact,

   ∫_0^z dx S(x) = Pr{ħω < z k_B T} .   (2)
4Line numbers instead of the >> prompt indicate that we have executed a script file,
test.m in this case
Figure 1: The black body radiation spectral intensity S(x) where x = ħω/k_B T
In Matlab you describe a function in a separate m-file. The filename
without the .m extension is the name of the function. Formula (1) is realized as follows:
1 % this file is planck.m
2 % spectral intensity of black body radiation
3 function s=planck(x);
4 s=zeros(size(x));
5 k=find(x~=0);
6 s(k)=15/pi^4*x(k).^3./(exp(x(k))-1);
We may now say
>> x=linspace(0,10,1024);
>> plot(x,planck(x));
>> print -deps2 planck.eps
Fig. 1 shows the result.
The figure has been exported to an Encapsulated PostScript (level 2) file planck.eps. It was transformed into planck.pdf by the eps2pdf program and included into this LaTeX document by saying
\FIG{planck.pdf}
{The black body radiation spectral intensity $S(x)$
where $x=\hbar\omega/\kB T$.}
My \FIG macro is defined as follows:
1 \newcommand{\FIG}[2]{
2 \begin{figure}[!hbt]
3 \begin{center}
4 \begin{minipage}{0.85\textwidth}
5 \centering{\includegraphics[width=95mm]{#1}}
6 \caption{\label{#1}\small{#2}}
7 \end{minipage}
8 \end{center}
9 \end{figure}
10 }
You must have loaded the epsfig package beforehand.
We have explained how to define and how to visualize a function. What else
can you do?
Let us numerically check whether (1) is indeed a probability distribution. We approximate x = ∞ by x = 20, say. The following line performs the integration:
>> quad(@planck,0,20)
The output 0.99999729919447 (with format long) is convincing, since the quadrature method quad has a default tolerance of six digits, and 20 is not yet infinity. quadl is better, although slower. The following call
>> quadl(@planck,0,40)
results in 1.00000000003192.
The function, here planck, is referred to by its handle @planck.
>> quad(’planck’,0,20)
will work as well. Although there are subtle differences, we shall not discuss
them here.
quad or quadl are in-built functions which work on functions. You may
inquire by typing
>> help funfun
fminbnd is such a function function. It finds the position of the minimum within prescribed bounds. However, we want to know the position of the maximum of our planck function. Therefore we must define the negative of
the planck probability distribution. For this we do not require a new m-file,
because there is the inline statement:
>> np=inline(’-planck(x)’);
The string is parsed as we would read it.
>> xmax=fminbnd(np,0,20)
delivers 2.8214.
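fminbnd can also return the function value at the minimum. Since we minimized the negative, the maximum value of planck is recovered by changing the sign again (a sketch):

>> [xmax,smin]=fminbnd(np,0,20);
>> smax=-smin

smax should coincide with planck(xmax).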
We now may ask
>> elo=quad(@planck,0,xmax);
>> ehi=quad(@planck,xmax,20);
for finding the amount of energy below and above the spectral intensity
maximum. The two numbers should add up to 1. Do they?
5 Ordinary differential equations
We talk about a system which is in a certain state. If the system has
N degrees of freedom, the state is described by a vector y1, y2, . . . , yN of
variables. The state will change in the course of time. In many cases the
time development of the state of a system is adequately described by a
system of ordinary differential equations,
   ẏ_j = f_j(t, y_1, y_2, . . . , y_N ) .   (3)
Higher than first derivatives can always be removed. After all, the second
derivative is the first derivative of the first derivative, and so forth.
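As a minimal example of this reduction, the harmonic oscillator ẍ = −x becomes a first-order system by setting y_1 = x and y_2 = ẋ, so that ẏ_1 = y_2 and ẏ_2 = −y_1. The corresponding derivative field (the file name is of course our choice) reads:

1 % this file is oscillator.m
2 function d=oscillator(t,y);
3 d=[y(2);-y(1)];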
Let us investigate one of the oldest physical problems, the motion of planets
in the sun’s gravitational field.
Recall that the conservation of angular momentum requires the planet to move in a plane. The planet's location x_1(t), x_2(t) obeys the following differential equation:

   m ẍ_j(t) = − G m M_⊙ x_j /(x_1^2 + x_2^2)^{3/2} .   (4)

G is the universal gravitational constant, m the planet's and M_⊙ the sun's mass. m drops out. By measuring x in units of the astronomical unit5 a
5the mean distance between earth and sun, 1.496 × 10^11 m
and t in units of τ = √(a^3/G M_⊙), we arrive at6

   ẍ_j(t) = − x_j /(x_1^2 + x_2^2)^{3/2} .   (5)
Let us introduce (y_1, y_2, y_3, y_4) = (x_1, ẋ_1, x_2, ẋ_2), such that

   ẏ_1 = y_2   (6)
   ẏ_2 = −y_1/r^3   (7)
   ẏ_3 = y_4   (8)
   ẏ_4 = −y_3/r^3   (9)

results, where r = √(y_1^2 + y_3^2).
We describe this set of four ordinary differential equations by the following
derivative field:
1 % this file is kepler.m
2 function d=kepler(t,y);
3 r=sqrt(y(1).^2+y(3).^2);
4 d=[y(2);-y(1)./r^3;y(4);-y(3)./r^3];
At the command window we say
>> [t,y]=ode45(@kepler,[0:0.1:50],[1;0;0;0.8]);
A Runge-Kutta integration procedure ode45 is invoked. It needs a derivative
field, a time span, and a start vector. And this is the result of saying
>> plot(y(:,1),y(:,3))
>> print -deps2 kepler1.eps
Although the trajectories should be ellipses, they shrink more and more, as Fig. 2 shows. The reason is not physics, but numerics. We substantiate this remark by plotting the total energy in Fig. 3:
>> Ekin=0.5*(y(:,2).*y(:,2)+y(:,4).*y(:,4));
>> Epot=-1./sqrt(y(:,1).*y(:,1)+y(:,3).*y(:,3));
>> plot(t,Ekin+Epot);
We can do better by setting a lower relative tolerance (the default being 0.001):
6 2πτ is one year
Figure 2: Planetary motion without explicit accuracy control
Figure 3: Total energy vs. time without accuracy control
>> tol=odeset(’RelTol’,1e-8);
>> [t,y]=ode45(@kepler,[0:0.1:50],[1;0;0;0.8],tol);
Figure 4: Planetary motion with accuracy control
Fig. 4 shows what we expect: closed ellipses.
There are many more options for the ordinary differential equation solver,
and there are more such solvers. You should ask for details at the help
desktop.
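For instance, a stiff system would call for ode15s instead of ode45; the calling convention is the same. A sketch:

>> tol=odeset('RelTol',1e-8);
>> [t,y]=ode15s(@kepler,[0:0.1:50],[1;0;0;0.8],tol);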
6 Fast Fourier transform
There are three kinds of Fourier transforms: finite discrete, infinite discrete,
and continuous. The first is defined by
   G_j = Σ_{k=0}^{N−1} e^{−2πijk/N} g_k = Σ_{k=0}^{N−1} Ω_{jk} g_k   (10)

for j = 0, 1, . . . , N − 1. The matrix Ω is given by

   Ω_{jk} = ω^{−jk} where ω = e^{2πi/N} .   (11)
ω is an Nth root of unity.
Because Ω/√N is unitary,

   Σ_{k=0}^{N−1} Ω_{jk} (Ω†)_{kl} = N δ_{jl} ,   (12)

we obtain for the inverse transform the following expression:

   g_k = (1/N) Σ_{j=0}^{N−1} e^{+2πijk/N} G_j .   (13)
Only the finite discrete Fourier transform is of interest in computational
physics since a continuum has to be approximated by an interval, and an
interval by a finite set of representative points.
To be specific, we think of a time interval, t_k = kτ for k = 0, 1, . . . , N − 1. The f_j = j/(Nτ) are frequencies, and we may rewrite (10) and (13) as

   G_j = G(f_j) = Σ_{k=0}^{N−1} e^{−2πi f_j t_k} g_k   (14)

and

   g_k = g(t_k) = (1/N) Σ_{j=0}^{N−1} e^{2πi f_j t_k} G_j .   (15)
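One may check Matlab's fft against the definition (10) for a small N. In the following sketch the matrix Ω is built explicitly; the difference should be of the order of eps:

>> N=4;
>> g=[1;2;3;4];
>> Omega=exp(-2i*pi*(0:N-1)'*(0:N-1)/N);
>> max(abs(Omega*g-fft(g)))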
Let us simulate a very noisy cosine signal. We set
1 % this file is noisy_cos.m
2 fbar=50;
3 tau=0.001;
4 N=1024;
5 t=tau*[0:N-1];
6 R=2.0;
7 g=cos(2*pi*fbar*t)+R*randn(size(t));
8 plot(t,g,’.’);
9 axis([min(t),max(t),-4,4]);
10 print -deps2 ncos1.eps;
Would you recognize the cosine signal in Fig. 5?
We continue by applying the fast Fourier transform fft:
Figure 5: Noisy cosine (50 Hz) vs. time sampled at millisecond steps
11 G=fft(g);
12 f=[0:N/2-1]/N/tau;
13 S=abs(G(1:N/2)).^2;
14 plot(f,S);
15 print -deps2 ncos2.eps;
In Fig. 6 we have plotted the spectral power S = |G(f)|^2 for positive frequencies7. The prominent peak at f = 50 Hz is evident. The remaining spectral power is more or less constant, which is indicative of white noise.
How can one extract the signal so convincingly from that much noise? Not by looking at the sampled data; they appear to be random. In a Fourier analysis, however, the harmonic contributions are summed with their proper phases, so that they add up coherently.
If you know that the signal is spoilt by white noise, you may remove it to a
large extent. You might say
>> H=G.*(abs(G)>150);
>> h=ifft(H);
ifft denotes the inverse fast Fourier transform as described by (13).
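Because the filtered spectrum retains its conjugate symmetry only up to rounding errors, h may carry a tiny imaginary part; it is customary to keep the real part only (a sketch):

>> h=real(ifft(H));
>> plot(t,h);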
You can do a lot more with the fast Fourier transform. Here are some
examples:
7The spectral power is an even function of f
Figure 6: Spectral power of noisy cosine vs. frequency (Hz)
• Differentiation: Fourier transform the signal, multiply by 2πif, and transform back. This achieves much better precision than difference quotients.
• Deconvolution: Often the output b(t) = ∫ ds r(s) a(t − s) is the convolution of a signal a and a transmission function r. If the transmission function is known then, from B(f) = R(f)A(f), the signal can be reconstructed: just divide the Fourier transformed output by the Fourier transformed transmission function, and transform back.
• Differential equations with constant coefficients: differentiation operators become multiplication operators after Fourier transformation, and differential equations become algebraic equations.
• Data filtering and smoothing: remove the high frequency components.
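The differentiation recipe from the list above can be sketched for a periodic signal sampled on [0, 2π), where the factor 2πif reduces to i times the integer wavenumber:

>> N=64;
>> x=2*pi*(0:N-1)/N;
>> g=sin(x);
>> k=[0:N/2-1,-N/2:-1];
>> dg=real(ifft(1i*k.*fft(g)));
>> max(abs(dg-cos(x)))

The error should be of the order of machine precision, far better than any difference quotient.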
The fast Fourier transform is an algorithm which exploits the fact that one transform of 2N data points can be reduced to two transforms of N data points. This reduces the complexity from N^2 to N log N, which makes all the difference.
7 Fitting data
Mea