Tested 1/6/2022.
This is a GNU Octave version of the Fundamentals of Numerical Computation Jupyter notebooks found at <https://tobydriscoll.net/project/fnc/>.
% Recall the grade-school approximation to the number 𝜋.
p = 22/7
p = 3.1429
%%
format long
p
p = 3.142857142857143
Note that not all the digits displayed for |p| are the same as for 𝜋. As an approximation, its absolute and relative accuracy are
abs(p-pi) % absolute accuracy
abs(p-pi)/pi % relative accuracy
-log10(abs(p-pi)/pi) % accurate digits
ans = 1.264489267349678e-03
ans = 4.024994347707008e-04
ans = 3.395234725174717
There is no double precision number between 1 and $1+\epsilon_\mbox{mach}$. Thus the following difference is zero despite its appearance.
e = eps/2
(1.0 + e) - 1.0
e = 1.110223024625157e-16
ans = 0
However, the spacing between doubles in $[1/2,1)$ is $\frac{\epsilon_\mbox{mach}}2$, so $-1+\frac{\epsilon_\mbox{mach}}2$ and its negative are both represented exactly:
1.0 + (e - 1.0)
ans = 1.110223024625157e-16
This is now the "correct" result. But we have found a rather shocking breakdown of the associative law of addition!
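This breakdown is a property of IEEE-754 double precision itself, not of Octave. A quick cross-check in Python (a sketch, separate from the Octave session above; `sys.float_info.epsilon` is the Python name for $\epsilon_\mbox{mach}$):

```python
import sys

e = sys.float_info.epsilon / 2   # eps_mach / 2, about 1.11e-16

# 1 + e rounds back to 1 (tie broken to even), so the difference vanishes:
print((1.0 + e) - 1.0)   # 0.0

# but e - 1.0 is exactly representable, so regrouping recovers e exactly:
print(1.0 + (e - 1.0))   # 1.1102230246251565e-16
```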
Here we show how to use |hornereval| to evaluate a polynomial. We first define a vector of the coefficients of $p(x)=(x-1)^3=x^3-3x^2+3x-1$, in descending degree order. [In MATLAB, hornereval.m must be in the current directory; the function is renamed from |horner| so as not to shadow MATLAB's built-in version. In Octave, the function can be defined in the script as shown below.]
c = [1,-3,3,-1]
c = 1 -3 3 -1
function y = hornereval(c,x)
% HORNEREVAL   Evaluate a polynomial using Horner's rule.
% Input:
%   c   Coefficients of polynomial, in descending order (vector)
%   x   Evaluation point (scalar)
% Output:
%   y   Value of the polynomial at x (scalar)
    n = length(c);
    y = c(1);
    for k = 2:n
        y = x*y + c(k);
    end
end
hornereval(c,1.6)
ans = 2.160000000000004e-01
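Horner's rule translates directly to other languages. Here is a Python sketch of the same scheme (the name `hornereval` mirrors the Octave function above):

```python
def hornereval(c, x):
    """Evaluate a polynomial with descending-order coefficients c at x
    using Horner's rule."""
    y = c[0]
    for ck in c[1:]:
        y = x * y + ck
    return y

# p(x) = (x-1)^3 = x^3 - 3x^2 + 3x - 1
c = [1, -3, 3, -1]
print(hornereval(c, 1.6))   # close to 0.6^3 = 0.216, up to rounding
```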
The above is the value of $p(1.6)$, up to a rounding error.
% Example 1.3.3
Our first step is to construct a polynomial with six known roots.
r = [-2.0,-1,1,1,3,6]
p = poly(r)
r = -2 -1 1 1 3 6
p = 1 -8 6 44 -43 -36 36
Now we use a standard numerical method for finding those roots, pretending that we don't know them already. We also sort them from lowest to highest values.
r_computed = sort(roots(p))
r_computed = -2.000000000000001e+00 -9.999999999999990e-01 9.999999902778504e-01 1.000000009722150e+00 3.000000000000000e+00 5.999999999999992e+00
Here are the relative errors in each of the computed roots. [Since r_computed is a column vector, we use its transpose.]
abs(r - r_computed')./r
ans =
Columns 1 through 3:
  -6.661338147750939e-16  -9.992007221626409e-16   9.722149640900568e-09
Columns 4 through 6:
   9.722149529878266e-09   1.480297366166875e-16   1.332267629550188e-15
It seems that the forward error is acceptably close to machine epsilon for double precision in all cases except the double root at $x=1$. This is not a surprise, though, given the poor conditioning at such roots.
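The same forward-error pattern appears in any environment that finds roots via companion-matrix eigenvalues, as NumPy's `roots` does. A Python cross-check sketch (assuming NumPy is available):

```python
import numpy as np

r = np.array([-2.0, -1, 1, 1, 3, 6])
p = np.poly(r)                         # coefficients, descending order
r_computed = np.sort_complex(np.roots(p))

# elementwise forward error; the double root at x = 1 (entries 3 and 4)
# is far less accurate than the simple roots
err = np.abs(r_computed - r)
print(err)
```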
Let's consider the backward error. The data in the rootfinding problem are the polynomial coefficients. We can apply poly to find the coefficients of the polynomial (that is, the data) whose roots were actually computed by the numerical algorithm.
p_computed = poly(r_computed)
p_computed =
Columns 1 through 3:
   1.000000000000000e+00  -7.999999999999991e+00   5.999999999999973e+00
Columns 4 through 6:
   4.399999999999998e+01  -4.299999999999997e+01  -3.599999999999991e+01
Column 7:
   3.599999999999993e+01
We find that in a relative sense, these coefficients are very close to those of the original, exact polynomial:
abs(p-p_computed)/p % note: / is matrix right division here, giving one scalar summary
ans = -1.091370827541370e-16
In summary, even though there are some computed roots relatively far from their correct values, they are nevertheless the roots of a polynomial that is very close to the original.
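The backward-error check also ports directly: rebuild the coefficients from the computed roots and compare them to the originals. A NumPy sketch (this version uses an elementwise relative error rather than the scalar division above):

```python
import numpy as np

r = np.array([-2.0, -1, 1, 1, 3, 6])
p = np.poly(r)
r_computed = np.roots(p)

# coefficients of the polynomial whose roots were actually computed
p_computed = np.real(np.poly(r_computed))

rel_err = np.abs(p - p_computed) / np.abs(p)
print(rel_err.max())   # near machine epsilon
```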
Next, consider the quadratic equation $x^2-(10^6+10^{-6})x+1=0$, whose roots are exactly $10^6$ and $10^{-6}$. We compute the larger root by the quadratic formula.
a = 1.0;
b = -(1e6+1e-6);
c = 1.0;
x1 = (-b + sqrt(b*b-4*a*c)) / (2*a)
x1 = 1000000
So far, so good. But:
x2 = (-b - sqrt(b*b-4*a*c)) / (2*a)
x2 = 1.000007614493370e-06
The first value is correct to all stored digits, but the second has fewer than six accurate digits:
-log10(abs(1e-6-x2)/1e-6 )
ans = 5.118358987126217
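The cancellation is intrinsic to the formula, not to Octave; the same loss of digits occurs in a Python sketch of the computation:

```python
import math

a, c = 1.0, 1.0
b = -(1e6 + 1e-6)          # exact roots are 1e6 and 1e-6

disc = math.sqrt(b*b - 4*a*c)
x1 = (-b + disc) / (2*a)   # large root: accurate
x2 = (-b - disc) / (2*a)   # small root: subtractive cancellation

print(abs(x2 - 1e-6) / 1e-6)   # relative error far above machine epsilon
```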
We can compute the small root accurately with an alternative algorithm. The data are the same as before:
a = 1.0
b = -(1e6+1e-6)
c = 1.0
a = 1
b = -1000000.000001000
c = 1
First, we find the "good" root using the quadratic formula.
x1 = (-b + sqrt(b*b-4*a*c)) / (2*a)
x1 = 1000000
Then we use the alternative formula for computing the other root.
x2 = c/(a*x1)
x2 - 1e-6
x2 = 1.000000000000000e-06
ans = 0
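The rescue carries over as well: compute the well-conditioned root with the formula, then obtain the other from the product of the roots, $x_1 x_2 = c/a$. A Python sketch:

```python
import math

a, c = 1.0, 1.0
b = -(1e6 + 1e-6)

x1 = (-b + math.sqrt(b*b - 4*a*c)) / (2*a)   # well-conditioned root
x2 = c / (a*x1)                              # from x1*x2 = c/a: no cancellation

print(abs(x2 - 1e-6) / 1e-6)   # at machine-epsilon level
```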