O-Notation
Some functions
The idea of big-oh notation
The order of polynomial functions
Applications
Power functions have real constant exponents, y = x^{n}.
Exponential functions have the variable in the exponent, y = b^{x}.
Some common bases for exponential functions are b = 2, e, 10.
Example: Let f(x) = 2^{x}; then f(3) = 2^{3} = 8.
Logarithm functions take positive real numbers as input and are of the form y = log_{b} x. So log_{2} 8 = 3.
Polynomial functions have nonnegative integer exponents and are of the form
y = a_{n}x^{n} + a_{n-1}x^{n-1} + ... + a_{1}x + a_{0}. Here if a_{n} ≠ 0, the degree of the polynomial is n and
the leading coefficient is a_{n}.
Which of these functions grows the fastest? The slowest? Compare the graphs of these functions.
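One quick way to compare these growth rates is to tabulate sample values; the following small Python sketch (an illustration, not part of the text) prints each family at a few inputs:

```python
import math

# Compare the growth of the function families above at sample inputs.
# Even at modest x, the exponential 2^x dwarfs the power functions,
# while log2(x) barely moves.
for x in [2, 8, 32, 128]:
    print(f"x={x:>4}: log2(x)={math.log2(x):>5.1f}  "
          f"x={x:>4}  x^2={x**2:>6}  2^x={2**x}")
```

The printout makes the ordering log_{2} x < x < x^{2} < 2^{x} (for large x) visible at a glance.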
O-notation (read "big-oh notation") provides a special way to compare the relative sizes of functions that is particularly useful in the analysis of computer algorithms. For most programs there is a particular constraining resource, generally memory or time. For that resource there is generally one quantity that stands out as limiting, such as the number of multiplications, additions, or memory reads. The O-notation makes use of approximations that highlight only the large-scale differences while ignoring differences of a constant factor and differences that occur only for small inputs.
The idea of the O-notation is this. Suppose that f and g are both real-valued functions of a real variable x. If, for large values of x, the graph of f lies closer to the horizontal axis than the graph of some positive multiple of g, then f is said to be of order g. This is written "f(x) is O(g(x))". Symbolically:
f(x) ≤ M×g(x) whenever x > x_{0}, for some positive real number M and some real number x_{0}.
Orders of Power Functions
By the transitivity of "<", if 1 < x, then 1 < x < x^{2} < x^{3}. Thus for any rational numbers r and s, if r < s, then x^{r} is O(x^{s}).
Example 9.2.2 A Polynomial Inequality
Show that for any real number x: if x > 1, then 3x^{3} + 2x + 7 ≤ 12x^{3}.
Solution: Suppose x is a real number and x > 1. Then
x < x^{3} and 1 < x^{3}.
Multiplying the left-hand inequality by 2 and the right-hand inequality by 7 gives
2x < 2x^{3} and 7 < 7x^{3}.
Now add 3x^{3} ≤ 3x^{3}, 2x < 2x^{3}, and 7 < 7x^{3} to obtain
3x^{3} + 2x + 7 ≤ 3x^{3} + 2x^{3} + 7x^{3} = 12x^{3}.
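The inequality of Example 9.2.2 can be spot-checked numerically; this short Python sketch asserts it at several values of x > 1:

```python
# Spot-check Example 9.2.2: for x > 1, 3x^3 + 2x + 7 <= 12x^3.
# These sample points do not prove the inequality (the algebraic
# argument above does), but they confirm it numerically.
for x in [1.01, 2, 10, 1000]:
    lhs = 3 * x**3 + 2 * x + 7
    rhs = 12 * x**3
    assert lhs <= rhs, (x, lhs, rhs)
print("inequality holds at all sample points")
```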
Theorem 9.2.1 On Polynomial Orders
If a_{n}, a_{n-1}, ..., a_{1}, a_{0} are real numbers and a_{n} ≠ 0, then
a_{n}x^{n} + a_{n-1}x^{n-1} + ... + a_{1}x + a_{0} is O(x^{m}), for all m ≥ n.
Applying this theorem to the function 3x^{3} + 2x + 7, we see that this function is O(x^{3}), as well as O(x^{4}), O(x^{5}), etc. Frequently we want the best big-oh approximation. For polynomial functions we generally take this to be the least integral power x^{m} such that f(x) is O(x^{m}). So here the best big-oh approximation is O(x^{3}).
O-Notation Application: Efficiency of Algorithms
As soon as an Analytical Engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise: by what course of calculation can these results be arrived at by the machine in the shortest time?
(Charles Babbage, 1864)
Two aspects of algorithm efficiency are important: the amount of time required to execute the algorithm and the amount of memory space needed when it is run. We will discuss time efficiency; similar techniques exist for analyzing space efficiency. What factors affect time efficiency? Certainly the size of the input data, as well as the type of data and the particular operations to be performed on it, all affect the time efficiency of a given algorithm. Roughly speaking, the analysis of an algorithm for time efficiency begins by counting the number of elementary operations that must be performed when the algorithm is executed with an input of size n. Just what is classified as an elementary operation depends on the nature of the problem the algorithms being compared are trying to solve. For evaluating a polynomial, it may be the number of additions and multiplications; for searching a database, the important measure is the number of comparisons made.
Consider two algorithms A and B, designed to do a certain job. Suppose that for an input of size n the number of elementary operations needed to perform algorithm A is no greater than 20n and the number of elementary operations needed to perform algorithm B is no greater than 4n^{2} (for large n). Certainly algorithm A is O(n) and algorithm B is O(n^{2}). We say algorithm A has order n and algorithm B has order n^{2}.
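A quick tabulation shows where the bounds 20n and 4n^{2} cross; this Python sketch (an illustration of the comparison, not of any particular algorithms) prints both counts:

```python
# Compare the operation bounds for algorithm A (20n) and algorithm B (4n^2).
# B's bound is smaller only for very small inputs; the bounds tie at n = 5,
# and for every n > 5 the linear algorithm A requires fewer operations.
for n in [1, 5, 6, 100, 10_000]:
    a, b = 20 * n, 4 * n**2
    winner = "A" if a < b else ("tie" if a == b else "B")
    print(f"n={n:>6}: 20n={a:>12}  4n^2={b:>12}  fewer ops: {winner}")
```

This is why, for large inputs, the order of an algorithm matters far more than its constant factor.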
Table 9.3.1 Time Comparisons for Some Algorithm Orders (assuming one operation per microsecond)

f(n), name        | n = 10        | n = 100          | n = 1,000        | n = 10^4           | n = 10^5
------------------+---------------+------------------+------------------+--------------------+-------------------
lg n, log base 2  | 0.000003 sec. | 0.000007 sec.    | 0.00001 sec.     | 0.000013 sec.      | 0.000017 sec.
n, linear         | 0.00001 sec.  | 0.0001 sec.      | 0.001 sec.       | 0.01 sec.          | 0.1 sec.
n×lg n            | 0.000033 sec. | 0.0007 sec.      | 0.01 sec.        | 0.13 sec.          | 1.7 sec.
n^2, quadratic    | 0.0001 sec.   | 0.01 sec.        | 1 sec.           | 1.67 min.          | 2.78 hrs.
n^3, cubic        | 0.001 sec.    | 1 sec.           | 16.7 min.        | 11.6 days          | 31.7 yr.
2^n, exponential  | 0.001 sec.    | 4.0 x 10^16 yr.  | 3.4 x 10^287 yr. | 6.3 x 10^2,996 yr. | 3.2 x 10^30,089 yr.
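The polynomial and logarithmic rows of Table 9.3.1 can be regenerated directly; this Python sketch converts operation counts to seconds at one operation per microsecond (the 2^{n} row is omitted because its values overflow ordinary floating point):

```python
import math

# Recompute entries of Table 9.3.1: time = f(n) operations, at
# one operation per microsecond.
MICRO = 1e-6  # seconds per elementary operation
functions = {
    "lg n":   lambda n: math.log2(n),
    "n":      lambda n: n,
    "n lg n": lambda n: n * math.log2(n),
    "n^2":    lambda n: n ** 2,
    "n^3":    lambda n: n ** 3,
}
sizes = (10, 100, 1_000, 10**4, 10**5)
for name, f in functions.items():
    times = [f(n) * MICRO for n in sizes]
    print(f"{name:>6}: " + "  ".join(f"{t:.3g} s" for t in times))
```

For example, the n^{2} entry at n = 1,000 is 10^{6} microseconds, i.e. 1 second, matching the table.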
Example 9.3.1 Computing an Order of an Algorithm Segment
Assume n is a positive integer and consider the following algorithm segment:
    p := 0, x := 2
    for i := 2 to n
        p := (p + i)×x
    next i
a. Compute the actual number of additions and multiplications that must be performed when this algorithm segment is executed.
Solution: The loop body executes n - 1 times (once for each i = 2 to n), and each execution performs one addition (p + i) and one multiplication (by x). So 2(n - 1) operations are performed in all, and the segment is O(n).
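The count for this segment can be checked empirically; this Python sketch runs the segment while tallying each addition and multiplication (assuming, as in the text, that these are the elementary operations):

```python
def count_ops(n):
    """Run the segment of Example 9.3.1 and count + and * operations."""
    p, x = 0, 2
    adds = mults = 0
    for i in range(2, n + 1):   # i := 2 to n
        p = (p + i) * x
        adds += 1               # the addition p + i
        mults += 1              # the multiplication by x
    return adds + mults

# The loop body runs n - 1 times with 2 operations each: 2(n - 1) total.
for n in [2, 5, 10, 100]:
    assert count_ops(n) == 2 * (n - 1)
print("operation count matches 2(n - 1)")
```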
Example 9.3.2 Computing the Order of an Algorithm with a Nested Loop
Assume n is a positive integer and consider the following algorithm segment:
    s := 0
    for i := 1 to n
        for j := 1 to i
            s := s + j×(i - j + 1)
        next j
    next i
a. Compute the actual number of additions, subtractions, and multiplications that must be performed when this algorithm segment is executed.
Solution: The inner statement executes 1 + 2 + ... + n = n(n + 1)/2 times, and each execution performs two additions (s + ... and + 1), one subtraction (i - j), and one multiplication (by j). So 4·n(n + 1)/2 = 2n^{2} + 2n operations are performed in all, and the segment is O(n^{2}).
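As with the previous example, the nested-loop count can be verified by running the segment with a tally (counting two additions, one subtraction, and one multiplication per inner iteration):

```python
def count_nested_ops(n):
    """Run the segment of Example 9.3.2 and count +, -, and * operations."""
    s = 0
    ops = 0
    for i in range(1, n + 1):          # i := 1 to n
        for j in range(1, i + 1):      # j := 1 to i
            s = s + j * (i - j + 1)
            ops += 4                   # 2 additions, 1 subtraction, 1 mult.
    return ops

# The inner statement runs n(n+1)/2 times with 4 operations each,
# giving 2n(n+1) = 2n^2 + 2n in total: quadratic growth.
for n in [1, 5, 20]:
    assert count_nested_ops(n) == 2 * n * (n + 1)
print("operation count matches 2n(n+1)")
```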
Property 9.4.2 (p 508)
If k is an integer and x is a real number with 2^{k} ≤ x < 2^{k+1}, then ⌊log_{2} x⌋ = k.
Property 9.4.3 (p 509)
For any odd integer n > 1, ⌊log_{2} (n - 1)⌋ = ⌊log_{2} n⌋.
Given a positive integer n, how many binary digits are needed to represent n? Recall from Section 4.4 that any positive integer n can be written uniquely as
n = 2^{k} + c_{k-1}×2^{k-1} + ... + c_{2}×2^{2} + c_{1}×2 + c_{0},
where k is a nonnegative integer and each of c_{0}, c_{1}, c_{2}, ..., c_{k-1} is either 0 or 1. Then the binary representation of n is 1c_{k-1}c_{k-2}...c_{2}c_{1}c_{0}.
Now n = 2^{k} + c_{k-1}×2^{k-1} + ... + c_{2}×2^{2} + c_{1}×2 + c_{0} is at most the sum of the geometric sequence 2^{k} + 2^{k-1} + ... + 2 + 1, which equals (2^{k+1} - 1)/(2 - 1) = 2^{k+1} - 1.
By transitivity of order, n ≤ 2^{k+1} - 1 < 2^{k+1}. Since also 2^{k} ≤ n, we have 2^{k} ≤ n < 2^{k+1}. By Property 9.4.2 (p 508), k = ⌊log_{2} n⌋.
The number of binary digits needed to represent n is then ⌊log_{2} n⌋ + 1.
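This digit-count formula is easy to confirm in Python, whose built-in `bin` and `int.bit_length` both expose the binary length directly:

```python
import math

# Verify that a positive integer n needs floor(log2 n) + 1 binary digits.
for n in [1, 2, 7, 8, 1000, 2**20 - 1, 2**20]:
    digits = len(bin(n)) - 2                       # bin(n) starts with '0b'
    assert digits == math.floor(math.log2(n)) + 1  # the formula derived above
    assert digits == n.bit_length()                # Python's built-in agrees
print("floor(log2 n) + 1 matches the binary length for all samples")
```

For instance, n = 8 = 1000 in binary needs ⌊log_{2} 8⌋ + 1 = 3 + 1 = 4 digits.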
Efficiency of the Binary Search Algorithm
The number of probes required to search a sorted array of length k is at most ⌊log_{2} k⌋ + 1. The efficiency of the algorithm is therefore O(log_{2} n). See Algorithm 9.5.1 (p 521).
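The probe bound can be checked by instrumenting binary search with a counter. The sketch below is a standard iterative binary search (an illustration of the idea, not Algorithm 9.5.1 verbatim), where each loop iteration counts as one probe:

```python
import math

def binary_search(arr, target):
    """Search a sorted list; return (index or -1, number of probes made)."""
    lo, hi, probes = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1                 # one comparison against arr[mid]
        if arr[mid] == target:
            return mid, probes
        elif arr[mid] < target:
            lo = mid + 1            # discard the lower half
        else:
            hi = mid - 1            # discard the upper half
    return -1, probes

# The probe count never exceeds floor(log2 k) + 1 for an array of length k,
# whether or not the target is present.
for k in [1, 2, 7, 100, 1000]:
    arr = list(range(k))
    bound = math.floor(math.log2(k)) + 1
    for target in [-1, 0, k // 2, k - 1, k]:
        _, probes = binary_search(arr, target)
        assert probes <= bound, (k, target, probes, bound)
print("probe counts stay within floor(log2 k) + 1")
```

Because each probe halves the remaining search range, roughly log_{2} k halvings exhaust an array of length k, which is the source of the O(log_{2} n) bound.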