O-Notation
Some functions
The idea of big-oh notation
The order of polynomial functions
Applications

Power functions have real constant exponents: y = x^n.
Exponential functions have the variable in the exponent: y = b^x. Some common bases for exponential functions are b = 2, e, 10.
Example: Let f(x) = 2^x; then f(3) = 2^3 = 8.
Logarithm functions take positive real numbers as input and are of the form y = log_b x, the exponent to which b must be raised to obtain x. So log_2 8 = 3.
Polynomial functions have nonnegative integer exponents and are of the form
y = a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0. If a_n ≠ 0, the degree of the polynomial is n and the leading coefficient is a_n.
Which of these functions grows the fastest? Which grows the slowest? Compare the graphs of these functions; a small numeric comparison is sketched below.
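
To make the comparison concrete, here is a minimal Java sketch (the class name GrowthRates is mine, not one of the course files) that tabulates log_2 x, x, x^2, and 2^x for a few doubling inputs. The exponential column overtakes every other column almost immediately.

public class GrowthRates {
    public static void main(String[] args) {
        // print log2 x, x, x^2, and 2^x side by side for doubling values of x
        System.out.printf("%6s %10s %8s %10s %22s%n", "x", "log2 x", "x", "x^2", "2^x");
        for (int x = 2; x <= 64; x *= 2) {
            System.out.printf("%6d %10.2f %8d %10d %22.0f%n",
                    x, Math.log(x) / Math.log(2), x, (long) x * x, Math.pow(2, x));
        }
    }
}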

O-notation (read "big-oh notation") provides a convenient way to compare the relative sizes of functions and is particularly useful in the analysis of computer algorithms. For most programs there is one constraining resource, generally time or memory, and for that resource one kind of operation usually dominates, such as the number of multiplications, additions, or memory reads. O-notation approximates such operation counts in a way that highlights only the large-scale differences, ignoring constant factors and differences that occur only for small inputs.

The idea of the O-notation is this. Suppose that f and g are both real-valued functions of a real variable x. If, for large values of x, the graph of f lies closer to the horizontal axis than the graph of some positive multiple of g, then f is said to be of order g. This is written "f(x) is O(g(x))". Symbolically:

|f(x)| ≤ M·|g(x)| whenever x > x_0, for some real numbers M and x_0.

Orders of Power Functions
By the transitivity of "<", if 1 < x, then 1 < x < x^2 < x^3. Thus for any rational numbers r and s,

if r < s, then x^r is O(x^s).

Example 9.2.2 A Polynomial Inequality
Show that for any real number x: if x > 1, then 3x^3 + 2x + 7 ≤ 12x^3.
Solution: Suppose x is a real number and x > 1. Then

x < x^3 and 1 < x^3.

Multiplying the left-hand inequality by 2 and the right-hand inequality by 7 gives

2x < 2x^3 and 7 < 7x^3.

Now add 3x^3 ≤ 3x^3, 2x < 2x^3, and 7 < 7x^3 to obtain

3x^3 + 2x + 7 ≤ 3x^3 + 2x^3 + 7x^3 = 12x^3.

Theorem 9.2.1 On Polynomial Orders
If a_n, a_(n-1), ..., a_1, a_0 are real numbers and a_n ≠ 0, then

a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 is O(x^m), for all m ≥ n.

Applying this theorem to the function 3x^3 + 2x + 7, we see that this function is O(x^3), as well as O(x^4), O(x^5), and so on. Frequently we want the best big-oh approximation. For a polynomial function f(x) we generally take this to be the least integral power x^p such that f(x) is O(x^p). So here the best big-oh approximation is O(x^3). (A numeric spot-check of the 12x^3 bound from Example 9.2.2 appears below.)
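
As a sanity check, the following sketch (class name BoundCheck is mine) samples the inequality from Example 9.2.2 at a few points with x > 1. It is a spot-check of the algebraic proof above, not a substitute for it.

public class BoundCheck {
    public static void main(String[] args) {
        // sample the inequality 3x^3 + 2x + 7 <= 12x^3 at points x > 1
        for (double x = 1.5; x <= 100.0; x += 0.5) {
            double f = 3 * x * x * x + 2 * x + 7;
            double bound = 12 * x * x * x;
            if (f > bound) throw new AssertionError("bound fails at x = " + x);
        }
        System.out.println("3x^3 + 2x + 7 <= 12x^3 held at every sampled point in (1, 100].");
    }
}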

O-Notation Application: Efficiency of Algorithms

As soon as an Analytical Engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise: by what course of calculation can these results be arrived at by the machine in the shortest time?

(Charles Babbage, 1864)

Two aspects of algorithm efficiency are important: the amount of time required to execute the algorithm and the amount of memory space needed when it runs. We will discuss time efficiency; similar techniques exist for analyzing space efficiency. What factors affect time efficiency? The size of the input data, the type of data, and the particular operations to be performed on the data all affect the time efficiency of a given algorithm. Roughly speaking, the analysis of an algorithm for time efficiency begins by counting the number of elementary operations that must be performed when the algorithm is executed with an input of size n. Just what counts as an elementary operation depends on the problem the algorithms being compared are trying to solve. For evaluating a polynomial, it may be the number of additions and multiplications; for searching a database, the important count is the number of comparisons made.

Consider two algorithms A and B, designed to do the same job. Suppose that for an input of size n the number of elementary operations needed to perform algorithm A is no greater than 20n, while the number needed to perform algorithm B is no greater than 4n^2 (for large n). Then algorithm A is O(n) and algorithm B is O(n^2); we say algorithm A has order n and algorithm B has order n^2. Since 4n^2 > 20n whenever n > 5, algorithm A is preferable for all larger inputs, as the sketch below illustrates.
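
The sketch below (class name Crossover is mine) prints both operation bounds for small n; the bounds meet at n = 5, and beyond that the quadratic bound pulls away quickly.

public class Crossover {
    public static void main(String[] args) {
        // compare the operation bounds for algorithm A (20n) and algorithm B (4n^2)
        for (int n = 1; n <= 10; n++) {
            long costA = 20L * n;
            long costB = 4L * n * n;
            System.out.printf("n = %2d: A <= %3d ops, B <= %3d ops%n", n, costA, costB);
        }
        // 4n^2 = 20n exactly when n = 5; for every n > 5 algorithm A needs fewer operations
    }
}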

Table 9.3.1 Time Comparisons for Some Algorithm Orders (assuming one operation per microsecond)

f(n), name            n = 10           n = 100           n = 1,000           n = 10^4             n = 10^5
lg n (log base 2)     0.000003 sec.    0.000007 sec.     0.00001 sec.        0.000013 sec.        0.000017 sec.
n (linear)            0.00001 sec.     0.0001 sec.       0.001 sec.          0.01 sec.            0.1 sec.
n lg n                0.000033 sec.    0.0007 sec.       0.01 sec.           0.13 sec.            1.7 sec.
n^2 (quadratic)       0.0001 sec.      0.01 sec.         1 sec.              1.67 min.            2.78 hrs.
n^3 (cubic)           0.001 sec.       1 sec.            16.7 min.           11.6 days            31.7 yr.
2^n (exponential)     0.001 sec.       4.0 x 10^16 yr.   3.4 x 10^287 yr.    6.3 x 10^2,996 yr.   3.2 x 10^30,089 yr.

Example 9.3.1 Computing an Order of an Algorithm Segment
Assume n is a positive integer and consider the following algorithm segment:

p := 0, x := 2
for i := 2 to n
    p := (p + i) * x
next i

a. Compute the actual number of additions and multiplications that must be performed when this algorithm segment is executed.

b. Find an order for this algorithm segment from among the set of power functions.

Solution:

a. There is one addition and one multiplication for each iteration of the loop, so there are twice as many additions and multiplications as there are iterations of the loop. Now the number of iterations of a for-next loop equals the top index of the loop minus the bottom index plus 1; that is, n - 2 + 1 = n - 1. Hence there are 2(n - 1) = 2n - 2 additions and multiplications.

b. By the theorem on polynomial orders, 2n - 2 is O(n), so this segment has order n. (A counting version of the segment is sketched below.)
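
A direct translation of the segment into Java with an explicit operation counter (class name Segment931 and the sample value of n are mine) confirms the count:

public class Segment931 {
    public static void main(String[] args) {
        int n = 10;                      // any positive integer
        int p = 0, x = 2;
        long ops = 0;                    // counts additions and multiplications
        for (int i = 2; i <= n; i++) {
            p = (p + i) * x;             // one addition and one multiplication per iteration
            ops += 2;
        }
        // both print 18 for n = 10, matching the analysis 2(n - 1) = 2n - 2
        System.out.println("ops = " + ops + ", 2n - 2 = " + (2 * n - 2));
    }
}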

Example 9.3.2 Computing the Order of an Algorithm with a Nested Loop

Assume n is a positive integer and consider the following algorithm segment:

s := 0
for i := 1 to n
    for j := 1 to i
        s := s + j * (i - j + 1)
    next j
next i

a. Compute the actual number of additions, subtractions, and multiplications that must be performed when this algorithm segment is executed.

b. Find an order for this algorithm segment from among the set of power functions.

Solution:

a. There are two additions, one multiplication, and one subtraction for each iteration of the inner loop, so four operations are performed for each inner iteration. Now the total number of iterations of the inner loop is 1 + 2 + ... + n = n(n + 1)/2. Hence there are 4 · n(n + 1)/2 = 2n^2 + 2n operations performed.

b. By the theorem on polynomial orders, 2n^2 + 2n is O(n^2), so this segment has order n^2. (A counting version is sketched below.)
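
As in Example 9.3.1, a Java translation with an operation counter (class name Segment932 and the sample n are mine) confirms the total:

public class Segment932 {
    public static void main(String[] args) {
        int n = 10;                          // any positive integer
        int s = 0;
        long ops = 0;                        // additions, subtractions, and multiplications
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= i; j++) {
                s = s + j * (i - j + 1);     // two additions, one subtraction, one multiplication
                ops += 4;
            }
        }
        // both print 220 for n = 10, matching the analysis 4 * n(n+1)/2 = 2n^2 + 2n
        System.out.println("ops = " + ops + ", 2n^2 + 2n = " + (2 * n * n + 2 * n));
    }
}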

Polynomial.java

 

Some Useful Logarithmic Properties

Property 9.4.2 (p 508)

If k is an integer and x is a real number with 2^k ≤ x < 2^(k+1), then ⌊log_2 x⌋ = k.

Property 9.4.3 (p 509)

For any odd integer n > 1, ⌊log_2 (n - 1)⌋ = ⌊log_2 n⌋.
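
Both properties are easy to spot-check. The sketch below (class name LogFloorCheck is mine) computes ⌊log_2 n⌋ with integer arithmetic, avoiding floating-point rounding, and tests Property 9.4.3 on the odd integers below 100:

public class LogFloorCheck {
    // exact floor(log2 n) for a positive integer n, using integer arithmetic
    static int floorLog2(int n) {
        int k = 0;
        while ((1 << (k + 1)) <= n) k++;
        return k;
    }
    public static void main(String[] args) {
        // Property 9.4.2: 2^3 <= 10 < 2^4, so floor(log2 10) should be 3
        System.out.println("floor(log2 10) = " + floorLog2(10));
        // Property 9.4.3: for odd n > 1, floor(log2 (n - 1)) = floor(log2 n)
        for (int n = 3; n < 100; n += 2) {
            if (floorLog2(n - 1) != floorLog2(n))
                System.out.println("mismatch at n = " + n);
        }
        System.out.println("Property 9.4.3 held for every odd n with 1 < n < 100.");
    }
}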

Application: Number of Bits Needed to Represent an Integer in Binary Notation

Given a positive integer n, how many binary digits are needed to represent n? Recall from Section 4.4 that any positive integer n can be written uniquely as

n = 2^k + c_(k-1)·2^(k-1) + ... + c_2·2^2 + c_1·2 + c_0,

where k is a nonnegative integer and each of c_0, c_1, c_2, ..., c_(k-1) is either 0 or 1. Then the binary representation of n is

1 c_(k-1) c_(k-2) ... c_2 c_1 c_0.

Since each c_i is at most 1, n is at most the geometric sum 2^k + 2^(k-1) + ... + 2 + 1 = (2^(k+1) - 1)/(2 - 1) = 2^(k+1) - 1.
By transitivity of order, n ≤ 2^(k+1) - 1 < 2^(k+1); combining this with n ≥ 2^k gives 2^k ≤ n < 2^(k+1). By Property 9.4.2 (p 508), k = ⌊log_2 n⌋. The number of binary digits needed to represent n is then ⌊log_2 n⌋ + 1; the sketch below checks this for a few values.
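
The formula ⌊log_2 n⌋ + 1 can be checked against Java's built-in binary conversion. In this sketch (class name BitLength is mine) the two counts agree for every sample value:

public class BitLength {
    // exact floor(log2 n) for a positive integer n, using integer arithmetic
    static int floorLog2(int n) {
        int k = 0;
        while ((1 << (k + 1)) <= n) k++;
        return k;
    }
    public static void main(String[] args) {
        for (int n : new int[] {1, 2, 7, 8, 100, 1023, 1024}) {
            String bits = Integer.toBinaryString(n);
            System.out.printf("n = %4d: floor(log2 n) + 1 = %2d, binary %s has %d digits%n",
                    n, floorLog2(n) + 1, bits, bits.length());
        }
    }
}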

 

Efficiency of the Binary Search Algorithm
The number of probes required to search a sorted array of length k is at most ⌊log_2 k⌋ + 1. The efficiency of the algorithm is then O(log_2 n). See Algorithm 9.5.1 (p 521).

BinarySearch.java