
Friday, December 13, 2013

Algo Complexity Analysis

Asymptotic Analysis:

When we look at input sizes large enough to make only the order of growth of the running time relevant, we are studying the asymptotic efficiency of algorithms. That is, we are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound. Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs.

For large enough inputs, the multiplicative constants and lower-order terms of an exact running time are dominated by the effects of the input size itself (the number of input components).
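
For example, here is a quick Python sketch (added for illustration, using an arbitrarily chosen polynomial) of how the lower-order terms stop mattering as n grows: the ratio f(n)/n^2 for f(n) = 3n^2 + 100n + 500 settles toward the leading constant 3.

# Illustration (arbitrary example): the n^2 term of f(n) = 3n^2 + 100n + 500
# dominates for large n, so f(n)/n^2 approaches the leading constant 3.
def f(n):
    return 3 * n**2 + 100 * n + 500

for n in [10, 100, 1_000, 10_000, 100_000]:
    print(n, f(n) / n**2)   # e.g. 4.05 at n=100, about 3.01 at n=10000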

Different asymptotic notations are:
Θ - Asymptotic tight bound
O - Asymptotic upper bound
Ω - Asymptotic lower bound
o - Upper bound that is not asymptotically tight
ω - Lower bound that is not asymptotically tight

Θ-notation (Theta notation)
For a given function g(n), we denote by Θ(g(n)) the set of functions
Θ(g(n)) = {f(n) : there exist positive constants c1, c2, and n0 such that
     0 <= c1g(n) <= f(n) <= c2g(n) for all n >= n0}

A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be sandwiched between c1g(n) and c2g(n), for sufficiently large n.
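
As a concrete check, here is a small Python sketch (with hand-picked constants, not part of the original text): take f(n) = n^2/2 - 3n. With c1 = 1/4, c2 = 1/2 and n0 = 12, the sandwich 0 <= c1n^2 <= f(n) <= c2n^2 holds for all n >= n0, so f(n) = Θ(n^2).

# Numeric spot-check of the sandwich (constants c1, c2, n0 chosen by hand).
def f(n):
    return n * n / 2 - 3 * n

c1, c2, n0 = 0.25, 0.5, 12
assert all(0 <= c1 * n * n <= f(n) <= c2 * n * n for n in range(n0, 10_000))
print("c1*n^2 <= f(n) <= c2*n^2 holds for all n in [12, 10000)")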

O-notation (Big O Notation)
Θ-notation asymptotically bounds a function from above and below. When we have only an asymptotic upper bound, we use O-notation.

For a given function g(n), we denote by O(g(n)) the set of functions
O(g(n)) = {f(n) : there exist positive constants c and n0 such that
     0 <= f(n) <= cg(n) for all n >= n0}

We use O-notation to give an upper bound on a function, to within a constant factor.

Since O-notation describes an upper bound, when we use it to bound the worst-case running time of an algorithm, we also bound the running time of the algorithm on arbitrary inputs. Thus, the O(n^2) bound on the worst-case running time of insertion sort also applies to its running time on every input. The Θ(n^2) bound on the worst-case running time of insertion sort, however, does not imply a Θ(n^2) bound on the running time of insertion sort on every input. For example, when the input is already sorted, insertion sort runs in Θ(n) time.
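
To make that concrete, here is a small Python sketch (my own illustration, counting element comparisons in a plain insertion sort): on an already-sorted input the count grows roughly linearly with n, while on a reverse-sorted, worst-case input it grows roughly as n^2/2.

# Count comparisons made by insertion sort on sorted vs. reversed input.
def insertion_sort_comparisons(values):
    a = list(values)
    comparisons = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0:
            comparisons += 1          # compare key with a[i]
            if a[i] <= key:
                break
            a[i + 1] = a[i]           # shift the larger element right
            i -= 1
        a[i + 1] = key
    return comparisons

for n in [100, 200, 400]:
    print(n,
          insertion_sort_comparisons(range(n)),          # sorted input: about n
          insertion_sort_comparisons(range(n, 0, -1)))   # reversed input: about n^2/2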

o-notation (Small o notation)
The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n^2 = O(n^2) is asymptotically tight, but the bound 2n = O(n^2) is not. We use o-notation to denote an upper bound that is not asymptotically tight.

We formally define o(g(n)) (“little-oh of g of n”) as the set
  o(g(n)) = {f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that
       0 <= f(n) < cg(n) for all n >= n0}
The definitions of O-notation and o-notation are similar. The main difference is that in f(n) = O(g(n)), the bound 0 <= f(n) <= cg(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound 0 <= f(n) < cg(n) holds for all constants c > 0. Intuitively, in the o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity.
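
A quick numeric illustration in Python (added here, not from the original text): 2n = o(n^2) because 2n/n^2 = 2/n shrinks toward 0, so for every positive constant c, however small, 2n < cn^2 once n > 2/c.

# For each sample constant c, find the first n past which 2n < c*n^2;
# since 2n < c*n^2 is equivalent to n > 2/c, it stays true from there on.
for c in [1.0, 0.1, 0.01]:
    n0 = next(n for n in range(1, 10**6) if 2 * n < c * n * n)
    print(f"c={c}: 2n < c*n^2 for all n >= {n0}")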

Ω-notation (Omega notation)
Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound.

For a given function g(n), we denote by Ω(g(n)) the set of functions
 Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that
         0 <= cg(n) <= f(n) for all n >= n0}
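
For instance (an example picked for this post, not from the original text): f(n) = n·lg(n) is Ω(n), witnessed by c = 1 and n0 = 2, since lg(n) >= 1 for n >= 2. A short Python spot-check:

import math

# Spot-check that n*lg(n) >= 1*n for n >= 2 (c = 1, n0 = 2 chosen by hand).
c, n0 = 1, 2
assert all(0 <= c * n <= n * math.log2(n) for n in range(n0, 10_000))
print("n*lg(n) >= n for all n in [2, 10000)")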

ω-notation (Small omega notation)
By analogy, ω-notation is to Ω-notation as o-notation is to O-notation. We use ω-notation to denote a lower bound that is not asymptotically tight.

Formally, however, we define ω(g(n)) ("little-omega of g of n") as the set
   ω(g(n)) = {f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that
         0 <= cg(n) < f(n) for all n >= n0}

That is, f(n) becomes arbitrarily large relative to g(n) as n approaches infinity.
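
For instance (a small numeric Python sketch added here): n^2/2 = ω(n), because (n^2/2)/n = n/2 grows without bound, but n^2/2 is not ω(n^2), because (n^2/2)/n^2 stays fixed at 1/2.

# The first ratio grows without bound; the second stays at the constant 0.5.
for n in [10, 100, 1_000, 10_000]:
    f = n * n / 2
    print(n, f / n, f / (n * n))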


Difference between Big O and small o notation
f = O(g) says, essentially
For at least one choice of a constant k > 0, you can find a constant a such that the inequality f(x) < kg(x) holds for all x > a.
f = o(g) says, essentially
For every choice of a constant k > 0, you can find a constant a such that the inequality f(x) < k g(x) holds for all x > a.
In Big-O, it is only necessary that you find a particular multiplier k for which the inequality holds beyond some minimum x. In little-o, by contrast, the inequality must hold for every positive multiplier k, no matter how small, beyond some minimum x (which may depend on k).
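
A short Python sketch of that distinction (examples picked for illustration): f(x) = 2x is O(x), since the single multiplier k = 3 gives 2x < 3x for all x > 0, but 2x is not o(x), since k = 1 never works; by contrast, x is o(x^2), since every positive k works once x > 1/k.

# Check the "every k" requirement of little-o for f(x) = x against g(x) = x^2:
# x < k*x^2 is equivalent to x > 1/k, so it holds from that threshold onward.
for k in [1.0, 0.1, 0.01]:
    threshold = 1 / k
    assert all(x < k * x * x for x in range(int(threshold) + 1, 10_000))
    print(f"k={k}: x < k*x^2 for all x > {threshold}")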
