Logarithm

The logarithm is a mathematical concept used very often in Computer Science in the context of algorithm complexity (the word even sounds similar!).

The equation:

log_x(n) = y    iff    x^y = n

log_2(n) = y    iff    2^y = n


By default, in Computer Science we assume that the base of the logarithm is 2, so we don’t even write it. The reason base-2 is the natural choice comes from the binary nature of computing: every fundamental decision a computer makes is a binary choice — yes or no, 0 or 1, left or right. Algorithms like binary search split the input in half at each step, and that halving maps directly to powers of 2.

log_2(n) = y    ->    log(n) = y    iff    2^y = n


In other words, what does this imply? The log of N is the power to which you need to raise 2 to get N.

For example:

log(1) = 0      because    2^0 = 1

log(4) = 2      because    2^2 = 4

log(256) = 8      because    2^8 = 256
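These identities can be checked directly with Python's built-in `math` module (note that `math.log2` returns a float):

```python
import math

# log(n) is the power to which 2 must be raised to get n,
# so log2 exactly inverts exponentiation with base 2.
assert math.log2(1) == 0
assert math.log2(4) == 2
assert math.log2(256) == 8
```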


But notice one thing: as N is doubled, the log of N increases by just one.

log(16) = 4      (2^3 * 2 = 2^4)

log(32) = 5      (2^4 * 2 = 2^5)
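The doubling property follows from log(2 * n) = log(2) + log(n) = 1 + log(n), and a quick sketch confirms it holds exactly for powers of two:

```python
import math

# Doubling n increases log2(n) by exactly one:
# log2(2 * n) == log2(2) + log2(n) == 1 + log2(n)
for n in [16, 32, 64, 128]:
    assert math.log2(2 * n) == math.log2(n) + 1
```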


What does that mean for us? If we have an algorithm with a time complexity of log(n), that is incredibly good! Tying this back to complexity analysis: as the input doubles, the number of elementary operations the algorithm performs increases only by one.

Imagine a linear time complexity algorithm with an input of size 1000: it might take roughly 1000 operations to complete, whereas a logarithmic time complexity algorithm with the same input would take roughly 10 operations, since 2^10 = 1024.
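A quick sketch of that comparison (the operation counts are rough estimates, not measurements):

```python
import math

n = 1000
linear_ops = n                      # roughly one operation per element
log_ops = math.ceil(math.log2(n))   # halvings until one element remains

assert linear_ops == 1000
assert log_ops == 10  # since 2^10 = 1024 >= 1000
```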

It is also worth noting that in Big O analysis, the base of the logarithm does not matter. This is because logarithms of different bases differ only by a constant factor — for example, log_3(n) = log_2(n) / log_2(3), and 1/log_2(3) is just a constant. Since Big O drops constant factors, O(log_2(n)), O(log_3(n)), and O(log_10(n)) are all equivalent and written simply as O(log(n)).
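The constant-factor relationship between bases can be verified numerically: the ratio between a base-2 log and a base-3 log is the same for every n.

```python
import math

# log3(n) = log2(n) / log2(3), so log2(n) / log3(n) is the
# constant log2(3) regardless of n (up to float rounding).
for n in [10, 100, 1000, 10**6]:
    ratio = math.log2(n) / math.log(n, 3)
    assert abs(ratio - math.log2(3)) < 1e-9
```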

Binary Search: O(log n) in Action

Binary search is the classic example of a logarithmic algorithm. It works on a sorted array and repeatedly halves the search space:

function binarySearch(sortedArray, target):
    left = 0
    right = length(sortedArray) - 1
    while left <= right:
        # integer midpoint of the current search window
        mid = floor((left + right) / 2)
        if sortedArray[mid] == target:
            return mid            # found the target
        else if sortedArray[mid] < target:
            left = mid + 1        # discard the left half
        else:
            right = mid - 1       # discard the right half
    return -1                     # target is not present

At each step, we compare the target to the middle element and discard half of the remaining elements. If the array has 1024 elements, after one comparison we are down to 512, then 256, then 128, and so on. After at most 10 comparisons (since log_2(1024) = 10) we have either found the target or determined it is not present. Doubling the array to 2048 elements only adds a single extra comparison — that is the power of O(log n).
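The pseudocode translates almost line for line into runnable Python (a sketch; the function and variable names here are my own):

```python
def binary_search(sorted_array, target):
    """Return the index of target in sorted_array, or -1 if absent."""
    left, right = 0, len(sorted_array) - 1
    while left <= right:
        mid = (left + right) // 2        # integer midpoint
        if sorted_array[mid] == target:
            return mid
        elif sorted_array[mid] < target:
            left = mid + 1               # target is in the right half
        else:
            right = mid - 1              # target is in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```

In real Python code you would typically reach for the standard library's `bisect` module (e.g. `bisect.bisect_left`) rather than hand-rolling this loop, but the logic is the same.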

This post is licensed under CC BY 4.0 by the author.