
Algorithmic Complexity Analysis: Big-O Notation

In the field of computer science and programming, efficiency is a crucial aspect to consider when designing and evaluating algorithms. Big-O notation, a form of asymptotic notation, provides a standardized way to analyze and compare the efficiency of algorithms. By using Big-O notation, we can estimate how the time or space requirements of an algorithm grow as the input size increases.

In this article, we will delve into the details of Big-O notation, explore its significance, and provide examples in the C programming language to illustrate its usage.

What is Big-O Notation?

Big-O notation is a mathematical notation used to describe the performance characteristics of an algorithm. It represents the upper bound, or worst-case scenario, of how the runtime or space usage of an algorithm scales with respect to the input size. In simpler terms, it describes how the algorithm behaves as the input size approaches infinity.

The “O” in Big-O stands for “order of,” indicating the order of growth of an algorithm’s time or space complexity. It is followed by a function within parentheses, such as O(f(n)), where ‘f(n)’ represents the growth rate function of the algorithm.

Common Big-O Complexity Classes:

  1. O(1) - Constant Time Complexity: Algorithms with constant time complexity have a fixed runtime, regardless of the input size, which makes them highly efficient even for large inputs. For example, accessing an element in an array by index or performing a basic arithmetic operation on two numbers has a constant time complexity of O(1).

    #include <stdio.h>
    
    void printFirstElement(int array[], int size) {
        /* Accessing index 0 is a single operation, independent of size: O(1). */
        printf("%d\n", array[0]);
    }
    
    int main() {
        int array[] = {1, 2, 3, 4, 5};
        int size = sizeof(array) / sizeof(array[0]);
        printFirstElement(array, size);
        return 0;
    }
    

    In this example, the printFirstElement function accesses the first element of the array using index 0. Regardless of the size of the array, the function performs a single operation, making it a constant time complexity operation.

  2. O(log n) - Logarithmic Time Complexity: Algorithms with logarithmic time complexity grow far more slowly than linear ones. They frequently occur in algorithms that use divide-and-conquer techniques, such as binary search. The runtime grows in proportion to the logarithm of the input size, so doubling the input adds only a constant amount of extra work. A classic example of logarithmic time complexity is binary search on a sorted array.

    #include <stdio.h>
    
    int binarySearch(int array[], int size, int target) {
        int low = 0;
        int high = size - 1;
    
        while (low <= high) {
            int mid = low + (high - low) / 2;  /* avoids overflow of low + high */
    
            if (array[mid] == target) {
                return mid;
            } else if (array[mid] < target) {
                low = mid + 1;
            } else {
                high = mid - 1;
            }
        }
    
        return -1;
    }
    
    int main() {
        int array[] = {1, 2, 3, 4, 5};
        int size = sizeof(array) / sizeof(array[0]);
        int target = 3;
    
        int index = binarySearch(array, size, target);
    
        if (index != -1) {
            printf("Element found at index %d\n", index);
        } else {
            printf("Element not found\n");
        }
    
        return 0;
    }
    

    In this example, the binarySearch function performs a binary search on a sorted array. It repeatedly divides the search space in half until it finds the target element or determines that it doesn’t exist. The number of iterations required to find the target element grows logarithmically with the input size, making it a logarithmic time complexity operation.
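    To see why the growth is logarithmic, consider how many times a search space of size n can be halved before only one element remains. This small helper (illustrative, not part of the article's example) counts the halvings:

    ```c
    #include <stdio.h>

    /* Count how many halvings reduce a search space of size n to a single element. */
    int halvings(int n) {
        int count = 0;
        while (n > 1) {
            n /= 2;
            count++;
        }
        return count;
    }

    int main(void) {
        printf("A space of 1000000 elements can be halved %d times\n", halvings(1000000));
        return 0;
    }
    ```

    For an array of a million elements, the space can be halved only about 20 times, so binary search needs at most about 20 comparisons.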

  3. O(n) - Linear Time Complexity: Algorithms with linear time complexity have a runtime that grows linearly with the input size. This means that if the input size doubles, the runtime will also approximately double. Examples of algorithms with linear time complexity include iterating through an array to find a specific element or calculating the sum of all elements in an array.

    #include <stdio.h>
    
    int findMax(int array[], int size) {
        int max = array[0];
    
        for (int i = 1; i < size; i++) {
            if (array[i] > max) {
                max = array[i];
            }
        }
    
        return max;
    }
    
    int main() {
        int array[] = {1, 5, 3, 7, 2};
        int size = sizeof(array) / sizeof(array[0]);
    
        int max = findMax(array, size);
        printf("Maximum element: %d\n", max);
    
        return 0;
    }
    

    In this example, the findMax function iterates through the array to find the maximum element. As the size of the array increases, the number of iterations in the loop also increases linearly, resulting in linear time complexity.
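    The sum calculation mentioned above follows the same single-pass pattern; a minimal sketch (the sumArray name is illustrative, not from the article):

    ```c
    #include <stdio.h>

    /* One pass over the array, one addition per element: O(n). */
    long sumArray(int array[], int size) {
        long sum = 0;
        for (int i = 0; i < size; i++) {
            sum += array[i];
        }
        return sum;
    }

    int main(void) {
        int array[] = {1, 5, 3, 7, 2};
        int size = sizeof(array) / sizeof(array[0]);

        printf("Sum: %ld\n", sumArray(array, size));
        return 0;
    }
    ```

    Doubling the array doubles the number of additions, so the runtime scales linearly with the input.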

  4. O(n^2) - Quadratic Time Complexity: Algorithms with quadratic time complexity have a runtime that grows proportionally to the square of the input size. This means that if the input size doubles, the runtime increases roughly fourfold. Examples include nested loops iterating over an array or performing operations on all pairs of elements.

    #include <stdio.h>
    
    void printPairs(int array[], int size) {
        for (int i = 0; i < size; i++) {
            for (int j = 0; j < size; j++) {
                printf("(%d, %d)\n", array[i], array[j]);
            }
        }
    }
    
    int main() {
        int array[] = {1, 2, 3};
        int size = sizeof(array) / sizeof(array[0]);
    
        printPairs(array, size);
    
        return 0;
    }
    

    In this example, the printPairs function prints all possible pairs of elements from the array. It uses nested loops, resulting in quadratic time complexity. As the size of the array increases, the number of iterations becomes the square of the input size.

Wrapping Up

Big-O notation is an essential tool for analyzing and comparing the efficiency of algorithms. By understanding the growth rate of an algorithm’s runtime or space usage, we can make informed decisions when choosing the most appropriate algorithm for a given problem. Remember that Big-O notation focuses on worst-case scenarios and provides an upper bound on the algorithm’s performance. It allows us to reason about scalability, efficiency, and resource requirements, ultimately leading to the development of more optimized and effective algorithms.

This post is licensed under CC BY 4.0 by the author.