
Lesson 1. Algorithm Analysis


Lesson Objective

  • Analyse the suitability of different algorithms for a given task and data set
  • Define constant, linear, polynomial, exponential and logarithmic functions
  • Use Big-O notation to compare the time complexity of algorithms
  • Be able to derive the time complexity of an algorithm

Lesson Notes

Algorithms

A good algorithm produces the correct output for every valid input and makes efficient use of time and memory as the size of the input grows.

Big O Notation

Big O notation is a mathematical notation that describes the limiting behaviour of a function when the argument tends towards a particular value or infinity.

It tells you how fast a function grows or declines.

It is commonly used in computer science to analyse the efficiency of algorithms. This information can be used to make informed decisions about which algorithm to use for a particular task.

To use Big O notation, we need to write the function's running time in terms of the input size.

The Big O notation of an algorithm is typically determined by analysing the worst-case scenario. This means that we find the maximum amount of time that the algorithm could take to execute, given any possible input.
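As a quick illustration, the sketch below (using a hypothetical function sumList that is not part of these notes) counts the operations a simple routine performs and writes the total in terms of the input size n:

def sumList(array):
   total = 0                      # 1 operation
   for i in range(len(array)):    # loop body runs n times
      total = total + array[i]    # 1 operation per element -> n operations
   return total                   # 1 operation

The running time is roughly T(n) = n + 2 operations. Big O keeps only the dominant term and ignores constant factors, so sumList is O(n).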

There are many different Big O notations, but some of the most common (and OCR Spec related) are:


O(1) - Constant Time

The number of operations performed does not increase as the input size grows. For example, the following function takes an array of numbers and returns the first element:

def firstElement(array):
   return array[0]

This function always performs a single operation, regardless of the size of the array.


O(n) - Linear Time

The number of operations performed increases linearly with the input size. The following function takes an array of numbers and prints each element to the console:

def printArray(array):
   for i in range(len(array)):
      print(array[i])

This function performs one operation (printing an element) for each element in the array.


O(n²) - Polynomial Time

Algorithms in this class run in quadratic time, meaning that the number of operations performed increases with the square of the input size.

The following function takes an array of numbers and finds the largest element using a nested loop.

def findLargest(array):
   largest = array[0]
   for i in range(len(array)):
      for j in range(i + 1, len(array)):
         if array[j] > largest:
            largest = array[j]
   return largest

This function performs one operation (comparing two elements) for each pair of elements in the array. Therefore, it has a time complexity of O(n²).


O(log n) - Logarithmic Time

The number of operations performed increases logarithmically with the input size.

For example, the following function takes a number and searches for it in a sorted array using binary search.

def binarySearch(array, target):
  low = 0
  high = len(array) - 1
  while low <= high:
    mid = (low + high) // 2
    if array[mid] == target:
      return mid
    elif array[mid] < target:
      low = mid + 1
    else:
      high = mid - 1
  return -1

This function performs one operation (comparing the target to the middle element of the array) per iteration of the loop.

The number of iterations is logarithmic in the size of the array. Therefore, this function has a time complexity of O(log n).

Extra: In mathematics, a logarithm is the inverse operation of exponentiation. That means that the logarithm of a number x to the base b is the exponent to which b must be raised to produce x. For example, since 10³ = 1000, the logarithm base 10 of 1000 is 3, or log₁₀(1000) = 3.
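To connect this to binarySearch above, the short sketch below (an added illustration, not one of the lesson's own examples) uses log₂ to estimate how many passes are needed for a few array sizes:

import math

# Each pass of binarySearch halves the range still being searched, so roughly
# log2(n) passes are enough to reduce n elements to a single element.
for n in [8, 1024, 1000000]:
   print(n, "elements -> about", math.ceil(math.log2(n)), "passes")

For one million elements, only about 20 passes are needed, which is why logarithmic algorithms scale so well.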


O(2ⁿ) - Exponential Time

The number of recursive calls grows exponentially with the value of n. For example, the following function calculates 2ⁿ using a deliberately naive recursive approach:

def power_of_two(n):
   if n == 0:
      return 1
   else:
      # Two recursive calls per step make the number of calls grow exponentially
      return power_of_two(n - 1) + power_of_two(n - 1)

In the example, each call makes two further recursive calls, so the total number of calls roughly doubles every time n increases by 1: n = 3 results in 15 calls, n = 4 results in 31, and so on. This gives a time complexity of O(2ⁿ).
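To check the doubling pattern, the sketch below (an added illustration using a hypothetical helper power_of_two_counted) counts the calls for the first few values of n:

# Count how many times the naive recursive function is called.
call_count = 0

def power_of_two_counted(n):
   global call_count
   call_count += 1
   if n == 0:
      return 1
   return power_of_two_counted(n - 1) + power_of_two_counted(n - 1)

for n in range(5):
   call_count = 0
   power_of_two_counted(n)
   print("n =", n, "-> calls:", call_count)    # prints 1, 3, 7, 15, 31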


Summary

Ordered best to worst (a rough numeric comparison follows this list):

  1. O(1) - Always executes in the same amount of time, regardless of the size of the data set.
  2. O(log n) - Halves the data set in each pass. Opposite of exponential.
  3. O(n) - Running time grows in direct proportion to the size of the data set, so efficiency falls with increasingly large data sets.
  4. O(n²) - Running time is proportional to the square of the size of the data set, so efficiency drops sharply with increasingly large data sets. Deeper nested iterations result in O(n³), O(n⁴), etc.
  5. O(2ⁿ) - Doubles with each addition to the data set. Opposite of logarithmic.
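The rough comparison below (an added sketch, not one of the lesson's own examples) prints the approximate number of steps each growth rate implies for a few input sizes, which makes the ordering above concrete:

import math

# Approximate number of steps implied by each growth rate.
for n in [1, 10, 20]:
   print(f"n={n:>2}: log n = {math.log2(n):.1f}, n = {n}, n^2 = {n * n}, 2^n = {2 ** n}")

Even at n = 20, an O(2ⁿ) algorithm needs over a million steps, while an O(log n) algorithm needs around four.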

Calculating Time Complexity

Calculating the time complexity of an algorithm involves analysing how the algorithm's execution time grows as the input size increases. It's a crucial aspect of algorithm analysis and helps determine an algorithm's efficiency.

To calculate time complexity, follow these steps:

  1. Identify the basic operations the algorithm performs (comparisons, assignments, arithmetic).
  2. Count how many times those operations are executed as a function of the input size n, paying particular attention to loops and nested loops.
  3. Keep the dominant term and drop constant factors and lower-order terms to give the Big O notation.

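As a worked example, the steps above can be applied to a short routine (countPairs below is an added illustration, not one of the lesson's own examples):

# Derive the time complexity of this routine by counting operations.
def countPairs(array):
   count = 0                               # 1 operation
   for i in range(len(array)):             # outer loop runs n times
      for j in range(i + 1, len(array)):   # inner loop runs n - i - 1 times
         count = count + 1                 # basic operation
   return count

Step 1: the basic operation is the addition inside the inner loop. Step 2: it executes n(n-1)/2 times, i.e. 0.5n² - 0.5n operations. Step 3: keeping the dominant term and dropping the constant factor gives O(n²). Applying the same steps to printArray above gives n print operations and therefore O(n).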


Permutation

A permutation is one possible ordering of a set of n objects; the number of permutations is the number of different ways the set can be arranged.

If you have 3 objects A, B and C, you can choose any of A, B or C to be the first object. You then have two choices for the second object, making 3 × 2 different ways of arranging the first two objects, and then one way of placing the third object.

The permutations are ABC, ACB, BAC, BCA, CAB, CBA.

The formula for the number of permutations of three objects is 3 × 2 × 1 = 6, written as 3! and referred to as 3 factorial. In general, a set of n objects has n! permutations.
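The sketch below (a minimal added illustration using Python's itertools and math modules) lists all six permutations of A, B and C and checks the count with the factorial formula:

from itertools import permutations
from math import factorial

objects = ["A", "B", "C"]
for p in permutations(objects):
   print("".join(p))                                       # ABC, ACB, BAC, BCA, CAB, CBA

print(factorial(len(objects)), "permutations in total")    # 3! = 6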

