Lesson 1. Algorithm Analysis
Lesson Objective
- Analyse the suitability of different algorithms for a given task and data set
- Define constant, linear, polynomial, exponential and logarithmic functions
- Use Big-O notation to compare the time complexity of algorithms
- Be able to derive the time complexity of an algorithm
Lesson Notes
Algorithms
A good algorithm:
- has clear and precisely stated steps that produce the correct output for any set of valid inputs.
- should handle invalid input gracefully, rather than failing or producing incorrect output.
- must always terminate at some point.
- should execute efficiently, in as few steps as possible.
- should be designed in such a way that other people will be able to understand it and modify it if necessary.
Big O Notation
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
It tells you how fast a function grows or declines.
It is commonly used in computer science to analyze the efficiency of algorithms. This information can be used to make informed decisions about which algorithm to use for a particular task.
To use Big O notation, we need to write the function's running time in terms of the input size.
The Big O notation of an algorithm is typically determined by analyzing the worst-case scenario. This means that we find the maximum amount of time that the algorithm could take to execute, given any possible input.
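For instance, a linear search illustrates the idea of a worst case; this short sketch (added for illustration, not part of the original notes) hits its worst case when the target is absent:

```python
def linear_search(array, target):
    """Return the index of target in array, or -1 if it is absent."""
    # Worst case: target is not in the array, so the loop inspects all n elements.
    for i, value in enumerate(array):
        if value == target:
            return i
    return -1

print(linear_search([4, 8, 15, 16], 15))  # found at index 2
print(linear_search([4, 8, 15, 16], 99))  # worst case: -1 after checking every element
```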
There are many different Big O notations, but some of the most common (and OCR Spec related) are:
- O(1): This means that the algorithm runs in constant time, regardless of the input size.
- O(n): This means that the algorithm runs in linear time. The time it takes to execute the algorithm grows linearly with the input size.
- O(log n): This means that the algorithm runs in logarithmic time. The time it takes to execute the algorithm grows logarithmically with the input size, typically because the data set is halved at each step.
- O(n²): This means that the algorithm runs in quadratic (polynomial) time. The time it takes to execute the algorithm grows quadratically with the input size.
- O(2ⁿ): This means that the algorithm runs in exponential time. The time it takes to execute the algorithm grows exponentially with the input size.
O(1) - Constant Time
The number of operations performed does not increase as the input size grows. For example, the following function takes an array of numbers and returns the first element:
function firstElement(array):
    return array[0]
This function always performs a single operation, regardless of the size of the array.
O(n) - Linear Time
The number of operations performed increases linearly with the input size. The following function takes an array of numbers and prints each element to the console:
function printArray(array):
    for i in range(len(array)):
        print(array[i])
This function performs one operation (printing an element) for each element in the array.
O(n²) - Polynomial Time
This algorithm runs in quadratic time, meaning that the number of operations performed increases quadratically with the input size.
The following function takes an array of numbers and finds the largest element using a nested loop.
function findLargest(array):
    largest = array[0]
    for i in range(len(array)):
        for j in range(i + 1, len(array)):
            if array[j] > largest:
                largest = array[j]
    return largest
This function performs one operation (comparing two elements) for each pair of elements in the array. Therefore, it has a time complexity of O(n²).
O(log n) - Logarithmic Time
The number of operations performed increases logarithmically with the input size.
For example, the following function takes a number and searches for it in a sorted array using binary search.
function binarySearch(array, target):
    low = 0
    high = len(array) - 1
    while low <= high:
        mid = (low + high) // 2
        if array[mid] == target:
            return mid
        elif array[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
This function performs one operation (comparing the target to the middle element of the array) per iteration of the loop.
The number of iterations is logarithmic in the size of the array. Therefore, this function has a time complexity of O(log n).
Extra: In mathematics, a logarithm is the inverse operation of exponentiation. That means that the logarithm of a number x to the base b is the exponent to which b must be raised to produce x. For example, since 10³ = 1000, the logarithm base 10 of 1000 is 3, or log10(1000) = 3.
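The relationship can be checked with Python's math module (a quick sketch added here, not part of the original notes):

```python
import math

# log base 10 of 1000 is 3, because 10**3 == 1000
print(round(math.log(1000, 10)))  # 3

# log base 2 counts how many times n can be halved before reaching 1,
# which is why binary search over 1,000,000 items needs only about 20 steps
print(math.log2(1_000_000))  # roughly 19.93
```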
O(2ⁿ) - Exponential Time
The number of recursive calls grows exponentially with the value of n, because each call makes two further recursive calls.
function power_of_two(n):
    if n == 0:
        return 1
    else:
        return power_of_two(n - 1) + power_of_two(n - 1)
In the example, each call to power_of_two makes two more calls, so the total number of calls roughly doubles every time n increases by one: calculating power_of_two(3) takes 15 calls, power_of_two(4) takes 31 calls, and so on.
Summary
Ordered best to worst:
- O(1) - Always executes in the same amount of time regardless of the size of the data set.
- O(log n) - Halves the data set in each pass. Opposite to exponential.
- O(n) - Running time grows in direct proportion to the size of the data set, so efficiency falls steadily as the data set grows.
- O(n²) - Performance is proportional to the square of the size of the data set, so efficiency falls sharply for large data sets. Deeper nested iterations result in O(n³), O(n⁴), etc.
- O(2ⁿ) - Running time doubles with each addition to the data set. Opposite to logarithmic.
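To make the ordering above tangible, this sketch (an addition, not from the notes) tabulates rough step counts for each class:

```python
import math

# Rough number of "steps" each complexity class implies as n grows.
print(f"{'n':>3} {'O(1)':>5} {'O(log n)':>9} {'O(n)':>5} {'O(n²)':>7} {'O(2ⁿ)':>9}")
for n in (1, 10, 20):
    log_n = max(1, round(math.log2(n)))
    print(f"{n:>3} {1:>5} {log_n:>9} {n:>5} {n**2:>7} {2**n:>9}")
```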
Calculating Time Complexity
Calculating the time complexity of an algorithm involves analyzing how the algorithm's execution time grows as the input size increases. It's a crucial aspect of algorithm analysis and helps determine an algorithm's efficiency.
To calculate time complexity, follow these steps:
- Identify the Basic Operations: assignments, comparisons, loops, and function calls.
- Count the Operation Frequency: Determine how many times each basic operation is executed for a given input size.
- Identify the Worst Case: determine which input causes the algorithm to perform the maximum number of operations.
- Express in Big O Notation: Use Big O notation to express the time complexity. Big O ignores constant factors and lower-order terms.
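Applying these steps to a short function (a worked sketch added here, not from the original notes):

```python
def sum_array(array):
    total = 0                  # 1 assignment
    for value in array:        # loop body runs n times
        total = total + value  # 1 addition + 1 assignment per element
    return total               # 1 operation

# Operation count is roughly 2n + 2. Big O discards the constant factor (2)
# and the lower-order term (+2), so sum_array is O(n).
print(sum_array([3, 1, 4, 1, 5]))  # 14
```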
Example: Permutations
A permutation is a particular arrangement of a set of n objects; the number of permutations is the number of distinct ways those objects can be arranged.
If you have 3 objects A, B and C you can choose any of A, B, or C to be the first object. You then have two choices for the second object, making 3 x 2 different ways of arranging the first two objects, and then one way for placing the third object.
The permutations are ABC, ACB, BAC, BCA, CAB, CBA.
The formula for calculating the number of permutations of three objects is 3 x 2 x 1, written as 3! and referred to as 3 factorial. An algorithm that must generate every permutation therefore has time complexity O(n!), which grows even faster than exponential time.
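Python's standard library can generate the permutations listed above and confirm the count (a sketch added for illustration):

```python
import itertools
import math

perms = ["".join(p) for p in itertools.permutations("ABC")]
print(perms)               # ['ABC', 'ACB', 'BAC', 'BCA', 'CAB', 'CBA']
print(len(perms))          # 6, i.e. 3! = 3 x 2 x 1
print(math.factorial(10))  # 3628800 -- factorial growth is extremely fast
```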