The analysis of algorithms is the determination of the computational complexity of algorithms, that is, the amount of time, storage, and/or other resources necessary to execute them.
When an algorithm is written down, it can be represented by a formula expressing the amount of time needed to execute the lines of code it contains.
This formula can contain details that matter for the computing time as well as details that are insignificant by comparison, and the insignificant ones can be dropped. These trivial details are usually constants, or a variable term that is insignificant next to another term in the same equation.

For example, take T(n) = 4n^2 + 5n + 3. The 3, being a constant, can be removed, leaving T(n) = 4n^2 + 5n. Next, compare 5n with 4n^2: for large values of n, 5n is of no significance compared to 4n^2, so we ignore this insignificant component too, leaving T(n) = 4n^2. Finally, the coefficient 4 is a constant and likewise insignificant when considering time complexity. This leaves us with T(n) = n^2, which is in fact the component that drives the complexity of the algorithm.
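To see where such a formula comes from, here is a hypothetical Python function with exactly this shape of running time: a doubly nested loop producing the n^2 term, a single loop producing the n term, and a few constant-time steps. (The exact coefficients 4, 5, and 3 depend on how individual operations are counted; only the shape of the formula matters.)

    def example(items):
        n = len(items)
        total = 0                             # constant-time setup
        for i in range(n):                    # outer loop: n passes
            for j in range(n):                # inner loop: n iterations per pass
                total += items[i] * items[j]  # executes n^2 times -> the n^2 term
        for k in range(n):                    # one more pass over the input
            total += items[k]                 # executes n times -> the n term
        return total                          # one final constant-time step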
Big-O notation is the way we determine how fast a given algorithm is when put through its paces; it is used to describe the performance or complexity of an algorithm.
There are three notations used to represent the growth of an algorithm as its input increases:
Big Theta (tight bounds)
When we say tight bound, we mean that the Big-Θ notation pins the running time down from both sides: it gives a range within which the actual execution time of the algorithm is guaranteed to fall.
For example, if the time complexity of some algorithm is represented by the expression 3n^2 + 5n, then in Big-Θ notation the time complexity is Θ(n^2): we ignore the constant coefficient and remove the insignificant part, which is 5n.
Here, a complexity of Θ(n^2) means that for any sufficiently large input n, the actual running time stays between c1 * n^2 and c2 * n^2, where c1 and c2 are two constants, thereby tightly binding the expression that represents the growth of the algorithm.
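A minimal sketch of how one might spot-check such a claim numerically, assuming the constants c1 = 3, c2 = 4 and the cutoff n >= 5 (values picked by hand for this particular expression):

    # Check that 3n^2 + 5n is Θ(n^2): the expression stays sandwiched
    # between c1*n^2 and c2*n^2 once n is large enough (here, n >= 5).
    c1, c2, n0 = 3, 4, 5

    def t(n):
        return 3 * n**2 + 5 * n

    assert all(c1 * n**2 <= t(n) <= c2 * n**2 for n in range(n0, 100_000))
    print("3n^2 + 5n lies between", c1, "* n^2 and", c2, "* n^2 for all n >=", n0)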
Big O (Upper bound)
This notation is known as the upper bound of the algorithm, or the worst case of an algorithm, and can be used to describe either the execution time required or the space used (e.g. in memory or on disk) by an algorithm.
It tells us that, for any sufficiently large input n, the running time of a function will never exceed the specified bound.
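Returning to the earlier expression T(n) = 4n^2 + 5n + 3, a similar sketch can spot-check the upper-bound claim that T(n) is O(n^2), assuming the hand-picked constant c = 5 and cutoff n >= 6:

    # Check that 4n^2 + 5n + 3 is O(n^2): the expression never exceeds
    # c*n^2 once n is large enough (here, 5n + 3 <= n^2 holds from n = 6 on).
    c, n0 = 5, 6

    def t(n):
        return 4 * n**2 + 5 * n + 3

    assert all(t(n) <= c * n**2 for n in range(n0, 100_000))
    print("4n^2 + 5n + 3 never exceeds", c, "* n^2 for any n >=", n0)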
The question is why we need this representation when we already have the Big-Θ notation, which represents the tightly bound running time of an algorithm. Let's take a small example to understand this.
Consider the Linear Search algorithm, in which we traverse the elements of an array one by one to search for a given number.
In the worst case, starting from the front of the array, we find the element we are searching for at the very end, which leads to a time complexity of n, where n is the total number of elements.
But it can also happen that the element we are searching for is the very first element of the array, in which case the time complexity is 1.
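Here is a minimal sketch of linear search in Python, instrumented to count comparisons so that both cases can be observed directly (the function name and the sample data are just for illustration):

    def linear_search(arr, target):
        """Return (index, comparisons made), or (-1, comparisons) if absent."""
        comparisons = 0
        for i, value in enumerate(arr):
            comparisons += 1
            if value == target:
                return i, comparisons
        return -1, comparisons

    data = list(range(1, 11))        # [1, 2, ..., 10], so n = 10
    print(linear_search(data, 10))   # worst case: (9, 10) -> n comparisons
    print(linear_search(data, 1))    # best case:  (0, 1)  -> 1 comparison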
Now, in this case, saying that the tight-bound complexity of Linear Search is Θ(n) would mean that the time required is always proportional to n. That is a reasonable way to describe the average behaviour, but as we just saw, the time can also be constant. When we instead use Big-O notation and say the complexity is O(n), we mean that the time complexity will never exceed n: it defines the upper bound, so the time can be less than or equal to n, which is the correct representation.
This is the reason you will most often see Big-O notation used to represent the time complexity of an algorithm: it simply makes more sense.
Big Omega (Lower bound)
Big Omega notation is used to define the lower bound of an algorithm, that is, its best case: it indicates the minimum time the algorithm requires for any input value.
In simple words, when we represent the time complexity of an algorithm using Big-Ω, we mean that the algorithm will take at least this much time to complete its execution; it can definitely take more time than this.
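As with the other two bounds, the claim can be spot-checked numerically; here is a sketch verifying that the earlier expression 4n^2 + 5n + 3 is Ω(n^2), assuming the hand-picked constant c = 4:

    # Check that 4n^2 + 5n + 3 is Ω(n^2): the expression is never smaller
    # than c*n^2 (the leftover 5n + 3 is never negative, so n0 = 0 works here).
    c, n0 = 4, 0

    def t(n):
        return 4 * n**2 + 5 * n + 3

    assert all(t(n) >= c * n**2 for n in range(n0, 100_000))
    print("4n^2 + 5n + 3 is at least", c, "* n^2 for every n >=", n0)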