If we look at some sorting algorithms, we'll see that their running time is on the order of \(O(n \log n)\). In this article we'll talk about why logarithmic time is useful and how it works.
Let's imagine the following task:
for i in range(10):
    print(i)
This loop will print the value of i 10 times. Change range(10) to range(100) and it will run 100 times instead. So you can see that the running time grows linearly with the input size.
So we can write the following table:
- 1 unit of time to complete if you have 1 element.
- 2 units of time to complete if you have 2 elements.
- 3 units of time to complete if you have 3 elements.
- 4 units of time to complete if you have 4 elements.
- 5 units of time to complete if you have 5 elements.
- ...
- \(n\) units of time to complete if you have \(n\) elements.
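A classic example of such a linear-time function is a linear search, which checks every element once. Here is a minimal sketch; the names linear_search, items and target are just illustrative:

def linear_search(items, target):
    # Look at each element exactly once, so the running
    # time grows linearly with the number of elements.
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # target not found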
Now let's imagine the following task:
for i in range(10):
    for y in range(10):
        print(i)
This program has \(O(n^2)\) running time because the time it takes grows with the square of the input size.
So we can write the following table:
- 1 unit of time to complete if you have 1 element.
- 4 units of time to complete if you have 2 elements.
- 9 units of time to complete if you have 3 elements.
- 16 units of time to complete if you have 4 elements.
- 25 units of time to complete if you have 5 elements.
- ...
- \(n^2\) units of time to complete if you have \(n\) elements.
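A typical source of \(n^2\) running time is comparing every element with every other element. As a minimal sketch (the function name has_duplicate is just illustrative), a naive duplicate check does exactly that:

def has_duplicate(items):
    # Two nested loops over the same list make about n * n
    # comparisons, so the time grows with the square of n.
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False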
Now let's talk about \(\log n\) time. Similar to the two examples above, imagine a function that behaves like this:
- 1 unit of time to complete if you have 2 elements.
- 2 units of time to complete if you have 4 elements.
- 3 units of time to complete if you have 8 elements.
- 4 units of time to complete if you have 16 elements.
- 5 units of time to complete if you have 32 elements.
- ...
- \(n\) units of time to complete if you have \(2^n\) elements.
So if you analyze this pattern you'll see that doubling the number of elements only adds one unit of time. Put another way, each step of such an algorithm cuts the remaining input in half.
In asymptotic analysis we just write \(\log(n)\), and the base can basically be anything, because changing the base only changes the result by a constant factor. But since we computer scientists use binary trees and other halving techniques, we end up with \(\log_2(n)\) most of the time, which we simply write as \(\log(n)\).
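Binary search is the textbook example of a \(\log n\) algorithm: each comparison discards half of the remaining input, which matches the table above. Here is a minimal sketch, assuming items is sorted in ascending order:

def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2  # middle of the remaining range
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1   # discard the lower half
        else:
            high = mid - 1  # discard the upper half
    return -1  # target not found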
We mentioned \(n \log n\) time at the beginning of the article, which is called linearithmic time. You can construct an \(n \log n\) algorithm like this:
a loop that runs n times {
    a program that runs in O(log n) time
}
A loop runs \(n\) times, and a \(\log n\) algorithm runs inside that loop, so the overall algorithm is \(O(n \log n)\).
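For example, running the binary_search sketch from above once per element follows exactly this pattern. Assuming queries also holds \(n\) values, the sketch below does \(n\) searches of \(O(\log n)\) each, for \(O(n \log n)\) total:

def search_all(items, queries):
    # The loop runs n times and each binary_search call
    # takes O(log n) time, so the total is O(n log n).
    return [binary_search(items, q) for q in queries]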
More resources
You can learn more about this here