What is an algorithm?
An algorithm is a series of steps for completing a task, like a recipe: do the first step, then the second, and so on. In computer science, there are good algorithms and bad algorithms for the same task, and finding a good one lets the task complete efficiently. A good algorithm needs two qualities: it must be correct, and it must be efficient. Sometimes, though, a result that is merely good enough beats a perfect one, for example when computing the perfect result would take far too long.
Find the cube root
Given a number $n$, how do we find its cube root? We can't always find an exact cube root, so we accept some margin of error.
We can increase a guess $x$ from zero in small steps, check how close $x^3$ is to $n$, and stop at the value whose cube is closest to $n$.
```python
def cuberoot(n):
    inc = 0.001  # step size: the smaller it is, the greater the accuracy
    eps = 0.01   # acceptable margin of error
    ans = 0.0
    # step ans upward until ans**3 is within eps of |n| (or ans overshoots |n|)
    while abs(ans ** 3 - abs(n)) >= eps and ans < abs(n):
        ans += inc
    if n < 0:
        ans *= -1
    return ans
```
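For example (illustrative usage):

```python
print(cuberoot(27))   # ≈ 3.0
print(cuberoot(-27))  # ≈ -3.0
```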
You can guess that when $n$ is large, this algorithm takes a very long time. So is there a better algorithm?
Binary search algorithm
Binary search is a search algorithm that looks for a specific element in an ordered array. Start in the middle of the array and check whether that is the element you want. If it's not, determine whether the target is greater or less than it, then throw away the wrong half. Because this algorithm halves the search area on every step, it is very fast.
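As a reference point, here is a minimal sketch of binary search over a sorted list (illustrative code, not from the original article):

```python
# Binary search in a sorted list: returns the index of target, or -1.
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1   # target is in the right half; discard the left
        else:
            high = mid - 1  # target is in the left half; discard the right
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```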
Applying binary search to the cube-root problem above gives:
```python
def cuberoot(n):
    eps = 0.01               # acceptable margin of error
    low = 0.0                # lower bound of the search range
    high = max(abs(n), 1.0)  # upper bound (at least 1, so cube roots of |n| < 1 are covered)
    ans = (low + high) / 2
    while abs(ans ** 3 - abs(n)) >= eps:
        if ans ** 3 < abs(n):
            low = ans   # the root is in the upper half
        else:
            high = ans  # the root is in the lower half
        ans = (low + high) / 2
    if n < 0:
        ans *= -1
    return ans
```
Compared with the original algorithm, binary search is much faster: where the original needed thousands of iterations, a dozen or so now suffice! But is there an even faster algorithm?
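To see the difference concretely, here is a rough comparison sketch that counts loop iterations (illustrative code; `count_incremental` and `count_binary` are made-up helper names, and the counts in the comments are approximate):

```python
# Count how many iterations each approach needs for the same input.
def count_incremental(n, inc=0.001, eps=0.01):
    ans, steps = 0.0, 0
    while abs(ans ** 3 - n) >= eps and ans < n:
        ans += inc
        steps += 1
    return steps

def count_binary(n, eps=0.01):
    low, high = 0.0, max(n, 1.0)
    ans, steps = (low + high) / 2, 0
    while abs(ans ** 3 - n) >= eps:
        if ans ** 3 < n:
            low = ans
        else:
            high = ans
        ans = (low + high) / 2
        steps += 1
    return steps

print(count_incremental(27))  # on the order of thousands
print(count_binary(27))       # around a dozen or so
```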
Newton’s method
Newton's method is also known as the Newton–Raphson method. In short, Newton's method can quickly find the roots of a polynomial (not just cube roots). For example, to find the square root of 25, first set up the polynomial $p(x) = x^2 - 25$, whose root $r$ satisfies $p(r) = 0$, then differentiate it to get $p'(x) = 2x$. Newton's method tells us that if a number $g$ is close to a root, then $g - \frac{p(g)}{p'(g)}$ is even closer.
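To make that walkthrough concrete, here is a minimal sketch of Newton's method for the square root of 25 (illustrative code, not from the original; `sqrt25` is a made-up name):

```python
# Newton's method for the square root of 25,
# with p(x) = x^2 - 25 and p'(x) = 2x.
def sqrt25(eps=0.01):
    g = 25 / 2  # initial guess
    while abs(g ** 2 - 25) >= eps:
        g = g - (g ** 2 - 25) / (2 * g)  # g - p(g)/p'(g)
    return g

print(sqrt25())  # ≈ 5.0
```

Applying the same idea to the cube root, with $p(x) = x^3 - n$ and $p'(x) = 3x^2$: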
```python
def cuberoot(n):
    eps = 0.01   # acceptable margin of error
    g = n / 3.0  # initial guess
    while abs(g ** 3 - n) >= eps:
        # Newton update: g - p(g)/p'(g) with p(x) = x^3 - n, p'(x) = 3x^2
        g = g - (g ** 3 - n) / (3 * g ** 2)
    return g
```
You can see the code is very compact, yet it is very fast, even faster than binary search!
Big O notation
So how do we describe how fast an algorithm is?
- We need to express how long the algorithm takes as a function of the input size.
- We need to know how fast that running time grows as the input size grows.
Big O notation, also known as asymptotic notation, is a mathematical notation that describes the asymptotic behavior of a function. More precisely, it describes an asymptotic upper bound on the order of magnitude of a function in terms of another (usually simpler) function. In mathematics it is commonly used to describe the remainder term of a truncated infinite series, especially an asymptotic series. In computer science it is very useful for analyzing the complexity of algorithms.
The big O notation describes the worst-case complexity of an algorithm.
Let’s say I have a summation function
```python
def add(n):
    # sums n + (n - 1) + ... + 1
    ans = 0
    while n > 0:
        ans = ans + n
        n = n - 1
    return ans
```
You can see that the function takes $1 + 5n + 1$ steps in total: one for the initial assignment, five per loop iteration (the test, two arithmetic operations, and two assignments), and one for the return. But big O notation only cares about the term that dominates as $n$ increases; the other terms and the coefficients can be ignored. In big O notation this function is $O(n)$, which is linear.
Complexity classification (fast to slow)
| Notation | Name |
|---|---|
| $O(1)$ | constant |
| $O(\log n)$ | logarithmic |
| $O((\log n)^k)$ | polylogarithmic |
| $O(n)$ | linear |
| $O(n \log n)$ | linearithmic |
| $O(n^k)$ | polynomial |
| $O(2^n)$ | exponential |
| $O(n!)$ | factorial |
Addition rule
When a function runs two pieces of code one after another, for example two consecutive loops of different complexity, the complexities add and the dominant term wins: $O(f(n)) + O(g(n)) = O(\max(f(n), g(n)))$; see the sketch below.
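For instance (an illustrative sketch, not from the original):

```python
# Two consecutive loops: O(n) + O(n^2) = O(n^2); the dominant term wins.
def consecutive_loops(n):
    total = 0
    for i in range(n):       # first loop: O(n)
        total += i
    for i in range(n):       # second, nested loop: O(n^2)
        for j in range(n):
            total += i * j
    return total
```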
Multiplication rule
When one piece of code runs inside another, for example a loop nested inside another loop, the complexities multiply: $O(f(n)) \cdot O(g(n)) = O(f(n) \cdot g(n))$, as in the sketch below.
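For instance (an illustrative sketch, not from the original):

```python
# A loop nested in a loop: O(n) * O(n) = O(n^2).
def nested_loops(n):
    total = 0
    for i in range(n):       # outer loop runs n times
        for j in range(n):   # inner loop runs n times per outer iteration
            total += 1
    return total
```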
Other notations
Besides big O notation, there are a few other, less commonly used notations.
Big Ω notation
Big Ω notation means just the opposite of big O notation: $\Omega(f(n))$ says that once $n$ grows past a certain point, the function is always at least a constant multiple of $f(n)$. It gives a lower bound, and puts no upper limit on how long the algorithm may take.
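For example, $f(n) = 3n^2 + 2n$ is $\Omega(n^2)$: for every $n \ge 1$, $3n^2 + 2n \ge n^2$, so beyond that point $f$ always grows at least as fast as a constant multiple of $n^2$.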
Big Θ notation
Big Θ notation is a combination of big O notation and big Ω notation: $\Theta(f(n))$ bounds the function both above and below by constant multiples of $f(n)$.
Big $\Theta$ and big O look similar, but they mean different things: $O(f(n))$ says that as $n$ increases, the function's actual growth rate does not exceed $f(n)$, while $\Theta(f(n))$ says that as $n$ increases, $f(n)$ stays very close to the actual growth rate.
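For example, $3n^2 + 2n$ is both $O(n^2)$ and $\Omega(n^2)$, so it is $\Theta(n^2)$. By contrast, $n$ is $O(n^2)$ but not $\Theta(n^2)$, because $n^2$ is far from its actual growth rate.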