See the Data Structures & Algorithms Must-Know series, Part 1

3. Time & Space Complexity (PART 2)

Best- and worst-case time complexity

Jump straight to the 🌰 (a Go implementation; it should not be hard to read):

func find(arr []int, x int) int {
	pos := -1
	for i := 0; i < len(arr); i++ {
		if arr[i] == x {
			pos = i
		}
	}
	return pos
}

Given an unordered array arr, it searches for the value x and returns its index, or -1 if it is not found.

What is the complexity? 🙌 O(n)

P.S. Here, n represents the length of the array, that is, len(arr).

However, finding a single piece of data in an array does not always require traversing the entire array: we may find it partway through and end the loop early.

The above code can be optimized as follows:

func find(arr []int, x int) int {
	pos := -1
	for i := 0; i < len(arr); i++ {
		if arr[i] == x {
			pos = i
			break
		}
	}
	return pos
}

So, the question is: after this optimization (with break), is the time complexity of the code still O(n)?

Can the analysis we did in the last lecture still solve this problem?

🙌 It can’t…

So we continue by introducing three concepts: best-case time complexity, worst-case time complexity, and average-case time complexity.

Let’s first look at best- and worst-case time complexity; the 🌰 above is fairly easy to analyze:

  • Best case: x happens to be the first element, so it is found in one step, O(1).
  • Worst case: x is found only at the very end, or not found at all, O(n).

Refer to play.golang.org/p/zYTQRcx9i…

In general, best-case and worst-case time complexity is relatively simple to analyze.

Average-case time complexity

“Average” is actually a statistical term: the measure of central tendency of a set of data. To analyze the average-case time complexity of the 🌰 above, we also need to bring probability into the calculation. Let’s keep plodding along in 🐷 mode, one value at a time:

For example, suppose arr = []int{1, 2, 3, 4, 5, …, n}

Let’s say the probability that x is in the array at all is 1/2, and, if present, it is equally likely to be at any of the n positions.

  • Found at position 1 => 1 loop iteration, probability (1/n) * (1/2)
  • Found at position 2 => 2 loop iterations, probability (1/n) * (1/2)
  • …
  • Found at position n => n loop iterations, probability (1/n) * (1/2)
  • Not found => n loop iterations, probability 1/2

Summing the weighted iteration counts gives 1·(1/(2n)) + 2·(1/(2n)) + … + n·(1/(2n)) + n·(1/2) = (n + 1)/4 + n/2 = (3n + 1)/4. Dropping the coefficients and constants, the weighted-average time complexity of this code is still O(n).

In fact, in most cases we don’t need to distinguish between best-, worst-, and average-case time complexity; one complexity is enough (as in the examples from the previous post). We only need to distinguish them when the complexity for different inputs differs by orders of magnitude.

Amortized time complexity

👇 introduces a more advanced concept, amortized time complexity, and its corresponding analysis method, amortized analysis.

What’s the difference between amortized and average? (Don’t want to learn…)

Wake up and see 🌰 below

func insert(oArr []int, count int, new int) (nArr []int) {
	if len(oArr) < count {
		nArr = append(oArr, new)
	} else {
		sum := 0
		for i := 0; i < len(oArr); i++ {
			sum = sum + oArr[i]
		}
		nArr = []int{sum, new}
	}
	return
}

This function inserts data into an array:

  1. If len(oArr) < count, insert the data directly into the array.
  2. If the array is full (len(oArr) == count), loop through it to compute the sum, empty the array, place the sum at the first position, and then insert the new data.

Attention! The length of oArr can only be less than or equal to count.

This is just an example; nobody would actually write it this way… Refer to play.golang.org/p/pYU1Tmvk5…

Obviously, different inputs lead to different orders of complexity, so let’s first analyze best & worst:

🙌 Best O(1) Worst O(n)

The average-case analysis is a little more involved. We can divide the inputs into n cases in which the array still has room, each O(1), plus one “extra” case in which the array is full when we insert, which is O(n). Assuming each of these n+1 cases is equally likely, the overall “average complexity” is (1 + 1 + … + 1 + n) / (n + 1) = 2n / (n + 1), which is O(1).

Working it out case by case like this is a real pain!! My head hurts…

In fact, going by intuition, there is no need to write out the formula at all for the 🌰 above. If you think about it, there is only one O(n) case, and by the definition of “average” (the central tendency of the data), that one case is completely negligible. So O(1) is fairly obvious.

Therefore, for the complexity analysis of such special scenarios, we do not need to enumerate every possible input and its probability and then compute a weighted average, as in the average-complexity analysis above.

For this special scenario, we introduce a simpler analysis method: amortized analysis. The time complexity obtained through amortized analysis is called amortized time complexity.

Wherever amortized analysis can be applied, the amortized time complexity is generally equal to the best-case time complexity.

The amortized time is equal to the best-case time

The amortized time is equal to the best-case time

(Important things to say three times, you can also forget the above analysis, <( ̄▽ ̄)/)

Although many data structures and algorithm books go to great lengths to distinguish between average time complexity and amortized time complexity, I personally believe that amortized time complexity is a special kind of average time complexity, and we don’t need to spend much effort to distinguish them.

Conclusion

If the complexity differs by orders of magnitude across different inputs, we analyze the best-case, worst-case, and average-case time complexity separately.

Amortized time complexity is a special kind of average time complexity. In general, amortized time complexity is equal to best-case time complexity.

With the introduction of these concepts, we can more fully represent the efficiency of a piece of code.