Sorting questions come up in a lot of interviews — bubble sort, binary search, quicksort, and so on. If you only memorize the code without understanding the principle, the interviewer can easily trip you up, and I've felt that myself. Take bubble sort: one pass finds the maximum, so how do you find the second largest? Just run one more pass? Or compare against the maximum — is that enough? And what about algorithmic complexity: if the array is large and contains duplicate numbers, is the result still correct? How do we handle that?
Keywords: second largest, algorithmic complexity, correctness
Let’s first look at bubble sort.
```javascript
function getSecondBigNum(arr) {
  let temp;
  // Two bubble passes: after them, the largest value sits at the end
  // and the second largest sits just before it.
  for (let i = 0; i < 2; i++) {
    for (let j = 0; j < arr.length - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        temp = arr[j];
        arr[j] = arr[j + 1];
        arr[j + 1] = temp;
      }
    }
  }
  return arr[arr.length - 2];
}

var arr = [8, 6, 2, 7, 3];
console.log(getSecondBigNum(arr)); // 7
```
Two passes do the job: the time cost is about 2n, i.e. O(n), so efficiency is fine, and testing with [8, 6, 2, 7, 3] returns the right answer. But one detail was overlooked: the question never says the values in the array are unique, so for [8, 8, 6, 2, 7, 3] the function returns 8 — the wrong result if we want the second distinct value.
So you run the two passes, then check whether the last two values are equal; if they are, you run a third bubble pass, and keep going until they differ — you'll always find the answer eventually. But then the interviewer asks: do you know the complexity of this? In the worst case — a large array with many duplicates — it degrades to O(n²), which is inefficient.
Think again: since the first pass already moves the maximum to the end, let's remove every copy of the maximum, and bubbling what remains costs about 3n in total, which is more or less acceptable. The code is as follows:
```javascript
function getSecondBigNum(arr) {
  let temp;
  let newArr = [];
  // First bubble pass: the maximum ends up at arr[arr.length - 1]
  for (let j = 0; j < arr.length - 1; j++) {
    if (arr[j] > arr[j + 1]) {
      temp = arr[j];
      arr[j] = arr[j + 1];
      arr[j + 1] = temp;
    }
  }
  // Copy everything that is not equal to the maximum into newArr
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] !== arr[arr.length - 1]) {
      newArr.push(arr[i]);
    }
  }
  // One more bubble pass: newArr's maximum (the second largest
  // distinct value) moves to the end
  for (let j = 0; j < newArr.length - 1; j++) {
    if (newArr[j] > newArr[j + 1]) {
      temp = newArr[j];
      newArr[j] = newArr[j + 1];
      newArr[j + 1] = temp;
    }
  }
  return newArr[newArr.length - 1];
}

var arr = [8, 8, 8, 8, 6, 2, 7, 3];
console.log(getSecondBigNum(arr)); // 7
```
Disadvantages: it introduces a new array, which increases the space complexity. We could instead compare against the maximum directly during the second pass; and in fact the first round only needs to find the maximum, not sort anything — a single tracking variable is enough.
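Following that idea, here is a minimal sketch (the function name `getSecondBigNum2` is mine, not from the original): a single linear scan with two tracking variables, skipping duplicates of the maximum so the second *distinct* value is returned — O(n) time, O(1) extra space.

```javascript
// One linear scan with two tracking variables; duplicates of the
// maximum are skipped so the second distinct value is returned.
// (Sketch only — assumes the array has at least two distinct values.)
function getSecondBigNum2(arr) {
  let max = -Infinity;
  let second = -Infinity;
  for (const num of arr) {
    if (num > max) {
      second = max; // old maximum becomes the runner-up
      max = num;
    } else if (num > second && num < max) {
      second = num; // bigger than the runner-up, but not a duplicate of max
    }
  }
  return second;
}

console.log(getSecondBigNum2([8, 8, 6, 2, 7, 3])); // 7
```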
So much for the optimization ideas so far ~~~
Since we're on sorting anyway, let's pull the other sorting algorithms in too ~
- Quick sort
Ideas:
– Pick an element A from the array at random to use as the pivot
– Compare every other element with A: smaller ones go to its left, larger ones to its right
– After one pass, everything to the left of A is smaller than A and everything to the right is larger
– Recursively apply the same process to the left and right parts
The code is as follows (let's not worry about how Ruan Yifeng's well-known online version explains it — this is just one way to write the algorithm):
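The quicksort code did not survive in this copy of the post, so here is a minimal sketch that follows the four steps above (it copies arrays on every call, so it is for illustration, not production):

```javascript
// Minimal recursive quicksort following the steps above:
// pick a pivot, partition into smaller/larger, recurse on both halves.
function quickSort(arr) {
  if (arr.length <= 1) return arr; // base case: nothing to sort
  const pivotIndex = Math.floor(arr.length / 2);
  const pivot = arr[pivotIndex];
  const left = [];
  const right = [];
  for (let i = 0; i < arr.length; i++) {
    if (i === pivotIndex) continue; // pivot is placed back later
    if (arr[i] < pivot) left.push(arr[i]);
    else right.push(arr[i]);
  }
  return quickSort(left).concat([pivot], quickSort(right));
}

console.log(quickSort([8, 6, 2, 7, 3])); // [2, 3, 6, 7, 8]
```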
- Selection sort
One of the most "stable" sorting algorithms in terms of time complexity, because whatever input goes in, it always costs O(n²)… so the smaller the data set, the better. Its only real advantage is that it uses no extra memory — it sorts in place.
- Insertion sort
Like bubble sort, insertion sort also has an optimized variant: binary insertion sort, which uses binary search to locate the insertion point.
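For reference, a minimal sketch of plain insertion sort (the binary-search variant only changes how the insertion point is found, not the shifting cost):

```javascript
// Insertion sort: grow a sorted prefix, shifting larger elements
// one slot right to open a gap for the current value.
function insertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {
    const current = arr[i];
    let j = i - 1;
    while (j >= 0 && arr[j] > current) {
      arr[j + 1] = arr[j]; // shift right
      j--;
    }
    arr[j + 1] = current; // drop current into the gap
  }
  return arr;
}

console.log(insertionSort([8, 6, 2, 7, 3])); // [2, 3, 6, 7, 8]
```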
These algorithms are actually quite interesting, but they're easy to forget when writing from scratch — especially the loop bounds and subscripts in the three methods above; I always mix them up. When I have time I'll do a brief comparison with real data ~~~ There are plenty of other sorting algorithms online — Shell sort, merge sort, heap sort, and so on — which I won't introduce one by one here ~~