33. Search in a rotated sorted array

The integer array nums is sorted in ascending order, and all of its values are distinct.

Before being passed to the function, nums is rotated at some unknown index k (0 <= k < nums.length), so that the array becomes [nums[k], nums[k + 1], …, nums[n - 1], nums[0], nums[1], …, nums[k - 1]] (0-indexed). For example, [0,1,2,4,5,6,7] rotated at index 3 might become [4,5,6,7,0,1,2].

Given the rotated array nums and an integer target, return the index of target if nums contains it, and -1 otherwise.

Example 1: Input: nums = [4,5,6,7,0,1,2], target = 0 Output: 4
Example 2: Input: nums = [4,5,6,7,0,1,2], target = 3 Output: -1
Example 3: Input: nums = [1], target = 0 Output: -1

Method one: binary search

Ideas and Algorithms

For a sorted array, binary search can be used to look up an element.

But here the array as a whole is not sorted; the rotation only guarantees that parts of it are sorted. Can we still use binary search? The answer is yes.

The key observation is that when we split the array down the middle into a left part and a right part, at least one of the two parts must be sorted. For example, if we split [4,5,6,7,0,1,2] after 6, we get [4, 5, 6] and [7, 0, 1, 2]; the left part [4, 5, 6] is sorted, while the right part is not.

This observation lets us, at each step of an otherwise ordinary binary search, check which of the two halves [l, mid] and [mid + 1, r] created by the split position mid is sorted, and use the sorted half to decide how to move the search bounds. Because that half is sorted, we can tell whether target lies inside it:

If [l, mid - 1] is a sorted range and target lies in [nums[l], nums[mid]), we should narrow the search to [l, mid - 1]; otherwise we search [mid + 1, r]. If [mid, r] is a sorted range and target lies in [nums[mid + 1], nums[r]], we should narrow the search to [mid + 1, r]; otherwise we search [l, mid - 1].
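For instance, tracing the search on Example 1's input nums = [4,5,6,7,0,1,2] with target = 0: the first mid is index 3 (value 7); the left half [4,5,6,7] is sorted, but 0 does not lie in [4, 7), so we search the right half. The next mid is index 5 (value 1); now the right half [1,2] is sorted, but 0 does not lie in (1, 2], so we search the left half, find nums[4] == 0, and return 4.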

class Solution {
    public int search(int[] nums, int target) {
        int n = nums.length;
        if (n == 0) {
            return -1;
        }
        if (n == 1) {
            return nums[0] == target ? 0 : -1;
        }
        int l = 0, r = n - 1;
        while (l <= r) {
            int mid = (l + r) / 2;
            if (nums[mid] == target) {
                return mid;
            }
            if (nums[0] <= nums[mid]) {
                // The left half [0, mid] is sorted.
                if (nums[0] <= target && target < nums[mid]) {
                    r = mid - 1;
                } else {
                    l = mid + 1;
                }
            } else {
                // The right half [mid, n - 1] is sorted.
                if (nums[mid] < target && target <= nums[n - 1]) {
                    l = mid + 1;
                } else {
                    r = mid - 1;
                }
            }
        }
        return -1;
    }
}

Complexity analysis

Time complexity: O(log n), where n is the size of the nums array. The whole algorithm is a single binary search, so its time complexity is O(log n).

Space complexity: O(1). We only need constant space for variables.

144. Preorder traversal of a binary tree

Given the root node of a binary tree, root, return the preorder traversal of its nodes' values.

Method one: recursion

Ideas and Algorithms

First, we need to understand what preorder traversal of a binary tree is: we visit the root node, then traverse its left subtree, then its right subtree, and each subtree is traversed in exactly the same way, until the whole tree has been visited. The process is therefore naturally recursive, and we can simulate it directly with a recursive function.

Define preorder(root) as the procedure that traverses the subtree rooted at root and appends the visited values to the answer. By definition, we first add root's value to the answer, then recursively call preorder(root.left) to traverse root's left subtree, and then recursively call preorder(root.right) to traverse root's right subtree. The recursion terminates when it reaches an empty node.
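For example, for a tree whose root 1 has no left child and a right child 2, which in turn has a left child 3, the calls unfold as preorder(1), preorder(null), preorder(2), preorder(3), producing the answer [1, 2, 3].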

class Solution {
    public List<Integer> preorderTraversal(TreeNode root) {
        List<Integer> res = new ArrayList<Integer>();
        preorder(root, res);
        return res;
    }

    public void preorder(TreeNode root, List<Integer> res) {
        if (root == null) {
            return;
        }
        res.add(root.val);
        preorder(root.left, res);
        preorder(root.right, res);
    }
}

Time complexity: O(n), where n is the number of nodes in the binary tree. Each node is traversed exactly once.

Space complexity: O(n). This is the recursion stack overhead; it is O(log n) on average and O(n) in the worst case, when the tree degenerates into a chain.

Method two: iteration

Ideas and Algorithms

We can also implement the recursive method iteratively. The two approaches are equivalent; the difference is that recursion maintains a stack implicitly, while the iterative version must simulate that stack explicitly. The remaining details are the same, as shown in the code below.

class Solution {
    public List<Integer> preorderTraversal(TreeNode root) {
        List<Integer> res = new ArrayList<Integer>();
        if (root == null) {
            return res;
        }
        Deque<TreeNode> stack = new LinkedList<TreeNode>();
        TreeNode node = root;
        while (!stack.isEmpty() || node != null) {
            // Walk down the left spine, visiting each node and pushing it.
            while (node != null) {
                res.add(node.val);
                stack.push(node);
                node = node.left;
            }
            // Backtrack and continue with the right subtree.
            node = stack.pop();
            node = node.right;
        }
        return res;
    }
}

Complexity analysis

Time complexity: O(n), where n is the number of nodes in the binary tree. Each node is traversed exactly once.

Space complexity: O(n). This is the overhead of the explicit stack during iteration; it is O(log n) on average and O(n) in the worst case, when the tree degenerates into a chain.

Method 3: Morris traversal

Ideas and Algorithms

There is a clever way to perform a preorder traversal in linear time using only constant space. The method was first proposed by J. H. Morris in his 1979 paper "Traversing Binary Trees Simply and Cheaply", and is therefore called Morris traversal.

The core idea of Morris traversal is to use the tree's many null pointers to reduce the extra space to a constant. For preorder traversal, the rules are as follows:

Maintain a pointer to the current node, initialized to root;

If the left child of the current node is empty, add the current node to the answer and traverse the right child of the current node;

If the left child of the current node is not empty, find the current node's inorder predecessor within its left subtree:

If the right child of the predecessor is empty, add the current node's value to the answer, set the predecessor's right child to the current node, and update the current node to its left child.

If the right child of the predecessor is the current node, reset that right child to empty and update the current node to its right child.

Repeat steps 2 and 3 until the traversal is complete.
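To see how the threading works, consider a three-node tree with root 1, left child 2, and right child 3. Starting at 1, its inorder predecessor in the left subtree is 2 and 2's right child is empty, so we output 1, set 2.right = 1, and move to 2. Node 2 has no left child, so we output 2 and follow 2.right back to 1. This time the predecessor's right child is already 1, so we reset 2.right to null and move to 1's right child 3. Node 3 has no left child, so we output 3 and finish with the preorder sequence [1, 2, 3].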

class Solution {
    public List<Integer> preorderTraversal(TreeNode root) {
        List<Integer> res = new ArrayList<Integer>();
        if (root == null) {
            return res;
        }
        TreeNode p1 = root, p2 = null;
        while (p1 != null) {
            p2 = p1.left;
            if (p2 != null) {
                // Find the inorder predecessor of p1 in its left subtree.
                while (p2.right != null && p2.right != p1) {
                    p2 = p2.right;
                }
                if (p2.right == null) {
                    // First visit: record p1, thread the predecessor to p1, go left.
                    res.add(p1.val);
                    p2.right = p1;
                    p1 = p1.left;
                    continue;
                } else {
                    // Second visit: remove the thread.
                    p2.right = null;
                }
            } else {
                // No left subtree: record p1 and go right.
                res.add(p1.val);
            }
            p1 = p1.right;
        }
        return res;
    }
}

Complexity analysis

Time complexity: O(n), where n is the number of nodes in the binary tree. A node without a left subtree is accessed only once, and a node with a left subtree is accessed twice.

Space complexity: O(1). We only modify pointers that already exist in the tree (its null right-child pointers), so only constant extra space is required.

718. Longest repeating subarray

Given two integer arrays A and B, return the length of the longest common subarray of the two arrays.

Input: A = [1,2,3,2,1], B = [3,2,1,4,7] Output: 3 Explanation: The longest common subarray is [3,2,1].

We are asked to compute the length of the longest common subarray of the two arrays. Note that the array lengths do not exceed 1000 and that subarrays must be contiguous in the original arrays.

It is easy to think of a brute-force solution: enumerate the starting position i in array A and the starting position j in array B, then compute the longest common prefix k of A[i:] and B[j:]. The final answer is the maximum over all of these longest common prefixes.
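To make that idea concrete, here is a minimal sketch of the brute force (the class name BruteForce is only illustrative and is not part of the methods below). It runs in O(N × M × min(N, M)) time, which is why the faster methods are needed:

class BruteForce {
    // Enumerate every pair of starting positions (i, j) and extend the common
    // prefix of A[i:] and B[j:] element by element.
    public int findLength(int[] A, int[] B) {
        int ans = 0;
        for (int i = 0; i < A.length; i++) {
            for (int j = 0; j < B.length; j++) {
                int k = 0;
                while (i + k < A.length && j + k < B.length && A[i + k] == B[j + k]) {
                    k++;
                }
                ans = Math.max(ans, k);
            }
        }
        return ans;
    }
}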

Method 1: dynamic programming

The brute force recomputes many common prefixes. Let dp[i][j] denote the length of the longest common prefix of A[i:] and B[j:]; then dp[i][j] = dp[i + 1][j + 1] + 1 when A[i] == B[j], and 0 otherwise, and the answer is the maximum over all dp[i][j].

class Solution {
    public int findLength(int[] A, int[] B) {
        int n = A.length, m = B.length;
        // dp[i][j] = length of the longest common prefix of A[i:] and B[j:].
        int[][] dp = new int[n + 1][m + 1];
        int ans = 0;
        for (int i = n - 1; i >= 0; i--) {
            for (int j = m - 1; j >= 0; j--) {
                dp[i][j] = A[i] == B[j] ? dp[i + 1][j + 1] + 1 : 0;
                ans = Math.max(ans, dp[i][j]);
            }
        }
        return ans;
    }
}

Complexity analysis

Time complexity: O(N × M).

Space complexity: O(N × M).

Method two: sliding window

class Solution {
    public int findLength(int[] A, int[] B) {
        int n = A.length, m = B.length;
        int ret = 0;
        // Align the start of B with each position i of A.
        for (int i = 0; i < n; i++) {
            int len = Math.min(m, n - i);
            int maxlen = maxLength(A, B, i, 0, len);
            ret = Math.max(ret, maxlen);
        }
        // Align the start of A with each position i of B.
        for (int i = 0; i < m; i++) {
            int len = Math.min(n, m - i);
            int maxlen = maxLength(A, B, 0, i, len);
            ret = Math.max(ret, maxlen);
        }
        return ret;
    }

    public int maxLength(int[] A, int[] B, int addA, int addB, int len) {
        int ret = 0, k = 0;
        for (int i = 0; i < len; i++) {
            if (A[addA + i] == B[addB + i]) {
                k++;
            } else {
                k = 0;
            }
            ret = Math.max(ret, k);
        }
        return ret;
    }
}

Complexity analysis

Time complexity: O((N + M) × min(N, M)).

Space complexity: O(1).

Method three: binary search + hash

Ideas and Algorithms

If arrays A and B have a common subarray of length k, then they also have common subarrays of every length j <= k. The answer is therefore monotone in the length, so we can binary search for the largest k.

To keep the time complexity low, at each step of the binary search we can use hashing to decide whether A and B share a subarray of the candidate length.

Noting that the elements of the arrays are less than 100, we use the Rabin-Karp algorithm to hash the subarrays. Specifically, we choose a prime number base, and the hash of a sequence S is:


\mathrm{hash}(S) = \sum_{i=0}^{|S|-1} \textit{base}^{|S|-(i+1)} \times S[i]

Intuitively, you can think of S as a number written in base base (most significant digit on the left, least significant on the right); the value of that number is its hash. Since this value is usually very large, we take it modulo another prime, mod.

When we want to hash every subarray of length len in a sequence S, we can use a sliding-window-style update to obtain all of these hashes in linear time. For example, if we already have the hash of S[0:len] and want the hash of S[1:len+1], we use the following formula:


\mathrm{hash}(S[1:len+1]) = (\mathrm{hash}(S[0:len]) - \textit{base}^{len-1} \times S[0]) \times \textit{base} + S[len]
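As a tiny worked instance of this formula, take base = 113 and S = [1, 2, 3] with len = 2 (the modulo is omitted because the numbers are small): hash(S[0:2]) = 1 × 113 + 2 = 115, and the update gives hash(S[1:3]) = (115 - 113 × 1) × 113 + 3 = 2 × 113 + 3 = 229, which matches hashing [2, 3] from scratch.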

class Solution {
    int mod = 1000000009;
    int base = 113;

    public int findLength(int[] A, int[] B) {
        // Binary search for the largest len such that check(A, B, len) holds.
        int left = 1, right = Math.min(A.length, B.length) + 1;
        while (left < right) {
            int mid = (left + right) >> 1;
            if (check(A, B, mid)) {
                left = mid + 1;
            } else {
                right = mid;
            }
        }
        return left - 1;
    }

    public boolean check(int[] A, int[] B, int len) {
        // Hash every window of length len in A and store the hashes in a set.
        long hashA = 0;
        for (int i = 0; i < len; i++) {
            hashA = (hashA * base + A[i]) % mod;
        }
        Set<Long> bucketA = new HashSet<Long>();
        bucketA.add(hashA);
        long mult = qPow(base, len - 1);
        for (int i = len; i < A.length; i++) {
            hashA = ((hashA - A[i - len] * mult % mod + mod) % mod * base + A[i]) % mod;
            bucketA.add(hashA);
        }
        // Slide a window of length len over B and look for a matching hash.
        long hashB = 0;
        for (int i = 0; i < len; i++) {
            hashB = (hashB * base + B[i]) % mod;
        }
        if (bucketA.contains(hashB)) {
            return true;
        }
        for (int i = len; i < B.length; i++) {
            hashB = ((hashB - B[i - len] * mult % mod + mod) % mod * base + B[i]) % mod;
            if (bucketA.contains(hashB)) {
                return true;
            }
        }
        return false;
    }

    public long qPow(long x, long n) {
        // Fast exponentiation: x^n mod mod.
        long ret = 1;
        while (n != 0) {
            if ((n & 1) != 0) {
                ret = ret * x % mod;
            }
            x = x * x % mod;
            n >>= 1;
        }
        return ret;
    }
}

Complexity analysis

Time complexity: O((M + N) log(min(M, N))).

Space complexity: O(N).

56. Merge intervals

The array intervals is a collection of intervals, where a single interval is intervals[i] = [starti, endi]. Merge all overlapping intervals and return a non-overlapping array of intervals that exactly covers all of the intervals in the input.

Example 1: Input: intervals = [[1,3],[2,6],[8,10],[15,17]] Output: [[1,6],[8,10],[15,17]] Explanation: Intervals [1,3] and [2,6] overlap, so they are merged into [1,6].
Example 2: Input: intervals = [[1,4],[4,5]] Output: [[1,5]] Explanation: Intervals [1,4] and [4,5] are considered overlapping.

Method 1: Sort

Train of thought

If we sort the intervals by their left endpoints, then the intervals that can be merged must be contiguous in the sorted list: any group of intervals that can be combined into one larger interval ends up adjacent after sorting.

algorithm

We use array merged to store the final answer.

First, we sort the intervals in ascending order by their left endpoints. Then we add the first interval to the merged array and consider each subsequent interval in turn (Example 1 is traced after the two cases below):

If the left endpoint of the current interval is greater than the right endpoint of the last interval in merged, the two do not overlap, and we append the current interval to the end of merged;

Otherwise, they overlap, and we update the right endpoint of the last interval in merged to the larger of that endpoint and the right endpoint of the current interval.
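Tracing Example 1 through these steps: after sorting, the list is [1,3], [2,6], [8,10], [15,17], and we start with merged = [[1,3]]. For [2,6], 2 ≤ 3, so the intervals overlap and the last interval becomes [1,6]. For [8,10], 8 > 6, so it is appended as a new interval, and likewise [15,17], giving [[1,6],[8,10],[15,17]].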

Proof of correctness

The correctness of the algorithm can be proved by contradiction: suppose that in the sorted array two intervals that should have been merged were not. Then there exist indices i < j < k such that the pair (a[i], a[k]) can be merged, but neither (a[i], a[j]) nor (a[j], a[k]) can. This means the three intervals satisfy the following inequalities:


a[i].end < a[j].start    (a[i] and a[j] cannot be merged)
a[j].end < a[k].start    (a[j] and a[k] cannot be merged)
a[i].end ≥ a[k].start    (a[i] and a[k] can be merged)

Combining these inequalities with the fact that a[j].start ≤ a[j].end, we get:


a[i].end < a[j].start ≤ a[j].end < a[k].start

This contradicts a[i].end ≥ a[k].start, so the assumption cannot hold. Therefore, all intervals that can be merged must be contiguous in the sorted list.

class Solution {
    public int[][] merge(int[][] intervals) {
        if (intervals.length == 0) {
            return new int[0][2];
        }
        Arrays.sort(intervals, new Comparator<int[]>() {
            public int compare(int[] interval1, int[] interval2) {
                return interval1[0] - interval2[0];
            }
        });
        List<int[]> merged = new ArrayList<int[]>();
        for (int i = 0; i < intervals.length; ++i) {
            int L = intervals[i][0], R = intervals[i][1];
            if (merged.size() == 0 || merged.get(merged.size() - 1)[1] < L) {
                merged.add(new int[]{L, R});
            } else {
                merged.get(merged.size() - 1)[1] = Math.max(merged.get(merged.size() - 1)[1], R);
            }
        }
        return merged.toArray(new int[merged.size()][]);
    }
}

Complexity analysis

Time complexity: O(n log n), where n is the number of intervals. Apart from the sorting overhead, we only need one linear scan, so the dominant cost is the O(n log n) sort.

Space complexity: O(log n), where n is the number of intervals. This counts only the extra space used beyond what stores the answer; O(log n) is the space required by sorting.

113. Path Sum II

Given the root node of a binary tree, root, and an integer targetSum, find all paths from the root node to the leaf nodes whose node values sum to targetSum.

A leaf node is a node that has no children.

Method 1: Depth-first search

class Solution {
    List<List<Integer>> ret = new LinkedList<List<Integer>>();
    Deque<Integer> path = new LinkedList<Integer>();

    public List<List<Integer>> pathSum(TreeNode root, int sum) {
        dfs(root, sum);
        return ret;
    }

    public void dfs(TreeNode root, int sum) {
        if (root == null) {
            return;
        }
        path.offerLast(root.val);
        sum -= root.val;
        if (root.left == null && root.right == null && sum == 0) {
            ret.add(new LinkedList<Integer>(path));
        }
        dfs(root.left, sum);
        dfs(root.right, sum);
        path.pollLast();
    }
}

Complexity analysis

Time complexity: O(N^2), where N is the number of nodes in the tree. In the worst case, the upper half of the tree is a chain, the lower half is a complete binary tree, and every root-to-leaf path satisfies the condition. In that case the number of paths is O(N) and each path contains O(N) nodes, so copying all paths into the answer costs O(N^2).

Space complexity: O(N), where N is the number of nodes in the tree. The space complexity mainly depends on the stack space overhead, the number of elements in the stack does not exceed the number of nodes in the tree.

Method two: breadth-first search

class Solution {
    List<List<Integer>> ret = new LinkedList<List<Integer>>();
    Map<TreeNode, TreeNode> map = new HashMap<TreeNode, TreeNode>();

    public List<List<Integer>> pathSum(TreeNode root, int sum) {
        if (root == null) {
            return ret;
        }
        Queue<TreeNode> queueNode = new LinkedList<TreeNode>();
        Queue<Integer> queueSum = new LinkedList<Integer>();
        queueNode.offer(root);
        queueSum.offer(0);
        while (!queueNode.isEmpty()) {
            TreeNode node = queueNode.poll();
            int rec = queueSum.poll() + node.val;
            if (node.left == null && node.right == null) {
                if (rec == sum) {
                    getPath(node);
                }
            } else {
                if (node.left != null) {
                    map.put(node.left, node);
                    queueNode.offer(node.left);
                    queueSum.offer(rec);
                }
                if (node.right != null) {
                    map.put(node.right, node);
                    queueNode.offer(node.right);
                    queueSum.offer(rec);
                }
            }
        }
        return ret;
    }

    public void getPath(TreeNode node) {
        List<Integer> temp = new LinkedList<Integer>();
        while (node != null) {
            temp.add(node.val);
            node = map.get(node);
        }
        Collections.reverse(temp);
        ret.add(new LinkedList<Integer>(temp));
    }
}

Complexity analysis

Time complexity: O(N^2), where N is the number of nodes in the tree. The analysis is the same as in method 1.

Space complexity: O(N), where N is the number of nodes in the tree. The space complexity is dominated by the hash table, which stores the parent of every node except the root; the number of elements in the queue also never exceeds the number of nodes in the tree.