It is the "golden September, silver October" hiring season, and every major platform is full of posts from people sharing their results: offers received, interviews failed. Many have landed offers from companies they love, while a few who missed out on the giant they were hoping for are understandably frustrated. So how do you get into a big company, and what should you prepare?
A ByteDance director recently summarized the areas he feels matter most in interviews, and the summary is helpful for interviews elsewhere as well. I have organized it below and hope it helps more readers.
Directory:
1. Network
2. Java Basics & Containers & Synchronization & Design Patterns
3. Java Virtual Machine & Memory Structure & GC & Class Loading & Four References & Dynamic Proxy
4. Android Basics & Performance Optimization & Framework
5. Android Modularization & Hot Fix & Hot Update & Packaging & Obfuscation & Compression
6. Audio & Video & FFmpeg & Player
1. Network
Network protocol model
Application layer: handles application-specific details (HTTP, FTP, DNS)
Transport layer: provides end-to-end communication between two hosts (TCP and UDP)
Network layer: controls packet transmission and routing (IP)
Link layer: the operating system device drivers and network interface cards (NICs)
TCP vs UDP
TCP: connection-oriented; reliable; ordered; byte-stream oriented; slower; heavyweight; full duplex. Suitable for file transfer and web browsing
- Full duplex: user A can send messages to user B while user B is also sending messages to user A
- Half duplex: user A can send messages to user B, but user B cannot send to user A at the same time (only one direction at a time)
UDP: connectionless; unreliable; unordered; message (datagram) oriented; fast; lightweight. Suitable for instant messaging and video calls
TCP three-way handshake
A: Can you hear me? B: I can hear you. Can you hear me? A: I can hear you. Let’s go
Both A and B need to confirm two things: "you can hear me" and "I can hear you" — which is why three handshakes are needed.
TCP four-way wave (connection teardown)
A: I’m done. B: I know. Wait, I may not be finished
B may not be finished when it receives A's "I'm done", so it cannot immediately reply with its own end sign; it can only tell A once it is actually finished.
GET vs POST
GET parameters are placed in the URL; POST parameters are placed in the request body. GET may be less secure because the parameters are exposed in the URL
HTTPS
HTTP is the Hypertext Transfer Protocol and transmits data in plaintext; HTTPS encrypts HTTP traffic using the SSL protocol
HTTP default port 80; HTTPS uses port 443 by default
Advantages: security. Disadvantages: extra time cost, SSL certificate fees; the encryption is not absolute, but it is much better than plain HTTP
2. Java Basics & Containers & Synchronization & Design Patterns
StringBuilder, StringBuffer, +, and String.concat for concatenating Strings:
- StringBuffer is thread-safe; StringBuilder is not
- + is actually implemented with StringBuilder, so + is fine outside loops but should not be used inside loop bodies, because a new StringBuilder is created on every iteration
- Time cost, from fastest to slowest: StringBuilder < StringBuffer < concat < +
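As a quick illustration of the point about loops, here is a minimal sketch (the method and parameter names are only illustrative):

```java
public final class ConcatDemo {
    // Concatenating with + inside a loop creates a new StringBuilder (and String)
    // on every iteration, so the cost grows quadratically with the number of parts.
    static String joinWithPlus(String[] parts) {
        String result = "";
        for (String part : parts) {
            result += part; // compiles roughly to: new StringBuilder().append(result).append(part).toString()
        }
        return result;
    }

    // Reusing a single StringBuilder keeps the work linear.
    static String joinWithBuilder(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String part : parts) {
            sb.append(part);
        }
        return sb.toString();
    }
}
```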
Java generic type erasure
- Generics that are part of the class structure, such as those on member variables and method signatures, are kept in the class file and not erased
- Generics on container instances (e.g., the element type of a collection) are erased at runtime
ArrayList vs LinkedList
ArrayList
Array-based implementation. Fast random access: O(1); slow insertion/deletion: O(n). The initial capacity is 10, and it expands via System.arraycopy
LinkedList
Based on a doubly linked list. Slow lookup: O(n); fast insertion/deletion: O(1). Also encapsulates queue and stack operations
HashMap vs Hashtable
HashMap
- Based on an array plus linked lists; the array is the body of the HashMap, and the linked lists exist to resolve hash collisions
- When hash collisions occur and a linked list grows beyond a threshold, the table is resized; in Java 8 the linked list is converted to a red-black tree to improve performance. Keys and values may be null
Hashtable
- Uses the same basic data structure as HashMap
- Keys and values cannot be null
- Thread-safe (its methods are synchronized)
ArrayMap, SparseArray
ArrayMap
1. Implemented with two arrays: one stores hashes, the other stores key-value pairs. During expansion only the arrays need to be copied; the hash table does not need to be rebuilt. 2. High memory efficiency. 3. Not suitable for large data sets (roughly up to 1000 entries) because keys are located by binary search
SparseArray
1. Also based on two arrays, with int as the key (avoiding boxing). 2. Not suitable for large data sets (roughly up to 1000 entries) because keys are located by binary search
The volatile keyword
- Can only modify variables; used for variables that may be accessed by multiple threads at the same time
- Equivalent to a lightweight synchronized: volatile guarantees ordering (it forbids instruction reordering) and visibility, while synchronized additionally guarantees atomicity
- Variables live in main memory, and each thread has its own working memory; a variable has a copy in each thread's working memory, and the thread operates directly on that copy
- Writes to a volatile variable are immediately synchronized back to main memory, keeping the variable visible to other threads
Double-checked locking singleton: why volatile?
1. The problem volatile solves: another thread may find instance != null while instance has not actually finished initializing
2. instance = new Instance() is broken down into three steps: 1. allocate memory; 2. initialize the object; 3. point the reference at the memory
3. volatile forbids instruction reordering, ensuring step 3 happens only after step 2 (a minimal sketch follows)
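A minimal sketch of the double-checked locking singleton described above (the class name is illustrative):

```java
public class Singleton {
    // volatile forbids reordering of "initialize the object" and "publish the reference",
    // so other threads can never observe a non-null but half-constructed instance.
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, with the lock held
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```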
wait vs sleep
- sleep is a static method of Thread and can be called anywhere
- wait is an instance method of Object and can only be called inside a synchronized block; otherwise an IllegalMonitorStateException is thrown
- sleep does not release the lock on the shared resource; wait does
Lock vs synchronized
- synchronized is a Java keyword and a built-in language feature; Lock is an interface
- synchronized releases the lock automatically; a Lock must be released manually, so it is typically wrapped in try/finally and released in finally (see the sketch below)
- A thread waiting for a synchronized lock cannot be interrupted; waiting for a Lock can be interrupted
- Lock can improve the efficiency of read/write operations by multiple threads
- When lock contention is heavy, Lock performs noticeably better than synchronized
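A minimal sketch of the manual-release pattern mentioned above, using ReentrantLock (the class and field names are illustrative):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final Lock lock = new ReentrantLock(); // pass true to the constructor for a fair lock
    private int count;

    public void increment() {
        lock.lock();            // unlike synchronized, the lock is not released automatically
        try {
            count++;
        } finally {
            lock.unlock();      // always release in finally, even if the body throws
        }
    }
}
```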
Reentrant lock
- Definition: once a thread has acquired a lock, it can enter other blocks synchronized on the same lock / acquire the same lock again without re-applying for it, and directly execute the relevant code
- ReentrantLock and synchronized are both reentrant locks
Fair lock
- Definition: The thread that waits the longest gets the lock first
- An unfair lock does not guarantee which thread will acquire the lock. Synchronized is an unfair lock
- ReentrantLock is an unfair lock by default, but it can be constructed as a fair lock
Optimistic locks and pessimistic locks
- Pessimistic lock: once a thread obtains the lock, other threads will suspend and wait. It is suitable for the scenario with frequent write operations. Synchronized is a pessimistic lock
- Optimistic lock: no lock is taken when there is no conflict; when updating, the thread checks whether the data has changed (expired) in the meantime, and skips the update if it has. Suitable for read-heavy scenarios
- Optimistic locking with CAS (Compare And Swap): when updating, compare against the expected original value; if they differ, the data is stale and the update is not applied
- Optimistic lock implementations: AtomicInteger, AtomicLong, AtomicBoolean (a small sketch follows)
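A small sketch of the CAS idea with AtomicInteger; the explicit retry loop mirrors what incrementAndGet does internally (class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic update: read the current value, compute the new one, and only
    // write it if nobody changed the value in between; otherwise retry.
    public int incrementOptimistically() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // CAS failed: another thread updated the value first, so loop and try again.
        }
    }
}
```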
Four necessary conditions for deadlock
- Mutual exclusion
- Hold and wait
- No preemption
- Circular wait
Principle of synchronized
- Each object has a monitor lock; a synchronized block begins with a monitorenter instruction and ends with monitorexit
- wait/notify depend on the monitor, which is why calling them outside a synchronized block throws IllegalMonitorStateException
3. Java Virtual Machine & Memory Structure & GC & Class Loading & Four References & Dynamic Proxy
JVM
- Definition: it can be understood as a virtual computer that interprets its own bytecode instruction set and maps it to the local CPU or OS instruction set; the upper layer only needs to care about Class files, independent of the operating system, which enables cross-platform execution
- Kotlin is also compiled to Class files, so it can run on the JVM
JVM memory model
- Java threads communicate with each other through shared memory, with each thread having its own local memory
- The shared variable is stored in main memory, and the thread makes a copy of the shared variable into local memory
- The volatile keyword works together with this memory model to ensure memory visibility and ordering
JVM memory structure
Thread private:
1. Program counter: records the address of the bytecode instruction being executed (undefined when a native method is executing)
2. Virtual machine stack: when a method executes, the data it needs is stored in a stack frame and pushed onto the stack
3. Native method stack: same as the virtual machine stack, but for native methods
Thread sharing:
1. Heap: stores Java object instances; the main area for GC; generational GC divides the heap into the young generation and the old generation
2. Method area: stores class information, the constant pool, static variables, and other data
GC
Areas collected: only the heap and the method area; data in thread-private areas is destroyed when the thread ends and needs no collection
Recycling type:
1. Objects in the heap
- The generational GC approach divides the heap into the young generation and the old generation
- Young generation: newly created objects go here; collected with a copying algorithm
- Old generation: large new objects and long-lived objects go here; collected with a mark-sweep algorithm
2. Class information and constant pool in the method area
To determine whether an object is recyclable:
1. Reference counting; drawback: cannot handle circular references
2. Reachability analysis: search from GC Roots; objects that are unreachable can be collected
GC ROOT
1. Objects referenced from the VM stack / native method stack
2. Objects referenced by constants or static variables in the method area
Four types of references
- Strong references: never collected while reachable
- Soft references: collected when memory is insufficient
- Weak references: collected at the next GC
- Phantom (virtual) references: the object cannot be retrieved through the reference; used to be notified when the object is collected (a small sketch follows)
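A small sketch showing how the four reference types are created; the byte arrays are just placeholder payloads, and actual collection behavior depends on the VM and available memory:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        byte[] strong = new byte[1024];  // strong reference: never collected while reachable

        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024]);  // collected only when memory is low
        WeakReference<byte[]> weak = new WeakReference<>(new byte[1024]);  // collected at the next GC

        // A phantom reference can never return the object (get() always returns null);
        // it is only useful with a ReferenceQueue, to be notified when the object is collected.
        ReferenceQueue<byte[]> queue = new ReferenceQueue<>();
        PhantomReference<byte[]> phantom = new PhantomReference<>(new byte[1024], queue);

        System.gc();                                            // only a hint to the VM
        System.out.println("weak after GC: " + weak.get());     // very likely null
        System.out.println("soft after GC: " + soft.get());     // usually still non-null
        System.out.println("phantom get(): " + phantom.get());  // always null
        System.out.println("strong length: " + strong.length);  // still reachable
    }
}
```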
ClassLoader
Class lifecycle:
1. Loading; 2. Verification; 3. Preparation; 4. Resolution; 5. Initialization; 6. Using; 7. Unloading
Class loading process:
1. Loading: obtain the binary byte stream of the class, generate the runtime storage structure in the method area, and create a Class object in memory
2. Verification: check that the Class byte stream meets the VM's requirements
3. Preparation: allocate static variables and set their default values
4. Resolution: replace symbolic references in the constant pool with direct references
5. Initialization: execute static blocks and assign class variables
Class loading timing:
1. Creating an instance of the class with new
2. Calling a static method of the class
3. Accessing a static variable of the class (except compile-time constants, which are placed into the constant pool)
Class loader: Responsible for loading class files
Classification:
1. Bootstrap class loader – has no parent
2. Extension class loader – its parent is the bootstrap class loader
3. System (application) class loader – its parent is the extension class loader
Parental delegation model:
When a class is to be loaded, the request is delegated upward to the parent loader layer by layer; a loader only attempts the load itself when its parent fails
Why is it called the "parent" delegation model? Leaving custom loaders aside, the system class loader has to ask two ancestor loaders in turn, hence the name
Two classes are considered the same only if, in addition to the class information, their class loaders are the same
Advantages:
- Prevents duplicate loading: if a parent loader has already loaded the class, it does not need to be loaded again
- Security: prevents core library classes from being tampered with
Dynamic proxy principle and implementation
- The handler passed to a dynamic proxy must implement the InvocationHandler interface
- Proxy.newProxyInstance is used to create the proxy object dynamically
- Application in Retrofit: Retrofit uses a dynamic proxy to generate a proxy object for each request interface we define, and that proxy implements the request (a minimal sketch follows)
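A minimal JDK dynamic proxy sketch in the spirit of what Retrofit does; the ApiService interface and the handler body are illustrative, not Retrofit's real code:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class DynamicProxyDemo {
    // A hypothetical request interface, similar in spirit to a Retrofit service interface.
    interface ApiService {
        String getUser(String id);
    }

    public static void main(String[] args) {
        InvocationHandler handler = new InvocationHandler() {
            @Override
            public Object invoke(Object proxy, Method method, Object[] methodArgs) {
                // A real framework would inspect the method and its annotations and
                // build an HTTP request here; we just fake a response.
                return "response for " + method.getName() + "(" + methodArgs[0] + ")";
            }
        };

        // Proxy.newProxyInstance creates an object implementing ApiService at runtime;
        // every call on it is routed to the InvocationHandler above.
        ApiService service = (ApiService) Proxy.newProxyInstance(
                ApiService.class.getClassLoader(),
                new Class<?>[]{ApiService.class},
                handler);

        System.out.println(service.getUser("42"));
    }
}
```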
4. Android Basics & Performance Optimization & Framework
Activity launch modes
- standard: the default mode
- singleTop: top-of-stack reuse mode; e.g., a message screen opened from a push notification
- singleTask: in-stack reuse mode; e.g., the app home page
- singleInstance: singleton mode, the Activity lives alone in its own task stack; e.g., the system dialing screen
Details:
- taskAffinity: specifies the task stack name; the default is the application package name
- allowTaskReparenting: allows the Activity to be moved between task stacks
View working Principle
- View hierarchy: DecorView (a FrameLayout) → LinearLayout → title bar + content area
- Calling setContentView places our layout into the content area
The performTraversals method of ViewRootImpl triggers rendering of the View tree and calls, in order:
- performMeasure: traverses measure() of the View tree to determine sizes
- performLayout: traverses layout() to determine positions
- performDraw: traverses draw() to draw the views
Event distribution mechanism
- When a MotionEvent is generated, it is passed in the order Activity -> Window -> DecorView -> View. Within the View tree, event dispatch relies on three methods (a simplified sketch follows this list):
- dispatchTouchEvent: dispatches the event; it is called as soon as the touch event reaches the view, and its return value indicates whether the event was consumed
- onInterceptTouchEvent: decides whether a ViewGroup intercepts the event; once a ViewGroup intercepts an event, onInterceptTouchEvent is not called again for the rest of that event sequence
- onTouchEvent: handles the event; its return value indicates whether the event was handled; if not, the event is passed back to the parent container
Details:
- An event sequence can only be intercepted and consumed by one View
- A plain View has no onInterceptTouchEvent; its dispatchTouchEvent calls onTouchEvent directly
- OnTouchListener has higher priority than onTouchEvent; OnClickListener has the lowest priority
- requestDisallowInterceptTouchEvent lets a child block the parent container's onInterceptTouchEvent
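The relationship between the three methods can be summarized with this simplified model of dispatch; it is a logical sketch under simplifying assumptions (a single child, a plain Object standing in for MotionEvent), not the real framework code:

```java
// A simplified model of ViewGroup event dispatch, not the real Android framework code.
abstract class SimpleViewGroup {
    SimpleViewGroup child;   // the child under the touch point, if any

    public boolean dispatchTouchEvent(Object ev) {
        if (onInterceptTouchEvent(ev) || child == null) {
            // Intercepted (or no child): this view handles the event itself.
            return onTouchEvent(ev);
        }
        // Not intercepted: give the child a chance first; if it does not consume
        // the event, it bubbles back up to this view's onTouchEvent.
        boolean consumed = child.dispatchTouchEvent(ev);
        if (!consumed) {
            consumed = onTouchEvent(ev);
        }
        return consumed;
    }

    protected boolean onInterceptTouchEvent(Object ev) { return false; }

    protected abstract boolean onTouchEvent(Object ev);
}
```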
Window, WindowManager, WMS, SurfaceFlinger
- Window: an abstraction that does not exist as a concrete object; it is presented in the form of Views and implemented by PhoneWindow
- WindowManager: the entry point for accessing Windows from the outside; internally it interacts with WMS via IPC
- WMS: manages the layout and order of window surfaces; runs as a system-level service in a separate process
- SurfaceFlinger: composites the windows maintained by WMS in order and displays them on the screen
View animation, frame animation and property animation
The View animation:
- Acts on a View; can be defined in XML, and the XML form is recommended for readability
- Supports four effects: translate, scale, rotate, and alpha (transparency)
Frame animation:
- AnimationDrawable; prone to OOM
Property animation:
- Can act on any object; can be defined in XML; introduced in Android 3.0; implementing it in code is recommended because it is more flexible
- Includes ObjectAnimator, ValueAnimator, and AnimatorSet
- Time interpolator: Calculates the percentage of current property changes based on the percentage of time elapsed
- Interpolators for uniform speed, acceleration and deceleration are preset in the system
- Type estimator: Calculates the value of the changed attribute based on the percentage of the current attribute change
- The system presets estimators for integer, float, color, and other value types
- Precautions for use:
- Avoid using frame animation, easy OOM
- Stop the animation when the interface is destroyed to avoid memory leaks
- Enable hardware acceleration to improve animation smoothness. Hardware acceleration:
- Offloads part of the CPU's work to the GPU and uses the GPU to complete drawing
- Improves drawing speed both through work distribution and through the drawing mechanism itself
Handler, MessageQueue, and Looper
- Handler: the class developers interact with directly; it holds the MessageQueue and the Looper internally
- MessageQueue: the message queue; internally it stores messages in a singly linked list
- Looper: holds the MessageQueue; it loops checking for new messages, processes them when present, and blocks when there are none
- How blocking is implemented: via the nativePollOnce method, based on the Linux epoll event mechanism
- Why the main thread is not frozen by Looper's blocking: the system keeps waking it up, for example with a UI refresh message roughly every 16 ms
MVC, MVP, MVVM
- MVP: Model handles the data; View controls the UI; Presenter decouples the Activity from the Model
- MVVM: Model processes and stores the data; View controls the UI; ViewModel is the data container
- MVVM is easy to implement with LiveData and ViewModel from the Jetpack architecture components
The Serializable, Parcelable
- Serializable: Java's serialization mechanism, suitable for storage and network transfer. serialVersionUID is used to check whether the class version matches during deserialization; if it does not, deserialization fails
- Parcelable: Android's serialization mechanism, suitable for passing data between components; higher performance because, unlike Serializable, it avoids heavy reflection and the frequent GC it causes
Binder
- The backbone of Android inter-process communication, based on a client-server model
- Uses mmap to reduce data copying: traditional IPC copies user space A -> kernel -> user space B, while mmap maps the kernel buffer into user space B, effectively going from user space A -> user space B directly
- A BinderPool avoids creating multiple Services
IPC methods
- Intent extras / Bundle: the data being transferred must be serializable (Parcelable or Serializable); suitable for communication among the four components
- File sharing: suitable for exchanging simple data with low real-time requirements
- AIDL (Android Interface Definition Language): allows methods to be invoked across processes; essentially a tool the system provides to make Binder easier to use
- Server: declare the interface exposed to clients in an .aidl file, create a Service that implements the AIDL interface, and listen for client connection requests
- Client: bind to the Service, convert the returned Binder object to the AIDL interface, and call its methods
- RemoteCallbackList manages cross-process interface listeners, keyed by the Binder object the client registers with
- Detecting server death: 1. Binder.linkToDeath to set a death recipient; 2. the onServiceDisconnected callback
- Messenger: built on AIDL; the server processes messages serially; mainly used to pass messages; suitable for low-concurrency one-to-many communication
- ContentProvider: built on Binder; suitable for one-to-many data sharing between processes
- Socket: TCP or UDP; suitable for network data exchange
Android startup process
- Press the power button -> load the bootloader into RAM -> the bootloader starts the kernel -> the kernel starts the init process -> init starts Zygote and various daemons ->
- Zygote starts the System Server process, which starts AMS, WMS, and other services -> the Launcher application process is started
App Startup Process
Tapping an application icon in the Launcher -> AMS looks for the application process; if it does not exist, Zygote forks a new one
Process keep-alive
- Process priority: 1. foreground process; 2. visible process; 3. service process; 4. background process; 5. empty process
- How a process gets killed: 1. killed when background memory is low; 2. killed by the manufacturer's power-saving mechanism after being switched to the background; 3. manually cleared by the user
- Keep-alive techniques:
1. Activity boost: keep a 1-pixel Activity alive to raise the process priority to foreground
2. Service boost: start a foreground Service (above API 18 a running notification is shown)
3. Broadcast revival
4. Service revival
5. JobScheduler scheduled tasks
6. Dual-process mutual revival
Network optimization and detection
- Speed: 1. GZIP compression (OkHttp supports it automatically); 2. Protocol Buffers instead of JSON; 3. optimize image/file traffic; 4. connect by IP directly to save DNS resolution time
- Success rate: 1. retry strategy
- Traffic: 1. GZIP compression (OkHttp supports it automatically); 2. Protocol Buffers instead of JSON; 3. optimize image/file traffic; 4. resumable file downloads; 5. caching
- Protocol-layer optimization, such as newer HTTP versions
- Monitoring: Charles for packet capture, Network Monitor for traffic monitoring
UI stuck optimization
- Reduce layout hierarchy and control complexity to avoid overdrawing
- Use include, merge, and ViewStub
- Optimize the drawing process: avoid creating objects or performing time-consuming operations inside draw
Memory leakage scenarios and Prevention
1. Static variables and singletons holding strong references to lifecycle-bound data or resources (including EventBus registrations); 2. Resources such as cursors and IO streams not being released; 3. Animations tied to the UI not being stopped when the UI is destroyed; 4. Inner classes holding references to their outer class
- Avoiding Handler inner-class leaks: 1. use a static inner class plus a weak reference; 2. clear the message queue when the UI is destroyed (see the sketch after this list)
- Detection: Android Studio Profiler
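A common sketch of the "static inner class + weak reference" fix mentioned above, assuming the Handler lives inside an Activity (class names are illustrative):

```java
import android.app.Activity;
import android.os.Handler;
import android.os.Looper;
import android.os.Message;
import java.lang.ref.WeakReference;

public class MainActivity extends Activity {
    private final SafeHandler handler = new SafeHandler(this);

    // Static inner class: holds no implicit reference to the outer Activity.
    private static class SafeHandler extends Handler {
        private final WeakReference<MainActivity> activityRef;

        SafeHandler(MainActivity activity) {
            super(Looper.getMainLooper());
            this.activityRef = new WeakReference<>(activity);
        }

        @Override
        public void handleMessage(Message msg) {
            MainActivity activity = activityRef.get();
            if (activity == null || activity.isFinishing()) {
                return;                     // Activity already gone: drop the message
            }
            // ... update the UI through 'activity'
        }
    }

    @Override
    protected void onDestroy() {
        // Clear pending messages/callbacks so the queue does not keep the Handler alive.
        handler.removeCallbacksAndMessages(null);
        super.onDestroy();
    }
}
```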
LeakCanary principle
- Uses weak references and a reference queue to monitor whether objects are reclaimed
- For example, an Activity is watched when it is destroyed; if it has not been collected, a GC is triggered and the object is checked again
OOM scenarios and Avoidance
- Loading large images: scale the image down (see the sketch after this list)
- Memory leaks: avoid the leak scenarios above
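A common downsampling sketch using BitmapFactory's inJustDecodeBounds and inSampleSize; the helper class and target sizes are illustrative:

```java
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class BitmapUtil {
    // Decode a resource image downsampled to roughly reqWidth x reqHeight,
    // instead of loading the full-size bitmap into memory.
    public static Bitmap decodeSampled(Resources res, int resId, int reqWidth, int reqHeight) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;              // read only the dimensions first
        BitmapFactory.decodeResource(res, resId, options);

        int inSampleSize = 1;
        while (options.outWidth / (inSampleSize * 2) >= reqWidth
                && options.outHeight / (inSampleSize * 2) >= reqHeight) {
            inSampleSize *= 2;                          // halve the decoded size until it fits
        }

        options.inSampleSize = inSampleSize;
        options.inJustDecodeBounds = false;             // now decode the downsampled bitmap
        return BitmapFactory.decodeResource(res, resId, options);
    }
}
```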
5. Android Modularization & Hot Fix & Hot Update & Packaging & Obfuscation & Compression
Dalvik and ART
Dalvik
- A Java virtual machine Google designed specifically for the Android platform; it runs .dex files directly and suits systems with limited memory and processing speed
- The JVM instruction set is stack-based, while the Dalvik instruction set is register-based, which executes code more efficiently
ART
- Dalvik converts bytecode to machine code every time the app runs; ART converts it to machine code when the app is installed, so execution is faster
- ART's stored machine code takes up more space: trading space for time
APK packaging process
1. aapt packages the resource files and generates R.java; aidl generates Java files
2. Java files are compiled to .class files
3. Project and third-party .class files are converted to a .dex file
4. The dex file, .so libraries, compiled resources, and raw resources are packaged into an APK
5. The APK is signed
6. Resources are aligned (zipalign) to reduce runtime memory
App Installation Process
- First the APK is unpacked, and the resources, .so libraries, and so on are placed into the application directory
- Dalvik processes the dex into an odex; ART processes the dex into an OAT file
- The OAT file contains both the dex and the machine code compiled at install time
Componentized routing implementation
ARouter: uses APT to parse @Route and other annotations and, together with JavaPoet, generates a routing table, i.e., the mapping between routes and Activities
6. Audio & Video & FFmpeg & Player
FFmpeg
An audio/video editing App can be implemented on top of FFmpeg commands:
AAC, MP3, and H.264 encoders are integrated and compiled in
Principles of player
Video playback pipeline: container (MP4, FLV) -> demuxing -> compressed streams (MP3/AAC, H.264/H.265) -> decoding -> raw data (PCM, YUV) -> audio/video synchronization -> rendering and playback
Audio and video synchronization:
- Choose a reference clock source: one of the audio timestamp, the video timestamp, or an external clock (audio is usually chosen because people are more sensitive to audio glitches; ijk also defaults to audio)
- Synchronization is achieved by waiting or dropping frames so that the video stream stays aligned with the reference clock
IjkPlayer principle
It integrates MediaPlayer, ExoPlayer, and IjkPlayer, where IjkPlayer is based on FFmpeg's ffplay
Audio output modes: AudioTrack, OpenSL ES; video output modes: NativeWindow and OpenGL ES
About the algorithm
An algorithm is an accurate and complete description of a solution to a problem: a series of clear instructions for solving it. It represents a systematic strategy for producing the required output from a given, well-specified input within a finite amount of time. If an algorithm is flawed or unsuitable for a problem, executing it will not solve the problem. Different algorithms may accomplish the same task with different amounts of time, space, or effort, and the quality of an algorithm can be measured by its space complexity and time complexity.
The instructions in an algorithm describe a computation that, when run, starts from an initial state and a (possibly empty) initial input, passes through a finite and well-defined series of states, and eventually produces an output, stopping in a final state. The transition from one state to another is not necessarily deterministic; some algorithms, including randomized algorithms, involve random input.
Algorithm 1: Quicksort algorithm
Quicksort is a sorting algorithm developed by Tony Hoare. On average, sorting n items requires O(n log n) comparisons. In the worst case O(n²) comparisons are needed, but that is uncommon. In practice quicksort is usually significantly faster than other O(n log n) algorithms because its inner loop can be implemented efficiently on most architectures.
Quicksort uses a divide-and-conquer strategy to split one list into two sub-lists.
Algorithm steps:
1. Pick an element from the sequence, called the pivot.
2. Reorder the array so that all elements smaller than the pivot come before it and all elements larger than it come after it (equal elements can go to either side). After this partition, the pivot is in its final position. This is called the partition operation.
3. Recursively sort the subsequence of elements smaller than the pivot and the subsequence of elements greater than the pivot.
The base case of the recursion is a sequence of size zero or one, which is already sorted. The algorithm always terminates because each iteration puts at least one element (the pivot) into its final position.
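A minimal in-place quicksort following the steps above (Lomuto partition, last element as the pivot):

```java
public final class QuickSort {
    public static void sort(int[] a) {
        sort(a, 0, a.length - 1);
    }

    private static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) {
            return;                       // size 0 or 1: already sorted
        }
        int p = partition(a, lo, hi);     // pivot ends up in its final position
        sort(a, lo, p - 1);               // sort elements smaller than the pivot
        sort(a, p + 1, hi);               // sort elements larger than the pivot
    }

    // Lomuto partition: everything <= pivot moves before it, the rest after it.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] <= pivot) {
                swap(a, i++, j);
            }
        }
        swap(a, i, hi);
        return i;
    }

    private static void swap(int[] a, int i, int j) {
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}
```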
Algorithm two: heap sort algorithm
Heapsort is a sorting algorithm designed around the heap data structure. A heap is an almost-complete binary tree that satisfies the heap property: the key of every child node is always smaller (or larger) than that of its parent.
The average time complexity of heapsort is O(n log n).
Algorithm steps:
1. Build a heap H[0..n-1]
2. Swap the head of the heap (the maximum) with the tail of the heap
3. Reduce the size of the heap by 1 and call shift_down(0) to sift the new top element into place
4. Repeat steps 2 and 3 until the heap size is 1
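A minimal heapsort following the steps above, using a max-heap and an iterative sift-down:

```java
public final class HeapSort {
    public static void sort(int[] a) {
        int n = a.length;
        // Step 1: build a max-heap over a[0..n-1].
        for (int i = n / 2 - 1; i >= 0; i--) {
            siftDown(a, i, n);
        }
        // Steps 2-4: repeatedly move the maximum to the end and restore the heap.
        for (int end = n - 1; end > 0; end--) {
            swap(a, 0, end);      // swap the heap head (max) with the heap tail
            siftDown(a, 0, end);  // heap size shrinks by one; sift the new top down
        }
    }

    // Move a[i] down until the heap property holds within a[0..size-1].
    private static void siftDown(int[] a, int i, int size) {
        while (true) {
            int left = 2 * i + 1, right = 2 * i + 2, largest = i;
            if (left < size && a[left] > a[largest]) largest = left;
            if (right < size && a[right] > a[largest]) largest = right;
            if (largest == i) return;
            swap(a, i, largest);
            i = largest;
        }
    }

    private static void swap(int[] a, int i, int j) {
        int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
    }
}
```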
Algorithm three: merge sort
Merge sort is an efficient sorting algorithm based on the merge operation. It is a very typical application of divide and conquer.
Algorithm steps:
1. Allocate a buffer whose size is the sum of the two sorted sequences; it will hold the merged sequence
2. Set two pointers to the start positions of the two sorted sequences
3. Compare the elements the two pointers point to, copy the smaller one into the merge buffer, and advance that pointer
4. Repeat step 3 until one pointer reaches the end of its sequence
5. Copy all remaining elements of the other sequence directly to the end of the merged sequence
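A minimal top-down merge sort following the steps above:

```java
import java.util.Arrays;

public final class MergeSort {
    public static void sort(int[] a) {
        if (a.length < 2) return;
        int mid = a.length / 2;
        int[] left = Arrays.copyOfRange(a, 0, mid);
        int[] right = Arrays.copyOfRange(a, mid, a.length);
        sort(left);
        sort(right);
        merge(a, left, right);
    }

    // Merge two sorted arrays into 'out' using two pointers, as in the steps above.
    private static void merge(int[] out, int[] left, int[] right) {
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];  // take the smaller element
        }
        while (i < left.length) out[k++] = left[i++];    // copy whatever remains on the left
        while (j < right.length) out[k++] = right[j++];  // or on the right
    }
}
```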
Algorithm four: binary search algorithm
Binary search is an algorithm for finding a particular element in a sorted array. The search starts with the middle element: if it equals the target, the search ends; if the target is greater or smaller than the middle element, the search continues in the corresponding half of the array, again beginning from its middle element. If the remaining range becomes empty at any step, the element is not present. Each comparison halves the search range, so the time complexity is O(log n).
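A minimal iterative binary search on a sorted int array:

```java
public final class BinarySearch {
    // Returns the index of 'target' in the sorted array 'a', or -1 if it is absent.
    public static int search(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {                     // empty range => not found
            int mid = lo + (hi - lo) / 2;      // avoids overflow of (lo + hi)
            if (a[mid] == target) {
                return mid;
            } else if (a[mid] < target) {
                lo = mid + 1;                  // keep the upper half
            } else {
                hi = mid - 1;                  // keep the lower half
            }
        }
        return -1;
    }
}
```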
Algorithm 5: BFPRT(Linear search algorithm)
The problem BFPRT solves is a classic one: selecting the k-th largest (or k-th smallest) element from a sequence of n elements. Through clever analysis, BFPRT guarantees linear time complexity even in the worst case. Its idea is similar to quicksort's; to keep O(n) worst-case complexity, the algorithm's five authors applied a delicate refinement.
Algorithm steps:
1. Divide the n elements into groups of 5, giving ceil(n/5) groups.
2. Find the median of each group by any method, such as insertion sort.
3. Recursively call the selection algorithm to find the median of all the medians from the previous step, and set it as x (when the number of medians is even, take the smaller of the two middle ones).
4. Partition the array around x. Let k be the number of elements less than or equal to x, so n - k elements are greater than x.
5. If i == k, return x; if i < k, recursively find the i-th smallest among the elements less than x; if i > k, recursively find the (i - k)-th smallest among the elements greater than x.
Termination condition: when n = 1, return the i-th element.
Algorithm 6: DFS (Depth-first Search)
Depth-first search (DFS) is a search algorithm. It traverses the nodes of a tree (or graph) along its depth, exploring each branch as deep as possible. When all edges of node v have been explored, the search backtracks to the node from which v was discovered. This continues until all nodes reachable from the source node have been discovered. If undiscovered nodes remain, one of them is selected as a new source node and the process repeats until every node has been visited. DFS is a blind (uninformed) search.
Depth-first search is a classic algorithm in graph theory. It can produce a topological ordering of the target graph, which conveniently solves many graph-theory problems such as maximum-path problems. A stack is generally used to help implement DFS.
Depth-first traversal graph algorithm steps:
1. Visit vertex v;
2. Starting from the unvisited neighbors of v in turn, perform a depth-first traversal of the graph, until every vertex that has a path to v has been visited;
3. If unvisited vertices remain in the graph, start another depth-first traversal from one of them, until every vertex in the graph has been visited.
The description above may seem abstract, so here is an example:
After visiting a starting vertex v in the graph, DFS proceeds from v to any adjacent vertex w1; from w1 it visits an unvisited vertex w2 adjacent to w1; from w2 it makes a similar visit, and so on, until it reaches a vertex u all of whose neighbors have already been visited.
Next, it backtracks one step to the vertex visited just before and checks whether that vertex has other unvisited neighbors. If so, it visits one of them and then continues from that vertex in the same way; if not, it backtracks another step and keeps searching. This process repeats until every vertex in the connected graph has been visited. (A small sketch follows.)
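A minimal iterative DFS sketch on an adjacency-list graph, using an explicit stack instead of recursion (the printing is just to show the visit order):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public final class Dfs {
    // Iterative depth-first traversal from 'start'; an explicit stack replaces recursion.
    public static void dfs(List<List<Integer>> adj, int start, boolean[] visited) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int v = stack.pop();
            if (visited[v]) continue;
            visited[v] = true;
            System.out.println("visit " + v);
            for (int w : adj.get(v)) {
                if (!visited[w]) stack.push(w);   // go deeper along unvisited neighbors
            }
        }
    }

    // Cover disconnected graphs: restart from every unvisited vertex (step 3 above).
    public static void dfsAll(List<List<Integer>> adj) {
        boolean[] visited = new boolean[adj.size()];
        for (int v = 0; v < adj.size(); v++) {
            if (!visited[v]) dfs(adj, v, visited);
        }
    }
}
```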
Algorithm 7: BFS(Breadth-first Search)
Breadth-first search (BFS) is a graph search algorithm. Simply put, BFS starts at the root node and traverses the nodes of the tree (or graph) level by level along its breadth. The algorithm stops when all nodes have been visited. BFS is also a blind search. A queue is generally used to help implement BFS.
Algorithm steps:
1. Put the root node into the queue.
2. Remove the first node from the queue and check whether it is the target.
If it is the target, end the search and return the result.
Otherwise, enqueue all of its direct children that have not yet been checked.
3. If the queue is empty, the whole graph has been examined and the target does not exist; end the search and return "target not found".
4. Repeat step 2.
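A minimal BFS sketch on an adjacency-list graph that reports whether a target vertex is reachable:

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public final class Bfs {
    // Breadth-first search; returns true if 'target' is reachable from 'start'.
    public static boolean bfs(List<List<Integer>> adj, int start, int target) {
        boolean[] visited = new boolean[adj.size()];
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(start);                       // step 1: put the root node into the queue
        visited[start] = true;
        while (!queue.isEmpty()) {              // step 3: empty queue => target not found
            int v = queue.poll();               // step 2: take the first node and check it
            if (v == target) return true;
            for (int w : adj.get(v)) {
                if (!visited[w]) {              // enqueue unvisited direct children
                    visited[w] = true;
                    queue.add(w);
                }
            }
        }
        return false;
    }
}
```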
Algorithm 8: Dijkstra algorithm
Dijkstra's algorithm was proposed by the Dutch computer scientist Edsger Dijkstra. It uses a breadth-first-style expansion to solve the single-source shortest path problem on a directed graph with non-negative weights, producing a shortest path tree. The algorithm is often used in routing and as a subroutine of other graph algorithms.
The input of the algorithm consists of a weighted directed graph G and a source vertex s in G. Let V be the set of all vertices in G. Every edge in the graph is an ordered pair of vertices: (u, v) means there is a path from vertex u to v. We denote the set of all edges in G by E, and edge weights are given by a weight function w: E → [0, ∞]. Thus w(u, v) is the non-negative weight from vertex u to vertex v, which can be thought of as the distance between the two vertices. The weight of a path between any two vertices is the sum of the weights of its edges. Given vertices s and t in V, Dijkstra's algorithm finds the lowest-weight path from s to t (i.e., the shortest path). The algorithm can also find the shortest paths from one vertex s to every other vertex in the graph. For directed graphs without negative weights, Dijkstra's algorithm is the fastest known single-source shortest path algorithm.
Algorithm steps
1. Initially S = {v0} and T = {the remaining vertices}. The distance value of each vertex vi in T is:
d(v0, vi) = the weight of the arc <v0, vi>, if that arc exists
d(v0, vi) = infinity, if it does not
2. From T, select the vertex w with the smallest distance value that is not in S and add it to S
3. Update the distance values of the remaining vertices in T: if adding w as an intermediate vertex shortens the distance from v0 to vi, update that distance value
4. Repeat steps 2 and 3 until S contains all vertices
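A minimal Dijkstra sketch over an adjacency matrix with a priority queue; Integer.MAX_VALUE stands for "no edge":

```java
import java.util.Arrays;
import java.util.PriorityQueue;

public final class Dijkstra {
    // Single-source shortest paths on a non-negative weighted graph stored as an
    // adjacency matrix (graph[u][v] = weight, or Integer.MAX_VALUE if there is no edge).
    public static int[] shortestPaths(int[][] graph, int source) {
        int n = graph.length;
        int[] dist = new int[n];
        boolean[] done = new boolean[n];              // the set S of finalized vertices
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;

        // Each entry is {distance, vertex}, ordered by distance.
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
        pq.add(new int[]{0, source});

        while (!pq.isEmpty()) {
            int v = pq.poll()[1];                     // step 2: closest vertex not yet in S
            if (done[v]) continue;
            done[v] = true;
            for (int w = 0; w < n; w++) {             // step 3: relax edges out of v
                if (graph[v][w] != Integer.MAX_VALUE && !done[w]
                        && dist[v] + graph[v][w] < dist[w]) {
                    dist[w] = dist[v] + graph[v][w];
                    pq.add(new int[]{dist[w], w});
                }
            }
        }
        return dist;
    }
}
```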
Algorithm nine: dynamic programming algorithm
Dynamic programming is a method used in mathematics, computer science, and economics for solving complex problems by breaking the original problem down into relatively simple subproblems. It is typically applied to problems with overlapping subproblems and optimal substructure, and it often takes far less time than naive approaches.
The basic idea behind dynamic programming is simple. Roughly speaking, to solve a given problem we solve its different parts (the subproblems) and combine the subproblem solutions into a solution of the original problem. Many subproblems are very similar, so dynamic programming tries to solve each subproblem only once, reducing the amount of computation: once a given subproblem has been solved, its solution is memorized and stored, so the next time the same subproblem is needed the answer can simply be looked up. This approach is particularly useful when the number of repeated subproblems grows exponentially with the size of the input.
The most classic dynamic programming problem is the knapsack problem.
Key elements:
1. Optimal substructure. A problem has the optimal substructure property (i.e., satisfies the principle of optimality) if its optimal solution contains optimal solutions to its subproblems. Optimal substructure provides an important clue that dynamic programming may apply.
2. Overlapping subproblems. This means that when the problem is solved top-down with a recursive algorithm, the subproblems generated are not always new: some are solved repeatedly. Dynamic programming exploits this by computing each subproblem only once and saving its result in a table; when the subproblem is needed again, the result is simply looked up, achieving higher efficiency. (A small sketch of the 0/1 knapsack follows.)
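A minimal 0/1 knapsack sketch illustrating the two elements above: the dp table captures the optimal substructure, and each subproblem is computed once and then reused:

```java
public final class Knapsack {
    // 0/1 knapsack: maximize total value subject to total weight <= capacity.
    // dp[c] = best value achievable with capacity c, filled in item by item.
    public static int maxValue(int[] weights, int[] values, int capacity) {
        int[] dp = new int[capacity + 1];
        for (int i = 0; i < weights.length; i++) {
            // Iterate capacity downwards so each item is used at most once.
            for (int c = capacity; c >= weights[i]; c--) {
                dp[c] = Math.max(dp[c],                           // skip item i
                                 dp[c - weights[i]] + values[i]); // or take it
            }
        }
        return dp[capacity];   // each subproblem dp[c] is computed once and reused
    }
}
```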
Algorithm 10: Naive Bayes classification algorithm
The naive Bayes classifier is a simple probabilistic classification algorithm based on Bayes' theorem. Bayesian classification is grounded in probabilistic reasoning: how to complete reasoning and decision-making when the presence of various conditions is uncertain and only their probabilities are known. Probabilistic reasoning is the counterpart of deterministic reasoning. The naive Bayes classifier additionally rests on an independence assumption: each feature of a sample is assumed to be uncorrelated with the other features.
A naive Bayes classifier relies on an accurate natural probability model and can achieve very good classification results on supervised learning sample sets. In many practical applications, the parameters of a naive Bayes model are estimated by maximum likelihood; in other words, a naive Bayes model can be used without adopting Bayesian probability or any other Bayesian method.
Quicksort, merge sort, heap sort
So how do you choose in practice? Some selection criteria:
If n is small, use direct insertion sort or simple selection sort. Since direct insertion sort requires more record movement than simple selection sort, simple selection sort is better when the records themselves are large.
If the sequence to be sorted is mostly ordered, direct insertion sort or bubble sort can be used.
If n is large, use an O(n log n) algorithm such as quicksort, heapsort, or merge sort.
Among these, quicksort performs best when the data is randomly distributed (this relates to hardware optimizations that favor quicksort, mentioned in a previous post).
Heapsort needs only one auxiliary slot of space and has no worst case like quicksort's.
Quicksort and heapsort are unstable. If stability is required, use merge sort; you can also combine direct insertion sort with merging: first use direct insertion to produce ordered runs, then merge them. The result is stable, because direct insertion sort is stable.