Contents

  • 1. Overview
  • 2. Characteristics
  • (1) Finiteness
  • (2) Definiteness
  • (3) Input
  • (4) Output
  • (5) Effectiveness
  • 3. Elements
  • (1) Operations on data objects
  • (2) Control structure of the algorithm
  • 4. Evaluation
  • (1) Time complexity
  • (2) Space complexity
  • 5. Methods
  • (1) Recurrence method
  • (2) Recursion
  • (3) Exhaustive method
  • (4) Greedy algorithm
  • (5) Greedy algorithm
  • (6) Divide-and-conquer
  • (7) Dynamic programming method
  • (8) Iterative method
  • (9) Branch and bound method
  • (10) Backtracking
  • 6. Description
  • 7. Classification
  • (1) Finite, deterministic algorithms
  • (2) Finite, nondeterministic algorithms
  • (3) Infinite algorithms
  • 8. History

1. Overview

An algorithm is an accurate and complete description of a problem-solving scheme: a finite sequence of clear instructions for solving a problem. An algorithm represents a systematic strategy for solving a problem; in other words, for any input meeting a given specification, it produces the required output in a finite amount of time. If an algorithm is flawed or unsuited to a problem, executing it will not solve the problem. Different algorithms may accomplish the same task with different amounts of time and space, and therefore with different efficiency; the merits of an algorithm can be measured by its space complexity and time complexity.

The instructions in an algorithm describe a computation that, when executed, starts from an initial state and a (possibly empty) initial input, passes through a finite, well-defined series of states, and eventually produces output and halts in a final state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.

The concept of a formal algorithm arose in part from attempts to solve Hilbert's Entscheidungsproblem (decision problem) and from subsequent attempts to define "effective computability" or an "effective method". These attempts included the recursive functions of Kurt Gödel (1930), Jacques Herbrand (1934), and Stephen Cole Kleene (1935), Alonzo Church's lambda calculus (1936), Emil Leon Post's Formulation 1 (1936), and Alan Turing's Turing machine (1937). Even today, there are often situations in which an intuitive idea is difficult to capture as a formal algorithm.

2. Characteristics

An algorithm should have the following five important characteristics:

(1) Finiteness

The finiteness of an algorithm means that the algorithm must terminate after executing a finite number of steps.

(2) Definiteness

Each step of the algorithm must be defined precisely and unambiguously.

(3) Input

An algorithm has zero or more inputs, which describe the initial state of the objects it operates on. Zero inputs means that the initial conditions are determined by the algorithm itself.

(4) Output

An algorithm has one or more outputs, which reflect the result of processing the input data. An algorithm without output is meaningless.

(5) Effectiveness

Every computational step performed in an algorithm can be decomposed into basic executable operations; that is, each step can be completed in a finite amount of time (this property is also called feasibility).

3. Elements

(1) Operations on data objects

The basic operations a computer can perform are described in the form of instructions; the set of instructions a computer system can execute is called the instruction system (instruction set) of that computer. The basic operations of a computer fall into four categories:

1. Arithmetic operations: addition, subtraction, multiplication, and division.
2. Logical operations: AND, OR, NOT, and similar operations.
3. Relational operations: greater than, less than, equal to, and not equal to.
4. Data transfer: input, output, assignment, and other operations.

(2) Control structure of the algorithm

The function of an algorithm depends not only on the operations it selects, but also on the order in which those operations are executed.

4. Evaluation

The same problem can be solved by different algorithms, and the quality of an algorithm affects the efficiency of the algorithm and even of the program as a whole. The aim of algorithm analysis is to select a suitable algorithm and to improve it. An algorithm is evaluated mainly in terms of time complexity and space complexity.

(1) Time complexity

The time complexity of an algorithm is the amount of computational work required to execute it. In general, the number of basic operations the algorithm performs is some function f(n) of the problem size n, and the time complexity is written T(n) = O(f(n)). As the problem size n grows, the growth rate of the algorithm's execution time is proportional to the growth rate of f(n); this is called the asymptotic time complexity.
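
As an illustration (not taken from the original text), the sketch below contrasts a linear scan with a pairwise comparison; the function names and data are made up, and the comments note the rough operation counts f(n) behind T(n) = O(f(n)).

```python
# Two toy functions with different asymptotic time complexity.

def linear_scan(items, target):
    """Checks every element once: roughly f(n) = n comparisons, so T(n) = O(n)."""
    for x in items:                    # executes at most n times
        if x == target:
            return True
    return False

def has_duplicate(items):
    """Compares every pair: roughly f(n) = n*(n-1)/2 comparisons, so T(n) = O(n^2)."""
    n = len(items)
    for i in range(n):                 # outer loop: n times
        for j in range(i + 1, n):      # inner loop: up to n - 1 times
            if items[i] == items[j]:
                return True
    return False

if __name__ == "__main__":
    data = list(range(1000))
    print(linear_scan(data, 999))      # O(n) work
    print(has_duplicate(data))         # O(n^2) work in the worst case
```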

(2) Space complexity

The space complexity of an algorithm is the amount of memory it consumes. It is calculated and expressed in a way similar to time complexity, generally in asymptotic form, and it is usually much simpler to analyze than time complexity.

Besides complexity, several other criteria are used to evaluate an algorithm. Correctness is the most important criterion for judging the merit of an algorithm. Readability refers to how easily the algorithm can be read and understood by people. Robustness, also known as fault tolerance, refers to the algorithm's ability to respond to and handle unreasonable input data.

5. Methods

(1) Recurrence method

The recurrence method is common on sequential computers. It computes each term of a sequence according to a fixed rule, usually deriving the value of a given term from the terms that precede it. The idea is to turn a complex, bulky computation into many repetitions of a simple one, exploiting the computer's speed and tirelessness.
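
For example, here is a minimal sketch of the recurrence idea in Python, computing Fibonacci numbers term by term from the terms that precede them; the function name and the choice of sequence are illustrative, not from the original text.

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, built term by term."""
    if n <= 0:
        return []
    terms = [0, 1][:n]
    while len(terms) < n:
        # The recurrence rule: each new term is the sum of the two preceding terms.
        terms.append(terms[-1] + terms[-2])
    return terms

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```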

(2) Recursion

The programming technique in which a program calls itself is called recursion. A procedure or function that calls itself, directly or indirectly, in its own definition turns a large, complex problem into smaller problems similar to the original; the recursive strategy lets a small amount of code describe the repeated computation needed during the solution process, which greatly reduces the amount of program code. Recursion makes it possible to define an infinite collection of objects with a finite set of statements. In general, recursion requires boundary conditions, a recursive advance phase, and a recursive return phase: while the boundary condition is not satisfied, the recursion advances; when it is satisfied, the recursion returns. Note: (1) recursion means calling itself within a procedure or function; (2) when using a recursive strategy, there must be a clear termination condition, called the recursive exit.
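
A minimal sketch of recursion in Python, assuming the classic factorial example (not taken from the original text); the boundary condition plays the role of the recursive exit described above.

```python
def factorial(n):
    """Compute n! recursively."""
    if n <= 1:                       # boundary condition: the recursive exit
        return 1
    return n * factorial(n - 1)      # recursive advance on a smaller subproblem

print(factorial(5))  # 120
```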

(3) Exhaustive method

The exhaustive method, also known as brute force, works as follows: for the problem to be solved, list all possible cases and check them one by one for whether they meet the requirements of the problem, thereby obtaining its solution. It is commonly used for password cracking, that is, trying candidate codes one by one until the real password is found. For example, a password known to be four digits long and composed entirely of numbers has at most 10,000 combinations, so at most 10,000 attempts will reveal the correct password. In theory this method can crack any kind of password; the only problem is how to shorten the time spent on trial and error. Some people therefore use computers to increase efficiency, while others use dictionaries to narrow the space of candidate passwords.
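
The four-digit password example above can be sketched as follows; the is_correct callback and the secret value are hypothetical, standing in for whatever system checks a candidate password.

```python
def crack_pin(is_correct):
    """Try every 4-digit code 0000..9999 until one is accepted."""
    for guess in range(10000):
        code = f"{guess:04d}"          # zero-padded 4-digit string
        if is_correct(code):
            return code
    return None                        # exhausted all 10,000 possibilities

secret = "0742"                        # hypothetical password
print(crack_pin(lambda code: code == secret))  # "0742"
```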

(4) Greedy algorithm

The greedy algorithm is a simple and fast design technique for certain optimization problems. It proceeds step by step, at each step making the choice that is optimal according to some optimization measure for the current situation, without considering all possible overall configurations; this saves the large amount of time that would be needed to exhaust every possibility in search of the optimal solution. Working top-down, it makes successive greedy choices iteratively, and each greedy choice reduces the problem to a smaller subproblem. Although every step is intended to secure a locally optimal solution, the resulting global solution is not necessarily the best, and the greedy method does not backtrack.
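
A minimal sketch of a greedy choice, using coin change as an illustrative problem (not from the original text): each step takes the largest coin that still fits, and the second call shows a coin system for which the greedy result is not globally optimal.

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Return a list of coins chosen greedily, largest first."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:          # greedy step: take this coin while it fits
            result.append(coin)
            amount -= coin
    return result

print(greedy_change(63))                   # [25, 25, 10, 1, 1, 1]
print(greedy_change(6, coins=(4, 3, 1)))   # [4, 1, 1] -- optimal would be [3, 3]
```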

(5) Greedy algorithm

The greedy algorithm can also be viewed as an improved form of staged processing. Its core is to choose a measure appropriate to the problem, order the inputs according to that measure, and then consider them one at a time in that order: if adding an input to the partial solution that is optimal in the sense of the chosen measure would not yield a feasible solution, that input is discarded. A staged procedure that obtains an optimal solution in the sense of some measure is called a greedy algorithm. For a given problem there are often several plausible measures; most of them look reasonable at first glance, yet the solution that is optimal under them is only a suboptimal solution of the original problem rather than its true optimum. Selecting a measure under which greedy processing does produce an optimal solution is therefore the heart of using a greedy algorithm. In general, choosing such an optimal measure is not easy, but for problems where one can be chosen, the greedy algorithm is particularly effective.
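
A minimal sketch of measure selection, assuming the fractional knapsack problem as the example (not from the original text): with value-per-weight as the measure, the greedy choice does yield an optimal solution. The item data is made up.

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight); return the maximum total value."""
    # The chosen measure: value/weight ratio, best first.
    ordered = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in ordered:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # take as much of this item as fits
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], capacity=50))  # 240.0
```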

(6) Divide-and-conquer

Divide-and-conquer divides a complex problem into two or more identical or similar subproblems, then divides those subproblems into still smaller subproblems, and so on, until the subproblems can be solved directly; the solution of the original problem is then the combination of the solutions of the subproblems. Problems suited to divide-and-conquer generally have the following characteristics:

(1) the problem becomes easy to solve once its size is reduced to a certain point;
(2) the problem can be decomposed into several similar problems of smaller size, that is, it has the optimal-substructure property;
(3) the solutions of the subproblems can be combined into a solution of the original problem;
(4) the subproblems are independent of one another, that is, they share no common subproblems.
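
A minimal sketch of divide-and-conquer, using merge sort as the example (not from the original text): the list is divided into two smaller subproblems, each is solved recursively, and the sub-solutions are combined by merging.

```python
def merge_sort(items):
    if len(items) <= 1:                 # small enough to solve directly
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # divide: two similar, smaller problems
    right = merge_sort(items[mid:])
    return merge(left, right)           # combine the sub-solutions

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7, 3]))   # [1, 2, 3, 5, 7, 9]
```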

(7) Dynamic programming method

Dynamic programming is a method used in mathematics and computer science to solve optimization problems with overlapping subproblems. The basic idea is to decompose the original problem into similar subproblems and to obtain the solution of the original problem from the solutions of the subproblems as the computation proceeds. The idea of dynamic programming underlies many algorithms and is widely used in computer science and engineering.

Dynamic programming is a way of approaching optimization problems rather than a specific algorithm. Unlike the search and numerical methods described above, it does not come with a standard mathematical expression or a single clear-cut solution procedure. Different optimization problems have different natures, and the conditions that determine their optimal solutions differ as well, so dynamic programming must be tailored to the characteristics of each problem; there is no universal dynamic programming algorithm that solves every kind of optimization problem. Beyond a correct understanding of the basic concepts and methods, one must therefore analyze each concrete problem, build a model for it with some imagination, and solve it with appropriate technique.
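
A minimal sketch of dynamic programming, assuming the 0/1 knapsack problem as the example (not from the original text): the table dp stores solutions to overlapping subproblems so each is computed only once. The item data is made up and matches the fractional-knapsack sketch above for comparison.

```python
def knapsack_01(items, capacity):
    """items: list of (value, weight); return the maximum total value."""
    # dp[c] = best value achievable with remaining capacity c.
    dp = [0] * (capacity + 1)
    for value, weight in items:
        # Traverse capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]

print(knapsack_01([(60, 10), (100, 20), (120, 30)], capacity=50))  # 220
```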

(8) Iterative method

The iterative method, also known literally as the "toss and turn" method, is a process of repeatedly using the old value of a variable to derive a new value. Its counterpart is the direct method (a one-step solution), which solves the problem all at once. Iterative methods are divided into exact iteration and approximate iteration; the bisection method and Newton's method belong to the approximate iterative methods. Iteration is a basic way of solving problems with a computer: it exploits the machine's speed and its suitability for repetitive work by having it execute a group of instructions (or a set of steps) over and over, each execution deriving a new value of a variable from its previous value.
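
A minimal sketch of approximate iteration, assuming Newton's method for square roots as the example: the new value of the variable is derived repeatedly from its old value until the change falls below a tolerance. The function name and tolerance are illustrative.

```python
def newton_sqrt(a, tolerance=1e-10):
    """Approximate sqrt(a) for a >= 0 by Newton iteration."""
    x = a if a > 1 else 1.0             # initial guess
    while True:
        new_x = 0.5 * (x + a / x)       # new value derived from the old value
        if abs(new_x - x) < tolerance:  # stop when the update is small enough
            return new_x
        x = new_x

print(newton_sqrt(2.0))                 # 1.41421356...
```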

(9) Branch and bound method

The branch and bound method is a widely applicable algorithmic technique; using it demands considerable skill, and the resulting solution procedures differ from problem to problem. Its basic idea is to search the space of all feasible solutions of a constrained optimization problem. In the implementation, the space of feasible solutions is repeatedly divided into smaller and smaller subsets (branching), and for each subset a lower or upper bound is computed for the values of the solutions it contains (bounding). After each branching, subsets whose bound shows they cannot improve on a known feasible solution are not branched further, so many subsets of solutions (that is, many nodes of the search tree) can be ignored and the search is narrowed. The process continues until a feasible solution is found whose value is no worse than the bound of any remaining subset, so the algorithm generally obtains an optimal solution.

Like the greedy algorithm, this method is used to design algorithms for combinatorial optimization problems. The difference is that it searches the space of possible solutions, so the algorithms it produces have higher time complexity than greedy algorithms; its advantage is that, like the exhaustive method, it guarantees an optimal solution. Yet it is not a blind exhaustive search: during the search, the bounds allow it to abandon subspaces that cannot contain the optimal solution (similar to pruning in artificial intelligence), so it is more efficient than the exhaustive method.
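
A minimal sketch of branch and bound, assuming the 0/1 knapsack problem as the example (not from the original text): each node of the search tree branches on taking or skipping one item, and a fractional-knapsack upper bound prunes subtrees that cannot beat the best solution found so far. The item data is made up and matches the earlier sketches.

```python
def knapsack_branch_and_bound(items, capacity):
    """items: list of (value, weight); return the maximum total value."""
    # Sort by value/weight so the fractional (relaxed) bound is easy to compute.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)

    def bound(i, value, room):
        """Optimistic upper bound: fill the remaining room fractionally."""
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    best = 0

    def search(i, value, room):
        nonlocal best
        if value > best:
            best = value                            # new incumbent solution
        if i == len(items) or bound(i, value, room) <= best:
            return                                  # prune: bound cannot beat best
        v, w = items[i]
        if w <= room:
            search(i + 1, value + v, room - w)      # branch: take item i
        search(i + 1, value, room)                  # branch: skip item i

    search(0, 0, capacity)
    return best

print(knapsack_branch_and_bound([(60, 10), (100, 20), (120, 30)], capacity=50))  # 220
```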

(10) Backtracking

Backtracking (exploration with retreat) is a search method that advances by making choices according to some criterion in order to reach a goal. When, at some step, the exploration finds that the earlier choice was poor or cannot reach the goal, it steps back and makes a different choice. This technique of retreating and trying again when a path leads nowhere is called backtracking, and a state that satisfies the backtracking condition is called a "backtracking point". The basic idea is to explore the solution-space tree, which contains all solutions of the problem, depth first starting from the root. When a node is reached, the algorithm first judges whether that node can contain a solution of the problem: if it can, exploration continues from that node; if it cannot, the algorithm backtracks level by level toward its ancestor nodes. Backtracking is thus a depth-first search of an implicit graph. If backtracking is used to find all solutions of a problem, the search must return to the root with every feasible subtree of the root explored; if only one solution is required, the search can stop as soon as a solution is found.
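
A minimal sketch of backtracking, assuming the classic N-queens puzzle as the example (not from the original text): queens are placed row by row following depth-first search, and when a partial placement cannot be extended, the last choice is undone and the next column is tried.

```python
def solve_n_queens(n):
    solutions, columns = [], []        # columns[r] = column of the queen in row r

    def safe(row, col):
        # Conflict if an earlier queen shares the column or a diagonal.
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(columns))

    def place(row):
        if row == n:                   # all rows filled: record one solution
            solutions.append(list(columns))
            return
        for col in range(n):
            if safe(row, col):
                columns.append(col)    # explore forward
                place(row + 1)
                columns.pop()          # backtrack: undo the choice

    place(0)
    return solutions

print(len(solve_n_queens(8)))          # 92 solutions for the classic 8-queens puzzle
```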

6. Description

There are many ways to describe an algorithm, such as natural language, structured flowcharts, pseudocode, and PAD diagrams; the most common is the flowchart.

7. Classification

Algorithms can be roughly divided into basic algorithms, data-structure algorithms, number-theoretic and algebraic algorithms, computational-geometry algorithms, graph-theoretic algorithms, dynamic programming and numerical analysis, encryption algorithms, sorting algorithms, retrieval algorithms, randomized algorithms, parallel algorithms, the Hermitian deformation model, and random forest algorithms. More broadly, algorithms can be divided into three categories:

(1) Finite, deterministic algorithms

Such algorithms terminate within a finite amount of time. They may take a very long time to perform the assigned task, but they still terminate at some point. The results of such algorithms usually depend on the input values.

(2) Finite, nondeterministic algorithms

Such algorithms terminate in a finite amount of time. However, for a given value (or values), the result of the algorithm is not unique or deterministic.

(3) Infinite algorithms

These are algorithms that do not terminate, either because no terminating condition is defined or because the defined condition cannot be satisfied by the input data. In general, infinite algorithms arise because the termination condition was not defined.

8. History

The Chinese name for the algorithm is traditionally traced to the Zhoubi Suanjing (the Arithmetical Classic of the Gnomon and the Circular Paths of Heaven). The English word "algorithm" comes from the name of the 9th-century Persian mathematician Al-Khwarizmi, because Al-Khwarizmi put forward the concept of the algorithm in mathematics; the modern word evolved in the 18th century from "algorism", which referred to calculation with Arabic numerals. Euclid's algorithm is considered the first algorithm in history. The first program was written by Ada Byron (Ada Lovelace) in 1842 for Babbage's Analytical Engine, to compute Bernoulli numbers, which is why she is widely considered the world's first programmer; because Charles Babbage never completed the Analytical Engine, the program could not be run on it. The lack of a mathematically precise definition of a "well-defined procedure" troubled mathematicians and logicians of the 19th and early 20th centuries. In the 20th century, the British mathematician Alan Turing put forward his famous thesis and proposed the Turing machine, an abstract model of a hypothetical computer. The emergence of the Turing machine settled the question of how to define an algorithm, and Turing's ideas played an important role in the development of algorithms.