This series introduces readers to a variety of hot topics in the field of artificial intelligence. Because AI is such a large and complex field, and its scope grows by the day, any single book can only focus on a specific area, so this book is not intended to be a comprehensive AI tutorial.
This series teaches concepts related to artificial intelligence in a mathematically accessible way, which is what the English title of the series means by “for Humans.” In addition:
- This series assumes proficiency in at least one programming language;
- This series assumes a basic understanding of college algebra;
- This series uses concepts and formulas from calculus, linear algebra, differential equations, and statistics;
- Despite point 3 above, however, the books do not assume prior familiarity with those subjects and explain the necessary concepts along the way;
- Every concept is presented not only with mathematical formulas, but also with pseudocode and programming examples.
The books are aimed at programmers who are proficient in at least one programming language, and the examples have been adapted into multiple programming languages.
The first two volumes have been published by the Asynchronous Community of Posts and Telecommunications Press, and another volume is expected to be available in the first quarter of next year.
Volume 1: Artificial Intelligence Algorithms (Volume 1): Basic Algorithms
Chapter 1, “Getting Started with AI,” introduces some of the basic concepts related to artificial intelligence that will be used throughout the other volumes of this series. Most AI algorithms take an array of inputs and produce an array of outputs; the problems AI can solve are usually framed in this input-to-output form. Inside the algorithm, additional arrays store short-term and long-term memory. Training an algorithm is really the process of adjusting the values in long-term memory so that a given input produces the expected output.
Chapter 2, “Data Normalization,” describes how most artificial intelligence algorithms preprocess raw data. Data must be passed to an algorithm as an array of inputs, but in practice not all data is numeric; some of it is categorical, such as color, shape, gender, species, or other non-numeric descriptive characteristics. In addition, even data that is already numeric usually has to be normalized to a fixed range, typically (-1, 1).
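As a rough illustration of the idea (Python here is chosen only for concreteness; the books’ own examples come in several languages), the sketch below performs a simple linear range normalization into (-1, 1). The function name and sample values are invented for this example, not taken from the book.

```python
# A minimal sketch of range normalization, assuming the (-1, 1) target range mentioned above.
def normalize(value, data_low, data_high, norm_low=-1.0, norm_high=1.0):
    """Linearly map a value from [data_low, data_high] into [norm_low, norm_high]."""
    return ((value - data_low) / (data_high - data_low)) * (norm_high - norm_low) + norm_low

# Example: temperatures observed between 10 and 40 degrees.
print(normalize(25, 10, 40))  # 0.0, the midpoint of the range
print(normalize(40, 10, 40))  # 1.0, the top of the range
```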
Chapter 3, “Distance Measurement,” shows how we compare data, much like measuring the distance between two points on a map. AI typically processes data in the form of numeric arrays (input, output, long-term memory, short-term memory, and much else), often referred to as vectors. We can calculate the difference between two data items in the same way we calculate the distance between two points: two- and three-dimensional points can be regarded as vectors of length two and three, respectively. Of course, in artificial intelligence we are often dealing with data of much higher dimension.
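A minimal sketch of that idea, not taken from the book: the same Euclidean formula applies whether the vectors have two elements or many.

```python
import math

def euclidean_distance(a, b):
    """Distance between two equal-length vectors; the formula is the same for 2, 3, or n dimensions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(euclidean_distance([1, 2], [4, 6]))               # 5.0 in two dimensions
print(euclidean_distance([1, 2, 3, 4], [5, 6, 7, 8]))   # 8.0 in four dimensions
```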
Chapter 4, “Random Number Generation,” explains how random numbers are generated and used in artificial intelligence algorithms. The chapter begins with the distinction between uniform and normal random numbers: sometimes an algorithm needs every value to be equally likely, while at other times the values should follow a given distribution. The chapter also discusses methods for generating random numbers.
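To make the distinction concrete, here is a small Python sketch (not from the book) that draws a few uniform and a few normally distributed values using the standard library; the seed and ranges are arbitrary choices for this illustration.

```python
import random

random.seed(42)  # a fixed seed so the run is repeatable

uniform_samples = [random.uniform(-1, 1) for _ in range(5)]  # every value in (-1, 1) equally likely
normal_samples = [random.gauss(0, 1) for _ in range(5)]      # values cluster around the mean 0

print(uniform_samples)
print(normal_samples)
```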
Chapter 5, “K-Means Clustering,” details a method for grouping data by similarity. The K-means algorithm can be used on its own to group data by common characteristics, but it also serves as a building block in more complex algorithms: genetic algorithms, for example, use K-means to group a population by traits, and network operators use clustering to segment customers and adjust their sales strategies according to the consumption habits of similar customers.
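The following bare-bones sketch conveys only the core assign-then-recompute loop of K-means; the sample points, iteration count, and function names are invented for this illustration rather than drawn from the book.

```python
import random

def k_means(points, k, iterations=10, seed=0):
    """Bare-bones K-means: assign each point to its nearest centroid, then recompute the centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = []
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        for i, cluster in enumerate(clusters):
            if cluster:  # keep the old centroid if a cluster ends up empty
                centroids[i] = tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
    return centroids, clusters

points = [(1, 1), (1.5, 2), (8, 8), (9, 9), (0.5, 1.5), (8.5, 9.5)]
centroids, clusters = k_means(points, k=2)
print(centroids)
```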
Chapter 6, “Error Calculation,” demonstrates how to evaluate the effectiveness of an artificial intelligence algorithm. Error calculation is performed by a scoring function that evaluates how well the algorithm performs. A common type of scoring function needs only a set of input vectors and their expected output vectors, together called the training data; the algorithm’s effectiveness is then determined by how far its actual output deviates from the expected output.
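As one common example of such a scoring function, here is a small mean-squared-error sketch; the specific vectors are made up, and the book itself covers several error measures.

```python
def mean_squared_error(actual, expected):
    """Average squared difference between the model's output and the expected output."""
    return sum((a - e) ** 2 for a, e in zip(actual, expected)) / len(expected)

# Training data: expected outputs paired with what the model actually produced.
expected = [1.0, 0.0, 1.0, 0.0]
actual   = [0.9, 0.2, 0.8, 0.1]
print(mean_squared_error(actual, expected))  # 0.025, lower is better
```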
Chapter 7, “Toward Machine Learning,” outlines simple machine learning algorithms that can learn from data to improve their results. Most AI algorithms use weight vectors to convert input vectors into the desired output vectors; these weight vectors constitute the algorithm’s long-term memory, and “training” is the process of adjusting that long-term memory to produce the desired output. This chapter builds several simple models with learning capability and introduces some simple but effective training algorithms that adjust this long-term memory (the weight vector) to optimize the output; simple random walk and hill climbing are two of them.
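A minimal hill-climbing sketch of that weight-adjustment loop, assuming a scoring function where lower is better; the toy target vector, step size, and iteration count are assumptions made for this example.

```python
import random

def hill_climb(score, weights, step=0.1, iterations=1000, seed=0):
    """Greedy hill climbing: keep a random tweak to the weights only if it improves the score."""
    rng = random.Random(seed)
    best = list(weights)
    best_score = score(best)
    for _ in range(iterations):
        candidate = [w + rng.uniform(-step, step) for w in best]
        candidate_score = score(candidate)
        if candidate_score < best_score:  # lower scores are better here (an error measure)
            best, best_score = candidate, candidate_score
    return best, best_score

# Toy scoring function: how far the weights are from the target vector (3, -2).
target = [3.0, -2.0]
score = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
print(hill_climb(score, [0.0, 0.0]))
```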
Chapter 8, “Optimization Training,” expands on the previous chapters and introduces algorithms such as simulated annealing and the Nelder-Mead method [2] for rapidly optimizing the weights of artificial intelligence models. This chapter also shows how, with some adjustments, these optimization algorithms can be applied to some of the previously discussed models.
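The following simulated-annealing sketch illustrates only the core idea of occasionally accepting a worse candidate while the temperature is high; the cooling schedule, neighbor function, and toy function being minimized are assumptions of this example, not the book’s implementation.

```python
import math
import random

def anneal(score, state, neighbor, start_temp=10.0, end_temp=0.01, steps=5000, seed=0):
    """Simulated annealing: sometimes accept a worse candidate, less often as the temperature falls."""
    rng = random.Random(seed)
    current, current_score = state, score(state)
    best, best_score = current, current_score
    for i in range(steps):
        temp = start_temp * (end_temp / start_temp) ** (i / steps)  # geometric cooling schedule
        candidate = neighbor(current, rng)
        candidate_score = score(candidate)
        accept_worse = rng.random() < math.exp((current_score - candidate_score) / temp)
        if candidate_score < current_score or accept_worse:
            current, current_score = candidate, candidate_score
            if current_score < best_score:
                best, best_score = current, current_score
    return best, best_score

# Minimize a bumpy one-dimensional function.
score = lambda x: x * x + 10 * math.sin(x)
neighbor = lambda x, rng: x + rng.uniform(-1, 1)
print(anneal(score, state=5.0, neighbor=neighbor))
```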
Chapter 9, “Discrete Optimization,” explains how to optimize non-numeric, categorical data. Not all optimization problems are numerical; some are discrete or categorical, such as the knapsack problem and the traveling salesman problem. This chapter shows how simulated annealing can handle both of these problems, demonstrating that the algorithm suits continuous numerical problems as well as discrete categorical ones.
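To see how a discrete problem plugs into the same framework, here is a sketch of a traveling-salesman “neighbor” move (swapping two cities in the tour) that could be dropped into an annealing loop like the one sketched above; the city coordinates and helper names are invented for this example.

```python
import math
import random

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]  # made-up coordinates

def tour_length(tour):
    """Total length of a closed tour visiting every city once."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def swap_neighbor(tour, rng):
    """A discrete 'neighbor' move: exchange two cities in the tour."""
    i, j = rng.sample(range(len(tour)), 2)
    candidate = list(tour)
    candidate[i], candidate[j] = candidate[j], candidate[i]
    return candidate

rng = random.Random(0)
tour = list(range(len(cities)))
print(tour_length(tour))
print(tour_length(swap_neighbor(tour, rng)))
```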
Chapter 10, “Linear Regression,” explains how to use linear and nonlinear equations to learn trends and make predictions. The chapter introduces simple linear regression and demonstrates how to fit data to a linear model. It also introduces the generalized linear model (GLM), which can fit nonlinear data.
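A short ordinary-least-squares sketch of simple linear regression; the sample data is made up, and the book’s own treatment is more complete.

```python
def simple_linear_regression(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]  # roughly y = 2x
print(simple_linear_regression(xs, ys))
```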
Volume 2: Artificial Intelligence Algorithms (Volume 2): Nature-Inspired Algorithms
Chapter 1, “Populations, Scoring, and Selection,” introduces concepts that are used throughout the rest of the book. Algorithms inspired by nature solve problems by maintaining populations of candidate solutions. Scoring allows the algorithm to evaluate the effectiveness of each population member, and selection determines which members contribute to the next generation.
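As one common way to implement selection, here is a small tournament-selection sketch; the population, scoring function, and tournament size are invented for this illustration and are not taken from the book.

```python
import random

def tournament_select(population, score, rounds=3, rng=random):
    """Pick `rounds` random members and keep the one with the best (lowest) score."""
    best = rng.choice(population)
    for _ in range(rounds - 1):
        challenger = rng.choice(population)
        if score(challenger) < score(best):
            best = challenger
    return best

population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
score = lambda member: sum(x * x for x in member)  # lower is better
print(tournament_select(population, score))
```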
Chapter 2, “Crossover and Mutation,” introduces several crossover and mutation operators that population members can use to create potentially better solutions for the next generation. Crossover combines the characteristics of two or more potential solutions to produce offspring, while mutation lets an individual create a slightly altered copy of itself for the next generation.
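A minimal sketch of one-point crossover and per-gene mutation on numeric genomes; the operator names, mutation rate, and sample parents are assumptions of this example rather than the book’s code.

```python
import random

def one_point_crossover(parent_a, parent_b, rng=random):
    """Splice two parents at a random cut point to form a child."""
    cut = rng.randint(1, len(parent_a) - 1)
    return parent_a[:cut] + parent_b[cut:]

def mutate(genome, rate=0.1, amount=0.5, rng=random):
    """Perturb each gene with probability `rate`."""
    return [g + rng.uniform(-amount, amount) if rng.random() < rate else g for g in genome]

parent_a = [1.0, 1.0, 1.0, 1.0]
parent_b = [9.0, 9.0, 9.0, 9.0]
child = one_point_crossover(parent_a, parent_b)
print(child, mutate(child))
```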
Chapter 3, “Genetic Algorithms,” combines the ideas of Chapters 1 and 2 into a concrete algorithm. Genetic algorithms evolve fixed-length arrays to produce better and better results. This chapter shows how to use fixed-length arrays to find solutions to the traveling salesman problem and how to use measurements of flowers to predict iris species.
Chapter 4, “Genetic Programming,” shows that the solutions evolved by evolutionary algorithms need not be of fixed length. In fact, using these ideas, computer programs can be represented as trees that evolve to produce programs that better perform their intended tasks.
Chapter 5, “Speciation,” discusses how a population can be divided into species. Just as crossover creates offspring by combining two individuals in the population, speciation produces offspring by mating only similar solutions. Programmers borrowed this concept from nature, where only creatures of the same species can pair up and reproduce.
Chapter 6, “Particle Swarm Optimization,” uses swarms of particles to search for optimal solutions. This flocking instinct in computer software is modeled on nature: herds of cattle, swarms of insects, flocks of birds, and schools of fish all suggest that organisms naturally tend to travel in groups, often as a defense against predators.
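A single particle-swarm update step might look like the sketch below, with the usual inertia and attraction terms; the coefficient values and setup are illustrative assumptions, not the book’s code.

```python
import random

def pso_step(positions, velocities, personal_best, global_best,
             inertia=0.7, c1=1.5, c2=1.5, rng=random):
    """One particle swarm update: each particle is pulled toward its own best and the swarm's best."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = rng.random(), rng.random()
            velocities[i][d] = (inertia * velocities[i][d]
                                + c1 * r1 * (personal_best[i][d] - positions[i][d])
                                + c2 * r2 * (global_best[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities

positions = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(3)]
velocities = [[0.0, 0.0] for _ in range(3)]
personal_best = [p[:] for p in positions]
global_best = min(positions, key=lambda p: sum(x * x for x in p))
pso_step(positions, velocities, personal_best, global_best)
print(positions)
```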
Chapter 7, “Ant Colony Optimization,” discusses how the pheromone trails of ants can inspire computer programmers. As more ants follow the chemical trails left behind by their companions, those trails become stronger and stronger. A computer program can use a similar technique to find optimal solutions.
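A tiny sketch of that reinforcement idea, assuming a dictionary of edge pheromone levels and a list of paths the ants walked (all names and numbers here are invented for illustration): existing pheromone evaporates, and edges on shorter, more-traveled paths receive more deposit.

```python
def update_pheromones(pheromone, ant_paths, evaporation=0.5, deposit=1.0):
    """Evaporate existing pheromone, then deposit more on every edge the ants actually used."""
    for edge in pheromone:
        pheromone[edge] *= (1.0 - evaporation)
    for path, length in ant_paths:
        for edge in zip(path, path[1:]):
            pheromone[edge] = pheromone.get(edge, 0.0) + deposit / length
    return pheromone

# Three ants walked these paths (node sequences) with the given total lengths.
pheromone = {}
ant_paths = [([0, 1, 2, 3], 9.0), ([0, 2, 1, 3], 12.0), ([0, 1, 2, 3], 9.0)]
print(update_pheromones(pheromone, ant_paths))
```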
Chapter 8, “Cellular Automata,” uses simple rules to produce remarkably complex results and patterns. The key to creating interesting cellular automata is to find simple rules, and such rules can be evolved using genetic algorithms.
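As a taste of how simple those rules can be, here is a sketch of a one-dimensional (elementary) cellular automaton using Rule 30 on a wrapped row of cells; the grid size and rule number are arbitrary choices for this illustration.

```python
def step(cells, rule=30):
    """Apply an elementary cellular-automaton rule to one generation of cells (0s and 1s)."""
    new = []
    for i in range(len(cells)):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        pattern = (left << 2) | (center << 1) | right   # neighborhood encoded as 0..7
        new.append((rule >> pattern) & 1)               # look up the rule's bit for that pattern
    return new

cells = [0] * 31
cells[15] = 1  # a single live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, rule=30)
```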
Chapter 9, “Artificial Life,” aims to reflect characteristics of real life and contains one of the book’s major projects: a program that simulates plant growth. To help you track progress, the chapter divides the program into three milestones and provides the code for each.
Chapter 10, “Modeling,” discusses how data science makes use of nature-inspired algorithms and contains the book’s second major project. Using the data set from one of Kaggle’s tutorial competitions, this chapter shows readers how to create models that predict whether passengers on the Titanic survived or died. As with the previous project, the chapter provides three milestones so readers can verify their progress.
Volume 3: Artificial Intelligence Algorithms (Volume 3): Deep Learning and Neural Networks (first half of 2021)
In this volume, we demonstrate neural networks on a variety of real-world tasks, such as image recognition and data science. We investigate current neural network techniques, including ReLU activation, stochastic gradient descent, cross-entropy, regularization, Dropout, and visualization.
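To make two of those terms concrete, here is a tiny sketch of the ReLU activation and the cross-entropy loss; the sample numbers are invented, and the volume itself goes much deeper.

```python
import math

def relu(x):
    """ReLU activation: pass positive values through, zero out negatives."""
    return max(0.0, x)

def cross_entropy(predicted, expected):
    """Cross-entropy loss between a predicted probability distribution and a one-hot target."""
    return -sum(e * math.log(p) for p, e in zip(predicted, expected))

print([relu(x) for x in (-2.0, -0.5, 0.0, 1.5)])   # [0.0, 0.0, 0.0, 1.5]
print(cross_entropy([0.7, 0.2, 0.1], [1, 0, 0]))   # about 0.357, lower is better
```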