Computer Science
Following in the footsteps of the Physics Map, the Mathematics Map and the Chemistry Map, today we present a map of the computer world. I hope this article brings you into the field of computer science.
A map of computer science. | image: Dominic Walliman (used with the author's permission)
Dominic Walliman/Sun Peng
We use computers to extend our own brains. Computers were originally built to solve arithmetic problems, but their value soon spread to every field: running the Internet, processing images in real time, creating artificial intelligence, even simulating the entire universe. The magic is that behind all of this power there is nothing more than 0s and 1s flipping back and forth.
Computers are getting smaller and faster at an incredible rate. A mobile phone today has more computing power than all the supercomputers of the 1960s combined (see: What Are the Limits of Computers?). Two Nintendo consoles could run the entire Apollo 11 moon-landing system. Computer science, broadly speaking, is the study of what computers can do. It has expanded into many interconnected branches, but I still divide the whole discipline into three parts: computer theory, computer engineering, and computer applications.
Computer theory
○ First Branch: Computer theory. | image: Dominic Walliman
Computer theory has to start with Alan Turing, the father of computing, who conceived the Turing machine. In a paper entitled "On Computable Numbers, with an Application to the Entscheidungsproblem", Turing first defined what it means for something to be computable and proposed the prototype of the Turing machine. The Turing machine is not a physical machine but a simple abstract description of today's general-purpose computer. Later scientists proposed many other models of computation, but these models are fundamentally equivalent to Turing machines. The Turing machine is therefore the theoretical basis of modern computers.
A Turing machine consists of several parts: an infinite tape with symbols written on it, a read/write head that reads and writes on the tape, a state register that stores the current state, and a table of instructions. On today's computers, the tape has become the memory (no longer infinite, of course) and the read/write head has become the processor (CPU), while the instructions are stored in the computer's memory (see "An Unprovable Logic Problem"). Although the Turing machine is a simple description, it captures computer design remarkably completely. Today's computers are of course made up of many more parts, such as hard drives, keyboards, sound and graphics cards, and screens, but they all operate within the concept of a Turing machine.
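The parts described above can be sketched in a few lines of code. This is an illustrative toy, not any standard API: the instruction table, tape encoding and halting convention are all made up for the example.

```python
# A minimal Turing machine simulator. The machine is a table mapping
# (state, symbol) -> (new_symbol, head_move, new_state).

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: invert every bit, halt at the first blank cell.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(rules, "1011"))  # -> 0100
```

Despite its simplicity, a table of rules like this one is, in principle, all a general-purpose computer needs.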
○ Turing machine and modern computer. | image: Dominic Walliman
Turing laid the foundation for the development of computers through his description of the machine. At the same time, we should not forget another computer scientist closely associated with Turing: his PhD advisor, Alonzo Church. Church invented the lambda calculus, describing the concept of computation through a rigorous set of mathematical rules. Every problem that can be solved by a Turing machine can also be solved in the lambda calculus, and vice versa. If Turing's ideas are the archetype of algorithms and machines, the lambda calculus is the basis of all programming logic and languages today.
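To get a feel for computing with nothing but functions, here is a sketch of Church numerals, the lambda-calculus encoding of the natural numbers, written directly in Python (the `to_int` helper is a convenience added for the example, not part of the formal calculus):

```python
# Church numerals: a number n is "apply a function f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))          # n + 1
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n

def to_int(n):
    """Convert a Church numeral to a Python int by counting applications."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(add(two)(two)))  # -> 4
```

Arithmetic emerges with no numbers anywhere in sight, only function application, which is exactly Church's point: functions alone are enough to compute.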
As I said at the beginning, the fundamental question of computer theory is: what can a computer do, and what can it not do? This question extends directly into a field called Computability Theory, which determines which problems can be computed by a Turing machine and which cannot. Some problems are inherently impossible for a computer to solve. The most famous example is the Halting Problem: given an arbitrary program, decide whether it will eventually stop or run forever. Turing used a clever self-referential argument to show that no Turing machine can solve it, so computers are not omnipotent. Some problems cannot be solved by a computer no matter how long it runs (see "An Unprovable Logic Problem").
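Although no program can decide halting in general, we can always get a one-sided answer by running a program for a bounded number of steps. The sketch below uses a made-up convention (a step function that returns `None` when the program halts) to illustrate why the honest answers are only "it halted" or "we don't know":

```python
def bounded_halts(step, state, max_steps=1000):
    """Apply `step` repeatedly; `step` returns None when the program halts.
    Returns True if it halted within max_steps, None if still unknown."""
    for _ in range(max_steps):
        state = step(state)
        if state is None:
            return True
    return None  # might halt later, might run forever -- we cannot tell

countdown = lambda n: None if n <= 0 else n - 1   # always halts
spin      = lambda n: n                            # never halts

print(bounded_halts(countdown, 5))   # -> True
print(bounded_halts(spin, 5))        # -> None
```

No matter how large `max_steps` is, `spin` only ever earns a "don't know"; Turing's proof shows that no cleverer procedure can do better for every possible program.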
○ Complexity theory classification. | image: Dominic Walliman
Of the problems that computers can solve, there are also many that would take far too long to solve (perhaps longer than the universe will exist). Computational Complexity theory has therefore become another important component of computer theory. Complexity theory classifies problems by how the time needed to solve them grows with the size of the input: P problems (for example, sorting a sequence from smallest to largest), NP problems (for example, finding a route that visits all given cities with total distance less than N), and so on. While many real-world problems are intractable in theory, computer scientists can make technical simplifications to arrive at approximate answers, though no one can be sure those answers are the best ones. In the NP example above, we can check in polynomial time whether a given route visits all the cities with total distance less than N, but no known polynomial-time algorithm finds the shortest route (see "The Million-Dollar Question" and "The Optimism and Fear of Misunderstanding").
○ Algorithm and algorithm complexity. | image: Dominic Walliman
The branch of computer theory also includes the study of Algorithms and Information Theory. Algorithms are problem-solving recipes that are independent of any particular programming language or computer hardware. They are the basis for writing programs, and many computer scientists work on finding the optimal algorithm for a given problem. Different algorithms may solve the same problem and produce the same result, such as sorting a jumble of numbers from smallest to largest, yet some algorithms are far faster and more efficient than others. That is the subject of algorithmic complexity.
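The sorting example above can be made concrete by counting comparisons. This sketch pits a simple quadratic algorithm against a divide-and-conquer one on the same worst-case input; both produce the identical sorted result, at very different cost:

```python
def bubble_sort(xs):
    """O(n^2) sorting; returns (sorted list, number of comparisons)."""
    xs, comparisons = list(xs), 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

def merge_sort(xs):
    """O(n log n) sorting; returns (sorted list, number of comparisons)."""
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, cl = merge_sort(xs[:mid])
    right, cr = merge_sort(xs[mid:])
    merged, comparisons = [], cl + cr
    while left and right:
        comparisons += 1
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    merged += left + right
    return merged, comparisons

data = list(range(100, 0, -1))   # worst case: reverse-sorted input
print(bubble_sort(data)[1])      # 4950 comparisons, roughly n^2 / 2
print(merge_sort(data)[1])       # far fewer, roughly n log n
```

Same answer, same input; the only difference is the algorithm, and that difference grows without bound as the input gets larger.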
Information Theory studies the nature of information: how it is received, stored and transmitted. For example, how can we compress information while retaining most or all of it, so that it fits in less memory? Coding Theory and Cryptography are also very important parts of information theory. Both rest on sophisticated mathematics and are used to encode and encrypt transmitted information, greatly increasing its security on the network.
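How compressible a message is can be estimated before compressing it at all: Shannon entropy measures how predictable the symbols are. The sketch below computes entropy by hand and then checks the prediction against a real compressor (`zlib` from Python's standard library); the two sample messages are made up for the example:

```python
import zlib
from collections import Counter
from math import log2

def entropy_bits_per_symbol(message):
    """Shannon entropy: average bits needed per symbol of the message."""
    counts = Counter(message)
    total = len(message)
    return -sum(c / total * log2(c / total) for c in counts.values())

repetitive = b"ab" * 500          # very predictable: only two symbols
uniform = bytes(range(256))       # every byte value exactly once

print(entropy_bits_per_symbol(repetitive))   # -> 1.0 bit per byte
print(entropy_bits_per_symbol(uniform))      # -> 8.0 bits per byte
print(len(zlib.compress(repetitive)))        # shrinks dramatically
print(len(zlib.compress(uniform)))           # barely shrinks at all
```

The compressor cannot beat the entropy bound: the predictable message collapses to a few bytes, while the uniform one comes out no smaller than it went in.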
○ Information theory and cryptography. | image: Dominic Walliman
Those are the most important parts of the computer theory branch. Of course, there are many other components, including logic, graph theory, computational geometry, automata theory, quantum computing, parallel processing, data structures and so on, which I will not list here.
Computer engineering
○ Second branch: Computer engineering. | image: Dominic Walliman
The second large branch of computer science is computer engineering. Designing a computer is a big challenge because there are so many aspects to consider, from the underlying hardware to the software on top. The designer must ensure that the computer can solve as many kinds of problems as possible, as efficiently as possible. The processor (CPU) is the centre of the computer: every task the computer performs passes through it and is scheduled by it. When a single processor handles multiple tasks, it must switch back and forth between them fast enough that they all finish within a user-acceptable amount of time.
Scheduling is a complex process carried out by a scheduler. The scheduler decides when to execute which task and tries to arrange all tasks in an optimal way. Using multiple cores can speed up execution, because each task can then run on a separate core, but multi-core execution also makes the scheduler's design more complicated. All of these designs belong to the field of Computer Architecture. Different architectures suit different tasks: the CPU is suited to general-purpose programs such as the operating system we use; graphics processors (GPUs) are good at image processing, such as the high-resolution games we play; and field-programmable gate arrays (FPGAs) excel at running very narrow tasks at high speed, such as mining Bitcoin.
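The back-and-forth switching described above can be sketched as a toy round-robin scheduler. This is an illustration of the idea only, not how any real operating system implements it; task names and work units are invented for the example:

```python
from collections import deque

def round_robin(tasks, time_slice=2):
    """tasks: dict of name -> remaining work units.
    Each task runs for at most one time slice, then rejoins the queue
    until its work is done. Returns the execution timeline."""
    queue = deque(tasks.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(time_slice, remaining)
        timeline.append((name, ran))
        if remaining > ran:
            queue.append((name, remaining - ran))  # not finished: requeue
    return timeline

print(round_robin({"editor": 3, "music": 5, "download": 2}))
# -> [('editor', 2), ('music', 2), ('download', 2),
#     ('editor', 1), ('music', 2), ('music', 1)]
```

Every task makes steady progress instead of one task monopolising the processor, which is why a single core can feel like it is doing several things at once.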
○ Single-core and multi-core scheduling | image: Dominic Walliman
Software and Programming Languages are also an important part of computer engineering. On top of the hardware sit layers of software written in various programming languages. From low-level assembly language to high-level languages like Java, programming languages are how programmers give commands to computers, each with different syntactic features. For example, we use assembly language to write low-level code that runs close to the hardware, and Java to write web applications. As you can imagine, the lower-level a language is, the closer it is to the structure of the machine itself, and the harder it is for people to read. But no matter how high- or low-level a language is, it is eventually translated into binary code that the processor can execute. This translation is performed by a compiler, in one or more steps. Each programming language has its own compiler to translate and optimize programs into executable binaries. The design of compilers and programming languages matters because they must be both simple and flexible, making it easy for programmers to turn their wildest ideas into practice.
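The translation journey can be shown in miniature. The sketch below "compiles" an arithmetic expression into instructions for a tiny invented stack machine and then executes them; real compilers follow the same parse-then-generate-then-run pipeline at vastly greater scale (here we borrow Python's own `ast` parser for the parsing step):

```python
import ast

def compile_expr(source):
    """Translate an arithmetic expression into stack-machine instructions."""
    ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL"}
    def emit(node):
        if isinstance(node, ast.BinOp):
            return emit(node.left) + emit(node.right) + [ops[type(node.op)]]
        if isinstance(node, ast.Constant):
            return [("PUSH", node.value)]
        raise ValueError("unsupported syntax")
    return emit(ast.parse(source, mode="eval").body)

def run(bytecode):
    """Execute the instructions on a simple stack machine."""
    stack = []
    for instr in bytecode:
        if isinstance(instr, tuple):          # ("PUSH", value)
            stack.append(instr[1])
        else:                                 # binary operation
            b, a = stack.pop(), stack.pop()
            stack.append({"ADD": a + b, "SUB": a - b, "MUL": a * b}[instr])
    return stack[0]

code = compile_expr("2 + 3 * 4")
print(code)       # PUSH/PUSH/PUSH then MUL, ADD
print(run(code))  # -> 14
```

Notice that operator precedence is resolved at compile time: the instruction order guarantees the multiplication happens before the addition, with no precedence logic left in the machine itself.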
○ Programming language and compiler. | image: Dominic Walliman
The Operating System is the most important software on a computer, and the medium through which users interact with the machine. It controls all of the computer's hardware while taking instructions from the user, so designing a good operating system is a big challenge. This is where Software Engineering, an important part of the computer engineering branch, comes in. Software engineers tell computers what to do and when, by designing new software and operating systems or by building on existing ones. Designing software is an art: it requires engineers to translate creative ideas into rigorous logic in a specific programming language, and to make the resulting programs run efficiently on a computer. Software engineering as an independent discipline therefore has many design ideas and philosophies for programmers to learn, use and study.
○ Operating system. | image: Dominic Walliman
Of course, computer engineering also includes many other components, such as large-scale collaboration between many computers (for example, Taobao's servers), the storage of big data (for example, the personal information Google and Facebook have to keep), performance research (for example, writing large programs as benchmarks to test computer performance) and computer image processing (for example, the simple photo app Meitu XiuXiu). We will talk more about these in future articles.
Computer application
○ Third branch: Computer applications. | image: Dominic Walliman
Let's move on to the third branch of computer science: computer applications. This branch aims to use computers to solve all kinds of real-life problems. When you travel and want the best value for money, you are facing an Optimization problem. Solving optimization problems well has historically been one of the most important concerns of business, because getting it right can save companies billions of dollars. However, optimization problems sometimes cannot be solved efficiently by a computer, such as finding the shortest route through all cities, mentioned above. So some people are turning to new technologies, such as artificial intelligence or quantum computers, to see whether they can offer a breakthrough on such problems.
Artificial Intelligence (AI) plays an important role in the computer applications branch. Computers have expanded our brains and multiplied our cognitive abilities, and cutting-edge AI research tries to make machines think like humans. AI research has many parts, of which the fastest-growing is Machine Learning, which enables machines to recognize objects or make decisions accurately by running algorithms over large amounts of input data. The most celebrated example is Google DeepMind's AlphaGo repeatedly defeating Go champions. Machine learning is commonly divided into supervised learning (classifying unknown data based on labelled examples), unsupervised learning (grouping data by its own features, without any labelled examples) and reinforcement learning (for example, training the bird in the once-popular little game Flappy Bird: if the bird hits a pipe it receives a reward of -1, otherwise 0, and after many rounds of practice we end up with a bird that knows how to fly and how to avoid the pipes). In addition, Computer Vision and Natural Language Processing are important components of AI. Computer vision uses image processing to let computers distinguish objects as well as humans do, while natural language processing aims to let computers and humans communicate in human language, or to analyse text as input. Each of these areas of AI will be discussed in detail later.
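Supervised learning can be illustrated at its absolute smallest: a 1-nearest-neighbour classifier labels a new point by copying the label of the closest known sample. The 2-D points and labels below are invented for the sketch; real systems use far richer models and vastly more data:

```python
def nearest_neighbor(samples, point):
    """samples: list of ((x, y), label) pairs.
    Returns the label of the sample closest to `point`."""
    def sq_dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(samples, key=lambda s: sq_dist(s[0], point))[1]

# Tiny labelled training set (made-up coordinates).
training = [
    ((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog"),
]

print(nearest_neighbor(training, (1.1, 0.9)))  # -> cat
print(nearest_neighbor(training, (5.1, 4.9)))  # -> dog
```

Even this toy shows the essence of supervised learning: the "knowledge" lives entirely in the labelled examples, and prediction is just generalising from them.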
○ Artificial intelligence | image: Dominic Walliman
The success of machine learning owes a great deal to the rise of Big Data, so the study of big data has become a very important field in the computer applications branch. Big data research aims to extract valuable information from enormous amounts of data, and the Internet of Things adds a further dimension by connecting everyday objects to supply ever larger streams of data. Hacking is not an orthodox academic field, but it is worth mentioning here. Hackers exploit vulnerabilities in computer systems and networks to steal information from other people's machines without being discovered, as in the recent attacks on vulnerabilities in the Windows operating system. Even with today's technology, such attacks are difficult to prevent completely.
In addition to the fields mentioned above, the computer applications branch also uses computers to study scientific problems, for example in physics and neuroscience; this work typically uses supercomputers to run large-scale Simulations. Computer applications also include Human-Computer Interaction, which aims to design computer systems that are easier for people to use. Meanwhile, Virtual Reality (such as VR headsets), Augmented Reality (such as the once-popular game Pokémon Go) and Mixed Reality (such as scanning a physical book with your phone and seeing its online reviews) link the virtual and physical worlds, and research in Robotics makes machines ever more similar to humans in form and movement.
This is the map of computer science. The series "Principles: Into the History of Computer Culture" has already introduced the first part of this map, computer theory, in a series of articles, and I will walk through the rest of the map with you in the future. Today's computers are still developing rapidly. Although hardware progress has been hampered by the difficulty of making transistors ever smaller, computer scientists are trying to overcome this by exploring other directions. The computer has a vital impact on the development of the whole human race, so how computers will develop over the next century has become a question scientists pursue eagerly. Who knows? Maybe someday we will all live in the world, more or less, in the form of computers.
More maps
Map of Physics
Map of Mathematics
Map of Chemistry