The new recruitment season is about to start. With the artificial intelligence craze sweeping the world, “data scientist” and “algorithm engineer” are hot jobs for those who want to work in technology. Keywords such as “artificial intelligence,” “machine learning,” “deep learning,” “modeling,” and “convolutional neural network” are not only the stuff of dinner-table conversation but are also becoming required skills for software engineers.
In the next few years, artificial intelligence technology will undoubtedly become widespread, and the talent to build it will be scarce. Many computer science students, as well as software engineers with some work experience, hope to master the technologies of the field before artificial intelligence completely takes over the world, and so become strong competitors. Zhuge Yue, Hulu’s Vice President of Global Research and Development, has long worked in the field of artificial intelligence and is the editor-in-chief of Machine Learning with 100 Faces: Algorithm Engineers Take You to the Interview, which is now open for pre-order. Below she shares her experience in the field of artificial intelligence and her understanding of the industry, in the hope that it will inspire you.
Zhuge Yue is the editor-in-chief of Machine Learning with 100 Faces. She is Hulu’s Vice President of Global Research and Development and General Manager of its Research and Development Center in China. She received her bachelor’s degree from the Department of Computer Science and Technology at Tsinghua University, her master’s and doctorate in Computer Science from Stanford University, and a master’s in Applied Mathematics from the State University of New York at Stony Brook.
Zhuge Yue: Artificial Intelligence and Me
My undergraduate major was artificial intelligence, so I was exposed to many of the cutting-edge technologies of the field at that time. My instructor for Introduction to Artificial Intelligence was Professor Lin Yaorui, also the author of the textbook Introduction to Artificial Intelligence. In my senior year, I was lucky enough to enter the Artificial Intelligence Laboratory of Tsinghua University and do some simple research under the tutelage of Professor Zhang Bo. From Professor Zhang and the senior students, I learned much of the most advanced knowledge in the field at the time.
When I first arrived at Stanford, I was in the middle of a small Brown Bag lecture when the door swung open and a bearded professor, John McCarthy, walked in and asked loudly, “Heard there’s free lunch here?” He then walked to the front of the room, grabbed a couple of sandwiches, and strode out. The lecturer paused and said, “Welcome to Stanford, the place where the world’s most famous scientists walk into your classroom and steal your food!” The term “artificial intelligence” was coined by John McCarthy.
I also took an AI course at Stanford called CS140. Professor Nils Nilsson, who taught the course at the time, was another founder of the discipline and a world expert on artificial intelligence. Professor Nilsson’s class was very interesting, and I also did a small project with him on path planning for a sweeping robot. To this day, I still have my notes from that course.
To be honest, when I was younger and doing homework and projects every day, I didn’t realize how lucky I was to be in the company of these top scientists, or that I was witnessing the leading edge of a field of technology. The best technologies are initially understood and appreciated by only a minority. Looking back now, my encounters with AI and its gurus happened to track the three waves of artificial intelligence.
Three waves of artificial intelligence
The first wave of AI came around the 1950s. At the Dartmouth Symposium on Artificial Intelligence in 1956, John McCarthy formally proposed the concept of “artificial intelligence,” which is generally regarded as the beginning of the modern artificial intelligence discipline. McCarthy and Marvin Minsky of the Massachusetts Institute of Technology are known as the “fathers of artificial intelligence.”
In the early days of the computer, many computer scientists seriously thought about and discussed the fundamental differences between machines and human beings. The first group of experts to think about artificial intelligence were far ahead of their time in thinking and theory, and they saw the potential of computers. Many of the basic theories of this stage are not only foundations of artificial intelligence but cornerstones of computer science as a whole.
The first wave of artificial intelligence was largely based on logic. In 1958 McCarthy proposed the LISP language. From the 1950s to the 1980s, researchers demonstrated that computers could play games and understand natural language to some degree; they invented neural networks and achieved simple language comprehension and object recognition.
However, despite being a fruitful field of scientific research in its first two or three decades, AI went into “winter” in the early 1980s for lack of applications. By the late 1980s and early 1990s, AI scientists had moved away from solving large, general-purpose intelligence problems toward solving single problems in specific fields. After thirty years of development, computer technology had laid a certain foundation of data storage and applications, and researchers saw the possibility of combining artificial intelligence with data. They proposed the concept of the “expert system”: systems for medical diagnosis, weather forecasting, and other walks of life brought hope, offered meaningful and practical application scenarios, and opened the first possible commercial outlets. This was the second wave of artificial intelligence.
What’s interesting, though, is that when we tried to use these expert systems for intelligent diagnosis, we found that the problem wasn’t the diagnosis itself, but that most data at the time simply wasn’t digital. A patient’s medical history was still handwritten by a doctor, often illegibly. What information had been digitized sat in spreadsheets or on disconnected machines, inaccessible and unusable. So the people who wanted to build automatic diagnosis turned instead to the basic work of digitizing the world’s information.
While one group of people was working to digitize every book, map, and prescription in the world, the spread of the Internet was connecting all of this information, creating truly big data. Meanwhile, the growth in computing performance predicted by Moore’s Law was at work. As computing power grew exponentially, applications that could once run only in a lab or in a limited setting moved ever closer to real life.
The third wave of artificial intelligence is built on enormous computing power and enormous amounts of data. The computing power comes from advances in hardware, distributed systems, and cloud computing; recently, neural-network-based computing has given a further boost to the combination of hardware and software behind artificial intelligence. The data comes from decades of accumulation and from the development of Internet technology. Together, computing power and data have catalyzed the growth of machine learning algorithms.
This wave of artificial intelligence started within the past 10 years. The most basic difference from the previous two waves is its widespread application and impact on ordinary people’s lives. Artificial intelligence has left the academic laboratory and truly entered the public’s field of vision.
Is artificial intelligence fully approaching human capabilities?
Why is this wave of artificial intelligence so fierce? Is artificial intelligence really approaching the full range of human capabilities? What is the current stage of development of AI technology? Let’s look at three simple facts.
The first fact is that, for the first time in history, computers are outperforming or about to outperform humans at many complex tasks, such as image recognition, video understanding, machine translation, driving a car, and playing Go. So the topic of artificial intelligence replacing human beings has begun to appear in all kinds of headlines.
In fact, in terms of single technologies, many computation-related technologies have long surpassed human beings and are widely used, such as navigation, search, image search, and stock trading. But these mostly amount to “completing a task,” and have little to do with human perception, thinking, complex judgment, and emotion.
In recent years, however, the tasks machines perform have come ever closer to human ones in complexity and form. For example, autonomous driving based on machine learning is maturing; it will not only revolutionize how people travel but also reshape urban construction, personal consumption, and lifestyles. People are both excited and frightened by the rapid arrival of these new technologies, enjoying the convenience they bring while feeling overwhelmed by the speed of change.
In addition, computers’ capacity for self-learning keeps increasing. Modern machine learning algorithms, especially deep learning, make machine behavior no longer a relatively predictable “program” or “logic,” but something closer to a “black box,” with an almost-human, hard-to-explain quality of thought.
The second fact is that, on closer inspection, although artificial intelligence has made rapid progress in a number of specialized fields, it is still a long way from the general-purpose intelligence that the pioneers of the first wave pursued. Machines are still placed in specific settings to do specific tasks; the tasks are merely more complicated. Machines still lack some of the most basic elements of human intelligence: they cannot understand even simple emotions, nor help and cooperate with others the way a two-year-old child can.
The third fact is that the application scenarios for AI and machine learning are very broad. Advances in recent years have brought concepts that were once the domain of academic research into the public eye and into conversations about the future. Algorithmic applications have left academia and penetrated every corner of society and every aspect of daily life: the familiar ones, such as face recognition, autonomous driving, medical diagnosis, machine assistants, smart cities, new media, games, and education; and the less-discussed ones, such as the automation of agricultural production, care for the elderly and children, operations in dangerous situations, and traffic scheduling. The waves spread to every aspect of society.
Looking ten years ahead, the big story in artificial intelligence and machine learning will be the popularization and application of these technologies: a large number of new applications will be developed, AI infrastructure will improve rapidly, and traditional software and applications will need to be migrated to use the new algorithms. So now is a good time to become an expert in artificial intelligence and machine learning.
What do you need to prepare to become one of the players in this new wave of AI? Perhaps the book Machine Learning with 100 Faces can help you take that step. The book covers the practical areas of machine learning, proceeds from the simple to the complex, and is enlivened with examples and Q&A. It will help you become a better algorithm engineer, data scientist, and AI practitioner.
How to Read “Machine Learning with 100 Faces”
The book is informative, covering the various subfields of artificial intelligence and machine learning. Different companies, businesses, and positions require different skills, so here are a few suggestions for reading it.
From beginning to end: read the book cover to cover. If you understand everything, all the questions will be answered.
From simple to difficult: the difficulty of each question is indicated next to it, with one star the easiest and five stars the hardest. A list of topics is also provided in the book. One-star questions mainly introduce basic concepts or explain why a certain thing is done. If you are a beginner in machine learning, you can start with the background knowledge and easy questions and work your way up.
Read by purpose: not every company and not every position requires knowledge of every kind of algorithm. If your current job, or the job you want, is in a certain field, it probably uses certain kinds of algorithms, so you can focus on those chapters; do the same if you are interested in a new field. No matter which algorithm you end up using, basic skills such as feature engineering and model evaluation are always important.
Hyperlinked reading: a single book can hardly cover the full breadth of the field, and each question can have many different solutions. That is why many chapters include summaries and extensions. If you are interested in a particular area, you can use this book as a starting point, expand your reading, and become an expert in that area.
How the boss reads: if you’re a technology manager, you need to figure out how algorithms might contribute to your existing technology stack and how to find the right people to help you build smart products. You might skim this book to understand the various technical areas of machine learning and find suitable solutions; then you can use it as an interview guide.
Artificial intelligence and machine learning algorithms continue to evolve rapidly, so let’s keep up with the technological advances in this area. I wish you all the best in this exciting new era of technology.
Machine Learning with 100 Faces: Algorithm Engineers Take You to the Interview
Edited by Zhuge Yue; written by “Hulouwa,” the collective pen name of Hulu’s team of authors
The field of artificial intelligence is evolving faster than anyone can imagine, and we are fortunate to have written this book before AI takes over the world.
The book contains more than 100 interview questions and solutions for machine learning algorithm engineers, most drawn from real algorithm-research interviews at Hulu. Starting from interesting phenomena in daily work and life, it covers not only the basics of machine learning but also the skills a good algorithm engineer needs. More importantly, it distills the authors’ enthusiasm for the field of artificial intelligence, aiming to develop readers’ ability to find, solve, and extend problems, to kindle a love of machine learning, and to sketch the grand blueprint of the AI world together.
Starting from classic machine learning topics such as feature engineering, model evaluation, and dimensionality reduction, the book builds the knowledge system an algorithm engineer needs. It then surveys the latest research progress in neural networks, reinforcement learning, and generative adversarial networks, tracing the ups and downs of deep learning. The final chapter presents readers with the epoch-making applications of artificial intelligence.