Why I don’t care about artificial intelligence
Someone heard that I wanted to start my own business and offered me some “tricks.” They said: since you are a programming language expert and artificial intelligence (AI) is hot right now, you could create an “automatic programming system” that claims to automatically generate programs, replacing the work of programmers and saving a lot of labor costs — that way you could ride the “AI fever” and attract investment.
Someone even came up with a name for me: “DeepCoder.” The slogan: “DeepCoder, not Top Coder!” Others pointed me to the latest, much-hyped research in this direction, such as Microsoft’s RobustFill…
I thank these people for their concern, but actually I don’t care about AI, and I don’t believe in it. Let me briefly tell you what I think.
Machine hearts
Many people like to hype artificial intelligence, self-driving cars, robots and other such technologies. But if you observe carefully, you will find that these people not only fail to understand what human intelligence is and what the limitations of artificial intelligence are — the hearts of these “AI fanatics” have themselves been seriously mechanized. They have more or less lost their humanity, as if they have forgotten that they are human, forgotten what people need most, forgotten the value of people. These people are, as Chaplin put it in the final speech of The Great Dictator, “machine men with machine minds and machine hearts.”
When it comes to AI, these people inevitably make ambitious claims about “replacing human jobs” and “saving labor costs.” Leaving aside whether these goals can even be achieved, they were inconsistent with my values from the start. A great company should create real, new value for society, not scheme to “save labor costs” and put people out of work! Suppose I create a company whose greatest contribution is to put thousands of people out of work, save “labor costs” for the avaricious, widen the gap between rich and poor, concentrate power in the hands of a few, and in the end leave people destitute, leading to the desolation or even collapse of society…
I can’t imagine living in a world like that. Even if it made me the richest person in the world, it would be meaningless. There are too many things in the world that money can’t buy. Suppose that, walking down the street, I could no longer see happy smiles, leisurely strides, greetings, love and humor, could no longer see sweet romance, but instead saw homeless people sleeping on the ground, the stench of urine forcing its way into my nostrils, and feared being robbed wherever I went, because people truly could no longer make a living except by stealing and robbing…
If artificial intelligence succeeds, this may be the end result. Fortunately, there is ample evidence that artificial intelligence will never succeed.
My dream of artificial intelligence
Many of you may not know this, but I was once an “AI fanatic” myself. I used to treat artificial intelligence as my “great ideal,” and I used to keep the word “humanity” on my lips, as if machines were on a par with, or even superior to, humans. When Deep Blue beat Kasparov, I said, “Oh, we’re finished!” I also used to believe that with the twin tools of logic and learning, machines would one day surpass human intelligence. But I never thought through how this could be achieved, or what it would mean if it were.
The story begins more than a decade ago, when artificial intelligence was in its winter. In the library of Tsinghua University, I came across a dusty copy of Paradigms of Artificial Intelligence Programming (PAIP) by Peter Norvig. Like an archaeologist, I started studying and implementing the classic AI algorithms in it, one by one. PAIP’s algorithms focused on logic and reasoning, because in its day many AI researchers thought human intelligence was all about logical reasoning.
They naively thought that with predicate logic, first-order logic and the like — saying “because,” “therefore,” “not,” “for all” — machines could have intelligence. So they designed all kinds of logic-based algorithms and expert systems, and even a logic-based programming language called Prolog, which they hailed as a “fifth-generation programming language.” In the end, they hit insurmountable obstacles. Countless AI companies failed to achieve their vaunted goals, various machines based on “neurons” failed to solve practical problems, huge government and private investments evaporated, and artificial intelligence entered its winter.
It was during that winter that I met PAIP. It didn’t get me into artificial intelligence, but it got me hooked on Lisp and programming languages. It was through this book that I first implemented the A* algorithm, in an easy and methodical way. For the first time, I understood what “modularity” meant in a program. Guided by its code examples, I started using small “utility functions” in my own programs instead of worrying about “function call overhead.” Two books — PAIP and SICP — eventually led me into the more “fundamental” field of programming languages rather than artificial intelligence.
After PAIP, I got into machine learning for a while, because I was told that machine learning was the new chapter of artificial intelligence. But I came to realize that artificial intelligence and machine learning have very little to do with real human intelligence. Measured against actual problems, the classical algorithms in PAIP are either quite naive or so computationally expensive that they cannot solve real problems. The bottom line is, I don’t see what the algorithms in PAIP have to do with “intelligence.” The name “machine learning” is basically a sham: as many people can see, machine learning is simply statistical “function fitting” under a deceptively grand name.
Artificial intelligence researchers like to throw around the word “neuron” and tell you that their algorithms are inspired by how neurons work in the human brain. Note that “inspired” is a very ambiguous word: the result of being inspired by something can have nothing whatsoever to do with that thing. By the same token, I could say that the design of the Yin language was inspired by the Nine Yin Manual 😛
Of all the AI researchers in the world, how many have actually studied the human brain — dissected it, run experiments on it, or even read the findings of brain science? In the end you find that very few AI researchers have seriously studied the human brain or cognitive science. Douglas Hofstadter, the renowned cognitive scientist, pointed out long ago in an interview that these so-called “AI experts” have no interest in how the human brain and mind actually work and have never studied them in depth, yet they claim they will achieve “Artificial General Intelligence” (AGI). This is why AI remains an empty dream to this day.
Recognition systems and language comprehension
Everything machine learning has managed to do historically — character recognition, speech recognition, face recognition, and the like — I call “recognition systems.” Of course, recognition systems are very valuable: OCR is very useful, I often use voice input on my phone, and face recognition is obviously of great significance to the police. Yet many people boast that the same methods (machine learning, deep learning) can achieve “human-level intelligence” and replace all human work. That is a myth.
Recognition systems are a far cry from the “human intelligence” that truly understands language. In plain English, these recognition systems — statistical fitting functions — map one kind of data to another. OCR and speech recognition, for example, take pixels or audio as input and output text. Many people confuse “word recognition” with “language comprehension.” OCR and speech recognition systems can statistically “recognize” which words you said, but they don’t actually “understand” what you are saying.
Let me go a little deeper here; readers who don’t follow can skip this paragraph. The difference between “recognition” and “understanding” is like the difference between “syntax” and “semantics” in a programming language. A program’s text must first pass through a lexer and a parser before it is fed to an interpreter, and only the interpreter realizes the program’s semantics. By analogy, a speech recognition system for natural language is really just the lexer. As I have pointed out in previous articles, lexical analysis and syntactic analysis are only “step zero” on the long journey of implementing a language.
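To make the lexer → parser → interpreter pipeline concrete, here is a toy sketch of my own (not from PAIP or any real speech system): a miniature arithmetic language in which lexing yields only a flat token list, parsing yields a tree, and only the interpreter produces meaning.

```python
import re

# Step 0: the lexer. This is all that "recognition" gives you:
# a flat list of symbols, with no structure and no meaning.
def lex(text):
    return re.findall(r"\d+|[+*()]", text)

# Step 1: the parser. It turns the token list into a tree,
# using the grammar: expr -> term ('+' term)*, term -> factor ('*' factor)*.
def parse(tokens):
    pos = [0]
    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None
    def eat():
        tok = tokens[pos[0]]
        pos[0] += 1
        return tok
    def factor():
        if peek() == "(":
            eat()           # consume '('
            node = expr()
            eat()           # consume ')'
            return node
        return int(eat())
    def term():
        node = factor()
        while peek() == "*":
            eat()
            node = ("*", node, factor())
        return node
    def expr():
        node = term()
        while peek() == "+":
            eat()
            node = ("+", node, term())
        return node
    return expr()

# Step 2: the interpreter. Only here does "meaning" appear.
def interpret(tree):
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    l, r = interpret(left), interpret(right)
    return l + r if op == "+" else l * r

tokens = lex("1 + 2 * 3")
tree = parse(tokens)
print(tokens)           # ['1', '+', '2', '*', '3'] -- recognition only
print(tree)             # ('+', 1, ('*', 2, 3))     -- structure
print(interpret(tree))  # 7                         -- meaning
```

A speech recognizer stops at the first `print`: it gives you the tokens, and nothing more.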
Most AI systems don’t even have a parser, so they cannot cleanly identify the subject, verb, object and structure of a sentence, let alone understand its meaning. Frederick Jelinek, a speech recognition expert at IBM, once joked: “Every time I fire a linguist, the recognition rate goes up.” The reason is that speech recognition is just the lexer, whereas linguists work at the level of the parser and the interpreter. Of course, if what you are doing is that elementary, a linguist can’t help you — but that doesn’t mean linguists are worthless.
Many speech recognition experts think the parser is useless, because people seem to understand sentences without ever parsing them. What they don’t realize is that a person must unconsciously parse a sentence in order to understand its meaning.
Let’s take a very simple example. If I say to Siri, “I want to see some pictures of cats,” it gives me the following answer: “I can’t find anything about ‘some cats’ online.”
What does that tell us? Many people have probably figured it out: Siri couldn’t understand the sentence, so it went online and searched for some keywords. Siri has no parser — not even a decent word segmenter — so it doesn’t even know which keywords to search for.
Why does Siri go looking for information about “some cats” rather than “cats”? If it searched for “cats” and “pictures,” it would at least find something. The reason is that Siri has no real parser and builds no syntax tree. It just uses common NLP methods (such as N-grams) to chop the sentence into flat chunks, roughly “I / want to / see / some cats / ’s / pictures,” instead of the segmentation a syntax tree would imply: “I / want to / see / some / cats / ’s / pictures.”
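For readers unfamiliar with the term: an N-gram model sees a sentence as nothing more than overlapping windows of N adjacent words — a flat sequence with no hierarchy. A two-line sketch of my own, purely illustrative:

```python
# An N-gram is just a window of N adjacent words slid across the sentence.
# Nothing here knows that "some" modifies "pictures", not "cats".
def ngrams(words, n):
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

words = "I want to see some pictures of cats".split()
print(ngrams(words, 2))
# [('I', 'want'), ('want', 'to'), ('to', 'see'), ('see', 'some'),
#  ('some', 'pictures'), ('pictures', 'of'), ('of', 'cats')]
```

All the model ever sees is word adjacency; which word attaches to which phrase is simply not represented.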
The syntax tree of this sentence, as produced by a natural-language parser I once worked on, looks something like this.
The details are too technical to explain fully here, but as interested readers may have noticed, according to the syntax tree this sentence can be reduced to its backbone: “I want to see pictures.” Here “see pictures” is a clause serving as the object of “I want to” — what is called an object clause. How many pictures? Some. Pictures of what? Of cats.
- I want to see pictures
- I want to see some pictures
- I want to see pictures of cats
- I want to see some pictures of cats
Isn’t that interesting?
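The progression above can be reproduced mechanically once the structure is known. Here is a toy encoding of my own (not the actual parser’s output format): the object noun phrase has a head noun, an optional determiner and an optional prepositional phrase, and dropping the optional parts yields exactly the four variants listed.

```python
# Toy phrase structure for the object NP "some pictures of cats":
# head noun "pictures", optional determiner "some", optional PP "of cats".
NP = {"det": "some", "head": "pictures", "pp": "of cats"}

def realize(np, keep_det, keep_pp):
    """Rebuild the sentence, optionally dropping the modifiers."""
    parts = []
    if keep_det:
        parts.append(np["det"])
    parts.append(np["head"])
    if keep_pp:
        parts.append(np["pp"])
    return "I want to see " + " ".join(parts)

for keep_det, keep_pp in [(False, False), (True, False), (False, True), (True, True)]:
    print(realize(NP, keep_det, keep_pp))
# I want to see pictures
# I want to see some pictures
# I want to see pictures of cats
# I want to see some pictures of cats
```

An N-gram model cannot do this transformation, because it never knows which words are optional modifiers of which head.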
Siri has no such syntax tree, and its N-grams don’t even separate “some” from “cats” — which is why it searches for “some cats” rather than “cats.” It even dropped a word as important as “pictures.” So Siri performs “speech recognition” correctly: it knows which words I said. But because it has no parser and no syntax tree, it cannot understand what I am actually saying. It doesn’t even know what I’m talking “about.”
How hard is it to build a natural-language parser? Most people have probably never tried; I have. While at Indiana University, I took an NLP class and, with some classmates, built an English parser for course credit. It produced syntax trees like the one above.
Not only do you have to understand the parsing theory of programming languages (LL, LR, GLR…), you also need a great deal of examples and data to resolve the ambiguities of human language. My project partner specialized in NLP; between us we knew parsing, Haskell, type systems, category theory, GLR… all very slick stuff. Even so, our English parser could handle only the simplest sentences, and even then it was riddled with errors 😛
After parsing, you get a “syntax tree,” which is passed on to the language-understanding center in your brain (a kind of “interpreter,” in programming-language terms). The interpreter “executes” the sentence, finds the “values” of the names involved, and combines them to compute the meaning of the sentence. No one seems to fully understand how the human brain assigns “meaning” to the words in a sentence, or how these meanings are put together to form “thought.”
At the very least, it requires a great deal of lived experience, which a person accumulates from birth. Machines have none of this experience, and we don’t know how to give it to them. We don’t even know how such experience is structured and organized in the human brain. So for a machine to truly understand even one sentence is next to impossible.
That’s why Hofstadter says: “For a machine to understand human speech, it must have legs, be able to walk, observe the world, and gain the experience it needs. It must be able to live among people and experience their lives and stories…” In the end, you realize that building such a machine would be far harder than raising a child. This is certainly not a problem you take on because you have nothing better to do.
Machine conversation systems and human customer service
Siri, Cortana, Google Assistant, Amazon Echo and other voice-controlled tools are marketed as “personal assistants.” How “intelligent” these things really are, anyone who has used them should understand. Every time I try Siri, its stupidity astonishes me — it can make you want to smash your iPhone in frustration. The others are no better.
Many people have been fooled by Microsoft’s Xiaoice, thinking it really understands what they are saying. After chatting for a while, however, you discover that Xiaoice is nothing more than an “online sentence search engine.” It just searches, more or less at random, for sentences already posted on the Internet, based on the keywords in your messages. Most of these sentences come from Q&A sites such as Baidu Zhidao and Zhihu.
A very simple experiment: repeatedly send the same phrase to Xiaoice — say, “Wang Yin” — and see what it returns, then paste that content into Google or Baidu, and you will find the true source of the sentence. People like to deceive themselves: they see a few “clever” replies and conclude the thing is intelligent, when in fact the reply is a randomly fetched, irrelevant sentence — and its very irrelevance is what makes it feel “clever.” For example, you say to Xiaoice, “Who is Wang Yin?” and she might reply, “Wang Yin is changing her jokes.”
You think: what a charming girl — she doesn’t answer your question directly, and she has a sense of humor! Then you search Baidu and discover that this sentence was originally posted on a forum by someone bad-mouthing me.
Here is a concrete example that shows how Xiaoice works. The screenshot was captured at the end of October 2016, when I tried talking to Xiaoice. Things may be slightly different now.
This shows that Xiaoice’s replies come largely from Baidu Zhidao, Zhihu and similar places; Xiaoice apparently just ran a search over that data. Xiaoice merely fetched a sentence at random — the sense of humor is all in your head. Many people like to screenshot only the “coherent” or “funny” parts of a conversation with Xiaoice and exclaim, “Wow, Xiaoice is so smart and funny!” What they don’t tell you is that much of the conversation they didn’t post was complete non sequitur.
IBM’s Watson, which beat humans at Jeopardy, was assumed by many to understand human language and possess human-level intelligence. These people don’t even know how Jeopardy works, so they blindly assume it is a game that requires understanding human language. Look closely and you’ll see that Jeopardy is a simple “guessing game”: a one-sentence clue, a noun answer. For example: “Who is the singer who won ten Grammy awards last year?”
If you understood my earlier analysis of “recognition systems,” you’ll see that Watson is also a recognition system: the input is a sentence, the output is a noun. A Jeopardy recognition system need not understand the meaning of the sentence at all; it just produces a noun from the keywords in the clue, using a fitting function derived from a large corpus. With all the nouns in the world, where would one find such a corpus? Here’s a Jeopardy-style clue as a hint: “What kind of website takes a noun and outputs paragraphs explaining what that thing is, along with all sorts of related information?”
Easy to guess? An encyclopedia like Wikipedia! All you have to do is turn the content around and build an “inverted index” — a search engine. You type in a sentence, and it retrieves the noun most relevant to the keywords in that sentence. That is a machine that can play Jeopardy, and it can easily outperform human players, just as search engines like Google and Yahoo easily outperform humans at finding web pages. But there is hardly any understanding or intelligence in it.
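Here is a miniature “Jeopardy machine” along the lines just described — my own sketch of the idea, not IBM Watson’s actual architecture, with made-up articles and clues: index encyclopedia-style entries by word, then answer a clue with the entry title that shares the most keywords.

```python
from collections import defaultdict

# A tiny fake encyclopedia: title -> descriptive text.
ARTICLES = {
    "Adele": "British singer who won multiple Grammy awards",
    "Wikipedia": "free online encyclopedia that explains what things are",
    "Prolog": "logic programming language from the fifth generation project",
}

# The inverted index: word -> set of titles whose text contains it.
index = defaultdict(set)
for title, text in ARTICLES.items():
    for word in text.lower().split():
        index[word].add(title)

def answer(clue):
    """Return the title sharing the most keywords with the clue."""
    scores = defaultdict(int)
    for word in clue.lower().split():
        for title in index.get(word, ()):
            scores[title] += 1
    return max(scores, key=scores.get) if scores else None

print(answer("this singer won ten Grammy awards last year"))  # Adele
```

No meaning is involved anywhere: keyword overlap alone picks the answer, which is exactly why this counts as a recognition system rather than understanding.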
In fact, to see whether Watson understands human language, I went to Watson’s website a while ago and tried its “customer service demo.” It was a complete non-conversation. Most of the time Watson replied, “I don’t know what you’re talking about. Are you trying to…” and then listed a bunch of options: 1, 2, 3…
Boss, are you expecting to replace your company’s human customer service with something like this? Your company will go out of business 😛
Of course, I’m not saying these products are completely worthless. I’ve used Siri and Google Assistant, and I find them somewhat useful, especially when driving. I use voice control then because fiddling with the screen while driving invites an accident. I can say to my phone, “Navigate to the nearest gas station.” But doing this requires no language understanding at all: you just need speech recognition to trigger a function call — navigate(gas station).
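To illustrate how little is needed here, a hedged sketch of that point — the command pattern and the `navigate` stub below are my own invention, not any real assistant’s API:

```python
import re

def navigate(destination):
    """Stand-in for the phone's real navigation function."""
    return f"routing to: {destination}"

# Each command is a regex template plus the function it triggers.
# No parsing, no understanding -- just pattern matching.
COMMANDS = [
    (re.compile(r"navigate to (?:the nearest )?(.+)", re.I), navigate),
]

def handle(utterance):
    for pattern, action in COMMANDS:
        m = pattern.match(utterance)
        if m:
            return action(m.group(1))
    return "Sorry, I don't know what you're talking about."

print(handle("Navigate to the nearest gas station"))  # routing to: gas station
```

Everything outside the fixed templates falls through to the apology line — which matches the experience of using these assistants rather well.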
Personal assistants are of little use at other times. I don’t want to use them at home or in public, for a simple reason: it’s awkward or tiresome to talk. With a few taps on the screen, I can do exactly what I want, with less effort and more precision than speaking. That personal assistants completely fail to understand what you say is an understandable limitation, and they can still be useful. Recently, however, the big companies have all been hyping such things, exaggerating their “intelligence” while keeping quiet about their limitations, leading outsiders to believe that artificial intelligence has been achieved. That is why I must pour scorn on them.
Egged on by these “personal assistants,” some people now claim that similar technology could be used to build “robot customer service,” replacing humans with machines. What they don’t realize is that customer service, which looks like an “easy job,” is far more difficult than these voice-controlled gadgets. Customer service agents have to understand the business, grasp exactly what the customer is saying, hold a real conversation, and actually solve the customer’s problems — not just pick out keywords and respond at random.
In addition, customer service must act on information from the conversation and change the real world: call the distribution center to stop a shipment, escalate a customer’s special request to a supervisor, argue with a customer over the return policy or reject an unreasonable return, read the customer’s psychology and pitch new services — all of which takes every kind of “human experience” to handle. So machines would not only have to hold real conversations and understand what customers are saying; they would also need a great deal of real-world experience and the ability to act on the real world before they could do customer service. Given how badly these personal assistants flounder, I see no hope of building robot customer service with existing technology.
If machines can’t even take over a supposedly routine job like customer service, there is no need to talk about more complex ones. Many people saw AlphaGo’s victory and assumed that so-called deep learning would one day achieve human-level intelligence. In a previous article, I pointed out that this is a myth. Many people think that things humans find difficult (like Go) are where true human intelligence lies, but they are not. I ask you: is it hard to compute 23423451345 / 729 in your head? It’s hard for a human, yet any dumb computer can do it in 0.1 seconds. The same goes for Go, chess and the like. These mechanical problems don’t reflect real human intelligence; they only reflect massive brute force.
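Indeed, the division above is instantaneous for a machine:

```python
# The mental-arithmetic challenge from the text: laborious for a human,
# trivial for any "dumb" computer.
quotient, remainder = divmod(23423451345, 729)
print(quotient, remainder)  # 32130934 459
```

That a computer wins at this says nothing about intelligence, which is precisely the point.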
Looking at the intimidating terms the field keeps inventing — from Artificial Intelligence to Artificial General Intelligence, from Machine Learning to Deep Learning — I’ve noticed a pattern: AI researchers seem to like coining scary names, and when people lose confidence in one name, they switch to a new one, so that the disappointment with the old name doesn’t carry over to the new research. But these terms ultimately refer to the same thing. Since no one really knows what human intelligence is, there is no way to achieve “artificial intelligence.”
Every day of my life, I — a former AI fanatic — am amazed by the extraordinary capabilities of human intelligence. It doesn’t even have to be a person: the abilities of any higher animal, such as a cat, leave me in awe. I have a genuine respect for people and animals. I no longer dare to keep the word “humanity” casually on my lips, because any machine is so small in the face of that word.
In memory of my chatbot Helloooo
While we’re on this hot topic, let me tell the story of my own chatbot from more than ten years ago…
If you look at PAIP or any classic AI textbook, you’ll see that the original idea for these chatbot systems came from an AI program called ELIZA. ELIZA was framed as a therapist that talks to you about your problems, but inside it was essentially a sentence-matching engine like Xiaoice, implemented entirely with regular expression matching. For example, one of ELIZA’s rules says that when the user says “I (.*),” reply “I, too, $1…,” where $1 is replaced by part of the original sentence, creating an effect of “understanding.” A user might say, “I’m bored.” ELIZA would reply, “I, too, am bored…” And then these two bored people get to know each other and end up together.
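The whole mechanism fits in a few lines. Here is a minimal ELIZA-style responder of my own (a sketch of the idea described above, not Weizenbaum’s original rule set): each rule is a regular expression plus reply templates, and the captured text is spliced back into the reply to fake “understanding.”

```python
import random
import re

# (pattern, reply templates) pairs; {0} is filled with the captured group,
# playing the role of ELIZA's $1.
RULES = [
    (re.compile(r"I (.+)", re.I), ["I, too, {0}..."]),
    (re.compile(r"(?:.*\b)?(bored|sad|tired)\b", re.I),
     ["Tell me more about feeling {0}."]),
]
FALLBACKS = ["Please go on.", "I see.", "Interesting. Continue."]

def respond(sentence):
    for pattern, replies in RULES:
        m = pattern.match(sentence)
        if m:
            return random.choice(replies).format(*m.groups())
    # A random fallback keeps repeated inputs from getting identical
    # replies -- the same trick my bot Helloooo relied on.
    return random.choice(FALLBACKS)

print(respond("I am bored"))  # I, too, am bored...
```

There is no corpus, no learning, no understanding — just pattern matching and a pinch of randomness, which turns out to go a surprisingly long way.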
Some old friends from Tsinghua may remember that more than ten years ago, when I was at Tsinghua University, I built a chatbot and put it on the Shuimu Tsinghua BBS. It was very popular at the time, so I can perhaps claim to be a pioneer of online chatbots 🙂 My chatbot’s Shuimu account was called helloooo. Helloooo was a naughty, girl-crazy little boy with a personality like Crayon Shin-chan’s.
Inside, it was entirely ELIZA-like: it didn’t understand sentences, it had no corpus, it didn’t even have a neural network — just a pile of regular-expression “sentence patterns” I had written in advance. You type a sentence; when it matches a pattern, the bot randomly picks one of several responses, so even if you say the same thing repeatedly, Helloooo’s replies don’t repeat. If you deliberately keep saying the same thing, Helloooo eventually says, “Why are you so boring?” or “What’s wrong with you?” — or it changes the subject, or ignores you for a while… So the other person doesn’t easily notice that it’s a dumb machine.
It was that simple. To my surprise, Helloooo attracted lots of people as soon as it went online. Word spread, and people messaged it every day. Because the regular expressions and replies I wrote for it took human psychology into account, Helloooo came across as very “playful”: it would play dumb, tease, delay its replies, change the subject, even start conversations and tell little jokes of more than two sentences… all sorts of things. In the end it charmed quite a few girls and nearly went on dates with a couple of them 😛
In this respect, Helloooo beats Xiaoice by a wide margin. Xiaoice has fancier technology and far more data, yet Helloooo feels more like a person and is better liked. This shows that you don’t need sophisticated technology or natural-language understanding: with clever design that captures people’s psychology, you can build a chatbot people love.
Eventually, Helloooo attracted the interest of graduate students in Tsinghua’s artificial intelligence group, who asked me: “What corpus do you use for analysis?” Me: “&%&¥@#@#%……”
Automatic programming is impossible
Now, back to the proposal I started with: building an automatic programming system. I can tell you right now, very simply: it’s impossible. Microsoft’s RobustFill and the like are all nonsense. I rather despise Microsoft’s recent efforts to fan the flames of AI fever. The Microsoft researchers probably know the limitations of these things; it is the Chinese tech media who exaggerate their powers.
One look at the examples they give tells you these are toy problems. Given just a few input-output examples, it is obviously impossible for a computer to guess exactly what you want to do, for the simple reason that a few examples may not contain enough information to pin down the person’s intent. The simplest transformation might work, but add a few exceptions and it becomes impossible to guess. If even a person looking at the same examples couldn’t tell what is wanted, how could a machine? This amounts to mind reading. In fact the person himself may be confused and not know what he wants — how is a machine supposed to guess? So it’s harder than mind reading!
If it cannot solve even such feeble toy problems with 100% accuracy, there is no hope the moment any real logic is involved. The paper’s “visionary” closing remark about extending the approach to cases with “control flow” is complete nonsense. All RobustFill manages, on these extremely simple toy problems, is “about 92 percent accuracy” — and what criteria were used to compute that 92% is itself questionable.
Any responsible programming-language expert will tell you that automatic programming is impossible. Since mind reading is impossible, for a machine to do anything, a person must at least tell the machine what he or she wants — and expressing this “want” is almost as hard as programming. Isn’t the essence of a programmer’s job precisely telling the computer what he wants it to do? The hardest parts (data structures, algorithms, database systems) have already been wrapped up in library code, but the task of expressing “what you want to do” can never be automated, because only the programmer knows what he wants — and even he has to think long and hard to figure it out…
It has been said that programming is just another name for the lost art of thinking. No machine can replace human thinking, so programming is a job machines cannot take over. Good programming tools can make programmers’ work easier and more efficient, but any attempt to replace programmers, skimp on programming labor, or reduce programmers to “interchangeable parts” (Agile, TDD) will ultimately backfire on the employer. The same principle applies to other creative jobs: chefs, hairstylists, painters…
So forget about automatic programming. The only way to reduce programming costs is to hire good programmers, treat them with respect, and let them live and work happily. At the same time, get rid of the all-talk-no-action managers peddling “Agile,” “Scrum,” “TDD” and “software engineering” — they are the real waste of company resources, dragging down development efficiency and software quality.
The value of stupid machines
I am not opposed to continued investment in AI research that has practical value (such as face recognition). However, I think its usefulness should not be exaggerated, nor should we obsess over it as if it were the only way of doing things, an epoch-making revolution, something destined to replace all human labor.
My personal interest is not in artificial intelligence. So how would I start a business? Quite simply: I don’t think most people need a very “smart” machine. It is the “dumb machine” that is most valuable to us, and we are far from exhausting the potential of dumb machines. So designing new, reliable, beneficial dumb machines should be my entrepreneurial goal. By “machine” I mean both hardware and software, even cloud computing and big data.
To take just one example: some AI companies want to develop “robot servants” that clean and do housework automatically. I think this problem is nearly impossible to solve, and it would be better to call on genuine intelligence — a housekeeping “auntie.” I could build an auntie-service platform that matches families needing services with aunties, equips aunties with better tools, communications, scheduling and payment facilities so they can work cheaply and easily, and feeds back information about each auntie’s work to the family, putting the family at ease too. Isn’t that the best of both worlds? Who needs an intelligent robot — difficult to build, expensive, and hard to use? Such auntie-service platforms, combining real human intelligence, could easily strangle those robot-servant companies in the cradle.
Of course, I may not actually build an auntie-service platform; it is just an example. There are plenty of dumb machines useful to people still waiting to be invented. These machines take brainpower to design, yet they are easy to implement, convenient, and economically effective. Rather than competing with people for jobs, such things might create more jobs. The most sensible direction of development is to combine human intelligence with the brute force of machines, so that people save labor and make money too.
(If you like this article, feel free to pay whatever you like.)