This article is about 5,476 words and should take roughly 16 minutes to read.

Photo source: Unsplash

In recent years, the public has been discussing the ethics of artificial intelligence (AI). The recently concluded China Computer Conference (CNCC) held a sub-forum on the topic of “The Ethical Boundaries of AI Development”.

For a time, all kinds of opinions were flying around.

“Robots will take over people’s jobs”, “computer systems are just as biased as humans”…


Why don’t we stop and ask what, if anything, makes machine learning or artificial intelligence more dangerous than other technologies?


Indeed, the current public debate about the ethics of AI is important and timely, and it should continue. But the one thing this concerned citizen asks is that we stop reaching for the term “artificial intelligence” when discussing the ethical issues of technology, because sometimes those issues have nothing to do with artificial intelligence at all.


The current debate about the ethics of AI misses the point


Recently, Xiao Xin read an article from the World Economic Forum titled “Nine Ethical Questions about Artificial Intelligence”. Based on this article, Xiao Xin will conduct a small experiment. (To be clear, this is not meant as criticism or mockery of the forum; quite the opposite, Xiao Xin is a big fan.)


Most of the issues the public raises when discussing the ethics of AI are not unique to AI. In fact, they apply to technology in the broad sense, so they are nothing unusual.


Take that article’s subheadings, replace every occurrence of “artificial intelligence”, “robot”, “machine”, “intelligent system” and “artificial” with the word “technology”, and see what you find.


How many of these “nine ethical questions” are unique to artificial intelligence?


1. What if people can’t find jobs in the future?

2. How should the wealth created by technology be distributed?

3. How will technology affect human behavior and communication?

4. How to reduce the technical error rate?

5. How to eliminate technology bias?

6. How to avoid using technology for improper purposes?

7. How to prevent unintended consequences of technology?

8. How do humans master complex technology?

9. How to define the “human rights” of technology?


Obviously, questions 1 to 8 apply to technology in the broad sense, which of course includes ordinary software. The use of the term “artificial intelligence” in that article is really meant to draw readers in; it also reminded Xiao Xin of pet rocks, a teaching prop commonly used by geologists. If the pet rock is just there to make class more fun, that is of course fine. However, if the geology class turns into “pet rock psychology”, that is something else entirely (question 9).


Draw a face on an object and people will soon start talking to it. That is really a human “bug”, because having a face does not mean having a brain.


If you want to inflict more injustice on human beings, build ineffective solutions, disrupt the labor market, change the way people communicate, hand things over to the wrong people, or create a complex system humans can neither control nor escape, you don’t need machine learning or artificial intelligence to do it (but please don’t). And we don’t need to reach for fancy concepts like “big data” or “neural networks” to explore the impact of such actions on the human world.


The technological singularity? (The idea is unproven, so handing it the Nebula Award is a bit premature.)


So what is unique to machine learning or artificial intelligence? There are such features! Is it “the singularity”? Don’t be too quick to hand the Nebula Award to the technological singularity. There are other, more important features of what we now call machine learning or artificial intelligence that need clarifying first.


This is another example of a pet rock. This simulation is more realistic, but still, it’s an inanimate object with a face painted on it.


Artificial intelligence vs. robots


What’s so good about artificial intelligence?


The answer is simple: artificial intelligence can automate tasks that cannot easily be put into words.


In other words, humans no longer need to spell out the solution to a problem; they can express it through patterns in the data.


Do you know how powerful this approach is?


This means that humans no longer need to write instructions and computers can do tasks automatically. What more could you ask for? Want artificial intelligence to be “human”? Completely replace humans? Technological singularity?


Don’t think too much about it. Artificial intelligence has nothing to do with that. Currently, AI is packaged as a “fully armed” humanoid robot, which plays on public ignorance and distracts the public from the real danger.


Robots are just another kind of pet rock. For example, try putting two big eye stickers on your vacuum cleaner (I know you want to do this).


So if we’re worrying about things we don’t need to worry about, then we’re missing out on things we really need to worry about. Don’t be fooled by the technical poets.


Neural networks are not the same as human brains.


What is currently called artificial intelligence is not about developing humanoid entities with personalities to replace humans (a better term for that would be HLI, humanoid intelligence). In reality, AI is just a set of programming tools used in a different way: with examples (data) rather than explicit instructions.


“AI is a tool for programming with examples [data] rather than explicit instructions.”
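To make that quote concrete, here is a minimal Python sketch (the features, threshold and tiny dataset are all invented for illustration): the first function is explicit instructions written by a human, while the second lets a scikit-learn model infer a rule from labeled examples.

```python
# A minimal sketch of "explicit instructions vs. programming with examples".
# The features, threshold and toy data below are invented for illustration.

from sklearn.tree import DecisionTreeClassifier

# Explicit instructions: a human writes the rule down, step by step.
def is_spam_by_rule(num_links: int, num_exclamations: int) -> bool:
    return num_links > 3 and num_exclamations > 5

# Programming with examples: a human supplies labeled data instead of a rule.
X = [[0, 1], [1, 0], [5, 8], [7, 9]]   # [num_links, num_exclamations]
y = [0, 0, 1, 1]                       # 0 = not spam, 1 = spam

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[6, 7]]))         # the rule is now inferred from the data
```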


Want to know what the future holds for artificial intelligence? What are its potential risks? Take a closer look at that last quote; it points to both.


A shift in focus


Suppose you need to automate a 10,000-step task. With traditional programming, a human would have to wrestle with every instruction.


In traditional programming, each step of the solution must be explicitly written by a human.


For this reason, the process can be compared to assembling 10,000 Lego bricks by hand. Because developers have little patience, they package parts together so they don’t have to reassemble them the next time they are needed.


Instead of dealing with the 10,000 bricks one by one, people download pre-assembled packages from the Web. They can then concentrate on the next step, say putting together another 50 components of 200 bricks each. If people are confident that someone else has assembled those Lego components properly, they don’t have to bother checking every single piece.


As a result, people only need to fit whole roofs and walls together, without thinking brick by brick; besides, who has the spare time for such trifles? (Perhaps the 10,000-piece masterpiece gets packaged up when the assembly is done, so that when someone is ready to build the 100,000-piece masterpiece, they can reuse it and save a lot of time. This is exactly what GitHub is for.)
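As a rough illustration of that packaging idea in ordinary programming (the function names and the tiny text-cleaning task are invented for this sketch), small steps get bundled into one reusable component so later projects never touch the individual bricks:

```python
# A minimal sketch of "packaging bricks" in traditional programming.
# The function names and the text-cleaning task are invented for illustration.

# Individual bricks: each step written out by hand.
def normalize(text: str) -> str:
    return text.strip().lower()

def tokenize(text: str) -> list:
    return text.split()

# A packaged component: later projects call this instead of
# reassembling the individual steps every time.
def preprocess(text: str) -> list:
    return tokenize(normalize(text))

print(preprocess("  Hello World  "))   # ['hello', 'world']
```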

Photo source: Unsplash

But here’s the thing: even if you no longer have to perform each step yourself, every single one of those 10,000 steps was still written by a human… and that is precisely where machine learning and artificial intelligence differ.


Machine learning moves humans from a state of close attention to every step to a state of paying no attention to the steps at all.


In machine learning and artificial intelligence engineering, there are all kinds of problems to deal with, but it’s mostly about integrating tools. People may need to write 10,000 lines of code for a project, and most of that code is to coax these clunky tools into taking orders. As tools evolve, eventually there will be only two real instructions in machine learning or artificial intelligence:


1. An optimization goal…

2. …based on a dataset.


That’s all. Instead of writing thousands of lines of code, two statements are enough to automate tasks. That sounds great, but it’s dangerous.
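Here is a minimal sketch of what those “two real instructions” can look like in practice, using scikit-learn (the choice of model and the built-in demo dataset are stand-ins for illustration, not the only way to do it):

```python
# A minimal sketch of the "two instructions" idea: a goal plus a dataset.
# The model choice and the built-in demo dataset are stand-ins for illustration.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)            # (2) ...based on this dataset
model = LogisticRegression(max_iter=5000).fit(X, y)    # (1) optimize this goal

# Nobody wrote out the thousands of decision steps; they were fitted from examples.
print(model.score(X, y))
```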


Whose jobs will AI replace?


In real life, some tasks are so tedious that it is better to let machines handle them without bothering a human, which greatly improves productivity. Even when humans cannot articulate how a task is done, they can still get it done efficiently with the help of machines. This is why machine learning and artificial intelligence are increasingly appreciated by people who are not distracted by scenes from science fiction movies.


After that, humans no longer have to write out solutions with tens of thousands of steps. Machine learning and artificial intelligence can automatically generate those 10,000 lines of code (or something equivalent) by finding a solution in the examples the developers provide.


The fundamental difference is where human attention goes.


If you’ve never thought about machine learning or artificial intelligence replacing anyone’s job, you’d better brace yourself for the fact that:


Developers automate/speed up other people’s work.

Machine learning and artificial intelligence will automate/speed up the work of developers.


So you no longer have to write instructions like “First this, then this, then this, then…”; now you just say, “Based on this data, get the best score you can.” Or, put more vividly, “Here’s an example of what I like; you typewriter monkeys let me know the moment you manage to produce something like it.”


Photo source: www.quickiwiki.com

(Don’t worry, software engineers don’t have to fear losing their jobs, because there is still plenty of work to do preparing and integrating datasets before algorithms can process them. The only thing that has changed is how they work: instead of telling the computer what to do through instructions, they now express what they want through data.)


Unavoidable human negligence


It is now necessary to bring up the most pressing problem unique to machine learning and artificial intelligence: the inevitable human negligence.


When human lives are at stake, this negligence becomes a huge risk: machine learning and artificial intelligence act as negligence amplifiers.


At this point, the decision makers’ choices matter, but will the people running the project really weigh those two lines as carefully as they would the thousands of instructions they used to write? Really?


Photo source: tooopen

What else has not been considered?


Example selection


Machine learning and artificial intelligence use examples to express intent, so datasets have to be fed into the system. However, if this data is not verified and curated in advance, and nobody checks what is relevant, what is biased and what counts as a high-quality example, the consequences can be disastrous.
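As a toy illustration of how unverified examples bake bias into a system (the hiring scenario, features and labels below are entirely invented), a model trained on biased past decisions simply reproduces them:

```python
# A minimal sketch of biased, unverified examples producing a biased model.
# The hiring scenario and all data below are invented for illustration.

from sklearn.tree import DecisionTreeClassifier

# [group_flag, qualification_score]; label 1 means "hired" in past decisions
X = [[0, 9], [0, 8], [0, 7], [1, 9], [1, 8], [1, 7]]
y = [1, 1, 1, 0, 0, 0]          # past decisions were biased against group 1

model = DecisionTreeClassifier().fit(X, y)

# Two equally qualified candidates from different groups:
print(model.predict([[0, 8], [1, 8]]))   # the model reproduces the bias: [1 0]
```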


AI bias: inappropriate, unverified examples.

Photo source: Unsplash

Goal selection


In addition, people can casually pick a goal that sounds good but is actually terrible. For example, a leader says to a developer, “Help me block as much spam as possible,” expecting a sensitive and reliable filter. If the same instruction were handed to an AI algorithm, however, the leader might soon notice that no new emails arrive anymore: marking every message as spam is the easiest way to get the highest score on that goal.
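A toy sketch of how literally a badly stated goal can be satisfied (the tiny inbox below is invented for illustration): if the goal is only “block as much spam as possible”, a filter that blocks everything scores perfectly while destroying the inbox.

```python
# A minimal sketch of a badly chosen goal being "achieved" perfectly.
# The tiny inbox below is invented for illustration.

inbox = [
    {"text": "meeting at 3pm", "is_spam": False},
    {"text": "WIN A FREE PRIZE!!!", "is_spam": True},
    {"text": "quarterly report attached", "is_spam": False},
]

# Goal as stated: "block as much spam as possible".
def block_everything(message):
    return True   # flags every message as spam

blocked_spam = sum(m["is_spam"] for m in inbox if block_everything(m))
blocked_real_mail = sum(not m["is_spam"] for m in inbox if block_everything(m))

print(blocked_spam)       # 1 -- 100% of spam blocked, goal "achieved"
print(blocked_real_mail)  # 2 -- and every legitimate email is lost with it
```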


Any fool can name a goal off the top of his head. Unfortunately, the learning system insists on achieving this goal.


Words like “brain” and “math” are vague and mysterious, and that may pose an even bigger risk for humanity, because Xiao Xin believes this sense of mystery makes people careless and hasty when choosing goals and examples. In the end, it is the human mind that is in charge. The so-called mathematics is reduced to a sandwich: human subjective choices are the two slices of bread, and the truly objective mathematics is just a thin layer of butter in between.


Mathematics here is like a sandwich: subjectivity on the outside, with a little objectivity in between.


As artificial intelligence tools become more sophisticated, the barrier to entry keeps dropping; yet even with a low bar, people inevitably stumble. Better tools are great for small, private projects, but if an AI project will affect other people, its owners must manage it with far more care. That is when a great leader has to step up and make wise decisions about a whole range of questions, for example: are we really ready for these risks and challenges?


Photo source: sohu

“Give me a fulcrum, I can lift the whole earth.” — Archimedes


Technology — a great lever


Technology can make the world a better place, broaden our horizons, extend our lives, and provide us with food and clothing even in an age of population explosion.


However, technology can also bring fear, turmoil and chaos to humanity. The greater the impact of a technology, the greater its potential for disruption. Technology is like a lever that can greatly enhance human potential.


But remember, we must be careful when using technology, even for the benefit of mankind! Because when you enjoy a technology, it is easy to affect the people around you.


It is best to use any technology, including artificial intelligence, as an aid to humans rather than as an autonomous agent in its own right. While enjoying the convenience technology brings, we must take care not to adversely affect others.

Photo source: 3158


When we enjoy the convenience brought by technology, it is easy to affect others.


While many of the ethical issues surrounding AI are not unique to AI, AI can still rub salt into the wound. That is why the current discussion of these issues is still well worth having.


Human negligence will be magnified


If someone asks me, “Are you afraid of AI?”, what I really hear is, “Are you afraid of human negligence?” To me, that is the root of the problem, because I don’t believe in science-fiction robots and I don’t talk to pet rocks.


Goodbye, ethics of artificial intelligence!

Hello, human negligence!


Going back to those nine subheadings at the beginning of this article, imagine increasing the reach and speed of the scenarios they describe. Then add a human-carelessness amplifier on top of that scale; soon, the consequences of human negligence will be magnified beyond measure.


What humans should fear from AI is not robots, but humans themselves.


As the saying goes, with great power comes great responsibility. But in this age of artificial intelligence and big data, are humans really ready to take on the challenge of leadership? No wonder humans worry about being bitten by their own technology.

Photo source: Unsplash

Is Xiao Xin afraid of artificial intelligence?


Of course not!


Xiao Xin loves artificial intelligence and, as an optimist, looks forward to a future shaped by it.


Xiao Xin believes that in the era of artificial intelligence, everyone will come to value responsibility, accountability and leadership. By then, humans will be able to build safe and effective systems that drive technological progress and a better life for all.


This is why Xiao Xin (and like-minded people) has been sparing no effort to share technical knowledge and news about artificial intelligence, the Internet and more with everyone.


Once humans have picked the low-hanging fruit (that is, solved the easy problems), artificial intelligence will be needed to tackle the harder ones.


Hopefully, through the proper use of artificial intelligence, humanity will be able to conserve resources, even travel among the stars, spread love across mountains and seas, and leave ignorance and disease behind forever.


Technology can be beautiful; it is up to us. Xiao Xin has always believed that everyone is both a witness to and a creator of the beauty and warmth technology can bring.


A better future of cooperation between humans and artificial intelligence, and of symbiosis between nature and technology, is calling to us!


We share practical content on AI learning and development. You are welcome to follow the AI vertical media account “core reading technology” across platforms.



(Add WeChat: DXSXBB to join the readers’ circle and discuss the freshest artificial intelligence technology.)