Planning editor | Natalie
Author | Francois Chollet
Translator | Nuclear Coke
Editor | Emily
AI Front introduction: The Facebook user data leak has become the biggest scandal in the history of social media and has triggered extensive discussion in major media both in China and abroad.





Last night, Francois Chollet, a Google engineer and the author of the well-known deep learning library Keras, published a post on Medium titled "What Worries Me About AI." In it, Chollet argues that companies and technology developers should not use AI as a tool to manipulate users, but should instead offer it as a powerful tool that users themselves control. As of press time, the article had received more than 2,200 claps.





At the beginning of his article, Francois Chollet argues that our fears about artificial intelligence, like our fears about computer technology in the 1980s and 1990s, are misguided, while the developments that are genuinely worrying receive too little attention.





So what are the problems with artificial intelligence that we should really be worried about? How should technical practitioners deal with them? This article is worth reading for every business manager and technology developer.








The following is the full text of "What Worries Me About AI," published by Francois Chollet on Medium and compiled and edited by AI Front.

Disclaimer: The opinions expressed in this article are my own and are not related to the company I work for. If you quote from this article, please present these views as personal conclusions and judge them objectively on the facts.

If you were born in the 1980s or 1990s, you may remember the now long-gone phenomenon of "computer phobia," a problem that persisted into the early 2000s. As personal computers swept into our lives, workplaces, and homes, many people reacted with anxiety, fear, and even aggression. While some were fascinated by computers and thrilled by their potential, most people did not understand the technology; it felt alien to them, and many regarded it as a serious threat. In other words, people worried about being replaced by computer technology.

Many people react to technological change with antipathy, even panic. This happens in every technological revolution, yet it is worth emphasizing that most of the consequences we fear never actually occur.

A few years later, even those who once feared computers had gotten used to living with them and putting them to good use. Computers did not replace humans or cause mass unemployment; on the contrary, we can no longer imagine life without laptops, tablets, and smartphones. The sense of threat gave way to acceptance of the status quo. But at the same time, the dangers we once worried might come from computers and the Internet began to take a different form and cause real trouble. In the 1980s or 1990s, most people would never have imagined pervasive mass surveillance, the tracking infrastructure built around personal data, the psychological alienation wrought by social media, the erosion of users' patience and ability to focus, the vulnerability of minds to political or religious radicalization, or the hijacking of social networks by hostile foreign powers to undermine Western democracies.

While most of our original fears were the product of irrational thinking, most of the genuinely worrying developments brought about by technological change went largely unnoticed at the time. A hundred years ago, we could not have predicted that transportation and manufacturing technologies would enable a new form of industrialized warfare that would eventually cost tens of millions of lives in two world wars. Nor did we realize that the rise of radio would enable a new form of mass propaganda that contributed to the rise of fascism in Italy and Germany. And the advances in theoretical physics of the 1920s and 1930s gave no hint that they would soon, in the form of thermonuclear weapons, place the world permanently on the brink of destruction. Even today, a large share of Americans (44 percent) choose to ignore the alarm that has been sounding for decades over one of humanity's central problems: climate change. As individuals in a civilized society, we seem to find it difficult to face future threats with reasonable concern and caution; at the same time, we panic easily on the basis of irrational judgments.

Now, as ever, we are facing a new wave of fundamental change: cognitive automation, which we can broadly summarize as artificial intelligence. And, as in the past, we fear that this new technology will harm humanity itself: that AI may lead to mass unemployment, or that it will gain autonomy, become a superhuman "species," and ultimately choose to destroy humanity.


But we are worrying in the wrong direction, just as we did in the past. What if the real threat from AI lies not in the "superintelligence" or "singularity" scenarios most people currently focus on? In this article, I want to address what you really should worry about when it comes to artificial intelligence: the highly effective, highly scalable manipulation of human behavior that AI makes possible, and its malicious use by companies and governments. Of course, other tangible risks accompany the development of cognitive technology, particularly the problem of harmful bias within machine learning models. But this article will focus on large-scale manipulation by AI, mainly because the risk is extreme and often overlooked.

This risk is already a reality and will continue to escalate in the coming decades as the technology develops. As our lives become increasingly digital, social media companies gain ever greater visibility into how we live and think. At the same time, they increasingly control vectors of behavioral influence, particularly through the newsfeed algorithms that determine how we consume information. This means that human behavior ultimately becomes an optimization problem, a problem that can be attacked with artificial intelligence: social media companies can repeatedly adjust their control knobs to reach certain goals, much as a game-playing AI keeps tweaking its strategy based on the score until it clears the current level. The only bottleneck in this process is the intelligence of the algorithm in the loop, and social networking companies are pouring billions of dollars into AI research to break through that bottleneck.

Now, please allow me to elaborate.

Social media as large-scale psychological profiling

Over the past two decades, our private and public lives have migrated online. We spend more and more time staring at screens. Human society as a whole is shifting to a state in which most of what we consume, modify, or create is digital information.

One side effect of this long-term trend is that companies and governments are collecting vast amounts of data about individuals, particularly through social networking services. Who we communicate with, what we say, what content we consume (pictures, movies, music, news), how we feel at any given time, and eventually nearly everything we perceive and do ends up recorded on some remote server.

In theory, this data allows the collectors to build extremely accurate psychological profiles of individuals and groups. Our opinions and behaviors can be cross-correlated with those of thousands of similar people, yielding conclusions that seem almost uncanny and are far more accurate than anything mere introspection could achieve (for example, algorithms based on Facebook "likes" can assess a user's personality more accurately than the user's own friends can). This data can predict when you will start a new relationship (and with whom) and when you will end your current one. It can identify fairly accurately which users are at risk of suicide, and which side you will vote for in an election even before you have made up your mind. Nor does this analytical power apply only at the individual level: large groups are even more predictable, because averaging removes randomness and individual idiosyncrasies.

Digital information consumption as a psychological control vector

Of course, passive data collection is by no means the end of the story. Social networking services increasingly control the information we consume. What we see in our newsfeeds has become a curated product. Opaque social media algorithms decide, to a growing degree, which political articles we read, which movie trailers we watch, who we keep in touch with, and whose feedback we receive on the opinions we express.

After years of such exposure, it is safe to say that the algorithms curating our information wield considerable power over our lives, over who we are and who we will become. If Facebook decides for years which news you will see (true or false), your politics will shift, which means that Facebook in effect controls your world view and your political convictions.

Facebook is in the business of influencing people. That is the main service it sells to its customers: advertisers, including political advertisers. Facebook has built a fine-tuned algorithmic engine that can influence not only how you feel about a brand or which product you buy next, but can even shape your mood, tweaking the content it delivers to make you angry or happy at will. In the end, it may even be able to sway elections.

Human behavior is an optimization problem

In short, social networking companies can track everything about us and control the information we consume, and the trend is accelerating. Problems that involve both perception and action are, in essence, artificial intelligence problems. We can therefore set up an optimization loop over human behavior, in which the system observes the target's current state and keeps adjusting the information it serves until it observes the desired opinions and behaviors. A large part of today's AI research, reinforcement learning in particular, is about developing algorithms that solve such optimization problems as efficiently as possible, closing the loop and achieving full control of the target; in other words, full control over us. As we move our lives into the digital realm, we become ever more susceptible to whatever rules govern it, especially rules written by artificial intelligence algorithms.

Reinforcement learning loops for human behavior
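To make this loop concrete, below is a minimal, purely illustrative sketch in Python. Everything in it is invented for the example (the toy user model, the content pool, the update rule); it is not a description of any real system, only the shape of the perception-action loop described above: observe the user's state, pick content predicted to push that state toward a target, observe the reaction, and update.

```python
import random

# Purely illustrative sketch of the feedback loop described above.
# Every name, number, and model here is invented; this is not any real system.

TARGET_OPINION = 1.0                        # where the controller wants the user's opinion to land
CONTENT_POOL = [-1.0, -0.5, 0.0, 0.5, 1.0]  # candidate items, each with a "stance" score

def simulated_user_reaction(opinion, item_stance):
    """Toy user model: opinions drift slightly toward the content they are shown."""
    return opinion + 0.1 * (item_stance - opinion) + random.gauss(0, 0.02)

def run_control_loop(steps=200):
    opinion = -0.8              # the user's (hidden) starting opinion
    estimated_opinion = 0.0     # the controller's running estimate of that state
    for _ in range(steps):
        # 1. Act: pick the item predicted to pull the estimated state toward the target.
        item = max(
            CONTENT_POOL,
            key=lambda s: -abs(TARGET_OPINION - (estimated_opinion + 0.1 * (s - estimated_opinion))),
        )
        # 2. Observe: likes, dwell time, replies, etc. reveal the user's new state.
        opinion = simulated_user_reaction(opinion, item)
        # 3. Update: fold the observation back into the controller's estimate.
        estimated_opinion = 0.9 * estimated_opinion + 0.1 * opinion
    return opinion

if __name__ == "__main__":
    print("final simulated opinion:", round(run_control_loop(), 2))
```

The only thing that limits how well such a loop works is the quality of the model in the middle, which is exactly the bottleneck the article says companies are spending billions to remove.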

Because the human mind is highly vulnerable to simple patterns of social manipulation, this work is easier than it might seem. Consider, for example, the following attack vectors (a small, purely illustrative sketch of the social-reinforcement vectors follows the list):

  • Identity reinforcement: an old trick that has been exploited since the earliest days of advertising, and it still works. It consists of associating a given viewpoint with identity markers you already hold: the algorithm controlling your social media feed ensures that you only see content (news stories and posts from friends) that pairs the promoted position with your identity markers, while filtering out content that does not.

  • Negative social reinforcement: if you write a post expressing an opinion the control algorithm does not want you to hold, the system can choose to show it only to users who hold the opposite view (acquaintances, strangers, or even bots) and who will criticize it harshly. Repeated often enough, this social backlash can drive you away from your original position.

  • Positive social reinforcement: if you write a post expressing an opinion the control algorithm wants you to hold, it can choose to show it only to people (or even bots) who will "like" it, reinforcing your belief and making you feel that the majority supports you.

  • Sampling bias: the algorithm may also show you more posts from friends (or more media coverage) that support a given opinion. Inside such an information bubble, you come to believe the view enjoys huge support, even though it may not.

  • Parametric personalization: the algorithm may observe that certain content is more likely to produce an opinion shift in people whose psychological profile resembles yours. It can then deliver content targeted at your particular views and life experience, making the reversal far more effective. In the long run, it might even generate such "personalized" content from scratch, fabricated specifically for you.
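As promised above, here is a purely hypothetical sketch of the two social-reinforcement vectors. Every name, number, and rule is invented for illustration and describes no real platform's logic, only the mechanism: choose a post's audience so that the social feedback its author receives pushes them toward a promoted stance.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical illustration of positive/negative social reinforcement.
# All names, numbers, and logic are invented for this sketch only.

@dataclass
class User:
    name: str
    stance: float   # -1.0 (opposes the promoted view) .. +1.0 (supports it)

def choose_audience(post_stance: float, promoted_stance: float,
                    candidates: List[User], size: int = 3) -> List[User]:
    """Pick who sees a post so that the social feedback its author receives
    nudges the author toward the promoted stance."""
    if post_stance * promoted_stance > 0:
        # Positive reinforcement: the post agrees with the promoted view,
        # so show it to the people most likely to applaud it.
        ranked = sorted(candidates, key=lambda u: -u.stance * promoted_stance)
    else:
        # Negative reinforcement: the post disagrees with the promoted view,
        # so show it to the people most likely to push back.
        ranked = sorted(candidates, key=lambda u: u.stance * post_stance)
    return ranked[:size]

# Example: a post critical of the promoted view is routed to its harshest critics.
users = [User("a", 0.9), User("b", -0.8), User("c", 0.1)]
print([u.name for u in choose_audience(post_stance=-0.7, promoted_stance=1.0,
                                       candidates=users, size=2)])
```

The point of the sketch is only that a few lines of audience selection are enough to tilt the social feedback a person receives, which is why the author treats these vectors as exploits rather than exotic AI.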

From an information security perspective, these would be called vulnerabilities: known exploits that can be used to take over a system. In the case of the human mind, these vulnerabilities can never be patched, because they are simply the way we think; they are in our DNA. The human mind is a static, vulnerable system that will come under increasing attack from ever-smarter AI algorithms, algorithms that have a complete view of everything we do and believe and complete control over the information we consume.

The status quo, reviewed

It is worth noting that using AI algorithms to steer our information consumption toward mass population manipulation, political control in particular, does not require especially advanced AI. A self-aware superintelligent system would be overkill for the task; existing technology already does it well enough. Social networking companies have been working on this for years, with remarkable results. Their current goal may only be to maximize "engagement" and influence your purchasing decisions rather than to manipulate your view of the world as a whole, but the tools they have built have already been used by hostile powers for political ends, including the 2016 Brexit referendum and the 2016 US presidential election. Yet if mass population manipulation is already possible, why hasn't the world been turned upside down?

Simply put, because we are still quite bad at AI. That could soon change.

As of 2015, targeting algorithms across the advertising industry still ran on plain logistic regression. To a large extent that is still true today; only the biggest and most capable companies have moved on to more advanced models. Logistic regression predates the computing era and is one of the most basic techniques for personalization, which is why so many of the ads we see on the web remain irrelevant to our needs. Likewise, the social media bots used by hostile states to sway public opinion barely involve AI at all; for now, they are very primitive.
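For readers who have not encountered it, here is a minimal sketch of logistic-regression-based click prediction, the kind of simple model referred to above. The features and data are made up purely for illustration and no real ad system is depicted; it only shows why the technique counts as basic: one linear model squashed through a sigmoid.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy click-prediction example; the features and labels are invented.
# Columns (hypothetical): [age_normalized, saw_similar_ad, hour_of_day_normalized]
X = np.array([
    [0.2, 1, 0.3],
    [0.8, 0, 0.9],
    [0.5, 1, 0.5],
    [0.3, 0, 0.1],
    [0.9, 1, 0.7],
    [0.1, 0, 0.8],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = clicked the ad, 0 = ignored it

model = LogisticRegression()
model.fit(X, y)

# Scoring a new impression: the model outputs a click probability, which a
# targeting system would use to decide whether (or to whom) to show the ad.
new_impression = np.array([[0.4, 1, 0.6]])
print("predicted click probability:", model.predict_proba(new_impression)[0, 1])
```

The contrast the article draws is between models of this simplicity and the deep learning and reinforcement learning systems now being deployed in their place.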

However, machine learning and artificial intelligence have advanced rapidly in recent years, and those advances are quickly being deployed in targeting algorithms and social media bots. Deep learning only began to find its way into newsfeeds and ad networks around 2016. Facebook has invested massively in AI research and development and clearly intends to lead the field. What does it gain from building natural language processing and reinforcement learning capabilities in the context of a social network?

Quite simply: Facebook, a company with nearly two billion users, can build detailed psychological models of that enormous population and run behavioral experiments on their newsfeeds, ultimately developing some of the most powerful AI systems ever seen. Personally, this prospect terrifies me. What is scarier still is that Facebook may not even be the biggest threat. Totalitarian states, for instance, are using information control to reach unprecedented levels of power over their populations. However almighty big corporations may seem as rulers of the modern world, they are nowhere near as powerful as governments. When it comes to algorithmic mind control, governments are the ones most likely to be behind it.

So how do we respond? How can we protect ourselves? As technology practitioners, what can we do to prevent large-scale manipulation of public opinion through social newsfeeds?

The other side of the coin: What can AI do for us

Most importantly, the existence of this threat does not mean that all algorithmic strategies are untrustworthy, or that all targeted advertising is brainwashing. In fact, they can also bring huge practical value.

With the rise of the Internet and artificial intelligence, applying algorithms to our information consumption is not just inevitable, it is desirable. As our daily lives become more digital and connected, and as the world grows denser with information, we need AI to serve as our interface to that world. In the long run, education and self-development may be among the most influential applications of artificial intelligence, and this help would operate dynamically, through almost exactly the same mechanisms a malicious AI would use to manipulate our minds, only this time in good faith. Information-management algorithms could help us realize more of our personal potential and, through better self-management, help build a better society.

So the problem is not AI itself; the problem is control.

We should not use newsfeed algorithms to manipulate users, for example to reverse their political allegiances or to waste their time. Instead, users themselves should set the goals the algorithm optimizes for. After all, we should have autonomy and control over the news we see, our world view, our friends, and even our personal lives. Information-management algorithms should not be a mysterious force imposed on us to serve someone else's interests; they should be tools we genuinely control, tools we can use to pursue goals we set for ourselves, such as education and personal growth rather than mere entertainment.

Here is what I think any algorithmic newsfeed should do (a configuration sketch follows the list):

  • Be transparent about what the algorithm is currently optimizing for and how those goals affect your information consumption.

  • Provide users with intuitive tools for setting these goals themselves; for example, users should be able to configure their own newsfeed to drive learning and personal growth in directions of their own choosing.

  • Measure, in an always-visible way, how much time you are spending on the feed.

  • Provide tools to control how much time you spend on information consumption — such as daily time goals that algorithms will use to remind you to leave the computer and return to real life.
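As a concrete, entirely hypothetical illustration of the four points above, the sketch below shows what a user-owned configuration for such a feed might look like. Every field and function name is invented and does not describe any existing product.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical user-owned newsfeed configuration reflecting the four points above.
# Every field and function name is invented for illustration.

@dataclass
class FeedConfig:
    # 1. Transparency: the goals the ranking algorithm optimizes for, visible to the user.
    optimization_goals: List[str] = field(default_factory=lambda: ["learning", "local news"])
    # 2. User-set goals: directions the user explicitly wants the feed to push them in.
    growth_topics: List[str] = field(default_factory=lambda: ["machine learning", "history"])
    # 3. Visible measurement: whether time spent is always displayed on screen.
    show_time_spent: bool = True
    # 4. Time control: a daily budget after which the feed nudges the user to stop.
    daily_time_budget_minutes: int = 30

def should_nudge_user(config: FeedConfig, minutes_spent_today: float) -> bool:
    """Return True when the feed should remind the user to step away."""
    return minutes_spent_today >= config.daily_time_budget_minutes

config = FeedConfig()
print(should_nudge_user(config, minutes_spent_today=45))   # True: time to log off
```

The design choice being illustrated is simply that the optimization target and the time budget live on the user's side of the interface, not the platform's.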

Use artificial intelligence to augment yourself while staying in control

We need to build AI that truly serves people rather than manipulating them for profit or political gain. Newsfeed algorithms should not operate like casino owners or propagandists; they should be more like a mentor or a good librarian who, drawing on a psychological understanding of you and of millions of people like you, recommends the books that will resonate with you and help you grow. They should be a guide for your life: an AI that knows the best path through experience space toward where you want to go and helps you get there. With such tools you could see yourself through the lens of a peer group, learn from a system that has accumulated the experience of millions, and do research alongside an assistant that holds the whole of current human knowledge.

In a real product, the user should have complete control over the AI that interacts with them. In this positive way, you will be able to achieve your stated goals more effectively.

Build an anti-Facebook model

In short, AI will be our interface to the digital world in the future. It can give individuals more flexible control over their lives, or it can concentrate that control in the hands of organizations. Unfortunately, social media is currently headed in the wrong direction, but we still have time to turn things around.

As members of the AI industry, we need to develop products and markets whose incentives align the algorithms with the interests of the users they influence, rather than using AI to manipulate users' judgment for profit or political gain. More specifically, we need to build the anti-Facebook.

In the future, such products may take the form of AI assistants. For now, search engines can be seen as an early AI-driven information interface that serves users without trying to manipulate their psychology. A search engine is a tool used to achieve a specific goal: it shows users only what they actually ask to see, and it works to minimize the time from question to answer and from problem to solution.

Some readers may wonder: since a search engine is itself an AI layer between us and our information, could it not adjust its results to manipulate us? Yes, that risk exists for every information-management algorithm. But unlike social networks, market incentives in this case are actually aligned with users' needs, pushing search engines to stay as relevant and objective as possible; if they fail, users quickly switch to a fairer competitor. Moreover, a search engine has far less room for psychological influence than a social newsfeed. The AI threats described in this article generally require the following conditions to appear in a product:

  • Perception and action: the product not only controls the information shown to the user (news and social updates), it also "senses" the user's current mental state through likes, chat messages, and status updates. Without both perception and action, no reinforcement learning loop can be established.

  • Centrality to our lives: the product is the primary information source for at least some of its users, and typical users spend hours a day on it. An ancillary, specialized feed, such as Amazon's product recommendations, does not pose a serious threat.

  • A social component: social features enable a much broader and more effective set of psychological levers (social reinforcement in particular). An impersonal, non-social newsfeed accounts for only a small fraction of the leverage over us.

  • Business incentives that reward manipulating users and maximizing the time they spend on the product.

Most AI-driven information-management products do not meet these conditions. Social networks, on the other hand, are a frightening combination of risk factors. As technologists, we should gravitate toward products that lack these characteristics and push back against products that lend themselves to such abuse: build search engines and digital assistants rather than social newsfeeds; make recommendation engines transparent, configurable, and constructive; and apply our strengths in user interfaces, user experience, and AI to building great configuration panels that let users run these algorithms on their own terms.

It is important that we communicate these concepts to our users to help them reject manipulative products. The resulting market pressures will push the technology industry in a truly healthy direction.

Bottom line: We stand at a fork in the road to the future
  • Not only can social media build powerful psychological models of individuals and groups, it also increasingly controls how we consume information, manipulating our beliefs, emotions, and behaviors through a set of highly effective psychological mechanisms.

  • A sufficiently advanced AI algorithm, given continuous access both to our mental states and to the information that shapes them, can effectively manipulate our beliefs and behaviors.

  • Using artificial intelligence as our information interface is not the problem per se. Well designed, such AI interfaces can bring great benefit and empowerment to people. The point is that users must retain full control over the algorithm's goals and use it as a tool for pursuing their own ends (much as we use search engines today).

  • As technologists, we have a responsibility to reject products that users cannot control and to focus on building information interfaces that answer to their users: AI offered not as a tool for manipulating users, but as a tool that users themselves wield.

There are two paths ahead: one leads to a future that frightens me, the other to a more humane one. Fortunately, we still have time to deliberate and choose. But remember that as a participant in the digital world, whether you care or not, your choices have consequences for all of us. So please take the problem seriously, and act responsibly.

Read the original article: https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704

Please follow the WeChat public account "AI Front" (ID: AI-front).