As robots become more intelligent, rules need to be developed to govern them

In the science-fiction classic 2001: A Space Odyssey, HAL, the ship's computer, faces a dilemma. His instructions require him both to complete the ship's mission (investigating an artifact near Jupiter) and to keep the mission's true purpose secret from the crew. He resolves the contradiction by trying to kill the crew.

As robots become smarter, the idea of computer-controlled machines facing moral choices is moving from science-fiction films into the real world. To make sure these robots make better decisions than HAL did, we need to find ways to equip them with sound moral judgment.


A collection of fables about robots

Not surprisingly, military technology has been at the forefront of the development of autonomous robots (see our quarterly science roundup), and a variety of machines have emerged along the way. A robot called Sand Flea, for example, can leap through a window or onto a roof, filming all the while, and then rolls along on wheels until it needs to jump again. RiSE, a six-legged robot resembling a cockroach, can climb walls. LS3, a dog-like robot with a maximum load of 130kg, can follow a person at speed over rugged terrain. SUGV, a briefcase-sized robot, can identify and track a designated target in a crowd. And there is a range of surveillance drones, some as light as a wedding ring, others capable of carrying 2.7 tonnes of bombs.

Civilian robots are also on the rise, appearing everywhere from cockpits to operating theatres (see article). With the help of robots, civil airliners have long been able to land automatically, and driverless trains are becoming commonplace. Volvo's new V40 hatchback can largely drive itself in heavy traffic, and, like Ford's B-Max minivan, it can brake automatically when a collision is imminent. Fully autonomous cars are being tested around the world: Google's self-driving cars have logged more than 250,000 miles in America, and Nevada has become the first state to regulate such driving on public roads. Volvo recently tested a convoy of self-driving cars on a motorway near Barcelona.
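To make "brakes automatically when a collision is imminent" concrete, here is a minimal sketch of the time-to-collision logic such systems are typically built around. The function names and thresholds are invented for illustration; they are not Volvo's or Ford's actual parameters.

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:      # gap is steady or growing: no collision course
        return float("inf")
    return gap_m / closing_speed_mps

def brake_decision(gap_m: float, closing_speed_mps: float,
                   warn_ttc_s: float = 2.5, brake_ttc_s: float = 1.2) -> str:
    """One sensor-cycle decision; thresholds are illustrative, not real specs."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if ttc < brake_ttc_s:
        return "FULL_BRAKE"         # too late for the driver: brake automatically
    if ttc < warn_ttc_s:
        return "WARN_DRIVER"        # still time for the human to react
    return "NO_ACTION"

# 15m gap, closing at 13 m/s (about 47 km/h): TTC is ~1.15s, so the car brakes.
print(brake_decision(15.0, 13.0))   # FULL_BRAKE
```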

As robots become more intelligent and ubiquitous, intelligent machines will eventually have to make life-or-death decisions in unpredictable situations, and so robots will acquire, or at least appear to acquire, the capacity for moral choice. In today's weapon systems the human operator is still "in the loop": nothing happens without a person's approval. But as systems grow more complex, and gain weapons able to execute commands autonomously, the operator is likely to end up merely "on the loop", supervising actions that the machine initiates on its own.
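The difference between the two arrangements can be sketched in a few lines of Python. This is purely an illustration of the two control flows; all the names are invented, and no real system is this simple.

```python
import time

class Supervisor:
    """Stand-in for the human operator; both methods are illustrative stubs."""
    def approve(self, action: str) -> bool:
        return input(f"Approve '{action}'? [y/N] ").strip().lower() == "y"

    def wait_for_veto(self, action: str, window_s: float) -> bool:
        print(f"'{action}' will execute in {window_s:.0f}s unless vetoed...")
        time.sleep(window_s)        # a real console would poll an input channel
        return False                # no veto arrived in time

def run_in_the_loop(action: str, human: Supervisor) -> None:
    """'In the loop': the machine may not act without explicit human approval."""
    if human.approve(action):
        print(f"executing: {action}")

def run_on_the_loop(action: str, human: Supervisor) -> None:
    """'On the loop': the machine acts by default; the human can only interrupt."""
    if not human.wait_for_veto(action, window_s=5.0):
        print(f"executing: {action}")
```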

If so, robots will face ethical dilemmas. Should a drone fire on a house where a target is known to be hiding if civilians may also be inside? Should a driverless car swerve to avoid pedestrians if that means hitting another vehicle or endangering its own occupants? Should a robot involved in disaster relief tell people the truth about what is happening, if the truth risks causing a panic? Such questions have given rise to the field of "robot ethics", whose main aim is to give machines the ability to make appropriate choices in such situations: in other words, to tell right from wrong.
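One family of approaches in robot ethics frames such choices as minimising expected harm. The toy calculation below shows the idea for the swerving dilemma above. Every number is invented; in reality both the probabilities and the harm weights would be contested, and choosing the weights is itself the hard ethical question.

```python
# Toy expected-harm comparison for the swerving dilemma (all values invented).
options = {
    "brake_straight": {"hit_pedestrian": 0.60, "hit_vehicle": 0.00},
    "swerve_left":    {"hit_pedestrian": 0.05, "hit_vehicle": 0.50},
}
HARM_WEIGHT = {"hit_pedestrian": 10.0, "hit_vehicle": 3.0}   # relative severity

def expected_harm(outcome_probs: dict) -> float:
    return sum(p * HARM_WEIGHT[event] for event, p in outcome_probs.items())

for name, probs in options.items():
    print(f"{name}: expected harm = {expected_harm(probs):.2f}")
print("chosen:", min(options, key=lambda o: expected_harm(options[o])))
# brake_straight scores 6.00, swerve_left scores 2.00, so the car swerves.
```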

One solution to such thorny problems would be to avoid them altogether, by banning autonomous battlefield robots and requiring drivers to keep their full attention on the road at all times. Campaign groups such as the International Committee for Robot Arms Control have been formed to oppose the growing use of drones. But autonomous robots could do far more good than harm. Robot soldiers do not commit sexual assaults, burn down villages in a fit of rage or make erratic decisions under the stress of combat. Just as the autopilot has made aircraft safer, driverless cars are likely to be safer than ordinary cars. Sebastian Thrun, a pioneer in the field of driverless cars, thinks they could save 1m lives a year.

Instead of resisting the rise of autonomous robots, then, we need to find ways to develop an ethics of robotics, and quickly. With legislation lagging behind technology, American states are rushing to pass laws covering driverless cars, which have been operating in a legal grey area. It is clear that rules of the road are needed in this difficult territory, and not just for robots on wheels.


The best-known set of guiding principles for robot ethics is the "Three Laws of Robotics", formulated in 1942 by Isaac Asimov, a science-fiction writer. The laws require robots, in order of priority, to protect humans, to obey human orders and to preserve themselves. Unfortunately, the laws are of little use in the real world; the very existence of battlefield robots, for instance, violates the first law. In Asimov's fiction, robots try to follow his seemingly sensible laws, and all sorts of unexpected complications arise in the process; much of the brilliance of his stories lies in his portrayal of these complications. Regulating the development and application of autonomous robots will require more detailed guiding principles, and progress is needed on three fronts in particular.
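The priority ordering is the essential feature of Asimov's laws, and it is easy to express as code. The sketch below is a deliberately crude model (it ignores, for instance, the "through inaction" clause of the first law, and all the fields are invented), but it shows the strict ranking of the three laws and why a battlefield robot fails at the very first check.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # relevant to the first law
    ordered_by_human: bool   # relevant to the second law
    harms_robot: bool        # relevant to the third law

def verdict(a: Action) -> str:
    # Laws are checked in strict priority order: each one applies only if
    # no higher-ranked law has already settled the matter.
    if a.harms_human:
        return "forbidden: violates the first law"
    if a.ordered_by_human:
        return "required: the second law compels obedience"
    if a.harms_robot:
        return "forbidden: violates the third law"
    return "permitted"

for act in (Action("fire on a target", True, True, False),
            Action("walk into flames on orders", False, True, True),
            Action("needlessly self-destruct", False, False, True)):
    print(f"{act.name} -> {verdict(act)}")
```

Note how the second example is allowed even though it endangers the robot: the second law outranks the third, just as the first outranks both, which is exactly the structure Asimov's stories probe for loopholes.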