What does this have to do with artificial intelligence? In computer science, games are often used as benchmarks for measuring the “intelligence” of algorithms. Robert Wilensky, the late professor at the University of California, Berkeley, and a leading figure in artificial intelligence, once offered an explanation for this. In an interview with the author of Obsessive Technology: Computers as a Cultural Phenomenon, he said that computer scientists “looked around to see who the smartest people were, and it was themselves. They were trained as mathematicians, and mathematicians only do two things: prove theorems and play chess. So they assumed that if a system could prove theorems or play chess, it must be intelligent.” It is no wonder, then, that people use games to test the “intelligence” of AI.
Yet the games people choose, such as chess (with which Google DeepMind has been testing its algorithms), tend to have clear boundaries, fixed goals and an unambiguous way to win or lose. These games lack the open-ended cooperation of Dungeons & Dragons. That got me thinking: do we need new ways of testing AI, games whose goal is not just to win but to tell stories? What would it mean if an AI could “beat” a game of Dungeons & Dragons the way a human can? Should there be an “elf ranger” test, not just a Turing test?
Of course, this is just a playful thought experiment, but it does expose the limitations of some models of intelligence. First, it shows that intelligent systems have to operate across a variety of environments. D&D players take on characters in the game and can switch between any of them (warrior, rogue, healer). AI researchers, meanwhile, know that algorithms trained in one domain are hard to transfer to other domains that differ only subtly, whereas humans have mastered this ability very well.
Second, Dungeons & Dragons reminds us that intelligence is embodied, bound up with the organism. In computer games, the physical experience ranges from pressing buttons on a controller to move an icon or game character (a ping-pong paddle, a spaceship, or an anthropomorphic, eternally hungry yellow sphere) to today’s immersive set-ups, including virtual-reality headsets and haptic gloves. Even without such equipment, games can elicit physiological responses associated with stress and fear. In the original Dungeons & Dragons, players sat around a table and entered the game together, experiencing its story and its effects as a group. Recent research in cognitive science suggests that first-hand bodily experience is crucial to how we grasp more abstract mental concepts. Yet we pay little attention to what AIs experience and how that affects the way they learn and process information.
Finally, intelligence is social. AI algorithms typically learn through many rounds of competition in which successful strategies are rewarded. Humans, too, seem to learn through repetition, reward and reinforcement. But human intelligence has an important cooperative dimension. In the 1930s, the psychologist Lev Vygotsky identified the kind of expert-novice cooperation now known as “scaffolded” learning: a teacher demonstrates and then helps the learner acquire a new skill. In open-ended games, this collaboration happens through narrative. Play can evolve from winning and losing, to fending off attacks by scary monsters, to more complex narratives about why the monsters attack, who the heroes are, and what they can do and why, narratives that are not always logical and can even contradict one another. An AI capable of social storytelling would surely be more robust and versatile than one that can only play chess, and mastering chess alone is not necessarily a step in the right direction.
In some ways, it is odd that role-playing games did not become a benchmark for intelligent technology. As Katie Hafner and Matthew Lyon point out in Where Wizards Stay Up Late: The Origins of the Internet (1996), Dungeons & Dragons was an important cultural touchstone for technologists in the 1980s and an inspiration for many of the early text-based computer games. Even today, Dungeons & Dragons comes up often among AI researchers who play it in their free time. So instead of just teaching AI to beat opponents in games, perhaps we would learn more about intelligence by teaching it to play, as humans do, as paladins and elven rangers.
Beth Singler is an associate research fellow at the Faraday Institute for Science and Religion and at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. She is the author of The Indigo Children: New Age Experimentation with Self and Science.
Original English article:
https://aeon.co/ideas/dungeons-and-dragons-not-chess-and-go-why-ai-needs-roleplay