OpenAI, a non-profit research organization backed by tech mogul Elon Musk, recently unveiled its AI text generator GPT-2, which stirred controversy over its ability to generate fake news that reads as if written by real people. OpenAI decided not to release the fully trained model, publishing only a much smaller version along with sample outputs.
GPT-2 is trained on roughly 40 gigabytes of web-page text and is designed to predict the next word in a sequence. The team claims that GPT-2’s output adapts to the style and content of its context, and that the system can produce “realistic and coherent sentences” on almost any subject simply by being fed a paragraph of text.
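The core task described above, predicting the next word from the words that came before, can be illustrated with a toy sketch. The snippet below uses a simple bigram frequency model rather than a neural network, and the corpus and function names are purely illustrative, not taken from OpenAI’s code; GPT-2 itself learns far richer statistics with a large transformer.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word most
# often follows each word in a tiny corpus, then predict by frequency.
corpus = (
    "it was a bright cold day in april and the clocks were "
    "striking thirteen it was a cold night"
).split()

# Map each word to a Counter of the words observed right after it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("it"))      # "was" is the only word following "it"
print(predict_next("clocks"))  # "were"
```

A real language model replaces these raw counts with probabilities computed by a neural network over the entire preceding context, which is what lets GPT-2 sustain a style across whole paragraphs instead of one word at a time.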
In a demonstration video published by The Guardian, GPT-2 instantly produced a full paragraph when an editor typed in an incomplete sentence about Brexit. Likewise, when fed the opening line of George Orwell’s “1984,” “It was a bright cold day in April, and the clocks were striking thirteen,” it recognized the futuristic science-fiction tone of the words and continued the story in kind. The newspaper said that, unlike earlier AI systems, GPT-2 produced sentences with few semantic inconsistencies or grammatical errors.
While some have countered that GPT-2 merely stitches together fairly coherent text scraped from the web, many believe it is mature enough to demonstrate AI’s ability to mimic human writing, and fear the system could be used to create fake news indistinguishable to ordinary readers, calling it “deepfake for words.” Deepfakes are videos that use artificial intelligence (AI) to graft a celebrity’s mouth or face into pornography or other footage, whether as a prank or to fabricate evidence.
OpenAI later said it was withholding the trained model under the principle of responsible disclosure, out of concern that the technology could be put to malicious use, such as producing misleading text, impersonating someone’s online identity, spreading fake news on social networks, or automatically generating spam and phishing content. Instead, it will release a much smaller model, sample outputs, and a technical paper for outside researchers to experiment with.
But OpenAI has also been criticized for going against its own name by withholding the research from full disclosure.
Musk distanced himself from the matter, saying in a tweet that he has not been involved in OpenAI’s operations for more than a year and holds no executive or board oversight role.
Advances in AI have reached creative fields such as news writing, painting, translation and speech, sparking ethical and moral debates. Last year, Google’s I/O demonstration of Duplex, its lifelike voice system for making phone reservations, drew criticism for not revealing itself as a machine, forcing the company to announce that in future tests it would voluntarily identify itself as a robot.