This article was originally published by AI Front.
From the deepfakes ban, a look at the rampant applications of TensorFlow


By Dave Gershgorn


Translator | Debra


Editor | Emily


In 2015, Google announced it would open-source TensorFlow, its in-house AI framework. It was a move that has changed the fundamentals of AI research and development around the world. In the words of Google’s CEO, “the technology that can have the same profound impact on human society as electricity will be open, easy to use and free.” Since then, the barrier to entry for building artificial intelligence algorithms has dropped from a PhD to a laptop.
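
To see how low that barrier has become, here is a minimal sketch (my illustration, not anything from Google or the article) of a complete TensorFlow program: a few lines that train a handwritten-digit classifier on an ordinary laptop.

```python
# A minimal TensorFlow/Keras sketch: train a digit classifier in a few lines.
import tensorflow as tf

# Load the MNIST handwritten-digit data set and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network is enough for a usable classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)   # trains in minutes on a CPU
model.evaluate(x_test, y_test)
```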

But it also means that TensorFlow’s formidable power is beyond Google’s control. Just over two years ago, academia and Silicon Valley were the software’s biggest contributors, but that is changing, and deepfakes acted as a catalyst. Deepfakes is a project built by an anonymous Reddit user with AI software that automatically stitches any face image into a video (almost) seamlessly. The project, first reported by Motherboard, can be used to graft anyone’s face, such as a Facebook photo of a famous actress or a friend, onto the face of a porn performer.

Since then, users have created a dedicated subreddit, which accumulated more than 91,000 subscribers. Another Reddit user, named DeepFakeApp, also published a tool called FakeApp, which allows anyone who downloads the AI software to make fake porn videos on their own, provided they have the hardware. As of Feb. 7, the Reddit community had been banned for violating the site’s policy on nonconsensual pornography.

According to FakeApp’s user guide, the software is built on top of the TensorFlow framework. Google employees have pioneered similar work with TensorFlow, with slightly different settings and subject matter, training algorithms to generate images from scratch. Some deepfakes are actually quite funny: someone used the tool to put Nicolas Cage into a bunch of different movies, for example. But let’s face it, most of the 91,000 people who subscribed to the deepfakes subreddit were there to watch porn.
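
FakeApp’s source code is not reproduced in this article, but face-swap tools of this kind are commonly described as a single shared encoder feeding two identity-specific decoders. The sketch below is a hedged illustration of that idea in TensorFlow’s Keras API; every layer size and name here is my assumption, not FakeApp’s actual implementation.

```python
# Hedged sketch of the commonly described face-swap architecture:
# one shared encoder, two identity-specific decoders. All shapes and
# hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = (64, 64, 3)  # assumed size of aligned face crops

def build_encoder():
    inp = layers.Input(shape=IMG)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(256, activation="relu")(x)  # shared face representation
    return Model(inp, z, name="shared_encoder")

def build_decoder(name):
    z = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_a")  # trained only on person A's faces
decoder_b = build_decoder("decoder_b")  # trained only on person B's faces

face = layers.Input(shape=IMG)
auto_a = Model(face, decoder_a(encoder(face)))  # reconstructs A from A
auto_b = Model(face, decoder_b(encoder(face)))  # reconstructs B from B
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")

# After both autoencoders are trained on their own identities, the "swap"
# is decoder_b(encoder(frame_of_a)): B's face with A's pose and lighting.
```

The trick is that the shared encoder learns pose, expression and lighting common to both identities, so either decoder can render those attributes with its own face.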

However, while open-sourcing TensorFlow offers many benefits, such as potential cancer-detection algorithms, the popularity of FakeApp represents the dark side of open source. Google (as well as Microsoft, Amazon and Facebook) has given the world enormous technological power with no strings attached, making AI software available for anyone to download and train on any data set, from faking political speeches (with the help of voice-mimicking AI tools) to generating fake pornography. All digital media is just a series of zeros and ones, and artificial intelligence can assemble things that never existed in ingenious ways.

Because the software can run locally on personal computers, big tech companies lose control of it once it leaves their servers. The creed of open source, or at least the state of modern software development, dictates that these companies cannot be held accountable for what others do with their software. In this respect, the software is like a gun or a cigarette.

And there seems little incentive to change that: free software is very good business for these companies precisely because it draws more people into developing AI. Every big tech company is competing for as much AI talent as possible, and the more people who rush into the space, the better. In addition, projects that others build on the code spur new products, people outside the company can find and fix bugs, and the software is used to teach undergraduate and doctoral students, creating a pipeline of newcomers who already know the tools used inside the company.

“People have talked about big breakthroughs in machine learning in the last five years, but the real big breakthroughs are not algorithms. Algorithms are not really that different from the ’70s, ’80s and ’90s, and the real breakthrough is open source,” said Mazin Gilbert, vice president of advanced technologies at AT&T and a former machine learning researcher. “Open source has lowered the barriers to entry, and algorithms are no longer the sole technical strength of IBM, Google and Facebook.”

Open source software also complicates the ethics of AI development. The tools Google offers today are not the key to creating Skynet or any other superintelligence, but they can still do significant harm. Companies such as Google and Microsoft, which offer open-source AI frameworks, have long argued for ethical artificial intelligence, and their staff scientists have signed on to and formed groups dedicated to the subject. But these companies provide no guidance or license terms to users who download their free software. The TensorFlow site has instructions for how the software works, but no disclaimers about the ethics of using it, and no instructions for making sure data sets are free of bias.
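
For contrast, here is a hedged sketch of the kind of data-set check such documentation could suggest: a simple audit that flags underrepresented classes before training. The function, threshold and sample labels are hypothetical illustrations, not anything TensorFlow ships.

```python
# Hypothetical pre-training audit: warn when a class is badly
# underrepresented relative to the most common one.
from collections import Counter

def audit_label_balance(labels, warn_ratio=5.0):
    """Print class counts, flagging classes more than warn_ratio
    times rarer than the most common class."""
    counts = Counter(labels)
    most_common = max(counts.values())
    for cls, n in sorted(counts.items()):
        flag = "  <-- underrepresented" if most_common / n > warn_ratio else ""
        print(f"{cls}: {n}{flag}")

# Toy example: "bird" would be flagged long before training begins.
audit_label_balance(["cat"] * 900 + ["dog"] * 90 + ["bird"] * 10)
```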

A few months ago, when I asked Harry Shum, Microsoft’s vice president of artificial intelligence, how the company planned to guide those who use open source software and paid developer tools to create ethical machine learning systems, he said he wasn’t sure.

“It’s really hard,” Shum said. “I don’t think we can come up with an easy solution right now. One of the things we’re learning is to try to find blind spots when we design machine learning algorithms.”

Google did not respond to similar questions.

Moving AI away from open source is not an ideal solution either. If tech companies walled their software off, it would be hard to know how they actually develop their AI algorithms. Publishing research for free on sites like arXiv and sharing original code on GitHub means that journalists, academics and ethicists can spot potential problems and demand accountability. Moreover, most people use AI toolkits for productive purposes, such as standard image recognition or classifying objects like cucumbers.
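
That kind of productive use usually looks like the hedged sketch below: off-the-shelf image recognition with a pretrained model from TensorFlow’s built-in model collection. The image file name is a placeholder.

```python
# Classify an image with a pretrained MobileNetV2 -- a typical benign use
# of an open-source AI toolkit. "cucumber.jpg" is a placeholder path.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

img = tf.keras.preprocessing.image.load_img("cucumber.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)

preds = model.predict(x)
# Prints the top-3 ImageNet labels with their confidence scores.
print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0])
```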

It’s not too far-fetched to imagine that other types of fake videos could soon spread via mainstream platforms like Facebook and Twitter and find a foothold there. AI researchers have been looking for a solution, but they won’t find one soon. After all, the software already exists.

Since the developers of the core technology behind projects like deepfakes refuse to take responsibility, the burden falls on video- and image-sharing platforms. Gfycat, for example, removed all deepfakes GIFs from its site. Reddit shut down the deepfakes community. PornHub, the porn video site, also said it would remove such videos because the people depicted had not consented to the use of their images. But Deepfakes.club still hasn’t banned them.

Whatever the future of Deepfakes, this is only the beginning.

Original link: qz.com/1199850/goo…

For more content, you can follow AI Front (ID: AI-front) and reply “AI”, “TF” or “big data” to get the AI Front series of PDF mini-books and skill maps.