In addition to Facebook AI Research (FAIR), Facebook also has a division called Applied Machine Learning. Backchannel’s editor in chief Steven Levy recently took a look inside the division’s operations and explained how AI is becoming an integral part of the social network’s future strategy. It is helping to bring vision, speech, and language understanding to Facebook’s products and services, and even to the fight against disinformation.
The following is the main content of the article:
Joaquin Quiñonero Candela, head of applied machine learning engineering at Facebook
When he was asked to lead Facebook’s Applied Machine Learning (AML) division and tasked with harnessing AI to transform the world’s largest social network, Joaquin Quiñonero Candela hesitated.
It’s not that the Spanish-born scientist, who calls himself a “machine learning (ML) researcher,” hadn’t seen how AI could help Facebook. Since joining the company in 2012, he has led a transformation of its advertising operations, using ML to improve the relevance and effectiveness of sponsored advertising. More importantly, along the way he encouraged his department’s engineers to use AI even if they weren’t trained in it, so the ads organization as a whole developed richer machine learning skills. But he wasn’t sure the same magic would work on the broader Facebook platform, where billions of connections depend on fuzzy values rather than the hard data that can be used to measure ads. “I wanted to make sure I could create value on it,” he said of his promotion.
Whether it’s Facebook, Instagram or Messenger, it’s all powered by AI
Despite his doubts, Candela took the job. Now, nearly two years later, his hesitancy seems absurd.
How absurd? Last month, Candela spoke to an audience of engineers at a conference in New York City. “I’m going to make a strong statement,” he warned. “Facebook cannot survive today without AI. What you may not know is that every time you use Facebook, Instagram, or Messenger, your experience is AI-driven.”
Last November, I visited Candela and some of his team at Facebook’s Menlo Park headquarters to find out how AI had suddenly become Facebook’s lifeblood. So far, most of the attention on Facebook’s AI efforts has focused on its world-class Facebook AI Research unit (FAIR), led by Yann LeCun, a renowned neural network expert. Along with the AI divisions of Google, Microsoft, Baidu, Amazon and Apple, FAIR is one of the most sought-after destinations for graduates of elite AI programs, and one of the most groundbreaking organizations working on the deep neural networks that underlie recent advances in computer vision, hearing and even conversation. But Candela’s Applied Machine Learning group (AML) is responsible for integrating FAIR’s and others’ research into Facebook’s actual products, and, perhaps more importantly, for driving all of the company’s engineers to integrate machine learning into their work.
Since Facebook couldn’t survive without AI, it needed all of its in-house engineers to build the technology.
“Neural style transfer”
My visit came two days after the presidential election, and Facebook CEO Mark Zuckerberg had just called the idea that Facebook’s spread of fake news helped Donald Trump win “crazy.” The comment added to the anger directed at Facebook after the social network stirred controversy over fake news in its News Feed earlier in the year. While these controversies are largely outside Candela’s remit, he knows that ultimately Facebook’s response to the fake news crisis will depend on machine learning work his team will be involved in.
Then Candela wanted to show me something else: a demo of his team’s work. To my surprise, it was a relatively frivolous trick: repainting a photo or streaming video in the style of a well-known artist’s work. In fact, it’s reminiscent of the kind of digital baubles you see on Snapchat, and the idea of redrawing photos in Picasso’s cubist style has been done before.
“The technique behind this is called neural style transfer,” he explained. “A large neural network is trained to redraw a photograph in a particular style.” He pulled out his phone and opened a photo. With a click and a swipe, it became something resembling a derivative of Van Gogh’s The Starry Night. Even more impressive is the ability to apply a style to streaming video. What was really special, he added, was something I couldn’t see: the neural network Facebook had built to make this work on a mobile phone.
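Neural style transfer is well documented in the research literature. The sketch below is a minimal, generic illustration of the optimization-based approach (matching VGG content features and Gram-matrix style statistics), assuming PyTorch, torchvision, and two hypothetical image files; it is not the mobile-optimized network Candela describes.

```python
# Minimal neural style transfer sketch (Gatys-style optimization against VGG
# features). Assumes PyTorch, torchvision, and hypothetical content.jpg /
# style.jpg files; for illustration only.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def load(path, size=256):
    tf = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def features(x, layers={"0", "5", "10", "19", "28"}):
    # Collect activations at a few conv layers of VGG19.
    feats, out = {}, x
    for name, layer in vgg._modules.items():
        out = layer(out)
        if name in layers:
            feats[name] = out
    return feats

def gram(t):
    # Gram matrix summarizes style (feature correlations).
    b, c, h, w = t.shape
    f = t.view(c, h * w)
    return f @ f.t() / (c * h * w)

content = load("content.jpg")   # hypothetical input paths
style = load("style.jpg")
target = content.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.02)

c_feats = features(content)
s_grams = {k: gram(v) for k, v in features(style).items()}

for step in range(300):
    t_feats = features(target)
    content_loss = F.mse_loss(t_feats["19"], c_feats["19"])
    style_loss = sum(F.mse_loss(gram(t_feats[k]), s_grams[k]) for k in s_grams)
    loss = content_loss + 1e6 * style_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The production version Candela describes runs a small feed-forward network directly on the phone rather than an iterative optimization like this one.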
The technology isn’t new, either; Apple has previously boasted of doing some kind of neural computation on the iPhone. But the task is much harder for Facebook because it doesn’t control the hardware. Candela said his team was able to pull it off because its work is cumulative: each completed project makes the next one simpler to build, and each leaves behind tools that let future engineers create similar products with less training, which is why something like this can be built so quickly. “It took us eight crazy weeks from starting this to getting it out in the open,” he said.
Another secret to pulling off tasks like this, Candela says, is collaboration, a mainstay of Facebook’s culture. For this project, it was easy to reach out to other parts of Facebook, especially the mobile team, who were familiar with iPhone hardware, so the work could skip rendering images in Facebook’s data centers and run directly on the phone. The benefits go far beyond turning your friends and family into “The Scream”-style images; this is a step toward making Facebook as a whole stronger. In the short term, it can speed up responses in language interpretation and text comprehension. In the long run, it helps analyze what you see and say in real time. “We’re talking about the blink of an eye; it has to be real time,” he said. “We’re a social network. If I’m going to predict what people are going to say about something, my system needs to respond instantly, right?”
“Running complex neural networks on phones means you’re putting AI into everyone’s hands,” he said. “This is not something that happens by accident. It’s part of our broader push for AI within the company.”
“It’s a long slog,” he added.
Candela’s origins
Candela was born in Spain. He moved with his family to Morocco when he was three and attended French schools there. Although he excelled in both the sciences and the humanities, when he decided to go to university in Madrid he chose what he thought was the most difficult subject: telecommunications engineering. Telecom engineering requires knowledge not only of physics, such as antennas and amplifiers, but also of data analysis, which he says was “cool.” He admired a professor who worked on adaptive systems. Candela built a system that used smart filters to improve the signal from roaming phones; he now calls it a “baby version of a neural network.” He was obsessed with training algorithms, not content just to write code. During a semester in Denmark in 2000, he met Carl Rasmussen, a machine learning professor who had studied in Toronto with the machine learning legend Geoff Hinton, and his passion for training algorithms grew. As he was graduating, he was about to enter a leadership program at Procter & Gamble when Rasmussen offered him a PhD position, and he chose machine learning.
In 2007, he joined Microsoft Research’s lab in Cambridge, England. Shortly after he arrived, he learned that the company was holding a contest: Microsoft was getting ready to launch Bing but needed to improve an important part of search advertising, predicting exactly when a user would click on an ad. The company decided to hold an internal competition around the problem. The winning team’s solution would be tested to see whether it was worth launching, and its members would be rewarded with a free trip to Hawaii. Nineteen teams competed, and Candela’s team tied for first. He got the free trip, but felt cheated when Microsoft delayed the bigger prize: the test run that would determine whether his solution could be launched.
Candela’s determination was evident in what happened next. He went into frantic pursuit mode to get the company to give him a chance. He gave more than 50 internal talks and built simulators to prove the superiority of his algorithm. He also lobbied the vice president who had decision-making power everywhere he went, including in the bathroom, to talk up his system.
Candela’s algorithm was rolled out with Bing in 2009.
In early 2012, Candela visited a friend at Facebook and spent a Friday at the company’s Menlo Park campus. He was surprised to find that at Facebook, people didn’t have to beg for permission to test their ideas; they could just do it. He interviewed at Facebook the following Monday and had an offer by the end of the week.
The original models were not advanced
After joining Facebook’s ads team, Candela was tasked with leading a group working to show users more relevant ads. While the system already used machine learning, Candela said, “the models we used were not very advanced; they were simple.”
Hussein Mehanna, another engineer who joined Facebook at around the same time as Candela, was equally surprised by how little AI had been integrated into the company’s systems. “When I saw the quality of the product before I joined Facebook, I assumed it was all already there, but obviously it wasn’t,” Mehanna says. “A few weeks into the job, I told Candela that what Facebook really lacked was a proper, world-class machine learning platform. We had the machines, but we didn’t have the right software to help them learn as much as possible from the data.” (Mehanna is now head of core machine learning at Facebook. He worked at Microsoft for many years, as did several other engineers interviewed for this article. Coincidence?)
By “machine learning platform,” Mehanna means the infrastructure for the paradigm that brought AI out of last century’s winter and into its recent flowering: models loosely based on the way the brain works. Take advertising. Facebook needs its system to do something no one else can: predict, instantly and accurately, how many people will click on a given ad. So Candela and his team set out to build a new system based on machine learning. Because the team wanted the system to be a platform accessible to every engineer in the division, they built it so that modeling and training could be generalized and replicated.
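At its core, the prediction task described here, estimating the probability that a user will click a given ad, is a binary classification problem. A minimal sketch, assuming scikit-learn and entirely synthetic features and labels rather than anything resembling Facebook’s production system:

```python
# Toy ad click-through prediction with logistic regression.
# Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: the user's historical click rate, an ad quality
# score, and whether the ad topic matches the user's interests.
user_ctr = rng.uniform(0, 0.2, n)
ad_quality = rng.uniform(0, 1, n)
topic_match = rng.integers(0, 2, n)

# Synthetic click labels drawn from a probability that depends on the features.
p_click = 1 / (1 + np.exp(-(-3 + 8 * user_ctr + 2 * ad_quality + 1.5 * topic_match)))
clicks = rng.binomial(1, p_click)

X = np.column_stack([user_ctr, ad_quality, topic_match])
X_train, X_test, y_train, y_test = train_test_split(X, clicks, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("predicted click probability:", model.predict_proba([[0.15, 0.9, 1]])[0, 1])
```

The point of a platform, as Mehanna describes it, is that this train/evaluate/deploy loop is standardized so any team can repeat it on its own data.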
A big factor in building machine learning systems is getting good data, and the more the better. Fortunately, this is one of Facebook’s greatest assets: when more than a billion people use your product every day, you can gather tons of data for your training sets and get countless examples of user behavior once you start testing. Because of this, the ads team was able to go from shipping a new model every few weeks to shipping several models a week. To make the system a platform that other people within the company could use to develop their own products, Candela had to work with several teams. It was a delicate three-step process. “You focus on performance, then utility, then community,” he said.
Candela’s ads team proved how transformative machine learning could be at Facebook. “We became extremely successful at predicting clicks, likes, conversions and so on,” he said. It made sense to extend that approach to other services on the Facebook platform. In fact, FAIR’s leader LeCun had previously called for the creation of a division responsible for bringing AI into products and, more specifically, for promoting machine learning more broadly within the company. “I did call for it, because you need an organization of very good engineers focused on underlying technology that can be used by many product divisions, rather than being directly responsible for products,” LeCun said.
In October 2015, Candela became the director of the new AML team. (For a while he kept his position in the advertising department as a precaution, shuttling between the two roles.) He stays in close touch with FAIR, which has offices in New York City, Paris and Menlo Park, where its researchers literally work next to AML engineers.
The collaboration between AML and FAIR is illustrated by a product in development that provides spoken descriptions of photos shared by users. Over the past few years, it has become standard AI practice to train systems to recognize objects in a scene or to draw general conclusions, such as whether a photo was taken indoors or outdoors. But more recently, FAIR’s scientists found a way to train neural networks to identify almost every interesting object in an image and then determine, from its position relative to the other objects, what the photo is about; in effect, analyzing poses to tell whether the people in a particular photo are hugging or riding a horse. “We showed this technology to the AML group,” LeCun said. “They thought about it and said, ‘That would be useful in this context.’” They ended up prototyping a feature that could help blind or visually impaired users by letting them point at an image and have their phone describe its contents to them out loud.
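The general pattern behind such a feature, detecting the objects in an image and turning the detections into a short description that can be read aloud, can be sketched with an off-the-shelf detector. This toy illustration assumes torchvision’s pretrained Faster R-CNN and a hypothetical photo.jpg; it is not Facebook’s captioning system.

```python
# Generate a rough "image may contain ..." description from object detections.
import torch
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from PIL import Image

# A few COCO category names used by the pretrained detector (abbreviated here).
COCO_NAMES = {1: "person", 2: "bicycle", 3: "car", 17: "cat", 18: "dog", 19: "horse"}

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def describe(path, threshold=0.8):
    img = transforms.ToTensor()(Image.open(path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]          # dict with "boxes", "labels", "scores"
    labels = [
        COCO_NAMES.get(int(l), "object")
        for l, s in zip(pred["labels"], pred["scores"])
        if float(s) >= threshold
    ]
    if not labels:
        return "Image may contain: no recognizable objects."
    return "Image may contain: " + ", ".join(sorted(set(labels))) + "."

print(describe("photo.jpg"))  # hypothetical input path
```

A production accessibility feature would then feed a sentence like this to a text-to-speech engine on the device.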
“We communicate all the time,” Candela said of the sister team. “Basically, to move from science and technology to actual projects, you need glue, right? We are the glue.”
Four application areas of AI
Candela divides AML’s applications of AI into four areas: vision, language, speech, and camera effects. All of them, he says, will feed a “content understanding engine.” By learning to understand the meaning of content on the platform, Facebook wants to detect subtle intent in comments, extract nuance from spoken language, identify your friends’ faces as they flash by in videos, and read your expressions so they can be mapped onto your avatar in a virtual reality scene.
“We are working on the generalization of AI,” Candela said. “As content explodes, we need to improve our understanding and analysis.” The solution is to build generalized systems in which the success of one project accumulates and benefits other teams working on related projects. “If I could build algorithms that transfer knowledge from one task to another, that would be fantastic, right?” says Candela.
That kind of knowledge transfer can dramatically speed up the launch of Facebook products. Take Instagram. Since its launch, the photo-sharing service had displayed users’ photos in reverse chronological order. But in early 2016, it decided to use algorithms to rank photos by relevance. The good news, Candela notes, is that because AML had already implemented machine learning in products like News Feed, “Instagram didn’t have to start from scratch. They had one or two engineers proficient in machine learning get in touch with one of the teams already running a content-ranking application. Then they could just copy the workflow and ask questions as they came up.” That’s why Instagram was able to implement this major change in just a few months.
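The switch from reverse-chronological ordering to relevance ranking boils down to scoring each candidate post with a learned engagement model and sorting by that score. A minimal sketch, assuming scikit-learn and invented features, training data, and post attributes:

```python
# Toy relevance ranking: sort posts by predicted engagement probability
# instead of recency. Not Instagram's or Facebook's ranking system.
from dataclasses import dataclass
from sklearn.linear_model import LogisticRegression

@dataclass
class Post:
    post_id: str
    author_affinity: float   # how often the viewer interacts with the author
    is_photo: int            # 1 if photo, 0 if video (toy feature)
    age_hours: float         # how old the post is

def feature_vector(p: Post):
    return [p.author_affinity, p.is_photo, p.age_hours]

# Toy historical data: feature vectors plus whether the viewer engaged.
history = [([0.9, 1, 2.0], 1), ([0.1, 0, 30.0], 0), ([0.7, 1, 5.0], 1),
           ([0.2, 1, 48.0], 0), ([0.8, 0, 1.0], 1), ([0.05, 0, 20.0], 0)]
X = [features for features, _ in history]
y = [label for _, label in history]
model = LogisticRegression().fit(X, y)

def rank_feed(posts):
    """Order candidate posts by predicted probability of engagement."""
    scored = [(model.predict_proba([feature_vector(p)])[0, 1], p) for p in posts]
    return [p for _, p in sorted(scored, key=lambda t: t[0], reverse=True)]

feed = rank_feed([Post("a", 0.85, 1, 3.0), Post("b", 0.15, 0, 26.0), Post("c", 0.6, 1, 12.0)])
print([p.post_id for p in feed])
```

What Candela describes Instagram copying is exactly this workflow: the feature pipeline, training loop and serving path, with only the features and labels changed.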
The AML team is always looking for use cases where its neural network technology can be combined with technology from multiple other teams to build unique features for the whole Facebook platform. “We use machine learning to build our core capabilities and delight our users,” said Tommer Leyvand, lead engineer of AML’s perception team. (He also worked at Microsoft.)
One example is a recent feature called social recommendations. About a year ago, an AML engineer and a product manager on Facebook’s sharing team were discussing how often people ask their friends to recommend local food or services. “The question is, how do you surface that to the right audience?” said Rita Aquino, product manager for AML’s natural language team. The sharing team had been trying to match specific terms associated with recommendation requests. “Given that there are a billion posts a day, that’s not necessarily accurate, and it’s not necessarily scalable,” Aquino pointed out. By training neural networks and then testing the models on real user behavior, the team was able to detect very subtle linguistic differences and accurately determine whether users were asking where to eat in a particular area or where to buy shoes. That triggers the request to surface to the appropriate contacts in the News Feed. The next step, again driven by machine learning, is to recognize when someone replies with a plausible recommendation and show the location of the business or restaurant on a map in the user’s News Feed.
Aquino said that in the year and a half since she joined Facebook, AI has gone from being a rarity in the company’s products to something that is considered at the very start of development. “People want the products they use to be smarter,” she said. “Teams see products like social recommendations, they look at our code, and they ask, ‘How do we do this?’ You don’t have to be a machine learning expert to bring this technology to your team.” For natural language processing, the team built a system called DeepText that other teams can easily tap into. It helps drive the machine learning technology behind the translation features Facebook uses for more than 4 billion posts a day.
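Detecting whether a post is asking for a recommendation is, at heart, a text classification problem. Here is a minimal sketch with a tiny invented training set, assuming scikit-learn; this is not DeepText or Facebook’s production model.

```python
# Toy classifier for "recommendation request" posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "Any recommendations for a good sushi place near downtown?",
    "Where should I get my car serviced around here?",
    "Looking for a reliable plumber, any suggestions?",
    "Can anyone recommend a dentist in the area?",
    "Had a great weekend hiking with the family!",
    "Happy birthday to my best friend!",
    "Just finished watching the game, what a match.",
    "So proud of my daughter's graduation today.",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = asking for a recommendation

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_posts, labels)

new_post = "Does anyone know a good pizza spot in Brooklyn?"
print("recommendation request" if clf.predict([new_post])[0] == 1 else "other post")
```

The real system works at the scale of a billion posts a day and uses neural models rather than a bag-of-words pipeline, but the interface other teams consume is similar: text in, label out.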
For images and video, the AML team built a machine learning vision platform called Lumos. Manohar Paluri, who began building the massive machine learning vision platform when he was an intern at FAIR, calls it Facebook’s visual cortex: the means by which all the images and videos posted on Facebook are processed and understood. At a hackathon in 2014, Paluri and a colleague, Nikhil Johri, built a prototype in a day and a half and showed their results to an excited Zuckerberg and Sheryl Sandberg, Facebook’s chief operating officer. When Candela started AML, Paluri joined him to lead the computer vision team that built Lumos, which helps all of Facebook’s engineers, including those working on Instagram, Messenger, WhatsApp, Oculus and more, tap into the visual cortex.
“With Lumos, anyone in the company can use the capabilities of these different neural networks to build a model for their particular scenario and see whether it works,” notes Paluri, who works in both AML and FAIR. “Then they can have someone fix the system, retrain it and push it, without any involvement from the AML team.”
Paluri gave me a quick demonstration. He opened Lumos on his laptop, and we started on a sample task: improving the neural network’s ability to recognize helicopters. A page of images (5,000 of them, scrolling on and on) appeared on the screen, full of pictures of helicopters and of things that were not helicopters. For data sets like these, Facebook uses publicly shared images from its properties. I’m not an engineer, much less an AI expert, but clicking away the wrong examples to “train the helicopter image classifier” wasn’t hard even for me.
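The supervised-learning workflow the demo illustrates, curating labeled positive and negative examples and then training a classifier on them, can be sketched by fine-tuning a pretrained network. The example below assumes PyTorch, torchvision, and a hypothetical data/helicopter and data/other folder layout; it shows the general technique, not the Lumos platform itself.

```python
# Fine-tune a pretrained ResNet as a binary "helicopter vs. not" classifier.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# ImageFolder assigns one label per subfolder: data/helicopter, data/other.
dataset = datasets.ImageFolder("data", transform=tf)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
for p in model.parameters():                    # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new 2-class head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

Curating the folder contents (removing the images that are not actually helicopters) is exactly the click-to-clean-labels step described above.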
“Smart application development will be 100 times faster”
Eventually, this hand-labeling step, the hallmark of supervised learning, could be automated as the company pursues the holy grail of machine learning: “unsupervised learning,” in which neural networks figure out for themselves what is in those images. Paluri says the company is making progress. “Our goal is to reduce the number of manual labels by a factor of 100 over the next year,” he said.
In the long term, Facebook will merge its visual cortex and its natural language platform into what Candela calls a universal content understanding engine. “There is no doubt that we will combine them,” Paluri said.
Eventually, Facebook hopes the core principles behind its progress will spread beyond the company, for example through journal articles, allowing its generalized approach to spread machine learning more widely. “Instead of taking years, you can dramatically speed up the development of smart applications,” Mehanna points out. “Imagine the impact on healthcare, security, transportation. I think building apps in those areas is going to be a hundred times faster.”
AML is deeply involved in the epic project of giving Facebook’s products the ability to see, read and even speak, which Zuckerberg sees as crucial to his vision of Facebook as a force for social good. In Zuckerberg’s lengthy manifesto on building community, the CEO used the term “artificial intelligence” or “AI” no fewer than seven times, each time in the context of how machine learning and other techniques will keep communities safe and well informed.
Achieving those goals won’t be easy, for the same reasons Candela hesitated before taking the AML job. Machine learning alone can’t solve every problem when you are trying to be the primary source of information discovery and personal connection for billions of users. That’s why Facebook keeps tinkering with the algorithms that determine what users see in their News Feed: how do you train a system to deliver the optimal mix of content when you’re not sure what the optimal mix is? “I think it’s a near-unsolvable problem,” Candela says. “If we show random content, we’re wasting your time. If we only show content from one friend, it’s winner-takes-all. You can get stuck in this debate forever, and going to either extreme is obviously not optimal. We try to explore a little.” Facebook will keep trying to attack the problem with AI. “There’s a lot of research going on in machine learning and AI to optimize that kind of exploration,” says Candela. “It sounds promising.”
The fake news problem
When Facebook found itself blamed for the spread of fake news, it naturally called on its AI teams to quickly clean up the fakery on its service. LeCun points out that it was an unusual all-hands-on-deck effort, with FAIR’s team acting almost as “consultants.” FAIR had already built a tool that could help with the problem: a model called World2Vec. World2Vec adds a kind of memory to neural networks and helps Facebook tag every piece of content with information such as where it came from and who shared it. With that information, Facebook can understand the sharing patterns that characterize fake stories and use machine learning to weed them out. “It turns out that identifying fake news is not that different from finding the pages people want to see the most,” LeCun points out.
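World2Vec itself is an internal Facebook model and its design is not spelled out here, but the general idea the passage describes, combining what a post says with metadata about where it came from and how it spreads, can be illustrated with a simple classifier over concatenated content and provenance features. Everything in the sketch below, which assumes scikit-learn and SciPy, is invented for illustration and is not World2Vec.

```python
# Toy classifier mixing text features with provenance/sharing features.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "Shocking cure doctors don't want you to know about",
    "Celebrity secretly endorses miracle diet, sources say",
    "City council approves new budget for road repairs",
    "Local library extends weekend opening hours",
]
labels = [1, 1, 0, 0]  # 1 = flagged in (invented) manual review

# Invented provenance features: source reputation and normalized share velocity.
provenance = np.array([
    [0.10, 0.95],
    [0.20, 0.90],
    [0.90, 0.05],
    [0.80, 0.02],
])

text_vec = TfidfVectorizer()
X_text = text_vec.fit_transform(posts)
X = hstack([X_text, csr_matrix(provenance)])   # content + provenance features

clf = LogisticRegression().fit(X, labels)

new_post = ["You won't believe what this politician said, share before it's deleted"]
new_prov = np.array([[0.15, 0.80]])
X_new = hstack([text_vec.transform(new_post), csr_matrix(new_prov)])
print("flag probability:", clf.predict_proba(X_new)[0, 1])
```

The sharing-pattern signal (who posted it, how fast it spreads, from which sources) is what LeCun suggests makes the problem resemble ordinary content ranking.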
The platforms Candela’s team had already built allowed Facebook to roll out those content-screening products more quickly than it otherwise could have. How effective they actually are remains to be seen; Candela said it was too early to share data on how well the company is doing at using algorithms to reduce misinformation. But whether or not the new measures work, the dilemma itself raises the question of whether solving problems with algorithms, even ones improved by machine learning, inevitably produces unintended and even harmful consequences. Some will certainly argue that this is what happened in 2016.
Candela disputes this. “I think we’ve made the world a better place,” he said, and told a story. The day before our interview, Candela had called a Facebook contact he had met only once: his friend’s father. He had seen the man posting pro-Trump content and found it puzzling. Then Candela realized that his job was to make decisions based on data, and here he lacked important information. So he messaged the man and asked to talk. The man agreed, and they spoke on the phone. “It didn’t change reality for me, but it allowed me to see things in a very different way,” Candela said. “In a world without Facebook, I would never have crossed paths with him.”
In other words, while AI is important, even existential, for Facebook, it isn’t the only answer. “Our challenge is that AI is really in its infancy,” Candela said. “We’re just getting started.”