We are pleased to have Ping Yu, head of the Google TensorFlow.js team, as the guest of our "intelligence" track to explain the TensorFlow.js ecosystem and how to integrate existing machine learning models into the front end.

Ping Yu, Google Brain engineer and TensorFlow.js project lead, is dedicated to bringing machine learning platforms to web and JavaScript developers. He was formerly technical director of Google Attribution. Ping holds a Bachelor of Science from Tsinghua University and a Master of Science from the University of Maryland, College Park.

Thanks to falling labor, data, and computing costs, artificial intelligence (AI) has developed explosively over the past decade, with machine learning as its most popular branch [1]. Looking across the industry, Google is far ahead in the AI race, and machine learning is the focus of its work in the field of AI [2]. The company develops and maintains TensorFlow to help others solve important problems [3].

When it comes to TensorFlow, many people feel it is out of reach. I once heard a joke from friends who work on algorithms: if you can get TensorFlow up and running on your computer, you have already beaten 80% of algorithm engineers. It is a joke, but it shows how high the bar for TensorFlow feels, if even algorithm engineers struggle with it. Does that mean we front-end engineers can only look on from the sidelines?

The answer is no, no, and no!

Because there are a bunch of people at Google who have already solved this problem for us. They saw the potential of the front end for machine learning and developed TensorFlow.js, a machine learning platform dedicated to the front end.

To give you a better sense of the theme of his talk, we arranged this interview to hear his views on the front end and machine learning.

D2: Hi Ping Yu, it's an honor to have you speak at D2. Would you please introduce yourself and your current work to our friends in China?

Ping Yu: I'm in charge of the open-source TensorFlow.js project on Google's TensorFlow team. TensorFlow.js is a machine learning platform for front-end developers. It provides not only a complete set of APIs for building, training, and running inference with models, but also a library of pre-trained models for practical application scenarios. Our goal is to lower the barrier for front-end developers to use machine learning and to stimulate their creativity.
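As a concrete illustration of the model library Ping Yu mentions, here is a minimal sketch of in-browser inference with a pre-trained model. It assumes the published @tensorflow/tfjs and @tensorflow-models/mobilenet npm packages and an <img> element with id "photo" on the page; the details will vary with your setup.

```js
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function classifyPhoto() {
  // Load a pre-trained MobileNet image classifier from the model library.
  const model = await mobilenet.load();

  // Run inference directly on a DOM image element; no server round trip needed.
  const img = document.getElementById('photo');
  const predictions = await model.classify(img);

  // Each prediction has a className and a probability.
  predictions.forEach(p => console.log(`${p.className}: ${p.probability.toFixed(3)}`));
}

classifyPhoto();
```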

D2: We know that artificial intelligence often involves complex data, models, and computation, but many people think JavaScript is too weak for artificial intelligence, and some even think the front end should not get involved in AI at all. What do you think of this stereotype?

Ping Yu: There are two ways to look at this. First, as an interpreted language, JavaScript does lack raw computational power, but the speed of the language itself does not determine whether it is suitable for machine learning. Python, for example, is also an interpreted language, and speed is not its strong suit; it is in fact much slower than JavaScript on the V8 engine, yet that has not prevented it from becoming the most popular machine learning language today. Because it does not have to solve the speed problem itself, it can lean on lower-level languages for performance: TensorFlow's Python API binds to C++ libraries for CPU performance and to cuDNN for GPU acceleration. JavaScript can do all of this too. For example, TensorFlow.js uses WebGL fragment shaders for GPU acceleration and WebAssembly for CPU acceleration.
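To make the backend story concrete, here is a rough sketch of how an application can choose between those acceleration paths. It assumes the @tensorflow/tfjs and @tensorflow/tfjs-backend-wasm packages; the fallback order shown is just one reasonable choice.

```js
import * as tf from '@tensorflow/tfjs';
// Registers the WebAssembly backend so it can be selected below.
import '@tensorflow/tfjs-backend-wasm';

async function pickBackend() {
  // Prefer GPU acceleration via WebGL fragment shaders; fall back to WASM on the CPU.
  if (!(await tf.setBackend('webgl'))) {
    await tf.setBackend('wasm');
  }
  await tf.ready();
  console.log('Using backend:', tf.getBackend());

  // A tiny computation to confirm the chosen backend is running.
  tf.tensor([1, 2, 3]).square().print();
}

pickBackend();
```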

In addition, for a language exposing a machine learning API, ease of use is what really attracts users. Python owes much of its rise to a very popular numerical library, NumPy. JavaScript is in fact not inferior to Python in terms of ease of use. For more than a decade, JavaScript has run everywhere, from web pages to back ends to IoT devices, and this cross-platform nature is a good answer to the problem of model deployment.

When I talk to partners in industry and research, I find they share a common dilemma: a shortage of people who can actually turn AI research results into products. We believe this cannot be done without front-end engineers and a complete engineering solution. TensorFlow.js is just a start, and we hope more front-end developers will join us.

D2: TensorFlow.js can be used in many applications. What are the typical scenarios, and what can be done when there is not much data to work with?

Ping Yu: TensorFlow.js supports many environments where JavaScript can run, such as browsers on the front end, mobile mini programs, React Native, and Node.js in the middle tier and back end.

Different runtime environments suit different application scenarios. Mobile is mainly about real-time human-computer interaction models; whether for video or voice, these place strict requirements on model size and execution speed. Recently, L'Oréal used TensorFlow.js to launch a real-time virtual makeup try-on in a WeChat mini program: the model runs at 25 FPS and is only about 800 KB. For web apps in the browser, the main AI scenarios are image and text models. In the middle tier and back end, the scenarios are richer still: TensorFlow.js in Node.js is on par with TensorFlow Python, so server-side model inference can be integrated into an existing BFF architecture.
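To make the BFF point concrete, here is a minimal sketch of server-side inference with the @tensorflow/tfjs-node package. The Express app, the model path, and the request format are illustrative assumptions, not part of the interview.

```js
const tf = require('@tensorflow/tfjs-node'); // native CPU bindings, same API as in the browser
const express = require('express');

const app = express();
app.use(express.json());

// Hypothetical path to a model exported in the TensorFlow.js layers format.
const modelPromise = tf.loadLayersModel('file://./model/model.json');

// A BFF-style endpoint: the front end posts raw features, the server returns scores.
app.post('/predict', async (req, res) => {
  const model = await modelPromise;
  const input = tf.tensor2d([req.body.features]); // e.g. { "features": [0.1, 0.5, ...] }
  const scores = await model.predict(input).data();
  input.dispose();
  res.json({ scores: Array.from(scores) });
});

app.listen(3000, () => console.log('Inference service listening on :3000'));
```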

TensorFlow.js also supports model training, which means the front end can provide a customized model for each user through transfer learning.
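A common way to do this kind of per-user customization, sketched below under the assumption of the @tensorflow-models/mobilenet and @tensorflow-models/knn-classifier packages, is to reuse a pre-trained network as a frozen feature extractor and train only a lightweight classifier on the user's own examples, entirely on-device.

```js
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const classifier = knnClassifier.create();
let net;

async function setup() {
  // Pre-trained MobileNet acts as a frozen feature extractor.
  net = await mobilenet.load();
}

// Call this with a few of the user's own images per label (e.g. frames from a webcam).
function addUserExample(imgElement, label) {
  const embedding = net.infer(imgElement, true); // true => return the embedding, not class scores
  classifier.addExample(embedding, label);
}

// After a handful of examples, the customized model can classify new images on-device.
async function classify(imgElement) {
  const embedding = net.infer(imgElement, true);
  const result = await classifier.predictClass(embedding);
  console.log('Predicted label:', result.label, result.confidences);
}
```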

D2: Some people say that the biggest advances in AI come not from the algorithms themselves but from improvements in human-computer interaction. Do you agree? At present, at least in China, much of the AI that has actually shipped is "artificial artificial intelligence". Do teams outside China face the same dilemma? What do you think the solution is?

Ping Yu: This is a broad question, but I basically agree. My understanding of that statement is that improvements in human-computer interfaces create the scenarios and opportunities for AI technology. AI is built on data, and front-end technology helps researchers understand data faster and better. AI advances by the day and new models and architectures appear constantly, but if a model goes into production without a closed feedback loop with the front end, you end up with "artificial artificial intelligence". Google's federated learning and the TFX machine learning framework combine AI with the front end to continuously validate and tune models.

D2: Where do you think front-end intelligence is headed? And do you have any learning tips for front-end engineers who want to work in artificial intelligence?

Ping Yu: Although there are many successful front-end frameworks, a great deal of manual work still remains. I think the question for the future is how to integrate AI capabilities into these frameworks, so that the repetitive, low-level parts of a front-end engineer's job shrink and their creativity is further freed up.

For front-end engineers, it is important to understand common models and frameworks, their characteristics, and where they apply. Knowledge of AI engineering, such as model acceleration and compression, model encryption, and on-device versus server-side inference schemes, offers a practical path to putting models into production.

D2: What prior knowledge should attendees prepare for this session? What do you hope to share with developers in China?

Ping Yu: This session introduces the capabilities of the TensorFlow.js platform through examples and goes on to explore scenarios where AI can land on the front end. The target audience is front-end engineers; no specific prior knowledge is required.

Has this interview answered your questions about the front end and machine learning? For me, machine learning no longer feels like something up in the clouds. Ping Yu's clear and accessible explanations gave me a better understanding of the front end, machine learning, TensorFlow, and where to start learning AI. It is really great!

At D2, Ping Yu will also talk with us about how TensorFlow.js can be further improved and where it is headed. Something to look forward to!

If you have already bought a ticket, please bring your lovely self and arrive on time.

If you haven't bought your ticket yet, take note!

Thanks to the enthusiastic response, our early-bird tickets and early-bird group tickets have already sold out. You can still buy a regular ticket (559 per ticket), and if you buy 3 or more at once you immediately get 100 off, which brings you back to the early-bird price of 459 per ticket! Not many tickets are left, so act fast. We will be waiting for you at the D2 intelligence track!



Appendix 1: Introduction to the D2 intelligence track

How do machine learning and artificial intelligence apply to the front end? How will intelligence change the way the front end works? What has already been achieved by applying machine learning and AI on the front end? How can intelligence create technological value in engineering and business? This year's D2 front-end intelligence track will give you a real sense of how the front end is becoming intelligent, through industry case studies and hands-on experience. We have also invited well-known front-end intelligence teams, such as Google's TensorFlow.js team, to share the latest trends in front-end intelligence. Let's join hands at the frontier of technology as machine learning changes the industry today.

Appendix 2: Recommended articles

  • Initial understanding of front-end intelligence
  • A practice of front-end intelligence




References:

[1] Do you really understand artificial intelligence? On AI hitting a wall and the AI winter

[2] What is Google’s AI First strategy one year on? Here’s the answer

[3] Machine learning is the best way to explain AI