Not Clever Enough, from Aofei Temple | Produced by QbitAI, public account QbitAI
When you were 22, what were you doing?
Over the past two days, the legendary Jeff Dean, now head of Google AI, has once again been the object of adoration and discussion, all because his undergraduate thesis came to light for the first time.
The paper is only eight pages long.
It was a summa cum laude undergraduate thesis from 1990 and is still kept in the University of Minnesota library.
At the same time, the paper shows that a full 28 years ago, Jeff Dean was already writing parallel computing code for neural networks in C.
Jeff Dean’s fans are going wild again.
Old papers, new hot discussion
One user, probably quite young, wrote on Hacker News after reading the paper: “It’s amazing that Jeff Dean became interested in neural networks in 1990.”
That comment drew a wave of explainers and reminiscences, which boil down to this: neural networks were hot at the time.
“Neural networks were a big deal, very popular in the late 1980s,” says one. “It was a great time.”
In 1990, before the second AI winter, neural networks, Prolog, Lisp, and fuzzy logic were all popular; ANNs had not yet been overtaken by SVMs and the like, and Japan was still trying to catch up with the United States in AI.
Geoffrey Hinton, the father of neural networks who later became Jeff Dean’s colleague (intern ^_^), had also published a lot of important research, and backpropagation was on the rise.
The two parallel training methods mentioned in Jeff Dean’s paper are both based on back propagation.
@Silverlake wrote, “I’m almost Dean’s age, and my undergraduate project was improving neural networks with genetic algorithms. AI was all the rage then, but then winter came.”
The early 1990s were a very interesting time for neural networks and machine learning. There was object recognition, handwriting recognition, all that stuff, all moving at a fast pace, but the funding was quickly withdrawn and the projects all ran into trouble.
“Fortunately, with the advent of GPUs, deep backpropagation, and exploding data volumes, neural networks made a comeback,” said @dekhn. And for those who stuck it out through the second AI winter, it has clearly paid off.
Besides reminiscing about the past, many people came to a similar conclusion: never forget your original aspiration.
For example, the problem Jeff Dean studied in his undergraduate thesis is still regarded as one of the big issues in TensorFlow today.
“Really interesting and innovative early work, which I think also explains why TensorFlow does not support in-layer model parallelism. It’s amazing how much our early experiences influence us,” commented @Scottlegrand2.
That’s true. In fact, after graduation Jeff Dean did not continue studying AI. His interest shifted to writing compilers for high-level object-oriented languages, and he earned a PhD in that area.
“The feeling that neural networks were interesting, however, never really went away,” said Dean, who later took the lead in neural networks and artificial intelligence research at Google and co-founded and led the Google Brain project with Ng and Greg Corrado.
Jeff Dean is a computer scientist with deep expertise in compiler optimization, and TensorFlow is essentially an attempt to turn the problem of accelerating neural networks into a compiler-optimization problem.
Of course, many others noticed that this excellent senior thesis was only eight pages long.
“As always, Jeff Dean never disappoints. He solved a substantial problem at a young age, got good results, and described the solution clearly and succinctly,” wrote @Halflings. “My thesis is 60 pages, but it’s not worth one thousandth of that.”
It is commendable that an academic institution allowed such a concise thesis to be submitted. One reader recalled their graduate school experience: “Almost everyone who revised my paper tried to add a bunch of crap. If you try to describe something in one sentence, you get rejected. It was good that his advisor allowed him, even encouraged him, to communicate so efficiently and succinctly.”
So people began to wonder who his mentor was…
Of course there is an answer. Jeff Dean’s senior thesis advisor was Vipin Kumar.
Kumar continues to teach at the University of Minnesota, researching data mining, high performance computing, and their applications to climate, ecosystems, and healthcare. He also served as director of the U.S. Army High Performance Computing Research Center (AHPCRC) from 1998 to 2005.
Dean tweeted that he had actually lost the paper, so earlier this year he asked Vipin Kumar, his former advisor at the University of Minnesota, if he still had the paper.
They checked with the Honors Program and were told there were no paper copies left. Fortunately, the library had scanned a PDF, which brought the paper back to light.
What is the paper about?
How does this nearly 30-year-old paper train neural networks in parallel?
Jeff Dean discussed two methods of parallel training neural networks based on back propagation.
The first method is the pattern-partitioned approach, which replicates the whole neural network on each processor and divides the input patterns among the available processors.
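To make the idea concrete, here is a minimal sketch of what one pattern-partitioned training epoch could look like. This is not the thesis code: it uses MPI, which did not exist in 1990, purely as a stand-in for a message-passing library, and names such as train_epoch, forward_backward, and N_WEIGHTS are illustrative assumptions rather than anything taken from the paper.

#include <mpi.h>

#define N_WEIGHTS 451   /* e.g. a 10-21-10 network: 420 weights + 31 biases */

/* Hypothetical routine: runs one pattern forward and backward and
 * accumulates its gradient contribution into grad. */
extern void forward_backward(const double *w, const double *pattern,
                             const double *target, double *grad);

void train_epoch(double *w, const double *patterns, const double *targets,
                 int n_patterns, int pattern_len, int target_len, double lr)
{
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local_grad[N_WEIGHTS] = {0};
    double total_grad[N_WEIGHTS];

    /* Each process handles only the patterns whose index falls to it. */
    for (int p = rank; p < n_patterns; p += size)
        forward_backward(w, &patterns[p * pattern_len],
                         &targets[p * target_len], local_grad);

    /* Sum the partial gradients held by all processes. */
    MPI_Allreduce(local_grad, total_grad, N_WEIGHTS,
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* Every process applies the same update, so the replicas stay in sync. */
    for (int i = 0; i < N_WEIGHTS; i++)
        w[i] -= lr * total_grad[i] / n_patterns;
}

The key property is that every processor applies the same summed gradient, so the replicated copies of the network never drift apart.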
The second method, the network-partitioned (pipelined) approach, distributes the network’s neurons across the available processors, which form a communication ring; the features are then processed by the neurons on each processor as they move through the pipeline.
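Again purely as an illustration, the sketch below shows one way the ring communication of a network-partitioned layer could be organized: each process owns a slice of every layer’s neurons, and the previous layer’s activation slices circulate around the ring until every process has seen the full activation vector. The MPI calls stand in for whatever message passing the original machine provided, and ring_layer_forward and accumulate are hypothetical names, not from the paper.

#include <mpi.h>
#include <string.h>

/* Hypothetical helper: adds the contribution of one previous-layer slice
 * to this process's partial weighted sums for its own next-layer neurons. */
extern void accumulate(const double *w_block, const double *slice,
                       int slice_len, double *partial_sums, int my_neurons);

void ring_layer_forward(const double *w_blocks, /* one weight block per slice owner */
                        double *slice_in_hand,  /* starts as this process's own slice */
                        int slice_len, double *partial_sums, int my_neurons)
{
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank - 1 + size) % size;  /* neighbour we receive from */
    int right = (rank + 1) % size;         /* neighbour we send to      */

    double recv_slice[slice_len];          /* C99 variable-length array */

    for (int step = 0; step < size; step++) {
        /* Whose slice of the previous layer we are holding right now. */
        int owner = (rank - step + size) % size;

        accumulate(&w_blocks[owner * slice_len * my_neurons],
                   slice_in_hand, slice_len, partial_sums, my_neurons);

        /* Pass the slice to the right neighbour and receive the next
         * one from the left neighbour. */
        MPI_Sendrecv(slice_in_hand, slice_len, MPI_DOUBLE, right, 0,
                     recv_slice,    slice_len, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        memcpy(slice_in_hand, recv_slice, slice_len * sizeof(double));
    }
    /* After `size` steps every process has seen the full previous layer
     * and can apply the activation function to its partial sums. */
}

Each slice makes exactly one full trip around the ring, so after size steps every processor has accumulated contributions from the whole previous layer.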
He also built neural networks of different sizes and tested both methods with several different inputs.
The results show that the pattern-partitioned approach achieves better speedup when the network is large and there are many input patterns.
Here’s a comparison of the two methods:
At that time there was no public Python release, and there were no frameworks like TensorFlow or PyTorch; Jeff Dean’s parallel neural-network training and test code was written in C.
The neural networks themselves, and the configurations used to test them, also show their age. The paper lets us see what a “large” neural network looked like in 1990: three layers with 10, 21, and 10 neurons respectively. Jeff Dean tested with up to 32 processors.
At that time, Jeff Dean could not have imagined that 22 years later he would be working with Ng, Quoc Le, and others, using 16,000 CPU cores to find cats in massive amounts of data.
Paper portal:
https://drive.google.com/file/d/1I1fs4sczbCaACzA9XwxR3DiuXVtqmejL/view
Jeff Dean bio
Born in 1968, he is 49 years old.
In 1996, he received his PhD from the Department of Computer Science at the University of Washington (UW).
Member of the National Academy of Engineering, ACM Fellow, and AAAS Fellow.
He joined Google in 1999, when it was still a startup, and has since designed and deployed large parts of Google’s ads, crawling, indexing, and query-serving systems, as well as much of the distributed computing infrastructure underlying most Google products; he is also a developer of Google News, Google Translate, and other products.
Launched Google Brain.
Launched TensorFlow, the most widely used deep learning framework in the world.
Although his official title is that of a senior researcher, Dean’s standing at Google is said to be second only to that of founders Larry Page and Sergey Brin.
In April 2018, Google reorganized internally and Jeff Dean took over the entire Google AI business and team, reportedly reporting directly to Sundar Pichai, Google’s current CEO.
Jeff Dean’s LinkedIn profile
The “brother-in-law” of the jokes
Of course, if you don’t know much about this low-key god, whom Chinese netizens affectionately call “brother-in-law” because “Jeff” sounds like jiefu (brother-in-law in Chinese), it’s time for another round of Jeff Dean jokes.
We also relayed the jokes to “brother-in-law” Jeff himself, who replied with a smile: thanks for the love.
Check out these jokes below:
During his own Google interview, Jeff Dean was asked what the implications would be if P=NP were true. He said, “P = 0 or N = 1.” Then, before the interviewers had even finished laughing, Jeff glanced at Google’s public certificate and wrote the corresponding private key on the whiteboard.
Compilers don’t warn Jeff Dean. Jeff Dean warns compilers.
The rate at which Jeff Dean produces code jumped by a factor of 40 in late 2000, when he upgraded his keyboard to USB 2.0.
Jeff Dean builds his code before committing it, but only to check for compiler and linker bugs.
GCC’s -O4 optimization option sends your code to Jeff Dean for a rewrite.
Jeff Dean was born on December 31, 1969 at 11:48 PM. It took him twelve minutes to implement his first timer. (Background: timer values in computers usually count the seconds elapsed since 00:00:00 on January 1, 1970.)
When Jeff Dean designs software, he codes the binary directly and then writes the source code as documentation.
Jeff Dean’s keyboard has two keys: 1 and 0.
When Jeff Dean has trouble sleeping, he MapReduces sheep. (MapReduce, one of Jeff’s creations, is a framework for distributed processing and one of the foundations of Google.)
If you name three pointers Einstein, Euler, and Turing, when you dereference them all you get is Jeff Dean.
For more Jeff Dean jokes, head over to the Quora collection “What are all the Jeff Dean facts”: https://www.quora.com/What-are-all-the-Jeff-Dean-facts
- End -