A few days ago, an analysis of the deep learning frameworks used by ICLR paper authors over the years found that while TensorFlow still topped the list, PyTorch's usage had tripled in the past year, putting it on par with TF. This surprising finding has prompted many practitioners to take the trend seriously and start preparing to work with both frameworks. In the following article, Nonzhi introduces a handy PyTorch library from fast.ai; anyone interested in PyTorch should give it a try.

As the Internet has transformed how knowledge spreads, online courses are no longer new to most people. fast.ai, deeplearning.ai/Coursera, and Udacity are the three most popular MOOC providers for students in the deep learning space. Among them, thanks to Jeremy Howard's distinctive teaching style, fast.ai has a reputation for being especially "down-to-earth":

  • Research how to quickly and reliably apply state-of-the-art deep learning to practical problems.
  • Provide the fastai library, which is not only a toolkit that lets beginners quickly build deep learning implementations, but also a powerful and convenient source of best practices.
  • Keep the course content concise and accessible so that as many people as possible can benefit from the research results and software.

During the National Day holiday, fast.ai released fastai, a new free and open-source library for deep learning. Built on PyTorch and still in preview form, the library already provides a consistent API for the most important deep learning applications and data types, with significantly improved accuracy and speed compared to other deep learning libraries, while requiring significantly less code.

Interested developers can install it from fastai's GitHub repository: github.com/fastai/fast…

The fastai library

After 18 months of development, the fastai deep learning library has finally reached v1.0. From the start of the project, the developers have talked about PyTorch's advantages as a platform for solving a wider range of problems: the flexibility to build and train neural networks with regular Python code and a rich set of functions…

Now, thanks to the combined efforts of the fast.ai and PyTorch teams, we have a deep learning library that provides a single consistent interface for common deep learning applications such as computer vision, text, tabular data, time series, and collaborative filtering. This means that once you have learned to create a practical computer vision (CV) model with fastai, you can do the same for natural language processing (NLP) models, or any other model the software supports.

Feedback from early adopters

Semantic code search on GitHub

fast.ai courses are an important way for GitHub's data scientists and executives, including the CEO, to improve their data literacy. Hamel Husain, a senior machine learning scientist at GitHub, has been learning deep learning through fast.ai for the past two years. He believes these MOOCs have opened a new era of data at GitHub, giving data scientists more confidence in tackling the latest problems in machine learning.

As one of fastai's first users, Hamel Husain and his colleague Ho-Hsiang Wu recently released an experimental tool called semantic code search, which lets developers find code by meaning rather than by keyword matching: the best search results don't necessarily contain the words you searched for. In their official blog post, they explained why they abandoned TensorFlow Hub in favor of fastai, which they said gave easier access to state-of-the-art architectures (such as AWD-LSTMs) and techniques (such as cyclical learning rates with restarts).
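The core mechanic behind searching "by meaning" is ranking candidates by the similarity of their vector embeddings to the query's embedding, rather than by shared keywords. The sketch below is a toy illustration of that idea, not GitHub's actual system: the hand-written 3-d vectors stand in for embeddings that a trained encoder would produce, and `semantic_search` is a hypothetical helper name.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def semantic_search(query_vec, corpus):
    """Rank snippets by embedding similarity to the query, not by keywords."""
    return sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)

# Toy 3-d "embeddings" standing in for vectors from a trained encoder.
corpus = [
    ("def read_file(p): return open(p).read()", [0.9, 0.1, 0.0]),
    ("def add(a, b): return a + b",             [0.0, 0.2, 0.9]),
]
query_vec = [0.8, 0.2, 0.1]  # e.g. the encoded query "load text from disk"
best_snippet, _ = semantic_search(query_vec, corpus)[0]
```

Note that `read_file` wins even though it shares no tokens with the query; in a real system both queries and code are encoded by neural networks trained so that related pairs land close together.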

Semantic code search


Husain has been working with pre-release versions of the fastai library for the past 12 months. He said:

I chose fastai because it delivers state-of-the-art techniques and innovations through modular, high-level APIs, while reducing computational overhead and maintaining the same performance. Semantic code search is just the tip of the iceberg. People can use fastai to revolutionize everything from sales and marketing to fraud detection.

Generating music

Christine McLeavey Payne was a standout student in the most recent fast.ai deep learning course. Her career has been varied: from classical pianist with the San Francisco Symphony, to HPC specialist in finance, to neuroscience and medicine researcher at Stanford University. Now she has embarked on another journey at OpenAI, most recently using fastai to create Clara, an LSTM that generates piano and chamber music.

fastai is an amazing resource; even someone new to deep learning like me can get a fastai model working in just a few lines of code. I don't fully understand the principles behind these advanced techniques, but my model works, trains in less time, and performs better.

Her music generation model is based on a language model she built during the course. Using the fastai library's support for the latest NLP techniques, she completed the music generation project in just two weeks and achieved good initial results. This is a good example of the fastai library's usefulness: with a few modifications, a developer can turn a text classification model into a music generation model, saving a great deal of time and effort in practice.
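The reason a text model converts so readily into a music model is that a language model only sees sequences of tokens; whether a token is a word or a pitch is irrelevant to it. The toy bigram model below (far simpler than Clara's LSTM, and entirely my own illustration) makes the point: the identical training and generation code handles notes as soon as notes are encoded as tokens.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count next-token frequencies, exactly as a word-level language model would."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def generate(counts, start, length):
    """Greedily emit the most frequent continuation; here the 'text' is music."""
    out = [start]
    while len(out) < length:
        nxt = counts.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return out

# Notes are just tokens: the same pipeline that models words can model pitches.
melody = ["C4", "E4", "G4", "C4", "E4", "G4", "C4", "C5"]
model = train_bigram(melody)
generate(model, "C4", 5)  # -> ['C4', 'E4', 'G4', 'C4', 'E4']
```

A real music model swaps the bigram table for a recurrent network and greedy choice for sampling, but the token-in, token-out framing is the same.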

IBM Watson Senior Research Fellow on Clara, a music generator



Artistic creation

Architect and investor Miguel Perez Michaus has been conducting his "style reversion" experiments with pre-release versions of fastai. Style reversion restores an image that has undergone style transfer back to its original appearance, as shown below:

Style reversion


"I like to work with fastai because it does things that Keras can't, like generating 'non-standard' outputs," he says. As an early adopter, he has watched fastai iterate over the past 12 months:

I was lucky enough to play with the fastai beta, and even in alpha it showed its usefulness and flexibility, letting someone with domain knowledge but no formal computer science background experiment with it. fastai will only get better. My own rough sense of the future of deep learning is that we will need to understand in detail what really happens inside the black box, and there too I think fastai will be very popular.

Academic research

Polish has always been a challenge in NLP because it is a morphologically rich language: adjectives, for example, change form according to the number and gender of nouns. Entrepreneurs Piotr Czapla and Marcin Kardas, co-founders of the deep learning consultancy n-waves, developed a new Polish text classification algorithm using fastai, building on ideas from the Cutting Edge Deep Learning For Coders course, and won first prize in Poland's top NLP academic competition. A paper on this new research will be published soon.

According to Czapla, the Fastai library was critical to their success:

fastai is built for ordinary people who don't have hundreds of servers, which is one of the things I really like about it. It supports rapid development and prototyping and incorporates all the best deep learning practices. The fast.ai course was also my guiding light for learning deep learning; from the first day of class it got me thinking about what deep learning can do.

Example: Transfer learning in computer vision

There is a very popular competition on Kaggle called Dogs vs Cats, in which contestants must write an algorithm to classify whether an image contains a dog or a cat. The competition comes up often in fast.ai courses because it represents an important class of problems: transfer learning from a pretrained model.
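The idea behind this class of problems, reusing a pretrained network's features and training only a small new classifier head, can be sketched without any deep learning framework. Everything below is a toy stand-in of my own, not fastai code: the "backbone" is a frozen function instead of a real CNN, and the head is a one-feature logistic classifier.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# A stand-in "pretrained backbone": in real transfer learning this would be a
# CNN such as ResNet trained on ImageNet. Here it is simply a frozen function.
def frozen_features(image):
    return sum(image) / len(image)  # one scalar feature per "image"

def train_head(images, labels, lr=0.5, epochs=200):
    """Train only the small classification head; the backbone never changes."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for img, y in zip(images, labels):
            x = frozen_features(img)
            p = sigmoid(w * x + b)
            grad = p - y              # dLoss/dlogit for binary cross-entropy
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def predict(w, b, image):
    return 1 if sigmoid(w * frozen_features(image) + b) > 0.5 else 0

# Toy data: "cat" images have low mean pixel values, "dog" images high.
cats = [[0.1, 0.2], [0.2, 0.1]]
dogs = [[0.8, 0.9], [0.9, 0.8]]
w, b = train_head(cats + dogs, [0, 0, 1, 1])
```

Because only the tiny head is trained, very little data and compute are needed; that is exactly why pretrained-model fine-tuning dominates competitions like this one.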

We will use it as an example to compare Keras and fastai in terms of the amount of code required, accuracy, and speed. Here is all the code for two-stage fine-tuning with fastai; not only is there very little code to write, there are also very few parameters to set:

```python
data = data_from_imagefolder(Path('data/dogscats'),
                             ds_tfms=get_transforms(), tfms=imagenet_norm, size=224)
learn = ConvLearner(data, tvm.resnet34, metrics=accuracy)
learn.fit_one_cycle(6)
learn.unfreeze()
learn.fit_one_cycle(4, slice(1e-5, 3e-4))
```
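The `fit_one_cycle` calls follow the one-cycle policy: the learning rate warms up from a small value to a maximum, then anneals back down, which allows much larger peak rates and faster convergence. The function below is my own simplified sketch of such a schedule (fastai's exact implementation differs, e.g. it also cycles momentum):

```python
import math

def one_cycle_lr(step, total_steps, lr_max, pct_start=0.3, div=25.0):
    """Simplified one-cycle schedule: linear warm-up from lr_max/div to lr_max
    over the first pct_start of training, then cosine annealing back down."""
    warm = int(total_steps * pct_start)
    lr_min = lr_max / div
    if step < warm:
        return lr_min + (step / max(1, warm)) * (lr_max - lr_min)
    t = (step - warm) / max(1, total_steps - warm)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

# Learning rate over 100 steps: rises for the first 30 steps, then decays.
schedule = [one_cycle_lr(s, 100, 3e-4) for s in range(100)]
```

Plotting `schedule` gives the characteristic single peak that lends the policy its name.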

The following table compares the differences between the two deep learning libraries:

Keras is one of the most popular libraries for training neural networks. Although this comparison covers only one task, fastai's improvement shows that Keras is not perfect and still has plenty of room to grow. Keras and other deep learning libraries require far more code than fastai to accomplish the same task, which in turn leads to longer training times and not necessarily better model performance.

In addition, fastai performs strongly on NLP tasks. The following table, a screenshot from the ULMFiT paper, shows the relative error of the ULMFiT text classification algorithm against previous top-ranked algorithms on the IMDb dataset:

Summary of text classification performance


fastai is currently the only library that provides this algorithm. Because the algorithm is built in, you can reproduce the paper's results with code much like the Dogs vs Cats example above. Here is how to train the ULMFiT language model:

```python
data = data_from_textcsv(LM_PATH, Tokenizer(), data_func=lm_data)
learn = RNNLearner.language_model(data, drop_mult=0.3,
                                  pretrained_fnames=['lstm_wt103', 'itos_wt103'])
learn.fit_one_cycle(1, 1e-2, moms=(0.8, 0.7))
learn.unfreeze()
learn.fit_one_cycle(10, 1e-3, moms=(0.8, 0.7), pct_start=0.25)
```
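One of ULMFiT's key ideas is discriminative fine-tuning: after unfreezing, earlier layer groups (which hold general language knowledge) get smaller learning rates than later, task-specific ones, which is what passing a `slice(...)` of rates to fastai's fit methods expresses. The helper below is a hypothetical illustration of one way to spread rates geometrically; fastai's actual per-group assignment may differ.

```python
def discriminative_lrs(lr_lo, lr_hi, n_groups):
    """Spread learning rates geometrically across layer groups: early, more
    general layers take small steps, later layers take larger ones."""
    if n_groups == 1:
        return [lr_hi]
    ratio = (lr_hi / lr_lo) ** (1 / (n_groups - 1))
    return [lr_lo * ratio ** i for i in range(n_groups)]

# Three layer groups between 1e-5 and 3e-4, in the spirit of slice(1e-5, 3e-4).
lrs = discriminative_lrs(1e-5, 3e-4, 3)
```

Each layer group then runs its own optimizer step size, so the pretrained lower layers are nudged gently while the new classifier layers learn quickly.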


Source: www.fast.ai/2018/10/02/…

Compiler: Bot