This article is for those with Java roots
Author: DJL – Lanking
HelloGitHub's open-source project series is honored to have Lanking (github.com/lanking520), an engineer from Amazon and Apache, explain DJL, a deep learning platform built entirely in Java.
Introduction
For many years there has been no deep learning development platform tailored for Java. Java users had to do extensive project configuration and write boilerplate classes to build deep learning applications, and even then still faced troublesome issues such as dependency matching and maintenance. To address this pain point, Amazon open-sourced the Deep Java Library (DJL).
Project address: github.com/awslabs/djl…
Official website: djl.ai
DJL is a deep learning platform built entirely in Java. Its developers have also created convenient environments around it, allowing users to run deep learning applications in Java, even online, with minimal configuration.
To make deep learning easier for Java developers, we launched DJL Future Labs, which aims to create a minimalist Java runtime environment and Java's own deep learning toolkit. You can easily use these tools online or offline to build your deep learning applications. Our goal is to make deep learning more accessible to Java developers.
Here are some online tools and resources that can help you get started with DJL.
Online compilation: Block Runner
djl.ai/website/dem…
Block Runner is a deliberately simple design that compiles and runs your Java deep learning code online. As shown above, you execute the code simply by clicking Run. We offer a variety of deep learning engines to choose from, and you can complete simple deep learning operations and inference tasks directly in it. When you are done, click Get Template to get a copy of a Gradle project that runs directly on your local machine. As a simple example, here is image classification application code built with an Apache MXNet model, which you can copy directly into the online editor:
import ai.djl.inference.*;
import ai.djl.modality.*;
import ai.djl.modality.cv.*;
import ai.djl.modality.cv.transform.*;
import ai.djl.modality.cv.translator.*;
import ai.djl.repository.zoo.*;
import ai.djl.translate.*;
String modelUrl = "https://alpha-djl-demos.s3.amazonaws.com/model/djl-blockrunner/mxnet_resnet18.zip?model_name=resnet18_v1";
Criteria<Image, Classifications> criteria = Criteria.builder()
        .setTypes(Image.class, Classifications.class)
        .optModelUrls(modelUrl)
        .optTranslator(ImageClassificationTranslator.builder()
                .addTransform(new Resize(224, 224))
                .addTransform(new ToTensor())
                .optApplySoftmax(true).build())
        .build();
ZooModel<Image, Classifications> model = ModelZoo.loadModel(criteria);
Predictor<Image, Classifications> predictor = model.newPredictor();
String imageURL = "https://raw.githubusercontent.com/awslabs/djl/master/examples/src/test/resources/kitten.jpg";
Image image = ImageFactory.getInstance().fromUrl(imageURL);
predictor.predict(image);
After running, you should get the following result:
[
    class: "n02123045 tabby, tabby cat", probability: 0.41073
    class: "n02124075 Egyptian cat", probability: 0.29393
    class: "n02123159 tiger cat", probability: 0.19337
    class: "n02123394 Persian cat", probability: 0.04586
    class: "n02127052 lynx, catamount", probability: 0.00911
]
Finally, just click Get Template to get a project you can run locally. Isn't it easy? The build currently supports three back-end engines, Apache MXNet, PyTorch, and TensorFlow, with more to be added later.
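As a sketch of how choosing among those engines might look in code (assuming the `optEngine` option on DJL's `Criteria` builder; the engine name below is illustrative, and `modelUrl` refers to the model URL from the example above):

```java
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.repository.zoo.Criteria;

// Illustrative sketch: pin inference to one specific back end.
// "PyTorch" could equally be "MXNet" or "TensorFlow", matching the
// three engines the online editor currently offers.
Criteria<Image, Classifications> criteria = Criteria.builder()
        .setTypes(Image.class, Classifications.class)
        .optEngine("PyTorch")      // assumed engine-name string
        .optModelUrls(modelUrl)    // model URL from the example above
        .build();
```

If the requested engine is not on the classpath, loading the model fails, so the Get Template project should include the matching engine dependency.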
At the implementation level, we used the CodeMirror online editor for the front end and Spring Boot to host the back end. To learn more, see the implementation code.
Online terminal tool: JShell
djl.ai/website/dem…
This online JShell is a modified version of the standard JShell with DJL features built in. You can use DJL's classes together with existing Java functionality directly online. We preloaded the following imports into JShell:
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.types.Shape;
import ai.djl.ndarray.index.NDIndex;
NDManager manager = NDManager.newBaseManager();
The back end is a server architecture based on Spring Boot, and the front end uses xterm.js.
Currently, the command line supports the following operations:
- backspace: deletes input
- ← and →: move the cursor
- copy/paste code
- typing clear: clears the screen
With a few simple examples provided on the web page, you can easily use NDArray to do what you need.
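For instance, a minimal NDArray sketch along the lines of the preloaded imports above (assuming DJL's NDManager API; the printed values are what these operations should produce):

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;

// The manager owns the native memory behind every NDArray it creates.
NDManager manager = NDManager.newBaseManager();

// A 2x3 matrix holding 0..5.
NDArray a = manager.arange(6f).reshape(2, 3);

// Element-wise arithmetic, NumPy style: (a + 10) * 2.
NDArray b = a.add(10).mul(2);

System.out.println(a.sum().getFloat()); // 15.0
System.out.println(b.getShape());       // (2, 3)
```

In the online JShell you can type these lines interactively; each expression's result is echoed back just as in the standard JShell.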
To see how we built this JShell application, look at the implementation code.
The Java version of Jupyter Notebook
Address: github.com/awslabs/djl…
What? Jupyter Notebook? Aren't we talking about Python? No! This is 100% pure Java 11.
Inspired by Spencer Park's IJava project, we integrated DJL into Jupyter Notebook. No configuration is needed; just boot it up. We have prepared a series of Java deep learning training and inference Notebooks built with Jupyter Notebook. To learn more, click here.
The Java version of Notebook implements essentially all of the features of Jupyter in Python:
- Each code block runs independently
- Images can be displayed
- Charts can be displayed using Tablesaw
In contrast to Python, the Java Notebook can import Maven libraries directly, so users don't have to worry about project configuration. It also runs on GPUs, so you can easily use the Notebook for deep learning training tasks.
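In practice, a Notebook cell typically begins by pulling DJL in with IJava's `%maven` magic; a sketch (the version numbers below are illustrative, not necessarily the latest release):

```java
// IJava cell magic: resolve dependencies straight from Maven Central.
// Pick the current DJL release; these versions are only examples.
%maven ai.djl:api:0.6.0
%maven ai.djl.mxnet:mxnet-engine:0.6.0

import ai.djl.ndarray.NDManager;

NDManager manager = NDManager.newBaseManager();
```

After the `%maven` lines resolve, every subsequent cell can use the imported DJL classes without any build file.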
Here are a few notebooks to help you quickly understand DJL’s usage and new features:
- Object detection with ModelZoo
- Loading a PyTorch pretrained model
- Loading an Apache MXNet pretrained model
- A transfer learning example
- A question answering example
P.S.: We even have a Java-based deep learning book, currently in preview, so stay tuned.
About DJL and future lab plans
DJL is still a very young framework: it was released at the end of 2019 and reached support for all of the major deep learning frameworks (TensorFlow, PyTorch, MXNet) in March 2020. You can easily train and deploy your deep learning models with DJL. It also includes more than 70 pretrained models from GluonCV, HuggingFace, TorchHub, and Keras.
About Future Labs: many features are still under development, and we need more people to participate and try out our new features. Here are a few projects in the works:
- D2L-Java: a Java version of Dive into Deep Learning
- DJL NLP WordEmbedding: Provides more word embedding interfaces for DJL
For more information or questions, please click the link below:
- GitHub
- Slack
You are welcome to follow the HelloGitHub WeChat official account.