JavaScript is a popular high-level programming language used by the vast majority of websites around the world and supported by all major browsers. With the rise of deep learning, more and more developers are exploring how to implement artificial intelligence and machine learning algorithms in JavaScript. Robin Wieruch, a developer from Germany, recently published a series of tutorials on machine learning in JavaScript. In this article, we introduce how to implement a neural network in JavaScript.





“I was going to write a big rebuttal about how this makes no sense without GPU support… but it uses WebGL to apply GPU power. And it’s probably 10,000 times easier than installing the TensorFlow stack on your local desktop,” one user wrote.


Recently, the author has published a series of articles on implementing artificial intelligence and machine learning algorithms in JavaScript, including:


  • Linear regression and gradient descent: www.robinwieruch.de/linear-regr…
  • The normal equation of linear regression: www.robinwieruch.de/multivariat…
  • Logistic regression and gradient descent: www.robinwieruch.de/logistic-re…


The implementation of these machine learning algorithms is based on the math.js library for linear algebra (e.g., matrix operations) and differentiation. You can find all of these algorithms on GitHub:


GitHub link: github.com/javascript-…


If you find any flaws, feel free to suggest improvements to these resources to help others. I hope to continue providing web developers with more and richer machine learning algorithms.


Personally, I found implementing these algorithms somewhat challenging, especially the forward and back propagation of a neural network in JavaScript. Since I was still learning about neural networks myself, I started looking for libraries suited to this kind of work; hopefully a basic implementation will be easy to find on GitHub in the near future. For now, given my JavaScript experience, I chose deeplearn.js, published by Google, to do the job. In this article, I'll share how to use deeplearn.js and JavaScript to implement a neural network that solves a real-world problem — on the web.


First of all, I strongly recommend that readers take the Machine Learning course by Andrew Ng, a well-known scholar in deep learning. This article will not explain the machine learning algorithms in detail, but will only show how to use them in JavaScript. The course series, on the other hand, explains the algorithms in surprisingly high quality and detail. Before writing this article, I took the course myself and tried to internalize what it taught through a JavaScript implementation.


What is the purpose of neural networks?


The neural network implemented in this article aims to improve web accessibility by choosing an appropriate font color for a given background color. For example, the font on a dark blue background should be white, while the font on a light yellow background should be black. You might be wondering: why use a neural network for this at all? It isn't hard to compute an accessible font color from a background color programmatically, is it? I quickly found a solution to this problem on Stack Overflow and adjusted it to my needs to work with colors in RGB space.



function getAccessibleColor(rgb) {
  let [ r, g, b ] = rgb;

  let color = [r / 255, g / 255, b / 255];

  let c = color.map((col) => {
    if (col <= 0.03928) {
      return col / 12.92;
    }
    return Math.pow((col + 0.055) / 1.055, 2.4);
  });

  let L = (0.2126 * c[0]) + (0.7152 * c[1]) + (0.0722 * c[2]);

  return (L > 0.179)
    ? [ 0, 0, 0 ]
    : [ 255, 255, 255 ];
}


When there is already a programmatic approach to a problem, a neural network adds little value for the real-world problem itself; there is no need for a machine-trained algorithm. However, precisely because this problem can be solved programmatically, it is easy to verify the performance of a neural network that might solve it for us. Check out the GitHub repository (https://github.com/javascript-machine-learning/color-accessibility-neural-network-deeplearnjs) to see the final result and what you will build in this tutorial. If you are familiar with machine learning, you may have noticed that this task is a classification problem: the algorithm should determine a binary output (font color: white or black) based on an input (background color). Over the course of training, the neural network learns to output the correct font color for a given background color.


The following sections guide you through setting up all parts of the neural network from scratch; putting the parts together into a file/folder setup is up to you. However, you can consult the previously referenced GitHub repository for the implementation details.


Data set generation in JavaScript


The training set in machine learning consists of input data points and output data points (labels). It is used to train an algorithm to predict the output for new input data points outside of the training set (e.g., a test set). During the training phase, the neural network adjusts its weights to predict the given labels of the input data points. In short, the trained algorithm is a function that takes a data point as input and approximates the output label.

After the neural network is trained, the algorithm can output a font color for new background colors that are not part of the training set. Therefore, you will later use a test set to verify the accuracy of the trained algorithm. Since we are dealing with colors, it is not difficult to generate a sample data set of input colors for the neural network.


function generateRandomRgbColors(m) {
 const rawInputs = [];
 for (let i = 0; i < m; i++) {
   rawInputs.push(generateRandomRgbColor());
 }
 return rawInputs;
}
function generateRandomRgbColor() {
 return [
   randomIntFromInterval(0, 255),
   randomIntFromInterval(0, 255),
   randomIntFromInterval(0, 255),
 ];
}
function randomIntFromInterval(min, max) {
 return Math.floor(Math.random() * (max - min + 1) + min);
}


The generateRandomRgbColors() function creates a partial data set of a given size m. The data points in the data set are colors in the RGB color space. Each color is represented as a row in a matrix, and each column is a feature of the color: the R, G, and B values in RGB space. The data set doesn't have any labels yet, so the training set is incomplete; it has only input values and no output values.


Because a programmatic way of generating an accessible font color for a given color is already known, you can use an adjusted version of that function to generate the labels for the training set (and later the test set). The labels are adjusted for binary classification and implicitly reflect the colors black and white in RGB space. Thus, for black the label is [0, 1]; for white the label is [1, 0].


function getAccessibleColor(rgb) {
  let [ r, g, b ] = rgb;

  let color = [r / 255, g / 255, b / 255];

  let c = color.map((col) => {
    if (col <= 0.03928) {
      return col / 12.92;
    }
    return Math.pow((col + 0.055) / 1.055, 2.4);
  });

  let L = (0.2126 * c[0]) + (0.7152 * c[1]) + (0.0722 * c[2]);

  return (L > 0.179)
    ? [ 0, 1 ] // black
    : [ 1, 0 ]; // white
}


Now you have everything you need to generate random data sets (training set, test set) of (background) colors that are classified for black or white (font) colors.


function generateColorSet(m) {
 const rawInputs = generateRandomRgbColors(m);
 const rawTargets = rawInputs.map(getAccessibleColor);
 return { rawInputs, rawTargets };
}


Another step that makes the underlying algorithms in neural networks better is feature scaling. In the simplified version of feature scaling, you want the RGB channel values to be between 0 and 1. Since you know the maximum, you can simply derive the normalized value for each color channel.


function normalizeColor(rgb) {
 return rgb.map(v => v / 255);
}


You can put this function in your neural network model or keep it as a separate utility function. In the following, I'll put it in the neural network model.


The setup phase of the JavaScript neural network model


Now you can implement the neural network in JavaScript. To get started, you need to install the deeplearn.js library, a framework for neural networks in JavaScript. The official pitch reads: "deeplearn.js is an open source library that brings efficient machine learning building blocks to the web, allowing you to train neural networks in a browser or run pre-trained models in inference mode." In this article, you will train your model and then run it in inference mode. There are two major advantages to using this library:


  • First, it uses the GPU of your local machine to accelerate the vector computations of machine learning algorithms. These computations are similar to graphics computations, so it is computationally more efficient to use the GPU than the CPU.
  • Second, deeplearn.js is structured similarly to the popular TensorFlow library (also developed by Google, but written in Python). So if you want to make the jump to machine learning in Python later on, deeplearn.js is a great stepping stone from the JavaScript world.


Now back to your project. If you have set it up with npm, you simply need to install deeplearn.js on the command line. You can also check the official installation documentation of the deeplearn.js project.


npm install deeplearn


I haven't built many neural networks myself, so I follow the general practice of architecting them in an object-oriented style. In JavaScript, you can use a JavaScript ES6 class for this. A class is the perfect container for your neural network, since it defines the network's properties and class methods. For example, your color normalization function can find a spot in the class as a method.


class ColorAccessibilityModel {
 normalizeColor(rgb) {
   return rgb.map(v => v / 255);
 }
}
export default ColorAccessibilityModel;


Perhaps this is also a place for your data set generation functions. In my case, I only put the normalization into the class as a class method and kept the data set generation outside of the class. You could argue that there will be different ways of generating a data set in the future, so it shouldn't be defined in the neural network model anyway. Either way, that's only an implementation detail.

The training and inference phases are grouped under the umbrella term session in machine learning. You can set up the session in the neural network class. First, import the NDArrayMathGPU class from deeplearn.js, which helps you perform mathematical calculations on the GPU in a computationally efficient way.


import {
 NDArrayMathGPU,
} from 'deeplearn';
const math = new NDArrayMathGPU();
class ColorAccessibilityModel {
 ...
}
export default ColorAccessibilityModel;


Second, declare the class method that sets up the session. Its function signature takes a training set as an argument, which makes it the perfect consumer for a training set generated by the previously implemented function. Third, the session initializes an empty graph. Afterward, the graph will reflect the architecture of the neural network; it's up to you to define its properties.


import {
  Graph,
  NDArrayMathGPU,
} from 'deeplearn';

class ColorAccessibilityModel {
  setupSession(trainingSet) {
    const graph = new Graph();
  }

  ...
}

export default ColorAccessibilityModel;


Fourth, you define the shape of the input and output data points for the graph in the form of tensors. A tensor is an array of numbers with a variable number of dimensions; it can be a vector, a matrix, or a higher-dimensional matrix. The neural network takes these tensors as input and output. In our case, there are three input units (one per color channel) and two output units (the binary classification, i.e., black and white).


class ColorAccessibilityModel {
  inputTensor;
  targetTensor;

  setupSession(trainingSet) {
    const graph = new Graph();

    this.inputTensor = graph.placeholder('input RGB value', [3]);
    this.targetTensor = graph.placeholder('output classifier', [2]);
  }

  ...
}

export default ColorAccessibilityModel;


Fifth, the neural network contains its hidden layers. This is where the magic happens, though it largely remains a black box: the neural network comes up with its own cross-computed parameters, which are trained during the session. Still, it is up to you to define the dimensions of the hidden layer(s) (how many layers, and how many units per layer).


class ColorAccessibilityModel {
  inputTensor;
  targetTensor;

  setupSession(trainingSet) {
    const graph = new Graph();

    this.inputTensor = graph.placeholder('input RGB value', [3]);
    this.targetTensor = graph.placeholder('output classifier', [2]);

    let connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);
  }

  createConnectedLayer(
    graph,
    inputLayer,
    layerIndex,
    units,
  ) {
    ...
  }

  ...
}

export default ColorAccessibilityModel;


Depending on how many layers you want, you can dynamically scale the graph out with more layers. The class method that creates a connected layer takes the graph, the mutated connected layer, the index of the new layer, and the number of units. The layers property of the graph can be used to return a new tensor identified by a name.


class ColorAccessibilityModel {
  inputTensor;
  targetTensor;

  setupSession(trainingSet) {
    const graph = new Graph();

    this.inputTensor = graph.placeholder('input RGB value', [3]);
    this.targetTensor = graph.placeholder('output classifier', [2]);

    let connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);
  }

  createConnectedLayer(
    graph,
    inputLayer,
    layerIndex,
    units,
  ) {
    return graph.layers.dense(
      `fully_connected_${layerIndex}`,
      inputLayer,
      units
    );
  }

  ...
}

export default ColorAccessibilityModel;


Each neuron of a neural network has to have a defined activation function. It could be a logistic activation function, which you may already know from logistic regression; it becomes the logistic unit of the neural network. In our case, the neural network uses rectified linear units (ReLU) by default.


class ColorAccessibilityModel {
  inputTensor;
  targetTensor;

  setupSession(trainingSet) {
    const graph = new Graph();

    this.inputTensor = graph.placeholder('input RGB value', [3]);
    this.targetTensor = graph.placeholder('output classifier', [2]);

    let connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);
  }

  createConnectedLayer(
    graph,
    inputLayer,
    layerIndex,
    units,
    activationFunction
  ) {
    return graph.layers.dense(
      `fully_connected_${layerIndex}`,
      inputLayer,
      units,
      activationFunction ? activationFunction : (x) => graph.relu(x)
    );
  }

  ...
}

export default ColorAccessibilityModel;


Sixth, create the layer that outputs the binary classification. It has two output units, one for each discrete value (black, white).


class ColorAccessibilityModel {
  inputTensor;
  targetTensor;
  predictionTensor;

  setupSession(trainingSet) {
    const graph = new Graph();

    this.inputTensor = graph.placeholder('input RGB value', [3]);
    this.targetTensor = graph.placeholder('output classifier', [2]);

    let connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);

    this.predictionTensor = this.createConnectedLayer(graph, connectedLayer, 3, 2);
  }

  ...
}

export default ColorAccessibilityModel;


Seventh, declare a cost tensor that defines the loss function. In this case, it is the mean squared error. It uses the target tensor (the labels) of the training set and the prediction tensor of the trained algorithm to calculate the cost.


class ColorAccessibilityModel {
  inputTensor;
  targetTensor;
  predictionTensor;
  costTensor;

  setupSession(trainingSet) {
    const graph = new Graph();

    this.inputTensor = graph.placeholder('input RGB value', [3]);
    this.targetTensor = graph.placeholder('output classifier', [2]);

    let connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);

    this.predictionTensor = this.createConnectedLayer(graph, connectedLayer, 3, 2);
    this.costTensor = graph.meanSquaredCost(this.targetTensor, this.predictionTensor);
  }

  ...
}

export default ColorAccessibilityModel;


Last but not least, set up the session with the architected graph. After that, you are ready to prepare the incoming training set for the training phase.


import {
  Graph,
  Session,
  NDArrayMathGPU,
} from 'deeplearn';

class ColorAccessibilityModel {
  session;

  inputTensor;
  targetTensor;
  predictionTensor;
  costTensor;

  setupSession(trainingSet) {
    const graph = new Graph();

    this.inputTensor = graph.placeholder('input RGB value', [3]);
    this.targetTensor = graph.placeholder('output classifier', [2]);

    let connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);
    connectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);

    this.predictionTensor = this.createConnectedLayer(graph, connectedLayer, 3, 2);
    this.costTensor = graph.meanSquaredCost(this.targetTensor, this.predictionTensor);

    this.session = new Session(graph, math);

    this.prepareTrainingSet(trainingSet);
  }

  prepareTrainingSet(trainingSet) {
    ...
  }

  ...
}

export default ColorAccessibilityModel;


However, the setup isn't complete until the training set has been prepared for the neural network. First, you can support the computation by wrapping it in a callback function within the GPU math context; this is optional rather than mandatory.


import {
  Graph,
  Session,
  NDArrayMathGPU,
} from 'deeplearn';

const math = new NDArrayMathGPU();

class ColorAccessibilityModel {
  session;

  inputTensor;
  targetTensor;
  predictionTensor;
  costTensor;

  ...

  prepareTrainingSet(trainingSet) {
    math.scope(() => {
      ...
    });
  }

  ...
}

export default ColorAccessibilityModel;

Second, you can destructure the input and output (the labels, also called targets) of the training set and convert them into a format the neural network can read. The mathematical calculations in deeplearn.js use built-in NDArrays; you can think of them as simple arrays or vectors in array-like matrices. In addition, the colors of the input array are normalized to improve the performance of the neural network.


import {
  Array1D,
  Graph,
  Session,
  NDArrayMathGPU,
} from 'deeplearn';

const math = new NDArrayMathGPU();

class ColorAccessibilityModel {
  session;

  inputTensor;
  targetTensor;
  predictionTensor;
  costTensor;

  ...

  prepareTrainingSet(trainingSet) {
    math.scope(() => {
      const { rawInputs, rawTargets } = trainingSet;

      const inputArray = rawInputs.map(v => Array1D.new(this.normalizeColor(v)));
      const targetArray = rawTargets.map(v => Array1D.new(v));
    });
  }

  ...
}

export default ColorAccessibilityModel;


Third, the input and target arrays are shuffled. The shuffler provided by deeplearn.js keeps both arrays in sync while shuffling. The shuffle happens for each training iteration, feeding different batches of inputs to the neural network. The whole shuffling process improves the trained algorithm, because it is more likely to generalize by avoiding overfitting.


import {
  Array1D,
  InCPUMemoryShuffledInputProviderBuilder,
  Graph,
  Session,
  NDArrayMathGPU,
} from 'deeplearn';

const math = new NDArrayMathGPU();

class ColorAccessibilityModel {
  session;

  inputTensor;
  targetTensor;
  predictionTensor;
  costTensor;

  ...

  prepareTrainingSet(trainingSet) {
    math.scope(() => {
      const { rawInputs, rawTargets } = trainingSet;

      const inputArray = rawInputs.map(v => Array1D.new(this.normalizeColor(v)));
      const targetArray = rawTargets.map(v => Array1D.new(v));

      const shuffledInputProviderBuilder = new InCPUMemoryShuffledInputProviderBuilder([ inputArray, targetArray ]);

      const [
        inputProvider,
        targetProvider,
      ] = shuffledInputProviderBuilder.getInputProviders();
    });
  }

  ...
}

export default ColorAccessibilityModel;


Finally, the feed entries are the ultimate input for the feedforward algorithm of the neural network in the training phase. They match the data with the tensors (which were defined by their shapes in the setup phase).


import {
  Array1D,
  InCPUMemoryShuffledInputProviderBuilder,
  Graph,
  Session,
  NDArrayMathGPU,
} from 'deeplearn';

const math = new NDArrayMathGPU();

class ColorAccessibilityModel {
  session;

  inputTensor;
  targetTensor;
  predictionTensor;
  costTensor;
  feedEntries;

  ...

  prepareTrainingSet(trainingSet) {
    math.scope(() => {
      const { rawInputs, rawTargets } = trainingSet;

      const inputArray = rawInputs.map(v => Array1D.new(this.normalizeColor(v)));
      const targetArray = rawTargets.map(v => Array1D.new(v));

      const shuffledInputProviderBuilder = new InCPUMemoryShuffledInputProviderBuilder([ inputArray, targetArray ]);

      const [
        inputProvider,
        targetProvider,
      ] = shuffledInputProviderBuilder.getInputProviders();

      this.feedEntries = [
        { tensor: this.inputTensor, data: inputProvider },
        { tensor: this.targetTensor, data: targetProvider },
      ];
    });
  }

  ...
}

export default ColorAccessibilityModel;


This completes the setup of the neural network. All layers and units of the neural network are implemented and the training set is ready for training. Now you just need to add two hyperparameters that configure the behavior of the neural network for the next phase: the training phase.


import {
  Array1D,
  InCPUMemoryShuffledInputProviderBuilder,
  Graph,
  Session,
  SGDOptimizer,
  NDArrayMathGPU,
} from 'deeplearn';

const math = new NDArrayMathGPU();

class ColorAccessibilityModel {
  session;

  optimizer;

  batchSize = 300;
  initialLearningRate = 0.06;

  inputTensor;
  targetTensor;
  predictionTensor;
  costTensor;
  feedEntries;

  constructor() {
    this.optimizer = new SGDOptimizer(this.initialLearningRate);
  }

  ...
}

export default ColorAccessibilityModel;


The first parameter is the learning rate. The learning rate determines how quickly the algorithm converges while minimizing the cost. It should be reasonably high, but not too high; otherwise gradient descent may never converge, because it keeps missing the optimum.


The second parameter is the batch size. It defines how many data points of the training set are passed through the neural network in each epoch (iteration). An epoch equals one forward pass and one backward pass of one batch of data points. There are two benefits to training a neural network with batches. First, it is less computationally intensive, because the algorithm is trained with only a small number of data points in memory. Second, the neural network learns faster with batches, because the weights are adjusted after each batch of data points in an epoch rather than waiting until the whole data set has been passed through.


The training phase


After the setup phase comes the training phase. It doesn't need much implementation, because all the groundwork was done in the setup phase. First of all, the training phase can be defined in a class method. It is executed in the math context of deeplearn.js again, and it uses all the predefined properties of the neural network instance to train the algorithm.


class ColorAccessibilityModel {
 ...
 train() {
   math.scope(() => {
     this.session.train(
       this.costTensor,
       this.feedEntries,
       this.batchSize,
       this.optimizer
     );
   });
 }
}
export default ColorAccessibilityModel;


The train method trains the neural network for only one epoch. Therefore, when it is called from the outside, it has to be called iteratively: to train the algorithm over multiple batches and epochs, you have to run the train method over multiple iterations.

That's the basic training phase. It can be improved by adjusting the learning rate over time: the learning rate is high in the beginning, but as the algorithm converges with each step it takes, the learning rate should decrease.


class ColorAccessibilityModel {
  ...

  train(step) {
    let learningRate = this.initialLearningRate * Math.pow(0.90, Math.floor(step / 50));
    this.optimizer.setLearningRate(learningRate);

    math.scope(() => {
      this.session.train(
        this.costTensor,
        this.feedEntries,
        this.batchSize,
        this.optimizer
      );
    });
  }
}

export default ColorAccessibilityModel;


In our case, the learning rate drops by 10% every 50 steps. Next, it would be interesting to capture the cost during the training phase to verify that it decreases over time. The cost could be returned on every iteration, but that would hurt computational efficiency: every time the cost is requested, the GPU has to be accessed to return it. Therefore, we only request the cost every few iterations to verify that it is decreasing. If the cost is not requested, the cost reduction constant for the training is defined as NONE (which was the default before).


import {
  Array1D,
  InCPUMemoryShuffledInputProviderBuilder,
  Graph,
  Session,
  SGDOptimizer,
  NDArrayMathGPU,
  CostReduction,
} from 'deeplearn';

class ColorAccessibilityModel {
  ...

  train(step, computeCost) {
    let learningRate = this.initialLearningRate * Math.pow(0.90, Math.floor(step / 50));
    this.optimizer.setLearningRate(learningRate);

    let costValue;
    math.scope(() => {
      const cost = this.session.train(
        this.costTensor,
        this.feedEntries,
        this.batchSize,
        this.optimizer,
        computeCost ? CostReduction.MEAN : CostReduction.NONE,
      );

      if (computeCost) {
        costValue = cost.get();
      }
    });

    return costValue;
  }
}

export default ColorAccessibilityModel;


That's it for the training phase. Now it only needs to be called iteratively from the outside, after the session has been set up with the training set. Whether the train method returns the cost depends on the outside execution.
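For example, outside of any UI, a minimal sketch of that outer loop could look like the following. It assumes the generateColorSet() function and the ColorAccessibilityModel class from the previous sections, and that they live in files such as './data' and './neuralNetwork' (as in the React setup later on); the iteration count and the logging interval are arbitrary values chosen for illustration.


import generateColorSet from './data';
import ColorAccessibilityModel from './neuralNetwork';

// Arbitrary sizes for illustration; tune them for your own experiments.
const TRAINING_SET_SIZE = 1500;
const ITERATIONS = 750;

const trainingSet = generateColorSet(TRAINING_SET_SIZE);

const model = new ColorAccessibilityModel();
model.setupSession(trainingSet);

for (let i = 0; i <= ITERATIONS; i++) {
  // Only request the cost every 5th step to avoid expensive GPU readbacks.
  const computeCost = !(i % 5);
  const cost = model.train(i, computeCost);

  if (computeCost) {
    console.log(`iteration ${i}, cost: ${cost}`);
  }
}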


Inference phase


The final phase is the inference phase, where a test set is used to validate the performance of the trained algorithm. The input is an RGB background color, and the output is the [0, 1] or [1, 0] classification the algorithm predicts for a black or white font color. Since the input data points were normalized, don't forget to normalize the color in this step as well.


class ColorAccessibilityModel {
 ...
 predict(rgb) {
   let classifier = [];
   math.scope(() => {
     const mapping = [{
       tensor: this.inputTensor,
       data: Array1D.new(this.normalizeColor(rgb)),
     }];
     classifier = this.session.eval(this.predictionTensor, mapping).getValues();
   });
   return [ ...classifier ];
 }
}
export default ColorAccessibilityModel;


This method runs the performance-critical part in the math context again and requires defining a mapping that ends up as the input for the session evaluation. Keep in mind that the predict method doesn't have to run only after the training phase; it can also be used during training to output validations against the test set. So far, the neural network has gone through the setup, training, and inference phases.
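For example, a minimal sketch of such a validation, assuming a test set generated with generateColorSet() from earlier, could compare the network's predictions against the programmatically computed labels. The helper name and the accuracy metric are my own additions for illustration, not part of the original model.


// Sketch: share of test-set colors where the predicted class matches the programmatic label.
function evaluateAccuracy(model, testSet) {
  const { rawInputs, rawTargets } = testSet;

  const correct = rawInputs.filter((rgb, i) => {
    const prediction = model.predict(rgb); // e.g. [ 0.82, 0.18 ]
    const target = rawTargets[i];          // e.g. [ 1, 0 ]

    // Compare the predicted class (larger output unit) with the labeled class.
    const predictsBlack = prediction[1] > prediction[0];
    const labeledBlack = target[1] > target[0];
    return predictsBlack === labeledBlack;
  });

  return correct.length / rawInputs.length;
}

// Usage during or after training:
// console.log(`test accuracy: ${evaluateAccuracy(model, generateColorSet(10))}`);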


Visualizing the learning neural network in JavaScript


Now it's time to train the neural network and validate/test it. In the simplest case, you would set up the neural network, run the training phase with a training set, and, once the cost function is minimized, predict with a test set. All of that could be done with a couple of console.log statements in the developer console of a web browser. However, since the neural network is about color prediction and deeplearn.js runs in the browser anyway, it is much more enjoyable to visualize the training and inference phases of the neural network.


At this point, it's up to you to decide how to visualize the running neural network. Plain JavaScript with a canvas and the requestAnimationFrame API would keep the code simple. But for this article, I'll use React.js, because I also write about it on my blog.


So after setting up the project with create-react-app, the App component serves as the entry point of our visualization. First, import the neural network class and the data set generation function from your files. Furthermore, add a few constants for the training set size, the test set size, and the number of training iterations.


import React, { Component } from 'react';
import './App.css';
import generateColorSet from './data';
import ColorAccessibilityModel from './neuralNetwork';
const ITERATIONS = 750;
const TRAINING_SET_SIZE = 1500;
const TEST_SET_SIZE = 10;
class App extends Component {
 ...
}
export default App;


In its constructor, the App component generates the data sets (training set, test set), sets up the neural network session by passing in the training set, and defines the initial local state of the component. Over the course of the training phase, the value of the cost function and the number of iterations will be displayed in the component, so these properties are part of the component state.


import React, { Component } from 'react';
import './App.css';

import generateColorSet from './data';
import ColorAccessibilityModel from './neuralNetwork';

const ITERATIONS = 750;
const TRAINING_SET_SIZE = 1500;
const TEST_SET_SIZE = 10;

class App extends Component {
  testSet;
  trainingSet;
  colorAccessibilityModel;

  constructor() {
    super();

    this.testSet = generateColorSet(TEST_SET_SIZE);
    this.trainingSet = generateColorSet(TRAINING_SET_SIZE);

    this.colorAccessibilityModel = new ColorAccessibilityModel();
    this.colorAccessibilityModel.setupSession(this.trainingSet);

    this.state = {
      currentIteration: 0,
      cost: -42,
    };
  }

  ...
}

export default App;


Next, after the session of the neural network is set up, you can train it iteratively. The naive approach would be a plain for loop in React's componentDidMount.


class App extends Component {
  ...

  componentDidMount () {
    for (let i = 0; i <= ITERATIONS; i++) {
      this.colorAccessibilityModel.train(i);
    }
  };
}

export default App;


However, the code above would not render any output during the training phase in React, because the component cannot re-render while the neural network blocks the single JavaScript thread. That's where requestAnimationFrame comes into play: instead of defining a for loop yourself, each requested animation frame of the browser can be used to run exactly one training iteration.


class App extends Component {
  ...

  componentDidMount () {
    requestAnimationFrame(this.tick);
  };

  tick = () => {
    this.setState((state) => ({
      currentIteration: state.currentIteration + 1
    }));

    if (this.state.currentIteration < ITERATIONS) {
      requestAnimationFrame(this.tick);

      this.colorAccessibilityModel.train(this.state.currentIteration);
    }
  };
}

export default App;


In addition, the cost can be computed every 5 steps. As mentioned earlier, the GPU has to be accessed to retrieve the cost, so requesting it on every iteration would slow the training down.


class App extends Component {
  ...

  componentDidMount () {
    requestAnimationFrame(this.tick);
  };

  tick = () => {
    this.setState((state) => ({
      currentIteration: state.currentIteration + 1
    }));

    if (this.state.currentIteration < ITERATIONS) {
      requestAnimationFrame(this.tick);

      let computeCost = !(this.state.currentIteration % 5);
      let cost = this.colorAccessibilityModel.train(
        this.state.currentIteration,
        computeCost
      );

      if (cost > 0) {
        this.setState(() => ({ cost }));
      }
    }
  };
}

export default App;


Once the component has mounted, the training phase is running. Now it's time to show the test set with the programmatically computed output next to the predicted output. Over time, the predicted output should become the same as the programmatically computed output. The training set itself is never visualized.


class App extends Component {
  ...

  render() {
    const { currentIteration, cost } = this.state;

    return (
      <div className="app">
        <div>
          <h1>Neural Network for Font Color Accessibility</h1>
          <p>Iterations: {currentIteration}</p>
          <p>Cost: {cost}</p>
        </div>

        <div className="content">
          <div className="content-item">
            <ActualTable
              testSet={this.testSet}
            />
          </div>

          <div className="content-item">
            <InferenceTable
              model={this.colorAccessibilityModel}
              testSet={this.testSet}
            />
          </div>
        </div>
      </div>
    );
  }
}

const ActualTable = ({ testSet }) =>
  <div>
    <p>Programmatically Computed</p>
  </div>

const InferenceTable = ({ testSet, model }) =>
  <div>
    <p>Neural Network Computed</p>
  </div>

export default App;


The actual table iterates over the test set and displays the input color and output color of each entry. The test set consists of input colors (background colors) and output colors (font colors). Since the output colors were classified into black [0, 1] and white [1, 0] vectors when the data set was generated, they have to be converted back into actual colors.


const ActualTable = ({ testSet }) =>
  <div>
    <p>Programmatically Computed</p>

    {Array(TEST_SET_SIZE).fill(0).map((v, i) =>
      <ColorBox
        key={i}
        rgbInput={testSet.rawInputs[i]}
        rgbTarget={fromClassifierToRgb(testSet.rawTargets[i])}
      />
    )}
  </div>

const fromClassifierToRgb = (classifier) =>
  classifier[0] > classifier[1]
    ? [ 255, 255, 255 ]
    : [ 0, 0, 0 ]


The ColorBox component is a generic component that takes an input color (background color) and a target color (font color). It simply displays a rectangle filled with the input color, shows the RGB string of the input color, and styles the font of that RGB string with the given target color.


const ColorBox = ({ rgbInput, rgbTarget }) =>
 <div className="color-box" style={{ backgroundColor: getRgbStyle(rgbInput) }}>
   <span style={{ color: getRgbStyle(rgbTarget) }}>
     <RgbString rgb={rgbInput} />
   </span>
 </div>
const RgbString = ({ rgb }) =>
 `rgb(${rgb.toString()})`
const getRgbStyle = (rgb) =>
 `rgb(${rgb[0]}, ${rgb[1]}, ${rgb[2]})`


Last but not least, the exciting part: visualizing the predicted colors in the inference table. It uses the same ColorBox component, but passes in a different set of props.


const InferenceTable = ({ testSet, model }) =>
 <div>
   <p>Neural Network Computed</p>
   {Array(TEST_SET_SIZE).fill(0).map((v, i) =>
     <ColorBox
       key={i}
       rgbInput={testSet.rawInputs[i]}
       rgbTarget={fromClassifierToRgb(model.predict(testSet.rawInputs[i]))}
     />
   )}
  </div>


The input color is still the color defined in the test set, but the target color is not the target color from the test set. Instead, the target color is computed by the neural network's predict method: it takes the input color and should predict the target color it learned over the course of the training phase.


Finally, when you start the application, you can watch the neural network in action. While the actual table uses the fixed test set from the beginning, the inference table should change its font colors over the course of the training phase. In fact, while the ActualTable component shows the actual test set, the InferenceTable shows the same input data points of the test set, but with outputs predicted by the neural network.


This article has shown how to use deeplearn.js to build a neural network for machine learning in JavaScript. If you have any suggestions for improvement, feel free to leave a comment or contribute on GitHub. A GIF visualizing the React rendering can be seen in the repository: github.com/javascript-…