Recently, an alpha version of TensorFlow.js was released for React Native and Expo applications. It currently supports loading pre-trained models as well as training new models.
Note: In the past, I covered Google’s Vision API to build an image classification application that determines whether a given image is a hot dog. If you are interested in reading that example, see: heartbeat.fritz.ai/build-a-not…
Contents
- Environment setup
- Integrate TF.js into the Expo application
- Test the TF.js integration
- Load the MobileNet model
- Ask for user permissions
- Transform the original image into a tensor
- Load and classify images
- Allow the user to select an image
- Run the application
- Conclusion
Full code link: github.com/amandeepmit…
Environment setup
- Local development environment
  - Node.js >= 10.x.x
  - expo-cli
- Testing device
  - Android or iOS
  - The Expo Client application installed for testing the app
Integrate TF.js into the Expo application
To use the TensorFlow library in React Native, the first step is to integrate the platform adapter, the tfjs-react-native module, which supports loading all major TF.js models from the web. It also provides GPU support via expo-gl.
Open a terminal window and create a new Expo application by executing the following command.
expo init mobilenet-tfjs-expo
Next, when prompted, make sure to generate an Expo-managed application. Then install the following dependencies inside your app’s directory:
yarn add @react-native-community/async-storage @tensorflow/tfjs @tensorflow/tfjs-react-native expo-gl @tensorflow-models/mobilenet jpeg-js
Note: If you prefer to generate your application with the react-native CLI instead, follow the explicit instructions to modify the metro.config.js file and the other necessary steps, as described here.
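For reference, a rough sketch of that bare React Native setup is shown below. It is based on the tfjs-react-native README of the time and is an assumption on my part rather than part of this tutorial, so follow the linked instructions for the authoritative steps; the key idea is letting Metro bundle the model weight (.bin) files.

// metro.config.js (sketch, bare React Native only; not needed for Expo-managed apps)
const { getDefaultConfig } = require('metro-config')

module.exports = (async () => {
  const defaultConfig = await getDefaultConfig()
  const { assetExts } = defaultConfig.resolver
  return {
    resolver: {
      // Allow Metro to treat TF.js model weight files as assets
      assetExts: [...assetExts, 'bin']
    }
  }
})()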
Even if you use Expo, you still need to install async-storage, which the TFJS module depends on.
Test the TF.js integration
We need to ensure that TFJS has loaded successfully before rendering the application; the asynchronous function tf.ready() resolves once it has. Open the App.js file, import the necessary dependencies, and define an initial state with isTfReady set to false.
import React from 'react'
import { StyleSheet, Text, View } from 'react-native'
import * as tf from '@tensorflow/tfjs'
import { fetch } from '@tensorflow/tfjs-react-native'
class App extends React.Component {
  state = {
    isTfReady: false
  }

  async componentDidMount() {
    await tf.ready()
    this.setState({
      isTfReady: true
    })
    // Output in the Expo console
    console.log(this.state.isTfReady)
  }

  render() {
    return (
      <View style={styles.container}>
        <Text>TFJS ready? {this.state.isTfReady ? <Text>Yes</Text> : ' '}</Text>
      </View>
    )
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center'
  }
})

export default App
Because the lifecycle method awaits tf.ready(), the isTfReady value is updated to true only once TFJS has actually loaded.
You can see the output in the emulator device, as shown below.
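Note that, depending on the React version and where setState runs, the state update may not yet be applied when console.log executes. If you want to be certain you log the updated value, one option (not part of the original snippet) is React's setState callback:

this.setState({ isTfReady: true }, () => {
  // Runs only after the state update has been applied
  console.log(this.state.isTfReady) // true
})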
Load the MobileNet model
Similar to the previous step, you must also load the MobileNet model before you can feed it an input image. Loading a pre-trained TensorFlow.js model from the web is an expensive network call and can take quite some time. Modify the App.js file to load the MobileNet model. Import it first:
import * as mobilenet from '@tensorflow-models/mobilenet'
Add an additional property to the initial state:
state = {
  isTfReady: false,
  isModelReady: false
}
Modify the lifecycle method:
async componentDidMount() {
  await tf.ready()
  this.setState({
    isTfReady: true
  })
  this.model = await mobilenet.load()
  this.setState({ isModelReady: true })
}
Finally, when the model is loaded, let’s display an indicator on the screen.
<Text>
  Model ready?{' '}
  {this.state.isModelReady ? <Text>Yes</Text> : <Text>Loading Model...</Text>}
</Text>
Once the model has loaded, the indicator switches from "Loading Model..." to "Yes".
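The default MobileNet download can be fairly large. If you need a smaller or faster variant, recent versions of @tensorflow-models/mobilenet also accept an optional config object; this is an assumption on my part and not part of the original tutorial, so check the docs for the package version you have installed:

// Hypothetical alternative: request a lighter MobileNet variant
// (the available version/alpha options depend on the installed package version)
this.model = await mobilenet.load({ version: 2, alpha: 0.5 })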
Ask for user permissions
Now that the platform adapter and the model are integrated into the React Native application, we need to add an asynchronous method to request the user’s permission to access the camera roll. This is an essential step when building iOS applications that use Expo’s image picker component. Before continuing, run the following command to install the required packages from the Expo SDK:
expo install expo-permissions expo-constants expo-image-picker
Add the import statements to App.js:
import Constants from 'expo-constants'
import * as Permissions from 'expo-permissions'
Add the following method to the App class:
getPermissionAsync = async () => {
  if (Constants.platform.ios) {
    const { status } = await Permissions.askAsync(Permissions.CAMERA_ROLL)
    if (status !== 'granted') {
      alert('Sorry, we need camera roll permissions to make this work!')
    }
  }
}
Call this asynchronous method inside componentDidMount():
async componentDidMount() {
  await tf.ready()
  this.setState({
    isTfReady: true
  })
  this.model = await mobilenet.load()
  this.setState({ isModelReady: true })
  // add this
  this.getPermissionAsync()
}
Transform the original image into a tensor
The app will ask users to upload images from their phone’s camera or gallery. You must add a method to load the image and allow TensorFlow to decode the data in the image. TensorFlow supports JPEG and PNG formats.
In the App.js file, first import the jpeg-js package, which will be used to decode the image data.
import * as jpeg from 'jpeg-js'
The imageToTensor method takes the raw image data as its parameter and decodes it into the image’s width, height, and binary pixel data.
imageToTensor(rawImageData) {
  const TO_UINT8ARRAY = true
  const { width, height, data } = jpeg.decode(rawImageData, TO_UINT8ARRAY)
  // Drop the alpha channel info for MobileNet
  const buffer = new Uint8Array(width * height * 3)
  let offset = 0 // offset into the original data
  for (let i = 0; i < buffer.length; i += 3) {
    buffer[i] = data[offset]
    buffer[i + 1] = data[offset + 1]
    buffer[i + 2] = data[offset + 2]
    offset += 4
  }
  return tf.tensor3d(buffer, [height, width, 3])
}
The TO_UINT8ARRAY flag tells jpeg-js to return the decoded pixel data as a Uint8Array, an array of 8-bit unsigned integers. Uint8Array is one of JavaScript’s typed arrays (standardized in ES2015), and each typed array type uses a fixed number of bytes per element in memory.
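As a concrete illustration of what the loop above does, the decoded RGBA data is copied three bytes at a time while every fourth (alpha) byte is skipped; the pixel values here are made up for the example:

// e.g. for a 2-pixel image (illustrative values):
// decoded RGBA data:                        [255, 0, 0, 255,   0, 128, 0, 255]
// resulting RGB buffer passed to tensor3d:  [255, 0, 0,        0, 128, 0]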
Load and classify images
Next, we add another method called classifyImage, which reads the raw data from the image and, once classification has run, yields the results as predictions.
To read the image from its source, the path to the image source must be saved in the application component’s state, along with the predictions returned by the asynchronous method described above. This is the last modification to the existing state in the App.js file.
state = {
  isTfReady: false,
  isModelReady: false,
  predictions: null,
  image: null
}
Add the asynchronous method (note that it uses Image from react-native and the fetch imported earlier from @tensorflow/tfjs-react-native):
classifyImage = async () => {
  try {
    const imageAssetPath = Image.resolveAssetSource(this.state.image)
    const response = await fetch(imageAssetPath.uri, {}, { isBinary: true })
    const rawImageData = await response.arrayBuffer()
    const imageTensor = this.imageToTensor(rawImageData)
    const predictions = await this.model.classify(imageTensor)
    this.setState({ predictions })
    console.log(predictions)
  } catch (error) {
    console.log(error)
  }
}
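One optional refinement, not in the original tutorial, is to release the tensor’s memory once classification is done; TF.js tensors are not garbage collected automatically, and tf.Tensor exposes a dispose() method for this:

const predictions = await this.model.classify(imageTensor)
// Optional: free the tensor's memory after classification is done
imageTensor.dispose()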
The pre-trained model returns its results as an array of predictions. An example of the shape is shown below.
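Each entry contains a class name and a probability; the values here are purely illustrative:

// Example console.log(predictions) output (illustrative values)
// [
//   { className: 'golden retriever', probability: 0.84 },
//   { className: 'Labrador retriever', probability: 0.09 },
//   { className: 'kuvasz', probability: 0.03 }
// ]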
Allow the user to select an image
To let the user select an image from the device’s camera roll, we use the asynchronous method ImagePicker.launchImageLibraryAsync provided by the expo-image-picker package. Import the package:
import * as ImagePicker from 'expo-image-picker'
Add the selectImage method to:
- let the user pick an image
- store the selected image's source ({ uri }) object in state.image
- finally, call the classifyImage() method to make predictions for the given input
selectImage = async () => {
  try {
    let response = await ImagePicker.launchImageLibraryAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.All,
      allowsEditing: true,
      aspect: [4, 3]
    })
    if (!response.cancelled) {
      const source = { uri: response.uri }
      this.setState({ image: source })
      this.classifyImage()
    }
  } catch (error) {
    console.log(error)
  }
}
The expo-image-picker package returns an object. If the user cancels the selection, the picker returns a single property, cancelled: true. On success, it returns properties such as the uri of the selected image. That is why the if statement in the snippet above matters.
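A hedged sketch of the two result shapes, based on the expo-image-picker API of that era (the exact fields can vary by SDK version):

// When the user cancels the picker
// { cancelled: true }

// When the user picks an image (illustrative values)
// { cancelled: false, uri: 'file:///.../photo.jpg', width: 4032, height: 3024, type: 'image' }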
Run the application
To complete the app, add a TouchableOpacity that the user taps to choose an image.
Here’s the complete code snippet for the render method in App.js. Note that it also uses StatusBar, TouchableOpacity, ActivityIndicator, and Image from react-native, so add them to the existing import:
render() {
  const { isTfReady, isModelReady, predictions, image } = this.state
  return (
    <View style={styles.container}>
      <StatusBar barStyle='light-content' />
      <View style={styles.loadingContainer}>
        <Text style={styles.text}>
          TFJS ready? {isTfReady ? <Text>✅</Text> : ' '}
        </Text>
        <View style={styles.loadingModelContainer}>
          <Text style={styles.text}>Model ready? </Text>
          {isModelReady ? (
            <Text style={styles.text}>✅</Text>
          ) : (
            <ActivityIndicator size='small' />
          )}
        </View>
      </View>
      <TouchableOpacity
        style={styles.imageWrapper}
        onPress={isModelReady ? this.selectImage : undefined}>
        {image && <Image source={image} style={styles.imageContainer} />}
        {isModelReady && !image && (
          <Text style={styles.transparentText}>Tap to choose image</Text>
        )}
      </TouchableOpacity>
      <View style={styles.predictionWrapper}>
        {isModelReady && image && (
          <Text style={styles.text}>
            Predictions: {predictions ? '' : 'Predicting...'}
          </Text>
        )}
        {isModelReady &&
          predictions &&
          predictions.map(p => this.renderPrediction(p))}
      </View>
      <View style={styles.footer}>
        <Text style={styles.poweredBy}>Powered by:</Text>
        <Image source={require('./assets/tfjs.jpg')} style={styles.tfLogo} />
      </View>
    </View>
  )
}
}
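The render method above calls a renderPrediction helper that the article does not show. A minimal sketch of it, assuming the { className, probability } objects returned by MobileNet's classify(), could look like this:

// Minimal sketch of the missing helper (an assumption, not from the original article)
renderPrediction = prediction => (
  <Text key={prediction.className} style={styles.text}>
    {prediction.className}
  </Text>
)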
The full styles object:
const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#171f24',
    alignItems: 'center'
  },
  loadingContainer: {
    marginTop: 80,
    justifyContent: 'center'
  },
  text: {
    color: '#ffffff',
    fontSize: 16
  },
  loadingModelContainer: {
    flexDirection: 'row',
    marginTop: 10
  },
  imageWrapper: {
    width: 280,
    height: 280,
    padding: 10,
    borderColor: '#cf667f',
    borderWidth: 5,
    borderStyle: 'dashed',
    marginTop: 40,
    marginBottom: 10,
    position: 'relative',
    justifyContent: 'center',
    alignItems: 'center'
  },
  imageContainer: {
    width: 250,
    height: 250,
    position: 'absolute',
    top: 10,
    left: 10,
    bottom: 10,
    right: 10
  },
  predictionWrapper: {
    height: 100,
    width: '100%',
    flexDirection: 'column',
    alignItems: 'center'
  },
  transparentText: {
    color: '#ffffff',
    opacity: 0.7
  },
  footer: {
    marginTop: 40
  },
  poweredBy: {
    fontSize: 20,
    color: '#e69e34',
    marginBottom: 6
  },
  tfLogo: {
    width: 125,
    height: 70
  }
})
Run the program by executing the expo start command from the terminal window. The first thing you’ll notice is that after booting the application in the Expo client, it will ask for permissions.
Conclusion
The purpose of this article is to give you a head start on implementing a TensorFlow.js model in a React Native application, as well as a better understanding of image classification, a core use case of computer vision-based machine learning.
Since TF.js for React Native is in alpha at the time of writing this article, we can expect to see more advanced examples for building real-time applications in the future. Here are some resources I found very useful:
- The tfjs-react-native GitHub repository contains more examples using different pre-trained models
- Infinite Red's NSFW JS and React Native example is a clear and very helpful introduction to TensorFlow.js