- Linear Algebra for Deep Learning
- Original author: Vihar Kurama
- The Nuggets translation Project
- Permanent link to this article: github.com/xitu/gold-m…
- Translator: maoqyhz
- Proofreader: kezhenxu94, luochen1992
The math behind each deep learning project.
Deep learning is a subfield of machine learning that involves artificial neural network algorithms that mimic the structure and function of the human brain.
Linear algebra is a continuous rather than discrete form of mathematics, and many computer scientists have little experience with it. Understanding linear algebra is important for understanding and using many machine learning algorithms, especially deep learning algorithms.
Why math?
Linear algebra, probability theory, and calculus are the three “languages” in which machine learning is written. Learning this mathematical foundation will help you understand the underlying algorithmic mechanisms in depth and develop new algorithms.
When you get down to it, everything behind deep learning is math. Therefore, it is crucial to understand the basics of linear algebra before getting started with deep learning and its programming.
The core data structures behind deep learning are scalars, vectors, matrices and tensors. Let’s use these data structures to solve all the basic linear algebra problems programmatically.
Scalars
A scalar is a single number, or a tensor of order 0. The symbol x∈ℝ indicates that x is a scalar that belongs to a set of real values ℝ.
The following are the sets of numbers commonly used in deep learning: ℕ stands for the set of positive integers (1, 2, 3, …); ℤ stands for the set of integers, combining positive, negative, and zero values; ℚ stands for the set of rational numbers.
There are several built-in scalar types in Python, int, float, complex, bytes, and Unicode. In Numpy, a Python library, there are 24 new basic data types to describe different types of scalars. Refer to the documentation for information about data types.
Defining scalars and related operations in Python:
The following code snippet explains the use of some of the operators in scalars.
# Built-in scalars
a = 5
b = 7.5
print(type(a))
print(type(b))
print(a + b)
print(a - b)
print(a * b)
print(a / b)
<class 'int'>
<class 'float'>
12.5
-2.5
37.5
0.6666666666666666
The following code snippet checks whether the given variable is a scalar.
import numpy as np
# A simplified scalar check (NumPy scalar types derive from np.generic)
def isscalar(num):
    if isinstance(num, np.generic):
        return True
    else:
        return False

print(np.isscalar(3.1))
print(np.isscalar([3.1]))
print(np.isscalar(False))
True
False
True
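The NumPy scalar types mentioned above all derive from np.generic, which is what the check above relies on. Below is a minimal sketch, not in the original article, that prints a few of them:
# A few of NumPy's scalar types, all subclasses of np.generic
import numpy as np

for value in (np.int32(7), np.float64(7.5), np.complex128(1 + 2j), np.bool_(True)):
    print(type(value).__name__, isinstance(value, np.generic))
""" Output: int32 True float64 True complex128 True bool_ True """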
Vectors
Vectors are ordered arrays of single numbers and are an example of first-order tensors. Vectors are members of objects known as vector spaces. A vector space can be thought of as the entire set of all possible vectors of a given length (or dimension). The three-dimensional real-valued vector space, written ℝ^3, is commonly used to represent our real-world notion of three-dimensional space mathematically.
To identify a particular component of a vector, the ith scalar element of the vector is written as x[i].
In deep learning, vectors usually represent feature vectors, whose components quantify how relevant a particular feature is. Such elements might include the relative importance of the intensities of a set of pixels in a two-dimensional image, or the historical prices of various financial instruments.
Define vectors and related operations in Python:
import numpy as np
# define vector
x = [1, 2, 3]
y = [4, 5, 6]
print(type(x))
# Adding Python lists does not give the vector sum; it concatenates the lists
print(x + y)
# Add vectors using NumPy
z = np.add(x, y)
print(z)
print(type(z))
# The vector cross product
mul = np.cross(x, y)
print(mul)
<class 'list'>
[1, 2, 3, 4, 5, 6]
[5 7 9]
<class 'numpy.ndarray'>
[-3 6 -3]
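Since feature vectors are often combined with weights, the dot product is another vector operation worth knowing. The following is a minimal sketch that is not part of the original article; the names features and weights are purely illustrative:
# The dot product: a weighted sum of the components of a feature vector
import numpy as np

features = [1, 2, 3]
weights = [4, 5, 6]
# 1*4 + 2*5 + 3*6
print(np.dot(features, weights))
# Output: 32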
Matrices
A matrix is a rectangular array of numbers and is an example of a tensor of order 2. If m and n are positive integers, that is, m, n ∈ ℕ, then an m×n matrix contains m*n numbers arranged in m rows and n columns.
The complete m×n matrix can be written out entry by entry, with a_ij denoting the entry in row i and column j:

a_11 a_12 ... a_1n
a_21 a_22 ... a_2n
 ...  ...      ...
a_m1 a_m2 ... a_mn

It is often useful to abbreviate the full matrix display to the following expression:

A = [a_ij] (m×n)
In Python, we use the NumPy library to help us create n-dimensional arrays. A two-dimensional array is essentially a matrix; here we use the np.matrix constructor to build a matrix from a list of lists.
$ python
>>> import numpy as np
>>> x = np.matrix([[1, 2], [2, 3]])
>>> x
matrix([[1, 2],
        [2, 3]])
>>> a = x.mean(0)
>>> a
matrix([[1.5, 2.5]])
>>> # Mean of the matrix: with no axis argument, the mean of all m*n entries is returned as a single number;
>>> # axis=0 compresses the rows and averages each column, returning a 1*n matrix;
>>> # axis=1 compresses the columns and averages each row, returning an m*1 matrix.
>>> z = x.mean(1)
>>> z
matrix([[1.5],
        [2.5]])
>>> z.shape
(2, 1)
>>> y = x - z
>>> y
matrix([[-0.5,  0.5],
        [-0.5,  0.5]])
>>> print(type(z))
<class 'numpy.matrixlib.defmatrix.matrix'>
Defining matrices and related operations in Python:
Matrix addition
Matrices can be added with scalars, vectors, and other matrices. Each operation is precisely defined. These techniques are often used in machine learning and deep learning, so it’s worth taking the time to become familiar with them.
# Matrix addition
import numpy as np
x = np.matrix([[1, 2], [4, 3]])
sum = x.sum()
print(sum)
# Output: 10
Matrix plus matrix
C = A + B (**A and B must have the same dimensions**)
The shape attribute returns the dimensions of a matrix, and np.add takes two matrices and returns their sum. If the dimensions of the two matrices do not match, np.add raises an exception saying they cannot be added.
# matrix plus matrix
import numpy as np
x = np.matrix([[1, 2], [4, 3]])
y = np.matrix([[3, 4], [3, 10]])
print(x.shape)
# (2, 2)
print(y.shape)
# (2, 2)
m_sum = np.add(x, y)
print(m_sum)
print(m_sum.shape)
""" Output : [[4 6] [7 13]] (2, 2) """
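To see the exception mentioned above, here is a minimal sketch that is not in the original article; w is an illustrative 2*3 matrix whose shape does not match x:
# Matrix addition with mismatched dimensions
import numpy as np

x = np.matrix([[1, 2], [4, 3]])
w = np.matrix([[1, 2, 3], [4, 5, 6]])  # shape (2, 3), unlike x's (2, 2)

try:
    np.add(x, w)
except ValueError as err:
    print(err)
# Output (approximately): operands could not be broadcast together with shapes (2,2) (2,3)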
Adding a matrix to a scalar
Adds the given scalar to all elements of the given matrix.
# matrix plus scalar
import numpy as np
x = np.matrix([[1, 2], [4, 3]])
s_sum = x + 1
print(s_sum)
""" Output: [[2 3] [5 4]] """
Multiplication of matrices with scalars
Multiply the given scalar times all the entries in the given matrix.
# Multiply a matrix by a scalar
import numpy as np
x = np.matrix([[1, 2], [4, 3]])
s_mul = x * 3
print(s_mul)
"""[[3 6] [12 9]]"""
Matrix multiplication
Matrix A of dimension (m x n) is multiplied by matrix B of dimension (n x p), resulting in matrix C of dimension (m x p).
# Matrix multiplication
import numpy as np
a = [[1, 0], [0, 1]]
b = [1, 2]
print(np.matmul(a, b))
# Output: [1 2]
complex_mul = np.matmul([2j, 3j], [2j, 3j])
print(complex_mul)
# Output: (-13+0j)
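The dimension rule above is easiest to see with concrete shapes. Below is a minimal sketch, not in the original article, that multiplies a 2*3 matrix by a 3*2 matrix to obtain a 2*2 result:
# (m x n) times (n x p) gives (m x p)
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])    # shape (2, 3)
b = np.array([[7, 8],
              [9, 10],
              [11, 12]])     # shape (3, 2)
c = np.matmul(a, b)          # shape (2, 2)
print(c)
""" Output: [[ 58 64] [139 154]] """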
Matrix transpose
By transpose, you can convert a row vector to a column vector and vice versa:
A = [a_ij] (m×n)
A^T = [a_ji] (n×m)
# Transpose
import numpy as np
a = np.array([[1, 2], [3, 4]])
print(a)
"""[[1 2] [3 4]]"""
a = a.transpose()
print(a)
"""[[1 3] [2 4]]"""
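The same operation covers the row-vector to column-vector conversion mentioned above. A minimal sketch, not in the original article:
# Transpose a 1*3 row vector into a 3*1 column vector
import numpy as np

row = np.array([[1, 2, 3]])   # shape (1, 3)
col = row.transpose()         # shape (3, 1)
print(col)
print(col.shape)
""" Output: [[1] [2] [3]] (3, 1) """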
Tensors
A tensor is the more general entity that encapsulates scalars, vectors, and matrices. In the physical sciences and in machine learning, it is sometimes necessary to use tensors of order higher than two.
Instead of using nested matrices, we use Python libraries like TensorFlow or PyTorch to declare tensors.
Define a simple tensor in PyTorch:
import torch
a = torch.Tensor([26])
print(type(a))
# <class 'torch.FloatTensor'>
print(a.shape)
# torch.Size([1])
# Create a 5*3 torch tensor (uninitialized; its entries are whatever happens to be in memory)
t = torch.Tensor(5, 3)
print(t)
"""0.0000e+00 0.0000e+00 +00 0.0000e+00 +00 +00 7.0065e-45 1.1614e-41 0.0000e+00 2.2369e+08 0.0000e+00 7.0065e-45 [Torch.FloatTensor of size + 1]"""
print(t.shape)
# torch.Size([5, 3])
Operations on tensors in Python:
import torch
# Create two uninitialized tensors and a tensor of ones
p = torch.Tensor(4, 4)
q = torch.Tensor(4, 4)
ones = torch.ones(4, 4)
print(p, q, ones)
""" Output: E+00 e+00 e+00 e+00 0.0000 0.0000 0.0000 0.0000 1.6009 4.4721 e-19 e+21 e+22 e+30 e-09 e+17 e-08 5.1019 8.0221 3.1921 4.7428 6.2625 [torch.FloatTensor of size 4x4] 0.0000e+00 0.0000e+00 0.0000e+00 E+00 e-44 e-41 e+00 e+08 2.2369 0.0000 1.1614 1.8217 0.0000 0.0000 0.0000 e+00 e+00 2.0376 2.0376 e-40 e-40 e+37 nan nan - 5.3105 nan [torch.FloatTensor of size 4x4] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 [torch.FloatTensor of size 4x4] """
print("Addition:{}".format(p + q))
print("Subtraction:{}".format(p - ones))
print("Multiplication:{}".format(p * ones))
print("Division:{}".format(q / ones))
""" Addition: E+00 e+00 e+00 e+00 0.0000 0.0000 0.0000 0.0000 1.6009 4.4721 e-19 e+21 e+22 e+30 e-09 e+17 e-08 5.1019 8.0221 3.1921 4.7428 6.2625 Add nan nan [torch.FloatTensor of size 4x4] Subtraction: -1.0000e+00 -1.0000e+00 -1.0000e+00 -1.0000e+00 -1.0000e+00 + 4.4721e+21 6.2625e+22 4.7428e+30 -1.0000e+00 8.0221e+17 -1.0000e+00 8.1121e+17 + 1.0000e+00 8.2022 +17 + 1.0000e+00 -8.4363e-01 [torch.FloatTensor of size 4x4] x +00: E+00 e+00 e+00 e+00 0.0000 0.0000 0.0000 0.0000 1.6009 4.4721 e-19 e+21 e+22 e+30 e-09 e+17 e-08 5.1019 8.0221 3.1921 4.7428 6.2625 [torch.FloatTensor of size 4x4] Division: [torch.FloatTensor of size 4x4] 0.0000 e+00 e+00 e+00 e+00 e-44 e-41 1.1614 1.8217 0.0000 0.0000 0.0000 0.0000 e+00 e+08 e+00 e+00 e-40 2.0376 0.0000 0.0000 2.2369 [torch.FloatTensor of size 4x4]"""
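The snippets above only use tensors of order two, while the text notes that higher-order tensors are sometimes needed. Below is a minimal sketch, not in the original article, of a third-order tensor; the 2*3*4 shape is purely illustrative:
import torch

# An order-3 tensor: 2 slices, each a 3*4 matrix of ones
t3 = torch.ones(2, 3, 4)
print(t3.shape)
# torch.Size([2, 3, 4])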
For more documentation on tensors, refer to the official PyTorch documentation.
Important links
Getting started with Deep learning in Python:
- Deep Learning with Python: imitating the human brain.
- Introduction to Machine Learning: machine learning is the idea of learning from examples and experience without being explicitly programmed.
Conclusion
Thanks for reading. If you found this article useful, please clap 👏 below to share it with others.
Special thanks to Samhita Alla for her contribution to this article.
If you find any mistakes in the translation or other areas that could be improved, you are welcome to revise the translation and submit a PR to the Nuggets Translation Project; you can also earn the corresponding reward points. The **permanent link** at the beginning of this article is the MarkDown link to this article on GitHub.
The Nuggets Translation Project is a community that translates high-quality Internet technical articles, sourced from English sharing articles on Juejin. It covers Android, iOS, front-end, back-end, blockchain, product, design, artificial intelligence, and other fields (see github.com/xitu/gold-miner). If you want to see more high-quality translations, please keep following the Nuggets Translation Project, its official Weibo, and its Zhihu column.