As the joke goes, there is no wife in a "wife cake", and no husband or wife in "husband-and-wife lung slices". So, can the four arithmetic operations do without the four operators +, -, ×, ÷? Of course they can. This article starts with the basics, then moves on to a code implementation, and finally discusses the starting point and philosophy behind computer algorithms.
1. Principles and basics of implementing the four operations with bit operations
1.1 Sign-magnitude, ones' complement, and two's complement
As we all know, the computer world is made up of zeros and ones: a binary world. The binary representation of a number inside a computer is called the machine number of that number, and its highest bit is the sign bit: 0 for positive, 1 for negative. The actual value that a machine number (including its sign bit) stands for is called its true value.
Sign-magnitude: the sign-magnitude form directly encodes the true value mentioned in the previous paragraph. For example, using 8 bits to represent a number, the sign-magnitude form of +11 is 0 0001011, and that of -11 is 1 0001011.
Ones' complement: when the machine number is positive, the ones' complement is the same as the sign-magnitude form; when it is negative, the ones' complement keeps the sign bit and inverts every other bit. Intuitively, compare +0 and -0: the sign-magnitude form of +0 is 0 0000000, so its ones' complement is 0 0000000; the sign-magnitude form of -0 is 1 0000000, so its ones' complement is 1 1111111.
Two's complement: in computer systems, numerical values are uniformly represented and stored in two's complement (remember this). The reason is that with two's complement, the sign bit and the value bits can be processed together, and addition and subtraction can also be handled uniformly.
Now that we have introduced the three encodings, let's see how the two's complement is derived.
First, we need the concept of the "modulus": the modulus is the counting range of a measuring system, such as the bucket once used to measure grain, or a clock. A computer can also be regarded as such a measuring system: its word length is fixed, that is, the number of bits it can store and process is finite, so it too has a counting range, i.e. a modulus. For example, a clock counts over the range 0~11, so its modulus is 12. An n-bit computer (n = 32, 64, ...) counts over the range 0 to 2^n - 1, and its modulus is 2^n. The modulus is essentially the quantity at which the meter "overflows": that quantity cannot be shown on the meter, which can only display the remainder modulo it. On any such modular system, subtraction can be reduced to addition.
Suppose the hour hand points to 8 but the correct time is 6. There are two ways to fix it: dial it back 2 hours, 8 - 2 = 6; or dial it forward 10 hours, 8 + 10 = 12 + 6 = 6, i.e. 8 - 2 = 8 + 10 = 8 + 12 - 2 (mod 12). In a modulo-12 system, adding 10 is the same as subtracting 2, so any "subtract 2" can be replaced by "add 10". In general: a - b = a - b + mod = a + (mod - b). With respect to the modulus, 2 and 10 complement each other. In fact, in the modulo-12 system, 11 and 1, 10 and 2, 9 and 3, 8 and 4, 7 and 5, 6 and 6 all share this property, and the common feature is that each pair adds up to the modulus.

With computers, the concept and method are exactly the same. Take an n-bit computer with n = 8: the largest representable number is 11111111, and adding 1 gives 100000000 (9 bits), but since there are only 8 bits, the highest 1 is simply lost and we are back to 00000000. So the modulus of the 8-bit binary system is 2^8. In such a system, subtraction can likewise be turned into addition, simply by expressing the subtrahend in terms of its complement with respect to the modulus. Using this complement to process numbers inside a computer is exactly the two's complement. In the computer, a minus b is computed as a plus the two's complement encoding of -b.
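To make the modulus idea concrete in code (a small illustration of my own, not from the original article; the class name ModulusDemo is made up): Java's int is 32 bits wide, so its modulus is 2^32, and adding the complement of b, written ~b + 1, really is the same as subtracting b.

```java
public class ModulusDemo {
    public static void main(String[] args) {
        int a = 8, b = 2;
        // Subtracting b equals adding b's complement with respect to the modulus 2^32
        System.out.println(a - b == a + (~b + 1)); // true
        // The clock example: in a modulo-12 system, adding 10 replaces subtracting 2
        System.out.println((8 + 10) % 12);         // 6
    }
}
```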
How to convert sign-magnitude to two's complement, and two's complement back to sign-magnitude, can be worked out by hand if you are interested; here we state the conclusions directly.
1.1.1 From sign-magnitude to two's complement
Positive numbers: the two's complement of a positive number is identical to its sign-magnitude form.
Negative numbers: the two's complement of a negative number is obtained by inverting all bits of the sign-magnitude form except the sign bit (0 becomes 1, 1 becomes 0; the sign bit stays 1) and then adding 1.
1.1.2 From two's complement to sign-magnitude
(1) If the sign bit of the two's complement is 0, the number is positive, and its sign-magnitude form is the two's complement itself.
(2) If the sign bit of the two's complement is 1, the number is negative; taking the two's complement of the given two's complement yields the sign-magnitude form, that is, invert every bit except the sign bit, then add 1.
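A small Java illustration of my own (not from the original article; the class name ComplementDemo is made up) to check the rules by machine: masking -11 to its low 8 bits yields exactly the pattern the rule derives, 1 0001011 -> invert value bits -> 1 1110100 -> add 1 -> 1 1110101.

```java
public class ComplementDemo {
    public static void main(String[] args) {
        int bits = -11 & 0xFF;                            // keep only the low 8 bits
        System.out.println(Integer.toBinaryString(bits)); // 11110101
        // Going back: invert, add 1, mask to 8 bits, and we recover the magnitude 11
        System.out.println((~bits + 1) & 0xFF);           // 11
    }
}
```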
1.2 Principles of the four operations (addition, subtraction, multiplication, division)
1.2.1 Addition
Addition is the simplest of the four operations. The addition we learn in primary school is decimal: the ones place plus the ones place, the tens place plus the tens place, and so on; each place's sum plus any carry gives the result. The binary world of the computer works the same way, except that 1 + 0 = 1, 1 + 1 = 0 with a carry, 0 + 0 = 0: it carries on every 2. If you have studied logical operations, you will notice that the per-bit sum without the carry is exactly the XOR operator ^, one of the computer's bit operations; the carry is then handled separately.
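Tracing this mechanism on a concrete pair of numbers (my own sketch, not from the article) shows how XOR plus a shifted carry converges, here for 5 + 3: the per-bit sum is a ^ b and the carry is (a & b) << 1, repeated until the carry is zero.

```java
public class AddTrace {
    public static void main(String[] args) {
        int a = 5, b = 3;             // 101 + 011
        while (b != 0) {
            int sum = a ^ b;          // per-bit sum, ignoring carries
            int carry = (a & b) << 1; // carries, shifted up one position
            a = sum;
            b = carry;
        }
        System.out.println(a);        // 8
    }
}
```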
1.2.2 Subtraction
Now that we have addition, subtraction is easy: a - b equals a + (-b), and this is exactly where the two's complement comes in. Inverting all bits of b and adding 1 yields the two's complement encoding of -b. Here we use the bitwise NOT operator: ~b.
1.2.3 Multiplication
Multiplication means adding the multiplicand to itself as many times as the multiplier says; frankly, it is repeated addition.
1.2.4 Division
The idea of division is to keep subtracting the divisor from the dividend until the dividend is less than the divisor; at that point the number of subtractions performed is the quotient we need, and what remains of the dividend is the remainder.
2. Code implementation
Once you know the basics, writing the code is simple. Note: the code here is only a plain implementation of the principles; interested readers can write better versions, or implement them in other languages.
1. Addition
/**
 * Implements a + b.
 *
 * @param a
 * @param b
 * @return the sum of a and b
 */
public static int add(int a, int b) {
    // initialize to a so that add(a, 0) returns a
    int sum = a;
    // the carry
    int carry;
    while (b != 0) {
        // per-bit sum, ignoring carries: exactly XOR
        sum = a ^ b;
        // positions where both bits are 1 produce a carry; shift left because a carry moves up one position
        carry = (a & b) << 1;
        a = sum;
        b = carry;
    }
    return sum;
}
2. Subtraction
/**
 * Implements a - b, which is essentially a + (-b).
 *
 * @param a
 * @param b
 * @return the difference of a and b
 */
public static int subtract(int a, int b) {
    // c = -b: the two's complement of b, i.e. invert all bits and add 1
    int c = add(~b, 1);
    return add(a, c);
}
3. Multiplication
/**
 * Implements a * b, i.e. repeated addition: first compute the product of the
 * absolute values, then fix the sign.
 *
 * @param a
 * @param b
 * @return the product of a and b
 */
public static int multiply(int a, int b) {
    // Take the absolute values of the multiplier and the multiplicand
    int multiplier = a < 0 ? add(~a, 1) : a;
    int multiplicand = b < 0 ? add(~b, 1) : b;
    // Compute the product of the absolute values
    int product = 0;
    int count = 0;
    while (count < multiplier) {
        product = add(product, multiplicand);
        count = add(count, 1);
    }
    // Fix the sign of the product: negative iff exactly one operand is negative
    if ((a ^ b) < 0) {
        product = add(~product, 1);
    }
    return product;
}
4. Division
/**
 * Computes a / b: keep subtracting the divisor from the dividend until the
 * dividend is less than the divisor; the number of subtractions is the
 * quotient, and what remains of the dividend is the remainder. Mind the sign.
 *
 * @param a
 * @param b
 * @return the quotient of a and b
 */
public static int divide(int a, int b) {
    if (b == 0) {
        // repeated subtraction of 0 would never terminate
        throw new ArithmeticException("/ by zero");
    }
    // Take the absolute values of the dividend and divisor
    int dividend = a < 0 ? add(~a, 1) : a;
    int divisor = b < 0 ? add(~b, 1) : b;
    // Repeatedly subtract the divisor from the dividend
    int remainder = dividend;
    int quotient = 0;
    while (remainder >= divisor) {
        remainder = subtract(remainder, divisor);
        quotient = add(quotient, 1);
    }
    // Fix the sign of the quotient
    if ((a ^ b) < 0) {
        quotient = add(~quotient, 1);
    }
    return quotient;
}
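As a quick sanity check (my own sketch, not part of the original article), the four methods can be collected into a single class, here hypothetically named BitMath, and exercised on a few values, including negative ones:

```java
public class BitMath {
    public static int add(int a, int b) {
        int sum = a;
        while (b != 0) {
            sum = a ^ b;              // per-bit sum without carries
            int carry = (a & b) << 1; // carries, shifted up one position
            a = sum;
            b = carry;
        }
        return sum;
    }

    public static int subtract(int a, int b) {
        return add(a, add(~b, 1)); // a + (-b)
    }

    public static int multiply(int a, int b) {
        int multiplier = a < 0 ? add(~a, 1) : a;
        int multiplicand = b < 0 ? add(~b, 1) : b;
        int product = 0;
        for (int count = 0; count < multiplier; count = add(count, 1)) {
            product = add(product, multiplicand);
        }
        return (a ^ b) < 0 ? add(~product, 1) : product;
    }

    public static int divide(int a, int b) {
        int dividend = a < 0 ? add(~a, 1) : a;
        int divisor = b < 0 ? add(~b, 1) : b;
        int quotient = 0;
        while (dividend >= divisor) {
            dividend = subtract(dividend, divisor);
            quotient = add(quotient, 1);
        }
        return (a ^ b) < 0 ? add(~quotient, 1) : quotient;
    }

    public static void main(String[] args) {
        System.out.println(add(19, 23));     // 42
        System.out.println(subtract(7, 12)); // -5
        System.out.println(multiply(-6, 7)); // -42
        System.out.println(divide(-17, 5));  // -3, truncating toward zero like Java's /
    }
}
```

Division by zero is not handled in this sketch, and negating Integer.MIN_VALUE overflows, so it assumes inputs away from the int boundaries.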
3. "Abstraction" and "encapsulation"
As the code above shows, once addition is implemented, subtraction, multiplication and division can all be completed on top of it. We can also extract a few helper methods, for example one that negates a number:
// Returns the negation of a (its two's complement)
static int negative(int a) {
    return add(~a, 1);
}
Thus, the subtraction code can be written as:
public static int subtract(int a, int b) {
return add(a, negative(b));
}
Isn't that much cleaner than the previous code? We "encapsulate" the computation of the two's complement into a function, then "abstract" that function as negation, so we can ignore what is inside and use the abstract method directly. It is just like how we encapsulated addition at the very beginning: the other operations simply call add() without delving into how addition is implemented. This is the beauty and charm of abstraction: it lets us focus on building more complex systems without worrying about low-level details.
As you may know, computer hardware consists of the controller, the arithmetic-logic unit, memory, and input and output devices, and its physical implementation evolved from relays, to vacuum tubes, to transistors. The complexity and computing power of a computer's internal circuits may be far beyond our imagination. Yet even the most complex things are built from the simplest components: small components form larger ones, and larger ones form larger ones still. The idea throughout is to use "abstraction" and "encapsulation" to reduce the complexity we have to perceive, at least in computing and software engineering; if you want to talk about Picasso's "abstract" art, that's another matter.
So, as you probably know, if current flows through a circuit we call it "1", which can also stand for "true"; if no current flows, we call it "0", which can also stand for "false". How do we represent and compute with these? There is an entire branch of mathematics that deals exclusively with "true" and "false" and has worked out all of their laws and operations: Boolean algebra. Boolean algebra differs from ordinary arithmetic in that it does not add and subtract; it operates logically, with three basic operations: NOT, AND, and OR.
NOT: the NOT operation has one input and one output. If the input is true, the output is false; if the input is false, the output is true. We can list its "truth table" and draw its circuit implementation.
AND: the AND operation has two inputs. Both inputs must be true for the output to be true; otherwise, the output is false.
OR: the OR operation also has two inputs. When at least one of the two inputs is true, the output is true; otherwise, the output is false.
Next, we can make an abstraction: the NOT gate, AND gate, and OR gate can each be drawn as a standard symbol. The transistors and wires of the circuit itself are still there, just represented in a different way.
We can then use these abstracted symbols to build larger components without wading back into the underlying complexity. For example, we can build another very important Boolean operation: XOR (exclusive OR), the bit operator ^ that we used for addition earlier. Its truth table outputs true exactly when the two inputs differ.
We can implement an XOR circuit out of these other logic gates, and then make one more abstraction, representing XOR with a symbol of its own.
That way, we can put XOR into the toolbox and use it next time to build other components, without caring how many gates XOR uses inside, how those gates are assembled from transistors, or how electrons flow through the semiconductors.
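One way to sketch this composition in code (my own illustration; the article's original circuit diagrams are not reproduced here): XOR can be wired purely from NOT, AND and OR as (a AND NOT b) OR (NOT a AND b), and its truth table checked.

```java
public class XorFromGates {
    static boolean not(boolean a)            { return !a; }
    static boolean and(boolean a, boolean b) { return a && b; }
    static boolean or(boolean a, boolean b)  { return a || b; }

    // XOR composed only from the three basic gates
    static boolean xor(boolean a, boolean b) {
        return or(and(a, not(b)), and(not(a), b));
    }

    public static void main(String[] args) {
        // the full truth table
        System.out.println(xor(false, false)); // false
        System.out.println(xor(false, true));  // true
        System.out.println(xor(true, false));  // true
        System.out.println(xor(true, true));   // false
    }
}
```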
Conclusion
This article first implemented the simple four operations without the symbols +, -, ×, ÷, to explain how the bottom layer of the computer completes the task of calculation. It then briefly introduced logical operations and their logic-gate implementations, to give you a first look at the computer's art of "abstraction". In short, on top of abstraction, processor designers think less in terms of transistors and more in terms of larger components such as logic gates; application developers, when building features, rarely think about how the logic is realized at the physical level, focusing instead on the business logic. Even today's most popular cloud computing technologies, such as Serverless and FaaS, largely benefit from this idea. Hopefully you find that enlightening.
Links
- Add, subtract, multiply and divide with basic bit operations: www.cnblogs.com/kiven-code/…
- Related code (Java version) : github.com/lq920320/al…
- Crash Course Computer Science: www.bilibili.com/video/BV1EW…
- Ruan Yifeng's XOR tutorial: mp.weixin.qq.com/s/dVOyUMaJR…