Outline of this paper:

It was all made clear in the last article! From that overview of the structure, variables, and types of Java programs, we know that Java has three basic families of primitive types: integer, floating point, and boolean, and that Java provides different computing capabilities for each. In this article, we will take a look at the various operations on Java primitive types.

Integer arithmetic

The first is the operation of integers.

Java provides a number of operators that operate on integer values.

Comparison operator

The first group is the comparison operators, which produce a boolean result. They include:

  • Numerical comparison operators: <, <=, > and >=.
    • Less than, less than or equal to, greater than, greater than or equal to
  • Numerical equality operators: == and !=.
    • Equal to, not equal to
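As a quick illustration, here is a minimal sketch of the comparison operators in action (the class name ComparisonDemo is ours, chosen for illustration):

```java
public class ComparisonDemo {
    public static void main(String[] args) {
        int a = 5, b = 7;
        // each comparison yields a boolean value
        System.out.println(a < b);   // true
        System.out.println(a >= b);  // false
        System.out.println(a == b);  // false
        System.out.println(a != b);  // true
    }
}
```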

Numeric operator

The second group is the numerical operators, which produce a result of type int or long. They include:

  • Unary plus and minus operators: + and -.
    • Positive and negative sign
  • Multiplicative operators: *, / and %.
    • Multiply, divide, take the modulus (remainder)
  • Additive operators: + and -.
    • Add and subtract
  • Increment operator: ++.
    • Adds one
  • Decrement operator: --.
    • Subtracts one
  • Signed and unsigned shift operators: <<, >> and >>>.
    • <<: shift left, filling the low bits with 0; positive and negative numbers are treated the same.
    • >>: signed shift right; the high bits are filled with 0 for a positive number and with 1 for a negative number.
    • >>>: unsigned shift right; the high bits are always filled with 0, regardless of sign.
  • Bitwise complement operator: ~.
  • Integer bitwise operators: &, ^ and |.
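A short sketch may make the shift and bitwise operators more concrete (the class name ShiftDemo is ours; the printed values follow from the two's complement representation discussed later in this article):

```java
public class ShiftDemo {
    public static void main(String[] args) {
        System.out.println(1 << 3);    // 8: low bits filled with 0
        System.out.println(-8 >> 1);   // -4: the sign bit is copied into the high bits
        System.out.println(-8 >>> 1);  // 2147483644: the high bits are filled with 0
        System.out.println(~0);        // -1: every bit inverted
        System.out.println(6 & 3);     // 2
        System.out.println(6 | 3);     // 7
        System.out.println(6 ^ 3);     // 5
    }
}
```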

Conversion operator

The third is the conversion operator.

Before we look at conversions, let’s take a look at the widening order of Java's numeric primitives: byte -> short -> int -> long -> float -> double (char also widens to int, but byte/short and char do not widen into each other).

How does Java convert a lower-precision type to a higher-precision one?

Implicit conversion

In this case there is no inherent loss of precision, so Java performs the conversion automatically; this is also known as implicit type conversion.

For example, can you guess the output of this code?

public class TypeConvert {

    public static void main(String[] args) {

        // the number 65 is the code of the uppercase letter A
        char charValue = 65;

        // print the initial char
        System.out.println("initCharValue=" + charValue);

        // add one
        charValue += 1;
        System.out.println("CharAddOneValue=" + charValue);

        // prints B; after the post-increment, charValue is C (67)
        System.out.println("CharAddOneValue=" + charValue++);

        // char + char is evaluated as int: 67 + 67 = 134
        System.out.println(charValue + charValue);

        // automatic widening to higher-precision types
        int intValue = charValue;
        System.out.println("intValue=" + intValue);

        long longValue = intValue;
        System.out.println("longValue=" + longValue);

        double doubleValue = intValue;
        System.out.println("doubleValue=" + doubleValue);
    }
}

Here is the output:

initCharValue=A
CharAddOneValue=B
CharAddOneValue=B
134
intValue=67
longValue=67
doubleValue=67.0

As you can see, the char participates in arithmetic as its character code. When byte, char, or short operands appear in an arithmetic expression, they are promoted to int; compound assignment (+=) and ++ are the exception, because they include an implicit cast back to the original type. When multiple types are mixed in one expression, the result takes the type with the highest precision. Integer types can widen to floating-point types, but no numeric type ever converts to boolean.

Java performs these widening conversions automatically, without our having to declare anything explicitly in the code.
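The promotion-to-int rule and the hidden cast in compound assignment can be seen in a small sketch (the class and variable names are ours):

```java
public class PromotionDemo {
    public static void main(String[] args) {
        byte x = 10, y = 20;

        // byte + byte is evaluated as int, so assigning back needs an explicit cast
        byte sum = (byte) (x + y);
        System.out.println(sum);   // 30

        // x = x + 5 would not compile without a cast, but += casts implicitly:
        // x += 5 means x = (byte) (x + 5)
        x += 5;
        System.out.println(x);     // 15
    }
}
```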

Explicit conversion

Going from high precision to low precision, on the other hand, requires a forced cast, also known as explicit conversion.

You can use the following code:

// high precision to low precision requires an explicit cast
int highIntValue = 129;
byte lowByteValue = (byte) highIntValue;

// the cast loses precision: this prints -127
System.out.println(lowByteValue);

You’ll notice that it actually prints -127 instead of 129. What’s going on here?

It turns out that Java loses precision when converting a high-precision type to a low-precision one. As for why precision is lost, and why a completely different value is printed, we need to clarify some basic computer knowledge.

When a computer stores Java numeric types, what form of data does it hold in memory?

This involves three ways of encoding signed binary numbers: sign-magnitude (rendered in this translation's sources as the "original code" or "source code"), one's complement (the "inverse code"), and two's complement (the "complement").

Sign-magnitude (the original code)

Sign-magnitude, the "original code", is the most direct encoding. It consists of the leftmost sign bit followed by the binary magnitude: a sign bit of 0 means a positive number and 1 a negative number, and the total width is determined by the machine word size. The number 6, for example, is 0000 0110 on an 8-bit machine. The nice thing about sign-magnitude is that it is simple and intuitive and represents numbers directly; the values printed by our programs are the decimal rendering of this representation.

But sign-magnitude also has a disadvantage: it cannot participate directly in arithmetic, which makes errors easy. For example, in mathematics 1 + (-1) = 0, but in sign-magnitude binary 0000 0001 + 1000 0001 = 1000 0010, which translated to decimal is -2, clearly not what we expect.

So one's complement was proposed.

One's complement (the inverse code)

In one's complement, the "inverse code", a positive number is encoded the same as in sign-magnitude, while a negative number keeps the leftmost sign bit and inverts every magnitude bit of the sign-magnitude form.

The number 6, for example, has the one's complement 0000 0110 on an 8-bit machine, while the one's complement of -6 is 1111 1001. More examples are shown in the table below, which lists 8-bit patterns with their unsigned, sign-magnitude, and one's complement interpretations.

Binary pattern   Unsigned value   Sign-magnitude value   One's complement value
0111 1111        127              127                    127
0111 1110        126              126                    126
0000 0010        2                2                      2
0000 0001        1                1                      1
0000 0000        0                0                      0
1111 1111        255              -127                   -0
1111 1110        254              -126                   -1
1111 1101        253              -125                   -2
1000 0001        129              -1                     -126
1000 0000        128              -0                     -127

(one's complement sample data table)

One's complement solves the miscalculation that sign-magnitude produces during subtraction, although the one's complement solution has a defect of its own. Let’s look at how one's complement handles subtraction.

Mathematical expressions:

1 - 1 = 1 + (-1) = 0
1 - 2 = 1 + (-2) = -1

One's complement expressions:

0000 0001 + 1111 1110 = 1111 1111 (-0)  // almost right
0000 0001 + 1111 1101 = 1111 1110 (-1)  // right

This shows that one's complement subtraction is correct in most cases, except that a result of 0 may carry a negative sign. A zero that can be negative is genuinely strange to understand; this is a natural defect of one's complement. As the (one's complement sample data table) above shows, one's complement covers -127 to -0 and 0 to 127, a total of 256 patterns, yet it distinguishes a positive zero from a negative zero, which is clearly illogical!

To solve this problem, two's complement appeared.

Two's complement (the complement)

In two's complement, the "complement", a positive number is again encoded the same as in sign-magnitude, while a negative number keeps the leftmost sign bit, inverts every magnitude bit, and then adds one.

Instead of the two representations of 0 that one's complement has, two's complement has exactly one representation of 0: the all-zero pattern itself.

In other words, the two's complement of a positive number equals its sign-magnitude form, and the two's complement of a negative number is its one's complement plus one.

As shown in the table below.

Binary pattern   Unsigned value   Sign-magnitude value   One's complement value   Two's complement value
0111 1111        127              127                    127                      127
0111 1110        126              126                    126                      126
0000 0010        2                2                      2                        2
0000 0001        1                1                      1                        1
0000 0000        0                0                      0                        0
1111 1111        255              -127                   -0                       -1
1111 1110        254              -126                   -1                       -2
1111 1101        253              -125                   -2                       -3
1000 0001        129              -1                     -126                     -127
1000 0000        128              -0                     -127                     -128

(two's complement sample data table)

This representation is very computer-friendly. Let's run subtraction again and see how two's complement handles it.

Mathematical expressions:

1 - 1 = 1 + (-1) = 0
1 - 2 = 1 + (-2) = -1

Two's complement expressions:

  0000 0001 (1)
+ 1111 1111 (-1)
----------------
1 0000 0000 (0)

  0000 0001 (1)
+ 1111 1110 (-2)
----------------
  1111 1111 (-1)

The first result, 1 0000 0000, may look wrong because it has more than eight bits, but once the ninth bit (the carry, counting from the right) is discarded, the result is 0000 0000 (0). The result is still 0, yet unlike the one's complement result there is no negative sign.

Comparing against the (two's complement sample data table) above, we can also see that the range of two's complement runs from -128 through 0 up to 127, a total of 256 numbers.

Two's complement is designed so that the sign bit participates in arithmetic together with the value bits. This simplifies the arithmetic rules, and at the same time turns subtraction into addition, which further simplifies the circuit design of the arithmetic unit in the computer.

Thanks to these advantages, two's complement has become the standard way computers store integer data. The values we see a Java program print are the result of the computer converting two's complement back into sign-magnitude for display, with one's complement as the intermediate step.

Sign-magnitude, one's complement, and two's complement are the three workhorse encodings of the computer field; together they support how data is stored and expressed in a computer. The relationship between them is as follows:

  1. All three are binary representations.
  2. The first bit is the sign bit: 1 represents a negative number, 0 represents a positive number, and the rest are value bits.
  3. All three encodings are identical for positive numbers.
  4. The one's complement of a negative number inverts the value bits of its sign-magnitude form, that is: one's complement = sign bit + inverted value bits.
  5. The two's complement of a negative number adds one to its one's complement, that is: two's complement = one's complement + 1.
  6. To convert a negative two's complement back to sign-magnitude, subtract one and then invert the value bits, that is: sign-magnitude = invert the value bits of (two's complement - 1).
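You can inspect these stored patterns directly from Java, since Integer.toBinaryString prints the 32-bit two's complement pattern of an int (with leading zeros omitted); a small sketch, with a class name of our choosing:

```java
public class ComplementDemo {
    public static void main(String[] args) {
        // a positive number: the stored pattern equals its plain binary form
        System.out.println(Integer.toBinaryString(6));
        // prints: 110

        // a negative number: the stored pattern is the two's complement
        System.out.println(Integer.toBinaryString(-6));
        // prints: 11111111111111111111111111111010
    }
}
```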

With the concepts of sign-magnitude, one's complement, and two's complement in mind, let’s return to the problem of precision loss and review the previous code:

// high precision to low precision requires an explicit cast
int highIntValue = 129;
byte lowByteValue = (byte) highIntValue;

// the cast loses precision: this prints -127
System.out.println(lowByteValue);

In the code above, an int is 32 bits and a byte is 8 bits. When Java casts the int down to a byte, it essentially truncates the int to its low 8 bits and stores them in the byte.

129 as a 32-bit int is 0000 0000 0000 0000 0000 0000 1000 0001, and this is the bit pattern the computer stores (for a positive number, the two's complement equals the sign-magnitude form).

Truncating the int to a byte leaves 1000 0001. The resulting data is still a two's complement pattern.

Following the formula for converting a negative two's complement back to sign-magnitude: two's complement (1000 0001) -> one's complement (1000 0000) -> sign-magnitude (1111 1111). So 1111 1111 is the sign-magnitude form of (byte) highIntValue.

In decimal, lowByteValue = -(64 + 32 + 16 + 8 + 4 + 2 + 1) = -127.

Did you have an epiphany? The computer’s strange behavior is actually traceable!
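The truncation can also be watched step by step in code; a small sketch (variable names are ours) that prints the stored pattern before and after the cast:

```java
public class TruncateDemo {
    public static void main(String[] args) {
        int highIntValue = 129;
        // the low 8 bits of 129 are 1000 0001
        System.out.println(Integer.toBinaryString(highIntValue));  // 10000001

        // the cast keeps only those 8 bits, reinterpreted as a signed byte
        byte lowByteValue = (byte) highIntValue;
        System.out.println(lowByteValue);                          // -127
    }
}
```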

String concatenation operator

The fourth is the string concatenation operator: +.

Given a String operand and an integer operand, the operator converts the integer operand to a String representing its decimal form and concatenates the two strings, producing a newly created String.

What does the following code output?

// define an int using a binary literal
int strAppendInt = 0b111;

System.out.println(strAppendInt);

// string concatenation
System.out.println("String concatenation operator test, originally defined as: 0b111, printed as: " + strAppendInt);

Yes, it prints 7, and concatenates 7 into the string:

7
String concatenation operator test, originally defined as: 0b111, printed as: 7

Floating point operation

So with integer arithmetic out of the way, let’s look at floating-point arithmetic.

Floating-point numbers are stored in a computer according to the IEEE 754 floating-point standard.

Compared with integer arithmetic, floating-point arithmetic supports only the numerical operations of addition, subtraction, multiplication and division, not bit operations. Floating-point numbers, however, cover a much larger range: even a 32-bit float spans a wider range of magnitudes than a 64-bit long! The drawback is that floating-point numbers sometimes cannot represent a value exactly.
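The trade-off between range and exactness can be demonstrated with a short sketch (the class name is ours): a float reaches magnitudes far beyond a long, yet cannot represent every integer above 2^24.

```java
public class FloatRangeDemo {
    public static void main(String[] args) {
        // a 32-bit float reaches magnitudes far beyond a 64-bit long
        System.out.println(Float.MAX_VALUE);   // 3.4028235E38
        System.out.println(Long.MAX_VALUE);    // 9223372036854775807 (about 9.2E18)

        // but it cannot represent every integer exactly: 2^24 + 1 rounds away
        float f = 16_777_217f;
        System.out.println(f);                 // 1.6777216E7, the +1 is lost
    }
}
```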

IEEE 754 specifies four ways to represent floating-point values: single precision (32 bits), double precision (64 bits), extended single precision (at least 43 bits, rarely used), and extended double precision (at least 79 bits, usually implemented as 80 bits).

Java often uses single and double precision, so we’ll discuss only those two floating-point formats.

Scientific notation

When it comes to floating-point numbers, you have to talk about scientific notation!

Scientific notation exists to express extremely large or extremely small numbers. A number like four hundred million billion could be written out digit by digit, but you would not finish writing it any time soon, and the result would be barely readable. Not scientific! Scientific notation was born to express such numbers simply and clearly.

Scientific notation consists of a sign, significant digits, and an exponent. In the real world the digit rules are decimal, 0 through 9, with an exponent based on 10. In the computer world the circuits can only be on or off, and that translates to binary with a base-2 exponent.

Floating-point representation

Decimal scientific notation requires the integer part of the significand to lie in the interval [1, 9], while in binary the integer part can only be 1. Since that leading 1 is certain information, the computer saves a bit by not storing it. A 32-bit single-precision float is laid out as a sign bit, an 8-bit exponent field, and a 23-bit fraction field holding the digits after the implicit leading 1 (the xxx in 1.xxx).

Let's introduce the structure of the floating-point format, which is divided into three parts.

Sign bit. The highest binary bit represents the sign of the floating-point number, with 0 representing a positive number and 1 a negative number.

Exponent field. The equivalent of the exponent in scientific notation. Eight bits to the right of the sign bit are allocated to store the exponent. The IEEE 754 standard stipulates that this field stores a biased form of the exponent (a shift code), rather than the exponent's sign-magnitude or two's complement.

The biased form is obtained by shifting the true value forward by a fixed offset on the number line. So to read the eight exponent bits, interpret them as an unsigned number and subtract the bias of 127 to get the actual exponent.

Mantissa. The equivalent of the significant digits in scientific notation. The rightmost 23 consecutive bits are allocated to store the significant digits. IEEE 754 specifies that the mantissa is stored as an unsigned magnitude, with the leading "1." of the normalized form omitted.

The normalized representation of commonly used floating point numbers is shown in the table:

Value    Binary representation                      Notes
-16      1100 0001 1000 0000 0000 0000 0000 0000    The first bit is the sign bit; 1 represents a negative number. The exponent is 131 - 127 = 4, i.e. 2^4 = 16, and the mantissa part is 1.0.
16.35    0100 0001 1000 0010 1100 1100 1100 1101    The first bit is the sign bit; 0 represents a positive number. The significand is 1.000 0010 1100 1100 1100 1101, about 1.02187502 in decimal; multiplied by 2^4 = 16 it gives 16.35000038. The value the computer stores may differ from the true value.
0.35     0011 1110 1011 0011 0011 0011 0011 0011    Note that 16.35 and 0.35 have completely different mantissas.
1.0      0011 1111 1000 0000 0000 0000 0000 0000    The exponent is 127 - 127 = 0, i.e. 2^0 = 1, and the mantissa part is 1.0.
0.9      0011 1111 0110 0110 0110 0110 0110 0110    The exponent is 126 - 127 = -1, i.e. 2^-1 = 0.5. The significand is 1.110 0110 0110 0110 0110 0110, about 1.79999995 in decimal; multiplied by 0.5 it gives 0.899999976. As you can see, 0.9 cannot be represented exactly in a finite number of binary bits.
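The sign, exponent, and mantissa fields can be extracted in Java via Float.floatToIntBits; a minimal sketch (variable names are ours) decoding -16:

```java
public class FloatBitsDemo {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(-16f);

        int sign = bits >>> 31;               // the top bit
        int exponent = (bits >>> 23) & 0xFF;  // the 8 biased exponent bits
        int mantissa = bits & 0x7FFFFF;       // the 23 fraction bits

        System.out.println(sign);             // 1, i.e. negative
        System.out.println(exponent - 127);   // 4, so the magnitude is 1.0 * 2^4 = 16
        System.out.println(mantissa);         // 0, i.e. the significand is exactly 1.0
    }
}
```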

Addition and subtraction

In mathematics, when adding or subtracting two decimals, the decimal points are first aligned and then each digit is added or subtracted. When adding or subtracting numbers in scientific notation, you align the decimal points by making the exponents the same, and then add or subtract the significands as normal. The specific steps are as follows:

  1. Zero check. If the exponent field and mantissa are all 0, the operand is zero, and a zero value can go directly into the result.
  2. Exponent alignment. Compare the exponents to determine whether the decimal points line up. The mantissa of the number with the smaller exponent is shifted right until its exponent matches the larger one.
  3. Mantissa addition. The mantissas are added bit by bit; a negative operand is first converted to two's complement.
  4. Normalization of the result. The computed result may not be in normalized form, in which case it must be normalized: shifting the mantissa right is right normalization, shifting it left is left normalization.
  5. Rounding of the result. During alignment or right normalization, bits shifted off the right end would simply be discarded, losing precision. To reduce the loss, the shifted-out bits are saved first as guard bits, and after normalization the result is rounded according to them.

Walking through 1.0 - 0.9

Do you know what the value of 1.0-0.9 is?

public static void main(String[] args) {

    System.out.println(1.0f - 0.9f);

}

The answer: 0.100000024

0.100000024

Let’s trace what the computer does.

The binary of 1.0 is: 0011 1111 1000 0000 0000 0000 0000 0000
The binary of -0.9 is: 1011 1111 0110 0110 0110 0110 0110 0110

Let’s disassemble these two floating point numbers separately:

Floating-point number   Sign   Exponent   Mantissa (actual value)         Mantissa as two's complement
1.0                     0      127        1000 0000 0000 0000 0000 0000   1000 0000 0000 0000 0000 0000
-0.9                    1      126        1110 0110 0110 0110 0110 0110   0001 1001 1001 1001 1001 1010

There is a hidden bit at the left end of the mantissa, so we restore the leading 1 as the highest mantissa bit; all subsequent calculations use these actual 24-bit mantissas.

First, align the exponents. The exponent of 1.0 is 127 and the exponent of -0.9 is 126. After comparing them, the two's complement mantissa of -0.9 must be shifted right by one so that its exponent also becomes 127, with the vacated high bit filled with 1 (sign extension for a negative value). The shifted result is 1000 1100 1100 1100 1100 1101.

Then sum the mantissas. The addition works on the two's complements, bit by bit; note that the sign bit participates in the operation as well.

Sign bit   Mantissa bits
0          1000 0000 0000 0000 0000 0000
1          1000 1100 1100 1100 1100 1101
-----------------------------------------
0          0000 1100 1100 1100 1100 1101

The left column is the sign bit, which comes out as 0; the mantissa bits come out as 0000 1100 1100 1100 1100 1101.

Then normalize. According to the specification the highest mantissa bit must be 1, so we shift the result four bits to the left and subtract four from the exponent. The exponent after the shift is 123 (0111 1011 in binary) and the mantissa is 1100 1100 1100 1100 1101 0000. Hiding the highest mantissa bit again leaves 100 1100 1100 1100 1101 0000.

The final result has sign 0, exponent 0111 1011, and mantissa 100 1100 1100 1100 1101 0000. The combination of the three parts is the float value of 1.0 - 0.9, which is 0.100000024 in decimal.
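We can check the hand computation against Java itself: assembling sign 0, exponent 0111 1011, and mantissa 100 1100 1100 1100 1101 0000 into one 32-bit pattern and asking Float.intBitsToFloat to decode it should match the subtraction. A small verification sketch (the class name is ours):

```java
public class VerifyDemo {
    public static void main(String[] args) {
        // sign | exponent | mantissa, exactly as derived above
        int bits = 0b0_01111011_10011001100110011010000;
        System.out.println(Float.intBitsToFloat(bits));  // 0.100000024
        System.out.println(1.0f - 0.9f);                 // 0.100000024
    }
}
```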

To help you understand and remember the steps above, the snail drew a diagram as a memory aid.

Boolean operation

With floating-point operations out of the way, let’s look at the final category: boolean operations. There are two groups here, the logical operators and the conditional operator.

Logical operator

The logical operators are &, |, !, ^, && and ||: and, or, not, xor, short-circuit and, and short-circuit or. Their inputs are boolean, and their output is boolean.
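The difference between & and the short-circuit && is easiest to see when the right-hand side has a side effect; a small sketch (the class and method names are ours):

```java
public class ShortCircuitDemo {
    static int calls = 0;

    static boolean touch() {
        calls++;
        return true;
    }

    public static void main(String[] args) {
        // && skips the right side when the left side is already false
        boolean a = false && touch();
        System.out.println(calls);   // 0: touch() never ran

        // & always evaluates both sides
        boolean b = false & touch();
        System.out.println(calls);   // 1: touch() ran anyway
    }
}
```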

Conditional operator

Then there is the conditional operator, which looks like this: type identifier = boolean-expression ? true-result : false-result. This is the well-known ternary expression; its three parts are the boolean expression, the result when the boolean is true, and the result when it is false.

For example, with int b = a > 10 ? 10 : a, when a is 99 the expression returns 10, and when a is 6 it returns a itself, which is 6.

summary

This article introduced the three major categories of operations on Java's basic types: integer operations, floating-point operations, and boolean operations. In the course of explaining the various operations, it also introduced some basic computer knowledge, such as sign-magnitude, one's complement, and two's complement, and illustrated some problems you may not notice in daily work. For example, 1.0 minus 0.9 is not a perfect 0.1 in the computer's world. In fact, the world of floating-point numbers holds many points that are easy to overlook or even use incorrectly: using == to determine whether two floating-point numbers are equal, for example, will make a program fail. Limited by space, and by the snail's own depth of understanding, these topics are not explored further here.
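As a closing sketch of the == pitfall just mentioned (the class name is ours), compare the difference against a tolerance instead of expecting exact equality:

```java
public class FloatEqualsDemo {
    public static void main(String[] args) {
        float diff = 1.0f - 0.9f;

        // exact comparison fails: diff is 0.100000024, not 0.1
        System.out.println(diff == 0.1f);                  // false

        // compare within a tolerance instead
        System.out.println(Math.abs(diff - 0.1f) < 1e-6);  // true
    }
}
```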

This article took more than a week of on-and-off writing. Only when I truly wanted to share these seemingly simple operators did I realize how little I knew, so I shared while learning. The more you know, the more you realize you don’t know. But that is also a good thing: only when you know what your understanding lacks can you make it up and see a wider world. In terms of the cognitive funnel, it is already at the second level and needs further improvement.

Writing is not easy. Readers are welcome to like and share. Thank you!