These notes summarize what I've learned from others. They're fairly shallow and may contain mistakes; corrections are welcome.
There are eight primitive types
- Integer types: byte/short/int/long occupy 1/2/4/8 bytes
- Floating-point types: float (single precision) and double (double precision); size 4/8 bytes, sign bits 1/1, exponent bits 8/11, mantissa bits 23/52, about 7/16 significant decimal digits
- Character type: char occupies 2 bytes, is unsigned, and has a maximum value of 65535
- Boolean type: boolean, with values true/false
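To make the numbers above concrete, here is a minimal sketch (the class name is my own) that prints each type's size and maximum value using the standard wrapper-class constants such as Byte.SIZE and Character.MAX_VALUE:

```java
// Prints the size in bytes and the maximum value of each primitive type
// using the wrapper classes from java.lang (no imports needed).
public class PrimitiveSizes {
    public static void main(String[] args) {
        System.out.println("byte:    " + Byte.SIZE / 8      + " byte(s), max " + Byte.MAX_VALUE);
        System.out.println("short:   " + Short.SIZE / 8     + " bytes,   max " + Short.MAX_VALUE);
        System.out.println("int:     " + Integer.SIZE / 8   + " bytes,   max " + Integer.MAX_VALUE);
        System.out.println("long:    " + Long.SIZE / 8      + " bytes,   max " + Long.MAX_VALUE);
        System.out.println("float:   " + Float.SIZE / 8     + " bytes,   max " + Float.MAX_VALUE);
        System.out.println("double:  " + Double.SIZE / 8    + " bytes,   max " + Double.MAX_VALUE);
        // char is unsigned; cast to int to see 65535 instead of a character
        System.out.println("char:    " + Character.SIZE / 8 + " bytes,   max " + (int) Character.MAX_VALUE);
        // boolean only holds true/false; the spec does not define its size
        System.out.println("boolean: true/false");
    }
}
```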
Conversion and arithmetic rules
- Converting from a larger type to a smaller type requires an explicit cast; converting from a smaller type to a larger type happens automatically. When autoboxing, Integer values in [-128, 127] are cached (see the sketch after this list)
- Integer literals default to int
- Decimal (floating-point) literals default to double
- In a + b, both operands are converted to the larger of the two types
- Mixing integer and floating-point operands converts the integer to floating point
- Constant literals are implicitly narrowed, so no cast is required, e.g. byte a = 10; however, the literal must be within byte's range
- b += 1; automatically casts the result back to b's type
- short, byte, and char operands are promoted to int before arithmetic, so the result must be received by at least an int (or a floating-point type)
- Assigning an int variable to a char requires a cast
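The following sketch illustrates the rules above; the class and variable names are my own, and the commented-out lines are ones that would not compile:

```java
public class ConversionRules {
    public static void main(String[] args) {
        byte a = 10;               // constant literal in range: implicit narrowing, no cast
        // byte bad = 128;         // compile error: 128 is outside byte's range [-128, 127]

        long wide = a;             // small -> large: automatic widening
        byte narrow = (byte) wide; // large -> small: explicit cast required

        double d = 3 + 0.5;        // int + double is computed as double (3.5)

        byte b = 1;
        b += 1;                    // compiles: equivalent to b = (byte) (b + 1)
        // b = b + 1;              // compile error: b + 1 is promoted to int

        short s1 = 1, s2 = 2;
        int sum = s1 + s2;         // short operands are promoted to int before adding

        int code = 65;
        char fromVar = (char) code; // int variable -> char: cast required
        char fromLit = 65;          // constant literal in char's range: no cast needed

        Integer x = 127, y = 127;
        Integer p = 128, q = 128;
        System.out.println(x == y); // true: boxed values in [-128, 127] are cached
        System.out.println(p == q); // false: 128 is boxed to two distinct objects
    }
}
```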
Bitwise operations (binary operations)
- AND &: a result bit is 1 only if both input bits are 1, otherwise 0
- OR |: a result bit is 0 only if both input bits are 0, otherwise 1
- XOR (addition without carry) ^: a result bit is 0 if the input bits are the same, 1 if they differ
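A short sketch showing each operator on two sample values (binary literals require Java 7+; the class name is my own):

```java
public class BitwiseOps {
    public static void main(String[] args) {
        int a = 0b1100; // 12
        int b = 0b1010; // 10
        System.out.println(Integer.toBinaryString(a & b)); // 1000 -> 1 only where both bits are 1
        System.out.println(Integer.toBinaryString(a | b)); // 1110 -> 0 only where both bits are 0
        System.out.println(Integer.toBinaryString(a ^ b)); // 110  -> 1 where the bits differ
    }
}
```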
Which of the following lines cause a compile error?
```java
short a = 128;
byte b = 128;
short c = a + b;
short d = a += b;
double e = 10;
int f = e / 10;
double g = a + b;
char h = a;
```
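For reference (treating each declaration as available to later lines), the lines that fail to compile are:

- byte b = 128; because 128 is outside byte's range [-128, 127]
- short c = a + b; because a + b is promoted to int, so assigning it to a short needs a cast
- int f = e / 10; because e / 10 is a double, and narrowing to int requires an explicit cast
- char h = a; because short and char are both 2 bytes, but short is signed and char is unsigned, so a cast is required

The others compile: short a = 128; (a constant within short's range), short d = a += b; (compound assignment includes an implicit cast back to short), and double e = 10; and double g = a + b; (automatic widening).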