This article is based on Wang Dao's "Computer Organization Principles" and discusses the essential computer hardware components (input and output devices, CPU, memory, etc.), as a knowledge reserve for learning other computer-related subjects.

1 Computer system overview

1.1 History of computer development

Computers have gone through four generations: the first generation of vacuum tubes, the second generation of transistors, the third generation of small- and medium-scale integrated circuits, and the current fourth generation of VLSI (very-large-scale integrated circuits).

Computer classification

  • Analog computer
  • Digital computer
    • Special purpose computer
    • General purpose computer
      • Supercomputer
      • Mainframe
      • Medium-sized computer
      • Minicomputer
      • Microcomputer
      • Single-chip microcomputer

1.2 Computer system hierarchy

1.2.1 Computer system composition

A computer system is composed of a hardware system and a software system. Hardware is the tangible physical equipment, while software refers to the programs running on the hardware together with the related data and documentation. For a given function of the computer system, if it can be realized by either hardware or software, the hardware implementation is more efficient and the software implementation is more flexible.

1.2.2 Basic composition of computer hardware

  1. The early Von Neumann machine

Von Neumann's ideas of the stored program and binary representation form the basic structure of modern computers.

The stored-program concept means that instructions are loaded in coded form into the computer's main memory in advance, and are then executed in sequence until the program ends.

Computers based on this concept are collectively called von Neumann machines. They consist of the arithmetic unit, memory, controller, input devices and output devices, with the arithmetic unit as the center. In the organization of early computers, before the advent of the microprocessor, the arithmetic unit and the controller were separate and memory capacity was very small, so the arithmetic unit was the center: input and output devices exchanged data with memory through the arithmetic unit.

Later, as the amount of data processed by computers grew and the speed gap between I/O devices and the CPU widened, modern computers evolved into a memory-centered structure: data transfer bypasses the arithmetic unit and moves directly between I/O devices and memory. The functional components of a computer are described below following von Neumann's five parts.

  • Input devices

Used to enter programs and data into the computer in a form of information that the machine can recognize and accept.

  • Output devices

Used to output the results of computer processing in a form acceptable to people, or in the form of information required by other systems.

  • Memory

Used to store programs and data. Memory is divided into main storage (main memory) and auxiliary storage. Main memory is the memory the CPU can access directly; information in auxiliary storage must first be transferred into main memory before the CPU can access it. Main memory consists of a number of storage units, each containing a number of storage elements, and each storage element stores one bit of binary code. Thus each storage unit stores a string of binary code, whose number of bits is called the storage word length. Main memory stores binary information and is accessed by address: the memory address register (MAR) holds the address to be accessed and is used to locate the corresponding storage unit in the storage body. The number of MAR bits corresponds to the number of storage units; for example, a 10-bit MAR addresses 2^10 storage units. The memory data register (MDR) temporarily holds the information to be read from or written to the storage body. The timing control logic generates the various timing signals needed for memory operations.

That is, the number of storage units × the number of bits per storage unit equals the maximum capacity of main memory.

Although the MDR and MAR are logically part of memory, they are physically placed in the CPU, as is the cache mentioned below.

  • Arithmetic unit

The arithmetic unit is the executive part of the computer. It performs arithmetic operations (such as addition, subtraction, multiplication and division) and logical operations (such as AND, OR, NOT, XOR, comparison and shifts). Its core is the arithmetic logic unit (ALU), supported by the program status word register (PSW), the accumulator (ACC), the multiplier-quotient register (MQ), the operand register (X) and other general-purpose registers.

  • The controller

The controller directs the work of every component. It consists of the program counter (PC), the instruction register (IR) and the control unit (CU). The PC holds the address of the next instruction to be executed and can automatically increment by 1 to form the next instruction address, which is sent to the MAR of main memory. The IR holds the current instruction; the opcode (OP) of the instruction is sent to the CU, which analyzes it and issues the corresponding sequence of control signals, while the address code (AD) of the instruction is sent to the MAR to fetch the operands.

The arithmetic unit and the controller are usually integrated together as the CPU. The CPU and main memory constitute the host, and the other hardware (including external memory and I/O devices) is called peripherals. The CPU and main memory are connected by a group of buses, including:

  • The address bus: the address in the MAR is placed on the address bus to select the main-memory storage unit to be read or written
  • The control bus: includes read/write signal lines that indicate whether data is being read from main memory into the MDR or written from the MDR into main memory
  • The data bus: carries the data exchanged between the MDR and main memory

1.2.3 Classification of computer software

  1. System software and application software

By function, software is divided into system software and application software. The former guarantees the efficient and correct operation of the computer system, for example the operating system; the latter consists of programs that users install to solve specific application problems.

  1. Three levels of language

1) Machine language, also known as binary language: the programmer writes each instruction directly in binary code, which the computer can recognize and execute directly.
2) Assembly language: English words or abbreviations replace the binary instructions; the program must be processed by an assembler before it can run on the computer.
3) High-level language: designed for the convenience of programmers; it must be compiled (or interpreted) before it can run on the computer.

1.2.4 Multi-level hierarchical structure of computer system

A computer is a complex combination of software and hardware. Similar to the layering of computer networks, the function of the whole computer system is realized in layers (modules); the designers and users at each software or hardware level only need to care about that specific level, not how the lower levels work.

Between layers, the lower layer is the foundation of the upper layer, and the upper layer is an extension of the lower layer. The subject of computer organization principles mainly discusses the bottom two layers. A computer without any software is usually called a bare machine, and the upper three layers are called virtual machines, that is, machines implemented by software.

1.3 Computer performance indicators

  • The machine word length is the number of binary bits that can be processed in one integer operation
  • Main memory capacity is the maximum amount of information main memory can store, i.e. the number of storage units × the number of bits per storage unit. For example, with a 16-bit MAR and a 32-bit MDR the capacity is 2^16 × 32 bits. For a byte-addressable 32-bit address space the maximum is 2^32 × 8 bits = 4 GB (see the sketch below)
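A minimal sketch of the capacity calculation above, assuming the example widths from the text (a 16-bit MAR and a 32-bit MDR); the variable names are just illustrative:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int mar_bits = 16;   /* number of addressable storage units = 2^16 */
    int mdr_bits = 32;   /* storage word length in bits */

    uint64_t units = 1ULL << mar_bits;           /* 2^16 storage units */
    uint64_t capacity_bits = units * mdr_bits;   /* 2^16 x 32 bits */
    printf("capacity = %llu bits = %llu bytes\n",
           (unsigned long long)capacity_bits,
           (unsigned long long)(capacity_bits / 8));

    /* Byte-addressable 32-bit address space: 2^32 x 8 bits = 4 GiB. */
    uint64_t bytes_32 = 1ULL << 32;
    printf("32-bit byte-addressable space = %llu bytes\n",
           (unsigned long long)bytes_32);
    return 0;
}
```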

2 Data representation and operation

2.1 Number system and coding

2.1.1 Carry count values and mutual conversion

  1. Carry counting

Carry counting is a method of counting in which a carry is generated into the next higher digit when a digit reaches the base, for example base 10. Decimal is the most common in daily life; binary, octal and hexadecimal are often used in computers. All information inside a computer system is represented in binary, because its two states are easy to represent physically, convenient to encode, and convenient for logical and arithmetic operations. One octal digit corresponds to three binary digits, and one hexadecimal digit to four.

  1. Conversion between different bases

1) Interconversion between binary and octal or hexadecimal

To convert binary to octal or hexadecimal, start from the radix point: group the integer part from right to left and the fractional part from left to right, three bits per group for octal or four bits per group for hexadecimal, padding incomplete groups at the outer ends with 0; then write each group as one octal or hexadecimal digit. For example, convert 1111000010.01101 to octal and hexadecimal:

The corresponding octal number is 1702.32



The corresponding hexadecimal number is 3C2.68

Conversely, each octal or hexadecimal digit is expanded into a group of three or four binary digits, respectively.

To convert another base to base 10, multiply each digit by its weight and add the products; this method is called expansion and addition by weight. The weight of the digit immediately to the left of the radix point is the base to the power 0; the exponent increases by 1 for each position to the left and decreases by 1 for each position to the right (sketched below). For example, converting the binary number 11011.1 to decimal: 11011.1 = 1×2^4 + 1×2^3 + 0×2^2 + 1×2^1 + 1×2^0 + 1×2^-1 = 27.5
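A small sketch of expansion and addition by weight; `bin_to_decimal` is a hypothetical helper name, not something from the text:

```c
#include <stdio.h>
#include <string.h>

/* Convert a binary string such as "11011.1" to its decimal value
   by multiplying each digit by its positional weight and summing. */
double bin_to_decimal(const char *s) {
    const char *dot = strchr(s, '.');
    size_t int_len = dot ? (size_t)(dot - s) : strlen(s);
    double value = 0.0, weight = 1.0;

    /* Integer part: weights 2^0, 2^1, ... moving left from the point. */
    for (size_t i = int_len; i > 0; i--, weight *= 2.0)
        value += (s[i - 1] - '0') * weight;

    /* Fractional part: weights 2^-1, 2^-2, ... moving right. */
    if (dot) {
        weight = 0.5;
        for (const char *p = dot + 1; *p; p++, weight /= 2.0)
            value += (*p - '0') * weight;
    }
    return value;
}

int main(void) {
    printf("%.4f\n", bin_to_decimal("11011.1"));  /* prints 27.5000 */
    return 0;
}
```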

To convert decimal to another base, handle the integer and fractional parts separately: the integer part uses repeated division by the base keeping the remainders, and the fractional part uses repeated multiplication by the base keeping the integer parts; finally join the two parts together. For example, convert 123.6875 to binary:

For the integer part, keep dividing by 2 until the quotient is 0; this can be understood as the inverse of expansion and addition by weight.



Reading the remainders from bottom to top, 123 is 1111011.

For the fractional part, keep multiplying by 2 until the fractional part becomes 0 (or the required precision is reached).



Reading the integer parts from top to bottom gives 0.6875 = 0.1011.

Finally, 123.6875 = 1111011.1011.

Note that integers always have an exact representation in every base, but fractions do not: for example, the decimal fraction 0.3 never reaches a fractional part of 0 no matter how many times it is multiplied by 2, so it cannot be represented exactly in binary (a sketch of the whole procedure follows).
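A minimal sketch of the divide-by-base / multiply-by-base procedure described above, using the example value 123.6875 from the text; the iteration cap shows why a value such as 0.3 never terminates:

```c
#include <stdio.h>

int main(void) {
    /* Integer part: repeatedly divide by 2 and collect remainders (read bottom-up). */
    int n = 123;
    char int_bits[33];
    int len = 0;
    while (n > 0) {
        int_bits[len++] = '0' + (n % 2);
        n /= 2;
    }
    printf("123 = ");
    for (int i = len - 1; i >= 0; i--)   /* reverse: remainders read bottom-up */
        putchar(int_bits[i]);

    /* Fractional part: repeatedly multiply by 2 and collect integer parts (read top-down). */
    double f = 0.6875;
    printf(".");
    for (int i = 0; i < 20 && f > 0.0; i++) {   /* cap iterations: 0.3 would never terminate */
        f *= 2.0;
        int bit = (int)f;
        putchar('0' + bit);
        f -= bit;
    }
    printf("\n");   /* prints 123 = 1111011.1011 */
    return 0;
}
```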

2.1.2 Truth value and number of machines

We call a number written with a positive or negative sign the truth value. In the computer's actual representation, both the sign and the value are expressed in binary, with 0 standing for positive and 1 for negative; a number whose sign has been digitized in this way is called a machine number.

2.1.3 BCD

Binary-coded decimal represents each decimal digit with four binary bits, allowing quick conversion between a decimal value and its binary representation. Since four binary bits can represent 16 values but only 10 are needed, there is redundancy. Several BCD encodings are common (a short sketch follows the list):

  • 8421 code: the weights of the four bits from left to right are 8, 4, 2 and 1
  • 2421 code: the weights of the four bits are 2, 4, 2 and 1; its characteristic is that digits of 5 and above have a highest bit of 1, and digits below 5 have a highest bit of 0
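A short sketch of packing a decimal value into 8421 BCD, one 4-bit group per decimal digit; the function name `to_bcd8421` is made up for illustration:

```c
#include <stdio.h>
#include <stdint.h>

/* Pack a decimal value into 8421 BCD: each decimal digit occupies 4 bits. */
uint32_t to_bcd8421(unsigned value) {
    uint32_t bcd = 0;
    int shift = 0;
    while (value > 0) {
        bcd |= (uint32_t)(value % 10) << shift;  /* low digit into the next 4-bit group */
        value /= 10;
        shift += 4;
    }
    return bcd;
}

int main(void) {
    /* 85 -> 1000 0101 in 8421 BCD, printed as hex 0x85 for readability. */
    printf("0x%X\n", to_bcd8421(85));
    return 0;
}
```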

2.1.4 Characters and Strings

Not only numbers but all characters are represented in binary. A mapping between binary codes and a character set is called an encoding scheme. A common character encoding is ASCII, which uses seven binary bits to represent 128 common characters; for example, 0100 0001 represents the character A.

For other encodings, refer to material such as Unicode.

2.1.5 Check codes

A check code is a data code that can automatically detect or even correct errors, also called an error detection and correction code. Errors are detected and corrected through some redundancy, and check codes can be used for error control in network transmission.

2.2 Representation and operation of machine number

Numbers in a computer are classified as fixed-point or floating-point according to whether the position of the radix point is fixed. Fixed-point numbers represent pure integers or pure fractions (integer part 0) with an agreed radix-point position. Floating-point numbers change the position of the radix point dynamically, similar to scientific notation.

2.2.1 Fixed-point numbers

Representation

Fixed-point numbers are machine numbers in which the position of the radix point does not change. The radix point is not stored explicitly but fixed by convention, usually in one of two places: before the highest value bit, which gives a fixed-point fraction,



or after the lowest bit, which gives a fixed-point integer.

Fixed-point numbers are divided into unsigned and signed numbers according to whether they have a sign bit. All bits of an unsigned number represent the value; for example, an 8-bit unsigned integer represents the range 0 to 255.

A signed fixed-point number generally has four representations: original code (sign-magnitude), complement code (two's complement), inverse code (one's complement) and shift code (excess code). Here we only discuss fixed-point integers.

  • The original code

The original code is the binary representation discussed above with the highest bit used as the sign bit; note that 0 has two representations, +0 and -0.

  • Complement code

The complement code is designed to solve the problem of subtraction in computers (see elsewhere for why computers use the complement rather than the original or inverse code). Subtracting a number is expressed as adding its negative, and the complement of a negative number is the value that, added to the corresponding positive number, gives 0 once the carry out of the highest bit is discarded. For example, for 00000010 + (-00000001) in 8 bits we need x such that 00000001 + x = 1 0000 0000, so x = 11111111, which is the complement of -00000001. In practice the complement of a positive number is its original code itself, and the complement of a negative number is obtained by keeping the sign bit, inverting the value bits of the original code, and adding 1 (see the sketch after this list).

  • Inverse code (radix-minus-one complement)

The inverse code of a positive number is the original code itself, while the inverse code of a negative number keeps the sign bit of the original code unchanged and inverts the value bits.

  • Shift code

The shift code is commonly used to represent the order code (exponent) of floating-point numbers. It adds a constant bias to the true value x; for an n-bit code the bias is usually 2^(n-1). The shift code is 0 for the minimum truth value.
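A hedged sketch (not from the original text) tying two of these codes together: forming the 8-bit complement of a negative number by inverting the value bits and adding 1, and computing the shift code of a 4-bit true value by adding the bias 2^(4-1) = 8:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Complement code: 2 - 1 computed as 2 + (-1). The complement of -1 is
       obtained by inverting 0000 0001 and adding 1 -> 1111 1111. */
    uint8_t two     = 0x02;
    uint8_t neg_one = (uint8_t)(~0x01u + 1u);      /* 1111 1111 */
    uint8_t result  = (uint8_t)(two + neg_one);    /* carry out of bit 7 is discarded */
    printf("2 - 1 = %u\n", result);                /* prints 1 */

    /* Shift code: for a 4-bit code the bias is 2^(4-1) = 8, so code = x + 8.
       The minimum truth value -8 therefore maps to code 0000. */
    for (int x = -8; x <= 7; x += 5) {
        int code = x + 8;
        printf("x = %2d  shift code = %d%d%d%d\n",
               x, (code >> 3) & 1, (code >> 2) & 1, (code >> 1) & 1, code & 1);
    }
    return 0;
}
```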

Operations

We only discuss integers here.

  1. Shift operations

Shift operations are divided into arithmetic shifts and logical shifts: shifts of signed numbers are arithmetic shifts, and shifts of unsigned numbers are logical shifts.

In an arithmetic shift the sign bit stays unchanged and only the value bits are shifted. For positive numbers the original, inverse and complement codes are identical, so the behaviour is the same for all three: a left shift is equivalent to ×2, a right shift to ÷2, vacated positions are filled with 0 and shifted-out bits are discarded. For negative numbers the shifted-out bits are also discarded, but how the vacated positions are filled depends on the representation:

  • The original code is treated like a positive number: vacated value bits are filled with 0
  • The value bits of the inverse code are the bitwise opposite of the original code, so vacated positions are filled with 1
  • The value bits of the complement code equal the inverse code plus 1, so from the lowest bit up to and including the first 1 they agree with the original code, and to the left of that first 1 they agree with the inverse code. Therefore a left shift fills with 0 and a right shift fills with 1

In a logical shift, shifted-out bits are discarded and vacated positions are filled with 0 (see the sketch below).
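A minimal sketch contrasting the two kinds of shift, assuming a typical compiler: in C, right-shifting a negative signed value is implementation-defined, but mainstream compilers perform an arithmetic shift, while unsigned values always get a logical shift:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t  s = -8;      /* complement code 1111 1000 */
    uint8_t u = 0xF8;    /* same bit pattern, treated as unsigned */

    /* Arithmetic right shift (typical for signed types): vacated high bits
       are filled with the sign bit 1 -> 1111 1100 = -4. */
    printf("arithmetic right: %d\n", s >> 1);

    /* Logical right shift on the unsigned value: vacated high bit filled
       with 0 -> 0111 1100 = 124. */
    printf("logical right:    %u\n", u >> 1);

    /* Left shift fills with 0 in both cases: -8 << 1 == -16, i.e. x2. */
    printf("left:             %d\n", (int8_t)(s << 1));
    return 0;
}
```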

  1. Original code addition and subtraction

In the following operations, bits that overflow beyond the word length are discarded.

For addition, the sign bits are examined first. If the signs are the same, the value parts are added and the sign stays the same. If the signs differ, the smaller absolute value is subtracted from the larger, and the sign is taken from the number with the larger absolute value. Taking 8-bit binary as an example:

```
// Same sign: add the value parts, keep the sign
// 3 + 2 = 5:  0000 0011 + 0000 0010 = 0000 0101
// Different signs, the negative number has the smaller absolute value:
// 3 + (-2) = 1:  11 - 10 = 01, attach the sign bit => 0000 0001
// Different signs, the negative number has the larger absolute value:
// (-3) + 2 = -1:  11 - 10 = 01, attach the sign bit => 1000 0001
```

For subtraction, negate the subtrahend first and then add it to the minuend.

```
// Turn subtraction into addition of the negated subtrahend
// 3 - 2 = 1  =>  3 + (-2) = 1
```
  1. Complement addition and subtraction

This is the addition and subtraction method actually used in computers. Sign bits and value bits are calculated together, and any bit that overflows beyond the word length is discarded. The complement of the sum of two numbers equals the sum of their complements, and the complement of a difference equals the complement of the minuend plus the complement of the negated subtrahend.

```
// Two positive numbers: [2+3] complement = 5
//   0000 0010 + 0000 0011 = 0000 0101
// One positive and one negative: 2 + (-3) = -1
//   [2]  complement = 0000 0010
//   [-3] complement = 1111 1101 (original code 1000 0011, invert value bits, add 1)
//   0000 0010 + 1111 1101 = 1111 1111, which is the complement of -1
//   (check: -1 has original code 1000 0001; invert value bits and add 1 -> 1111 1111)
// The complement of the difference 2 - 3 is computed as 2 + (-3) using the same rule
```
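A small sketch verifying the example above in C, assuming 8-bit registers; the hardware adds the complement bit patterns and simply drops the carry out of the top bit:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 8-bit complement addition: sign and value bits are added together,
       and any carry out of the top bit is discarded. */
    uint8_t a = 0x02;                 /* [2]  = 0000 0010 */
    uint8_t b = 0xFD;                 /* [-3] = 1111 1101 */
    uint8_t sum = (uint8_t)(a + b);   /* 1111 1111 */
    printf("%d\n", (int8_t)sum);      /* reinterpreted as signed: prints -1 */
    return 0;
}
```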
  1. Sign extension

Sign extension widens a number, for example from 8 bits to 16 bits. For positive numbers, 0s are simply added in front; for negative numbers, the representations must be treated separately (see the sketch after this list):

  • Original code: add 0s in front of the value bits, the sign bit stays in the highest position
  • Inverse code: fill the new bits with 1
  • Complement code: add 1s in front, i.e. copy the sign bit
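A minimal sketch of complement-code sign extension, which is exactly what C does when widening a signed integer type:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t  neg  = -3;                /* 8-bit complement: 1111 1101 */
    int16_t wide = neg;               /* sign extension pads the new high bits with 1 */
    printf("0x%04X = %d\n", (uint16_t)wide, wide);          /* 0xFFFD = -3 */

    int8_t  pos      = 3;             /* 0000 0011 */
    int16_t wide_pos = pos;           /* positive numbers are padded with 0 */
    printf("0x%04X = %d\n", (uint16_t)wide_pos, wide_pos);  /* 0x0003 = 3 */
    return 0;
}
```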
Storage and arrangement of data
  1. Big-endian and little-endian storage

When a value occupies multiple bytes, we must consider how the bytes of the value are arranged in memory (the bit order inside each byte is fixed). For example, the hexadecimal number 123456 is represented by three bytes. If it is stored in memory in the order 12, 34, 56, i.e. the most significant byte at the lowest address, this is called big-endian storage; if the order is 56, 34, 12, it is called little-endian storage.

  2. Boundary-aligned storage

Data could be stored starting at any byte, but to speed up access, boundary alignment is required. For a computer with a 32-bit memory word length (the width of the MDR indicates the memory word length, i.e. the number of bits per storage unit), data can be stored by byte, half word or word, where a word is four bytes and a half word is two bytes; therefore a half-word address should be a multiple of 2 and a word address a multiple of 4.

For example, a four-byte int can only be placed at a word address, while a one-byte char can be placed at any byte address; a small endianness check is sketched below.
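A minimal sketch that inspects the byte stored at the lowest address to tell little-endian from big-endian; the value 0x00123456 mirrors the three-byte example above:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Store 0x00123456 and look at the byte placed at the lowest address. */
    uint32_t value = 0x00123456;
    uint8_t *bytes = (uint8_t *)&value;

    if (bytes[0] == 0x56)
        printf("little-endian: lowest address holds the least significant byte\n");
    else
        printf("big-endian: lowest address holds the most significant byte\n");

    /* Alignment: on a typical 32-bit machine a 4-byte int sits at an address
       that is a multiple of 4, while a char may sit at any byte address. */
    printf("address of value: %p\n", (void *)&value);
    return 0;
}
```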

2.2.2 Floating-point numbers

Representation

A floating-point number is usually written as r^E × M, where r is the base (usually 2) and E and M are signed fixed-point numbers representing the order code (exponent) and the mantissa respectively. A floating-point number therefore consists of two parts, the order code and the mantissa, each of which in turn has a sign part and a value part. The order code determines the range of the floating-point number, and the mantissa determines its precision.

  1. Normalized floating-point numbers

To improve the precision of operations, the significant digits of the mantissa must be used fully, so the mantissa is usually kept in normalized form, in which its highest digit must be a significant digit (for binary, a 1). Converting unnormalized data into normalized data is called normalization.

  1. IEEE 754

In this standard the order code of a floating-point number is represented in shift code (with a bias) and the mantissa in original code (sign-magnitude). IEEE 754 was analysed in detail in the earlier discussion of the ES specification (a small sketch follows).
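A hedged sketch of pulling a single-precision IEEE 754 value apart into its sign, biased (shift-coded) exponent and fraction fields; the example value -6.25 is arbitrary:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = -6.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);          /* reinterpret the 32 bits safely */

    uint32_t sign     = bits >> 31;              /* 1 bit */
    uint32_t exponent = (bits >> 23) & 0xFF;     /* 8 bits, stored with bias 127 */
    uint32_t fraction = bits & 0x7FFFFF;         /* 23 bits, hidden leading 1 omitted */

    printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
           sign, exponent, (int)exponent - 127, fraction);
    /* -6.25 = -1.5625 x 2^2, so sign=1, unbiased exponent=2, fraction bits 0x480000. */
    return 0;
}
```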

Operations

In floating-point operations the order code and the mantissa are operated on separately, and floating-point addition and subtraction use complement code.

2.3 Arithmetic logic unit ALU

The ALU is an integral part of the arithmetic unit. It performs the four arithmetic operations of addition, subtraction, multiplication and division, logical operations such as AND, OR, NOT and XOR, as well as operations such as shifting and complementing. The type of operation performed is determined by the controller. The data to be processed comes from memory, and the results are temporarily kept in the arithmetic unit or sent back to memory. The concrete implementation requires some background in digital circuits and is not covered here.

3 Storage System

3.1 Overview of Storage

  1. By role
    • Main memory
    • Auxiliary storage
    • Cache
  2. By access mode
    • Random access memory (RAM): any storage unit can be accessed at random and the access time is independent of the physical location; it is mainly used for main memory and cache. RAM is further divided into SRAM and DRAM: static RAM stores information in transistor (flip-flop) circuits and offers fast access with lower integration density and low power consumption, while dynamic RAM stores information in capacitors and is easy to integrate, cheap and large in capacity.
    • Read-only memory (ROM): can only be read at random, not written during normal operation; its contents are fixed once written. Broadly speaking, some kinds of ROM can be rewritten multiple times
    • Serial access memory: storage locations are accessed in the order of their physical positions
  3. By information retention
    • Volatile memory: information disappears after power failure, e.g. RAM
    • Nonvolatile memory such as ROM
  4. By storage medium
    • Magnetic surface memory, such as magnetic disk or magnetic tape
    • Semiconductor memory
    • Core memory
    • Optical storage, such as a compact disc

3.2 Hierarchical structure of memory

3.2.1 Multi-level Storage System

To resolve the three-way tradeoff among capacity, speed and cost, computer systems adopt a multi-level memory structure: the cache-main memory level solves the speed mismatch between the CPU and main memory, while the main memory-auxiliary storage level mainly solves the capacity problem of the storage system. The continued development of the main-auxiliary level led to the virtual memory system: the addresses used in programming correspond to the address space of virtual memory, which can be much larger than the actual main memory space.

3.3 Semiconductor RAM

3.3.1 RAM

Main memory is implemented with DRAM and the cache with SRAM. The differences between the two types of RAM are as follows.

3.3.2 ROM

Compared with RAM, ROM has two advantages: higher bit density, and no loss of information on power failure. ROM can be divided into:

  • Mask ROM (MROM): written at the factory and cannot be modified afterwards
  • One-time programmable ROM (PROM): the user can write it once
  • Erasable programmable ROM (EPROM): can be modified many times, but must be erased before each rewrite; the number of writes is limited and writing is slow
  • Flash memory: can be erased and rewritten quickly and read many times
  • Solid-state drives (SSDs): higher read/write speed and lower power consumption than traditional hard drives

3.3.3 Basic composition of main memory



In main memory, the core part is the storage body made up of storage elements that each hold a 0 or a 1. To access the stored information, the storage elements are numbered; modern computers assign one address to every eight storage elements, i.e. one byte (byte addressing).

When main memory is accessed during instruction execution, the CPU first puts the address into the MAR, which sends it over the address lines to the memory chip, where the address decoder selects the corresponding physical storage unit. At the same time the CPU sends read or write signals over the control lines, and according to the specific signal data is exchanged between the MDR and the storage body over the data lines. The widths of the address lines and data lines reflect the size of the memory, for example the 2^36 × 64-bit main memory shown in the figure.

3.4 Connection between main memory and CPU

They are connected via the data bus, address bus and control bus.

Since the capacity of a single memory chip is limited, chips need to be combined to meet actual needs. Main memory capacity can be expanded by bit expansion, word expansion, or simultaneous word and bit expansion.

3.5 Cache memory

The cache is located between main memory and the CPU. It stores only the most active parts of main memory; when the CPU reads data, it searches the cache first.

3.6 Virtual Storage

Virtual memory numbers the combined address space of main memory and auxiliary storage as one huge address space, and users can program within this whole space. The addresses involved are called virtual (logical) addresses and the corresponding space is the virtual space; the actual main memory addresses are called real (physical) addresses. When the CPU uses a virtual address, auxiliary hardware looks up the mapping between virtual and real addresses and determines whether the storage unit corresponding to the virtual address has already been loaded into main memory. If it has, it is accessed directly after address translation; otherwise the required content is transferred from auxiliary storage into main memory and then accessed, while data not currently needed remains in auxiliary storage.

4 Instruction system

An instruction, also called a machine instruction, is a command that tells the computer to perform a certain operation; it is the smallest functional unit of computer operation. The set of all instructions of a computer is called its instruction set (instruction system), which is the API the CPU provides to software.

4.1 Instruction Format

4.1.1 Basic Format

An instruction contains an opcode and an address code. The opcode indicates what operation the instruction performs, such as addition or subtraction, or a program transfer or return. The address code gives the information needed for the operation, such as the addresses of the operands, the address where the result is saved, the address to transfer to, or the address of the subroutine being called. The instruction length is the number of binary bits in an instruction. If all instructions in an instruction system have the same length, it is called a fixed-length instruction word structure, which executes quickly and is simple to control; otherwise it is called a variable-length instruction word structure.

Depending on the number of address codes, instructions can be divided into zero-address up to four-address instructions.

4.2 Instruction addressing mode

Addressing is the way the effective address of an instruction or of data is found, i.e. how the data addresses of the current instruction and the address of the next instruction to execute are determined. There are two kinds of addressing: instruction addressing and data addressing. The address code in an instruction usually does not give the real address of the operand; it is called the formal address (A). The formal address, combined with the addressing mode, yields the real address, namely the effective address (EA).

4.2.1 Instruction addressing

There are two types of instruction addressing: sequential addressing and jump addressing. Sequential addressing forms the address of the next instruction automatically by adding 1 to the program counter (PC). Jump addressing is implemented by transfer instructions, which specify how the next instruction address is calculated; the effect of a jump is that the current instruction modifies the PC.

4.2.2 Data addressing

There are many ways to address data. To distinguish them, a field is usually set aside in the instruction to indicate the addressing mode, i.e. the addressing feature. The address code is then divided into two parts: the addressing feature and the formal address. Common data addressing modes include:

  • Implied addressing
  • Immediate addressing
  • Direct addressing
  • Indirect addressing
  • Register addressing
  • Register indirect addressing
  • Relative addressing
  • Base addressing
  • Indexed addressing
  • Stack addressing

4.2.3 Introduction to x86 assembly instruction

  1. General-purpose registers

The general-purpose registers are located in the CPU and are used to transmit and hold data.

A 32-bit x86 processor has eight 32-bit general-purpose registers, four of which can also be used as 16-bit or 8-bit registers for compatibility.

  1. Commonly used instructions

Assembly instructions can be divided into data transfer instructions (such as mov), arithmetic and logic instructions (such as add) and control flow instructions (such as jmp). Instruction operands can be register names, memory addresses or constants.

4.3 Instruction system development

Instruction systems are developing in two directions. One is to enhance the functionality of existing instruction sets, as in complex instruction set computers (CISC) such as x86; the other is to reduce the number of instructions and simplify their functions in order to increase execution speed, as in reduced instruction set computers (RISC) such as ARM.

5 CPU

5.1 Composition

CPU consists of arithmetic unit and controller.

  1. The arithmetic unit receives commands from the controller and performs the corresponding operations to process the data. It includes:
  • The arithmetic logic unit (ALU) performs arithmetic/logic operations
  • A temporary register is used to temporarily hold data read from storage
  • The accumulator (ACC) is a general-purpose register that temporarily stores ALU results and can serve as an input to addition operations
  • General-purpose register groups, as described in the previous chapter
  • The program status word register (PSW) holds the various status information produced by arithmetic/logic instructions or test instructions
  • The shifter shifts operands and results
  • The counter controls the number of steps in multiplication and division
  1. The controller is the command center of the system; under its control the arithmetic unit, memory, input and output devices and the rest of the system work together. The basic function of the controller is to execute instructions, and the execution of each instruction is carried out by a group of micro-operations issued by the controller. It includes:
  • The program counter (PC) is used to indicate where the next instruction will be stored in main memory
  • The instruction register (IR) is used to hold the instruction currently being executed
  • The instruction decoder decodes the opcode field and provides the characteristic operation signal to the controller
  • The memory address register (MAR) is used to store the address of the main memory unit to be accessed
  • A memory data register (MDR) is used to hold information written to or read from main memory
  • The timing system generates the various timing signals, obtained by dividing a unified clock frequency
  • The micro-operation signal generator combines the instruction in the IR, the status information in the PSW and the timing signals to generate the various control signals required by the whole computer system

5.2 Instruction execution process

5.2.1 Instruction cycle

The time the CPU takes to fetch and execute one instruction from main memory is called the instruction cycle. An instruction cycle consists of several machine cycles, and a machine cycle contains several clock cycles. The clock cycle is the basic unit of CPU operation and the smallest unit of time in the computer; it is also called a beat or T cycle. The machine cycle is also called the CPU cycle: the execution of an instruction is divided into several stages (such as fetch and execute), and the machine cycle is the time needed to complete one stage.

5.2.2 Instruction cycle data flow

The data flow is the sequence of data accessed in order during instruction execution. The data flow of the instruction cycle is explained below in terms of some typical machine cycles; a toy sketch of a fetch-execute loop follows the list.

  1. Fetch cycle: the instruction is fetched from main memory according to the contents of the PC and stored in the IR

  1. Indirect addressing cycle: its task is to fetch the effective address of the operand

  1. Execution cycle: its task is to produce the execution result through the ALU according to the opcode and operands of the instruction word in the IR. Different instructions perform different operations in the execution cycle, so there is no single unified data flow

  2. Interrupt cycle: its task is to save the breakpoint (the return address)
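A toy sketch of the fetch and execute cycles, assuming an invented 8-bit "instruction set" (opcode in the high nibble, address in the low nibble) that does not correspond to any real machine:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t memory[16] = {
        0x1E,                 /* LOAD mem[14] into ACC */
        0x2F,                 /* ADD  mem[15] to ACC   */
        0x00,                 /* HALT                  */
        [14] = 3, [15] = 4    /* the data operands     */
    };
    uint8_t pc = 0, ir = 0, acc = 0;

    for (;;) {
        ir = memory[pc++];                    /* fetch cycle: read instruction, PC + 1 */
        uint8_t op = ir >> 4, ad = ir & 0xF;  /* decode opcode and address code */
        if (op == 0) break;                   /* HALT ends the loop */
        else if (op == 1) acc = memory[ad];   /* execution cycle: LOAD */
        else if (op == 2) acc += memory[ad];  /* execution cycle: ADD  */
    }
    printf("ACC = %u\n", acc);                /* prints 7 */
    return 0;
}
```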

5.3 Data Path

The path through which data is transmitted between functional components is called a data path, and the components along the path are called data path components. The basic structure of a data path falls into the following three types:

  • CPU internal single-bus mode: the inputs and outputs of all registers are connected to one common bus, so transfers are prone to conflicts.
  • CPU Internal three-bus mode
  • Dedicated data path mode

6 Input and output system

Passing information from an external device to the host is called input, and passing information from the host to an external device is called output.

In the I/O system a large amount of data is often transferred, and several control methods are used during transfer, including:

  • Programmed query mode: the CPU continuously polls, through a program, whether the I/O device is ready
  • Program interrupt mode: the I/O device issues a request to the CPU only when it is ready, and the CPU responds then
  • DMA mode: a direct data path exists between main memory and the I/O device
  • Channel mode: a channel is started to complete the I/O operation

The end.