I. Computer classification:

Personal computers (PCs), servers, supercomputers, and embedded computers (the most numerous class)

II. The post-PC era:

  • PMDs (personal mobile devices): battery powered, wirelessly networked, with touch screens
  • Cloud computing is replacing traditional servers: warehouse-scale computers (WSCs) in huge data centers, acquired on a rental basis

III. Eight great ideas in computer architecture:

  1. Design for Moore's Law
  2. Use abstraction to simplify design
  3. Make the common case fast
  4. Improve performance via parallelism
  5. Improve performance via pipelining
  6. Improve performance via prediction
  7. Use a hierarchy of memories
  8. Improve dependability via redundancy

IV. The three levels of a computer system: application software → system software → hardware

System software includes: the operating system, the compiler (the most important and indispensable), the loader, and the assembler

V. Basic functions performed by any computer:

  • Input data
  • Output data
  • Process data
  • Store data

VI. The corresponding classic components:

  • Input
  • Output
  • Memory
  • Datapath (arithmetic unit)
  • Control

The last two are collectively called the processor (CPU)

VII. Storage:

  • Volatile memory
    • Main memory: DRAM (dynamic random access memory)
    • SRAM (static random access memory): faster but less dense than DRAM, and more expensive per bit to build

SRAM and DRAM are the volatile memories in the top two levels of the memory hierarchy; the levels below them are non-volatile. The former are called main memory, the latter secondary memory

  • Non-volatile memory
    • Magnetic disks; DVDs also serve as secondary memory
    • In the post-PC era, flash memory has replaced disks in personal mobile devices. However, because flash wears out after roughly 100,000–1,000,000 writes, it cannot replace SRAM and DRAM, which always occupy the top two levels of the memory hierarchy

VIII. Processor chip manufacturing (transistor → integrated circuit → VLSI)

Chips are connected to I/O pins in a process called packaging, then shipped to customers

IX. Definition of performance

Personal computer users care about reducing response time, while server operators care about increasing throughput

  • Response time: the total time a computer takes to complete a task
    Performance_X / Performance_Y = Execution time_Y / Execution time_X = n, i.e., X is n times faster than Y
  • Throughput (also called bandwidth): the number of tasks completed per unit time
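The performance ratio above can be sketched in a few lines of Python; the times below are made-up illustrative numbers, not measurements from the text:

```python
# Sketch: relative performance from execution times.
# Performance is defined as 1 / execution time, so
# Performance_X / Performance_Y = Time_Y / Time_X.
def relative_performance(time_x: float, time_y: float) -> float:
    """Return n such that machine X is n times faster than machine Y."""
    return time_y / time_x

# Hypothetical example: X finishes a task in 10 s, Y in 15 s.
n = relative_performance(10.0, 15.0)
print(f"X is {n:.1f} times faster than Y")  # X is 1.5 times faster than Y
```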

X. CPU performance

CPU (execution) time, the time the CPU spends executing a task, is divided into:

    1. User CPU time: the time spent running the user's program itself
    2. System CPU time: the time the operating system spends performing tasks on behalf of the program (time spent waiting for I/O or running other programs counts toward elapsed time, not CPU time)

XI. Instruction performance

A given program requires a certain number of instructions to execute. We therefore consider the average number of clock cycles each instruction takes, called CPI (clock cycles per instruction), which gives the following formula:

CPU clock cycles = instruction count × CPI (average clock cycles per instruction)

XII. The classic CPU performance formula

CPU time = instruction count × CPI × clock cycle time = (sum of clock cycles over all instructions in the program) × clock cycle time

Or:

CPU time = instruction count × CPI / clock rate = (sum of clock cycles over all instructions in the program) / clock rate
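Both forms of the classic formula can be checked against each other with a small sketch; the instruction count, CPI, and clock rate below are hypothetical:

```python
# Sketch of the classic CPU performance formula, in both forms.
def cpu_time_from_cycle_time(instructions: int, cpi: float, cycle_time_s: float) -> float:
    # CPU time = instruction count x CPI x clock cycle time
    return instructions * cpi * cycle_time_s

def cpu_time_from_clock_rate(instructions: int, cpi: float, clock_hz: float) -> float:
    # CPU time = instruction count x CPI / clock rate
    return instructions * cpi / clock_hz

# Hypothetical 2 GHz processor running 10 million instructions at CPI = 2.0.
# Clock cycle time is the reciprocal of clock rate: 1 / 2 GHz = 0.5 ns.
t1 = cpu_time_from_cycle_time(10_000_000, 2.0, 0.5e-9)
t2 = cpu_time_from_clock_rate(10_000_000, 2.0, 2.0e9)
print(t1, t2)  # both forms agree: about 0.01 s
```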

#### Always remember: the only completely reliable measure of computer performance is time.

| Hardware or software component | Performance factors affected |
| --- | --- |
| Algorithm | Instruction count, possibly CPI |
| Programming language | Instruction count, CPI |
| Compiler | Instruction count, CPI |
| Instruction set architecture | Instruction count, CPI, clock rate |

XIII. Fallacy: expecting that improving one aspect of a computer will improve overall performance in proportion to the size of that improvement.

Amdahl's law: execution time after improvement = (execution time affected by the improvement / amount of improvement) + execution time unaffected by the improvement

Example: a program runs for 100 s on some computer, of which 80 s is multiplication. To make the whole program run 5 times faster, the improved execution time must be 100 s / 5 = 20 s. By Amdahl's law, 20 s = 80 s / n + 20 s, which requires 80 s / n = 0, so n would have to be infinite: no speedup of the multiplication alone can achieve it. Amdahl's law is a quantified version of the law of diminishing returns.
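The diminishing returns in this example are easy to see numerically; a minimal sketch using the 80 s / 20 s split from the text:

```python
# Amdahl's law sketch:
# improved time = (affected time / speedup of the improved part) + unaffected time
def amdahl_time(affected_s: float, unaffected_s: float, speedup: float) -> float:
    return affected_s / speedup + unaffected_s

# The text's example: 100 s total, of which 80 s is multiplication.
# Even as the multiplication speedup n grows without bound, total time
# only approaches the 20 s floor, so a 5x overall speedup is unattainable.
for n in (2, 10, 100, 1_000_000):
    print(f"n = {n:>9}: improved time = {amdahl_time(80.0, 20.0, n):.4f} s")
```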

XIV. New ideas that have improved the cost/performance of computers:

    1. Multiple processors are today's typical way of exploiting parallelism in programs
    2. Caches are the typical way of exploiting locality of access in the memory hierarchy