Memory cycle time is concerned with the system bus, not with the processor.
Cache: Hit and Miss.
How much? How fast? How expensive?
Cost, capacity, access time.
Disk is also used to provide an extension to main memory known as virtual memory.
Basic Design Elements:
Mapping Function: Direct, Associative, Set Associative
Replacement Algorithm: Least recently used (LRU), First in first out (FIFO), Least frequently used (LFU), Random
Write Policy: Write through, Write back, Write once
Number of caches: Single or two level, Unified or split
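To make the direct mapping function concrete, here is a minimal sketch of how a memory address is split into tag, line, and word fields. The parameters (24-bit addresses, 4-byte blocks, 16K cache lines) are illustrative assumptions, not from the notes:

```python
# Direct-mapped address decomposition (illustrative parameters):
# 24-bit address, 4-byte blocks -> 2-bit word field,
# 16K cache lines -> 14-bit line field, remaining 8 bits are the tag.

WORD_BITS = 2
LINE_BITS = 14
TAG_BITS = 24 - LINE_BITS - WORD_BITS

def split_address(addr):
    """Split a 24-bit address into (tag, line, word) for direct mapping."""
    word = addr & ((1 << WORD_BITS) - 1)
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)
    tag = addr >> (WORD_BITS + LINE_BITS)
    return tag, line, word

print(split_address(0x16339C))  # → (22, 3303, 0)
```

The line field selects which cache line the block must occupy, so only one tag comparison is needed per access; this is why direct mapping needs the simplest circuitry.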
Associative Mapping: The principal disadvantage of associative mapping is the complex circuitry required to examine the tags of all cache lines in parallel.
Set Associative Mapping: a compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages.
LRU: replace the block that has been in the cache longest with no reference to it.
FIFO: first in, first out — replace the block that has been in the cache longest.
LFU: least frequently used — replace the block that has experienced the fewest references.
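The LRU policy above can be sketched in a few lines. This is an illustrative model of a small fully associative cache, not from the notes; `OrderedDict` insertion order stands in for the recency tracking hardware:

```python
from collections import OrderedDict

# LRU replacement sketch: a fully associative cache with a fixed number
# of lines, evicting the least recently used tag on a miss when full.

class LRUCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = OrderedDict()  # tag -> block data, oldest first

    def access(self, tag):
        """Return True on a hit; on a miss, fill the line, evicting LRU if full."""
        if tag in self.lines:
            self.lines.move_to_end(tag)     # mark as most recently used
            return True
        if len(self.lines) >= self.num_lines:
            self.lines.popitem(last=False)  # evict least recently used
        self.lines[tag] = None              # fetch block from memory (stubbed)
        return False

cache = LRUCache(2)
print([cache.access(t) for t in [1, 2, 1, 3, 2]])
# Accessing 1 again makes 2 the LRU line, so 3 evicts 2 and 2 then misses:
# → [False, False, True, False, False]
```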
The simplest technique is called write through. Using this technique, all write operations are made to main memory as well as to the cache, ensuring that main memory is always valid.
Bus watching with write through: each cache controller monitors the address lines to detect write operations to memory by other bus masters; if another master writes to a location in shared memory that also resides in its cache, that cache entry is invalidated.
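A minimal sketch contrasting write through with write back, under assumed simplifications (memory and cache are plain dicts, one value per line; none of this is from the notes):

```python
# Write policy sketch: write through updates memory on every write;
# write back marks the line dirty and flushes it only on eviction.

class Cache:
    def __init__(self, memory, write_through=True):
        self.memory = memory
        self.write_through = write_through
        self.lines = {}     # address -> value
        self.dirty = set()  # only used by write back

    def write(self, addr, value):
        self.lines[addr] = value
        if self.write_through:
            self.memory[addr] = value  # main memory is always valid
        else:
            self.dirty.add(addr)       # memory updated only at eviction

    def evict(self, addr):
        if addr in self.dirty:
            self.memory[addr] = self.lines[addr]  # flush the stale word
            self.dirty.discard(addr)
        self.lines.pop(addr, None)

mem = {}
wt = Cache(mem, write_through=True)
wt.write(0x10, 42)
print(mem[0x10])       # → 42: memory valid immediately

mem2 = {}
wb = Cache(mem2, write_through=False)
wb.write(0x10, 42)
print(0x10 in mem2)    # → False: memory stale until eviction
wb.evict(0x10)
print(mem2[0x10])      # → 42
```

The trade-off the notes imply: write through keeps memory valid (enabling bus watching) at the cost of memory traffic on every write; write back reduces traffic but leaves main memory temporarily invalid.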
Number of Caches
Most contemporary designs include both on-chip and external caches. The simplest such organization is known as a two-level cache, with the internal cache designated as level 1 (L1) and the external cache designated as level 2 (L2).
Two features of contemporary cache design for multilevel caches are noteworthy. First, for an off-chip L2 cache, many designs do not use the system bus as the path for transfer between the L2 cache and the processor, but use a separate data path, so as to reduce the burden on the system bus. Second, with the continued shrinkage of processor components, a number of processors now incorporate the L2 cache on the processor chip, improving performance.
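The two-level lookup described above can be sketched as follows. This is an illustrative model only (dicts stand in for the caches, and fills propagate back up on a hit at a lower level):

```python
# Two-level cache lookup sketch: try the small fast L1 first, then the
# larger L2, then main memory; misses fill the upper levels on the way back.

def lookup(addr, l1, l2, memory):
    """Return the value at addr, filling caches along the return path."""
    if addr in l1:
        return l1[addr]          # L1 hit: fastest case
    if addr in l2:
        l1[addr] = l2[addr]      # L1 miss, L2 hit: fill L1
        return l1[addr]
    value = memory[addr]         # miss in both: go to main memory
    l2[addr] = value
    l1[addr] = value
    return value

l1, l2, memory = {}, {}, {0x100: 7}
print(lookup(0x100, l1, l2, memory))  # → 7 (fetched from memory)
print(0x100 in l1, 0x100 in l2)       # → True True (both levels filled)
```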