COA: Cache Memory

Published on 2015-01-04


Access time(latency)

Memory cycle time: a property of the system bus, not of the processor.

Transfer rate

The Memory Hierarchy

 

Cache: Hit and Miss.

How much? How fast? How expensive?

Cost, capacity, access time.

 

Disk is also used to provide an extension to main memory known as virtual memory.

This also exists on today's PCs: under System Properties -> Performance Settings -> Advanced there is a Virtual Memory setting, defined roughly as "a paging file is an area on the hard disk that Windows uses as if it were RAM."

Cache and Main Memory

 

 

Cache Read Operation

 

 

Typical Cache Organization

Data can be transferred directly from Main Memory to the Processor, which is the more sensible arrangement; at the same time the block is also loaded into the Cache, so that subsequent accesses to that data can be served quickly.
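
As a rough illustration of this read flow, the sketch below checks the cache first; on a miss it fetches the whole block from main memory, hands the requested word to the processor, and keeps the block in the cache. The block size, the fake memory contents, and the unbounded dictionary standing in for the cache are all illustrative assumptions, not a real design.

```python
BLOCK_SIZE = 4                         # words per block (illustrative)

main_memory = list(range(64))          # 64 words of fake data
cache = {}                             # block number -> list of words

def read(address):
    block_no = address // BLOCK_SIZE
    offset = address % BLOCK_SIZE
    if block_no in cache:              # cache hit: serve the word directly
        return cache[block_no][offset]
    # Cache miss: fetch the whole block from main memory...
    start = block_no * BLOCK_SIZE
    block = main_memory[start:start + BLOCK_SIZE]
    cache[block_no] = block            # ...load it into the cache for next time
    return block[offset]               # ...and deliver the word to the processor

print(read(10))  # miss: block 2 is loaded, word 10 returned
print(read(11))  # hit: served from the cached block
```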

Basic Design Elements:

Cache Size

Mapping Function: Direct, Associative, Set Associative

Replacement Algorithm: Least recently used (LRU), First in first out (FIFO), Least frequently used (LFU), Random

Write Policy: Write through, Write back, Write once

Line Size:

Number of caches: Single or two level, Unified or split

 

Direct Mapping

Associative Mapping: The principal disadvantage of associative mapping is the complex circuitry required to examine the tags of all cache lines in parallel.

Set Associative Mapping
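
The sketch below shows how a memory address breaks down under direct and set-associative mapping; the cache geometry (16 lines, 4-word blocks, 4-way sets) is made up purely for illustration.

```python
BLOCK_SIZE = 4    # words per block
NUM_LINES = 16    # total cache lines
NUM_WAYS = 4      # lines per set (for set-associative mapping)
NUM_SETS = NUM_LINES // NUM_WAYS

def direct_mapping(address):
    block_no = address // BLOCK_SIZE
    word = address % BLOCK_SIZE
    line = block_no % NUM_LINES        # each block maps to exactly one line
    tag = block_no // NUM_LINES        # tag distinguishes blocks sharing that line
    return tag, line, word

def set_associative_mapping(address):
    block_no = address // BLOCK_SIZE
    word = address % BLOCK_SIZE
    set_no = block_no % NUM_SETS       # block maps to one set...
    tag = block_no // NUM_SETS         # ...and may occupy any of its NUM_WAYS lines
    return tag, set_no, word

# Fully associative mapping keeps only (tag, word): a block may go in any line,
# which is why every line's tag must be compared in parallel on a lookup.
print(direct_mapping(1000), set_associative_mapping(1000))
```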

 

Replacement Algorithms

LRU: replace the block that has been in the cache longest with no reference to it (see the sketch after this list).

FIFO: first in, first out

LFU: least frequently used

Random: replace a line chosen at random
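
A minimal sketch of the LRU policy, using Python's OrderedDict to track recency; the capacity and the block identifiers are illustrative only.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.lines = OrderedDict()     # block number -> block data, oldest first

    def access(self, block_no, fetch_block):
        if block_no in self.lines:
            self.lines.move_to_end(block_no)       # hit: mark most recently used
            return self.lines[block_no]
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)         # evict least recently used
        block = fetch_block(block_no)              # miss: fetch from main memory
        self.lines[block_no] = block
        return block

cache = LRUCache(capacity=2)
for b in [1, 2, 1, 3]:                 # accessing 3 evicts block 2, not block 1
    cache.access(b, fetch_block=lambda n: f"block {n}")
print(list(cache.lines))               # [1, 3]
```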

 

Write Policy

The simplest technique is called write through. Using this technique, all write operations are made to main memory as well as to the cache, ensuring that main memory is always valid.
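
A rough sketch contrasting write through with write back (both listed earlier under Write Policy): write through updates main memory on every write, while write back only marks the cached line dirty and defers the memory update until that line is evicted. The single-entry cache and the explicit evict call are simplifications for illustration.

```python
main_memory = {0x10: 0}
cache = {}            # address -> (value, dirty)

def write_through(address, value):
    cache[address] = (value, False)
    main_memory[address] = value       # every write also goes to main memory

def write_back(address, value):
    cache[address] = (value, True)     # only the cache is updated; mark dirty

def evict(address):
    value, dirty = cache.pop(address)
    if dirty:
        main_memory[address] = value   # dirty block written back on eviction

write_back(0x10, 42)
print(main_memory[0x10])   # still 0: memory is stale until the line is evicted
evict(0x10)
print(main_memory[0x10])   # 42
```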

Bus watching with write through:

Hardware transparency:

Noncacheable memory:

Line Size

Number of Caches

Multilevel Caches

Most contemporary designs include both on-chip and external caches. The simplest such organization is known as a two-level cache, with the internal cache designated as level 1 (L1) and the external cache designated as level 2 (L2).
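
A small sketch of a two-level lookup: L1 is checked first, then L2, and main memory is consulted only if both miss. Filling both levels on a miss is just one possible policy, assumed here for illustration, as are the dictionary "caches" and fake memory contents.

```python
l1, l2 = {}, {}
main_memory = {addr: addr * 2 for addr in range(16)}

def read(address):
    if address in l1:
        return l1[address], "L1 hit"
    if address in l2:
        l1[address] = l2[address]          # promote the value into L1
        return l1[address], "L2 hit"
    value = main_memory[address]           # both levels missed
    l2[address] = value
    l1[address] = value
    return value, "both levels missed"

print(read(5))   # (10, 'both levels missed')
print(read(5))   # (10, 'L1 hit') on the second access
```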

Two features of contemporary cache design for multilevel caches are noteworthy. First, for an off-chip L2 cache, many designs do not use the system bus as the path for transfer between the L2 cache and the processor, but use a separate data path, so as to reduce the burden on the system bus. Second, with the continued shrinkage of processor components, a number of processors now incorporate the L2 cache on the processor chip, improving performance.

Unified versus Split Caches

