Cache (computing)

"Caching" redirects here. For other uses, see Cache (disambiguation).

In computing, a cache (/kæʃ/ KASH) is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.

[Figure: Diagram of a CPU memory cache operation]

To be cost-effective, caches must be relatively small. Nevertheless, caches are effective in many areas of computing because typical computer applications access data with a high degree of locality of reference. Such access patterns exhibit temporal locality, where data is requested that has been recently requested, and spatial locality, where data is requested that is stored near data that has already been requested.

Hardware implements cache as a block of memory for temporary storage of data likely to be used again. Central processing units (CPUs), solid-state drives (SSDs) and hard disk drives (HDDs) frequently include hardware-based cache, while web browsers and web servers commonly rely on software caching.
Motivation

In memory design, there is an inherent trade-off between capacity and speed, because larger capacity implies larger size and thus greater physical distances for signals to travel, causing propagation delays. There is also a trade-off between high-performance technologies such as SRAM and cheaper, easily mass-produced commodities such as DRAM, flash, or hard disks. The buffering provided by a cache benefits one or both of latency and throughput (bandwidth).

Latency

A larger resource incurs a significant latency for access – e.g. it can take hundreds of clock cycles for a modern 4 GHz processor to reach DRAM. This is mitigated by reading large chunks into the cache, in the hope that subsequent reads will be from nearby locations and can be read from the cache. Prediction or explicit prefetching can be used to guess where future reads will come from and make requests ahead of time; if done optimally, the latency is bypassed altogether.

Throughput

The use of a cache also allows for higher throughput from the underlying resource, by assembling multiple fine-grain transfers into larger, more efficient requests. In the case of DRAM circuits, the additional throughput may be gained by using a wider data bus.
Operation

A cache is made up of a pool of entries. Each entry has associated data, which is a copy of the same data in some backing store. Each entry also has a tag, which specifies the identity of the data in the backing store of which the entry is a copy.

When the cache client (a CPU, web browser, operating system) needs to access data presumed to exist in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired data, the data in the entry is used instead. This situation is known as a cache hit. For example, a web browser program might check its local cache on disk to see if it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag, and the content of the web page is the data. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache.

The alternative situation, when the cache is checked and found not to contain any entry with the desired tag, is known as a cache miss. This requires a more expensive access of data from the backing store. Once the requested data is retrieved, it is typically copied into the cache, ready for the next access.

During a cache miss, some other previously existing cache entry is typically removed in order to make room for the newly retrieved data. The heuristic used to select the entry to replace is known as the replacement policy. One popular replacement policy, least recently used (LRU), replaces the oldest entry, the entry that was accessed less recently than any other entry. More sophisticated caching algorithms also take into account the frequency of use of entries.
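As a concrete illustration, here is a minimal sketch of an LRU cache in Python (an illustration of the policy described above, not code from the article's sources; the fetch_from_backing_store callback is a stand-in for the expensive access). It serves hits from the cache, fetches misses from the backing store, evicts the least recently used entry when full, and tracks the hit ratio defined earlier.

    from collections import OrderedDict

    class LRUCache:
        """Minimal LRU cache: evicts the entry accessed least recently."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()   # tag -> data, least recently used first
            self.hits = 0
            self.accesses = 0

        def get(self, tag, fetch_from_backing_store):
            self.accesses += 1
            if tag in self.entries:                    # cache hit
                self.hits += 1
                self.entries.move_to_end(tag)          # now the most recently used
                return self.entries[tag]
            data = fetch_from_backing_store(tag)       # cache miss: expensive access
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)       # evict the oldest entry
            self.entries[tag] = data                   # ready for the next access
            return data

        def hit_ratio(self):
            return self.hits / self.accesses if self.accesses else 0.0

With a capacity much smaller than the backing store, the hit ratio achieved by such a cache depends on how much temporal locality the access pattern exhibits.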
Writing policies

When a system writes data to cache, it must at some point write that data to the backing store as well. The timing of this write is controlled by what is known as the write policy. There are two basic writing approaches:

- Write-through: write is done synchronously both to the cache and to the backing store.
- Write-back: initially, writing is done only to the cache. The write to the backing store is postponed until the modified content is about to be replaced by another cache block.

[Figure: A write-through cache without write allocation]
[Figure: A write-back cache with write allocation]

A write-back cache is more complex to implement since it needs to track which of its locations have been written over and mark them as dirty for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, a process referred to as a lazy write. For this reason, a read miss in a write-back cache will often require two backing store accesses to service: one to write back the evicted data, and one to retrieve the needed data. Other policies may also trigger data write-back. The client may make many changes to data in the cache, and then explicitly notify the cache to write back the data.

Since no data is returned to the requester on write operations, a decision needs to be made whether or not data would be loaded into the cache on write misses. This is defined by these two approaches:

- Write allocate (also called fetch on write): data at the missed-write location is loaded to cache, followed by a write-hit operation. In this approach, write misses are similar to read misses.
- No-write allocate (also called write-no-allocate or write around): data at the missed-write location is not loaded to cache, and is written directly to the backing store. In this approach, data is loaded into the cache on read misses only.

Both write-through and write-back policies can use either of these write-miss policies, but usually they are paired:

- A write-back cache uses write allocate, hoping for subsequent writes (or even reads) to the same location, which is now cached.
- A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store.

Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of those data in other caches will become stale. Communication protocols between the cache managers that keep the data consistent are associated with cache coherence.
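The two write policies above can be sketched as follows (a minimal illustration; the class and its backing_store/evict names are assumptions for this sketch, not from the article's sources). A write-through cache writes synchronously to the backing store, while a write-back cache only marks the entry dirty and performs the lazy write on eviction.

    class WriteCache:
        """Sketch of a cache supporting write-through or write-back policies."""

        def __init__(self, backing_store, write_back=True):
            self.backing_store = backing_store   # e.g. a dict standing in for slow storage
            self.write_back = write_back
            self.entries = {}                    # tag -> data
            self.dirty = set()                   # locations written over but not yet stored

        def write(self, tag, data):
            self.entries[tag] = data
            if self.write_back:
                self.dirty.add(tag)              # mark dirty and defer the slow write
            else:
                self.backing_store[tag] = data   # write-through: synchronous write

        def evict(self, tag):
            if tag in self.dirty:                # the lazy write happens on eviction
                self.backing_store[tag] = self.entries[tag]
                self.dirty.discard(tag)
            del self.entries[tag]

Note how the write-back path touches the slow store only once per evicted entry, however many times the entry was rewritten in the meantime.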
Prefetch

On a cache read miss, caches with a demand paging policy read the minimum amount from the backing store. A typical demand-paging virtual memory implementation reads one page of virtual memory (often 4 KB) from disk into the disk cache in RAM. A typical CPU reads a single L2 cache line of 128 bytes from DRAM into the L2 cache, and a single L1 cache line of 64 bytes from the L2 cache into the L1 cache.

Caches with a prefetch input queue or more general anticipatory paging policy go further: they not only read the data requested, but guess that the next chunk or two of data will soon be required, and so prefetch that data into the cache ahead of time. Anticipatory paging is especially helpful when the backing store has a long latency to read the first chunk and much shorter times to sequentially read the next few chunks, such as disk storage and DRAM.

A few operating systems go further with a loader that always pre-loads the entire executable into RAM. A few caches go even further, not only pre-loading an entire file, but also starting to load other related files that may soon be requested, such as the page cache associated with a prefetcher or the web cache associated with link prefetching.
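A minimal sketch of anticipatory read-ahead (illustrative only; the chunk granularity, the read_chunk helper, and the read_ahead depth are assumptions, not from the article): on a miss, the cache fetches the requested chunk and then the next few sequential chunks while the device is already positioned there.

    def read_with_prefetch(cache, chunk_index, read_chunk, read_ahead=2):
        """Serve one chunk, prefetching the next few sequential chunks on a miss.

        cache: dict mapping chunk index -> data; read_chunk: reads one chunk
        from the backing store (long latency for the first chunk, shorter for
        the chunks that sequentially follow it).
        """
        if chunk_index not in cache:
            cache[chunk_index] = read_chunk(chunk_index)      # pay the long latency
            for i in range(chunk_index + 1, chunk_index + 1 + read_ahead):
                if i not in cache:
                    cache[i] = read_chunk(i)                  # cheap sequential reads
        return cache[chunk_index]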
Examples of hardware caches

CPU cache

Small memories on or close to the CPU can operate faster than the much larger main memory. Most CPUs since the 1980s have used one or more caches, sometimes in cascaded levels; modern high-end embedded, desktop and server microprocessors may have as many as six types of cache (between levels and functions). Some examples of caches with a specific function are the D-cache, I-cache and the translation lookaside buffer for the memory management unit (MMU).

GPU cache

Earlier graphics processing units (GPUs) often had limited read-only texture caches and used swizzling to improve 2D locality of reference. Cache misses would drastically affect performance, e.g. if mipmapping was not used. Caching was important to leverage 32-bit (and wider) transfers for texture data that was often as little as 4 bits per pixel.

As GPUs advanced, supporting general-purpose computing on graphics processing units and compute kernels, they have developed progressively larger and increasingly general caches, including instruction caches for shaders, exhibiting functionality commonly found in CPU caches. These caches have grown to handle synchronization primitives between threads and atomic operations, and interface with a CPU-style MMU.

DSPs

Digital signal processors have similarly generalized over the years. Earlier designs used scratchpad memory fed by direct memory access, but modern DSPs such as Qualcomm Hexagon often include a very similar set of caches to a CPU (e.g. Modified Harvard architecture with shared L2, split L1 I-cache and D-cache).

Translation lookaside buffer

A memory management unit (MMU) that fetches page table entries from main memory has a specialized cache, used for recording the results of virtual address to physical address translations. This specialized cache is called a translation lookaside buffer (TLB).
In-network cache

Information-centric networking

Information-centric networking (ICN) is an approach to evolve the Internet infrastructure away from a host-centric paradigm, based on perpetual connectivity and the end-to-end principle, to a network architecture in which the focal point is identified information. Due to the inherent caching capability of the nodes in an ICN, it can be viewed as a loosely connected network of caches, which has unique requirements for caching policies. However, ubiquitous content caching introduces the challenge of protecting content against unauthorized access, which requires extra care and solutions.

Unlike proxy servers, in ICN the cache is a network-level solution. Therefore, it has rapidly changing cache states and higher request arrival rates; moreover, smaller cache sizes impose different requirements on the content eviction policies. In particular, eviction policies for ICN should be fast and lightweight. Various cache replication and eviction schemes for different ICN architectures and applications have been proposed.

Time aware least recently used (TLRU)

The Time aware Least Recently Used (TLRU) is a variant of LRU designed for the situation where the stored contents in cache have a valid life time. The algorithm is suitable in network cache applications, such as ICN, content delivery networks (CDNs) and distributed networks in general. TLRU introduces a new term: TTU (Time to Use). TTU is a time stamp of a content/page which stipulates the usability time for the content based on the locality of the content and the content publisher announcement. Owing to this locality-based time stamp, TTU provides more control to the local administrator to regulate in-network storage.

In the TLRU algorithm, when a piece of content arrives, a cache node calculates the local TTU value based on the TTU value assigned by the content publisher. The local TTU value is calculated by using a locally defined function. Once the local TTU value is calculated, the replacement of content is performed on a subset of the total content stored in the cache node. The TLRU ensures that less popular and short-lived content is replaced with the incoming content.
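The description above can be sketched as follows (an illustrative reading, not code from the TLRU paper; the eviction tie-break and the local_ttu function are assumptions). Each entry carries an expiry derived from the locally computed TTU, and eviction prefers the entry closest to expiry, falling back to least-recent use.

    import time

    class TLRUCache:
        """Sketch of Time aware LRU: entries expire after a locally computed TTU."""

        def __init__(self, capacity, local_ttu):
            self.capacity = capacity
            self.local_ttu = local_ttu     # locally defined function: publisher TTU -> local TTU
            self.entries = {}              # tag -> (data, expiry, last_use)

        def put(self, tag, data, publisher_ttu):
            if len(self.entries) >= self.capacity and tag not in self.entries:
                self._evict()
            expiry = time.monotonic() + self.local_ttu(publisher_ttu)
            self.entries[tag] = (data, expiry, time.monotonic())

        def get(self, tag):
            item = self.entries.get(tag)
            if item is None or item[1] < time.monotonic():   # missing or past its usability time
                return None
            data, expiry, _ = item
            self.entries[tag] = (data, expiry, time.monotonic())
            return data

        def _evict(self):
            # Prefer the entry closest to (or past) expiry; ties fall back to LRU order.
            victim = min(self.entries, key=lambda t: (self.entries[t][1], self.entries[t][2]))
            del self.entries[victim]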
Least frequent recently used (LFRU)

The Least Frequent Recently Used (LFRU) cache replacement scheme combines the benefits of the LFU and LRU schemes. LFRU is suitable for 'in network' cache applications, such as ICN, CDNs and distributed networks in general. In LFRU, the cache is divided into two partitions called the privileged and unprivileged partitions. The privileged partition can be defined as a protected partition: if content is highly popular, it is pushed into the privileged partition. Replacement in the privileged partition is done as follows: LFRU evicts content from the unprivileged partition, pushes content from the privileged partition to the unprivileged partition, and finally inserts the new content into the privileged partition. In this procedure, LRU is used for the privileged partition and an approximated LFU (ALFU) scheme is used for the unprivileged partition, hence the abbreviation LFRU. The basic idea is to filter out the locally popular contents with the ALFU scheme and push them to the privileged partition.
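A minimal two-partition sketch of this scheme (illustrative; the promotion threshold and per-item counters are a simplification of the ALFU approximation, not the published algorithm):

    from collections import OrderedDict

    class LFRUCache:
        """Sketch of LFRU: an LRU privileged partition over an LFU-approximating
        unprivileged partition."""

        def __init__(self, privileged_size, unprivileged_size, promote_at=3):
            self.privileged = OrderedDict()   # tag -> data, managed by LRU
            self.unprivileged = {}            # tag -> (data, use_count), approximated LFU
            self.p_size, self.u_size = privileged_size, unprivileged_size
            self.promote_at = promote_at      # assumed popularity threshold

        def insert(self, tag, data):
            if len(self.unprivileged) >= self.u_size and tag not in self.unprivileged:
                # Evict the least frequently used content from the unprivileged partition.
                victim = min(self.unprivileged, key=lambda t: self.unprivileged[t][1])
                del self.unprivileged[victim]
            self.unprivileged[tag] = (data, 0)

        def get(self, tag):
            if tag in self.privileged:
                self.privileged.move_to_end(tag)             # LRU update
                return self.privileged[tag]
            if tag in self.unprivileged:
                data, count = self.unprivileged[tag]
                count += 1
                if count >= self.promote_at:                 # locally popular: promote
                    del self.unprivileged[tag]
                    if len(self.privileged) >= self.p_size:  # push LRU victim downward
                        old_tag, old_data = self.privileged.popitem(last=False)
                        self.insert(old_tag, old_data)
                    self.privileged[tag] = data
                else:
                    self.unprivileged[tag] = (data, count)
                return data
            return None

The demotion path mirrors the description above: an eviction in the unprivileged partition, a push from the privileged to the unprivileged partition, then the insertion of the newly popular content.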
Weather forecast

In 2011, the use of smartphones with weather forecasting options was overly taxing AccuWeather servers; two requests within the same park would generate separate requests. An optimization by edge-servers to truncate the GPS coordinates to fewer decimal places meant that the cached results from the earlier query would be used. The number of to-the-server lookups per day dropped by half.
Software caches

Disk cache

While CPU caches are generally managed entirely by hardware, a variety of software manages other caches. The page cache in main memory, which is an example of disk cache, is managed by the operating system kernel.

While the disk buffer, which is an integrated part of the hard disk drive or solid state drive, is sometimes misleadingly referred to as "disk cache", its main functions are write sequencing and read prefetching. Repeated cache hits are relatively rare, due to the small size of the buffer in comparison to the drive's capacity. However, high-end disk controllers often have their own on-board cache of the hard disk drive's data blocks.

Finally, a fast local hard disk drive can also cache information held on even slower data storage devices, such as remote servers (web cache) or local tape drives or optical jukeboxes; such a scheme is the main concept of hierarchical storage management. Also, fast flash-based solid-state drives (SSDs) can be used as caches for slower rotational-media hard disk drives, working together as hybrid drives or solid-state hybrid drives (SSHDs).
Web cache

Web browsers and web proxy servers employ web caches to store previous responses from web servers, such as web pages and images. Web caches reduce the amount of information that needs to be transmitted across the network, as information previously stored in the cache can often be re-used. This reduces bandwidth and processing requirements of the web server, and helps to improve responsiveness for users of the web.

Web browsers employ a built-in web cache, but some Internet service providers (ISPs) or organizations also use a caching proxy server, which is a web cache that is shared among all users of that network.

Another form of cache is P2P caching, where the files most sought for by peer-to-peer applications are stored in an ISP cache to accelerate P2P transfers. Similarly, decentralised equivalents exist, which allow communities to perform the same task for P2P traffic, for example, Corelli.
Memoization

A cache can store data that is computed on demand rather than retrieved from a backing store. Memoization is an optimization technique that stores the results of resource-consuming function calls within a lookup table, allowing subsequent calls to reuse the stored results and avoid repeated computation. It is related to the dynamic programming algorithm design methodology, which can also be thought of as a means of caching.
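For example, Python's standard library offers memoization directly via the functools.lru_cache decorator, which keeps the lookup table bounded by an LRU policy (a standard-library illustration; the article's sources do not prescribe any particular implementation):

    from functools import lru_cache

    @lru_cache(maxsize=128)           # lookup table bounded by an LRU policy
    def fib(n):
        """Naive recursion becomes linear-time once results are memoized."""
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(100)                          # fast: each fib(k) is computed only once
    print(fib.cache_info())           # hits and misses of the underlying lookup table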
Content delivery network

A content delivery network (CDN) is a network of distributed servers that deliver pages and other Web content to a user, based on the geographic locations of the user, the origin of the web page and the content delivery server.

CDNs began in the late 1990s as a way to speed up the delivery of static content, such as HTML pages, images and videos. By replicating content on multiple servers around the world and delivering it to users based on their location, CDNs can significantly improve the speed and availability of a website or application. When a user requests a piece of content, the CDN will check to see if it has a copy of the content in its cache. If it does, the CDN will deliver the content to the user from the cache.
Cloud storage gateway

A cloud storage gateway, also known as an edge filer, is a hybrid cloud storage device that connects a local network to one or more cloud storage services, typically object storage services such as Amazon S3. It provides a cache for frequently accessed data, providing high speed local access to frequently accessed data in the cloud storage service. Cloud storage gateways also provide additional benefits such as accessing cloud object storage through traditional file serving protocols as well as continued access to cached data during connectivity outages.
700: 1735: 943: 866: 591: 669:
While CPU caches are generally managed entirely by hardware, a variety of software manages other caches. The
553: 487: 1629: 1597: 1256:"Survey of CPU Cache-Based Side-Channel Attacks: Systematic Analysis, Security Models, and Countermeasures" 565: 463: 236: 873:) are typically read-only or write-through specifically to keep the network protocol simple and reliable. 362: 113: 1310: 1730: 240: 1573:
Corelli: A Dynamic Replication Service for Supporting Latency-Dependent Content in Community Networks
1033: 745: 674: 522: 1661: 1325: 798: 708: 621: 537: 479: 1203: 1048: 1001: 932: 172: 1656: 467: 22: 1710: 1131:"Cache hit ratio maximization in device-to-device communications overlaying cellular networks" 773:
Buffer vs. cache

The semantics of a "buffer" and a "cache" are not totally different; even so, there are fundamental differences in intent between the process of caching and the process of buffering.

Fundamentally, caching realizes a performance increase for transfers of data that is being repeatedly transferred. While a caching system may realize a performance increase upon the initial (typically write) transfer of a data item, this performance increase is due to buffering occurring within the caching system.

With read caches, a data item must have been fetched from its residing location at least once in order for subsequent reads of the data item to realize a performance increase by virtue of being able to be fetched from the cache's (faster) intermediate storage rather than the data's residing location. With write caches, a performance increase of writing a data item may be realized upon the first write of the data item by virtue of the data item immediately being stored in the cache's intermediate storage, deferring the transfer of the data item to its residing storage at a later stage or else occurring as a background process. Contrary to strict buffering, a caching process must adhere to a (potentially distributed) cache coherency protocol in order to maintain consistency between the cache's intermediate storage and the location where the data resides. Buffering, on the other hand,

- reduces the number of transfers for otherwise novel data amongst communicating processes, which amortizes overhead involved for several small transfers over fewer, larger transfers,
- provides an intermediary for communicating processes which are incapable of direct transfers amongst each other, or
- ensures a minimum data size or representation required by at least one of the communicating processes involved in a transfer.

With typical caching implementations, a data item that is read or written for the first time is effectively being buffered; and in the case of a write, mostly realizing a performance increase for the application from where the write originated. Additionally, the portion of a caching protocol where individual writes are deferred to a batch of writes is a form of buffering. The portion of a caching protocol where individual reads are deferred to a batch of reads is also a form of buffering, although this form may negatively impact the performance of at least the initial reads (even though it may positively impact the performance of the sum of the individual reads). In practice, caching almost always involves some form of buffering, while strict buffering does not involve caching.

A buffer is a temporary memory location that is traditionally used because CPU instructions cannot directly address data stored in peripheral devices. Thus, addressable memory is used as an intermediate stage. Additionally, such a buffer may be feasible when a large block of data is assembled or disassembled (as required by a storage device), or when data may be delivered in a different order than that in which it is produced. Also, a whole buffer of data is usually transferred sequentially (for example to hard disk), so buffering itself sometimes increases transfer performance or reduces the variation or jitter of the transfer's latency, as opposed to caching, where the intent is to reduce the latency. These benefits are present even if the buffered data are written to the buffer once and read from the buffer once.

A cache also increases transfer performance. A part of the increase similarly comes from the possibility that multiple small transfers will combine into one large block. But the main performance gain occurs because there is a good chance that the same data will be read from cache multiple times, or that written data will soon be read. A cache's sole purpose is to reduce accesses to the underlying slower storage. Cache is also usually an abstraction layer that is designed to be invisible from the perspective of neighboring layers.

See also

- Cache coloring
- Cache hierarchy
- Cache-oblivious algorithm
- Cache stampede
- Cache language model
- Cache manifest in HTML5
- Dirty bit
- Five-minute rule
- Materialized view
- Memory hierarchy
- Pipeline burst cache
- Temporary file

References

- "Cache". Oxford Dictionaries. Archived from the original on 18 August 2012.
- Zhong, Liang; Zheng, Xueqian; Liu, Yong; Wang, Mengting; Cao, Yang (February 2020). "Cache hit ratio maximization in device-to-device communications overlaying cellular networks". China Communications. 17 (2): 232–238. doi:10.23919/jcc.2020.02.018. ISSN 1673-5447. S2CID 212649328.
- Bottomley, James (1 January 2004). "Understanding Caching". Linux Journal. Retrieved 1 October 2019.
- Hennessy, John L.; Patterson, David A. (2011). Computer Architecture: A Quantitative Approach. Elsevier. p. B–12. ISBN 978-0-12-383872-8.
- Patterson, David A.; Hennessy, John L. (1990). Computer Architecture: A Quantitative Approach. Morgan Kaufmann Publishers. p. 413. ISBN 1-55860-069-8.
- Su, Chao; Zeng, Qingkai (10 June 2021). Nicopolitidis, Petros (ed.). "Survey of CPU Cache-Based Side-Channel Attacks: Systematic Analysis, Security Models, and Countermeasures". Security and Communication Networks. 2021: 1–15. doi:10.1155/2021/5559552. ISSN 1939-0122.
- "Intel Broadwell Core i7 5775C '128MB L4 Cache' Gaming Behemoth and Skylake Core i7 6700K Flagship Processors Finally Available In Retail". 25 September 2015. Mentions L4 cache; combined with separate I-cache and TLB, this brings the total number of caches (levels plus functions) to six.
- "qualcom Hexagon DSP SDK overview".
- Frank Uyeda (2009). "Lecture 7: Memory Management" (PDF). CSE 120: Principles of Operating Systems. UC San Diego. Retrieved 4 December 2013.
- Bilal, Muhammad; Kang, Shin-Gak (2014). Time Aware Least Recent Used (TLRU) cache management policy in ICN. 16th International Conference on Advanced Communication Technology. pp. 528–532. arXiv:1801.00390. doi:10.1109/ICACT.2014.6779016. ISBN 978-89-968650-3-2.
- Bilal, Muhammad; et al. (2017). "A Cache Management Scheme for Efficient Content Eviction and Replication in Cache Networks". IEEE Access. 5: 1692–1701. arXiv:1702.04078. doi:10.1109/ACCESS.2017.2669344. S2CID 14517299.
- Bilal, Muhammad; et al. (2019). "Secure Distribution of Protected Content in Information-Centric Networking". IEEE Systems Journal. 14 (2): 1–12. arXiv:1907.11717. doi:10.1109/JSYST.2019.2931813. S2CID 198967720.
- Murphy, Chris (30 May 2011). "5 Lines Of Code In The Cloud". InformationWeek. p. 28. "300 million to 500 million fewer requests a day handled by AccuWeather servers."
- Multiple (wiki). "Web application caching". Docforge. Archived from the original on 12 December 2019. Retrieved 24 July 2013.
- Tyson, Gareth; Mauthe, Andreas; Kaune, Sebastian; Mu, Mu; Plagemann, Thomas. Corelli: A Dynamic Replication Service for Supporting Latency-Dependent Content in Community Networks (PDF). MMCN'09. Archived from the original (PDF) on 18 June 2015.
- "Globally Distributed Content Delivery, by J. Dilley, B. Maggs, J. Parikh, H. Prokop, R. Sitaraman and B. Weihl, IEEE Internet Computing, Volume 6, Issue 5, November 2002". Archived from the original on 9 August 2017. Retrieved 25 October 2019.
- "Definition: cloud storage gateway". SearchStorage. July 2014.
- Paul, S.; Fei, Z. (1 February 2001). "Distributed caching with centralized control". Computer Communications. 24 (2): 256–268. CiteSeerX 10.1.1.38.1094. doi:10.1016/S0140-3664(00)00322-4.
- Khan, Iqbal (July 2009). "Distributed Caching on the Path To Scalability". MSDN. 24 (7).

Further reading

- "What Every Programmer Should Know About Memory"
- "Caching in the Distributed Environment"