Talk:Cache replacement policies
This level-5 vital article is rated C-class on Wikipedia's content assessment scale. It is of interest to the Computer science, Computing, and Software WikiProjects.
Wiki Education Foundation-supported course assignment
This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): Advaitjavadekar.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 16:31, 16 January 2022 (UTC)
[Untitled]
How about this formulation: "optimizing instructions implementable in a computer program to manage a (locally) stored cache of information"? rene.bach@isb.admin.ch
Mistake
Shouldn't the latency be multiplied by the hit rate? Otherwise it doesn't seem to be a proper weighted average. — Preceding unsigned comment added by 12.206.133.4 (talk) 06:00, 26 June 2013 (UTC)
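For reference, the weighted average at issue is usually written (symbols here are illustrative, not taken from the article) as

 T_{\mathrm{avg}} = H \cdot T_{\mathrm{hit}} + (1 - H) \cdot T_{\mathrm{miss}}

where H is the hit rate: each latency is weighted by the fraction of accesses that incur it, which appears to be the multiplication the comment above is asking about.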
There's a mistake in the LRU example image. Block E should be replacing A, not D. It is shown (incorrectly) to replace D, then in the next step it is shown to (correctly) have replaced A. --Gamvrosi (talk) 17:53, 16 November 2016 (UTC)
I think the text on the LRU replacement image is still incorrect. After A is replaced by E the "sequence number" of block D is updated to 5, then F is added with sequence number 6. Surely this means the access sequence is A B C D E D F and not, as the article states, A B C D E F? 86.3.11.201 (talk) 18:48, 23 August 2018 (UTC)
Merge
Since both articles describe the same thing, I vote for their merger. --Soumyasch 06:16, 25 February 2006 (UTC)
I vote for not merging - cache algorithms are applicable outside of the operating systems area. —The preceding unsigned comment was added by 213.67.251.34 (talk • contribs) 12:03, 14 April, 2006 (UTC).
These are definitely two different things and should not be merged together. This page helped me get the information easily; however, when I viewed the other page, it would have taken me a lot longer to get the same information. —The preceding unsigned comment was added by Peppage (talk • contribs) 07:44, 26 April, 2006.
I think the articles should merge! The article on page replacement algorithms describes the application of caching algorithms to virtual memory management. It should be a section in this article. —The preceding unsigned comment was added by 129.241.127.122 (talk • contribs) 16:04, 9 May, 2006.
I vote against merging. While caching and paging are very closely related concepts, caching algorithms and page replacement algorithms are not similar. When was the last time you heard of K-way set associative virtual memory? Crispy 04:47, 13 May 2006 (UTC)
- You could certainly make such a virtual memory. You'd have to redesign your MMU to be K-way associative instead of fully associative, but the algorithm would work the same way for virtual memory pages as it does for a CPU cache. Bigpeteb 17:23, 14 September 2006 (UTC)
Maybe best to keep the articles separate and write a small section on page replacement algorithms in this article. 129.241.127.122
These should be separate articles. There are a lot of applications for caching, and many people looking for information about those applications would not know to look for page replacement algorithms. Even if they did, they would have to dig through a complicated example to get the general information. The caching algorithms page should just mention that page replacement algorithms are a good example, and have a link. Funbobby 16:53, 22 May 2006 (UTC)
- ^^. Seriously, LRU != page replacement. Were you people who want to merge the articles even CS majors? If you were, you would know that LRU is merely one candidate among many possibilities and only a part of page replacement in general. How in the heck could that possibly be the same thing? I use a combustion engine in my car... should the articles for a combustion engine and automobile be merged into one?
- Umm. The article you're commenting on is about cache algorithms in general, not any specific algorithm. In general, a cache is a "practical" implementation of "endless storage", and a virtual memory system (which is where page replacement algorithms are used) is a "practical" implementation of "endless memory". Cache item replacement algorithms and page replacement algorithms address the exact same problem: how to avoid throwing things away that you will need again, sooner or later. Where did you get your CS major? ;-) 85.119.130.132 10:16, 17 August 2006 (UTC)
- (but for the record, I don't see the point of merging these; page replacement is specific enough to deserve its own page)
I vote for the merge. The algorithms described on Page replacement algorithm are just specific applications of general cache algorithms. That page certainly goes into more detail, but much of it is spent describing the algorithms. The details of how those particular algorithms are implemented in terms of paging should go in paging or virtual memory, don't you think? (Although "cache algorithm" is a poor name to cover both subjects, since virtual memory isn't exactly a cache. If they're merged I'd vote for changing the name to something else, too.) Bigpeteb 17:23, 14 September 2006 (UTC)
- Yep, if you keep an open mind, paging uses a form of caching to achieve its goal. However, to be practical we should not merge the pages. There are a lot more caches than memory-to-disk (think of a web cache...), so the cache algorithms should be described on a very general level (not using words such as disk or page or CPU). That would be a hardly usable article from the paging perspective, hence we have Page replacement algorithm, which describes those once again in paging terms and provides paging-specific details (with links to the general versions!). --Kubanczyk 19:37, 7 September 2007 (UTC)
Merge or not, the pages Cache algorithms and Page replacement algorithm have no links to each other. This must be fixed. Vegasprof 18:39, 19 November 2006 (UTC)
Summary: since we shouldn't keep those pages tagged forever, I'm closing the discussion. There was no consensus reached, and no real dialogue, so I will just count the votes... Wait a minute... Tadaaaam: Cache algorithms will not be merged with Page replacement algorithm. --Kubanczyk 19:55, 17 September 2007 (UTC)
- I suggest people reconsider the merge. The algorithms described on this page are all cache eviction algorithms, whereas in general, caching algorithms include various other algorithms such as cache coherency, prefetching, cache-write algorithms, and cache-index-related ones (e.g. Bloom filters). This page should be more of an overview of the classes. Current page content should be merged, together with most content of the 'paging' algorithms, into 'cache eviction algorithms'. —Preceding unsigned comment added by 85.64.221.163 (talk) 07:06, 28 November 2008 (UTC)
Difference between LRU and MRU algorithms
As I understand it, the LRU algorithm keeps track of how many times each item has been used, so it can discard the item that has been used the fewest times. The MRU algorithm, on the other hand, just keeps a linked list and moves items to the front when they are accessed and discards from the back - that is, it discards the item that hasn't been used for the longest time.
I am unable to find a good reference to confirm this, but the current description of the MRU is completely nonsensical: why would one want to discard the most recently used item?
Does anyone have a good reference? I am not certain about the correct definitions, just that the current one is wrong. Rasmus (talk) 22:06, 7 January 2009 (UTC)
- I looked a bit more into it, and it seems I am confusing LRU and LFU algorithms. But the section on MRU is clearly wrong. You get an MRU-cache by using an LRU-algorithm to select which item to remove. I will update the article to reflect this. Rasmus (talk) 22:15, 7 January 2009 (UTC)
No. An LRU algorithm discards the least-recently used items. This is different to an MRU algorithm. An MRU algorithm discards the most recently used items. MRU algorithms are useful in cases where you know that your users are scanning a large dataset and so are unlikely to fetch the same item many times in a short period of time. See this USENIX paper: http://www.usenix.org/events/usenix01/full_papers/zhou/zhou_html/index.html and this paper: www.vldb.org/conf/1985/P127.PDF from the Very Large Databases symposium. I've brought back the MRU section, tidied up the confusion in the LRU section and added references. Ade oshineye (talk) 08:26, 8 January 2009 (UTC)
- OK, thanks for the references. I cleaned the section up a bit and moved it underneath the LRU algorithm (since it contrasts itself to that). Rasmus (talk) 20:13, 10 January 2009 (UTC)
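To make the contrast concrete, here is a minimal Java sketch (illustrative only; the class and method names are not from the article or the papers cited above). Both policies track recency identically and differ only in which end of the recency order they evict from:

 import java.util.ArrayDeque;
 import java.util.Deque;
 
 // Both LRU and MRU keep items ordered by recency of use; on access, an
 // item moves to the "most recent" end. They differ only in which end is
 // evicted when the cache is full.
 class RecencyCache<T> {
     private final Deque<T> order = new ArrayDeque<>(); // front = most recent
     private final int capacity;
     private final boolean evictMostRecent; // true = MRU policy, false = LRU
 
     RecencyCache(int capacity, boolean evictMostRecent) {
         this.capacity = capacity;
         this.evictMostRecent = evictMostRecent;
     }
 
     void access(T item) {
         order.remove(item); // O(n) removal; fine for a sketch
         if (order.size() == capacity) {
             if (evictMostRecent) {
                 order.removeFirst(); // MRU: evict the most recently used
             } else {
                 order.removeLast();  // LRU: evict the least recently used
             }
         }
         order.addFirst(item); // item is now the most recently used
     }
 }

Under a sequential scan of a dataset larger than the cache, the LRU setting evicts every item just before it could be reused, while the MRU setting keeps most of the cache intact - which is the behaviour the papers cited above exploit.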
LRU description
What is a "cache-line"? The term is used four times in the paragraph that describes the LRU algorithm but nowhere else, and nowhere is there an explanation of what it is. I'm guessing it basically means a "cache", and an LRU implementation may have several.
Also, I would have expected a description of the "move-to-front" implementation of LRU caches, with all of the items in a linked list and every time one is accessed it's moved to the front of the list (actually, a bunch of pointers are swapped around). If A is ahead of B in the list then A must have been accessed more recently than B (otherwise B would have been moved to the head of the list after A was, and would now be ahead of it). The implication is that every item in the list has been accessed since the most recent access of the item at the tail of the list - i.e., the item at the tail is the least recently used. Those "age bits" would only come up if you had several caches ("cache-lines"?) in the same space, because otherwise you'd not know the relative recencies of the different tails. — Preceding unsigned comment added by 125.237.149.178 (talk) 09:14, 5 January 2017 (UTC)
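For contrast, here is a minimal sketch of the "age bits" approach the article seems to allude to (names are illustrative, not from the article); this is the variant that needs the scan the comment above mentions:

 import java.util.HashMap;
 import java.util.Map;
 
 // "Age bits" LRU: every access stamps the item with a global counter, and
 // eviction scans all entries for the smallest stamp. Compare with the
 // move-to-front list described above, where the victim is simply the tail.
 class AgeStampLru<T> {
     private final Map<T, Long> ageOf = new HashMap<>();
     private final int capacity;
     private long clock = 0;
 
     AgeStampLru(int capacity) { this.capacity = capacity; }
 
     void access(T item) {
         if (!ageOf.containsKey(item) && ageOf.size() == capacity) {
             T oldest = null;
             long oldestStamp = Long.MAX_VALUE;
             for (Map.Entry<T, Long> e : ageOf.entrySet()) { // the scan
                 if (e.getValue() < oldestStamp) {
                     oldestStamp = e.getValue();
                     oldest = e.getKey();
                 }
             }
             ageOf.remove(oldest); // evict the least recently stamped item
         }
         ageOf.put(item, clock++); // stamp with the current "age"
     }
 }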
Code Example
While the code example shows that the normal LinkedHashMap can be used as an LRU cache (by overriding a method), it does not explain anything about the LRU algorithm.
I would rather see a simple implementation of the LRU algorithm itself.
80.218.184.37 (talk) 19:24, 1 August 2009 (UTC)
I've reverted the massive code example because it doesn't actually explain anything about the LRU algorithm. It's the moral equivalent of new LRUCache(), where the algorithm's implementation is hidden away in the code provided by the standard Java libraries. Ade oshineye (talk) 04:35, 2 August 2009 (UTC)
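For readers following this thread: the reverted example was presumably built on the standard LinkedHashMap idiom, which looks roughly like this (a sketch reconstructed from the discussion, not the removed text):

 import java.util.LinkedHashMap;
 import java.util.Map;
 
 // LinkedHashMap maintains access order internally when constructed with
 // accessOrder = true, so an LRU cache only needs to cap the size by
 // overriding removeEldestEntry.
 class LruCache<K, V> extends LinkedHashMap<K, V> {
     private final int capacity;
 
     LruCache(int capacity) {
         super(16, 0.75f, true); // true = order entries by access, not insertion
         this.capacity = capacity;
     }
 
     @Override
     protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
         return size() > capacity; // evict the least recently used entry
     }
 }

This also illustrates the objection above: all of the recency bookkeeping happens inside LinkedHashMap, so the example shows how to obtain an LRU cache in Java rather than how LRU itself works.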
LRU cache has a more efficient implementation
This page says general LRU requires "age bits" and that accessing requires a scan. But it seems that you can implement LRU much more efficiently with a hash map and a doubly linked list.
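A minimal sketch of that O(1) design (illustrative names, not proposed article text): the hash map gives constant-time lookup, and a doubly linked list ordered by recency gives constant-time move-to-front and eviction, so neither age bits nor a scan is needed.

 import java.util.HashMap;
 import java.util.Map;
 
 // O(1) LRU: the hash map finds the node for a key in constant time, and
 // the doubly linked list (head = most recent, tail = least recent) makes
 // move-to-front and eviction constant time as well.
 class FastLru<K, V> {
     private static final class Node<K, V> {
         K key;
         V value;
         Node<K, V> prev, next;
         Node(K key, V value) { this.key = key; this.value = value; }
     }
 
     private final Map<K, Node<K, V>> map = new HashMap<>();
     private final Node<K, V> head = new Node<>(null, null); // sentinel: most recent side
     private final Node<K, V> tail = new Node<>(null, null); // sentinel: least recent side
     private final int capacity;
 
     FastLru(int capacity) {
         this.capacity = capacity;
         head.next = tail;
         tail.prev = head;
     }
 
     V get(K key) {
         Node<K, V> n = map.get(key);
         if (n == null) return null;
         unlink(n);
         linkAfterHead(n); // move to front: now the most recently used
         return n.value;
     }
 
     void put(K key, V value) {
         Node<K, V> n = map.get(key);
         if (n != null) {
             n.value = value;
             unlink(n);
         } else {
             if (map.size() == capacity) {
                 Node<K, V> lru = tail.prev; // least recently used
                 unlink(lru);
                 map.remove(lru.key);
             }
             n = new Node<>(key, value);
             map.put(key, n);
         }
         linkAfterHead(n);
     }
 
     private void unlink(Node<K, V> n) {
         n.prev.next = n.next;
         n.next.prev = n.prev;
     }
 
     private void linkAfterHead(Node<K, V> n) {
         n.next = head.next;
         n.prev = head;
         head.next.prev = n;
         head.next = n;
     }
 }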
Merge with Page replacement algorithm
Most concepts and algorithms used in the management of page caches are applicable to caching in general, and vice versa. For example, the LRU algorithm, which is the reference against which other algorithms are judged, is applicable to page caches, application-specific caching of persistent objects, and CPU caches. Having separate articles naturally leads to duplication and misplacement of content. Right now at least the following algorithms are duplicated between page replacement algorithm and cache replacement algorithm:
- LRU
- FIFO
- CLOCK
- GCLOCK
- CLOCK-Pro
- CLOCK with adaptive replacement
- ARC
- Random
Mario Castelán Castro (talk) 16:55, 17 July 2020 (UTC).
- Tentatively support. My opinion has not changed since the last time a merge was proposed. However, I have the same question: What would you call the merged article? Also, how will the article address implementation differences between caches and paging, or the fact that some algorithms are only used for one or the other? --Bigpeteb (talk) 22:55, 20 July 2020 (UTC)
- Support merging. I'm fine with the merged article being named cache replacement policies or replacement algorithm. Bigpeteb has a good point that some cache replacement policies (such as 2-way set associative) are apparently *only* used for CPU caches of lines backed by main memory. I'm OK with those policies being only very briefly mentioned in this article, with links to some other article that goes into far more detail (currently CPU cache).
- I suppose I might be OK with algorithms that are *only* used in paging being only very briefly mentioned in this article, with links to the page replacement algorithm article for more details.
- However, I don't know of *any* cache policy that is *only* used for in-memory page cache of pages of virtual memory backed by disk. My understanding is that each and every page cache replacement policy is also used for other kinds of cache -- on-disk client-side web cache of documents backed by other servers on the web, relatively "local" content delivery network servers and web proxy servers caching documents backed by the global internet, etc.
- --DavidCary (talk) 00:34, 20 August 2020 (UTC)
I'm going to remove the merge proposal for now. A) The constraints of page replacement and cache replacement policies are very different, and the actual algorithms used are implemented quite differently (most page replacement policies don't use true LRU, for example, because they are far too large and associative for that to be reasonable), and B) it's been months with no action. Hobit (talk) 03:02, 24 March 2021 (UTC)
Source for [9] was moved
Don't know a better way to report this, sorry! — Preceding unsigned comment added by 91.245.11.185 (talk) 19:18, 18 April 2021 (UTC)
Introduction
The current introduction is:
In computing, cache algorithms (also frequently called cache replacement algorithms or cache replacement policies) are optimizing instructions, or algorithms, that a computer program or a hardware-maintained structure can utilize in order to manage a cache of information stored on the computer. Caching improves performance by keeping recent or often-used data items in memory locations that are faster or computationally cheaper to access than normal memory stores. When the cache is full, the algorithm must choose which items to discard to make room for the new ones.
I find this wordy and confusing. Suggest:
In computing, caches improve performance by keeping recent or often-used data items in memory locations that are faster or computationally cheaper to access than normal memory stores. Caches have limited size and thus data must be evicted to make room for new data. The cache replacement algorithm or cache replacement policy selects which data is evicted.
What do you think? David.Monniaux (talk) 20:10, 30 January 2023 (UTC)