A very (very) noob understanding of the M1 architecture - as well as other rando projects. All contributions and corrections are welcome.


  1. Least Recently Used (LRU):

LRU is a popular cache replacement policy. It removes the least recently used item when the cache is full. This policy relies on the principle that recently accessed data is more likely to be accessed again soon.

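As a minimal sketch of the idea in Python (the `LRUCache` name and API are illustrative, not something from this repo): `OrderedDict` remembers insertion order, and `move_to_end()` marks an entry as most recently used.

```python
from collections import OrderedDict

# Minimal LRU sketch: OrderedDict keeps entries in order, and
# move_to_end() marks an entry as most recently used.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # key is now most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used
```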

  2. FIFO (First-In, First-Out):

In a FIFO cache replacement policy, the first item added to the cache is the first to be removed when the cache is full. It follows the same principle as a queue.

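A small illustrative sketch, pairing a dict with a `deque` that records insertion order (names are made up for the example):

```python
from collections import deque

# FIFO sketch: the deque records insertion order; the oldest key is
# evicted first, no matter how recently or often it was accessed.
class FIFOCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.order = deque()
        self.data = {}

    def get(self, key):
        return self.data.get(key)              # reads don't affect eviction order

    def put(self, key, value):
        if key not in self.data:
            if len(self.data) >= self.capacity:
                oldest = self.order.popleft()  # first in, first out
                del self.data[oldest]
            self.order.append(key)
        self.data[key] = value
```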

  3. MRU (Most Recently Used):

MRU is the opposite of LRU: it removes the most recently used item when the cache is full. This policy assumes that the most recently accessed data is the least likely to be needed again soon, which can hold for access patterns such as cyclic scans over data larger than the cache.

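A rough sketch with the same illustrative interface, but evicting the most recently touched key instead of the oldest:

```python
# MRU sketch: remember the most recently touched key and evict it
# (rather than the least recently used one) when the cache is full.
class MRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.most_recent = None

    def get(self, key):
        if key in self.data:
            self.most_recent = key
        return self.data.get(key)

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            del self.data[self.most_recent]    # evict the most recently used
        self.data[key] = value
        self.most_recent = key
```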

  4. LFU (Least Frequently Used):

LFU removes the item that has been accessed the fewest times. This policy assumes that data accessed infrequently in the past is less likely to be accessed in the future.

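An illustrative sketch that keeps a per-key access count. The linear `min()` scan is for clarity only; real LFU implementations keep frequency buckets so eviction is O(1).

```python
# LFU sketch: count accesses per key and evict the key with the
# smallest count (ties broken arbitrarily here).
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.counts = {}

    def get(self, key):
        if key in self.data:
            self.counts[key] += 1
        return self.data.get(key)

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)  # least frequently used
            del self.data[victim]
            del self.counts[victim]
        self.data[key] = value
        self.counts[key] = self.counts.get(key, 0) + 1
```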

  5. Random Replacement:

In this strategy, a random item is selected for replacement. While simple and cheap to implement in hardware, it doesn't consider usage patterns and may not be as effective as the other policies.

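The corresponding sketch is almost trivial, which is exactly the appeal:

```python
import random

# Random replacement sketch: when full, evict any resident key.
# No usage tracking at all, so bookkeeping is essentially free.
class RandomCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = random.choice(list(self.data))
            del self.data[victim]
        self.data[key] = value
```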

  6. Optimal Replacement:

The optimal replacement policy, also known as "Belady's algorithm," removes the item that will not be used for the longest time in the future. While this is theoretically the most efficient policy, it isn't practical in real systems because it requires knowledge of future accesses; it is mainly used offline as a yardstick for other policies.

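Because it needs the entire future access sequence, Belady's policy can only be simulated offline. A sketch that counts misses for a given trace and capacity (the function name is made up for illustration):

```python
# Belady's optimal replacement, simulated offline over a known trace:
# evict the resident key whose next use is farthest in the future
# (a key never used again is the ideal victim).
def belady_misses(trace, capacity):
    cache, misses = set(), 0
    for i, key in enumerate(trace):
        if key in cache:
            continue
        misses += 1
        if len(cache) >= capacity:
            future = trace[i + 1:]
            victim = max(cache, key=lambda k: future.index(k)
                         if k in future else float("inf"))
            cache.remove(victim)
        cache.add(key)
    return misses

print(belady_misses(["a", "b", "c", "a", "d", "b"], 3))  # 4 misses
```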

  7. Write-back and Write-through:

These are strategies for managing writes to the cache. Write-back means that data is written only to the cache and is copied to main memory later, typically when the line is evicted. Write-through means that data is written to both the cache and main memory at the same time.

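A toy sketch contrasting the two, with a plain dict standing in for main memory and a dirty set marking write-back lines that memory hasn't seen yet:

```python
# Write-through: every write goes to both the cache and "memory".
class WriteThroughCache:
    def __init__(self, memory):
        self.memory = memory
        self.data = {}

    def write(self, addr, value):
        self.data[addr] = value
        self.memory[addr] = value              # memory updated immediately

# Write-back: writes stay in the cache; memory is updated on eviction.
class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory
        self.data = {}
        self.dirty = set()

    def write(self, addr, value):
        self.data[addr] = value
        self.dirty.add(addr)                   # memory not touched yet

    def evict(self, addr):
        if addr in self.dirty:
            self.memory[addr] = self.data[addr]  # write back now
            self.dirty.discard(addr)
        del self.data[addr]
```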

  8. Cache Coherence Protocols:

In multi-processor systems, cache coherence protocols (such as MESI) are used to maintain consistency between caches. These protocols ensure that multiple processors see a consistent view of memory.

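A deliberately oversimplified, invalidation-flavoured sketch of the idea (real protocols like MESI track several states per line and exchange these messages over the bus or interconnect):

```python
# Toy invalidation-based coherence: when one core writes a line,
# every other core drops its copy, so nobody keeps reading stale data.
class Core:
    def __init__(self):
        self.lines = {}                        # addr -> (state, value)

    def invalidate(self, addr):
        self.lines.pop(addr, None)             # drop the stale copy

def write(cores, writer, addr, value):
    for core in cores:
        if core is not writer:
            core.invalidate(addr)              # "snoop": others lose their copies
    writer.lines[addr] = ("M", value)          # writer now holds it Modified

cores = [Core(), Core()]
cores[0].lines[0x40] = ("S", 1)                # both start with a Shared copy
cores[1].lines[0x40] = ("S", 1)
write(cores, cores[0], 0x40, 2)
print(cores[1].lines)                          # {} -> core 1 must re-fetch
```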

  9. Cache Prefetching:

This is a technique where the cache predicts which data will be accessed next and loads it into the cache before it's actually requested.

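A toy next-line prefetcher: on a miss it also fetches the following block, guessing the access pattern is sequential. The block size and `fetch` callback are assumptions made for the sketch; real prefetchers detect strides and more complex patterns.

```python
BLOCK = 64  # assumed cache-line size in bytes

# Next-line prefetch: a miss on block N also pulls in block N+1.
class PrefetchingCache:
    def __init__(self, fetch):
        self.fetch = fetch                     # callback that loads one block
        self.blocks = {}

    def read(self, addr):
        blk = addr // BLOCK
        if blk not in self.blocks:
            self.blocks[blk] = self.fetch(blk)           # demand fetch
            self.blocks[blk + 1] = self.fetch(blk + 1)   # speculative prefetch
        return self.blocks[blk]

cache = PrefetchingCache(fetch=lambda blk: f"block {blk}")
cache.read(0)
print(1 in cache.blocks)  # True: block 1 arrived before being requested
```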

  10. Cache Associativity:

This determines how memory blocks are mapped to locations in the cache: direct-mapped (each block has exactly one possible line), set-associative (each block maps to one of N lines in a set), or fully associative (a block can be placed in any line).

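The mapping boils down to index arithmetic. A sketch with assumed sizes, where direct-mapped is just the 1-way case and fully associative is the single-set case:

```python
BLOCK_SIZE = 64  # assumed line size in bytes

# set index = (block number) mod (number of sets); the number of sets
# shrinks as associativity (ways per set) grows.
def cache_set(addr, cache_size, ways):
    num_sets = cache_size // (BLOCK_SIZE * ways)
    return (addr // BLOCK_SIZE) % num_sets

addr = 0x12345
print(cache_set(addr, 32 * 1024, 1))   # direct-mapped: exactly one possible line
print(cache_set(addr, 32 * 1024, 8))   # 8-way: one of 8 lines in this set
```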