Re: OT: Cache memory
- From: Johnny Billquist <bqt@xxxxxxxxxx>
- Date: Fri, 23 Sep 2011 15:10:10 +0200
On 2011-09-23 04.21, glen herrmannsfeldt wrote:
Rich Alderson<news@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:
koehler@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (Bob Koehler) writes:
PDP-10 implementations varied. But according to the hardware
architecture manual (it's been about 28 years since I read it)
the "accumulators" (addresses 0-17 octal) are memory addresses,
but are referenced so often that they are almost always in cache.
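The overmap being discussed can be sketched roughly as follows. This is an illustrative Python model, not actual PDP-10 microcode: a reference to addresses 0-17 octal resolves to the fast accumulator file instead of core memory.

```python
# Illustrative sketch of register overmap (not real PDP-10 microcode):
# the 16 accumulators shadow memory addresses 0-17 octal.
NUM_ACS = 0o20                       # 16 decimal accumulators

registers = [0] * NUM_ACS            # fast accumulator file
core = {}                            # main (core) memory, sparse dict

def read(addr):
    """Memory read: low addresses overmap onto the accumulators."""
    if addr < NUM_ACS:
        return registers[addr]       # served from the register file
    return core.get(addr, 0)         # served from core memory

def write(addr, value):
    """Memory write: same overmap applies."""
    if addr < NUM_ACS:
        registers[addr] = value
    else:
        core[addr] = value
```

Note the fixed boundary: only those 16 addresses ever get the fast path, which is the crux of the disagreement below.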
Only in late-model PDP-10s. The PDP-6, KA-10, KI-10,
and original KL-10 processors do not have cache.
Depending on your definition of cache.
Some consider the registers (in computer architecture terms) a
cache for the low memory addresses. They run faster than main
memory. Can that be called a cache?
I would not call that a cache. The fact that processor registers overmap
some memory addresses does not make them a cache. A cache should be
available for all of memory, or else it's not a cache, just some faster
storage. You might as well ask whether you have a cache if you have two
memory boxes on the CPU and one of them has faster memory than the other.
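The distinction being drawn is that a real cache can hold data from any memory address. A minimal direct-mapped cache sketch (purely illustrative, not modeled on any specific machine) shows the property the register overmap lacks:

```python
# Minimal direct-mapped cache: any address can be cached, unlike the
# fixed register overmap, which only ever covers 16 low addresses.
NUM_LINES = 8

class DirectMappedCache:
    def __init__(self, backing):
        self.backing = backing               # dict: addr -> value
        self.tags = [None] * NUM_LINES       # which address owns each line
        self.data = [0] * NUM_LINES
        self.hits = self.misses = 0

    def read(self, addr):
        line = addr % NUM_LINES              # index bits select the line
        tag = addr // NUM_LINES              # tag bits identify the address
        if self.tags[line] == tag:
            self.hits += 1                   # hit: served from the cache
        else:
            self.misses += 1                 # miss: fill line from memory
            self.tags[line] = tag
            self.data[line] = self.backing.get(addr, 0)
        return self.data[line]
```

Here address 100 and address 1000000 are equally cacheable; the second reference to either is a hit. The register overmap gives the fast path only to a fixed handful of addresses, which is why it reads more like a second, faster memory box than a cache.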
... sequential access patterns, brute force - neither of us consider that interesting ... Perhaps we should lose the cache line orientation - transferring data bytes that aren't needed. ... Particularly if it has scatter/gather vector instructions like Larrabee, or if it is a CIMT coherent threaded architecture like the GPUs. ... As I have discussed in this newsgroup before, this allows us to have writeback caches where multiple processors can write to the same memory location simultaneously. ...