Re: memory fragmentation with malloc



On Wed, 1 Aug 2007 11:57:28 +0200, "G. Maly" <gemal@xxxxxxxxxxxxxx> wrote:
> seems to be optimized in terms of speed. umem will assume that the
> 999999 freed blocks will be used again soon and cache them, but not
> give them back to the OS. Isn't it?

libumem is a bit more complex than the description above.

There is an integrated 'cache reaping' mechanism, which runs either on a
time-based interval or when the heap needs to be grown. This cache
reaping mechanism tries to free up enough resources to satisfy
allocation requests from already freed (but cached) objects.

So libumem won't give freed (but cached) areas "back to the OS", but it
will happily reuse them to satisfy future allocation requests blazingly
fast.

> That happy reuse doesn't help if the application will NOT need such a
> large amount of memory any more, or at least not any time soon.
> Assume an application which runs
> 1) short phases with large memory consumption and
> 2) long phases with small memory consumption.
>
> It would be preferable if virtual memory were reduced during phase 2.
> The question was: which memory allocator supports that?

Since cache reaping is not based solely on memory demand/pressure but is
also time-triggered after a configurable number of seconds, libumem
*will* return unused memory from its cache once the time-based reap
happens.

Doesn't that cover the (1) and (2) phases described above?



