Re: memory fragmentation with malloc

On Wed, 1 Aug 2007 11:57:28 +0200, "G. Maly" <gemal@xxxxxxxxxxxxxx> wrote:
seems to be optimized in terms of speed. umem will assume that the
999999 freed blocks will be used again soon and cache them, but not
give them back to the OS. Is that right?

libumem is a bit more complex than the description above.

There is an integrated 'cache reaping' mechanism, which runs either on a
time-based interval or when the heap needs to be grown. This reaping
pass tries to free up enough resources to satisfy pending allocation
requests out of objects that have already been freed but are still held
in the caches.

So libumem won't give freed (but cached) areas "back to the OS", but it
will happily reuse them to satisfy future allocation requests blazingly
fast.
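
To make that concrete, here is a minimal sketch of the scenario from the
original question. The program itself is plain C and knows nothing about
libumem; the allocator is swapped in at run time via LD_PRELOAD. Block
count, block size and the program name are just example values I picked.
The expectation is that round 2 is satisfied from the buffers libumem
cached when round 1 freed them, so the heap does not have to grow a
second time; compare the pmap output after each round.

/*
 * Sketch of the scenario from the original question (all names and
 * numbers here are made-up example values, not libumem defaults):
 * allocate a large number of equal-sized blocks, free almost all of
 * them, then allocate again.  Under libumem the freed buffers stay in
 * its object caches, so round 2 should be satisfied from those caches
 * instead of growing the heap again.
 *
 * Build:   cc -o cachedemo cachedemo.c
 * Run:     LD_PRELOAD=libumem.so.1 ./cachedemo
 * Observe: pmap -x `pgrep cachedemo`
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NBLOCKS 1000000		/* matches the "999999 freed blocks" case */
#define BLKSZ   64		/* arbitrary block size */

static void *blk[NBLOCKS];

int
main(void)
{
	int i;

	/* round 1: first allocation burst -- the heap has to grow for this */
	for (i = 0; i < NBLOCKS; i++)
		blk[i] = malloc(BLKSZ);

	/* free all but one block */
	for (i = 1; i < NBLOCKS; i++)
		free(blk[i]);

	printf("round 1 done, %d blocks freed; inspect the heap now\n",
	    NBLOCKS - 1);
	sleep(30);	/* time to look at the process with pmap -x */

	/* round 2: expected to be served from libumem's caches, not new heap */
	for (i = 1; i < NBLOCKS; i++)
		blk[i] = malloc(BLKSZ);

	printf("round 2 done; the heap should not have grown much further\n");
	sleep(30);
	return (0);
}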

It will not be able to reuse them happily if the application does NOT
need such a large amount of memory any more, or at least not any time
soon. Assume an application that runs
1) short phases with large memory consumption and
2) long phases with small memory consumption.

It would be preferable if the virtual memory footprint were reduced
during phase 2. The question was: which memory allocator supports that?

Since cache reaping is not based only on memory demand/pressure but is
also time-triggered after a configurable number of seconds, libumem
*will* return unused memory from its caches once the time-based reap
runs.

Doesn't that cover phases (1) and (2) described above?
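
For what it's worth, here is a small sketch of the phase-1/phase-2
workload you describe, which should let you check exactly that. Every
constant, the program name and the observation commands are just
examples I picked, not anything libumem prescribes. Run it under
LD_PRELOAD=libumem.so.1 and watch the process size during the quiet
phase; if the time-triggered reap returns the cached memory, the size
should drop some way into phase 2.

/*
 * Sketch of the two-phase workload described above (again, every
 * constant and name is an arbitrary example value):
 *   phase 1: short burst with large memory consumption
 *   phase 2: long, quiet stretch with small memory consumption
 *
 * Run:     LD_PRELOAD=libumem.so.1 ./phasedemo
 * Observe: prstat -p `pgrep phasedemo`   (or pmap -x) during phase 2
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BIG_BLOCKS	100000		/* ~400 MB working set in phase 1 */
#define BIG_SIZE	4096
#define PHASE2_SECS	600		/* length of the quiet phase */

static void *big[BIG_BLOCKS];

int
main(void)
{
	int i, t;

	/* phase 1: short burst with large memory consumption */
	for (i = 0; i < BIG_BLOCKS; i++) {
		if ((big[i] = malloc(BIG_SIZE)) == NULL)
			break;			/* out of memory: keep what we got */
		memset(big[i], 0xab, BIG_SIZE);	/* touch the pages so they are resident */
	}
	for (i = 0; i < BIG_BLOCKS; i++)
		free(big[i]);			/* free(NULL) is harmless for any tail */
	printf("phase 1 finished, entering the quiet phase\n");

	/* phase 2: long stretch with small memory consumption */
	for (t = 0; t < PHASE2_SECS; t++) {
		void *p = malloc(128);		/* token small allocation */
		free(p);
		sleep(1);
	}
	return (0);
}

If the size does not come down on its own, forcing a reap could also be
worth a try; if I remember correctly libumem exports a umem_reap(void)
function in <umem.h>, but please double-check the header on your
release, I am not sure it is a committed/documented interface.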
