Does Rust's memory management result in fragmented memory?


Does the Rust programming language's automatic memory management need to reclaim fragmented memory? If so, how does it do this?

From my understanding, its type system (ownership types, borrowing, Rc, Arc) allows it to deterministically know, at compile time, when a chunk of allocated memory can be freed.

But isn't it possible that memory chunks are allocated in one order and freed in a different order, resulting in fragmentation? If this is prevented, how? If it can happen, how are the memory fragments managed efficiently? If they are defragmented, what methodology is used?

TL;DR: Programs do not get fragmentation handled for them in C, C++ or Rust. They have to handle it themselves.


Does the Rust programming language's automatic memory management need to reclaim fragmented memory?

Rust does not have automatic memory management; it has manual memory management whose correctness is checked by the compiler. The difference might sound theoretical, but it is important, because it means that memory operations map directly to source code; there is no magic going on behind the scenes.
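
To make this concrete, here is a minimal sketch (the struct and names are invented for the example) showing that every deallocation corresponds to a precise point in the source code:

    struct Buffer {
        data: Vec<u8>,
    }

    fn main() {
        {
            // Heap allocation: the Vec's storage is requested here.
            let _buf = Buffer { data: vec![0u8; 1024] };
        } // `_buf` goes out of scope: the compiler inserts the deallocation
          // right here, at a point fully determined by the source code.

        let a = Box::new(42); // another allocation
        let b = a;            // ownership moves to `b`; `a` is no longer usable
        drop(b);              // explicit, deterministic deallocation
    }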

In general, a language needs a compacting GC to be able to compact fragmented memory. Rust, like C and C++, does not have a GC, so memory may become fragmented depending on usage, and it cannot be defragmented without the program freeing the annoying blocks, because relocation is impossible.


However, before you start dreading fragmentation, you must first think about what it means.

What is the effect of fragmentation?

Fragmentation causes a waste of both physical memory and address space: the program occupies more than it uses. At the extreme, this waste may prevent allocation requests even though the amount of unused memory should be sufficient to grant them.

When making a parallel with GC'ed languages, it's important to realize that GC'ed languages also cause waste.

Indeed, it's notable that fragmentation is not the only source of waste; over-allocation is a common "issue" too:

  • a Vec will allocate a power-of-2 number of elements, but maybe you only use 2^n + 1, wasting 2^n - 1 slots (see the sketch after this list)
  • a BTreeMap or HashMap will allocate more space than they actually use
  • even the memory allocator allocates chunks of predefined sizes, so asking for 157 bytes may be rounded up to maybe 196 bytes, wasting 39 bytes
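
As a quick illustration of the first bullet, the sketch below pushes 2^10 + 1 elements into a Vec; the exact capacity depends on the standard library's growth strategy, so treat the printed numbers as indicative rather than guaranteed:

    fn main() {
        // Over-allocation in action: a Vec grows its capacity geometrically,
        // so `len` and `capacity` usually differ.
        let mut v = Vec::new();
        for i in 0..1025 {
            v.push(i);
        }
        // With a doubling growth strategy this prints len = 1025, capacity = 2048:
        // 1023 slots are allocated but unused.
        println!("len = {}, capacity = {}", v.len(), v.capacity());

        // The slack can be traded for a reallocation:
        v.shrink_to_fit();
        println!("after shrink_to_fit: capacity = {}", v.capacity());
    }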

And that's not counting the waste going into the management of memory, since the memory allocator maintains some state to know which pages it has and what's used in them.

This underlines an important fact: there's little point in getting rid of fragmentation if you consume just as much memory because of the book-keeping/overhead that the allocation scheme imposes; you would suffer the same issues.


How do modern allocators manage memory?

Modern allocators are not your dad's free-list allocators.

The typical allocation scheme is relatively simple, and yet good at keeping fragmentation down for small requests:

  1. large slabs of memory for "big" requests (close to or over the OS page size: 4KB in general)
  2. small slabs of memory for "smaller" requests

For the small slabs, a number of classes are defined by size. For example: (0, 8], (8, 12], (12, 16], ..., (164, 196], ..., (..., 512]. Each class size manages its own list of OS pages and carves each OS page up for its own private use. An example of the 512-byte class on a 4KB OS page would be:

+---+---+---+---+---+---+---+---+
| a | b | c | d | e | f | g | h |
+---+---+---+---+---+---+---+---+

where the 512-byte slots a through g are available for allocations and the last slot h is reserved for meta-data (free slots, next/prev pages in the same class, etc...). Note that the bigger the class size, the more is wasted in this last slot, which is why larger allocations use a different scheme.

When deallocating, the page is kept in its class size until the last slot is deallocated, at which point the page is blank again and can be used by another group.
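
A rough sketch of that bookkeeping, with made-up class boundaries and the simplifying rule that one slot per page is kept for meta-data (the real allocator's classes and layout differ):

    const PAGE_SIZE: usize = 4096;
    const CLASSES: &[usize] = &[8, 12, 16, 24, 32, 48, 64, 96, 128, 196, 256, 384, 512];

    /// Round a request up to its size class, or None if it is a "big" request.
    fn size_class(request: usize) -> Option<usize> {
        CLASSES.iter().copied().find(|&c| request <= c)
    }

    /// Number of slots carved out of one OS page for a class,
    /// keeping one slot for the page's meta-data.
    fn usable_slots(class: usize) -> usize {
        PAGE_SIZE / class - 1
    }

    fn main() {
        let class = size_class(157).unwrap(); // 157 bytes -> the 196-byte class
        println!("157 bytes is served from the {}-byte class", class);
        println!("one 4KB page holds {} such slots", usable_slots(class));
    }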

Note: by default, Rust executables use jemalloc; the jemalloc paper has specific details about it.


What does this mean for memory consumption?

The maximum memory consumption of the small slabs scheme1, in number of OS pages, can be computed as the sum of the maximum number of OS pages consumed by each bucket size, which is itself the maximum number of concurrent allocations of that size divided by the number of allocations fitting in a page (and rounded up).
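
A small sketch of that computation, with invented numbers and the same simplified one-meta-data-slot-per-page assumption as above:

    const PAGE_SIZE: usize = 4096;

    /// For each (class size, max concurrent allocations) pair, the pages needed
    /// are ceil(max_live / slots_per_page); the total is the sum over classes.
    fn max_pages(classes: &[(usize, usize)]) -> usize {
        classes
            .iter()
            .map(|&(class, max_live)| {
                let slots_per_page = PAGE_SIZE / class - 1; // one slot for meta-data
                (max_live + slots_per_page - 1) / slots_per_page // ceiling division
            })
            .sum()
    }

    fn main() {
        let usage = [(16, 10_000), (196, 1_000), (512, 100)];
        let pages = max_pages(&usage);
        println!("worst case: {} OS pages ({} KB)", pages, pages * PAGE_SIZE / 1024);
    }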

The bound holds because if you allocate 1000 slots of a given size, release some of them in a haphazard fashion that pokes holes in the OS pages, and then re-allocate slots of the same size until you reach 1000 again... the memory consumption stays constant, because the allocator will use the free slots of the partially filled OS pages to fulfill the second wave of allocations.

Which means that allocations in the small class sizes are both fast and yet do not contribute to fragmentation.

Of course, that's ignoring the case of a program that would make 1M 1-byte allocations, then deallocate most of them in a way that leaves all the pages used, then do the same with 2 bytes, 3 bytes, etc... but that seems like a pathological case.

1 Yes, I am lying through my teeth. You also need to account for the allocator's internal structures overhead and the fact that it may cache a few unused OS pages to prepare for future allocations, ... Still, it's sufficient for explaining the effect of fragmentation.


So, is fragmentation an issue?

Well, it can still be one. The address space itself can still be fragmented, though now at the granularity of OS pages.

With virtual memory, the RAM need not be contiguous; as long as there is sufficient space, the pages can be used. That is, the address space gives the user the illusion of contiguous memory even if the memory is physically spread all over the RAM.

And there lies the issue: this illusion of contiguous memory requires finding a contiguous region of the address space, which is subject to fragmentation.

This fragmentation does not show up for small requests, but requests over the page size can be an issue. These days, with 64-bit pointers, it is much less of an issue in practice (even when only 47 bits are used for user-land) than it was in 32-bit programs, where it had more surface: for example, it is extremely difficult to mmap a 2GB file in a 32-bit address space, because it occupies half of it... assuming no stray allocation prevents it (in which case the request will fail).


Is the fight lost?

Well, the main advantage of systems programming is that... you can talk the system's language.

If your program has an allocation behavior that the typical allocator does not handle well, you can:

  • take control of the particular allocations that cause issues (using sbrk/mmap to handle them yourself)
  • or outright rewrite a tuned-up allocator (see the sketch below)
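
For the second option, current Rust lets you plug in your own allocator through the #[global_allocator] attribute. Here is a minimal sketch that merely wraps the system allocator and counts live bytes; the counting is a placeholder for whatever tuning you would actually do:

    use std::alloc::{GlobalAlloc, Layout, System};
    use std::sync::atomic::{AtomicUsize, Ordering};

    /// A trivial wrapper around the system allocator; a real tuned-up allocator
    /// would add pools, size classes, or arena management here.
    struct CountingAllocator;

    static LIVE_BYTES: AtomicUsize = AtomicUsize::new(0);

    unsafe impl GlobalAlloc for CountingAllocator {
        unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
            LIVE_BYTES.fetch_add(layout.size(), Ordering::Relaxed);
            System.alloc(layout)
        }

        unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
            LIVE_BYTES.fetch_sub(layout.size(), Ordering::Relaxed);
            System.dealloc(ptr, layout)
        }
    }

    #[global_allocator]
    static GLOBAL: CountingAllocator = CountingAllocator;

    fn main() {
        let v = vec![0u8; 1024];
        println!("roughly {} bytes live", LIVE_BYTES.load(Ordering::Relaxed));
        drop(v);
    }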

In 10 years, I've never needed to, and I have only written allocators for fun in my spare time; but it's possible.

