(Replying to PARENT post)

The musl remark is funny, because jemalloc's use of pretty fine-grained arenas sometimes leads to better memory utilisation through reduced fragmentation. For instance, Aerospike couldn't fit in available memory under (admittedly old) glibc, and switching to jemalloc fixed the issue: http://highscalability.com/blog/2015/3/17/in-memory-computin...

And this is not a one-off: https://hackernoon.com/reducing-rails-memory-use-on-amazon-l... https://engineering.linkedin.com/blog/2021/taming-memory-fra...

jemalloc also has extensive observability / debugging capabilities, which can provide a useful global view of the system; for instance, it has been used to debug memory leaks in JNI-bridge code: https://www.evanjones.ca/java-native-leak-bug.html https://technology.blog.gov.uk/2015/12/11/using-jemalloc-to-...
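
As a minimal sketch of the observability side (the leak-hunting posts above go further and use the heap profiler, i.e. MALLOC_CONF's prof options plus the jeprof tool), assuming a binary linked against an unprefixed jemalloc build:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <jemalloc/jemalloc.h>   /* unprefixed build; link with -ljemalloc */

    int main(void) {
        void *p = malloc(1 << 20);              /* some allocation activity */

        /* Refresh jemalloc's cached statistics, then dump the full
           per-arena / per-size-class report to stderr. */
        uint64_t epoch = 1;
        size_t sz = sizeof(epoch);
        mallctl("epoch", &epoch, &sz, &epoch, sz);
        malloc_stats_print(NULL, NULL, NULL);

        /* Individual counters are also queryable, e.g. total live bytes. */
        size_t allocated, len = sizeof(allocated);
        if (mallctl("stats.allocated", &allocated, &len, NULL, 0) == 0)
            printf("allocated: %zu bytes\n", allocated);

        free(p);
        return 0;
    }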

👤masklinn 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

Yes, almost everybody who looks at memory usage in production will eventually discover glibc's memory fragmentation issues. This is how I learned about this topic.

Setting the env var MALLOC_MMAP_THRESHOLD_=65536 usually solves these problems instantaneously.
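
(The same tuning can also be done from inside the program; a minimal glibc-specific sketch via mallopt(3), using the same 64 KiB value:)

    #include <malloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* In-process equivalent of MALLOC_MMAP_THRESHOLD_=65536: allocations
           of 64 KiB and up are served by mmap() and handed back to the kernel
           on free(), instead of sitting on (and fragmenting) the main heap.
           Setting the threshold explicitly also switches off glibc's dynamic
           threshold growth. */
        if (mallopt(M_MMAP_THRESHOLD, 64 * 1024) != 1) {
            fprintf(stderr, "mallopt(M_MMAP_THRESHOLD) failed\n");
            return 1;
        }

        void *big = malloc(256 * 1024);   /* goes through mmap */
        free(big);                        /* unmapped immediately */
        return 0;
    }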

Most programmers don't seem to bother understanding what is actually going on (which is how one arrives at the above solution) and instead just go with "we switched to jemalloc and it fixed the issue".

(I have no opinion yet on whether jemalloc is better or worse than glibc malloc. Both have tunables, and both create problematic corner cases when those tunables are not set appropriately. That jemalloc has /more/ tunables, and more observability / debugging features, is a point in its favour for those who read the documentation. For users who "just want low memory usage", both libraries' defaults look bad, and the musl attitude seems like the best default: running out of memory crashes the program, whereas a less speed-optimised allocator merely makes it some percent slower.)
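
To make the "tunables" point concrete on the jemalloc side (a hedged sketch; option names as in jemalloc 5.x, unprefixed build), an application can bake memory-frugal defaults into itself via the documented malloc_conf global instead of the MALLOC_CONF environment variable:

    #include <stdlib.h>

    /* Read by jemalloc when the allocator initialises (MALLOC_CONF and
       /etc/malloc.conf are consulted as well); silently unused if the
       binary is in fact linked against glibc malloc. One arena and zero
       decay time trade some speed for lower resident memory. */
    const char *malloc_conf = "narenas:1,dirty_decay_ms:0,muzzy_decay_ms:0";

    int main(void) {
        void *p = malloc(4096);
        free(p);
        return 0;
    }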

👤nh2 🕑2y 🔼0 🗨️0