Mem Ops Tutorial
Mem Ops is an open source Java toolkit providing memory allocation tools for applications that require steady state memory consumption. By steady state I mean memory consumption that is as constant as possible, with as little garbage collection as possible. Mem Ops is developed by Nanosai, which I am a co-founder of.
Mem Ops provides a small set of tools for this purpose, which are covered throughout this tutorial.
Mem Ops Motivation
Java does not enable you to allocate and free memory at will. If your application needs to allocate and free small blocks of memory rapidly, doing so by instantiating objects or byte arrays will put pressure on the garbage collector. Granted, Java's garbage collectors get better all the time, but that might still not be good enough. The only real way to guarantee that memory consumption remains stable, and that you avoid overly long garbage collection pauses, is to take control of memory allocation yourself.
Since Java does not enable you to control memory allocation and garbage collection directly, you will have to use other constructs to achieve a similar effect. Two such constructs are byte array allocators and object pools. Mem Ops contains both of these constructs, and the byte array allocator in two variations.
Systems that read and write byte sequences at high speeds need to allocate byte arrays for the data. Rather than instantiating Java byte arrays using new byte[size], you can allocate one bigger byte array and then suballocate smaller blocks of bytes from that bigger byte array for the byte sequences the application needs to read and write.
Mem Ops contains two constructs for byte sequence allocation, called "byte array allocators".
By allocating smaller sections of a bigger byte array and managing that allocation and deallocation yourself, you get the following advantages:
- You can control how many bytes the Java VM allocates for this purpose.
- You can control memory defragmentation (garbage collection) of freed byte sequences (blocks).
- You can align the size of the underlying byte array with the sizes of the CPU caches (L1, L2 and L3) of the architecture you are running on. Continued allocation from a byte array that already resides in the L2 or L3 cache can speed up byte access considerably.
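To illustrate the idea of suballocating from one big byte array, here is a minimal sketch of a bump-style suballocator. The class and method names (SimpleByteAllocator, allocate) are hypothetical, chosen for illustration only, and the sketch does not reuse freed blocks the way a real allocator such as Mem Ops' byte array allocators would:

```java
// Illustrative sketch only: hands out block offsets from one large,
// pre-allocated byte array instead of calling new byte[size] per block.
public class SimpleByteAllocator {
    private final byte[] memory;   // one large array allocated up front
    private int writeIndex = 0;    // next free position (bump allocation)

    public SimpleByteAllocator(int capacity) {
        this.memory = new byte[capacity];
    }

    /** Returns the start offset of a block of the given size, or -1 if full. */
    public int allocate(int size) {
        if (writeIndex + size > memory.length) {
            return -1; // out of space; a real allocator would reuse freed blocks
        }
        int offset = writeIndex;
        writeIndex += size;
        return offset;
    }

    /** The shared backing array that all blocks live inside. */
    public byte[] memory() {
        return memory;
    }

    public static void main(String[] args) {
        SimpleByteAllocator allocator = new SimpleByteAllocator(1024);
        int block1 = allocator.allocate(128); // offset 0
        int block2 = allocator.allocate(256); // offset 128
        System.out.println(block1 + " " + block2); // prints "0 128"
    }
}
```

Because all blocks live inside one array that the JVM allocated once, the allocation rate seen by the garbage collector stays flat no matter how many logical blocks the application hands out.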
Systems that need to create large numbers of objects at a rapid pace, but do not need all of these objects at the same time, can benefit from using object pools rather than instantiating the objects using new XYZObject(). Mem Ops contains an ObjectPool too, which you can use to pool and reuse objects.
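To show the pooling idea in isolation, here is a minimal sketch of a generic object pool in plain Java. The names (SimpleObjectPool, take, release) are hypothetical and do not reflect the actual Mem Ops ObjectPool API; consult the Mem Ops documentation for the real class:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Illustrative sketch only: reuses released instances instead of
// instantiating a new object on every request.
public class SimpleObjectPool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public SimpleObjectPool(Supplier<T> factory) {
        this.factory = factory;
    }

    /** Returns a pooled instance if one is free, otherwise creates a new one. */
    public T take() {
        T instance = free.poll();
        return instance != null ? instance : factory.get();
    }

    /** Hands an instance back to the pool so it can be reused later. */
    public void release(T instance) {
        free.push(instance);
    }

    public static void main(String[] args) {
        SimpleObjectPool<StringBuilder> pool =
                new SimpleObjectPool<>(StringBuilder::new);
        StringBuilder first = pool.take();
        pool.release(first);
        StringBuilder second = pool.take(); // reused, not newly created
        System.out.println(first == second); // prints "true"
    }
}
```

Note that a pool like this shifts responsibility to the caller: an instance must be reset to a clean state before (or after) release, otherwise stale data leaks into the next use.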