6.4 Fixed Sized Buffer Pattern
Many real-time and embedded systems are complex enough that the order in which memory must be allocated is unpredictable, and too complex to allocate enough memory for all possible worst cases. Such systems would be relatively simple to design using dynamic memory allocation. However, many such systems in the real-time and embedded world must function reliably for long periods of time, often years or even decades, between reboots. That means that while they are complex enough to require dynamic allocation of memory in an unpredictable order, they cannot tolerate one of the major problems associated with dynamic allocation: fragmentation. For such systems, the Fixed Sized Buffer Pattern offers a viable solution: fragmentation-free dynamic memory allocation at the cost of somewhat less optimal memory usage.
6.4.1 Abstract
The Fixed Sized Buffer Pattern provides an approach for true dynamic memory allocation that does not suffer from one of the major problems affecting most such systems: memory fragmentation. It is a pattern directly supported by most real-time operating systems. Although it requires static memory analysis to minimize nonoptimal memory usage, it is a simple, easy-to-implement approach.
6.4.2 Problem
One of the key problems with dynamic memory allocation is memory fragmentation. Memory fragmentation is the random intermixing of free and allocated memory in the heap. For memory fragmentation to occur, the following conditions must be met.
• The order of memory allocation is unrelated to the order in which it is released.
• Memory is allocated from the heap in various sizes.
When these preconditions are met, memory fragmentation will inevitably occur if the system runs long enough. Note that this is not a problem solely of object-oriented systems; functionally decomposed systems written in C are just as affected as those written in C++. The problem is severe enough that it will usually lead to system failure if the system runs long enough. The failure occurs even when analysis has demonstrated that there is adequate memory: if the memory is highly fragmented, there may be more than enough total memory to satisfy a request, but not in a contiguous block of adequate size. When this occurs, the allocation request fails even though there is enough total memory to satisfy it.
6.4.3 Pattern Structure
There are two ways to fix dynamic allocation so that it does not lead to fragmentation: (1) correlate the order of allocation and deallocation, or (2) do not allow memory to be allocated in any but a few specific block sizes. The basic concept of the Fixed Sized Buffer Pattern is the latter: rather than allowing memory to be allocated in arbitrary block sizes, allocation is limited to a set of specific block sizes.
Imagine a system in which you can determine the worst case of the total number of objects needed (similar to computing the worst-case memory allocation) as well as the largest object size needed. If the entire heap were divided into blocks equal in size to the largest block ever needed, then you could guarantee that if any memory is available at all, the memory request can be fulfilled.
The cost of such an approach is inefficient use of the available memory. Even if only a single byte were needed, a worst-case block would be allocated, wasting most of the space within the block. If object sizes were randomly distributed between 1 byte and the worst case, then, overall, half of the heap memory would be wasted when the heap was fully allocated. Clearly, this is wasteful, but the advantage of this approach is compelling: There will never be a failure due to fragmentation of memory.
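To make the "half wasted" figure concrete, here is a back-of-the-envelope calculation under the stated assumption that request sizes are uniformly distributed between 1 byte and the block size \(B\):

\[
E[\text{waste}] \;=\; B - E[\text{request}] \;=\; B - \frac{B + 1}{2} \;\approx\; \frac{B}{2}
\]

That is, roughly half of each block, and therefore roughly half of a fully allocated heap, goes unused.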
To minimize this waste, the Fixed Sized Buffer Pattern provides a finite set of fixed-sized heaps, each of which offers blocks of a single size. Static analysis of the system can usually reveal a reasonable allocation of memory to the various-sized heaps. Memory is then allocated from the heap with the smallest block size that can fulfill the request. This compromise requires more analysis at design time but allows the designer to "tune" the available heap memory to minimize waste. Figure 6-5 shows the basic Fixed Sized Buffer Pattern.
Figure 6-5: Fixed Sized Buffer Pattern
6.4.4 Collaboration Roles
Client
The Client is the user of the objects allocated from the fixed sized heaps, creating and deleting objects as needed. In C++ this can be done transparently by overloading the global new and delete operators. In other languages, it may be necessary to explicitly call ObjectFactory.new() and ObjectFactory.delete().
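A minimal C++ sketch of that transparency is shown below. FixedBlockAllocator is a hypothetical stand-in for the pattern's Object Factory and Heap Manager; to keep the sketch self-contained it simply falls back to malloc/free, where a real implementation would round the request up to a fixed block size and use a Sized Heap. (The array forms, operator new[] and operator delete[], would be replaced the same way.)

#include <cstddef>
#include <cstdlib>
#include <new>

// Hypothetical front end for the pattern's Heap Manager. A real version
// would take a block from the smallest Sized Heap that can hold 'size'.
namespace FixedBlockAllocator {
    inline void* allocate(std::size_t size) { return std::malloc(size); }
    inline void  release(void* block)       { std::free(block); }
}

// Overloading the global operators hides the sized heaps from Clients:
// ordinary new and delete expressions now route through the pattern.
void* operator new(std::size_t size) {
    if (void* p = FixedBlockAllocator::allocate(size)) return p;
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept {
    FixedBlockAllocator::release(p);
}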
Free Block List
This is a list of the unallocated blocks of memory within a single Memory Segment.
Heap Manager
The Heap Manager manages the Sized Heaps. When a request is made for a block of memory for an object, it determines the appropriate Sized Heap from which to request it. When memory is released, the Heap Manager can check the address of the memory block to determine which Memory Segment (and hence which Free Block List) it should be added back into.
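That selection logic might be sketched as follows. The SizedHeap and HeapManager classes are illustrative names, not part of any particular RTOS, and the free-list handling inside SizedHeap is just enough to make the sketch complete.

#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Illustrative Sized Heap: one block size, one Memory Segment, and the
// segment's Free Block List threaded through the unused blocks themselves.
struct SizedHeap {
    std::size_t    blockSize;
    std::uintptr_t segmentStart;   // first address of the Memory Segment
    std::uintptr_t segmentEnd;     // one past the last address
    void*          freeList = nullptr;

    void* allocateBlock() {
        void* block = freeList;
        if (block) freeList = *static_cast<void**>(block);   // pop the head
        return block;
    }
    void releaseBlock(void* block) {
        *static_cast<void**>(block) = freeList;               // push onto head
        freeList = block;
    }
};

class HeapManager {
public:
    explicit HeapManager(std::vector<SizedHeap*> heaps)   // sorted smallest block first
        : heaps_(std::move(heaps)) {}

    // Allocate from the smallest Sized Heap whose blocks can hold 'size'.
    void* allocate(std::size_t size) {
        for (SizedHeap* h : heaps_) {
            if (h->blockSize >= size) {
                if (void* block = h->allocateBlock()) return block;
                // This heap is exhausted; try the next larger block size.
            }
        }
        return nullptr;   // no sufficiently large block is free
    }

    // On release, the block's address identifies its Memory Segment.
    void release(void* block) {
        auto addr = reinterpret_cast<std::uintptr_t>(block);
        for (SizedHeap* h : heaps_) {
            if (addr >= h->segmentStart && addr < h->segmentEnd) {
                h->releaseBlock(block);
                return;
            }
        }
    }

private:
    std::vector<SizedHeap*> heaps_;
};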
Memory Segment
A Memory Segment is a block of memory divided into equal-sized blocks, which may be allocated or unallocated. Only the free blocks must be listed, though. When memory is released, it is added back into the free list. The Memory Segment has attributes that provide the size of the blocks it holds and the starting and ending addresses for the Memory Segment.
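As a sketch of how such a segment might be initialized, the constructor below carves a caller-supplied region of storage into equal-sized blocks and threads each one onto the Free Block List. The class name and interface are illustrative only.

#include <cstddef>
#include <cstdint>

// Illustrative Memory Segment: a contiguous region divided into equal-sized
// blocks. Free blocks are linked into the Free Block List by storing the
// next-pointer inside the unused block itself, so no extra storage is needed.
class MemorySegment {
public:
    // blockSize must be at least sizeof(void*) so a block can hold a link.
    MemorySegment(void* storage, std::size_t storageBytes, std::size_t blockSize)
        : blockSize_(blockSize),
          start_(reinterpret_cast<std::uintptr_t>(storage)),
          end_(start_ + storageBytes),
          freeList_(nullptr) {
        auto* base = static_cast<char*>(storage);
        for (std::size_t i = 0; i < storageBytes / blockSize; ++i) {
            void* block = base + i * blockSize;
            *static_cast<void**>(block) = freeList_;   // link into the free list
            freeList_ = block;
        }
    }

    std::size_t blockSize() const { return blockSize_; }

    // True if an address lies within this segment; used by the Heap Manager
    // to decide which Free Block List a released block belongs to.
    bool contains(const void* addr) const {
        auto a = reinterpret_cast<std::uintptr_t>(addr);
        return a >= start_ && a < end_;
    }

private:
    std::size_t    blockSize_;
    std::uintptr_t start_, end_;   // starting and ending addresses
    void*          freeList_;      // head of the Free Block List
};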
Object Factory
The Object Factory takes over the job of allocating objects on the heap. It does this by obtaining an appropriately sized block of memory from one of the Sized Heaps, mapping the new object's data members into that block, and then calling the object's constructor. Deleting an object reverses this procedure: The object's destructor is called, and then the memory it used is returned to the appropriate Free Block List.
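In C++ this role can be sketched with placement new and an explicit destructor call, as below. The create/destroy names and the heapAllocate/heapRelease functions are hypothetical; the latter stand in for the Heap Manager, and malloc/free are used here only to keep the sketch self-contained.

#include <cstddef>
#include <cstdlib>
#include <new>       // placement new
#include <utility>   // std::forward

// Stand-ins for the Heap Manager; a real implementation would hand out
// fixed-sized blocks from the appropriate Sized Heap instead of malloc.
inline void* heapAllocate(std::size_t size) { return std::malloc(size); }
inline void  heapRelease(void* block)       { std::free(block); }

class ObjectFactory {
public:
    // Obtain a block big enough for T, then construct the object in place.
    template <typename T, typename... Args>
    static T* create(Args&&... args) {
        void* block = heapAllocate(sizeof(T));
        if (!block) return nullptr;
        return new (block) T(std::forward<Args>(args)...);
    }

    // Reverse the procedure: run the destructor, then return the block.
    template <typename T>
    static void destroy(T* obj) {
        if (!obj) return;
        obj->~T();
        heapRelease(obj);
    }
};

// A Client would then write, for example:
//   Widget* w = ObjectFactory::create<Widget>(/* constructor args */);
//   ...
//   ObjectFactory::destroy(w);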
Sized Heap
A Sized Heap manages the free and allocated blocks from a single Memory Segment. When a block is allocated, it returns a reference to that block and moves the block to the allocated list; when an allocated block is passed back to be freed, it moves the block to the free list so that subsequent requests can use it.
6.4.5 Consequences
The use of this pattern eliminates memory fragmentation. However, the pattern is suboptimal in terms of total allocated memory because more memory is allocated than is actually used. Assuming requested sizes are randomly distributed, on average half of the allocated memory is wasted. The use of Sized Heaps with appropriately sized blocks can alleviate some of this waste but cannot eliminate it. Many RTOSs support fixed sized block allocation out of the box, simplifying the implementation.
6.4.6 Implementation Strategies
If you use an RTOS, then most of the pattern is provided for you by the underlying RTOS. In that case, you need to perform an analysis to determine the best allocation of your free memory into heaps of various block sizes. If you overload the global new and delete operators so that they use the Object Factory rather than the default operators, then the use of sized heaps can be hidden entirely from the clients.
6.4.7 Related Patterns
This pattern allows true dynamic allocation but without the problems of memory fragmentation. The issues of nondeterministic timing are minimized but still present. However, there is no protection against memory leaks (clients neglecting to release memory they no longer need), inappropriate access to released memory, or the potentially critical issue of wasted memory. In simpler cases, the pooled allocation or even static allocation patterns may be adequate. If time predictability is not a major issue, then the Garbage Collector pattern may be a better choice, since it does protect against memory leaks.
6.4.8 Sample Model
Figure 6-6a shows a structural example of an instance of this pattern. In this case, there are three Sized Heaps: one with 128-byte blocks, one with 256-byte blocks, and one with 1024-byte blocks. Figure 6-6b presents a scenario in which a small object is allocated, followed by a larger object. After this, the smaller object is deleted.
Figure 6-6: Fixed Sized Buffer Pattern Example
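To tie the sample model to code, the short sketch below captures the selection rule implied by Figure 6-6a and walks through the Figure 6-6b scenario. The request sizes (100 and 600 bytes) are illustrative, and servingBlockSize is a hypothetical helper, not part of the pattern's required interface.

#include <cstddef>

// Block sizes from Figure 6-6a; the Heap Manager allocates from the
// smallest Sized Heap whose blocks can hold the request.
constexpr std::size_t kBlockSizes[] = {128, 256, 1024};

// Returns the block size that would serve a request of n bytes,
// or 0 if the request exceeds the largest block (an allocation failure).
constexpr std::size_t servingBlockSize(std::size_t n) {
    for (std::size_t s : kBlockSizes) {
        if (s >= n) return s;
    }
    return 0;
}

// The Figure 6-6b scenario: a small object, then a larger one.
static_assert(servingBlockSize(100) == 128,  "small object: 128-byte heap");
static_assert(servingBlockSize(600) == 1024, "larger object: 1024-byte heap");
// Deleting the small object afterward simply returns its 128-byte block to
// that heap's Free Block List; no fragmentation can result.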