A Wonderful Array: How Stacks, Bags, Lists, and Gap Buffers Work

Author: Kevin Nygaard

Arrays are a foundational programming concept used in many data structures, including stacks, bags, and lists. This article explores these data structures and their implementations, and shows how they incrementally evolve into gap buffers: arrays optimized for insertion and removal.

Contents

  1 Arrays
  2 Stacks
  3 Bags and Lists
  4 Performance
  5 Gap Buffers
    5.1 Moving the Gap
    5.2 Implementation
  6 Closing Thoughts

1 Arrays

In programming, the array is a prevalent and fundamental data structure, used both in higher level structures and by itself. It is a thin (practically non-existent) abstraction over program memory, and writing efficient and effective programs comes from a deep understanding of this reality.

This article explores key array concepts and shows their use in four array-based data structures: stacks, bags, lists, and gap buffers.

2 Stacks

The stack data structure holds data like a stack of papers, where pages are laid one on top of another, and the topmost page is visible and readily accessible. The most recently added item is on the top, and oldest item is on the bottom, hence the stack's other monikers: LIFO (last in, first out), and FILO (first in, last out).

An array that only adds and removes elements from one end has this same behavior, and thus, is a stack. The figure below shows adding and removing an element from the "top" of the stack.1

One diagram appending an element to the end of an array, and another removing the last element from an array.
Figure: Inserting and removing an element from the end of an array

In the upper half of the figure, the element "Hello" is added, or pushed, onto the stack. In the lower half, the element "Hello" is removed, or popped, off the stack. The bold numbers indicate the number of valid elements; more specifically, they show the value of the variable NameCount in the implementation below:

#define MaxNameCount 10
int NameCount;
char *Names[MaxNameCount];
void Push(char *Name) {
    if (NameCount >= MaxNameCount) { return; }
    Names[NameCount++] = Name;
}
char *Pop(void) {
    if (NameCount <= 0) { return 0; }
    return Names[--NameCount];
}

Notice how NameCount points to one element past the top of the stack. Taking advantage of this relationship, the post-increment and pre-decrement operators elegantly push and pop elements from the stack, and often mirror the underlying hardware: many processor instruction sets natively support post-increment and pre-decrement addressing modes, which compilers can use during object code generation.

This structure is compact and efficient: all elements are kept contiguous without "bubbles" or invalid cells, making iteration and indexing trivial. However, items must be added and removed from the end. Without this limitation, the stack becomes a more general purpose data structure: either a bag, or a list.

3 Bags and Lists

Bags2 and lists3 are similar data structures holding collections of elements; the former are unordered, and the latter are ordered.

As discussed earlier, adding and removing elements from the end of an array results in a stack. Consider removing an element from the middle. This opens a "hole" in the array and breaks contiguity, making queries and manipulations more difficult. The figure below shows two methods for restoring contiguity:

Two diagrams removing a middle element in an array: the first replaces the removed element with the last element of the array, and the second slides the remaining elements upward, eliminating the empty location.
Figure: Two techniques for maintaining array contiguity after removing an element

The first technique fills the empty location with the last element in the array, resulting in a bag: the original element order is neglected. The second technique shifts the remaining elements up, covering the empty location, resulting in a list: the original element order is maintained. The following implements the first technique, creating a bag:

#define MaxNameCount 10
int NameCount;
char *Names[MaxNameCount];
void Push(char *Name) {
    if (NameCount >= MaxNameCount) { return; }
    Names[NameCount++] = Name;
}
void Free(int Index) {
    if (Index < 0 || Index >= NameCount) { return; }
    Names[Index] = Names[--NameCount];
}

Notice the similarities between Free() and the stack's Pop() listed earlier; both use the same pre-decrement pattern, but Free() inserts the popped item into the now empty index. Compare that with the second technique, creating a list:

#define MaxNameCount 10
int NameCount;
char *Names[MaxNameCount];
void Push(char *Name) {
    if (NameCount >= MaxNameCount) { return; }
    Names[NameCount++] = Name;
}
void Free(int Index) {
    if (Index < 0 || Index >= NameCount) { return; }
    for (; Index < NameCount - 1; Index++) {
        Names[Index] = Names[Index + 1];
    }
    NameCount--;
}

Free() shifts the remaining elements down by one: an operation foreign to both stacks and bags. Also, this implementation only allows appending elements to the list; arbitrary insertion involves element shifting similar to Free(), which, as it turns out, is a costly operation.

4 Performance

Stacks are the most efficient structure in terms of element removal: just decrement a counter. Bags are similar, but add one additional step: writing the last element into the vacated slot. The additional cost is negligible for relatively small element sizes. Furthermore, both operate in constant time, regardless of the number of elements.

Like bags, lists also write elements during removal, but potentially more than one: the removal location and list count determine how many elements to write. The worst case is removing the first element, which rewrites all remaining elements. For lists with many elements, or elements of significant size, or both, removal is a costly operation.

Similarly, insertion is cheap for both stacks and bags, but expensive for lists. The former share the same insertion code: a counter addition and an element write into the array. (Being unordered, the bag could use any insertion method, but the stack's push is easy and efficient.) Conversely, lists open a gap for the new element by performing a reverse shift, with the same costs as removals.

However, lazily maintaining the list (i.e. leaving the gap present after removal) eliminates all shifts when inserting an element at the removal location. The next data structure uses this insight, and purposefully widens the gap.

5 Gap Buffers

The gap buffer divides an array into three sub-arrays: the upper array, the gap array, and the lower array. This layout optimizes insertion and removal around a location by amortizing and deferring shifts, as shown in the figure below:

One diagram removing a middle element in an array, sliding the remaining elements to the end, enlarging the empty space created from the removal. The other diagram inserts new elements into this newly created space, though not completely filling it.
Figure: Removing and inserting elements from the middle of an array

Initially, the gap buffer contains three sub-arrays (not two): the upper array (0–3), the gap (4–6), and the lower array (empty). Removing the element at index 1 repartitions the sub-arrays into the upper array (0–0), the gap (1–4), and the lower array (5–6). The figure shows the lower array (2–3) materializing and being shifted down. Equivalently, this inserts the gap (4–6) at the removal location (1) and increases its size by one.

Sequential inserts and removals around the gap result in zero shifts. Shown above, inserting two new elements causes no additional shifts; each insert appends to the upper array, increases its size by one, and decreases the gap size by one. Similarly, the lower array can be prepended by adjusting its size and the gap size. Removals perform the inverse operation, increasing the gap size and decreasing either the upper or lower array.

Conceptually, a gap buffer is equivalent to two stacks that grow toward one another (one upside down), with the gap being the area in between. Inserts are then equivalent to pushing, removals are equivalent to popping, and moving the gap is equivalent to popping from one stack and pushing onto the other.4

Each operation moves the gap as needed; operations around the gap are inexpensive, either because the gap is already correct, or because few elements need "migrating" across the gap for correction. However, correctly moving the gap is nuanced.

5.1 Moving the Gap

Processors are optimized for forward sequential data processing (i.e. from lower to higher addresses); arrays are laid out in this direction, caches prefetch assuming this direction, and so on. Thus, moving data across a gap buffer's gap in the forward direction seems logical:

Two diagrams showing quarter-full arrays, sliding elements from the beginning of the arrays to the end; the first diagram starts with the first element and continues forward, the second diagram starts with the last element and continues backward. The result is identical.
Figure: Effect of copy direction in less than half-full arrays

As shown in the figure, both forward and backward copying have identical results, suggesting use of the faster, forward technique. However, consider a crowded array:

Two diagrams showing three-quarter-full arrays, sliding elements from the beginning of the arrays to the end. The first diagram starts with the first element and continues forward, accidentally overwriting yet-to-be copied elements, thus corrupting the result. The second diagram starts with the last element and continues backward, successfully moving all the elements.
Figure: Effect of copy direction in more than half-full arrays

Because the source and destination regions overlap, forward copying corrupts the array, overwriting yet-to-be-copied elements. Backward copying successfully moves the data in this scenario, but the same hazard exists for backward copying when moving data the other way. The implementation must determine the safe direction to avoid corruption.

5.2 Implementation

With the above in mind, the following code implements a simple gap buffer, enabling efficient insertions and removals around the gap:

#define MaxNameCount 10
int NameCount, GapIndex, GapCount = MaxNameCount;
char *Names[MaxNameCount];
void SetCursor(int CursorIndex) {
    if (CursorIndex < GapIndex) {
        int Count  = GapIndex - CursorIndex;
        int Source = GapIndex - 1;
        int Dest   = GapIndex + GapCount - 1;
        for (int Index = 0; Index < Count; Index++) {
            Names[Dest - Index] = Names[Source - Index];
        }
    } else {
        int Count  = CursorIndex - GapIndex;
        int Source = GapIndex + GapCount;
        int Dest   = GapIndex;
        for (int Index = 0; Index < Count; Index++) {
            Names[Dest + Index] = Names[Source + Index];
        }
    }
    GapIndex = CursorIndex;
}
void Push(char *Name) {
    if (NameCount >= MaxNameCount) { return; }
    NameCount++, GapCount--, Names[GapIndex++] = Name;
}
void Free(int Index) {
    if (Index < 0 || Index >= NameCount) { return; }
    SetCursor(Index);
    NameCount--, GapCount++;
}

Push() inserts an element at the current gap location, called the cursor. Free() moves the cursor to the element to remove, then removes it. SetCursor() manually sets the gap to a desired location. GapIndex and GapCount track the location and size of the gap, and NameCount tracks the number of valid elements in the array. From these values, the lower array's location and size, and the upper array's size, can be derived (should they be needed).

SetCursor() determines the safe copy direction by comparing the current and desired location of the gap: if moving the gap backward, copy backward; if moving the gap forward, copy forward.

To iterate over the buffer, either iterate over the upper and lower arrays individually, or "normalize" the buffer by moving the gap to either end.

For simplicity, this implementation is sub-optimal. For example, Free() needlessly copies the element before removal. Furthermore, SetCursor() only copies elements forward or backward: a more sophisticated implementation breaks the copy into its non-overlapping and overlapping parts, using the faster, forward copy for the former, and the slower, backward copy for the latter. (Caveat lector, memcpy(3) does not support overlapping regions; memmove(3) does.)

However, this example illustrates the basic idea to carry into your own specialized data structures.

6 Closing Thoughts

Stacks, bags, lists, and gap buffers are powerful array-based data structures with three major advantages: 1) they are efficient, 2) they are fast, and 3) they are simple. Since elements are packed closely together, they can be stored with minimal overhead, unlike pointer-based data structures. Because of this density, arrays have high spatial locality and are accessed quickly with minimal address computation. Finally, a boon to both efficiency and speed, simplicity makes these data structures easier to understand, implement, use, debug, and maintain.5 With an intimate understanding of arrays, data structures and algorithms reveal themselves for understanding and optimization.

Thanks for reading; I hope this brief article on arrays has been helpful. Feedback and corrections are welcome via email. ✚

Footnotes

  1. The "top" of the stack is a conceptual term, referring to the most recently pushed item. In the figures, addresses increase as they go down the page, which puts the "top" of the stack at the bottom of the figure visually. The figure is presented this way for two reasons: 1) English is read from top to bottom, and 2) debug output is printed similarly, either from stepping through an array sequentially, or by viewing a memory dump. Confusion increases when the stack is implemented "upside down," but appears "right side up" visually. In this article, directional terms typically refer to visual positioning, not address space positioning. Get comfortable with these mental gymnastics – we are likely stuck with this confusion forever. For more examples, see inverted y coordinate systems.
  2. A set is closely related to a bag; the former contains only unique elements, the latter allows duplicate elements. However, the term set is often used both for bags and sets.
  3. List refers to an array-based list, not a linked list. (However, linked lists are also ordered.) This overloading of terms is yet another data point proving the most difficult task of any endeavor is communication, and by corollary, naming things.
  4. Implementing a gap buffer using stacks results in poor performance, unless the compiler removes the abstraction during optimization – compilers are good, but not as good as the optimizer between your ears.
  5. Notice how a single counter maintains an array's state, enabling initialization and truncation by modifying one counter. Compare that to trees and linked lists, where these operations involve maintaining pointer networks.
