7.7 Rolling Your Own Data Structures
This section covers a nuanced issue (and a long one). Readers who have come out of a college data structures course, or read a good book on the topic,4 have learned of many powerful data structures that are neither within Python’s standard library nor in the prominent third-party libraries I discuss in various parts of this book. Some of these include treaps, k-d trees, R-trees, B-trees, Fibonacci heaps, tries (prefix trees), singly-, doubly-, and multiply-linked lists, heaps, graphs, Bloom filters, cons cells, and dozens of others.
The choice of which data structures to include as built-ins, or in the standard library, is one that language designers debate, and one that often leads to in-depth discussion and analysis. Python’s philosophy is to include a relatively minimal, but extremely powerful and versatile, collection of primitives: dict, list, tuple, set, frozenset, bytes, and bytearray in __builtins__ (arguably, complex is a simple data structure as well). Modules such as collections, queue, dataclasses, enum, and array (and, peripherally, a few others) provide additional data structures, but even there the number is much smaller than in many programming languages.
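For a sense of how much ground those few modules already cover, here is a small illustrative sketch (not tied to any other example in this book) exercising several of them:

from collections import Counter, defaultdict, deque
import array
import heapq

# A deque gives O(1) appends and pops at *both* ends.
tasks = deque(["parse", "validate", "store"])
tasks.appendleft("fetch")
first = tasks.popleft()                    # "fetch"

# heapq turns a plain list into a binary heap with O(log n) operations.
priorities = [5, 1, 4, 1, 5, 9, 2, 6]
heapq.heapify(priorities)
smallest = heapq.heappop(priorities)       # 1

# Counter and defaultdict cover much ad hoc bookkeeping.
letter_counts = Counter("abracadabra")     # Counter({'a': 5, 'b': 2, ...})
groups = defaultdict(list)
groups["vowels"].append("a")

# array.array stores homogeneous numeric data far more compactly than list.
measurements = array.array("d", [0.1, 0.2, 0.3])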
A clear contrast with Python, in this regard, is Java. Whereas Python strives for simplicity, Java strives to include every data structure users might ever want within its standard library (i.e., the java.util namespace). Java has hundreds of distinct data structures included in the language itself. For Pythonic programmers, this richness of choice largely leads only to “analysis paralysis” (https://en.wikipedia.org/wiki/Analysis_paralysis). Choosing among so many only-slightly-different data structures imposes a large cognitive burden, and the final decision (made after greater effort) often remains sub-optimal. Giving someone more hammers can sometimes provide little other than more ways for them to hit their thumb.
7.7.1 When Rolling Your Own Is a Bad Idea
Writing any of the data structures mentioned thus far is comparatively easy in Python. Doing so is often the subject of college exams and software engineering interviews, for example. Doing so is also usually a bad idea for most software tasks you will face. When you reach quickly for an opportunity to use one of these data structures you have learned—each of which genuinely does have concrete advantages in specific contexts—it often reflects an excess of cleverness and eagerness more than it does good design instincts.
A reality is that Python itself is a relatively slow bytecode interpreter. Unlike compiled programming languages, including just-in-time (JIT) compiled languages, which produce machine-native instructions, CPython is a giant bytecode dispatch loop. Every time an instruction is executed, many levels of indirection are needed, and basic values are all relatively complex wrappers around their underlying data (remember all those methods of datatypes that you love so much?).
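You can see the instructions that dispatch loop must grind through by using the standard library dis module; this tiny example (purely illustrative) disassembles a one-line function:

import dis

def middle_insert(seq, value):
    seq.insert(len(seq) // 2, value)

# Print the bytecode instructions CPython interprets, one at a time,
# for even this trivial function.
dis.dis(middle_insert)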
Compensating somewhat for the fact that Python itself is relatively slow, most of the built-in and standard library data structures you might reach for are written in highly optimized C. Much the same is true of the widely used library NumPy, which has a chapter of its own.
On the one hand, custom data structures such as those mentioned can have significant big-O complexity advantages over those that come with Python.5 On the other hand, these advantages need to be balanced against what is usually a (roughly) constant multiplicative disadvantage to pure-Python code. That is to say, implementing the identical data structure purely in Python is likely to be 100x, or even 1000x, slower than doing so in a well-optimized compiled language like C, C++, Rust, or Fortran. At some point as a dataset grows, big-O dominates any multiplicative factor, but often that point is well past the dataset sizes you actually care about.
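As a rough back-of-the-envelope illustration of that trade-off (the 5000x penalty below is an assumption chosen for illustration, not a measurement), you can estimate where the crossover falls between a C-backed structure with O(n) per-operation cost and a pure-Python structure with O(log n) per-operation cost:

from math import log2

# Hypothetical per-operation costs, in arbitrary time units:
#   C-backed list insert: proportional to n (elements must be shifted),
#   pure-Python tree insert: proportional to log2(n), but paying a large
#   constant penalty for running in the interpreter.
PENALTY = 5_000        # assumed pure-Python slowdown factor

def list_cost(n):
    return n                      # O(n), tiny constant

def tree_cost(n):
    return PENALTY * log2(n)      # O(log n), big constant

for n in (100, 10_000, 100_000, 1_000_000):
    winner = "list" if list_cost(n) < tree_cost(n) else "tree"
    print(f"n={n:>9,}  list≈{list_cost(n):>12,.0f}  "
          f"tree≈{tree_cost(n):>12,.0f}  winner: {winner}")

With these assumed numbers, the winner flips somewhere between ten thousand and one hundred thousand elements, which is roughly the pattern the benchmarks below show.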
Plus, writing a new data structure requires actually writing it. This is prone to bugs, takes developer time, needs documentation, and accumulates technical debt. In other words, doing so might very well be a mistake.
7.7.2 When Rolling Your Own Is a Good Idea
Taking all the warnings and caveats of the first subsection of this discussion into account, there remain many times when not writing a custom data structure is its own mistake. Damned if you do, damned if you don’t, one might think. But the real issue is more subtle; it’s a mistake to make a poor judgment about which side of this decision to choose.
I present in the following subsections a “pretty good” specialized data structure that illustrates both sides. This example is inspired by the section “Deleting or Adding Elements to the Middle of a List” earlier in this chapter. To quickly summarize that section: Inserting into the middle of a Python list is inefficient, but doing so is very often a matter of solving the wrong problem.
For now, however, let’s suppose that you genuinely do need to have a data structure that is concrete, strictly ordered, indexable, iterable, and into which you need to insert new items in varying middle positions. There simply is not any standard library or widely used Python library that gives you exactly this. Perhaps it’s worth developing your own.
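Spelled out as an interface, those requirements amount to something like the following sketch (the Protocol and the name InsertableSequence are mine, purely for illustration; nothing in the standard library defines this):

from typing import Any, Iterator, Protocol

class InsertableSequence(Protocol):
    # The capabilities we need: sized, iterable, indexable, and able to
    # insert at arbitrary positions.
    def insert(self, index: int, value: Any) -> None: ...
    def __len__(self) -> int: ...
    def __iter__(self) -> Iterator[Any]: ...
    def __getitem__(self, index: int) -> Any: ...

list satisfies this interface already; the question is whether we can satisfy it with cheaper middle insertions.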
Always Benchmark When You Create a Data Structure
Before I show you the code I created to solve this specific requirement, I want to reveal the “punch line” by showing you performance. A testing function shows the general behavior we want to be performant.
The insert_many() function that exercises our use case
from random import randint, seed
from get_word import get_word      # ❶

def insert_many(Collection, n, test_seed="roll-your-own"):
    seed(test_seed)                # ❷
    collection = Collection()
    for _ in range(n):
        collection.insert(randint(0, len(collection)), get_word())
    return collection
❶ The get_word() function available at this book’s website is used in many examples. It simply returns a different word each time it is called.
❷ Using the same random seed assures that we do exactly the same insertions for each collection type.
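If you want to run this benchmark yourself without downloading the book’s get_word module, a minimal stand-in (mine, not the version from the website) is enough, since all the benchmark needs is a distinct word per call; save it as get_word.py alongside the test code:

# Hypothetical stand-in for the book's get_word(): each call returns a
# new, distinct word, which is all the benchmark requires.
from itertools import count

_counter = count()

def get_word():
    return f"word{next(_counter)}"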
The testing function performs however many insertions we ask it to, and we can time that:
>>> from binary_tree import CountingTree
>>> %timeit insert_many(list, 100)
92.9 μs ± 742 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
>>> %timeit insert_many(CountingTree, 100)
219 μs ± 8.17 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
>>> %timeit insert_many(list, 10_000)
13.9 ms ± 193 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit insert_many(CountingTree, 10_000)
38 ms ± 755 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> %timeit insert_many(list, 100_000)
690 ms ± 5.84 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> %timeit insert_many(CountingTree, 100_000)
674 ms ± 20.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> %timeit insert_many(list, 1_000_000)
1min 5s ± 688 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> %timeit insert_many(CountingTree, 1_000_000)
9.72 s ± 321 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
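The %timeit magic assumes an IPython or Jupyter session. To reproduce the comparison from a plain script, the standard library timeit module gives equivalent, if less prettily formatted, numbers; this is a minimal sketch assuming insert_many() and CountingTree are defined or imported as shown earlier:

import timeit

# insert_many() as defined above; CountingTree from binary_tree.py
from binary_tree import CountingTree

for n in (100, 10_000, 100_000):
    for Collection in (list, CountingTree):
        seconds = timeit.timeit(lambda: insert_many(Collection, n), number=5)
        print(f"{Collection.__name__:>12}  n={n:>7,}: "
              f"{seconds / 5:.4f} s per call")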
Without having yet said just what a CountingTree is, I can say that I spent more time ironing out the bugs in my code than I entirely want to admit. It’s not a large amount of code, as you’ll see, but the details are futzy.
The notable point is that even though I’ve created a data structure optimized for exactly this task, it does worse than list for 100 items. CountingTree does worse than list for 10,000 items as well, in fact by a slightly larger relative margin than for 100. However, my custom data structure pulls slightly ahead for 100,000 items, and then hugely ahead for a million items.
It would be painful to use list for the million-item sequence, and increasingly worse if I needed to do even more collection.insert() operations.
Performing Magic in Pure Python
The source code for binary_tree.py is available at the book’s website (https://gnosis.cx/better). But we will go through most of it here. The basic idea behind my Counting Binary Tree data structure is that I want to keep a binary tree, but I also want each node to keep a count of the total number of items within it and all of its descendants. Unlike some other tree data structures, we specifically do not want to order the node values by their inequality comparison, but rather to maintain each node exactly where it is inserted.
Figure 7.1 A graph of a Counting Binary Tree.
In Figure 7.1, each node contains a value that is a single letter; in parentheses we show the length of the subtree rooted at each node (the node itself plus all of its descendants). Identical values can occur in multiple places (unlike, e.g., for a set or a dictionary key). Finding the len() of this data structure is a matter of reading a single attribute. But having these lengths available is what guides insertions.
It is very easy to construct a sequence from a tree. It is simply a matter of choosing a deterministic rule for how to order the nodes. For my code, I chose to use depth-first, left-to-right; that’s not the only possible choice, but it is an obvious and common one. In other words, every node value occurs at exactly one position in the sequence, and every sequence position (up to the length) is occupied by exactly one value. Since our use case is approximately random insertion points for new items, no extra work is needed for rebalancing or enforcing any other invariants.
The code shown only implements insertions, our stated use case. A natural extension to the data structure would be to implement deletions as well. Or changing values at a given position. Or other capabilities that lists and other data structures have. Most of those capabilities would remain inexpensive, but details would vary by the specific operation, of course.
The basic implementation of Counting Binary Tree
class CountingTree:
    def __init__(self, value=EMPTY):
        self.left = EMPTY
        self.right = EMPTY
        self.value = value
        self.length = 0 if value is EMPTY else 1

    def insert(self, index: int, value):
        if index != 0 and not 0 < index <= self.length:
            raise IndexError(
                f"CountingTree index {index} out of range")
        if self.value is EMPTY:
            self.value = value
        elif index == self.length:
            if self.right is EMPTY:
                self.right = CountingTree(value)
            else:
                self.right.insert(
                    index - (self.left.length + 1), value)
        elif index == 0 and self.left is EMPTY:
            self.left = CountingTree(value)
        else:
            if index > self.left.length:
                self.right.insert(
                    index - (self.left.length + 1), value)
            else:
                self.left.insert(index, value)
        self.length += 1
This much is all we actually need to run the benchmarks performed here. Calling CountingTree.insert() repeatedly creates trees much like that in the figure. The .left and .right attributes at each level might be occupied by the sentinel EMPTY, which the logic can utilize for nodes without a given child.
It’s useful also to define a few other behaviors we’d like a collection to have.
Additional methods within Counting Binary Tree
    def append(self, value):
        self.insert(len(self), value)

    def __iter__(self):
        if self.left is not EMPTY:
            yield from self.left
        if self.value is not EMPTY:
            yield self.value
        if self.right is not EMPTY:
            yield from self.right

    def __repr__(self):
        return f"CountingTree({list(self)})"

    def __len__(self):
        return self.length

    def tree(self, indent=0):
        print(f"{'· '*indent}{self.value}")
        if self.left is not EMPTY or self.right is not EMPTY:
            self.left.tree(indent+1)
            self.right.tree(indent+1)
These other methods largely just build on top of .insert(). A CountingTree is iterable, but along with .__iter__() it would be natural to define .__getitem__() or .__contains__() to allow use of square bracket indexing and the in operator. These would be straightforward, as the sketch below suggests.
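For instance, a .__getitem__() could descend the tree using the same counting logic that .insert() relies on. This is a sketch of my own rather than part of the book’s binary_tree.py, but it follows directly from the invariants described above (these methods would go inside the class body):

    def __getitem__(self, index: int):
        # Sketch only: positions 0..left.length-1 live in the left
        # subtree, position left.length is this node's own value, and
        # later positions live in the right subtree.
        if not 0 <= index < self.length:
            raise IndexError(
                f"CountingTree index {index} out of range")
        if index < self.left.length:
            return self.left[index]
        elif index == self.left.length:
            return self.value
        else:
            return self.right[index - (self.left.length + 1)]

    def __contains__(self, value):
        # Sketch only: a linear scan via the iterator defined above.
        return any(item == value for item in self)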
For the .tree() method we need our sentinel to have a couple of specific behaviors. This method exists just for visual appeal in viewing the data structure, but it’s nice to have.
The EMPTY sentinel
# Sentinel for an unused node
class Empty:
    length = 0

    def __repr__(self):
        return "EMPTY"

    def tree(self, indent=0):
        print(f"{'· '*indent}EMPTY")

EMPTY = Empty()
Observing the Behavior of Our Data Structure
By no means am I advocating the general use of this specific skeletal data structure implementation. It’s shown merely to illustrate the general way you might go about creating something analogous for well-understood use cases and with a knowledge of the theoretical advantages of particular data structures. Let’s look at a few behaviors, though:
>>> insert_many(CountingTree, 10)
CountingTree(['secedes', 'poss', 'killcows', 'unpucker', 'gaufferings',
'funninesses', 'trilingual', 'nihil', 'bewigging', 'reproachably'])
>>> insert_many(list, 10)                              # ❶
['secedes', 'poss', 'killcows', 'unpucker', 'gaufferings',
'funninesses', 'trilingual', 'nihil', 'bewigging', 'reproachably']
>>> ct = insert_many(CountingTree, 1000, "david")
>>> lst = insert_many(list, 1000, "david")
>>> list(ct) == lst                                    # ❷
True
>>> insert_many(CountingTree, 9, "foobar").tree()      # ❸
loaf
· acknown
· · spongily
· · · saeculums
· · · EMPTY
· · EMPTY
· fecundities
· · EMPTY
· · input
· · · boddle
· · · · sots
· · · · shrifts
· · · EMPTY
❶ Insertions into list or CountingTree preserve the same order.
❷ Some operations are equivalent between list and CountingTree.
❸ Display the underlying tree implementing the sequence.
The tree is fairly balanced, although some nodes fill only one or the other of their left and right children. This balance would be lost if, for example, we always used .append(): the tree would degenerate into what is effectively a singly linked list, as the quick check below shows.
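Here is that quick check (mine, not from the book’s text), appending a few words rather than inserting at random positions:

>>> t = CountingTree()
>>> for word in ["alpha", "bravo", "charlie", "delta"]:
...     t.append(word)
...
>>> t.tree()
alpha
· EMPTY
· bravo
· · EMPTY
· · charlie
· · · EMPTY
· · · delta

Every value hangs off the right child of the previous one, so each insertion at the end must walk the entire chain.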
7.7.3 Takeaways
This section has been a long discussion, and the takeaway you should leave with isn’t a simple one. The lesson is “be subtle and accurate in your judgments” about when to create, and when to avoid creating, custom data structures. It’s not a recipe so much as advocacy for a nuanced attitude.
As a general approach to making the right choice, I’d suggest following a few steps in your thinking:
1. Try implementing the code using a widely used, standard Python data structure.
2. Run benchmarks to find out whether any theoretical sub-optimality genuinely matters for the use case your code is put to.
3. Research the wide range of data structures that exist in the world to see which, if any, are theoretically optimal for your use case.
4. Research whether someone else has already written a well-tested Python implementation of the less common data structure you are considering. Such a library might not be widely used simply because the niche it fulfills is relatively narrow. On the other hand, it is also easy to put partially developed, poorly tested, and buggy libraries on PyPI, conda-forge, GitHub, GitLab, Bitbucket, or other public locations.
5. Assuming you are writing your own after considering the preceding steps, create both tests and benchmarks either in conjunction with—or even before—the implementation of the data structure (a minimal test sketch follows this list).
6. If your well-tested implementation of a new data structure makes your code better, ask your boss for a raise or a bonus… and then share the code with the Python community under an open source license.
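As one minimal example of what such a test might look like (my sketch, not code from the book’s website), a property-style “oracle” test compares the custom structure’s behavior against plain list, which we already trust:

import random

from binary_tree import CountingTree

def test_counting_tree_matches_list_oracle():
    # list is the trusted "oracle": CountingTree must agree with it
    # after any sequence of insertions at valid positions.
    rng = random.Random("oracle-seed")
    tree, oracle = CountingTree(), []
    for i in range(2_000):
        position = rng.randint(0, len(oracle))
        word = f"word{i}"
        tree.insert(position, word)
        oracle.insert(position, word)
    assert len(tree) == len(oracle)
    assert list(tree) == oracle

if __name__ == "__main__":
    test_counting_tree_matches_list_oracle()
    print("CountingTree agrees with the list oracle")

Run under pytest, or directly as a script, this catches exactly the class of off-by-one indexing bugs that make code like .insert() “futzy” to get right.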