- Why Use Binary Trees?
- Tree Terminology
- An Analogy
- How Do Binary Search Trees Work?
- Finding a Node
- Inserting a Node
- Traversing the Tree
- Finding Minimum and Maximum Key Values
- Deleting a Node
- The Efficiency of Binary Search Trees
- Trees Represented as Arrays
- Printing Trees
- Duplicate Keys
- The BinarySearchTreeTester.py Program
- The Huffman Code
- Summary
- Questions
- Experiments
- Programming Projects
The Huffman Code
You shouldn’t get the idea that binary trees are always search trees. Many binary trees are used in other ways. Figure 8-16 shows an example where a binary tree represents an algebraic expression. We now discuss an algorithm that uses a binary tree in a surprising way to compress data. It’s called the Huffman code, after David Huffman, who discovered it in 1952. Data compression is important in many situations. An example is sending data over the Internet or via digital broadcasts, where it’s important to send the information in its shortest form. Compressing the data means more data can be sent in the same time under the bandwidth limits.
Character Codes
Each character in an uncompressed text file is represented in the computer by one to four bytes, depending on the way characters are encoded. For the venerable ASCII code, only one byte is used, but that limits the range of characters that can be expressed to fewer than 128. To account for all the world’s languages plus other symbols like emojis, the various Unicode standards use up to four bytes per character. For this discussion, we assume that only the ASCII characters are needed, and each character takes one byte (or eight bits). Table 8-2 shows how some characters are represented in binary using the ASCII code.
Table 8-2 Some ASCII Codes
| Character | Decimal | Binary |
|---|---|---|
| @ | 64 | 01000000 |
| A | 65 | 01000001 |
| B | 66 | 01000010 |
| … | … | … |
| Y | 89 | 01011001 |
| Z | 90 | 01011010 |
| … | … | … |
| a | 97 | 01100001 |
| b | 98 | 01100010 |
There are several approaches to compressing data. For text, the most common approach is to reduce the number of bits that represent the most-used characters. As a consequence, each character takes a variable number of bits in the “stream” of bits that represents the full text.
In English, E and T are very common letters, when examining prose and other person-to-person communication and ignoring things like spaces and punctuation. If you choose a scheme that uses only a few bits to write E, T, and other common letters, it should be more compact than if you use the same number of bits for every letter. On the other end of the spectrum, Q and Z seldom appear, so using a large number of bits occasionally for those letters is not so bad.
Suppose you use just two bits for E—say 01. You can’t encode every letter of the English alphabet in two bits because there are only four 2-bit combinations: 00, 01, 10, and 11. Can you use these four combinations for the four most-used characters? Well, if you did, and you still wanted to have some encoding for the lesser-used characters, you would have trouble. The algorithm that interprets the bits would have to somehow guess whether a pair of bits is a single character or part of some longer character code.
One of the key ideas in encoding is that we must set aside some of the code values as indicators that a longer bit string follows to encode a lesser-used character. The algorithm needs a way to look at a bit string of a particular length and determine if that is the full code for one of the characters or just a prefix for a longer code value. You must be careful that no character is represented by the same bit combination that appears at the beginning of a longer code used for some other character. For example, if E is 01, and Z is 01011000, then an algorithm decoding 01011000 wouldn’t know whether the initial 01 represented an E or the beginning of a Z. This leads to a rule: No code can be the prefix of any other code.
Consider also that in some messages, E might not be the most-used character. If the text is a program source file, for example, punctuation characters such as the colon (:), semicolon (;), and underscore (_) might appear more often than E does. Here’s a solution to that problem: for each message, you make up a new code tailored to that particular message. Suppose you want to send the message SPAM SPAM SPAM EGG + SPAM. The letter S appears a lot, and so does the space character. You might want to make up a table showing how many times each letter appears. This is called a frequency table, as shown in Table 8-3.
Table 8-3 Frequency Table for the SPAM Message
| Character | Count | | Character | Count |
|---|---|---|---|---|
| A | 4 | | P | 4 |
| E | 1 | | S | 4 |
| G | 2 | | Space | 5 |
| M | 4 | | + | 1 |
The characters with the highest counts should be coded with a small number of bits. Table 8-4 shows one way you might encode the characters in the SPAM message.
Table 8-4 Huffman Code for the SPAM Message
| Character | Count | Code | | Character | Count | Code |
|---|---|---|---|---|---|---|
| A | 4 | 111 | | P | 4 | 110 |
| E | 1 | 10000 | | S | 4 | 101 |
| G | 2 | 1001 | | Space | 5 | 01 |
| M | 4 | 00 | | + | 1 | 10001 |
You can use 01 for the space because it is the most frequent. The next most frequent characters are S, P, A, and M, each one appearing four times. You use the code 00 for the last one, M. The remaining codes can’t start with 00 or 01 because that would break the rule that no code can be a prefix of another code. That leaves 10 and 11 to use as prefixes for the other characters.
What about 3-bit code combinations? There are eight possibilities: 000, 001, 010, 011, 100, 101, 110, and 111, but you already know you can’t use anything starting with 00 or 01. That eliminates four possibilities. You can assign some of those 3-bit codes to the next most frequent characters, S as 101, P as 110, and A as 111. That leaves the prefix 100 to use for the remaining characters. You use a 4-bit code, 1001, for the next most frequent character, G, which appears twice. There are two characters that appear only once, E and +. They are encoded with 5-bit codes, 10000 and 10001.
Thus, the entire message is coded as
101 110 111 00 01 101 110 111 00 01 101 110 111 00 01 10000 1001 1001 01 10001 01 101 110 111 00
For legibility, we show this message broken into the codes for individual characters. Of course, all the bits would run together because there is no space character in a binary message, only 0s and 1s. That makes it more challenging to find which bits correspond to a character. The main point, however, is that the 25 characters in the input message, which would typically be stored in 200 bits in memory (8 × 25), require only 72 bits in the Huffman coding.
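As a quick check, you can reproduce this encoding and the bit counts in a few lines of Python, storing the codes from Table 8-4 in an ordinary dict:

```python
# Codes from Table 8-4 for the SPAM message.
codes = {'A': '111', 'P': '110', 'E': '10000', 'S': '101',
         'G': '1001', ' ': '01', 'M': '00', '+': '10001'}

message = 'SPAM SPAM SPAM EGG + SPAM'
encoded = ''.join(codes[ch] for ch in message)
print(len(message) * 8)   # → 200 (bits in the uncompressed ASCII message)
print(len(encoded))       # → 72  (bits in the Huffman-coded message)
```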
Decoding with the Huffman Tree
We show later how to create Huffman codes. First, let’s examine the somewhat easier process of decoding. Suppose you received the string of bits shown in the preceding section. How would you transform it back into characters? You could use a kind of binary tree called a Huffman tree. Figure 8-27 shows the Huffman tree for the SPAM message just discussed.
FIGURE 8-27 Huffman tree for the SPAM message
The characters in the message appear in the tree as leaf nodes. The higher their frequency in the message, the higher up they appear in the tree. The number outside each leaf node is its frequency. That puts the space character (sp) at the second level, and the S, P, A, and M characters at the second or third level. The least frequent, E and +, are on the lowest level, 5.
How do you use this tree to decode the message? You start by looking at the first bit of the message and set a pointer to the root node of the tree. If you see a 0 bit, you move the pointer to the left child of the node, and if you see a 1 bit, you move it right. If the identified node does not have an associated character, then you advance to the next bit in the message. Try it with the code for S, which is 101. You go right, left, then right again, and voila, you find yourself on the S node. This is shown by the blue arrows in Figure 8-27.
You can do the same with the other characters. After you’ve arrived at a leaf node, you can add its character to the decoded string and move the pointer back to the root node. If you have the patience, you can decode the entire bit string this way.
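The bit-by-bit walk just described can be sketched in Python. Rather than hand-build the tree of Figure 8-27, this sketch reconstructs an equivalent decoding tree from the code table of Table 8-4 (an illustrative shortcut; the text builds the tree from character frequencies instead):

```python
class Node:
    """Minimal tree node; key is None for internal nodes."""
    def __init__(self):
        self.key = None
        self.left = self.right = None

def tree_from_codes(codes):
    """Rebuild a decoding tree from a code table: 0 = left, 1 = right."""
    root = Node()
    for ch, bits in codes.items():
        node = root
        for bit in bits:                  # walk (creating as needed) the path
            side = 'left' if bit == '0' else 'right'
            if getattr(node, side) is None:
                setattr(node, side, Node())
            node = getattr(node, side)
        node.key = ch                     # the leaf holds the character
    return root

def decode(bits, root):
    result, node = [], root
    for bit in bits:
        node = node.left if bit == '0' else node.right
        if node.key is not None:          # reached a leaf: emit, restart at root
            result.append(node.key)
            node = root
    return ''.join(result)

codes = {'A': '111', 'P': '110', 'E': '10000', 'S': '101',
         'G': '1001', ' ': '01', 'M': '00', '+': '10001'}
encoded = ''.join(codes[ch] for ch in 'SPAM SPAM SPAM EGG + SPAM')
print(decode(encoded, tree_from_codes(codes)))   # → SPAM SPAM SPAM EGG + SPAM
```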
Creating the Huffman Tree
You’ve seen how to use a Huffman tree for decoding, but how do you create this tree? There are many ways to handle this problem. You need a Huffman tree object, and that is somewhat like the BinarySearchTree described previously in that it has nodes that have up to two child nodes. It’s quite different, however, because routines that are specific to keys in search trees, like find(), insert(), and delete(), are not relevant. The constraint that a node’s key be larger than any key of its left child and equal to or less than any key of its right child doesn’t apply to a Huffman tree. Let’s call the new class HuffmanTree, and like the search tree, store a key and a value at each node. The key will hold the decoded message character such as S or G. It could be the space character, as you’ve seen, and it needs a special value for “no character”.
Here is the algorithm for constructing a Huffman tree from a message string:
Preparation
1. Count how many times each character appears in the message string.
2. Make a HuffmanTree object for each character used in the message. For the SPAM message example, that would be eight trees. Each tree has a single node whose key is a character and whose value is that character’s frequency in the message. Those values can be found in Table 8-3 or Table 8-4 for the SPAM message.
3. Insert these trees in a priority queue (as described in Chapter 4). They are ordered by the frequency (stored as the value of each root node) and the number of levels in the tree. The tree with the smallest frequency has the highest priority. Among trees with equal frequency, the one with more levels is the highest priority. In other words, when you remove a tree from the priority queue, it’s always the deepest tree of the least-used character. (Breaking ties using the tree depth improves the balance of the final Huffman tree.)
That completes the preparation, as shown in Step 0 of Figure 8-28. Each single-node Huffman tree has a character shown in the center of the node and a frequency value shown below and to the left of the node.
FIGURE 8-28 Growing the Huffman tree, first six steps
Then do the following:
Tree consolidation
1. Remove two trees from the priority queue and make them into children of a new node. The new node has a frequency value that is the sum of the children’s frequencies; its character key can be left blank (the special value for no character, not the space character).
2. Insert this new, deeper tree back into the priority queue.
3. Keep repeating steps 1 and 2. The trees will get larger and larger, and there will be fewer and fewer of them. When there is only one tree left in the priority queue, it is the Huffman tree and you’re done.
Figure 8-28 and Figure 8-29 show how the Huffman tree is constructed for the SPAM message.
FIGURE 8-29 Growing the Huffman tree, final step
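The preparation and consolidation steps map naturally onto Python’s `heapq` module as the priority queue. The following is our sketch, not the book’s HuffmanTree class: heap entries are tuples `(frequency, -depth, sequence, node)`, so the lowest frequency comes out first, ties go to the deeper tree, and the unique sequence number keeps the heap from ever trying to compare two nodes.

```python
import heapq
from collections import Counter
from itertools import count

class Node:
    """Minimal Huffman tree node; key is None for internal nodes."""
    def __init__(self, key=None, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def build_huffman_tree(message):
    seq = count()    # unique tie-breaker so Node objects are never compared
    # Preparation: one single-node tree per character. Depth is stored
    # negated (0 for a leaf) so deeper trees sort first among equal
    # frequencies.
    heap = [(freq, 0, next(seq), Node(ch))
            for ch, freq in Counter(message).items()]
    heapq.heapify(heap)
    # Consolidation: repeatedly merge the two lowest-frequency trees.
    while len(heap) > 1:
        f1, d1, _, t1 = heapq.heappop(heap)
        f2, d2, _, t2 = heapq.heappop(heap)
        merged = Node(None, t1, t2)       # blank key: "no character"
        heapq.heappush(heap, (f1 + f2, min(d1, d2) - 1, next(seq), merged))
    return heap[0][3]                     # the single remaining tree

tree = build_huffman_tree('SPAM SPAM SPAM EGG + SPAM')
```

Whatever tie-breaking rule is used, the greedy merges produce a code with the same optimal total length, so this tree also encodes the SPAM message in 72 bits.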
Coding the Message
Now that you have the Huffman tree, how do you encode a message? You start by creating a code table, which lists the Huffman code alongside each character. To simplify the discussion, we continue to assume that only ASCII characters are possible, so we need a table with 128 cells. The index of each cell would be the numerical value of the ASCII character: 65 for A, 66 for B, and so on. The contents of the cell would be the Huffman code for the corresponding character. Initially, you could fill in some special value for indicating “no code” like None or an empty string in Python to check for errors where you failed to make a code for some character.
Such a code table makes it easy to generate the coded message: for each character in the original message, you use the character’s numeric value as an index into the code table and look up its Huffman code. You then append each code to the end of the coded message, repeating until the message is complete.
To fill in the codes in the table, you traverse the Huffman tree, keeping track of the path to each node as it is visited. When you visit a leaf node, you use the key for that node as the index to the table and insert the path as a binary string into the cell’s value. Not every cell contains a code—only those appearing in the message. Figure 8-30 shows how this looks for the SPAM message. The table is abbreviated to show only the significant rows. The path to the leaf node for character G is shown as the tree is being traversed.
FIGURE 8-30 Building the code table
The full code table can be built by calling a method that starts at the root and then calls itself recursively for each child. Eventually, the paths to all the leaf nodes will be explored, and the code table will be complete.
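That recursive traversal can be sketched as follows, using a 128-cell list indexed by ASCII value as described above. The `Node` class and the tiny hand-built tree are our illustrative assumptions, not the book’s code:

```python
class Node:
    """Minimal tree node; key is None for internal nodes."""
    def __init__(self, key=None, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def fill_code_table(node, path='', table=None):
    """Recursively record the path to every leaf in a 128-cell code table."""
    if table is None:
        table = [None] * 128              # None marks the "no code" cells
    if node.key is not None:              # leaf: the path so far is its code
        table[ord(node.key)] = path
    else:                                 # internal: 0 goes left, 1 goes right
        fill_code_table(node.left, path + '0', table)
        fill_code_table(node.right, path + '1', table)
    return table

# A tiny example tree: A on the left, B and C under the right child.
tree = Node(left=Node('A'), right=Node(left=Node('B'), right=Node('C')))
table = fill_code_table(tree)
print(table[ord('A')], table[ord('B')], table[ord('C')])   # → 0 10 11
```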
One more thing to consider: if you receive a binary message that’s been compressed with a Huffman code, how do you know what Huffman tree to use for decoding it? The answer is that the Huffman tree must be sent first, before the binary message, in some format that doesn’t require knowledge of the message content. Remember that Huffman codes are for compressing the data, not encrypting it. Sending a short description of the Huffman tree followed by a compressed version of a long message saves many bits.