- Item 1: Know Which JavaScript You Are Using
- Item 2: Understand JavaScript's Floating-Point Numbers
- Item 3: Beware of Implicit Coercions
- Item 4: Prefer Primitives to Object Wrappers
- Item 5: Avoid using == with Mixed Types
- Item 6: Learn the Limits of Semicolon Insertion
- Item 7: Think of Strings As Sequences of 16-Bit Code Units
Item 7: Think of Strings As Sequences of 16-Bit Code Units
Unicode has a reputation for being complicated—despite the ubiquity of strings, most programmers avoid learning about Unicode and hope for the best. But at a conceptual level, there’s nothing to be afraid of. The basics of Unicode are perfectly simple: Every unit of text of all the world’s writing systems is assigned a unique integer between 0 and 1,114,111, known as a code point in Unicode terminology. That’s it—hardly any different from any other text encoding, such as ASCII. The difference, however, is that while ASCII maps each index to a unique binary representation, Unicode allows multiple different binary encodings of code points. Different encodings make trade-offs between the amount of storage required for a string and the speed of operations such as indexing into a string. Today there are multiple standard encodings of Unicode, the most popular of which are UTF-8, UTF-16, and UTF-32.
Complicating the picture further, the designers of Unicode historically miscalculated their budget for code points. It was originally thought that Unicode would need no more than 2^16 (65,536) code points. This made UCS-2, the original standard 16-bit encoding, a particularly attractive choice. Since every code point could fit in a 16-bit number, there was a simple, one-to-one mapping between code points and the elements of their encodings, known as code units. That is, UCS-2 was made up of individual 16-bit code units, each of which corresponded to a single Unicode code point. The primary benefit of this encoding is that indexing into a string is a cheap, constant-time operation: Accessing the nth code point of a string simply selects the nth 16-bit element of the array. Figure 1.1 shows an example string consisting only of code points in the original 16-bit range. As you can see, the indices match up perfectly between elements of the encoding and code points in the Unicode string.
Figure 1.1. A JavaScript string containing code points from the Basic Multilingual Plane
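For example, with a string drawn entirely from the BMP, the indices used by the built-in string methods line up exactly with code point indices (a small illustration of my own, using only standard methods):
"G clef".length;        // 6
"G clef".charCodeAt(0); // 71 (0x47, "G"): code unit 0 is code point 0
"G clef".charAt(1);     // " ": code unit 1 is code point 1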
As a result, a number of platforms at the time committed to using a 16-bit encoding of strings. Java was one such platform, and JavaScript followed suit: Every element of a JavaScript string is a 16-bit value. Now, if Unicode had remained as it was in the early 1990s, each element of a JavaScript string would still correspond to a single code point.
This 16-bit range is quite large, encompassing far more of the world’s text systems than ASCII or any of its myriad historical successors ever did. Even so, in time it became clear that Unicode would outgrow its initial range, and the standard expanded to its current range of over 2^20 code points. The new increased range is organized into 17 subranges of 2^16 code points each. The first of these, known as the Basic Multilingual Plane (or BMP), consists of the original 2^16 code points. The additional 16 ranges are known as the supplementary planes.
Once the range of code points expanded, UCS-2 became obsolete: It needed to be extended to represent the additional code points. Its successor, UTF-16, is mostly the same, but with the addition of what are known as surrogate pairs: pairs of 16-bit code units that together encode a single code point of 2^16 or greater. For example, the musical G clef symbol (“𝄞”), which is assigned the code point U+1D11E—the conventional hexadecimal spelling of code point number 119,070—is represented in UTF-16 by the pair of code units 0xd834 and 0xdd1e. The code point can be decoded by combining selected bits from each of the two code units. (Cleverly, the encoding ensures that neither of these “surrogates” can ever be confused for a valid BMP code point, so you can always tell if you’re looking at a surrogate, even if you start searching from somewhere in the middle of a string.) You can see an example of a string with a surrogate pair in Figure 1.2. The first code point of the string requires a surrogate pair, causing the indices of code units to differ from the indices of code points.
Figure 1.2. A JavaScript string containing a code point from a supplementary plane
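To make that bit-combining concrete, here is a minimal decoding sketch of my own; decodeSurrogatePair is a hypothetical helper name, not a built-in function:
// Combine a high surrogate (0xd800-0xdbff) and a low surrogate (0xdc00-0xdfff)
// into the code point they encode.
function decodeSurrogatePair(high, low) {
    return (high - 0xd800) * 0x400 + (low - 0xdc00) + 0x10000;
}
decodeSurrogatePair(0xd834, 0xdd1e) === 0x1d11e; // true (code point 119,070)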
Because each code point in a UTF-16 encoding may require either one or two 16-bit code units, UTF-16 is a variable-length encoding: The size in memory of a string of length n varies based on the particular code points in the string. Moreover, finding the nth code point of a string is no longer a constant-time operation: It generally requires searching from the beginning of the string.
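To see why, consider a sketch (again my own, assuming well-formed UTF-16 with no unpaired surrogates) that counts a string’s code points; it has to visit every code unit along the way:
function countCodePoints(s) {
    var result = 0;
    for (var i = 0; i < s.length; i++) {
        result++;
        if (s.charCodeAt(i) >= 0xd800 && s.charCodeAt(i) <= 0xdbff) {
            i++; // high surrogate: skip over its low-surrogate partner
        }
    }
    return result;
}
countCodePoints("𝄞 clef"); // 6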
But by the time Unicode expanded in size, JavaScript had already committed to 16-bit string elements. String properties and methods such as length, charAt, and charCodeAt all work at the level of code units rather than code points. So whenever a string contains code points from the supplementary planes, JavaScript represents each as two elements—the code point’s UTF-16 surrogate pair—rather than one. Simply put:
An element of a JavaScript string is a 16-bit code unit.
Internally, JavaScript engines may optimize the storage of string contents. But as far as their properties and methods are concerned, strings behave like sequences of UTF-16 code units. Consider the string from Figure 1.2. Despite the fact that the string contains six code points, JavaScript reports its length as 7:
" clef"
.length; // 7"G clef"
.length; // 6
Extracting individual elements of the string produces code units rather than code points:
" clef"
.charCodeAt
(0
); // 55348 (0xd834)" clef"
.charCodeAt
(1
); // 56606 (0xdd1e)" clef"
.charAt
(1
) ===" "
; // false" clef"
.charAt
(2
) ===" "
; // true
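Going in the other direction, the standard String.fromCharCode function also operates on code units, so passing it both surrogates reassembles the character:
String.fromCharCode(0xd834, 0xdd1e); // "𝄞" (one code point, two code units)
String.fromCharCode(0xd834, 0xdd1e).length; // 2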
Similarly, regular expressions operate at the level of code units. The single-character pattern (“.”) matches a single code unit:
/^.$/.test("𝄞"); // false
/^..$/.test("𝄞"); // true
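One workaround sketch (my own, and only reliable for well-formed strings) is to spell out the surrogate-pair case in the pattern, so that a single “character” can be either a full surrogate pair or one BMP code unit:
var anyCodePoint = /^(?:[\uD800-\uDBFF][\uDC00-\uDFFF]|[\s\S])$/;
anyCodePoint.test("𝄞"); // true
anyCodePoint.test("G"); // true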
This state of affairs means that applications working with the full range of Unicode have to work a lot harder: They can’t rely on string methods, length values, indexed lookups, or many regular expression patterns. If you are working outside the BMP, it’s a good idea to look for help from code point-aware libraries. It can be tricky to get the details of encoding and decoding right, so it’s advisable to use an existing library rather than implement the logic yourself.
While JavaScript’s built-in string datatype operates at the level of code units, this doesn’t prevent APIs from being aware of code points and surrogate pairs. In fact, some of the standard ECMAScript libraries correctly handle surrogate pairs, such as the URI manipulation functions encodeURI, decodeURI, encodeURIComponent, and decodeURIComponent. Whenever a JavaScript environment provides a library that operates on strings—for example, manipulating the contents of a web page or performing I/O with strings—you should consult the library’s documentation to see how it handles the full range of Unicode code points.
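For example, encodeURIComponent translates the surrogate pair for U+1D11E into the correct four-byte UTF-8 sequence, and decodeURIComponent reverses it:
encodeURIComponent("𝄞"); // "%F0%9D%84%9E"
decodeURIComponent("%F0%9D%84%9E").length; // 2 (one surrogate pair)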
Things to Remember
- JavaScript strings consist of 16-bit code units, not Unicode code points.
- Unicode code points 2^16 (65,536) and above are represented in JavaScript by two code units, known as a surrogate pair.
- Surrogate pairs throw off string element counts, affecting length, charAt, charCodeAt, and regular expression patterns such as “.”.
- Use third-party libraries for writing code point-aware string manipulation.
- Whenever you are using a library that works with strings, consult the documentation to see how it handles the full range of code points.