- 2-1 Manipulating Rightmost Bits
- 2-2 Addition Combined with Logical Operations
- 2-3 Inequalities among Logical and Arithmetic Expressions
- 2-4 Absolute Value Function
- 2-5 Average of Two Integers
- 2-6 Sign Extension
- 2-7 Shift Right Signed from Unsigned
- 2-8 Sign Function
- 2-9 Three-Valued Compare Function
- 2-10 Transfer of Sign Function
- 2-11 Decoding a "Zero Means 2**n" Field
- 2-12 Comparison Predicates
- 2-13 Overflow Detection
- 2-14 Condition Code Result of Add, Subtract, and Multiply
- 2-15 Rotate Shifts
- 2-16 Double-Length Add/Subtract
- 2-17 Double-Length Shifts
- 2-18 Multibyte Add, Subtract, Absolute Value
- 2-19 Doz, Max, Min
- 2-20 Exchanging Registers
- 2-21 Alternating among Two or More Values
- 2-22 A Boolean Decomposition Formula
- 2-23 Implementing Instructions for All 16 Binary Boolean Operations
2–22 A Boolean Decomposition Formula
In this section, we have a look at the minimum number of binary Boolean operations, or instructions, that suffice to implement any Boolean function of three, four, or five variables. By a “Boolean function” we mean a Boolean-valued function of Boolean arguments.
Our notation for Boolean algebra uses “+” for or, juxtaposition for and, ⊕ for exclusive or, and either an overbar or a prefix ¬ for not. These operators can be applied to single-bit operands or “bitwise” to computer words. Our main result is the following theorem:
- THEOREM. If f(x, y, z) is a Boolean function of three variables, then it can be decomposed into the form g(x, y) ⊕ zh(x, y), where g and h are Boolean functions of two variables.
Proof [Ditlow]. f(x, y, z) can be expressed as a sum of minterms, and then z̄ and z can be factored out of their terms, giving

f(x, y, z) = z̄f₀(x, y) + zf₁(x, y).
Because the operands to “+” cannot both be 1, the or can be replaced with exclusive or, giving

f(x, y, z) = z̄f₀(x, y) ⊕ zf₁(x, y)
 = (1 ⊕ z)f₀(x, y) ⊕ zf₁(x, y)
 = f₀(x, y) ⊕ zf₀(x, y) ⊕ zf₁(x, y)
 = f₀(x, y) ⊕ z(f₀(x, y) ⊕ f₁(x, y)),
where we have twice used the identity (a ⊕ b)c = ac ⊕ bc.
This is in the required form with g(x, y) = f₀(x, y) and h(x, y) = f₀(x, y) ⊕ f₁(x, y). f₀(x, y), incidentally, is f(x, y, z) with z = 0, and f₁(x, y) is f(x, y, z) with z = 1.
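As a quick check on the decomposition (a small C sketch of my own, not from the text; the helper name is made up), the identity f(x, y, z) = f₀(x, y) ⊕ z(f₀(x, y) ⊕ f₁(x, y)) can be verified exhaustively over all 256 Boolean functions of three variables, each represented by its 8-bit truth table:

```c
#include <stdio.h>

/* Bit (x<<2 | y<<1 | z) of tt holds f(x, y, z) for the function
   whose truth table is the 8-bit value tt.                      */
static int eval3(unsigned tt, int x, int y, int z) {
    return (tt >> ((x << 2) | (y << 1) | z)) & 1;
}

int main(void) {
    for (unsigned tt = 0; tt < 256; tt++) {          /* every 3-variable function */
        for (int x = 0; x <= 1; x++)
        for (int y = 0; y <= 1; y++)
        for (int z = 0; z <= 1; z++) {
            int f0 = eval3(tt, x, y, 0);             /* f0(x, y) = g(x, y)        */
            int f1 = eval3(tt, x, y, 1);             /* f1(x, y)                  */
            int f  = f0 ^ (z & (f0 ^ f1));           /* g(x, y) xor z h(x, y)     */
            if (f != eval3(tt, x, y, z)) {
                printf("decomposition fails for truth table 0x%02X\n", tt);
                return 1;
            }
        }
    }
    printf("decomposition holds for all 256 functions\n");
    return 0;
}
```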
- COROLLARY. If a computer’s instruction set includes an instruction for each of the 16 Boolean functions of two variables, then any Boolean function of three variables can be implemented with four (or fewer) instructions.
One instruction implements g(x, y), another implements h(x, y), and these are combined with and and exclusive or.
As an example, consider the Boolean function that is 1 if exactly two of x, y, and z are 1:

f(x, y, z) = x̄yz + xȳz + xyz̄.
Before proceeding, the interested reader might like to try to implement f with four instructions, without using the theorem.
From the proof of the theorem,

f(x, y, z) = f(x, y, 0) ⊕ z(f(x, y, 0) ⊕ f(x, y, 1))
 = xy ⊕ z(xy ⊕ (x ⊕ y))
 = xy ⊕ z(x + y)     (because xy ⊕ x ⊕ y = x + y),
which is four instructions.
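In C, the four-instruction form applies bitwise to whole words; the following sketch (my illustration, not from the text; the function name is arbitrary) computes, in each bit position, whether exactly two of the corresponding bits of x, y, and z are 1:

```c
/* f = xy ^ z(x | y): 1 wherever exactly two of the bits are 1. */
unsigned exactly_two(unsigned x, unsigned y, unsigned z) {
    unsigned g = x & y;      /* g(x, y) = xy                     */
    unsigned h = x | y;      /* h(x, y) = xy ^ (x ^ y) = x + y   */
    return g ^ (z & h);      /* four Boolean instructions in all */
}
```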
Clearly, the theorem can be extended to functions of four or more variables. That is, any Boolean function f(x₁, x₂, …, xₙ) can be decomposed into the form g(x₁, x₂, …, xₙ₋₁) ⊕ xₙh(x₁, x₂, …, xₙ₋₁). Thus, a function of four variables can be decomposed as follows:

f(x₁, x₂, x₃, x₄) = g(x₁, x₂, x₃) ⊕ x₄h(x₁, x₂, x₃)
 = g₁(x₁, x₂) ⊕ x₃g₂(x₁, x₂) ⊕ x₄[h₁(x₁, x₂) ⊕ x₃h₂(x₁, x₂)].
This shows that a computer that has an instruction for each of the 16 binary Boolean functions can implement any function of four variables with ten instructions. Similarly, any function of five variables can be implemented with 22 instructions.
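To make these counts explicit (a brief derivation of my own, consistent with the figures just given): if c(n) denotes the number of instructions obtained this way for a function of n variables, the decomposition computes two functions of n − 1 variables and combines them with one and and one exclusive or, so

c(n) = 2c(n − 1) + 2, with c(2) = 1,

which gives c(3) = 4, c(4) = 10, and c(5) = 22.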
However, it is possible to do much better. For functions of four or more variables there is probably no simple plug-in formula like the one the theorem gives, but exhaustive computer searches have been done. The results are that any Boolean function of four variables can be implemented with seven binary Boolean instructions, and any such function of five variables can be implemented with 12 such instructions [Knu4, 7.1.2].
In the case of five variables, only 1920 of the 2^(2^5) = 4,294,967,296 functions require 12 instructions, and these 1920 functions are all essentially the same function. The variations are obtained by permuting the arguments, replacing some arguments with their complements, or complementing the value of the function.