1.2 The Basic Verification Principle
There are two types of design error. The first type exists not in the specifications but in the implementation, and it is introduced during the implementation process. An example is human error in interpreting design functionality. One way to prevent this type of error is to use a software program to synthesize an implementation directly from the specifications. Although this approach eliminates most human errors, errors can still result from bugs in the synthesis program, or from incorrect use of that program. Furthermore, the synthesis approach is rather limited in practice for two reasons. First, many specifications are written in casual conversational language, such as English, as opposed to a precise mathematical language, such as Verilog or C++, and automatic synthesis from such a loose language is infeasible. In fact, as of this writing, there is no high-level formal language that specifies both functional and timing requirements. One reason is that a high-level functional requirement does not lend itself to timing requirements, which are more intuitive at the implementation level. Therefore, timing requirements such as delay and power, when combined with high-level functional specifications, are so inaccurate that people relegate timing specifications to lower levels of abstraction. Second, even if the specifications are written in a precise mathematical language, few synthesis programs can produce implementations that meet all requirements. Usually, a program synthesizes from a set of functional specifications but fails to meet the timing requirements.
Another method, and the more widely used one, for uncovering errors of this type is redundancy. That is, the same specifications are implemented two or more times using different approaches, and the results of the approaches are compared. In theory, the more times and the more different ways the specifications are implemented, the higher the confidence produced by the verification. In practice, more than two approaches are rarely used, because each additional implementation can introduce errors of its own, and the added cost and time can be prohibitive.
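The redundancy idea can be made concrete with a small sketch. The specification here (report the parity of a 32-bit word) and both implementations are hypothetical stand-ins; the point is that two independent implementations of the same specification are cross-checked on the same stimuli:

```python
# Verification by redundancy: implement the same specification twice,
# independently, and compare the results on identical stimuli.
# Hypothetical specification: report the parity of a 32-bit word.

def parity_design(word: int) -> int:
    """'Design' implementation: XOR-fold the word down to one bit."""
    word ^= word >> 16
    word ^= word >> 8
    word ^= word >> 4
    word ^= word >> 2
    word ^= word >> 1
    return word & 1

def parity_reference(word: int) -> int:
    """Independent 'verification' implementation: count set bits."""
    return bin(word).count("1") % 2

def cross_check(stimuli):
    """Return the stimuli on which the two implementations disagree."""
    return [w for w in stimuli if parity_design(w) != parity_reference(w)]

mismatches = cross_check(range(1024))  # empty list means no disagreement
```

A disagreement flags an error in one of the two paths, but, as discussed later in this section, it does not by itself say which path is wrong.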
The design process can be regarded as a path that transforms a set of specifications into an implementation. The basic principle behind verification consists of two steps. During the first step, the specifications are transformed into an implementation; let us call this step the verification transformation. During the second step, the result of the verification transformation is compared with the result of the design transformation to detect any errors. This is illustrated in Figure 1.3 (A). Oftentimes, the verification transformation takes place in the head of a verification engineer, and its result takes the form of properties deduced from the specifications. For instance, the expected result for a simulation input vector is calculated by a verification engineer based on the specifications, and that expected result is, in effect, an alternative implementation.

Figure 1.3 The basic principle of design verification. (A) The basic methodology of verification by redundancy. (B) A variant of the basic methodology adapted in model checking. (C) Simulation methodology cast in the form of verification by redundancy.
Obviously, if the verification engineers go through exactly the same procedures as the design engineers, both are likely to arrive at the same conclusions, avoiding and committing the same errors. Therefore, the more the design and verification paths differ, the higher the confidence the verification produces. One way to achieve high confidence is for verification engineers to transform the specifications into an implementation model in a language different from the design language. Such a language is called a verification language, as a counterpart to a design language. Examples of verification languages include Vera, C/C++, and e. A possible verification strategy is to use C/C++ for the verification model and Verilog/VHSIC Hardware Description Language (VHDL) for the design model.
During the second step of verification, the two forms of implementation are compared. This is achieved by expressing both in a common intermediate form so that equivalence can be checked efficiently. Sometimes the comparison mechanism can be sophisticated: for example, comparing the outputs of two networks whose packets may arrive out of order. In this case, a common form is obtained by sorting the arriving packets in a predefined way. Another example of a comparison mechanism is determining the equivalence between a transistor-level circuit and an RTL implementation; a common intermediate form in this case is a binary decision diagram.
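The out-of-order packet example above can be sketched as follows. The packet fields and the sort key are purely illustrative; any agreed-upon canonical ordering would serve as the common intermediate form:

```python
# Comparing two packet streams that may arrive in different orders.
# The common intermediate form is a canonical ordering: sort both
# streams by a predefined key, then check the sorted streams for equality.

def canonicalize(packets):
    """Sort packets by (sequence number, payload); this is the common form."""
    return sorted(packets, key=lambda p: (p["seq"], p["payload"]))

def streams_equivalent(stream_a, stream_b) -> bool:
    """Two streams are equivalent if their canonical forms are identical."""
    return canonicalize(stream_a) == canonicalize(stream_b)

# Hypothetical outputs: same packets, delivered in different orders.
dut_out = [{"seq": 2, "payload": "b"}, {"seq": 1, "payload": "a"}]
ref_out = [{"seq": 1, "payload": "a"}, {"seq": 2, "payload": "b"}]
```

Sorting discards exactly the property the specification does not constrain (arrival order), so only genuine differences in content survive the comparison.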
Here we see that the classic simulation-based verification paradigm fits the verification principle. A simulation-based verification paradigm consists of four components: the circuit, the test patterns, the reference output, and a comparison mechanism. The circuit is simulated on the test patterns, and the result is compared with the reference output. The implementation result from the design path is the circuit; the implementation results from the verification path are the test patterns and the reference output. The test patterns and reference output count as implementation results from the verification path because, in deriving the reference output from the test patterns, the verification engineer transforms the patterns according to the specifications, and that derivation is itself an implementation process. Finally, the comparison mechanism samples the simulation results and determines their equality with the reference output. The principle behind simulation-based verification is illustrated in Figure 1.3 (C).
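The four components of the simulation paradigm can be sketched in a few lines. The 8-bit adder "circuit" and the vectors below are hypothetical stand-ins for a simulated design and an engineer-derived reference:

```python
# Simulation-based verification: circuit, test patterns,
# reference output, and a comparison mechanism.

def circuit(a: int, b: int) -> int:
    """Stand-in for the simulated design: an 8-bit adder that wraps."""
    return (a + b) & 0xFF

# Test patterns and reference output: in practice the verification
# engineer derives the expected results from the specifications.
test_patterns = [(0, 0), (1, 2), (200, 100), (255, 1)]
reference_output = [0, 3, 44, 0]

def compare(patterns, reference):
    """Comparison mechanism: simulate each pattern, check against reference."""
    failures = []
    for (a, b), expected in zip(patterns, reference):
        actual = circuit(a, b)
        if actual != expected:
            failures.append(((a, b), expected, actual))
    return failures

failures = compare(test_patterns, reference_output)  # empty means pass
```

Note that the reference output encodes the engineer's reading of the specifications, so a nonempty failure list may indicate an error on either path.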
Verification through redundancy is a double-edged sword. On the one hand, it uncovers inconsistencies between the two approaches. On the other hand, it can also introduce incompatible differences between the two approaches, and with them verification errors. For example, using a C/C++ model to verify against a Verilog design may force the verification engineer to resolve fundamental differences between the two languages that could otherwise be avoided. Because the two languages differ, there are areas that one language models accurately and the other cannot; a case in point is C/C++'s deficient modeling of timing and parallelism. Just as design code is susceptible to errors, verification code is equally prone to them, so verification engineers have to debug both design errors and verification errors. Thus, if used carelessly, the redundancy strategy can leave engineers debugging more errors than exist in the design alone, namely design errors plus verification errors, resulting in a large verification overhead.
As discussed earlier, the first type of error is introduced during the implementation process. The second type of error exists in the specifications themselves: unspecified functionality, conflicting requirements, or unrealizable features. The only way to detect this type of error is through redundancy, because the specification sits at the top of the abstraction hierarchy, leaving no reference model against which to check it. Holding a design review meeting in which a team of engineers goes over the design architecture is one form of verification through redundancy at work. Besides checking with redundancy directly, examining the requirements of the application environment in which the design will reside once it has become a product also detects specification bugs, because the environment dictates how the design should behave and thus serves as a complementary form of design specification. Therefore, verifying the design requirements against the environment is another form of verification through redundancy. Furthermore, some errors of this type will eventually be uncovered as the design takes a more concrete form: at a later stage of implementation, conflicting requirements will surface as inconsistencies, and some features will emerge as unrealizable given the available technologies and affordable resources.