Introduction to Jitter, Noise, and Signal Integrity at High-Speed
- 1.1 Jitter, Noise, and Communication System Basics
- 1.2 Sources of Timing Jitter, Amplitude Noise, and Signal Integrity
- 1.3 Signal and Statistical Perspectives on Jitter and Noise
- 1.4 System Perspective on Jitter, Noise, and BER
- 1.5 Historical Overview of Jitter, Noise, BER, and Signal Integrity
- 1.6 Overview of This Book
1.1 Jitter, Noise, and Communication System Basics
The essence of communication is the transmission and reception of a signal through a medium or channel. An early mathematical model for communication may be traced back to Claude Shannon's famous 1948 paper.1 Depending on the kind of medium used to transmit and receive a signal, communication systems are grouped into three basic categories: fiber, copper, and wireless (or free space) (see Figure 1.1). The bandwidths typically are a few THz for fiber and a few GHz for copper media. Considering the constraints of bandwidth, attenuation, and cost, fiber-based communication is often used for long-distance (> 1 km), high-data-rate (up to > 100 Gb/s per channel) communication. Copper-based communication is used for medium distances (< 1 km) and medium-to-high data rates (1 Mb/s to a few Gb/s per channel). Wireless is used for medium distances (~ km) and medium data rates (up to ~100 Mb/s). The choice of a communication medium is largely determined by cost and application requirements. Clearly, fiber has the highest intrinsic bandwidth, so it can deliver the highest data rate possible for a single channel.
Figure 1.1 A simple communication system, including three basic building blocks: transmitter, medium, and receiver.
1.1.1 What Are Jitter, Noise, and Signal Integrity?
When a signal is transmitted and received, a physical process called noise is always associated with it. Noise is basically any undesired signal added to the ideal signal. In the context of digital communication, the information is encoded in logical bits of 1 and 0. An ideal signal may be represented by a trapezoidal wave with a finite 0-to-1 rise time or 1-to-0 fall time. In the presence of noise, the sum of the ideal signal and the noise gives rise to the "net" or actual signal waveform. If no noise is added, the actual signal is identical to the ideal signal waveform. If noise is added, the actual signal deviates from the ideal signal, as shown in Figure 1.2.
Figure 1.2 An ideal signal versus a noisy signal for a digital waveform.
The deviation of a noisy signal from its ideal waveform can be viewed from two aspects: timing deviation and amplitude deviation. The amplitude of the digital signal for a copper-based system is the voltage, and for a fiber-based or radio frequency (RF) wireless system it is the power. The deviation of the signal amplitude (ΔA) is defined as the amplitude noise (or simply noise), and the deviation of time (Δt) is defined as the timing jitter (or simply jitter). These definitions will be used throughout this book. The impacts of timing jitter and amplitude noise are not symmetrical, though. Amplitude noise is present at all times and can affect system performance continuously, whereas timing jitter affects system performance only where an edge transition exists.
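To make these definitions concrete, the following minimal Python sketch (not from this book; the waveform parameters and the helper name crossing_time are illustrative assumptions) builds an ideal rising edge of a trapezoidal pulse, adds random noise to form the actual signal, and then extracts the amplitude deviation ΔA and the timing deviation Δt of the 50% threshold crossing.

```python
import numpy as np

# Illustrative parameters (arbitrary choices for this sketch)
bit_period = 1.0e-9      # 1 ns bit period
rise_time  = 0.1e-9      # 0-to-1 rise time of the trapezoidal edge
amplitude  = 1.0         # logical-1 amplitude (volts for a copper link)
threshold  = 0.5         # 50% crossing level used to define edge timing

t = np.linspace(0, bit_period, 2001)

# Ideal edge: linear ramp from 0 to the logical-1 level over rise_time, then flat.
ideal = np.clip(t / rise_time, 0.0, 1.0) * amplitude

# Actual signal = ideal signal + additive noise (white Gaussian noise here).
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.02, size=t.size)
actual = ideal + noise

# Amplitude noise: deviation of the actual amplitude from the ideal amplitude.
delta_A = actual - ideal           # equals the added noise in this simple model

def crossing_time(waveform, time, level):
    """Return the first time the waveform crosses 'level' (linear interpolation)."""
    idx = np.argmax(waveform >= level)
    t0, t1 = time[idx - 1], time[idx]
    v0, v1 = waveform[idx - 1], waveform[idx]
    return t0 + (level - v0) * (t1 - t0) / (v1 - v0)

# Timing jitter: deviation of the 50% threshold-crossing time of the edge.
t_ideal  = crossing_time(ideal,  t, threshold)
t_actual = crossing_time(actual, t, threshold)
delta_t  = t_actual - t_ideal

print(f"amplitude noise (rms) = {delta_A.std():.3e} V")
print(f"timing jitter of edge = {delta_t:.3e} s")
```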
Signal integrity generally is defined as any deviation from the ideal waveform.2 As such, signal integrity encompasses both amplitude noise and timing jitter in a broad sense. However, certain signal integrity signatures, such as overshoot, undershoot, and ringing (see Figure 1.3), may not be well covered by either noise or jitter alone.
Figure 1.3 Some signal integrity key signatures.
1.1.2 How Do Jitter and Noise Impact the Performance of a Communication System?
There is no doubt that jitter, noise, and signal integrity all impact the quality of a communication system. The following sections discuss and illustrate how jitter and noise cause bit errors and under what conditions such errors occur, and then introduce the metric commonly used to quantify them in a communication system: the bit error rate.
1.1.2.1 Bit Error Mechanisms
The impacts of timing jitter and amplitude noise can best be understood from the perspective of a receiver in a communication system.3 A receiver samples the incoming logical 1 pulse data at a sampling time ts and a threshold voltage vs, as shown in Figure 1.4. For a jitter- and noise-free digital pulse, an ideal receiver samples the data at the center of the incoming pulse. In this context there is clearly no need to talk about signal integrity, because its effects are covered by jitter and noise. Under such conditions, threshold crossing times for the rising and falling edges satisfying the conditions tr < ts < tf and V1 > vs result in a logical 1 being detected, and the data bit is received correctly (see part (a) of Figure 1.4). In the presence of jitter and noise, the rising and falling edges can move along the time axis, and the voltage level can move along the amplitude axis. As such, the correct bit detection conditions for sampling time and voltage may not be satisfied, resulting in a bit error because bit 1 is received/detected as bit 0. The violation of those sampling conditions can occur in three scenarios:
- The crossing time of the rising edge lags behind the sampling time, or tr > ts.
- The crossing time of the falling edge is ahead of the sampling time, or tf < ts.
- The logical 1 voltage is below the sampling voltage vs, or V1 < vs.
Figure 1.4 A receiver sampling an incoming data bit 1 (a) and 0 (b), where tr and tf are the timings for the 50% crossing (or zero crossing timings) for the rising and falling edges, respectively, and ts and vs are the sampling time and voltage, respectively.
For a zero pulse, or bit "0" detection, shown in part (b) of Figure 1.4, the correct detection condition becomes tr < ts < tf and V0 < vs. Similarly, violation of the correct sampling conditions causes a bit error because bit 0 is received as bit 1. The violation scenarios for timing are similar to those of the bit 1 pulse (part (a) of Figure 1.4). However, the violation condition for voltage becomes V0 > vs.
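The detection conditions above can be summarized in a small decision rule. The Python sketch below is a simplified illustration (it assumes a rectangular pulse model, and the names sampled_voltage and detect are ours, not from this book): it shows how a bit 1 sampled inside the window tr < ts < tf is detected correctly, and how a rising edge that lags the sampling time turns the same bit into a bit error.

```python
def sampled_voltage(bit, tr, tf, ts, v_hi=1.0, v_lo=0.0):
    """Voltage seen by the receiver at sampling time ts for a single pulse.
    A bit-1 pulse is high only between its rising-edge crossing tr and its
    falling-edge crossing tf; a bit-0 pulse is the inverse. (Simplified
    rectangular model; finite edges and amplitude noise are ignored here.)"""
    inside = tr < ts < tf
    if bit == 1:
        return v_hi if inside else v_lo
    return v_lo if inside else v_hi


def detect(v_sample, vs=0.5):
    """Receiver decision: compare the sampled voltage against threshold vs."""
    return 1 if v_sample > vs else 0


# Ideal case: sampling at the pulse center (tr < ts < tf) detects bit 1 correctly.
print(detect(sampled_voltage(1, tr=0.1, tf=0.9, ts=0.5)))   # -> 1 (correct)

# Jitter case: the rising edge lags the sampling time (tr > ts), so the receiver
# samples the low level and bit 1 is detected as bit 0, i.e., a bit error.
print(detect(sampled_voltage(1, tr=0.6, tf=0.9, ts=0.5)))   # -> 0 (bit error)
```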
1.1.2.2 Bit Error Rate (BER)
We have demonstrated with a simple example how jitter and noise cause a bit error in a digital system. Because a digital system transmits and receives many bits in a given time, the system's overall performance can best be described by the rate of bit failure, namely the ratio of the total number of failed bits Nf to the total number of bits received N. This ratio is called the bit error rate (BER) or bit error ratio. Bit error ratio is the more precise term, because BER = Nf/N is a simple ratio and involves no normalization over time, as most rate definitions would otherwise require.
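As a purely illustrative example (the flip probability p_flip below is an arbitrary stand-in for the combined effect of jitter and noise), the following sketch counts failed bits in a simulated stream and forms the ratio Nf/N:

```python
import numpy as np

rng = np.random.default_rng(1)

# Compare transmitted and received bit streams over N bits.
n_bits = 1_000_000
tx_bits = rng.integers(0, 2, size=n_bits)

# Emulate a noisy link by flipping each bit with a small probability.
p_flip = 1e-4
rx_bits = tx_bits ^ (rng.random(n_bits) < p_flip)

n_failed = np.count_nonzero(tx_bits != rx_bits)   # Nf: total failed bits
ber = n_failed / n_bits                           # BER = Nf / N
print(f"failed bits = {n_failed}, BER = {ber:.2e}")
```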
BER is the bottom-line metric for determining how good a communication system is. At multiple Gb/s data rates, the BER requirement for most communication standards, such as Fibre Channel (FC), Gigabit Ethernet, SONET, and PCI Express, is 10^-12 or smaller. A larger BER degrades network or link efficiency and, worse, increases system latency. A simple implication of BER = 10^-12 is that for every 10^12 bits transmitted/received, only one bit error is allowed. Clearly BER depends on the data rate, jitter, and noise in the communication system. The definition of BER implies that BER is a counting statistic, so Poisson statistics may apply.
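Since bit errors are counted events, a quick Poisson calculation illustrates what BER = 10^-12 means in practice. The sketch below (an illustration under that assumption, not a procedure from this book) computes the expected error count λ = N·BER and the probability of observing k errors when 10^12 bits are transmitted:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of observing exactly k errors when lam errors are expected."""
    return exp(-lam) * lam**k / factorial(k)

ber = 1e-12            # target BER for many multi-Gb/s standards
n_bits = 1e12          # number of bits transmitted/received
lam = n_bits * ber     # expected number of bit errors (here, 1)

for k in range(4):
    print(f"P({k} errors) = {poisson_pmf(k, lam):.3f}")
# With lam = 1, there is roughly a 37% chance of zero errors and 37% of exactly one.
```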