- Low-Power Design: An Overview
- Low-Voltage, Low-Power Design Limitations
- Silicon-On-Insulator (SOI)
- From Devices to Circuits
- References
1.2 Low-Voltage, Low-Power Design Limitations
1.2.1 Power Supply Voltage
From the device designer's viewpoint, it has been said that "the lower the supply voltage, the better." Although the dynamic power depends on the supply voltage, the stray capacitance, and the frequency of operation, the supply voltage has the largest effect because the dynamic power scales with its square. Lowering the supply voltage can therefore reduce the power dissipation of a circuit considerably, ideally without compromising the frequency of operation, that is, the speed performance. However, various problems are associated with lowering the voltage. In CMOS circuitry, the drivability of MOSFETs decreases, signals become smaller, and threshold voltage variations become more limiting. As shown in Figure 1.3, the increase in the gate delay time becomes serious when the operating voltage is reduced to 2 V or less, even when the device dimensions are scaled down. Supply voltage scaling in BiCMOS circuits puts even more serious constraints on circuit performance. Although BiCMOS Ultra-Large-Scale Integration (ULSI) systems combine the low power dissipation of CMOS with the high output-drive capability of bipolar devices, the gate delay time increases significantly under low supply-voltage conditions. This occurs because the effective voltage applied to the MOS devices is reduced by the inherent built-in voltage (VBE ~ 0.7 V) of the bipolar devices in the conventional totem-pole type circuit. New methods must therefore be devised to overcome these obstacles to lowering the supply voltage.
Fig. 1.3 Inverter delay time versus supply voltage [13] (© 1995 IEEE).
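As a first-order reference for these trends, the standard expression for the dynamic power of a CMOS node, together with the effective voltage left for the MOS devices in a totem-pole BiCMOS stage, can be written as below. This is textbook material rather than a formula quoted from this chapter's references; alpha denotes the switching activity and CL the load (stray) capacitance.

```latex
% First-order dynamic power of a CMOS node:
%   \alpha = switching activity, C_L = load (stray) capacitance,
%   V_{DD} = supply voltage, f = operating frequency.
P_{dyn} = \alpha \, C_L \, V_{DD}^{2} \, f
% Effective voltage available to the MOS devices in a conventional
% totem-pole BiCMOS output stage:
V_{eff} \approx V_{DD} - V_{BE}, \qquad V_{BE} \sim 0.7\,\mathrm{V}
```

The quadratic dependence on VDD is why supply scaling dominates the power budget, and the fixed VBE drop is why the BiCMOS gate delay degrades disproportionately once VDD approaches a few multiples of 0.7 V.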
1.2.2 Threshold Voltage
Another issue related to scaling down the power supply voltage is the threshold voltage restriction. At a low power supply voltage, a low threshold voltage is preferable to maintain the performance trend. However, because reducing the threshold voltage causes a drastic increase in the cut-off current, the lower limit of the threshold voltage must be chosen carefully, taking into account the stability of the circuit operation and the power dissipation. Furthermore, the threshold voltage dispersion must be suppressed in proportion to the supply voltage. The dispersion of the threshold voltage affects the noise margin, the standby power dissipation, and the transient power dissipation. Because the worst-case critical path limits LSI performance, and its delay is influenced by the threshold voltage dispersion, suppressing this dispersion is strongly recommended for low-power large-scale integration (LSI) from both the process control and the circuit design points of view [14].
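The sensitivity of the cut-off current to the threshold voltage follows directly from the exponential subthreshold characteristic. The first-order model below is standard; the symbols I0 (the current at VGS = Vth) and S (the subthreshold swing) are generic and not defined elsewhere in this chapter.

```latex
% Subthreshold drain current (first-order model), with S the
% subthreshold swing in mV/decade:
I_{sub} = I_0 \cdot 10^{\,(V_{GS} - V_{th})/S}
% Cut-off current at V_{GS} = 0:
I_{off} = I_0 \cdot 10^{\,-V_{th}/S}
```

Since S is bounded below by (kT/q) ln 10, about 60 mV/decade at room temperature, and stays nearly constant across device generations, every S millivolts removed from Vth raises Ioff tenfold; this is the drastic cut-off current increase noted above.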
Figure 1.4 shows the Vth/VDD dependence of the gate delay time of the CMOS inverter [15]. When the threshold voltage approaches VDD/2, the delay time increases rapidly, because the MOSFET drive current is drastically reduced and the CMOS inverter switching threshold rises correspondingly. On the other hand, lowering the threshold voltage drastically improves the gate delay time. Therefore, a Vth/VDD ratio of 0.2 or below is required for high-speed operation, and it is necessary to reduce the threshold voltage as far as possible when lowering the power supply voltage. However, because the subthreshold swing is almost constant across device generations, reducing the threshold voltage sharply increases the MOSFET cut-off current and degrades its ON/OFF ratio. Moreover, the threshold voltage reduction increases the power dissipation due to the switching transient current. At high threshold voltages, the transient power dissipation is negligible compared with the total power dissipation; at low threshold voltages, the transient power increases greatly with the transient current. Thus, a compromise must be found for the Vth/VDD ratio to achieve both low-power and high-speed operation.
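This compromise can be made concrete with a small numerical sketch. The script below evaluates the gate delay with the alpha-power-law model and the cut-off current with the exponential subthreshold model; the model forms are standard, but the parameter values (alpha = 1.3, S = 85 mV/decade, VDD = 1 V) are illustrative assumptions, not data extracted from Figure 1.4.

```python
# Illustrative delay/leakage trade-off versus the Vth/VDD ratio.
# Delay:   alpha-power law, t_d ~ C * VDD / (VDD - Vth)**ALPHA.
# Leakage: subthreshold model, I_off ~ 10**(-Vth / S).
# All constants are assumed for illustration only.

ALPHA = 1.3   # velocity-saturation index (assumed)
S = 0.085     # subthreshold swing, V/decade (assumed)
VDD = 1.0     # supply voltage, V (assumed)

def relative_delay(vth: float) -> float:
    """Gate delay normalized to the Vth = 0 case."""
    return (VDD / (VDD - vth)) ** ALPHA

def relative_leakage(vth: float) -> float:
    """Cut-off current normalized to the Vth = 0 case."""
    return 10 ** (-vth / S)

for ratio in (0.1, 0.2, 0.3, 0.4, 0.5):
    vth = ratio * VDD
    print(f"Vth/VDD = {ratio:.1f}: "
          f"delay x{relative_delay(vth):5.2f}, "
          f"leakage x{relative_leakage(vth):9.1e}")
```

The printed table reproduces the qualitative shape of Figure 1.4: the delay penalty grows mildly up to a Vth/VDD ratio of about 0.2 and steeply beyond it, while the leakage grows by about one decade for every S removed from Vth.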
Fig. 1.4 Gate delay time of CMOS inverter versus threshold voltage/power supply voltage [43] (Reprinted by permission of Pearson Education, Inc.).
1.2.3 Scaling
As the demand for high speed, low power consumption, and high packing density continues to grow each year, devices must be scaled to smaller dimensions. As the market moves toward a greater scale of integration, the move toward a reduced supply voltage also has the advantage of improving the reliability of IC components of ever-shrinking dimensions. This is easily understood if one recalls that IC components with smaller dimensions tend to break down more readily at high voltages. It is already accepted that CMOS devices scaled down for operation at 2.5 V maintain device reliability without sacrificing performance [16].
Scaling the supply voltage for digital circuits has historically been the most effective way to lower the power dissipation because it reduces all components of power and is felt globally across the entire system. The 1997 National Technology Roadmap for Semiconductors (NTRS) [17] projects the supply voltage of future gigascale integrated systems to scale from 2.5 V in 1997 to 0.5 V in 2012 primarily to reduce power dissipation and power density, increases of which are projected to be driven by higher clock rates, higher overall capacitance, and larger chip sizes.
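A rough calculation shows why the roadmap targets the supply voltage first. Holding the switched capacitance and frequency fixed, an idealization used only for this estimate, the projected move from 2.5 V to 0.5 V reduces the dynamic power per gate by:

```latex
\frac{P_{dyn}(2.5\ \mathrm{V})}{P_{dyn}(0.5\ \mathrm{V})}
  = \left(\frac{2.5}{0.5}\right)^{2} = 25
```

a twenty-five-fold reduction, before the projected increases in clock rate, capacitance, and chip size claw any of it back.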
Scaling brings about the following benefits:
- Improved device characteristics for low-voltage operation due to the improvement in the current driving capabilities
- Reduced capacitance through small geometries and junction capacitances
- Improved interconnect technology
- Availability of multiple and variable threshold devices, which results in good management of the active and standby power trade-off
- Higher density of integration (It has been shown that the integration of a whole system into a single chip provides orders of magnitude in power savings.)
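These benefits follow from classical constant-field scaling, which this section does not derive; the relations below are the standard summary, with κ > 1 as the scaling factor.

```latex
% Constant-field (Dennard) scaling by a factor \kappa > 1:
\begin{aligned}
L,\ W,\ t_{ox} &\rightarrow 1/\kappa      && \text{(device dimensions)}\\
V_{DD},\ V_{th} &\rightarrow 1/\kappa     && \text{(voltages)}\\
I &\rightarrow 1/\kappa                   && \text{(drain current)}\\
C &\rightarrow 1/\kappa                   && \text{(capacitance)}\\
t_d \sim CV/I &\rightarrow 1/\kappa       && \text{(gate delay)}\\
P \sim CV^{2}f &\rightarrow 1/\kappa^{2}  && \text{(power per circuit)}\\
P/A &\rightarrow 1                        && \text{(power density)}
\end{aligned}
```

The constant power density in the last line is the ideal case; the threshold voltage and mobility problems discussed below are among the reasons practical scaling departs from it.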
However, during the scaling process, the supply voltage must decrease to limit the field strength in the gate insulator of the CMOS devices and to relax the electric field from the reliability point of view. This decrease leads to a tremendous increase in the propagation delay of BiCMOS gates, especially if the supply voltage is scaled below 3 V [18]. Also, scaling down the supply voltage causes the output voltage swing of BiCMOS circuits to decrease [19,20]. Moreover, external noise does not scale down as device feature sizes are reduced, giving rise to adverse effects on circuit performance and reliability.
The major device problem associated with simple scaling lies in the increase of the threshold voltage and the decrease of the carrier surface mobility when the substrate doping concentration is increased to prevent punch-through. To sustain a low threshold voltage together with a high carrier surface mobility and a high immunity to punch-through, substrate engineering will be a prerequisite.
1.2.4 Interconnect Wires
In the deep submicron era, interconnect wires are responsible for an increasing fraction of the power consumption of an integrated circuit. Most of this increase is attributed to global wires, such as busses, clocks, and timing signals. D. Liu et al. [21] found that, for gate array and cell library-based designs, the power consumption of wires and clock signals can reach 40 and 50% of the total on-chip power consumption, respectively. The influence of the interconnect is even more significant for reconfigurable circuits: it has been reported that, over a wide range of applications, more than 90% of the power dissipation of traditional Field Programmable Gate Array (FPGA) devices is attributable to the interconnect [22]. It is therefore both advantageous and desirable to adopt techniques that reduce these ratios.
For chip-to-chip interconnects, wires are treated as transmission lines, and many low-power Input/Output (I/O) schemes have been proposed at both the circuit level [23] and the coding level [24]. One effective technique for reducing the power consumption of on-chip interconnects is to reduce the voltage swing of the signal on the wire. Several reduced-swing interconnect schemes have been proposed in the literature [25–30]. These schemes offer a wide range of potential energy reductions, but other considerations, such as complexity, reliability, and performance, play an important role as well.
Nakagome et al. [26] proposed a static driver with a reduced power supply. This driver requires two extra power rails to limit the interconnect swing and uses special low-threshold devices (~0.1 V) to compensate for the current-drive loss due to the lower supply voltages. Differential signaling, proposed and analyzed by Burd [31], achieves large energy savings by using a very low-voltage supply. The driver uses nMOS transistors for both pull-up and pull-down, and the receiver is a clocked, unbalanced current-latch sense amplifier. The receiver overhead may hence be dominant for short interconnect wires with small capacitive loads. The main disadvantage of the differential approach is the doubling of the number of wires, which presents a major concern in most designs. Another class of circuits falls under the category of dynamically enabled drivers. The idea behind this family of circuits is to control the charging and discharging times of the drivers so that a desired swing on the interconnect is obtained. This concept has been widely applied in memory designs; however, it works well only when the capacitive loads are well known beforehand.
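To first order, the saving from swing reduction is linear in the swing, because the charge moved per transition is Cw·Vswing. The sketch below makes that explicit for a conventional driver that still draws charge from the full VDD rail; schemes that also lower the charging rail (as in Nakagome's driver, with its extra power rails) save quadratically instead. The wire capacitance and voltages are illustrative assumptions.

```python
# First-order energy per transition on an on-chip wire.
# Conventional full swing:     E = C_w * VDD**2
# Reduced swing, full rail:    E = C_w * V_swing * VDD   (charge C_w*V_swing
#                                  drawn through the VDD supply)
# Reduced swing, reduced rail: E = C_w * V_swing**2      (extra low-voltage
#                                  rail, as in the scheme of [26])
# All numbers below are assumed for illustration only.

C_W = 1e-12      # wire capacitance: 1 pF (assumed)
VDD = 2.5        # full supply, V (assumed)
V_SWING = 0.5    # reduced signal swing, V (assumed)

cases = [
    ("full swing",                 C_W * VDD ** 2),
    ("reduced swing, full rail",   C_W * V_SWING * VDD),
    ("reduced swing, reduced rail", C_W * V_SWING ** 2),
]
for name, energy in cases:
    print(f"{name:28s}: {energy * 1e12:6.3f} pJ per transition")
```

With these numbers the gap is 5x for swing reduction alone and 25x when the rail is lowered as well, which is why rail reduction is attractive despite its extra supply-distribution cost.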
Another scheme, the Reduced-Swing Driver with Voltage-Sense Translator (RSD-VST) [29], also uses a dynamically enabled driver, with an embedded copy of the receiver circuit, called the voltage-sense translator (VST), to sense the interconnect swing and provide a feedback signal that controls the driver. An inherent drawback of the scheme is the mismatch of the switching threshold voltage between the two VSTs. The charge intershared bus (CISB) [27] and the charge-recycling bus (CRB) [28] are two schemes that reduce the interconnect swing by utilizing charge sharing between the multiple data bit lines of a bus. The CRB scheme uses differential signaling, whereas the CISB scheme is single-ended with references. Both schemes reduce the interconnect swing by a factor of n, where n is the number of bits.
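Taking the last statement at face value, the first-order consequence of an n-bit charge-sharing bus can be summarized as below. The quadratic energy line assumes each line charges between adjacent VDD/n levels, a simplifying assumption rather than a result quoted from [27,28].

```latex
% n-bit charge-sharing bus, first order:
V_{swing} = \frac{V_{DD}}{n}
% Energy per line per transition, assuming each line moves
% between adjacent V_{DD}/n levels (simplifying assumption),
% against E_{full} = C_w V_{DD}^{2} for a full-swing line:
E_{line} \approx C_w \left(\frac{V_{DD}}{n}\right)^{2} = \frac{E_{full}}{n^{2}}
```

A wider bus thus buys a quadratic first-order saving per line, at the cost of the sharing network, reference generation, and the reduced noise margin that a VDD/n swing implies.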