- Considering the Importance of the Power Supply
- Primary Function and Operation
- Power Supply Form Factors
- Motherboard Power Connectors
- Peripheral Power Connectors
- Power Supply Loading
- Power Supply Ratings
- Power Supply Specifications
- Overloading the Power Supply
- Power Off When Not in Use
- Power Management
- Power Supply Troubleshooting
- Repairing the Power Supply
- Using Power-Protection Systems
- RTC/NVRAM Batteries (CMOS Chips)
Power Off When Not in Use
Should you turn off a system when it is not in use? To answer this frequent question, you should understand some facts about electrical components and what makes them fail. Combine that knowledge with information on power consumption, cost, and safety to reach your own conclusion. Because circumstances vary, the best answer for your situation might differ from the answer for others, depending on your particular needs and applications.
Frequently powering a system on and off does cause deterioration and damage to the components. This seems logical, but the reason is not the one most people expect. Many believe that flipping system power on and off frequently is harmful because it electrically "shocks" the system. The real problem, however, is thermal shock. As the system warms up, the components expand; as it cools off, the components contract. In addition, various materials in the system have different thermal expansion coefficients, which means that they expand and contract at different rates. Over time, thermal shock causes deterioration in many areas of a system.
From a pure system-reliability viewpoint, you should insulate the system from thermal shock as much as possible. When a system is turned on, the components go from ambient (room) temperature to as high as 185° F (85° C) within 30 minutes or less. When you turn off the system, the same thing happens in reverse, and the components cool back to ambient temperature in a short period of time.
Thermal expansion and contraction remain the single largest cause of component failure. Chip cases can split, allowing moisture to enter and contaminate them. Delicate internal wires and contacts can break, and circuit boards can develop stress cracks. Surface-mounted components expand and contract at rates different from the circuit board they are mounted on, which causes enormous stress at the solder joints. Solder joints can fail because the metal hardens under repeated stress, eventually cracking the joint. Components that use heatsinks, such as processors, transistors, or voltage regulators, can overheat and fail because thermal cycling causes heatsink adhesives to deteriorate, breaking the thermally conductive bond between the device and the heatsink. Thermal cycling also causes socketed devices and connections to loosen, or creep, which can cause a variety of intermittent contact failures.
→ See "SIMMs, DIMMs, and RIMMs," p. 360
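To put rough numbers on that solder-joint stress, the standard linear expansion formula (delta-L = alpha x length x delta-T) shows how far a chip and its board grow apart over a single warm-up cycle. The coefficients and dimensions in this Python sketch are typical ballpark figures chosen purely for illustration, not measurements of any particular part:

```python
# Back-of-the-envelope differential thermal expansion.
# Coefficients are typical published ballpark values (illustrative only).

ALPHA_SILICON = 2.6e-6   # silicon chip, per degree C (approx.)
ALPHA_FR4     = 14.0e-6  # FR-4 circuit board, in-plane, per degree C (approx.)

length_mm = 20.0         # hypothetical 20 mm surface-mounted component
delta_t_c = 85.0 - 25.0  # warm-up from 25 C ambient to 85 C operating

# Linear expansion: delta_L = alpha * L * delta_T
chip_growth_mm  = ALPHA_SILICON * length_mm * delta_t_c
board_growth_mm = ALPHA_FR4 * length_mm * delta_t_c
mismatch_um = (board_growth_mm - chip_growth_mm) * 1000  # mm -> micrometers

print(f"Chip grows  {chip_growth_mm * 1000:.1f} um")     # 3.1 um
print(f"Board grows {board_growth_mm * 1000:.1f} um")    # 16.8 um
print(f"Solder joints absorb {mismatch_um:.1f} um per cycle")  # 13.7 um
```

A mismatch of roughly 14 micrometers sounds tiny, but repeated over thousands of power cycles it is exactly the kind of flexing that work-hardens and cracks solder joints.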
Thermal expansion and contraction affect not only chips and circuit boards, but also things such as hard disk drives. Most hard drives today have sophisticated thermal compensation routines that make adjustments in head position relative to the expanding and contracting platters. Most drives perform this thermal compensation routine once every 5 minutes for the first 30 minutes the drive is running, and then every 30 minutes thereafter. In many drives, this procedure can be heard as a rapid "tick-tick-tick-tick" sound. In essence, anything you can do to keep the system at a constant temperature prolongs the life of the system, and the best way to accomplish this is to leave the system either permanently on or permanently off. Of course, if the system is never turned on in the first place, it should last a long time indeed!
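The timing of that recalibration is simple enough to sketch. The following Python function merely restates the schedule described above (every 5 minutes for the first half hour, every 30 minutes thereafter); actual firmware behavior varies from drive to drive:

```python
# Sketch of the thermal-recalibration schedule described above.
# Real drive firmware differs; this only illustrates the stated timing.

def recal_times(minutes_running: int) -> list[int]:
    """Minutes after spin-up at which recalibration would occur."""
    times = [t for t in range(5, 31, 5) if t <= minutes_running]
    t = 60
    while t <= minutes_running:   # then once every 30 minutes
        times.append(t)
        t += 30
    return times

print(recal_times(120))  # [5, 10, 15, 20, 25, 30, 60, 90, 120]
```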
Now, I am not saying that you should leave all systems on 24 hours a day. A system powered on and left unattended can be a fire hazard (I have had monitors spontaneously catch fire; luckily, I was there at the time), is a data security risk (from cleaning crews and other nocturnal visitors), can be easily damaged if moved while running, and wastes electrical energy.
I currently pay 11 cents for a kilowatt-hour of electricity. A typical desktop-style PC with display consumes at least 300 watts (0.3 kilowatt) of electricity (and that is a conservative estimate). This means it would cost 3.3 cents to run my typical PC for an hour. Multiplying by 168 hours in a week means that it would cost $5.54 per week to run this PC continuously. If the PC were turned on at 9 a.m. and off at 5 p.m., it would be on only 40 hours per week and would cost only $1.32, a savings of $4.22 per week! Multiply this savings by 100 systems, and you are saving $422 per week. Multiply this by 1,000 systems, and you are saving $4,220 per week! Using systems certified under the EPA Energy Star program (so-called green PCs) would account for an additional savings of around $1 per system per week, or $1,000 per week for 1,000 systems. The great thing about Energy Star systems is that the savings are even greater if the systems are left on for long periods of time because the power management routines are automatic.
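If you want to rerun this arithmetic with your own electric rate and hardware, it is trivial to script. This Python sketch reproduces the figures above; substitute your own numbers on the first two lines:

```python
# Weekly running-cost arithmetic from the text; edit the first two
# values to match your own electric rate and measured PC wattage.

rate_per_kwh = 0.11          # dollars per kilowatt-hour
power_kw     = 0.3           # desktop PC plus display, 300 watts

cost_per_hour = power_kw * rate_per_kwh       # $0.033 (3.3 cents)
continuous = cost_per_hour * 168              # on 24 hours x 7 days
workweek   = cost_per_hour * 40               # on 9 a.m. to 5 p.m., 5 days

print(f"Continuous:  ${continuous:.2f}/week")               # $5.54
print(f"40 hrs/week: ${workweek:.2f}/week")                 # $1.32
print(f"Savings:     ${continuous - workweek:.2f}/system")  # $4.22
```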
Based on these facts, my recommendation is that you power systems on at the beginning of the workday and off at the end of the workday. Do not power systems off for lunch, breaks, or any other short periods of time. If you are a home user, leave your computer on if you will be using it later in the day or if instant access is important; I'd normally recommend turning the system off when leaving the house or when sleeping. Servers, of course, should be left on continuously. This seems to be the best compromise between system longevity and pure economics. No matter what, these are just guidelines; if it works better for you to leave your system on 24 hours a day, seven days a week, make it so.