New-Style Design Process
Today the process is radically different, although no less painstaking or complex. Modern chip-design tools have done away with the error-prone manual work of taping up individual layers, but they only shift the workload. Modern multimillion-transistor chips supply more than enough new challenges to make up the difference.
EDA Design Tools
Modern chips are far too complex to design manually. No single engineer can personally understand everything that goes on inside a new chip. Even teams of engineers have no single member who truly understands all the details and nuances of the design. One person might manage the project and command the overall architecture of the chip, but individual engineers will be responsible for portions of the detailed design. They must work together as a team, and they must rely on and trust each other as well as their tools.
Those tools are all computer programs. Just as an experienced carpenter will accumulate a toolbox full of favored tools, chip designers develop a repertoire of computer programs that aid in various aspects of chip design. No single EDA tool can take a chip design from start to finish, just as no carpenter's tool can do every job. The assortment is important, and using a mixture of EDA tools is part of an engineer's craft.
Schematic Capture
Schematic diagrams are the time-honored way of representing electrical circuits, at least simple ones. Schematics, or wiring diagrams, are like subway maps. They draw, more or less realistically, the actual arrangement of wires and components (lines and stations). The diagram might be stylized a bit for easier understanding, like the classic route map for the London Underground, but for the most part, schematics are accurate maps of circuit connections, as shown in Figure 3.1.
Figure 3.1 This typical schematic diagram shows how two ICs (rectangles near the center) connect by wires (straight lines) to other components. Resistors, capacitors, diodes, LEDs, and other components are all represented by standard symbols. Except for the ICs, the figures on a schematic don't look like the actual components in the circuit.
Drawing a schematic on paper doesn't do an engineer any good if the goal is to ultimately transfer that design into film to have a chip made. Instead, schematics are drawn on a computer screen using schematic-capture software. With this, you can drag and drop symbols for various electrical components (adders, resistors, etc.) across your screen and then connect them by drawing wires between them. The schematic-capture software takes care of the simpler annotation chores like labeling each function and making sure no wires are left dangling in midair. Schematic-capture software is a bit like word-processing software in that it won't provide inspiration or talent or create designs from nothing, but it will catch common mistakes and keep your workspace free from embarrassing erasure marks.
A number of companies supply schematic-capture software, but most vendors are small firms and their ranks are dwindling. Modern chips are too complex to be designed this way, not because the schematic-capture programs can't handle it, but because the engineers can't. Designing a multimillion-transistor chip using schematic-capture software would be like painting a bridge with a tiny artist's brush and palette. There's just far too much area to cover and not enough time.
Hardware Synthesis
To alleviate some of the tedium of designing a chip bit by bit, engineers have turned to a new method, called hardware synthesis. Synthesis is not quite as space-age as it might sound: Chips are not magically synthesized from thin air. Instead, engineers feed their computers instructions about the chip's organization and the computer generates the detailed circuit designs. (The exact method for doing this is described in the next section.) The engineer still has to design the chip, just not at such a detailed level. It's like the difference between describing a brick wall, brick by brick and inch by inch, or telling an assistant, "Build me a brick wall that's three feet high by 10 feet long." If you have an assistant you trust who is skilled in bricklaying, you should get the same result either way.
Theoretically, that's true of hardware synthesis as well, but the reality is somewhat different. Although today's chip designers have overwhelmingly adopted hardware synthesis for their work, they still grumble about the trade-offs. For example, synthesized designs tend to run about 20 to 30 percent slower than "handcrafted" chip designs. Instead of running at 500 MHz, a synthesized chip might run at only 400 MHz. That's fine if it's a low-cost chip that really only needs to run at, say, 250 MHz. However, it's a showstopper if you're selling prestigious high-end microprocessors for personal computers. There are a few types of chips, therefore, that are still not designed using hardware synthesis because the vendor can't afford the reduction in performance.
Another drawback of hardware synthesis is that the resulting chips are often about 30 to 50 percent larger in terms of silicon real estate. More silicon means more cost, both because each chip consumes more raw material and because fewer chips fit on a round silicon wafer. Bigger chips mean lower manufacturing yields. Once again, certain chip makers don't use hardware synthesis because they're shaving every penny of cost possible.
Third, chips made from synthesized designs tend to use more electricity than do manually designed chips. That's a side effect of the larger silicon size previously mentioned. More silicon means more power drained, and the difference can be 20 percent or more. For extremely low-power chips, such as the ones used in cellular telephones, handcrafted chips are still popular.
Despite all these serious drawbacks, most new chips are created from synthesized designs simply because there's no other way. Even big engineering teams need the help of hardware-synthesis programs to finish a large chip in a reasonable amount of time. It takes a long time to finish a cathedral if you're laying every brick by hand.
Companies will often use both design styles, synthesis and handcrafting. The first generation of a new chip will usually be designed with hardware-synthesis languages (described later) to get the chip out the door and onto the market as quickly as possible. After the chip is released (assuming that it sells well), the company might order its engineers to revise or redesign the chip, this time using more labor-intensive methods to shrink the silicon size, reduce power consumption, and cut manufacturing costs. This "second spin" of the chip will often appear six to nine months after the first version. When microprocessor makers announce faster, upgraded versions of an existing chip, this is often how the new chip was created.
Hardware-Description Languages
Schematics are fine, up to a point, but they're too detail-oriented for the large-scale designs that modern chips have become. Instead, engineering teams need something more high-level and more abstract, something that allows them to think big and avoid getting bogged down in the electrical details of what they're creating. What they'd like is some way to describe what they want and have a tool magically produce it.
Enter hardware-description languages (HDLs), the next step up the evolutionary ladder from schematic-capture programs. HDLs also enable hardware synthesis. Using an HDL, engineering teams can design the behavior of a circuit without exactly designing the circuit itself in detail. The HDL tool will translate the engineers' wishes (albeit very specific and carefully defined wishes) into a circuit design. Instead of using schematic-capture software to draw an adder and all of its attendant wiring, an HDL user can simply specify that he or she wants two numbers added together. The HDL tool handles the detail work of selecting an adder and wiring it up.
HDLs are far from magical. Like all computer programs, they're very literal-minded and can misinterpret their users' wishes. Engineers spend years learning to use HDLs effectively and have to be very methodical about defining what they want. The bigger the chip design, the greater the opportunity for failure, so the greater the pressure to make sure everything's right.
HDLs look nothing like schematic capture. Instead of drawing figures on a screen, using an HDL is more like writing. It's a hardware-description language, after all. Engineers use an HDL to tell their computer what kind of circuit they want, in the form of a step-by-step procedure. The computer then interprets the steps and produces an equivalent circuit diagram. If schematics are blueprints, HDLs are recipes.
For a large chip, this procedural description can be hundreds of thousands of lines long, as long as a novel. A sample of HDL code is shown in Figure 3.2. It has to describe absolutely everything the chip does under all circumstances.
Figure 3.2 This extract of HDL code describes how an adder circuit should work. An HDL compiler translates this detailed description into a schematic, which can then be given to an engineer to build or, more likely, will be combined with other HDL components to make a larger chip.
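To make the recipe idea concrete, here is a sketch in Python (purely illustrative, not real VHDL or Verilog) of the kind of expansion a synthesis tool performs: the designer asks for "add two numbers," and the tool turns that into a network of individual logic gates. The gate equations below are the standard full-adder logic; the function names are invented for this example.

```python
# Sketch: what a synthesis tool does with "add two numbers" -- it
# expands the request into gate-level logic. Here a 4-bit ripple-carry
# adder is built from XOR/AND/OR gates, modeled in Python.

def full_adder(a, b, cin):
    """One bit of the adder: two XOR gates, two ANDs, one OR."""
    s = a ^ b ^ cin                    # sum bit
    cout = (a & b) | (cin & (a ^ b))   # carry out
    return s, cout

def ripple_add(x, y, bits=4):
    """Chain full adders so each bit's carry feeds the next stage."""
    carry, total = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total, carry

# The gate network agrees with ordinary integer addition:
for x in range(16):
    for y in range(16):
        total, carry = ripple_add(x, y)
        assert total + (carry << 4) == x + y
```

The point is the division of labor: the engineer specifies only the addition; choosing the gates and wiring the carries together is the tool's job.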
If you're familiar with computer programming and software, you've probably already recognized the concept of HDLs; they're programming languages. Paradoxically, HDLs are a software approach to a hardware problem: they're a way to "write" new hardware.
The HDL Leaders: VHDL and Verilog
The two most common HDLs are called VHDL and Verilog. Practically all new chips are designed using one of these HDLs. The two languages are quite similar, but both have devoted practitioners who'd argue the merits of their chosen HDL with an uncommon zeal.
Both of these languages were developed in the United States, although both are used worldwide. Interestingly, there is a definite geographic division between VHDL users and Verilog users. Verilog aficionados seem to be clustered around the western United States and Canada, whereas VHDL holds sway in Europe and New England. Asian users seem to be evenly split. (Regardless of the local tongue, the VHDL and Verilog languages always use English words and phrases to describe hardware.)
Tech Talk
Verilog is a few years older than VHDL. First developed in 1983, Verilog was for some time a proprietary HDL belonging to Cadence Design Systems. VHDL, on the other hand, was created as an open language and became an Institute of Electrical and Electronics Engineers (IEEE) standard in 1987. Sensing that VHDL's standard status would jeopardize its investment in Verilog, Cadence put its HDL in the public domain in 1990 and applied for IEEE approval, which it gained in 1995. Since 2001, both sides have been working to unify the languages and add new features to address the challenges of next-generation chips.
Although neither the VHDL nor the Verilog language belongs to anyone, as such, several EDA vendors do compete to sell the tools that convert engineers' HDL descriptions into working circuit designs. These are essentially translation programs, converting from one language (VHDL or Verilog) to another (circuit schematics). As with any translation program or service, there is fierce competition over nuances of how accurate or efficient those translations might be. Some engineers want the most efficient (i.e., fastest) translation because they plan to do several translations per day. Others are more interested in the performance of the result. In HDL terms, this means translating the original HDL into the fewest transistors possible, something comparable to translating German into dense, terse English. Still other customers might want the opposite: a translator that produces long, flowing passages with lots of footnotes and annotations, corresponding to a circuit that's large and uses lots of transistors but is easy to understand and pick apart for future designs.
Tech Talk
Register-transfer level (RTL) is a generic term that covers both VHDL and Verilog. RTL is not a language; it's a "zoom level" that's not quite as detailed as individual transistors but not quite as high level as a complete chip. It's an intermediate level of detail that works well for today's electronics engineers. Looking at an RTL description (e.g., VHDL or Verilog), a good engineer can divine what a circuit is going to do, but not exactly how it will do it. Like courtroom shorthand, RTL conveys just enough information to get the message across.
Alternate HDLs
VHDL and Verilog are by no means the only HDLs available, although they are clearly the two top players in the HDL market. Because of their age and rapid increases in chip complexity, both languages have begun to crack a little bit under the strain of modern chip design. Some engineers argue that designing a 10-million-transistor chip using VHDL or Verilog is little better than the rubylith methods from the 1970s. These engineers turn to a number of alternative HDLs.
HDLs come and go, but a few that seem to have reached critical mass in the EDA market are Superlog, Handel-C, and SystemC. Superlog, as the name implies, is a pumped-up version of Verilog. It's a superset of the language that adds higher-level, more abstract features. Superlog allows designers experienced in Verilog to handle larger chip designs without going crazy with details.
Handel-C and SystemC are both examples of a more radical approach to HDLs. These take the heretical stance that, because popular HDLs like Verilog and VHDL are already programming languages, why not use an actual programming language in their place? In other words, they use the C programming language to define hardware as well as software. This has the advantage that millions of programmers already know C programming, and thousands more students learn it every year. Idealistically, it might also help close some of the traditional gap between hardware and software engineers.
The C-into-hardware approach has some philosophical appeal but has met with raucous resistance from precisely the hardware engineers it is meant to entice. Programming languages such as C, BASIC, Java, and the rest were never designed to create, or even adequately describe, hardware, the argument goes, so they would naturally do a terrible job at it. It's like composing love sonnets in Klingon, you might say. Some of the resistance to this approach might just be the natural impulse to protect one's livelihood, or it might really be a terrible idea. Time will tell.
Tech Talk
Even the strongest backers of the C-as-hardware-language movement realize that the original C programming language isn't well suited to the job. They've all added various extensions to the language to help express parallelism, the ability of electronic circuits to perform multiple functions simultaneously, which programming languages like C can't describe. Many have also added "libraries" of common hardware functions so that engineers don't have to create them from scratch. The results clearly do work, but opinions vary widely as to the efficiency and performance of the resulting design.
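The parallelism problem can be seen in miniature with a hedged Python sketch (Python stands in here for any sequential language; the function names are invented). In silicon, two registers that trade values on a clock edge update simultaneously. Naive sequential code evaluates one statement at a time and corrupts the result; the "compute everything, then commit everything" discipline that hardware-oriented language extensions add gets it right.

```python
# Sketch (illustration only): why sequential languages struggle to
# describe hardware. Two registers that swap on every clock edge
# update *simultaneously* in a real circuit.

def naive_swap(a, b):
    a = b          # takes b's value...
    b = a          # ...but a's old value is already gone
    return a, b

def clocked_swap(a, b):
    next_a, next_b = b, a   # phase 1: compute every next value
    a, b = next_a, next_b   # phase 2: commit them all at once
    return a, b

print(naive_swap(1, 2))    # (2, 2) -- one value was lost
print(clocked_swap(1, 2))  # (2, 1) -- a true simultaneous swap
```

The two-phase version mimics how HDL simulators and C-based hardware extensions model a clock edge: all the "next" values are computed from the old state before any register is allowed to change.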
Producing a Netlist
Whether engineers use schematic capture or an HDL, their input will eventually be translated into something called a netlist. A netlist, or a list of nets, is a tangled list of which electrical circuits are connected to which other circuits. A netlist is generally only readable by a computer; it's too convoluted and condensed to be of any use to a person. The netlist is just an intermediate stop along the way to a new chip.
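As a rough illustration (the component and net names below are made up), a netlist is little more than a table recording which pins attach to which named nets. A tool can answer connectivity questions from it mechanically, even though the raw list means almost nothing to a human reader.

```python
# Sketch: a netlist, at its simplest, maps each net to the pins it ties
# together. "U1", "pin3", etc. are invented names for illustration.

netlist = {
    "net_clk":  [("U1", "pin3"), ("U2", "pin7"), ("U3", "pin1")],
    "net_data": [("U1", "pin8"), ("U2", "pin2")],
    "net_gnd":  [("U1", "pin4"), ("U2", "pin4"), ("U3", "pin2")],
}

def connected(netlist, pin_a, pin_b):
    """True if the two pins share any net."""
    return any(pin_a in pins and pin_b in pins for pins in netlist.values())

assert connected(netlist, ("U1", "pin3"), ("U3", "pin1"))      # same clock net
assert not connected(netlist, ("U1", "pin8"), ("U3", "pin1"))  # different nets
```

A real netlist for a modern chip holds millions of such entries, which is why it stays a machine-to-machine format.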
Floor Planning
After a chip's circuit design is created, it's time to start getting physical. Because the ultimate goal is to produce film, which will then be used to fabricate a chip, there comes a point when the design ceases being an abstract schematic or HDL description and starts to take on a real shape. It's the point where architecture becomes floor planning. In fact, that's what it's called: chip floor planning.
Here a whole new software tool comes out of the engineer's toolbox. A floor-planning program takes the netlist, counts the number of electronic functions and features that are in it, tallies up the wires needed to connect it all, and estimates how much silicon real estate the chip will cover. Naturally, the more complex the chip design, the bigger the chip will be. However, other factors influence the size of the chip, too, such as precisely what company will be manufacturing the chip and how fast it needs to run. (Slow-running chips can sometimes be packed more densely than high-performance chips, saving silicon area and cost.) Many factors affecting a chip's size and shape are hard to predict, even for the engineers who designed it. If a certain portion of the chip needs to connect to all the other parts of the chip, all those wires, tiny as they are, will take up space and make the chip bigger.
The real job of the floor-planning program is to find the optimal arrangement for all the parts of the chip. Which circuits should be near which other circuits? What's the best way to shorten the wires? How will electricity be distributed among all the portions of the chip? Are there any sensitive radio receivers or optical sensors on the chip that need to be isolated from other areas? Of course, the floor planner also needs to pack this all into a rectangular (and ideally square) chip design. Square chips fit best on a round silicon wafer, yielding more useful chips per wafer and lowering overall manufacturing costs.
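A toy sketch captures the floor planner's central question: of all the ways to arrange the blocks, which one shortens the wires the most? The block and net names below are invented, and real tools obviously cannot try every arrangement the way this miniature does; they rely on heuristics, because the number of arrangements grows factorially with the number of blocks.

```python
# Toy floor planner: place 4 named blocks on a 2x2 grid so that the
# total Manhattan wire length over all nets is minimized.
from itertools import permutations

blocks = ["cpu", "cache", "io", "ram"]
slots = [(0, 0), (1, 0), (0, 1), (1, 1)]            # 2x2 grid positions
nets = [("cpu", "cache"), ("cpu", "ram"), ("io", "ram")]

def wirelength(placement):
    """Sum of Manhattan distances across every net."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in nets)

# Brute force: try all 24 assignments of blocks to slots.
best = min((dict(zip(blocks, p)) for p in permutations(slots)),
           key=wirelength)
print(best, wirelength(best))   # best plan puts every connected pair adjacent
```

Here the optimum places each connected pair of blocks side by side, for a total wire length of 3 grid units; a placement that strands "cpu" diagonally from "ram" would pay for it in extra wire.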
Place and Route
After the chip has been floor-planned (a new verb coined by chip engineers), it's time for the detail work: routing all the tiny wires that connect the chip's various parts. Floor-planning blocks out the rough shape of the chip and organizes its major components. Place and route (sometimes shortened to P&R) works out the messy details of exactly where every transistor, capacitor, wire, and resistor actually goes. It is as detailed a blueprint of the chip as will ever exist. If all goes well, place and route is the last step in designing the chip before its creators hand it off to be manufactured.
Unfortunately, very little often goes well when a chip is placed and routed for the first time. Place and route software is horribly expensive because its task is so difficult. These programs must manage millions of details, all interconnected with one another, and find the optimal physical arrangement of the pieces without disturbing any of the connections among them. Consider the child's puzzle of the three houses and three utilities shown in Figure 3.3.
Figure 3.3 In this classic puzzle, you have to draw lines connecting the three houses (above) with each of the three utility companies (below). The only rule is that you cannot cross over a line you've already drawn.
Can you draw a line from each of the three houses to each of the three utilities without crossing over any line? You can make the lines as long and twisty and roundabout as you like; you just can't cross over any of them. Take a look at Figure 3.4.
Figure 3.4 Here we've partially completed the puzzle, connecting two of the houses. But the third one presents some problems. How to connect the last few lines without crossing another line? This problem is similar to what all chip designers face on a much larger scale.
If you don't solve it after the first few days, don't feel too bad. The task has been mathematically proven to be impossible. To connect all the houses you'd need to change the rules and use two sheets of paper to build "overpasses" over some of your lines. Like freeway overpasses built over roads below them, coming off the paper and working in three dimensions is the only solution to the problem.
Now imagine the same puzzle with 1 million houses and 1 million utilities, and you begin to get an idea of the problem facing place and route programs. They will take hours, and often days, working on the problem of how to connect all the parts of a chip without dropping a single connection or making one wrong connection (crossed lines). They have to do all this while preserving the floor plan generated by the previous EDA tool.
Granted, place and route tools have more than two layers to work with, but the task is still not easy. Typical chips will have from four to seven "metal layers" used for routing wires. Adding more layers would obviously ease the routing congestion, but it also adds cost. Each additional metal layer adds about a day to the processing time for a chip, slowing down the production line. Each layer also adds hundreds of dollars to the cost of each silicon wafer, which must be divided among the chips that are on it. Throwing money (and time) at the problem will solve it, but not in an economically pleasing manner.
What typically happens after the first bout of placing and routing is that the EDA tool figuratively throws up its hands and says it cannot complete the task. There might be too many wire connections and not enough metal layers on which to route them. Or different portions of the chip might be too far apart to connect them efficiently, a sign that the floor-planning software has not done a good job. Furthermore, the engineering team might impose constraints on the place and route program, prohibiting it from routing any wire that would exceed a certain length, for example. In the world of chip design, if the chip successfully passes place and route the first time, you weren't trying hard enough.
The next step depends on the attitude of the engineering team, the company's goals, and the time they feel they have to finish the job. They might resubmit the chip design to the floor-planning tool and hope (or specify) that it gives them different results. They might decide to loosen some of the restrictions they placed on the place and route program. They might decide to spend the extra money on an additional metal layer to ease overcrowding. Or they might go all the way back to the beginning and change some of their original design, eliminating features or connecting them in different ways. Whatever the choice, it means repeating some or all of the steps leading up to place and route, and it usually means many sleepless nights bent over a flickering computer screen, waiting for an encouraging result.