Do We Still Need x86?
A little over twenty years ago, IBM noticed that it was missing out on the desktop market and rushed a machine to market built from commodity parts. The CPU that IBM chose was a cheap part called the 8088. While this was a 16-bit CPU, it had an 8-bit external bus, allowing IBM to cut corners by using cheap 8-bit components on the motherboard.
Since IBM didn’t want to be held over a barrel by the CPU supplier, Intel, a second source was required. Intel licensed its designs to AMD, which built compatible chips for years. Since then, AMD and Intel, along with a few other intermittent players, have been pushing chips compatible with the 8088. They added floating-point instructions and protected-mode segmentation, then 32-bit support with paged and segmented memory, vector instructions, and eventually 64-bit support.
Everyone loves to hate x86, but people keep buying it. Why? Two reasons. The first is that it’s cheap. A lot of people buy x86, so it has economies of scale, which means that it’s cheap, which means that a lot of people buy it....
The second reason is backward compatibility. In the ’90s, Microsoft supported Windows NT on a number of platforms, including PowerPC, MIPS, and Alpha. For a long time, the fastest Windows machine you could buy used an Alpha CPU. Not many people bought them, even though these machines were fast and had good price/performance. Why? Because customers wanted to run their existing x86 applications, and these machines couldn’t do that natively.
A few things have changed since then. The first is the rise of Free Software. It’s now possible, although not trivial, to run nothing but Free Software on machines in many configurations. Since you have the source code, it’s generally pretty easy to move between CPU architectures. I run many of the same applications on a PowerPC Mac and an UltraSPARC desktop as I do on an x86 machine. Writing code that works on multiple architectures isn’t very difficult, and for a popular application there’s always someone who wants to run it on something obscure and is willing to submit patches that help with architecture independence.
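To make that concrete, here is a small sketch in C of what architecture-independent code can look like: it uses fixed-width types and an explicit byte order, so it assumes nothing about the host’s word size or endianness and the same source builds and behaves identically on x86, PowerPC, or SPARC. The helper names and the big-endian on-disk format are invented for the example, not taken from any particular project.

/* A minimal sketch of portable C: fixed-width types and explicit byte
 * order, so nothing depends on the CPU the code happens to run on.
 * The helper names and the data format are illustrative only. */
#include <stdint.h>
#include <stdio.h>

/* Write a 32-bit value as big-endian bytes, whatever the host order is. */
static void write_u32_be(FILE *out, uint32_t value)
{
    uint8_t bytes[4] = {
        (uint8_t)(value >> 24), (uint8_t)(value >> 16),
        (uint8_t)(value >> 8),  (uint8_t)value
    };
    fwrite(bytes, 1, sizeof bytes, out);
}

/* Read it back the same way; the result is the same on any architecture. */
static uint32_t read_u32_be(FILE *in)
{
    uint8_t bytes[4];
    if (fread(bytes, 1, sizeof bytes, in) != sizeof bytes)
        return 0;
    return ((uint32_t)bytes[0] << 24) | ((uint32_t)bytes[1] << 16) |
           ((uint32_t)bytes[2] << 8)  |  (uint32_t)bytes[3];
}

int main(void)
{
    FILE *f = tmpfile();
    if (f == NULL)
        return 1;
    write_u32_be(f, 0xCAFEBABEu);
    rewind(f);
    printf("read back: 0x%08X\n", (unsigned)read_u32_be(f));
    fclose(f);
    return 0;
}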
The second thing to happen is the rise of bytecode—intermediate code that’s translated into native code at runtime. Java is the most obvious example. If your applications all use Java, you can swap out your CPU, kernel, and JVM—and not even need to recompile your applications. Increasingly, languages such as Python and Ruby are being used in large projects, and these languages have the same property. Even Microsoft is on the bytecode bandwagon, with its Common Language Runtime. In this case, processor independence is the goal—not on the desktop, but in the pocket. Windows Mobile runs on several architectures, and applications that work on all of them without recompiling are a significant advantage.
The final thing is the rise of emulation. The latest crop of emulators allows the use of native system calls and even libraries, so you only need to emulate the instructions executed directly by the application. This design lets you run foreign applications at 50% of native speed, or even faster. Or, put another way, as fast as they ran on the previous generation of processors. If you’re upgrading to get better performance from legacy (closed source) applications, switching architectures is a problem. If you’re upgrading to get new applications to run faster, but still need to run legacy applications, then it might not be a problem.
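To illustrate the idea, here is a toy sketch in C, not any real emulator’s design: made-up guest instructions are interpreted one at a time, but when the guest asks for output, the interpreter hands the value straight to the host’s native printf instead of emulating a foreign C library. The opcodes and their encoding are invented for the example.

/* Toy sketch of user-mode emulation: invented guest instructions are
 * interpreted, but "system" requests call native host code directly,
 * so only the application's own instructions pay the emulation cost. */
#include <stdint.h>
#include <stdio.h>

enum {            /* invented guest opcodes */
    OP_LOAD_IMM,  /* reg, value: load a constant into a register */
    OP_ADD,       /* dst, src:  dst += src */
    OP_PRINT,     /* reg: hand the value to the host's native printf */
    OP_HALT
};

int main(void)
{
    int32_t regs[4] = {0};

    /* A tiny guest program: compute 2 + 3 and use the host for I/O. */
    const int32_t program[] = {
        OP_LOAD_IMM, 0, 2,
        OP_LOAD_IMM, 1, 3,
        OP_ADD,      0, 1,
        OP_PRINT,    0,
        OP_HALT
    };

    for (size_t pc = 0;;) {
        switch (program[pc]) {
        case OP_LOAD_IMM:
            regs[program[pc + 1]] = program[pc + 2];
            pc += 3;
            break;
        case OP_ADD:
            regs[program[pc + 1]] += regs[program[pc + 2]];
            pc += 3;
            break;
        case OP_PRINT:
            /* Native call: no foreign libc is emulated here. */
            printf("guest says: %d\n", regs[program[pc + 1]]);
            pc += 2;
            break;
        case OP_HALT:
            return 0;
        default:
            return 1;
        }
    }
}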