Is Computer Science Dying?
- Computer Science and Telescopes
- Computer Scientists Can't Program!
- What Is It Good For?
In the late 1990s, during the first dotcom bubble, a computer science degree was widely seen as a quick way of making money. Venture capitalists were throwing money at the craziest schemes, just because they happened to involve the Internet. While not entirely grounded in fact, this trend led to a perception that anyone walking out of a university with a computer science degree would immediately find his pockets full of venture capital funding.
Then came the inevitable crash, and suddenly there were far more IT professionals than IT jobs. Many of these people had entered the industry just to make a quick buck, but quite a few were competent people who now found themselves unemployed. This situation didn't do much for the perception of computer science as an attractive degree scheme.
Since the end of the first dotcom bubble, we've seen a gradual decline in the number of people applying for computer science degrees. In the UK, many departments were able to offset the fall in local applicants by attracting more overseas students, particularly from Southeast Asia, by dint of being considerably cheaper than American universities for those wishing to study abroad. This only slowed the drop, however, and some people are starting to ask whether computer science is dying.
Computer Science and Telescopes
Part of the problem is a lack of understanding of exactly what computer science is. Even undergraduates accepted into computer science courses generally have only the broadest idea of what the subject entails. It’s hardly surprising, then, that people would wonder if the discipline is dying.
Even among those in computing-related fields, there’s a general feeling that computer science is basically a vocational course, teaching programming. In January 2007, the British Computer Society (BCS) published an article by Neil McBride of De Montfort University, entitled "The Death of Computing." Although the content was of a lower quality than the average Slashdot troll post (which at least tries to pretend that it’s raising a valid point) and convinced me that I didn’t want to be a member of the BCS, it was nevertheless circulated quite widely. This article contained choice lines such as the following: "What has changed is the need to know low-level programming or any programming at all. Who needs C when there’s Ruby on Rails?"
Who needs C? Well, at least those people who want to understand something of what's going on when a Ruby on Rails application runs. An assembly language or two would do equally well. The point of an academic degree, as opposed to a vocational qualification, is to teach understanding rather than skills, a point sadly lost on Dr. McBride when he penned his article.
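To make that point concrete, consider roughly what a high-level language does behind the scenes when a script appends to a string. The C sketch below is purely illustrative; it is not taken from the Ruby runtime or from Rails, and every name in it is invented for the example. What it shows is the kind of allocation, growth, and copying work that a high-level framework quietly performs on the programmer's behalf.

```c
/* Illustrative sketch only: roughly the kind of bookkeeping a high-level
 * string append hides. The names here are invented for the example;
 * real language runtimes are considerably more sophisticated. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    char  *data;     /* heap-allocated character buffer */
    size_t length;   /* bytes currently in use (excluding terminator) */
    size_t capacity; /* bytes allocated */
} buffer;

/* Append src to buf, doubling the allocation whenever it runs out of room. */
static void buffer_append(buffer *buf, const char *src)
{
    size_t extra = strlen(src);
    while (buf->length + extra + 1 > buf->capacity) {
        buf->capacity = buf->capacity ? buf->capacity * 2 : 16;
        buf->data = realloc(buf->data, buf->capacity);
        if (buf->data == NULL)
            exit(EXIT_FAILURE);
    }
    memcpy(buf->data + buf->length, src, extra + 1); /* copy, including '\0' */
    buf->length += extra;
}

int main(void)
{
    buffer b = {0};
    buffer_append(&b, "Hello, ");
    buffer_append(&b, "world");
    printf("%s (%zu bytes used, %zu allocated)\n", b.data, b.length, b.capacity);
    free(b.data);
    return 0;
}
```

A Rails developer never has to write this code, but knowing that something like it is happening underneath is exactly the sort of understanding the degree is supposed to provide.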
In attempting to describe computer science, Edsger Dijkstra claimed, "Computer science is no more about computers than astronomy is about telescopes." I like this quote, but it’s often taken in the wrong way by people who haven’t met many astronomers. When I was younger, I was quite interested in astronomy, and spent a fair bit of time hanging around observatories and reading about the science (as well as looking through telescopes). During this period, I learned a lot more about optics than I ever did in physics courses at school. I never built my own telescope, but a lot of real astronomers did, and many of the earliest members of the profession made considerable contributions to our understanding of optics.
There’s a difference between a telescope builder and an astronomer, of course. A telescope builder is likely to know more about the construction of telescopes and less about the motion of stellar bodies. But both will have a solid understanding of what happens to light as it travels through the lenses and bounces off the mirrors. Without this understanding, astronomy is very difficult.
The same principle holds true for computer science. A computer scientist may not fabricate her own integrated circuits, and may not write her own compiler or operating system. In the modern age, these things are generally too complicated for a single person to build to a standard where the result can compete with off-the-shelf components. But the computer scientist certainly will understand what's happening in the compiler, operating system, and CPU when a program is compiled and run.
A telescope is an important tool to an astronomer, and a computer is an important tool for a computer scientist—but each is merely a tool, not the focus of study. For an astronomer, celestial bodies are studied using a telescope. For a computer scientist, algorithms are studied using a computer.
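As a small illustration of studying an algorithm with a computer, the toy experiment below counts how many array elements linear and binary search inspect as the input grows. It's a sketch rather than a rigorous benchmark, but running it makes the linear-versus-logarithmic behaviour that the theory predicts directly observable.

```c
/* Toy experiment: count how many array elements linear and binary search
 * inspect as the input grows, and watch the linear versus logarithmic
 * behaviour predicted by the theory appear in practice. */
#include <stdio.h>
#include <stdlib.h>

static long linear_probes(const int *a, long n, int key)
{
    long probes = 0;
    for (long i = 0; i < n; i++) {
        probes++;
        if (a[i] == key)
            break;
    }
    return probes;
}

static long binary_probes(const int *a, long n, int key)
{
    long probes = 0, lo = 0, hi = n - 1;
    while (lo <= hi) {
        long mid = lo + (hi - lo) / 2;
        probes++;
        if (a[mid] == key)
            break;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return probes;
}

int main(void)
{
    for (long n = 1000; n <= 1000000; n *= 10) {
        int *a = malloc(n * sizeof *a);
        if (a == NULL)
            return EXIT_FAILURE;
        for (long i = 0; i < n; i++)
            a[i] = (int)i;          /* sorted input, values 0..n-1 */
        int key = (int)(n - 1);     /* worst case for linear search */
        printf("n = %7ld   linear: %7ld probes   binary: %2ld probes\n",
               n, linear_probes(a, n, key), binary_probes(a, n, key));
        free(a);
    }
    return 0;
}
```

The program is the instrument here, not the object of study; the object of study is the behaviour of the two algorithms.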
Software and hardware are often regarded as being very separate concepts. This is a convenient distinction, but it’s not based on any form of reality. The first computers had no software per se, and needed to be rewired to run different programs. Modern hardware often ships with firmware—software that’s closely tied to the hardware to perform special-purpose functions on general-purpose silicon. Whether a task is handled in hardware or software is of little importance from a scientific perspective. (From an engineering perspective, there are tradeoffs among cost, maintenance, and speed.) Either way, the combination of hardware and software is a concrete instantiation of an algorithm, allowing it to be studied.
As with other subjects, there are a lot of specializations within computer science. I tend to view the subject as the intersection of three fields:
- Mathematics
- Engineering
- Psychology
At the very mathematical end are computer scientists who study algorithms without the aid of a computer, purely in the abstract. Closer to engineering are those who build large hardware and software systems. In between are the people who use formal verification tools to construct these systems.
A computer isn't much use without a human instructing it, and this is where the psychology becomes important. Computers need to interact with humans a lot, and neither group is really suited to the task. The reason that computers have found such widespread use is that they perform well in areas where humans perform poorly (and vice versa). Finding ways of describing problems that both humans and computers can understand is the role of the human-computer interaction (HCI) subdiscipline within computer science, which generally sits closest to psychology.
HCI isn’t the only part of computer science related to psychology. As far back as 1950, Alan Turing proposed the Turing Test as a method of determining whether an entity should be treated as intelligent.
It's understandable that people who aren't directly exposed to computer science would miss the breadth of the discipline, associating it with something more familiar. One solution proposed for this lack of understanding is renaming the subject "informatics." In principle, this is a good idea, but the drawback is that it's very difficult to describe someone as an "informatician" with a straight face.