Rise of the Type Theorists
One debate as old as programming languages has to do with when a type should be assigned to a value.
Most modern CPUs treat memory as a huge blob of untyped bytes, while registers hold either integer or floating-point values (a few further differentiate between pointers and integers). This means it's up to the program to determine the type of a value before loading it into a register. For example, if a value is a 16-bit integer, you need to use a 16-bit load operation with an integer register as the destination operand.
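To make this concrete, here is a minimal Java sketch that uses a ByteBuffer to stand in for raw memory (assuming little-endian byte order; the class name and the particular byte values are just for illustration). The bytes themselves carry no type; the program chooses one at load time:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class UntypedBytes {
    public static void main(String[] args) {
        // Four raw bytes; memory itself records no type.
        byte[] blob = {0x00, 0x00, (byte) 0x80, 0x3f};
        ByteBuffer buf = ByteBuffer.wrap(blob).order(ByteOrder.LITTLE_ENDIAN);

        float asFloat = buf.getFloat(0); // read as a 32-bit float: 1.0
        int asInt = buf.getInt(0);       // same bytes as a 32-bit int: 1065353216
        short asShort = buf.getShort(0); // a 16-bit load sees only two bytes: 0

        System.out.println(asFloat + " " + asInt + " " + asShort);
    }
}
```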
There are two general ways to make this decision. The simplest to implement is the typed-variable mechanism, usually called static typing (and often, somewhat loosely, strong typing), in which each symbolic name for a value has an associated type.
With typed-variable languages, the compiler can determine the type of every variable at compile time. This approach is very popular with compiler writers because it makes their job a lot easier.
Within this family, there are explicitly and implicitly typed languages. The former, such as Java, require the programmer to assign a type to each variable.
The latter, such as Haskell, determine the type from how a value is used. For example, a variable that stores the result of a function returning an integer will be assumed to be an integer.
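A minimal Java sketch of both styles (the sum helper is hypothetical, and var requires Java 10 or later; note that Java's var is only a limited, local form of the whole-program inference Haskell performs):

```java
import java.util.List;

public class Typing {
    // Explicit: the parameter type and the return type are written out.
    static int sum(List<Integer> xs) {
        int total = 0;
        for (int x : xs) total += x;
        return total;
    }

    public static void main(String[] args) {
        int explicit = sum(List.of(1, 2, 3)); // type spelled out by the programmer

        // Implicit: no annotation, but because sum() returns an int,
        // 'inferred' is an int, still fixed at compile time.
        var inferred = sum(List.of(4, 5, 6));

        System.out.println(explicit + " " + inferred); // 6 15
    }
}
```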
At the other extreme are languages such as Smalltalk and Lisp, which use the typed-value approach, usually called dynamic typing (and sometimes, less precisely, loose or weak typing). They keep track of types at runtime, which can make programs considerably simpler because it's possible to write very generic code. Typed-variable languages typically recover this genericity with a template mechanism or through parametric polymorphism.
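Here is one way a typed-variable language recovers that genericity. This sketch uses Java's parametric polymorphism (generics), with a hypothetical first helper; C++ would use templates instead:

```java
import java.util.List;

public class Generic {
    // One definition serves every element type; the compiler checks each
    // call site at compile time, so no runtime type tag is consulted.
    static <T> T first(List<T> xs) {
        return xs.get(0);
    }

    public static void main(String[] args) {
        String s = first(List.of("a", "b")); // here T = String
        Integer n = first(List.of(1, 2, 3)); // here T = Integer
        System.out.println(s + " " + n);     // a 1
    }
}
```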
The downside of this flexibility is runtime overhead. Before you can do any work on a value, you have to determine its type.
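The cost is roughly what this hand-rolled Java sketch pays: every value travels as an untyped Object, and a tag check precedes each operation. A real typed-value runtime stores the tag more efficiently, but still checks it at runtime; the add helper here is hypothetical:

```java
public class DynamicAdd {
    // Every value arrives as an untyped Object; the type check is
    // repeated on every call rather than done once at compile time.
    static Object add(Object a, Object b) {
        if (a instanceof Integer && b instanceof Integer) {
            return (Integer) a + (Integer) b;
        }
        if (a instanceof String && b instanceof String) {
            return (String) a + (String) b;
        }
        throw new IllegalArgumentException("unsupported operand types");
    }

    public static void main(String[] args) {
        System.out.println(add(1, 2));     // 3
        System.out.println(add("a", "b")); // ab
    }
}
```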
Next week, in the second article in this series, we will look at how languages evolved to deal with the needs of parallel processing.