Some authors have used the terms “calculator” and “computer” interchangeably. Following the more common practice, I reserve “computer” to cover only a special class of calculators. Given this restricted usage, a good way to introduce computers is to contrast them to their close relatives, generic calculators.
A calculator is a computing mechanism made of four kinds of (appropriately connected) components: input devices, output devices, memory units, and processing units. In this section, I only discuss non-programmable calculators. So-called “programmable calculators” are, from a functional perspective, special-purpose computers with small memory, and are subsumed within the next section.5
Input devices of calculators receive two kinds of inputs from the environment: data for the calculation and a command that determines which operation needs to be performed on the data. A command is an instruction made of only one symbol, which is usually inserted in the calculator by pressing an appropriate button. Calculators’ memory units hold the data, and possibly intermediate results, until the operation is performed. Calculators’ processing units perform one operation on the data. Which operation is performed depends only on which command was inserted through the input device. After the operation is performed, the output devices of calculators return the results to the environment. The results returned by calculators are computable functions of the data. Hence, calculators—unlike the sieves and threshing machines mentioned by Churchland and Sejnowski—are genuine computing mechanisms. However, the computing power of calculators is limited by three important factors.
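The cycle just described (data in, one command in, one operation performed, result out) can be sketched as a toy model. The operation set and the Python framing below are my own illustration, not drawn from the source:

```python
# Toy model of a non-programmable calculator: it accepts data and a
# single one-symbol command, performs exactly one primitive operation
# selected by that command, and returns the result.
# (Illustrative sketch; the operation set is my own invention.)

PRIMITIVE_OPERATIONS = {   # fixed at design time; cannot be augmented
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
}

def calculate(data, command):
    """Perform the one operation selected by `command` on `data`."""
    operation = PRIMITIVE_OPERATIONS[command]  # command selects the operation
    return operation(*data)                    # one operation, then stop

print(calculate((2, 3), "+"))   # → 5
```

On this sketch, the dictionary plays the role of the processing unit's fixed repertoire, and the command merely selects which member of that repertoire is applied.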
First, (ceteris paribus) a calculator’s result is the value of one of a finite number of functions of the data, and the set of those functions cannot be augmented (e.g., by adding new instructions or programs to the calculator). The finite set of functions that can be computed by a calculator is determined by the set of primitive operations on strings that can be performed by the processing unit. Which of those functions is computed at any given time is determined by the command inserted through the input device, which puts the calculator in one of a finite number of initial states, each corresponding to one of the functions the calculator can compute. In short, calculators (in the present sense) are not programmable.
Second, a calculator performs only one operation on its data, after which it outputs the result and stops. A calculator has no provision for performing several of its primitive operations in a specified order, so as to follow an algorithm automatically.6 In other words, a calculator has no control structure besides the insertion of commands through the input device.7 The operations that a calculator can compute on its data can be combined in sequences, but only by inserting successive commands through the input device after each operation is performed.
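This limitation can be made vivid with a sketch: lacking any internal control structure, the hypothetical calculator below can only be made to follow an algorithm by a user who inserts one command after another, carrying each intermediate result forward. (The operation set and names are my own illustration, not the source's.)

```python
# A non-programmable calculator has no control structure: each primitive
# operation must be selected by an externally inserted command, and the
# machine stops after every operation. (Illustrative sketch.)

PRIMITIVE_OPERATIONS = {
    "+": lambda a, b: a + b,
    "*": lambda a, b: a * b,
}

def calculate(data, command):
    """One command in, one operation performed, one result out."""
    return PRIMITIVE_OPERATIONS[command](*data)

# To compute (2 + 3) * 4, the user must insert two successive commands,
# feeding the intermediate result back in as data:
step1 = calculate((2, 3), "+")       # user presses "+": yields 5
result = calculate((step1, 4), "*")  # user presses "*": yields 20
print(result)   # → 20
```

The sequencing here lives entirely in the user's head (the two successive calls), not in the machine, which is precisely the point of the contrast drawn above.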
Finally, the range of values that calculators can compute is limited by the size of their memory and input and output devices. Typically, calculators’ memories are of fixed size and cannot be increased in a modular fashion. Also, calculators’ input and output devices only take data and deliver results of fixed size. Hence, calculators can only operate within the size of those data and results, and cannot outrun their fixed memory capacity.
As a consequence of these limitations, calculators lack computers’ most interesting functional properties: calculators have no “virtual memory” and do not support “complex notations” and “complex operations.” In short, calculators have no “functional hierarchy.” (These terms are explained in the next section.)
Calculators are computationally more powerful than simpler computing mechanisms, such as logic gates or arithmetic-logic units. Nevertheless, the computing power of calculators is limited in a number of ways. Contrasting these limitations with the power of computers sheds light on why computers are so special and why computers, rather than calculators, have been used as a model for the brain.
Like a calculator, a computer is made of four types of components: input devices, output devices, memory units, and processing units. The processing units of modern computers are called processors and can be analyzed as a combination of datapaths and control units. A schematic representation of the functional organization of a modern computer (without input and output devices) is shown in Figure 1.
Figure 1. The main components of a computer and their functional relations.
The difference between calculators and computers lies in the functional properties and functional organization of their components. Computers’ processors are capable of branching behavior and can be set up to perform any number of their primitive operations in any order (until they run out of memory). Computers’ memory units are orders of magnitude larger than those of calculators, and often they can be increased in a modular fashion if more storage space is required. This allows today’s computers to take in data and programs, and yield results, of a size that has no well-defined upper bound. So computers, unlike calculators, are programmable, capable of branching, and capable of taking data and yielding results of “unbounded” size. Because of these characteristics, today’s computers are called programmable, stored-program, and computationally universal. (These terms are defined more explicitly below.)
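The contrast just drawn (a processor whose control structure sequences and branches among primitive operations automatically) can be illustrated with a minimal sketch; the instruction set, register names, and Python framing are my own invention, not the source's:

```python
# Minimal sketch of a processor with a control structure: instructions
# execute in order unless a conditional jump redirects control.
# (Illustrative; the instruction set is my own invention.)

def execute(program, registers):
    """Run `program` (a list of instructions) over named registers."""
    pc = 0                                   # program counter: control state
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":                      # set reg, value
            registers[args[0]] = args[1]
        elif op == "mul":                    # mul reg_a, reg_b: reg_a *= reg_b
            registers[args[0]] *= registers[args[1]]
        elif op == "dec":                    # dec reg: reg -= 1
            registers[args[0]] -= 1
        elif op == "jump_if_pos":            # branch if reg > 0
            if registers[args[0]] > 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# Factorial of n by repeated multiplication: a sequence of primitive
# operations strung together automatically, with a conditional branch,
# and no user intervention between operations.
regs = execute(
    [("set", "acc", 1),
     ("mul", "acc", "n"),        # acc *= n
     ("dec", "n"),               # n -= 1
     ("jump_if_pos", "n", 1)],   # loop back to "mul" while n > 0
    {"n": 5},
)
print(regs["acc"])   # → 120
```

Here the branching behavior (the conditional jump) is what lets the machine string together an unbounded number of its primitive operations from a finite stored program, which no non-programmable calculator can do.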
If we were to define “computer” as something with all of the characteristics of today’s computers, we would certainly obtain a robust, observer-independent notion of computer. But we would also lose the ability to distinguish among the many classes of computing mechanisms that lack some of those characteristics, yet are significantly more powerful than ordinary calculators, are similar to modern computers in important ways, and were often called computers when they were built. Because of this, I will follow the standard practice of using the term “computer” to encompass more than today’s computers, while introducing distinctions among different classes of computers. Ultimately, what matters for our understanding of computing mechanisms is not how restrictively we use the word “computer,” but how careful and precise we are in classifying computing mechanisms based on their relevant functional properties.
Until at least the 1940s, the term “computer” was used to designate people whose job was to perform calculations, usually with the help of a calculator or abacus. Unlike the calculators they used, these computing humans could string together a number of primitive operations (each of which was performed by the calculator) in accordance with a fixed plan, or algorithm, so as to solve complicated problems defined over strings of symbols. Any machine with an analogous ability may also be called a computer.
To a first approximation, a computer is a computing mechanism with a control structure that can string together a sequence of primitive operations, each of which can be performed by the processing unit, so as to follow an algorithm or pseudo-algorithm (i.e., an algorithm defined over finitely many inputs). The number of operations that a control structure can string together in a sequence, and the complexity of the algorithm that a control structure can follow (and consequently of the problem that can be solved by the machine), are obviously matters of degree. For instance, some machines that were built in the first half of the 20th century—such as the IBM 601—could string together a handful of arithmetical operations. They were barely more powerful than ordinary calculators, and a computing human could easily do anything that they did. Other machines—such as the Atanasoff-Berry Computer (ABC)—could perform long sequences of operations on their data, and a computing human could not solve the problems that they solved without taking a prohibitively long amount of time.
The ABC, which was completed in 1942 and was designed to solve systems of up to 29 linear algebraic equations in 29 unknowns, appears to be the first machine that was called a computer by its inventor.8 So, a good cut-off point between calculators and computers might be the one between machines like the IBM 601, which can be easily replaced by computing humans, and machines like the ABC, which outperform computing humans at solving at least some problems.9 The exact boundary is best left vague. What matters to understanding computing mechanisms is not how many machines we honor with the term “computer,” but that we identify functional properties that make a difference in computing power, and that whether a machine possesses any of these functional properties is a matter of fact, not interpretation. We have seen that one of those functional properties is the ability to follow an algorithm defined in terms of the primitive operations that a machine’s processing unit(s) can perform. Other important functional properties of computers are discussed in the rest of this section.
Computers and their components can also be classified according to the technology they use, which can be mechanical, electro-mechanical, electronic, etc., as well as according to other characteristics (size, speed, cost, etc.). These differences don’t matter for our purposes, because they don’t affect which functions can in principle be computed by different classes of computing mechanisms. However, it is important to point out that historically, the introduction of electronic technology, and the consequent enormous increase in computation speed, made a huge difference in making computers practically useful.