A stored-program computer has an internal memory that can store strings of symbols. Programs are stored in memory as lists of instructions placed in appropriate memory units. A stored-program computer also contains at least one processor. The processor contains a tracking mechanism (program counter) that allows the processor to retrieve instructions from a program in the appropriate order. The processor can extract strings of symbols from memory and copy them into its internal registers (if they are data) or execute them on the data (if they are instructions). After an instruction is executed, the tracking mechanism allows the control to retrieve the next instruction until all the relevant instructions are executed and the computation is completed.
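The fetch-execute cycle just described can be made concrete with a minimal sketch, assuming a toy instruction set invented here purely for illustration (not any real machine's): instructions and data share one memory, and a program counter serves as the tracking mechanism.

```python
def run(memory):
    acc = 0          # a single internal register of the processor
    pc = 0           # program counter: the tracking mechanism
    while True:
        op, arg = memory[pc]      # fetch the next instruction from memory
        pc += 1                   # advance the tracking mechanism
        if op == "LOAD":          # copy a symbol string from memory into the register
            acc = memory[arg]
        elif op == "ADD":         # execute an operation on the data
            acc += memory[arg]
        elif op == "STORE":       # copy the register's content back into memory
            memory[arg] = acc
        elif op == "HALT":        # the computation is completed
            return memory

# The program occupies memory cells 0-3; the data occupy cells 4-6.
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", None), 2, 3, 0]
run(mem)
print(mem[6])  # → 5, the sum of the data in cells 4 and 5
```

Note how the same memory holds both the list of instructions and the data they operate on, which is the defining feature of the stored-program design.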
A stored-program machine need not be programmable. Some kinds of memory units are read-only: they can communicate their content to other components, but their content cannot be changed. Stored-program calculators can be built using this kind of memory. Other kinds of memory units allow symbols to be inserted into them. This opens up the possibility of inserting a program into a memory unit from outside the computer (or from inside, i.e. from other parts of the memory or from the processor), allowing a computer to be (soft) programmed and its existing programs to be modified. Once the program is in the appropriate part of the memory, the computer will execute that program on any data it gets. A programmable stored-program computer can even be set up to modify its own program, changing what it computes over time. Stored-program computers are usually designed to be soft programmable and to execute any list of instructions, including branch instructions, up to their memory limitations, which makes them computationally universal (see below). For this reason, the term “stored-program computer” is often used as a synonym for “universal computer.”
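The possibility of self-modification follows directly from instructions residing in writable memory: a program can overwrite one of its own instructions, changing what it computes mid-run. Here is a minimal sketch, again using a toy instruction set invented for illustration.

```python
def run(memory):
    pc, acc = 0, 0
    while memory[pc][0] != "HALT":
        op, arg = memory[pc]
        pc += 1
        if op == "SET":             # load a value into the register
            acc = arg
        elif op == "WRITE_INSTR":   # store the register's content into memory --
            memory[arg] = acc       # because memory is writable, this can
                                    # overwrite an instruction of the running program
        elif op == "OUT":
            print(acc)
    return memory

# Cell 2 initially holds an OUT instruction, but instruction 1 overwrites
# it with ("SET", 99) before control ever reaches it, so the program's
# behavior changes while it is running.
prog = [("SET", ("SET", 99)),   # 0: load a new instruction into the register
        ("WRITE_INSTR", 2),     # 1: write it over cell 2
        ("OUT", None),          # 2: replaced by ("SET", 99) at run time
        ("OUT", None),          # 3: prints 99
        ("HALT", None)]         # 4
final = run(prog)
```

On a read-only memory, the `WRITE_INSTR` step would be impossible, which is why stored-program calculators built on such memories are not soft programmable.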
The idea to encode instructions as sequences of symbols in the same form as the data for the computation and the idea of storing instructions inside the computer are the core of the notion of programmable, stored-program computer, which is perhaps the most fundamental aspect of modern mechanical computing.16 At this point, it should be clear that not every computing mechanism is a computer, and not every computer is a programmable, stored-program computer. In order to have a programmable, stored-program computer, one needs to have a system of symbols, a scheme for encoding instructions, a writable memory to store the instructions, and a processor that is appropriately designed to execute those instructions. Several technical problems need to be solved, such as how to retrieve the instructions from the memory in the right order and how to handle malformed instructions. These properties of computers, the problems that need to be solved to design them, and the solutions to those problems are all formulated in terms of the components of the mechanism and their functional organization. This shows how the functional account of computing mechanisms sheds light on the properties of computers.
2.3 Special-Purpose, General-Purpose, or Universal
Many old computers, which were not stored-program, were designed with specific applications in mind, such as solving certain classes of equations. Some of these machines, like the ABC, were hardwired to follow a specific algorithm. They are called special-purpose computers to distinguish them from their general-purpose successors. Special-purpose computers are still used for a number of specialized applications; for instance, computers in automobiles are special-purpose.
In the 1940s, several computers were designed and built to be programmable in the most flexible way, so as to solve as many problems as possible.17 They were called general-purpose or, as von Neumann said (von Neumann 1945), all-purpose computers. The extent to which computers are general- or all-purpose is a matter of degree, which can be evaluated by looking at how much memory they have and how easy they are to program and use for certain applications.18 General-purpose computers are programmable but need not be soft programmable, let alone stored-program.
Assuming the Church-Turing thesis, namely the thesis that every effectively computable function is computable by a Turing Machine, Alan Turing (1936-7) showed how to design universal computing mechanisms, i.e. computing mechanisms that would take any appropriately encoded program as input and respond to that program so as to compute any computable function.19 Notice that Turing’s universal computing mechanisms, universal Turing Machines, are not stored-program (see below for more discussion of Turing Machines).
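Turing's idea of universality can be sketched as a single fixed mechanism that takes any encoded machine table as input and simulates that machine on its tape. The interpreter below is such a sketch; the encoding scheme (state/symbol pairs mapping to state, write-symbol, move) is a common textbook convention, used here purely for illustration.

```python
def simulate(table, tape, state="start", pos=0, max_steps=10_000):
    """One fixed mechanism that runs any encoded Turing machine table."""
    cells = dict(enumerate(tape))         # unbounded tape; blank is "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        state, write, move = table[(state, symbol)]   # consult the encoded program
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# One possible input program: a machine table that flips every bit
# until it reaches a blank, then halts.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(simulate(flip, "0110"))  # → 1001_
```

The crucial point is that `simulate` itself never changes: feeding it a different table makes the same mechanism compute a different function, which is what universality amounts to. Note also that the table is supplied as an external input rather than stored in the simulated machine's tape, mirroring the observation that universal Turing Machines are not stored-program.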
We have seen that soft programmable computers can respond to programs of unbounded length and manipulate their inputs (of finite but unbounded length) according to the instructions encoded in the program. This does not immediately turn them into universal computers, for example because they may not have the ability to handle branch-instructions, but no one builds computers like that.20 Ordinary soft programmable computers, which are designed to execute all the relevant kinds of instructions, are universal computing mechanisms. For example, ordinary computers like our desktops and laptops are universal in this sense. Even Charles Babbage’s legendary analytical engine was universal. Universal computers can compute any Turing-computable function until they run out of memory. Because of this, the expressions “general-purpose computer” and “all-purpose computer” are sometimes loosely used as synonyms of “universal computer.”
Above, I briefly described the functional organization that allows soft-programmable computers to perform a finite number of primitive operations in response to a finite number of the corresponding kinds of instructions. Computers with this capability are computationally universal up to their memory limitations. In order to get them to execute any given program operating on any given notation, all that is needed is to encode the given notation and program using the instructions of the computer’s machine language.
In early stored-program computers, this encoding was done manually by human programmers. It was slow, cumbersome, and prone to errors. To speed up the encoding process and reduce the number of errors, early computer designers introduced ways to mechanize at least part of it, giving rise to a hierarchy of functional organizations within the stored-program computer. This hierarchy is made possible by automatic encoding mechanisms, such as operating systems, compilers, and assemblers. These mechanisms are nothing but programs executed by the computer’s processor(s), whose instructions are often written in special, read-only memories. These mechanisms generate virtual memory, complex notations, and complex operations, which make the job of computer programmers and users easier and quicker. I will now briefly explain these three notions and their functions.
When a processor executes instructions, data and instructions are functionally identified by the addresses of the memory registers that contain them. Moreover, each memory register has a fixed storage capacity, which depends on the number of memory cells it contains. Virtual memory is a way to functionally identify data and instructions by virtual addresses, which are independent of the physical location of the data and instructions, so that a programmer or user need not keep track of the physical locations of the data and instructions she uses by specifying the register address of every relevant memory location. During the execution phase, the physical addresses of the relevant memory registers are automatically generated from the virtual addresses by the compiler. Since virtual memory is identified independently of its physical location, in principle it can have unlimited storage capacity, although in practice the total number of physical symbols that a computer can store is limited by the size of its physical memory.21
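The separation between virtual and physical addresses can be sketched as a simple translation table. The page/frame scheme below is a deliberately simplified illustration of the general idea (real systems add much more machinery): the program names memory only by virtual addresses, and the translation to physical registers happens automatically.

```python
PAGE_SIZE = 4

physical_memory = [0] * 16
page_table = {0: 2, 1: 0}   # virtual page -> physical frame (toy mapping)

def translate(vaddr):
    """Map a virtual address to the physical cell that actually holds it."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]          # looked up automatically, not by the programmer
    return frame * PAGE_SIZE + offset

def store(vaddr, value):
    physical_memory[translate(vaddr)] = value

def load(vaddr):
    return physical_memory[translate(vaddr)]

store(1, 42)         # the program refers to virtual address 1...
print(load(1))       # → 42
print(translate(1))  # → 9: ...which actually lives in physical cell 9
```

Because the program only ever mentions virtual addresses, the physical layout can change (by editing `page_table`) without touching the program, which is what frees programmers from tracking physical locations.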
In analyzing how a processor executes instructions, all data and instructions must be described as binary strings (strings of bits) corresponding to the physical signals traveling through the processor, and the functional significance of these strings is determined by their location within an instruction or data string. Moreover, all data and instruction strings have a fixed length, which corresponds to the length of the memory registers that contain them. Complex notations, instead, can contain any characters from any finite alphabet, such as the English alphabet. Programmers and users can use complex notations to form data structures that are natural and easy to interpret in a convenient way (similarly to natural languages). Thanks to virtual memory, these strings can be concatenated into strings of any length (up to the computer’s memory limitations). During the execution phase, the encoding of data structures written in complex notations into data written in machine language is automatically taken care of by the functional hierarchy within the computer (operating system, compiler, and assembler).
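The reduction of a complex notation to fixed-length binary strings can be sketched as follows. ASCII is used as the encoding purely for illustration; the point is only that each humanly readable character corresponds to a fixed-length string of bits of the kind the processor manipulates.

```python
def encode(text, width=8):
    """Turn each character of a complex notation into a fixed-length bit string."""
    return ["{:0{w}b}".format(ord(ch), w=width) for ch in text]

def decode(bit_strings):
    """Recover the complex notation from the fixed-length bit strings."""
    return "".join(chr(int(b, 2)) for b in bit_strings)

words = encode("ADD")
print(words)          # → ['01000001', '01000100', '01000100']
print(decode(words))  # → ADD
```

The mnemonic "ADD" is natural and easy for a programmer to interpret; what travels through the processor is only the concatenation of fixed-length bit strings, whose functional significance depends on their position within an instruction or data string.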
The processor can only receive a finite number of instruction types corresponding to the primitive operations that the processor can execute. Complex operations are operations effectively defined in terms of primitive operations or in terms of already effectively defined complex operations. As long as the computer is universal, the Church-Turing thesis guarantees that any computable operation can be effectively defined in terms of the primitive operations of the computer. Complex operations can be expressed using a complex notation (e.g., an English word or phrase; for instance, a high-level programming language may include a control structure of the form UNTIL P TRUE DO ___ ENDUNTIL) that is more transparent to the programmers and users than a binary string would be, and placed in a virtual memory location. A high-level programming language is nothing but a complex notation that allows one to use a set of complex operations in writing programs. For their own convenience, programmers and users will typically assign a semantics to strings written in complex notations. This ascription of a semantics is often implicit, relying on the natural tendency of language users to interpret linguistic expressions of languages they understand. Thanks to virtual memory, complex instructions and programs containing them can be of any length (up to the memory limitations of the computers). During the execution of one of these programs, the encoding of the instructions into machine language instructions is automatically taken care of by the functional hierarchy within the computer.
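The expansion of a complex operation into primitives can be sketched directly. Below, an UNTIL loop of the kind mentioned above is effectively defined in terms of three primitive operations a toy machine is assumed to have (a test, a conditional branch, and an unconditional jump); the instruction names are invented for illustration, but a real compiler performs the same kind of expansion into its machine's primitives.

```python
def compile_until(test, body):
    # UNTIL test TRUE DO body ENDUNTIL expands into primitives:
    #   TEST          evaluate the condition, set the flag
    #   BRANCH_TRUE   conditional branch past the loop (primitive)
    #   <body>
    #   JUMP          unconditional jump back to the test (primitive)
    return ([("TEST", test), ("BRANCH_TRUE", len(body) + 2)]
            + body
            + [("JUMP", -(len(body) + 2))])

def run(program, state):
    """A toy processor that only knows the primitive operations."""
    pc, flag = 0, False
    while pc < len(program):
        op, arg = program[pc]
        if op == "TEST":
            flag = arg(state)
            pc += 1
        elif op == "BRANCH_TRUE":
            pc += arg if flag else 1   # relative branch when the flag is set
        elif op == "JUMP":
            pc += arg                  # relative unconditional jump
        elif op == "DEC":              # a primitive arithmetic operation
            state["n"] -= 1
            pc += 1
    return state

# Decrement n UNTIL n == 0 — the programmer writes the complex operation;
# the machine only ever executes the primitives it was compiled into.
result = run(compile_until(lambda s: s["n"] == 0, [("DEC", None)]), {"n": 3})
print(result)  # → {'n': 0}
```

Nothing in `run` knows anything about UNTIL; the complex operation exists only at the level of the notation the programmer uses, exactly as the functional hierarchy account describes.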
Notice that the considerations made in this section apply only to programmable, stored-program, universal computers. In order to obtain the wonderful flexibility of use that comes with the functional hierarchy of programming languages, one needs a very special kind of mechanism: a programmable, stored-program, universal computer.
The convenience of complex notations and the complex operations they represent, together with the natural tendency of programmers and users to assign them a semantics, makes it tempting to conclude that computers’ inputs, outputs, and internal states are identified by their content, and that computers respond to the semantic properties of their inputs and internal states. For example, both computer scientists and philosophers sometimes say that computers understand their instructions. The literal reading of these statements was discussed at length, and rejected, in Piccinini 2003 and forthcoming a. Now we are ready to clarify in what sense computers can be said to understand their instructions.
Every piece of data and every instruction in a computer has functional significance, which depends on its role within the functional hierarchy of the computer. Before a program written in a complex notation is executed, its data and instructions are automatically encoded into machine language data and instructions. Then they can be executed. All the relevant elements of the process, including the programs written in complex notation, the programs written in machine language, and the programs constituting the functional hierarchy (operating system, compiler, and assembler), can be functionally characterized as strings of symbols. All the operations performed by the processor in response to these instructions can be functionally characterized as operations on strings. The resulting kind of “computer understanding” is mechanistically explained without ascribing any semantics to the inputs, internal states, or outputs of the computer.
At the same time, data and instructions are inserted into computers and retrieved from them by programmers and users who have their own purposes. As long as the functional hierarchy is working properly, programmers and users are free to assign a semantics to their data and instructions in any way that fits their purposes. This semantics applies naturally to the data and instructions written in the complex notation used by the programmers and users, although it will need to be replaced by other semantics when the complex code is compiled and assembled into machine language data and instructions. This shows that the semantics of a computer’s inputs, outputs, and internal states is unnecessary to individuate computing mechanisms and what functions they compute.