When
I entered MIT in September 1949, the von Neumann concept of the stored program
computer was only a few years old: it was the subject of a summer school
program at the University of Pennsylvania Moore School of Electrical
Engineering in 1946. It was around 1949 that realizations of the stored program
concept went into operation at the University of Cambridge and the University
of Manchester in England.
My
introduction to computers came through two undergraduate friends: Bill Eccles,
a fellow EE, who had some experience with the IBM "card programmed
calculator" in Frank Verzuh's shop, and Ken Ralston, who entertained us
with stories of his experiences with the Whirlwind computer, then under
construction. I became a Whirlwind programmer, first to do optimization
calculations for Prof. Saunders, a visiting professor.
That
was 1954. Fifty years later much has changed: A room full of vacuum tubes has
become a tiny chip with millions of transistors. A phenomenon once limited to
research laboratories has become an industry producing commodity products that anyone
can own and use beneficially.
But
many of the fundamentals have not changed. A computer program is still a
sequence of instructions obeyed by a processor as if their effects were
accomplished one at a time. The only exception to this is the
"interruption" of the program sequence to allow the processor to pay
attention to another program, or to some input-output device calling for
service. Even our favorite programming languages deal only with the sequential
aspect of computation, and provide no means for expressing actions that require
use of program interruptions; that is left to operating system calls, or, at
best, library packages that encapsulate usage patterns of operating system facilities.
This situation leaves programming languages unable to express large
applications in a modular style such as that supported by the familiar procedure
concept in sequential programming languages.
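The point can be made concrete with a small sketch in Python (the choice of
language is arbitrary, and the program is illustrative only): the sequential
steps appear directly in the language, while concurrency enters only through a
library, threading, that packages operating system facilities.

    # Sequential computation is native to the language; concurrency is not.
    # The threading module is a library wrapper over operating system
    # facilities, exactly the situation described above.
    import threading

    account = {"balance": 0}
    lock = threading.Lock()

    def deposit():
        # The language offers no way to state that these updates may
        # interleave; safety rests on disciplined use of a library lock.
        with lock:
            account["balance"] += 1

    threads = [threading.Thread(target=deposit) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(account["balance"])  # 100, only because the lock serializes updates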
I
have seen this history from inside a major research university, as a teacher of
computer science, and as a researcher in what I like to call "computer
system architecture".
In
1960 Professor John McCarthy, now at Stanford, served on MIT's Long Range
Computation Study Group (LRCSG), whose study of the Institute's future
computing needs led to the development of time-sharing at MIT.
At
this time Prof. Fano had a vision of the Computer Utility – the concept of the
computer system as a repository for the knowledge of a community – data and
procedures in a form that could be readily shared – a repository that could be
built upon to create ever more powerful procedures, services, and active
knowledge from those already in place. Prof. Corbató's goal was to provide the kind of central computer installation
and operating system that could make this vision a reality. With funding from
DARPA, the Defense Advanced Research Projects Agency, the result was Multics.
I
am proud of the role I played in the activities that led to the construction of
Multics. From the work of the LRCSG we envisioned that the hardware for Multics
would be a symmetric multiprocessor (several processors having equal access to
several banks of main memory). To support the sharing of processor and memory
resources by independent programs running for many users at the same time, I advocated
the combination of segmentation (inspired by the Burroughs B5000 system) and
paging (inspired by the Manchester Atlas computer). This would allow segments
containing program modules and units of structured data to be shared by many
users without the need for making copies. This concept was adopted for Multics.
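The idea can be pictured with a small sketch in Python; the table layouts,
sizes, and names below are inventions of this sketch, not the actual Multics
descriptor formats. Each user's segment table maps a segment number to a page
table, and two tables that point at the same page table share one physical
copy of the segment.

    # Toy model of segmentation combined with paging (sizes and names
    # are illustrative only).
    PAGE_SIZE = 1024  # bytes per page

    # One page table per segment: page number -> physical frame number.
    editor_pages = {0: 7, 1: 3}  # a two-page shared code segment

    # Per-user segment tables: segment number -> page table. Both users
    # reference the SAME page table, so the segment exists in one copy.
    user_a = {0: editor_pages}
    user_b = {5: editor_pages}

    def translate(segment_table, seg_no, offset):
        """Map a (segment, offset) virtual address to a physical address."""
        page_table = segment_table[seg_no]
        page_no, within = divmod(offset, PAGE_SIZE)
        return page_table[page_no] * PAGE_SIZE + within

    # The same physical byte is reached under different segment numbers.
    assert translate(user_a, 0, 100) == translate(user_b, 5, 100)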
A
team, composed of Prof. Corbató, Ted Glaser,
Robert Graham, and me, spent much of the 1963-64 academic year visiting most of
the major computer manufacturers in search of a machine suited to the
project's goals; the outcome was the choice of General Electric to supply the
Multics hardware.
Rather
than work on the software operating system for Multics, I chose to do
independent research following the intellectual ideas and issues that arose
from my experience with the Multics effort, and the time-sharing system some
students and I had built using a DEC PDP-1 computer. During the 1960s Project MAC
had very generous support from DARPA, and the MIT Computer Science faculty and graduate
students could choose any topic to study, so long as it had some relation to
computing. I formed the Computation Structures Group and focused on
architectural concepts that could narrow the acknowledged gap between
programming concepts and the organization of computer hardware. I found myself
dismayed that people considered themselves to be either hardware or software
experts, yet paid little heed to how joint advances in programming and
architecture could lead to a synergistic outcome that might revolutionize
computing practice.
Nevertheless,
in the 1970s I found it easy to get government funding. The agencies were
willing to fund pretty wild ideas, and I was supported to do research on
"data flow" architecture, first by NSF and later by the DOE. This work
inspired related projects at several companies and research institutions around
the world, and earned me the Eckert-Mauchly Award in 1984.
During
the 1980s things changed. Computer Science Departments had proliferated
throughout the universities to meet the demand, primarily for programmers and
software engineers, and the faculty assembled to teach the subjects was
expected to do meaningful research. To manage the burgeoning flood of
conference papers, program committees adopted a new strategy for papers in computer
architecture: No more wild ideas; papers had to present quantitative results.
The effect was to create a style of graduate research in computer architecture
that remains the "conventional wisdom" of the community to the
present day: Make a small, innovative change to a commercially accepted design
and evaluate it using standard benchmark programs. This style has stifled the
exploration and publication of interesting architectural ideas that require
more than a modicum of change from current practice. The practice of basing evaluations
on standard benchmark codes neglects the potential benefits of architectural
concepts that need a change in programming methodology to demonstrate their
full benefit.
Today,
much of the excitement in computer science has shifted to various important
application domains: medicine, speech processing, support for advanced human
interfaces, communications, graphics, etc., and less funding is available for
academic work in the core areas: programming languages, operating systems, and
computer architecture. In fact, there are people who consider that these core
areas have reached the limit of their potential for innovation. However, this
is belied by the popularity of Java, a new language that incorporates important
features long advocated by computer science academics.
The
real gains in programmer productivity yielded by modern computer hardware are
due mainly to the increasing size of physical memories and the universal
adoption of single-user virtual memory support. Of course, major advances in
programming have resulted from ideas such as structured programming and
object-oriented design and the influence of these ideas on programming
languages. However, these advances have failed to address the issues of
high-level programming for computations using multiple processors.
Now,
the design of computer processor chips has reached an impasse. It is no longer
practical to make a faster processor by adding transistors to the logic, and it
is very expensive to further increase the clock rate of the processor. To any
hardware engineer the obvious answer is to put multiple processors on a chip,
and this is now being done. However, the way provided for processors to
cooperate with each other is ad hoc,
with little attention paid to how a beneficial methodology of parallel
programming could be supported.
In
September 1988 the MIT Laboratory for Computer Science celebrated the
twenty-fifth anniversary of Project MAC. At that event I pointed out the
limitations of conventional multiprocessor architecture:
"Yet present multiprocessors are very limited in their
effective application. Their programming tools are absurdly limited and primitive
in contrast to those of Multics. There is no automatic management of memory by
the system on behalf of its users. Moreover, within the massive research effort
now devoted to parallel architecture, hardly any effort is devoted to the
problem of improving programmability in any fundamental sense."
Some
wild ideas may be the key to a breakthrough: Functional programming; the idea
of a memory that directly supports creation and access to data objects, but
does not permit updates; hardware-supported allocation and garbage collection
of memory. They need to be seriously explored.
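A few lines of Python can suggest the flavor (the class below is a toy of my
own devising, not a description of any real machine): objects may be created
and read but never updated, and "updating" means allocating a fresh object.
The language runtime's garbage collector, standing in for the hardware support
imagined above, reclaims objects once nothing refers to them.

    # Toy model of a write-once object memory: create and read, never update.
    class Obj:
        __slots__ = ("fields",)

        def __init__(self, **fields):
            # Bypass our own __setattr__ once, at creation time only.
            object.__setattr__(self, "fields", dict(fields))

        def __getattr__(self, name):
            try:
                return self.fields[name]
            except KeyError:
                raise AttributeError(name) from None

        def __setattr__(self, name, value):
            raise TypeError("this memory does not permit updates")

    p1 = Obj(name="pat", score=10)
    p2 = Obj(name=p1.name, score=p1.score + 5)  # a new object; p1 untouched
    print(p1.score, p2.score)  # 10 15
    # When p1 becomes unreachable, the garbage collector reclaims it,
    # the role the paragraph assigns to hardware.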
At
the Project MAC anniversary I explained:
"...
the key idea ... is functional programming: getting away from the burden of
sequential programming concepts embedded in our popular programming languages
and computer architectures. The ideas to accomplish this advance exist; in fact
they have been known for some time. Yet they are not in favor. Why? They do not
fall into the current mainstream of computer science. They do not solve the
multiprocessor cache problem--instead they make it irrelevant. They do not
solve the problem of shared objects in object-oriented programming--they
eliminate the problem. They do not minimize the overhead of processor
synchronization--they make it disappear altogether."
Fifteen
years later these remarks remain valid. I concluded by saying:
"The
computer systems of today do not realize our original vision from the inception
of Project MAC. Yet the opportunity to make our dreams come true is still
there. The vision is not obsolete. It is one that will be achieved. I believe
the day will come when the ideas are widely accepted and we can move forward to
build the Computer Utility. I hope to contribute to its realization and I look
forward to enjoying its fruits."
There
are wonderful opportunities for a new generation of talent to work at the core
of a most relevant and rewarding area of intellectual effort.
Written for the 50th reunion book of the MIT class of 1953
Belmont, MA, August 2003