Highlights
“Our computers should be like our childhood,” he thinks: “an invisible foundation that is quickly forgotten but always with us, and effortlessly used throughout our lives.”
The story of Weiser’s time at PARC debunks any notion that technocratic manipulation—total surveillance and zero privacy, runaway automation, and diminished agency—is the inherent cost of living with the Internet of Things. Big Tech’s exploitative data practices and covert revenue streams were manufactured out of flagrant disregard for the philosophy that inspired the machinery.
Sometimes the desire to invent comes from a drive to answer fears and longings that never abate. Before combat and after commerce, there is the recurrent white noise of a mind conflicted, the lingering images of love, and of a loved one’s passing. A killer app can be a rewarding afterthought, a handy cover-up, a means to justify publicly this most private pursuit. “The machine,” wrote the historian Lewis Mumford, “is just as much a creature of thought as the poem.”
The beanbag chairs that Taylor famously brought into a PARC conference room—which visiting journalists often cited as symbols of CSL’s playful, hippie-leaning spirit—were really just a tool for managing conflicts. “It was impossible to leap to your feet and denounce someone from a bean bag,” PARC icon Alan Kay would explain.
A few of those pioneering researchers who spoke at the January conference had alternative prospects on their minds. Even as they were called to wax nostalgic about the personal workstation and the legacy of early desktop systems like the NLS and the Alto, they peppered their remarks with mentions of prototypes-in-process that might someday challenge the industry’s recent uptake of the mouse and keyboard, the stationary monitor, and the graphical user interface. Licklider mentioned with great interest “the idea of instrumenting the body of the user.” He said the physical keyboard was “clearly on the way out” and wagered that it would be replaced by “instrumented fingers”—future users could wear thin fingernail attachments, allowing them to select content on a screen by touching it. He believed that people would speak to computers and that the machine would be able to speak back.
The Tacit Dimension presented a whole new theory of knowledge in less than ninety pages. And while its author famously coined the term tacit knowledge (which remains a popular descriptor for learning that cannot be formally taught), his theory went deeper than that. Polanyi argued that human experience was more or less tacit by nature; all of us, at every moment, “know more than we can tell,” he wrote. Our relationship with the world around us always includes more sensory information than our conscious minds can process. According to Polanyi, this omnipresent layer of peripheral impressions is not merely white noise. All formal knowledge is rooted in, born out of, and filtered through this tacit dimension that both evades our understanding and makes understanding possible.
[Lucy] Suchman contended most computer interfaces, be they expert systems like Bluebonnet or conventional software running on a desktop, suffered from a limited capacity to access, let alone understand, “the moment-by-moment contingencies that constitute the conditions of situated interaction.” The continuous stream of sensory details that we live by—the tacit dimension, in Polanyi’s verbiage—was just a bunch of white noise to even the most sophisticated programs. The disparity between humans and machines in this critical area lay at the bottom of most user complaints. Human-machine communication, even in cutting-edge labs at PARC, was a wildly asymmetrical affair.
Now riffing off Bateson, Weiser insisted: “We are not separate from, but are inseparably reliant on, a world around us.” He went so far as to supply the audience with a new word: “horld,” meant to denote this inseparability of human and world.
His on-stage successes—his hints at PARC’s plan to create a network of “tiny,” “medium,” and “large” ubicomp interfaces—pinned him up against a question he struggled to answer back in his office: “Could I design a radically new kind of computer that could more deeply participate in the world of people?” He realized he couldn’t—at least, not yet. Weiser recalled the pickle in writing a few years later: “As I began to glimpse what such an information appliance might look like, I saw that it would be so different from today’s computer that I could not begin to understand or build it.”
Industrialized cities and towns, Weiser’s article had pointed out, came to be smothered in useful words throughout the twentieth century with “street signs, billboards, shop signs, and even graffiti. . . . Candy wrappers are covered in writing.” These texts shared a unique attribute: they communicated their messages to readers on location, at precisely the moment readers were likely to want that information.
In the same issue of Scientific American in which Weiser’s article appeared, Negroponte as well as ex-PARC–turned–Apple researchers Alan Kay and Larry Tesler had each stressed the need for interface agents in their published musings on technology’s future. Tesler, who led the development of Apple’s Newton handheld device, asserted that computing would become largely mobile by the decade’s end. Mobile users did not have a robust keyboard or mouse, and generally could not allocate the time or attention to scroll through lengthy documents on a tiny screen. Tesler’s article proposed that speaking and listening to the handheld device would eventually become the best way for people to interact with it; these “pericomputers,” in Tesler’s estimation, would at their best serve as a lightweight extension of a desktop machine that granted continuous access to its files and data.
“Personal computing,” he continued, “is the wrong idea and intimate computing is even worse.” Even though agent-oriented mobile devices dispensed with the desktop model that Weiser critiqued, he argued that agents would perpetuate the PC-era habit of making digital devices “a single locus of information,” which people might feel compelled to attend to constantly. Having people chat with their own portable, talkative AI assistant would keep them focused on a computer, even in the absence of a keyboard, mouse, and monitor.
Interface agents, too, would by virtue of their design be talking over the user’s live encounters with other people and things, putting the user in a position of having to juggle multiple conversations at once. (For this reason, Weiser thought vocal interaction should be a last resort, to be employed sparingly and very briefly.) Moreover, because interface agents monitored and analyzed so much personal data in order to construct models of their users’ psychology, agents could develop an increasingly keen ability to get their users’ attention and keep it. Ubiquitous computing, in contrast, called for a distributed network of unremarkable interfaces (tabs, pads, and boards being phase I) that together presented and organized bits of electronic information “by place, time and situation.” Abandoning the notion of an immersive “single locus”—be it a desktop you sat at or an interface agent you carried around—was a prerequisite in Weiser’s eyes.
But the casual blowback, the maddening ease with which the other speakers had dismissed his carefully crafted intervention, was beyond what he expected. Having scarcely been heard hurt worse than not speaking at all. A year had gone by since his Scientific American article wowed the world with his ubicomp dream. The further he traveled from Xerox PARC, delivering all these talks elsewhere now, the more he struggled to convey the details that seemed to matter only to him and his closest collaborators.
The afterlife of unpredictable exchanges between smart people was precious currency, and its true value was often realized weeks later, when some remembered fragment echoed in somebody’s head while they were driving home or reading a book. A seemingly random aside uttered during such a session could later accrue into something worthwhile through the compound interest of persistent brooding.
Suchman saw the utopia that some ubicomp researchers seemed to be chasing all along: “a world which is always familiar.” Between embedded machines that adjusted a room to accommodate each user’s preferences and urban locales that redefined themselves according to each passerby’s frame of reference, Suchman realized a common thread linking various projects was “this desire to never feel that you are out of place.”
What was possible within PARC’s wireless network became five or ten years beyond reach the moment you left the building. On the inside, the stuff of Weiser’s imagination pervaded the space, infusing offices and hallways with extra layers of meaning that, for him, made everything feel more in tune, connected, and alive. He could speak to the future it pointed to during his talks to fellow researchers, but such gestures would carry no currency in the venture capitalists’ boardrooms.
On her desk at PARC she kept a hand grenade. Those who asked about it were relieved to learn the grenade contained no explosives. It had been hollowed out, but it gave Jeremijenko occasion to talk about her mission to “demilitarize technology.” Too often, the knowledge and intelligence that computer systems housed were inherently concealed from plain view, let alone public access. To the extent that it further empowered a technocratic few over the many, computing could and did serve as a weapon of sorts, she insisted. That notion did not generally sit well with her computer scientist colleagues.
If the promise of AI lay in its speed and precision, its underside was how it rendered people. Someone who is, via AI, relieved of the burden to stay attuned to the unfolding present is effectively robbed of their agency. The AI user—like the “driver” in an autonomous vehicle—is positioned to be reactive, deferring to algorithms and interface agents rather than grappling with the world. Such AI systems discount the value of intuition, and they do not prioritize boosting one’s tacit awareness.
Thanks to an innovative funding model, MIT’s Media Lab thrived in spite of the declining government and corporate support that was forcing other research centers into crisis mode. On the coattails of Negroponte’s guru status, the Media Lab sold membership packages at six or seven figures a pop to giant companies eager to buy an early glimpse of the professors’ inventions. Paying the annual dues granted these one hundred–some sponsors—firms including LEGO, Nike, Eastman Kodak, and AT&T—an invitation to exclusive project showcases and privileged access to the lab’s considerable intellectual property.
Weiser and Brown couldn’t unsee the hand-drawn images of a future that Gold conjured up as he delivered his instant classic talk, which went by the title “How Smart Does Your Bed Have to Be Before You Are Afraid to Go to Sleep at Night?” It was a thirty-minute presentation composed entirely of questions, one after another, that Gold asked his Silicon Valley audiences in unwavering succession. The interrogation began with Gold confessing honest bafflement about the motives driving so-called smart environments like those being tested at PARC and MIT: “Why would anyone want to live in an intelligent house? What would be the forces that would compel a designer, or an architect, to create such a thing?”
A definition of the periphery accompanied their first principle of calm technology, which stipulated, “A calm technology will move easily from the periphery of our attention, to center, and back.” The very thought of an electronic medium that was designed to rest comfortably at the edge of our awareness seemed alien, and it still does.
Weiser’s ideal computerized space might be packed with smart objects, but none would do much more than convey information to people in their periphery. The purpose of adding computing to things, he now believed, was to make them more usefully expressive.
“Good engineering requires a good relationship to the mystery of being alive,” Weiser told the audience gathered before him at Berkeley.
“I have been doing engineering for more than 30 years—in start up companies, in big companies, as a student, as a professor, for a hobby, for an escape, to lose myself, to find myself,” wrote Weiser in the opening of his final essay. He described how he had lately come to regard engineering as “a form of worship” not so different from religious prayer. Both practices, as he saw them, involved “honoring the unknowable connection among things.” He conceded that most engineers found this comparison ludicrous whenever he aired it in conversation. Whether they constructed bridges or software, engineers took pride in their precision. It was the duty of their profession to make every necessary calculation to ensure structural integrity, operationalize a system, and fix any bugs. Weiser prided himself on all of this, too. But his conception of engineering went far beyond materials, physical laws, or computer code.
As these technologies continue to develop and spread, the engineers who design them and the corporations overseeing them will face tough decisions about where to stand on the spectrum of visions that took root at Xerox PARC and at the MIT Media Lab in the 1990s. Whether they recognize it or not, today’s post-PC innovators are standing on the shoulders of either Weiser and his collaborators or Negroponte and his protégés. The ones who lean toward the latter have taken an early lead in the race to make everything smart.
- Links: Buy this book
- Finished: ~Nov 14, 2025