Slumming with Buddha


Meditation and mindfulness are all the rage in Silicon Valley. Which is a good thing, I guess. Wired’s Noah Shachtman reports on how the tech elite are, during breaks in the work day, unfolding their yoga pads and, in emulation of Steve Jobs, pursuing the Eastern path to nirvana, often with instruction from Buddhist monks. It’s an odd sort of enlightenment they’re after, though. Explains Google mindfulness coach Chade-Meng Tan, who helps the techies gain “emotional intelligence” through meditation, “Everybody knows this EI thing is good for their career. And every company knows that if their people have EI, they’re gonna make a shitload of money.”

That’s so Zen.

Photo by Edward Dalmulder.

Transparency begins at home


In response to last week’s disclosures about the NSA’s Prism spy program, Facebook, together with other tech companies like Google and Microsoft, has called on the government to be more “transparent” about its collection of online data. Writes Facebook’s top lawyer, Ted Ullyot, in a statement:

As Mark [Zuckerberg] said last week, we strongly encourage all governments to be much more transparent about all programs aimed at keeping the public safe. … We would welcome the opportunity to provide a transparency report that allows us to share with those who use Facebook around the world a complete picture of the government requests we receive, and how we respond. We urge the United States government to help make that possible by allowing companies to include information about the size and scope of national security requests we receive, and look forward to publishing a report that includes that information.

That all seems very noble, and I applaud Facebook for taking such a strong public stand in support of giving its users “a complete picture” of how data about them is being collected and used. But the company hardly has to wait for the government’s permission to give its users a clearer accounting of what personal information is being collected about them and how it’s being used. After all, spy agencies request data from Facebook (and other internet operators) because that’s where the data is—Facebook has already collected it, parsed it, and stored it. The NSA goes to Facebook for the same reason that Willie Sutton went to banks.

While it awaits a reply from the government, Facebook could immediately launch its own effort to give “transparency reports” to its members. It could provide each of its users with access to a simple, personalized data log that shows what particular pieces of information it has stored about them, when it collected the data, from which sites or apps the data was collected (including third-party sites and apps), and with what other organizations, commercial as well as governmental, the data has been shared. If Facebook is really interested in providing users with “a complete picture” of how data about them is being used, that would be an excellent, and obvious, place to start.
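
To make the point concrete, here is a minimal sketch, in Python, of what such a per-user data log might look like. The class names, fields, and sample entries are my own assumptions, not anything Facebook actually exposes; the point is simply how modest a structure a “complete picture” would require.

```python
# A hypothetical sketch of a per-user "transparency report." None of these
# class or field names come from Facebook; they are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class CollectionEvent:
    """One piece of personal data and the circumstances of its collection."""
    item: str                  # e.g. "pages liked", "login location"
    collected_at: datetime     # when the data was recorded
    source: str                # site or app it came from, including third parties
    shared_with: List[str] = field(default_factory=list)  # commercial or governmental recipients


@dataclass
class TransparencyReport:
    user_id: str
    events: List[CollectionEvent]

    def summary(self) -> str:
        lines = [f"Transparency report for {self.user_id}"]
        for e in self.events:
            recipients = ", ".join(e.shared_with) or "no one"
            lines.append(
                f"- {e.item} (from {e.source}, {e.collected_at:%Y-%m-%d}), shared with {recipients}"
            )
        return "\n".join(lines)


# Invented sample data, for illustration only.
report = TransparencyReport(
    user_id="example-user",
    events=[
        CollectionEvent("pages liked", datetime(2013, 6, 1), "facebook.com", ["ad partners"]),
        CollectionEvent("login location", datetime(2013, 6, 8), "third-party app", ["national security request"]),
    ],
)
print(report.summary())
```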

Image from Time.

The prism house

The nature of Rusanov’s work had been for many years, by now almost twenty, that of personnel records administration. It was a job that went by different names in different institutions, but the substance of it was always the same. Only ignoramuses and uninformed outsiders were unaware what subtle, meticulous work it was, what talent it required. It was a form of poetry not yet mastered by the poets themselves. As every man goes through life he fills in a number of forms for the record, each containing a number of questions. A man’s answer to one question on one form becomes a little thread, permanently connecting him to the local center of personnel records administration. There are thus hundreds of little threads radiating from every man, millions of threads in all. If these threads were suddenly to become visible, the whole sky would look like a spider’s web, and if they materialized as elastic bands, buses, trams and even people would all lose the ability to move, and the wind would be unable to carry torn-up newspapers or autumn leaves along the streets of the city. They are not visible, they are not material, but every man is constantly aware of their existence. The point is that a so-called completely clean record was almost unattainable, an ideal, like absolute truth. Something negative or suspicious can always be noted down against any man alive. Everyone is guilty of something or has something to conceal. All one has to do is look hard enough to find out what it is.

Each man, permanently aware of his own invisible threads, naturally develops a respect for the people who manipulate the threads, who manage personnel records administration, that most complicated science, and for these people’s authority.

— From Cancer Ward by Aleksandr Solzhenitsyn, published in 1967.

To connect with love

I don’t think that the world appreciates the existential loneliness, the profound sense of abandonment, endured by those of us who don’t have Facebook accounts. So let me give you a feeling for the blighted landscape through which we outcasts wander:

[Images: “god,” “love,” “truth,” “hope,” “peace”]

There are, as well, certain existential consolations:

[Image: “nothing”]

Disruption and control


The following essay is adapted from “A Spider’s Web,” the tenth chapter of my book The Big Switch.

It’s natural to think of the Internet as a technology of emancipation. It gives us unprecedented freedom to express ourselves, to share our ideas and passions, to find and collaborate with soul mates, and to discover information on almost any topic imaginable. For many people, going online has felt like a passage into a new and radically different kind of democratic state, one freed of the physical and social demarcations and constraints that can hobble us in the real world.

The sense of the Web as personally “empowering,” to use the common buzzword, remains strong, even among those who rue its commercialization or decry the crassness of much of its content. In early 2006, the editors of the Cato Institute’s online journal Cato Unbound published a special issue on the state of the Internet. They reported that the “collection of visionaries” contributing to the issue appeared to be “in unanimous agreement that the Internet is, and will continue to be, a force for liberation.” The Net is “the world’s largest ungoverned space,” declare Google’s Eric Schmidt and Jared Cohen in their new book The New Digital Age. “Never before in history have so many people, from so many places, had so much power at their fingertips.” David Weinberger, in Small Pieces Loosely Joined, summed up the Internet’s liberation mythology in simple terms: “The Web is a world we’ve made for one another.”

It’s a stirring thought, but like most myths it’s at best a half-truth and at worst a delusion. Computer systems in general and the Internet in particular put enormous power into the hands of individuals, but they put even greater power into the hands of companies, governments, and other institutions whose business it is to control individuals. Computer systems are not at their core technologies of emancipation. They are technologies of control. They were designed as tools for monitoring and influencing human behavior, for controlling what people do and how they do it. As we spend more time online, filling databases with the details of our lives and desires, software programs will grow ever more capable of discovering and exploiting subtle patterns in our behavior. The people or organizations using the programs will be able to discern what we want, what motivates us, and how we’re likely to react to various stimuli. They will, to use a cliché that happens in this case to be true, know more about us than we know about ourselves.

Even as the latest Net-related technological developments—the cloud, social media and networks, powerful handheld computers—grant us new opportunities and tools for self-expression and self-fulfillment, they are also giving others an unprecedented ability to influence how we think and what we do, to funnel our attention and actions toward their own ends. The technology’s ultimate social and personal consequences will be determined in large measure by how the tension between the two sides of its nature—liberating and controlling—comes to be resolved.

All living systems, from amoebas to nation-states, sustain themselves through the processing of matter, energy, and information. They take in materials from their surroundings, and they use energy to transform those materials into various useful substances, discarding the waste. This continuous turning of inputs into outputs is controlled through the collection, interpretation, and manipulation of information. The process of control itself has two thrusts. It involves measurement—the comparison of the current state of a system to its desired state. And it involves two-way communication—the transmission of instructions and the collection of feedback on results. The processing of information for the purpose of control may result in the release of a hormone into the bloodstream, the expansion of a factory’s production capacity, or the launch of a missile from a warship, but it works in essentially the same way in any living system.
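
As a toy illustration of those two thrusts (mine, not a formal model from systems theory), here is a thermostat-style loop in Python: it measures the current state, compares it with the desired state, sends an instruction, and collects the feedback that drives the next measurement.

```python
# A toy thermostat loop illustrating the two thrusts of control described
# above: measurement (comparing current state to desired state) and two-way
# communication (sending an instruction, collecting feedback on the result).
# Purely illustrative; the "room" and its response are simulated.

def read_temperature(room):
    """Feedback: measure the system's current state."""
    return room["temperature"]

def send_instruction(room, heater_on):
    """Instruction: act on the system, nudging it toward the desired state."""
    room["heater_on"] = heater_on
    room["temperature"] += 0.5 if heater_on else -0.5  # crude simulated response

def control_loop(room, desired=20.0, steps=6):
    for _ in range(steps):
        current = read_temperature(room)           # measurement
        send_instruction(room, current < desired)  # instruction based on the gap
        print(f"temperature={room['temperature']:.1f} heater_on={room['heater_on']}")

control_loop({"temperature": 18.0, "heater_on": False})
```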

When, in the 1880s, Herman Hollerith created the punch-card tabulator, the antecedent of today’s digital computer, he wasn’t just pursuing his native curiosity as an engineer and an inventor. He was responding to an imbalance between, on the one hand, the technologies for processing matter and energy and, on the other, the technologies for processing information. He was trying to help resolve what James R. Beniger, in The Control Revolution, calls a “crisis of control,” a crisis that was threatening to undermine the stability of markets and bring economic and technological progress to a halt.

Throughout the first two centuries of the Industrial Revolution, the processing of matter and energy had advanced far more rapidly than the processing of information. The steam engine, used to power ships and trains and industrial machines, allowed factories, transportation carriers, retailers, and other businesses to expand their operations and their markets far beyond what was possible when production and distribution were restricted by the limitations of muscle power. Business owners, who had previously been able to observe their operations in their entirety and control them directly, now had to rely on information from many different sources to manage their companies. But they found that they lacked the means to collect and analyze the information fast enough to make timely decisions. Measurement and communication both began to break down, hamstringing management and impeding the further growth of businesses. As the sociologist Emile Durkheim observed in 1893, “The producer can no longer embrace the market in a glance, nor even in thought. He can no longer see limits, since it is, so to speak, limitless. Accordingly production becomes unbridled and unregulated.” Government officials found themselves in a similar predicament, unable to assemble and analyze the information required to regulate commerce. The processing of materials and energy had progressed so rapidly that it had gone, quite literally, out of control.

During the second half of the nineteenth century, a series of technological advances in information processing helped administrators, in both business and government, begin to re-impose control over commerce and society, bringing order to chaos and opening the way for even larger organizations. The construction of the telegraph system, begun by Samuel F.B. Morse in 1845, allowed information to be communicated instantaneously across great distances. The establishment of time zones in 1883 allowed for more precise measurement of the flows of goods. The most important of the new control technologies, however, was bureaucracy—the organization of people into hierarchical information-processing systems. Bureaucracies had, of course, been around as long as civilization itself, but, as Beniger writes, “bureaucratic administration did not begin to achieve anything approximating its modern form until the late Industrial Revolution.” Just as the division of labor in factories provided for the more efficient processing of matter, so the division of labor in government and business offices allowed for the more efficient processing of information.

But bureaucrats alone could not keep up with the flood of data that needed to be processed—the measurement and communication requirements went beyond the capacities of even large groups of human beings. Just like their counterparts on factory floors, information workers needed new tools to do their jobs. That requirement became embarrassingly obvious inside the U.S. Census Bureau at the end of the century. During the 1870s, the federal government, struggling to administer a country and an economy that were growing rapidly in size and complexity, had demanded that the Bureau greatly expand the scope of its data collection, particularly in the areas of business and transport. The 1870 census had spanned just five subjects; the 1880 round was expanded to cover 215. But the new census turned into a disaster for the government. Even though many professional managers and clerks had been hired by the Bureau, the volume of data overwhelmed their ability to process it. By 1887, the agency found itself in the uncomfortable position of having to begin preparations for the next census even as it was still laboring to tabulate the results of the last one. It was in that context that Hollerith, who had worked on the 1880 census, rushed to invent his information-processing machine. He judged, correctly, that it would prove invaluable not only to the Census Bureau but to large companies across the nation. (Hollerith’s company would become IBM.)

The arrival of the punch-card tabulator was a seminal event in a new revolution—a “Control Revolution,” as Beniger terms it—that followed and was made necessary and inevitable by the Industrial Revolution. Through the Control Revolution, the technologies for processing information finally caught up with the technologies for processing matter and energy, bringing the living system of society back into equilibrium. The entire history of automated data processing, from Hollerith’s tabulator through the mainframe computer and on to the modern computer network, is best understood as part of that ongoing process of reestablishing and maintaining control. “Microprocessor and computer technologies, contrary to currently fashionable opinion, are not new forces only recently unleashed upon an unprepared society,” writes Beniger, “but merely the latest installment in the continuing development of the Control Revolution.”

It should come as no surprise, then, that most of the major advances in computing and networking, from Hollerith’s time to the present, have been spurred not by a desire to liberate the masses but by a need for greater control on the part of commercial and governmental bureaucrats, often ones associated with military operations and national defense. Indeed, the very structure of a bureaucracy is replicated in the functions of a computer. A computer gathers information through its input devices, records information as files in its memory, imposes formal rules and procedures on its users through its programs, and communicates information through its output devices. It is a tool for dispensing instructions, for gathering feedback on how well those instructions are carried out, and for measuring progress toward some specified goal. In using a computer, a person becomes part of the control mechanism. He turns into a component of what the Internet pioneer J. C. R. Licklider, in the seminal 1960 paper “Man-Computer Symbiosis,” described as a system integrating man and machine into a single, programmable unit.

But while computer systems played a major role in helping businesses and governments reestablish central control over workers and citizens in the wake of the Industrial Revolution, the other side of their nature—as tools for personal empowerment—has also helped shape modern society, particularly in recent years. By shifting power from institutions to individuals, information-processing machines can disturb and disrupt control as well as reinforce it. Such disturbances tend to be short-lived, however. Institutions have proven adept at reestablishing control through the development of ever more powerful information technologies. As Beniger explains, “information processing and flows need themselves to be controlled, so that informational technologies continue to be applied at higher and higher levels of control.”

The arrival of the personal computer in the 1980s, for example, posed a sudden and unexpected threat to centralized power. It initiated a new, if much more limited, crisis of control. Pioneered by countercultural hackers and hobbyists, the PC was infused from the start with a libertarian ideology. As memorably portrayed in Apple Computer’s dramatic “1984” television advertisement, the personal computer was to be a weapon against central control, a tool for destroying the Big Brother-like hegemony of the corporate mainframe. Office workers began buying PCs with their own money, bringing them to their offices, and setting them up on their desks. Bypassing corporate systems altogether, PC-empowered employees gained personal control over the data and programs they used. They gained freedom, but in the process they weakened the ability of bureaucracies to monitor and steer their work. Business executives and the IT managers that served them viewed the flood of PCs into the workplace as “a Biblical plague,” in the words of computer historian Paul Ceruzzi.

The breakdown of control proved fleeting. The client-server system, which tied all the previously autonomous PCs together into a single network connected to a central store of corporate information and software, was the means by which the bureaucrats reasserted their control over information and its processing. Together with an expansion in the size and power of IT departments, client-server systems enabled companies to restrict access to data and to limit the use of software to a set of prescribed programs. Ironically, once they were networked into a corporate system, PCs actually enabled companies to monitor, structure, and guide the work of employees more tightly than was ever possible before. “Local networking took the ‘personal’ out of personal computing,” explains Ceruzzi in A History of Modern Computing. “PC users in the workplace accepted this Faustian bargain. The more computer-savvy among them resisted, but the majority of office workers hardly even noticed how much this represented a shift away from the forces that drove the invention of the personal computer in the first place. The ease with which this transition took place shows that those who believed in truly autonomous, personal computing were perhaps naïve.”

The popularization of the Internet, through the World Wide Web and its browser, brought another and very similar control crisis. Although the construction of the Internet was spearheaded by the Department of Defense, a paragon of centralized power, it was designed, paradoxically, to be a highly dispersed, loosely organized network. Since the overriding goal was to build as reliable a system as possible—one that could withstand the failure of any of its parts—it was given a radically decentralized structure. Every computer, or node, operates autonomously, and communications between computers don’t have to pass through any central clearinghouse. The Net’s “internal protocols,” as New York University professor Alexander Galloway writes in his book Protocol, “are the enemy of bureaucracy, of rigid hierarchy, and of centralization.” If a corporate computer network was akin to a railroad, with tightly scheduled and monitored traffic, the Internet was more like the highway system, with largely free-flowing and unsupervised traffic.
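
A toy sketch may help make the reliability argument concrete. It is my own illustration, not a model of real Internet routing protocols: a small mesh of hypothetical peer nodes in which, with no central clearinghouse, traffic can still find a path when any single node drops out.

```python
# A toy illustration of a decentralized, failure-tolerant network: a small
# mesh of peer nodes with no central clearinghouse. If one node fails,
# messages can still reach their destination over another path.
from collections import deque

# Hypothetical topology: node -> directly connected neighbors.
NEIGHBORS = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
FAILED = set()  # nodes currently out of service

def find_path(source, destination):
    """Breadth-first search for any surviving route between two nodes."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path
        for neighbor in NEIGHBORS[node]:
            if neighbor not in seen and neighbor not in FAILED:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no surviving route

print(find_path("A", "E"))  # ['A', 'B', 'D', 'E']
FAILED.add("B")             # a node drops out of the network
print(find_path("A", "E"))  # traffic reroutes: ['A', 'C', 'D', 'E']
```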

At work and at home, people found they could use the Web to once again bypass established centers of control, whether corporate bureaucracies, government agencies, retailing empires, or media conglomerates. Seemingly uncontrolled and uncontrollable, the Web was routinely portrayed as a new frontier, a Rousseauian wilderness in which we, as autonomous agents, were free to redefine society on our own terms. “Governments of the Industrial World,” proclaimed John Perry Barlow in his 1996 Declaration of the Independence of Cyberspace, “you are not welcome among us. You have no sovereignty where we gather.” But, as with the arrival of the PC, it didn’t take long for governments, and corporations, to begin reasserting and even extending their dominion.

The error that Barlow and many others have made is to assume that the Net’s decentralized structure is necessarily resistant to social control. They’ve turned a technical characteristic into a metaphor for personal freedom. But, as Galloway explains, the connection of previously untethered computers into a network governed by strict protocols has actually created “a new apparatus of control.” Indeed, he writes, “the founding principle of the Net is control, not freedom—control has existed from the beginning.” As the fragmented pages of the World Wide Web turn into the unified and programmable database of the cloud, moreover, powerful new forms of surveillance and control become possible. What is programming, after all, but a method of control? Even though the Internet still has no center, technically speaking, control can now be wielded, through data-mining algorithms and other software code, from anywhere. What’s different, in comparison to the physical world, is that acts of control, even as they grow much larger in scale, become harder to detect and the wielders of control become more difficult to discern.

EPILOGUE: Shortly after I posted this piece, word broke about the National Security Agency’s up-to-now secret PRISM program, in which the spy agency accesses vast amounts of data from major Internet-service companies, including Google, Facebook, Apple, and Microsoft, to track and otherwise gather intelligence on individuals. Here we see the basic pattern of the Control Revolution again playing out, with a disruption of control, caused by a flood of unregulated information, followed by the development of new information technologies that enable the reestablishment of control at a higher level. A report in the Times seems pertinent:

Today, a revolution in software technology that allows for the highly automated and instantaneous analysis of enormous volumes of digital information has transformed the N.S.A., turning it into the virtual landlord of the digital assets of Americans and foreigners alike. The new technology has, for the first time, given America’s spies the ability to track the activities and movements of people almost anywhere in the world without actually watching them or listening to their conversations. …

While once the flow of data across the Internet appeared too overwhelming for N.S.A. to keep up with, the recent revelations suggest that the agency’s capabilities are now far greater than most outsiders believed. “Five years ago, I would have said they don’t have the capability to monitor a significant amount of Internet traffic,” said Herbert S. Lin, an expert in computer science and telecommunications at the National Research Council. Now, he said, it appears “that they are getting close to that goal.”

As does this, from the Journal:

Key advances in computing and software in recent years opened the door for the National Security Agency to analyze far larger volumes of phone, Internet and financial data to search for terrorist attacks, paving the way for the programs now generating controversy. … The NSA’s advances have come in the form of programs developed on the West Coast—a central one was known by the quirky name Hadoop—that enable intelligence agencies to cheaply amplify computing power, U.S. and industry officials said. The new capabilities allowed officials to shift from being overwhelmed by data to being able to make sense of large chunks of it to predict events, the officials said.
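
Hadoop itself is a sprawling distributed Java framework, so the snippet below is only a toy, single-machine imitation of the map-and-reduce pattern it popularized, with invented “call records”; it is meant only to suggest how enormous piles of records can be boiled down in simple, parallelizable steps.

```python
# A toy, single-machine imitation of the map/reduce pattern that Hadoop
# popularized. The "call records" are invented; on a real cluster the map
# and reduce phases run in parallel across many cheap machines.
from collections import Counter
from itertools import chain

records = [
    "alice calls bob",
    "bob calls carol",
    "alice calls carol",
]

def map_phase(record):
    # Emit (key, 1) pairs: here, one per caller in a call record.
    caller = record.split()[0]
    return [(caller, 1)]

def reduce_phase(pairs):
    # Sum the counts for each key; on a cluster this runs per partition.
    totals = Counter()
    for key, count in pairs:
        totals[key] += count
    return totals

print(reduce_phase(chain.from_iterable(map_phase(r) for r in records)))
# Counter({'alice': 2, 'bob': 1, 'carol': 0}) -- actually: Counter({'alice': 2, 'bob': 1})
```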

Big Switch: new edition available


The new paperback edition of my book The Big Switch: Rewiring the World, from Edison to Google has been published and is available at Amazon, B&N, Powell’s, and your local independent bookstore. The new edition has an afterword that brings the story of the cloud up to date (and also reveals the fate of Samuel Insull). Here’s how the afterword begins:

On the night of June 29, 2012, a Friday, the Internet flickered like a loose lightbulb. Thousands of Netflix movies, streaming through the screens of TVs and computers around the land, froze in midscene. The Instagram apps installed on millions of smartphones and tablets refused to upload or download snapshots. The popular online scrapbook Pinterest disappeared entirely, replaced by a simple white page bearing a curt and unhelpful legend: Server Not Responding. For Web-loving Americans, it was an irritating start to the weekend. …

Here’s what some reviewers have said about the book:

Financial Times: “The best read so far about the significance of the shift to cloud computing.”

Fast Company: “Future Shock for the Web-apps era … Compulsively readable – for nontechies, too.”

Salon: “Magisterial … Draws an elegant and illuminating parallel between the late-19th-century electrification of America and today’s computing world.”

Wall Street Journal: “Mr. Carr’s provocations are destined to influence CEOs and the boards and investors that support them as companies grapple with the constant change of the digital age.”

Christian Science Monitor: “Widely considered to be the most influential book so far on the cloud computing movement.”

Times Higher Education: “Lucid and accessible … [Carr’s] account is one of high journalism, rather than of a social or computer scientist. His book should be read by anyone interested in the shift from the world wide web and its implications for industry, work and our information environment.”

New York Post: “The Big Switch is thought-provoking and an enjoyable read, and the history of American electricity that makes up the first half of the book is riveting stuff. Further, the book broadly reinforces the point that it’s always wise to distrust utopias, technological or otherwise.”

Lethal autonomous robots are coming


In Geneva today, United Nations special rapporteur Christof Heyns presented his report on lethal autonomous robots, or LARs, to the Human Rights Council. You can download the full report, which is methodical, dispassionate, and chilling, here.

LARs, which Heyns defines as “weapon systems that, once activated, can select and engage targets without further human intervention,” have not yet been deployed in wars or other conflicts, but the technology to produce them is very much within reach. It’s just a matter of taking the human decision-maker out of the hurly-burly of the immediate “kill loop” and leaving the firing decision to algorithms (i.e., abstract protocols scripted by humans in calmer circumstances). Governments with the capability to field such weapons “indicate that their use during armed conflict or elsewhere is not currently envisioned,” but history, as Heyns points out, suggests that such assurances are subject to revision without warning:

It should be recalled that aeroplanes and drones were first used in armed conflict for surveillance purposes only, and offensive use was ruled out because of the anticipated adverse consequences. Subsequent experience shows that when technology that provides a perceived advantage over an adversary is available, initial intentions are often cast aside. Likewise, military technology is easily transferred into the civilian sphere. If the international legal framework has to be reinforced against the pressures of the future, this must be done while it is still possible.

Another complicating factor, and one that makes the issue of LARs even more pressing, is that “the nature of robotic development generally makes it a difficult subject of regulation”:

Bright lines are difficult to find. Robotic development is incremental in nature. Furthermore, there is significant continuity between military and non-military technologies. The same robotic platforms can have civilian as well as military applications, and can be deployed for non-lethal purposes (e.g. to defuse improvised explosive devices) or be equipped with lethal capability (i.e. LARs). Moreover, LARs typically have a composite nature and are combinations of underlying technologies with multiple purposes.

The importance of the free pursuit of scientific study is a powerful disincentive to regulate research and development in this area. Yet “technology creep” in this area may over time and almost unnoticeably result in a situation which presents grave dangers to core human values and to the international security system.

The UN report makes it clear that there are practical advantages as well as drawbacks to using LARs in place of soldiers and airmen:

Robots may in some respects serve humanitarian purposes. While the current emergence of unmanned systems may be related to the desire on the part of States not to become entangled in the complexities of capture, future generations of robots may be able to employ less lethal force, and thus cause fewer unnecessary deaths. Technology can offer creative alternatives to lethality, for instance by immobilizing or disarming the target. Robots can be programmed to leave a digital trail, which potentially allows better scrutiny of their actions than is often the case with soldiers and could therefore in that sense enhance accountability.

The progression from remote controlled systems to LARs, for its part, is driven by a number of other considerations. Perhaps foremost is the fact that, given the increased pace of warfare, humans have in some respects become the weakest link in the military arsenal and are thus being taken out of the decision-making loop. The reaction time of autonomous systems far exceeds that of human beings, especially if the speed of remote-controlled systems is further slowed down through the inevitable time-lag of global communications. States also have incentives to develop LARs to enable them to continue with operations even if communication links have been broken off behind enemy lines.

LARs will not be susceptible to some of the human shortcomings that may undermine the protection of life. Typically they would not act out of revenge, panic, anger, spite, prejudice or fear. Moreover, unless specifically programmed to do so, robots would not cause intentional suffering on civilian populations, for example through torture. Robots also do not rape.

Yet robots have limitations in other respects as compared to humans. Armed conflict and IHL often require human judgement, common sense, appreciation of the larger picture, understanding of the intentions behind people’s actions, and understanding of values and anticipation of the direction in which events are unfolding. Decisions over life and death in armed conflict may require compassion and intuition. Humans – while they are fallible – at least might possess these qualities, whereas robots definitely do not. While robots are especially effective at dealing with quantitative issues, they have limited abilities to make the qualitative assessments that are often called for when dealing with human life. Machine calculations are rendered difficult by some of the contradictions often underlying battlefield choices. A further concern relates to the ability of robots to distinguish legal from illegal orders.

While LARs may thus in some ways be able to make certain assessments more accurately and faster than humans, they are in other ways more limited, often because they have restricted abilities to interpret context and to make value-based calculations.

Beyond the obvious moral and technical questions, one of the greatest and most insidious risks of autonomous killer robots, Heyns writes, is that they can erode the “built-in constraints that humans have against going to war,” notably “our aversion to getting killed, losing loved ones, or having to kill other people”:

Due to the low or lowered human costs of armed conflict to States with LARs in their arsenals, the national public may over time become increasingly disengaged and leave the decision to use force as a largely financial or diplomatic question for the State, leading to the “normalization” of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.

It seems clear that the time to think about lethal autonomous robots is now. Writes Heyns: “This report is a call for pause, to allow serious and meaningful international engagement with this issue.” Once LARs are deployed, he implies, almost certainly correctly, it will probably be too late to restrict their use. So here we find ourselves in the midst of a case study, with extraordinarily high stakes, about whether or not society is capable of weighing the costs and benefits of a particular technology before it goes into use and of choosing a course rather than having a course imposed on it.