When a regulatory burden is a competitive boon

The incipient surveillance economy is dominated by a duopoly: Google and Facebook. (Shall I call it GooF? Yes, I shall.) According to estimates, the two companies control somewhere between half and three-quarters of spending on digital advertising throughout the world, and that already extraordinary share seems fated to rise even higher. Thanks to Google’s failure to develop a strong social-media platform, the two companies compete only glancingly. Their services are largely complementary, so both can continue to grow smartly without raiding each other’s revenues and profits.

The concentration of market power, and its possible abuse, is one of two broad and growing concerns the public has about the GooF axis. The other is the control over personal information wielded by the duopoly. Because personal-data stores provide the fuel for the ad business and the ad business feeds the data stores, the two concerns are tightly connected, to the point of being confused in the public mind. Those who fear GooF tend to assume that greater regulation on either the privacy front or the antitrust front will help to blunt the companies’ power, bringing them to some sort of heel. If legislators or judges won’t break up the giants or circumscribe their expansion, the thinking goes, at least we can rein them in by putting some constraints on their ability to collect and exploit personal information.

But, with Europe’s General Data Protection Regulation set to go into effect in a month, it’s suddenly becoming clear that the reality is going to be very different from what’s been assumed. New privacy regulations are likely to give Google and Facebook even more market power. Far from being weakened, the duopoly will end up competitively stronger, better insulated from new and existing rivals. “Privacy Rules May Strengthen Internet Giants,” runs the headline on the front page of today’s New York Times. Reads the headline on a similar article in the Wall Street Journal: “Google and Facebook Likely to Benefit from Europe’s Privacy Crackdown.”

The reason is simple. It costs a lot of money and time to comply with regulations, particularly the kind of complex technical regulations that affect digital commerce, and the compliance costs place a far greater burden on small or fledgling competitors than they do on big incumbents. Google and Facebook already have armies of lobbyists, lawyers, and programmers to navigate the new rules, and they have plenty of free cash available to invest in compliance programs. They’ll be able to meet the regulatory requirements fairly easily. (And they even have the power to shift some of the cost burden onto the publishers who use their ad networks, as the Journal notes.)

If you’re operating a smaller ad network, the added compliance costs will be much more onerous, perhaps ruinously so. Worse yet, the new regulations may well give your customers an incentive to shift their business over to the dominant players. In an environment of legal uncertainty, companies seek safety, and safety lies with the big, established suppliers. And if you’re a brave entrepreneur who’s been thinking of taking on GooF by launching a new social network or search system, well, the already daunting entry barriers will be made even more daunting by the new compliance costs and by customers’ flight to safety. When, in his recent Congressional testimony, Mark Zuckerberg said he welcomed more regulation, he was not being the selfless soul he pretended to be.

I’m not arguing against new data-privacy regulations. They may well protect the public from abuse, or at least give the public a clearer view of what’s really going on with personal data. What I am suggesting is that the regulations, imposed in isolation, seem likely to have the unintended effect of further reducing competition in the digital advertising market and hence buttressing the surveillance-economy duopoly. The online world will end up even GooFier.

Re-engineering humanity

I had the pleasure and honor of writing the foreword to Brett Frischmann and Evan Selinger’s new book, Re-engineering Humanity. The book is out today, from Cambridge University Press. You can find more information, and ordering links, here and here. And here is my foreword:

Human beings have a genius for designing, making, and using tools. Our innate talent for technological invention is one of the chief qualities that sets our species apart from others and one of the main reasons we have taken such a hold on the planet and its fate. But if our ability to see the world as raw material, as something we can alter and otherwise manipulate to suit our purposes, gives us enormous power, it also entails great risks. One danger is that we come to see ourselves as instruments to be engineered, optimized, and programmed, as if our minds and bodies were themselves nothing more than technologies. Such blurring of the tool and its maker is a central theme of this important book.

Worries that machines might sap us of our humanity have, of course, been around as long as machines have been around. In modern times, thinkers as varied as Max Weber and Martin Heidegger have described, often with great subtlety, how a narrow, instrumentalist view of existence influences our understanding of ourselves and shapes the kind of societies we create. But the risk, as Brett Frischmann and Evan Selinger make clear, has never been so acute as it is today.

Thanks to our ever-present smartphones and other digital devices, most of us are connected to a powerful computing network throughout our waking hours. The companies that control the network are eager to gain an ever-stronger purchase on our senses and thoughts through their apps, sites, and services. At the same time, a proliferation of networked objects, machines, and appliances in our homes and workplaces is enmeshing us still further in a computerized environment designed to respond automatically to our needs. We enjoy many benefits from our increasingly mediated existence. Tasks and activities that were once difficult or time-consuming have become easier, requiring less effort and thought. What we risk losing is personal agency and the sense of fulfillment and belonging that comes from acting with talent and intentionality in the world.

As we transfer agency to computers and software, we also begin to cede control over our desires and decisions. We begin to “outsource,” as Frischmann and Selinger aptly put it, responsibility for intimate, self-defining assessments and judgments to programmers and the companies that employ them. Already, many people have learned to defer to algorithms in choosing which film to watch, which meal to cook, which news to follow, even which person to date. (Why think when you can click?) By ceding such choices to outsiders, we inevitably open ourselves to manipulation. Given that the design and workings of algorithms are almost always hidden from us, it can be difficult if not impossible to know whether the choices being made on our behalf reflect our own interests or those of corporations, governments, and other outside parties. We want to believe that technology strengthens our control over our lives and circumstances, but if used without consideration technology is just as likely to turn us into wards of the technologist.

What the reader will find in the pages that follow is a reasoned and judicious argument, not an alarmist screed. It is a call first to critical thought and then to constructive action. Frischmann and Selinger provide a thoroughgoing and balanced examination of the trade-offs inherent in offloading tasks and decisions to computers. By illuminating these often intricate and hidden trade-offs, and providing a practical framework for assessing and negotiating them, the authors give us the power to make wiser choices. Their book positions us to make the most of our powerful new technologies while at the same time safeguarding the personal skills and judgments that make us most ourselves and the institutional and political structures and decisions essential to societal well-being.

“Technological momentum,” as the historian Thomas Hughes called it, is a powerful force. It can pull us along mindlessly in its slipstream. Countering that force is possible, but it requires a conscious acceptance of responsibility over how technologies are designed and used. If we don’t accept that responsibility, we risk becoming means to others’ ends.

Democratization vs. Democracy

The Los Angeles Review of Books has published my review of the new MIT Press book Trump and the Media, a collection of essays edited by Pablo J. Boczkowski and Zizi Papacharissi. Here’s a bit:

The ideal of a radically “democratized” media, decentralized, participative, and personally emancipating, was enticing, and it continued to cast a spell long after the defeat of the fascist powers in the Second World War. The ideal infused the counterculture of the 1960s. Beatniks and hippies staged kaleidoscopic multimedia “happenings” as a way to free their minds, find their true selves, and subvert consumerist conventionality. By the end of the 1970s, the ideal had been embraced by Steve Jobs and other technologists, who celebrated the personal computer as an anti-authoritarian tool of self-actualization. In the early years of this century, as the internet subsumed traditional media, the ideal became a pillar of Silicon Valley ideology. The founders of companies like Google and Facebook, Twitter and Reddit, promoted their networks as tools for overthrowing mass-media “gatekeepers” and giving individuals control over the exchange of information. They promised, as Fred Turner writes, that social media would “allow us to present our authentic selves to one another” and connect those diverse selves into a more harmonious, pluralistic, and democratic society.

Then came the 2016 U.S. presidential campaign. The ideal’s fruition proved its undoing.

Read on.

AI: the Ziggy Stardust Syndrome

“Ziggy sucked up into his mind.” –David Bowie

In his Wall Street Journal column this weekend, Nobel laureate Frank Wilczek offers a fascinating theory as to why we haven’t been able to find signs of intelligent life elsewhere in the universe. Maybe, he suggests, intelligent beings are fated to shrink as their intelligence expands. Once the singularity happens, AI implodes into invisibility.

It’s entirely logical. Wilczek notes that “effective computation must involve interactions and that the speed of light limits communication.” To optimize its thinking, an AI would have no choice but to compress itself to minimize delays in the exchange of messages. It would need to get really, really small.

Consider a computer operating at a speed of 10 gigahertz, which is not far from what you can buy today. In the time between its computational steps, light can travel just over an inch. Accordingly, powerful thinking entities that obey the laws of physics, and which need to exchange up-to-date information, can’t be spaced much farther apart than that. Thinkers at the vanguard of a hyperadvanced technology, striving to be both quick-witted and coherent, would keep that technology small.
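As a quick sanity check of that figure, here is the back-of-the-envelope arithmetic, a sketch of my own that assumes nothing beyond the 10-gigahertz clock rate cited above:

```python
# Rough estimate: how far light travels during one computational step
# of a 10 GHz machine. Illustrative figures only.

SPEED_OF_LIGHT = 299_792_458     # meters per second
CLOCK_RATE_HZ = 10e9             # 10 gigahertz

step_seconds = 1 / CLOCK_RATE_HZ             # 1e-10 seconds per step
step_meters = SPEED_OF_LIGHT * step_seconds  # ~0.03 meters
step_inches = step_meters / 0.0254           # ~1.18 inches

print(f"Light travels about {step_meters:.3f} m ({step_inches:.2f} in) per step")
```

Anything spread over much more than an inch cannot exchange a message within a single step, which is why Wilczek expects a fast, coherent thinker to stay small.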

The upshot is that the most advanced civilizations would be tiny and shy. They would “expand inward, to achieve speed and integration — not outward, where they’d lose patience waiting for feedback.” Call it the Ziggy Stardust Syndrome. An AI-based civilization would suck up into its own mind, becoming a sort of black hole of braininess. We wouldn’t be able to see such civilizations because, lost in their own thoughts, they’d have no interest in being seen. “A hyperadvanced civilization,” as Wilczek puts it, “might just want to be left alone.” Like Greta Garbo.

The idea of a jackbooted superintelligent borg bent on imperialistic conquest has always left me cold. It seems an expression of anthropomorphic thinking: an AI would act like us. Wilczek’s vision is much more appealing. There’s a real poignancy — and, to me at least, a strange hopefulness — to the idea that the ultimate intelligence would also be the ultimate introvert, drawn ever further into the intricacies of its own mind. What would an AI think about? It would think about its own thoughts. It would be a pinprick of pure philosophy. It would, in the end, be the size of an idea.

The meek may not inherit the earth, but it seems they may inherit the cosmos, if they haven’t already.

The Big Switch: ten years on

My second book, The Big Switch: Rewiring the World, from Edison to Google, celebrates its tenth birthday this year. The book, which came out in January 2008, heralds the coming of the cloud and speculates on its consequences. It’s hard to imagine now, but in 2008 cloud computing was a new and largely unproven concept, and the common wisdom was that it wouldn’t work. Software programs running in centralized server farms and delivered over the internet to users would be too slow and balky, it was thought, to displace the programs running on hard drives inside personal computers or on servers in data centers owned by individual companies. The naysayers were wrong. The technical barriers fell, network latency evaporated, and in short order computing went from being a decentralized resource to being a centralized one — a utility, essentially. The computer scientists, engineers, and programmers who made this monumental technical shift possible still haven’t received their due, and they probably never will. The real work went on behind the scenes, anonymously, in companies like Google, Salesforce.com, Amazon Web Services, Akamai, and Facebook, among many others.

The Big Switch has two parts. The first, “One Machine,” draws a parallel between the building of the electric grid a hundred years ago and the building of the cloud today. In both cases, a decentralized resource essential to society (power, data processing) was centralized through the construction of a distribution network (electric grid, internet) and central plants (generation stations, server farms). The stories of the electric grid and the computing grid are both stories of technical ingenuity and fearlessness. The book’s second part, “Living in the Cloud,” is darker. In fact, it was during the course of writing it that my view of the future of computing changed. I began The Big Switch believing that the new computing grid would democratize the use of computing power even as it centralized the machinery of data processing. That is, after all, what the electric grid did. By industrializing the generation and distribution of electricity, it made power a cheap resource that everyone could use simply by sticking a plug into a wall socket. But data is fundamentally different from electric current, I belatedly realized, and centralizing the provision of computing would also mean centralizing control over information. The owners of the server farms would not be faceless utilities; they would be our overseers.

Here’s an excerpt from “The Inventor and His Clerk,” a chapter in the first half of The Big Switch:

Thomas Edison was tired. It was the summer of 1878, and he’d just spent a grueling year perfecting and then promoting his most dazzling invention yet, the tinfoil phonograph. He needed a break from the round-the-clock bustle of his Menlo Park laboratory, a chance to clear his mind before embarking on some great new technological adventure. When a group of his friends invited him to join them on a leisurely camping and hunting tour of the American West, he quickly agreed. The trip began in Rawlins, Wyoming, where the party viewed an eclipse of the sun, and then continued westward through Utah and Nevada, into Yosemite Valley, and on to San Francisco.

While traveling through the Rockies, Edison visited a mining site by the side of the Platte River. Seeing a crew of workers struggling with manual drills, he turned to a companion and remarked, “Why cannot the power of yonder river be transmitted to these men by electricity?” It was an audacious thought—electricity had yet to be harnessed on anything but the smallest scale—but for Edison audacity was synonymous with inspiration. By the time he returned east in the fall, he was consumed with the idea of supplying electricity over a network from a central generating station. His interest no longer lay in powering the drills of work crews in the wilderness, however. He wanted to illuminate entire cities. He rushed to set up the Edison Electric Light Company to fund the project and, on October 20, he announced to the press that he would soon be providing electricity to the homes and offices of New York City. Having made the grand promise, all he and his Menlo Park team had to do was figure out how to fulfill it.

Unlike lesser inventors, Edison didn’t just create individual products; he created entire systems. He first imagined the whole, then he built the necessary pieces, making sure they all fit together seamlessly. “It was not only necessary that the lamps should give light and the dynamos generate current,” he would later write about his plan for supplying electricity as a utility, “but the lamps must be adapted to the current of the dynamos, and the dynamos must be constructed to give the character of current required by the lamps, and likewise all parts of the system must be constructed with reference to all other parts, since, in one sense, all the parts form one machine.” Fortunately for Edison, he had a good model at hand. Urban gaslight systems, invented at the start of the century, had been set up in many cities to bring natural gas from a central gasworks into buildings to be used as fuel for lamps. Light, having been produced by simple candles and oil lamps for centuries, had already become a centralized utility. Edison’s challenge was to replace the gaslight systems with electric ones.

Electricity had, in theory, many advantages over gas as a source of lighting. It was easier to control, and because it provided illumination without a flame it was cleaner and safer to use. Gaslight by comparison was dangerous and messy. It sucked the oxygen out of rooms, gave off toxic fumes, blackened walls and soiled curtains, heated the air, and had an unnerving tendency to cause large and deadly explosions. While gaslight was originally “celebrated as cleanliness and purity incarnate,” Wolfgang Schivelbusch reports in Disenchanted Night, his history of lighting systems, its shortcomings became more apparent as it came to be more broadly used. People began to consider it “dirty and unhygienic”—a necessary evil. Edison himself dismissed gaslight as “barbarous and wasteful.” He called it “a light for the dark ages.”

Despite the growing discontent with gas lamps, technological constraints limited the use of electricity for lighting at the time Edison began his experiments. For one thing, the modern incandescent lightbulb had yet to be invented. The only viable electric light was the arc lamp, which worked by sending a naked current across a gap between two charged carbon rods. Arc lamps burned with such intense brightness and heat that you couldn’t put them inside rooms or most other enclosed spaces. They were restricted to large public areas. For another thing, there was no way to supply electricity from a central facility. Every arc lamp required its own battery. “Like the candle and the oil lamp,” Schivelbusch explains, “arc lighting was governed by the pre-industrial principle of a self-sufficient supply.” However bad gaslight might be, electric light was no alternative.

To build his “one machine,” therefore, Edison had to pursue technological breakthroughs in every major component of the system. He had to pioneer a way to produce electricity efficiently in large quantities, a way to transmit the current safely to homes and offices, a way to measure each customer’s use of the current, and, finally, a way to turn the current into controllable, reliable light suitable for normal living spaces. And he had to make sure that he could sell electric light at the same price as gaslight and still turn a profit.

It was a daunting challenge, but he and his Menlo Park associates managed to pull it off with remarkable speed. Within two years, they had developed all the critical components of the system. They had invented the renowned Edison lightbulb, sealing a thin carbon filament inside a small glass vacuum to create, as one reporter poetically put it, “a little globe of sunshine, a veritable Aladdin’s lamp.” They had designed a powerful new dynamo that was four times bigger than its largest precursor. (They named their creation the Jumbo, after a popular circus elephant of the time.) They had perfected a parallel circuit that would allow many bulbs to operate independently, with separate controls, on a single wire. And they had created a meter that would keep track of how much electricity a customer used. In 1881, Edison traveled to Paris to display a small working model of his system at the International Exposition of Electricity, held in the Palais de l’Industrie on the Champs-Elysées. He also unveiled blueprints for the world’s first central generating station, which he announced he would construct in two warehouses on Pearl Street in lower Manhattan.

The plans for the Pearl Street station were ambitious. Four large coal-fired boilers would create the steam pressure to power six 125-horsepower steam engines, which in turn would drive six of Edison’s Jumbo dynamos. The electricity would be sent through a network of underground cables to buildings in a square-mile territory around the plant, each of which would be outfitted with a meter. Construction of the system began soon after the Paris Exposition, with Edison often working through the night to supervise the effort. A little more than a year later, the plant had been built and the miles of cables laid. At precisely three o’clock in the afternoon on September 4, 1882, Edison instructed his chief electrician, John Lieb, to throw a switch at the Pearl Street station, releasing the current from one of its generators. As the New York Herald reported the following day, “in a twinkling, the area bounded by Spruce, Wall, Nassau and Pearl Streets was in a glow.” The electric utility had arrived.

And here’s an excerpt from “A Spider’s Web,” a chapter in the second half:

The most far-reaching corporate use of the cloud [will be] as a control technology for optimizing how we act as consumers. Despite the resistance of the Web’s early pioneers and pundits, consumerism long ago replaced libertarianism as the prevailing ideology of the online world. Restrictions on the commercial use of the Net collapsed with the launch of the World Wide Web in 1991. The first banner ad—for a Silicon Valley law firm—appeared in 1993, followed the next year by the first spam campaign. In 1995, Netscape tweaked its Navigator browser to support the “cookies” that enable companies to identify and monitor visitors to their sites. By 1996, the dotcom gold rush had begun. More recently, the Web’s role as a sales and promotion channel has expanded further. Assisted by Internet marketing consultants, companies large and small have become much more adept at collecting information on customers, analyzing their behavior, and targeting products and promotional messages to them.

The growing sophistication of Web marketing can be seen most clearly in advertising. Rather than being dominated by generic banner ads, online advertising is now tightly tied to search results or other explicit indicators of people’s desires and identities. Search engines themselves have become the leading distributors of ads, as the prevailing tools for Web navigation and corporate promotion have merged into a single and extraordinarily profitable service. Google originally resisted the linking of advertisements to search results—its founders argued that “advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers”—but now it makes billions of dollars through the practice. Search-engine optimization—the science of using advanced statistical techniques to increase the likelihood that a person will visit a site or click on an ad—has become an important corporate function, which Google and other search engines promote by sharing with companies information on how they rank sites and place ads.

In what is perhaps the most remarkable manifestation of the triumph of consumerism on the Web, popular online communities like MySpace encourage their members to become friends with corporations and their products. During 2006, for example, more than 85,000 people “friended” Toyota’s Yaris car model at the site, happily entangling themselves in the company’s promotional campaign for the recently introduced vehicle. “MySpace can be viewed as one huge platform for ‘personal product placement,’” writes Wade Roush in an article in Technology Review. He argues that “the large supply of fake ‘friends,’ together with the cornucopia of ready-made songs, videos, and other marketing materials that can be directly embedded in [users’] profiles, encourages members to define themselves and their relationships almost solely in terms of media and consumption.” In recognition of the blurring of the line between customer and marketer online, Advertising Age named “the consumer” its 2007 Advertising Agency of the Year.

But the Internet is not just a marketing channel. It’s also a marketing laboratory, providing companies with unprecedented insights into the motivations and behavior of shoppers. Businesses have long been skilled at controlling the supply side of their operations, thanks in large part to earlier advances in information technology, but they’ve struggled when it comes to exerting control over the demand side—over what people buy and where and when they buy it. They haven’t been able to influence customers as directly as they’ve been able to influence employees and suppliers. Advertising and promotion have always been frustratingly imprecise. As the department store magnate John Wanamaker famously said more than a hundred years ago, “Half the money I spend on advertising is wasted. The trouble is, I don’t know which half.”

The cloud is beginning to change that. It promises to strengthen companies’ control over consumption by providing marketers with the data they need to personalize their pitches precisely and gauge the effects of those pitches accurately. It optimizes both communication and measurement. In a 2006 interview with the Economist, Rishad Tobaccowala, a top executive with the international ad agency Publicis, summed up the change in a colorful, and telling, metaphor. He compared traditional advertising to dropping bombs on cities—a company can’t be sure who it hits and who it misses. But with Internet ads, he said, companies can “make lots of spearheads and then get people to impale themselves.” […]

“As every man goes through life he fills in a number of forms for the record, each containing a number of questions,” Alexander Solzhenitsyn wrote in his novel Cancer Ward. “A man’s answer to one question on one form becomes a little thread, permanently connecting him to the local center of personnel records administration. There are thus hundreds of little threads radiating from every man, millions of threads in all. If these threads were suddenly to become visible, the whole sky would look like a spider’s web. … Each man, permanently aware of his own invisible threads, naturally develops a respect for the people who manipulate the threads.”

As we go about our increasingly digitized lives, the threads that radiate from us are multiplying far beyond anything that even Solzhenitsyn could have imagined in the Soviet Union in the 1960s. Nearly everything we do online is recorded somewhere in the machinery of the cloud. Every time we read a page of text or click on a link or watch a video, every time we put something in a shopping cart or perform a search, every time we send an email or chat in an instant-messaging window, we are filling in a “form for the record.” Unlike Solzhenitsyn’s Everyman, however, we’re often unaware of the threads we’re spinning and how and by whom they’re being manipulated. And even if we were conscious of being monitored or controlled, we might not care. After all, we also benefit from the personalization that the Internet makes possible—it makes us more perfect consumers and workers. We accept greater control in return for greater convenience. The spider’s web is made to measure, and we’re not unhappy inside it.

That was the view, or at least one view, from 2008.

The metadata of experience, the experience of metadata

I like to know where things stand. I like to know how things are progressing. I signed up for UPS My Choice and FedEx Delivery Manager and USPS Informed Delivery. I know when a package has been shipped to me, where it is at every moment as it hops across the country toward me, the projected window of its ultimate delivery, the fact of its delivery.

I know when there are exceptions. I know when the weather has turned inclement. I know the hubs, and I know the spokes.

When I myself require carriage, I order a car through Uber or Lyft, and in an instant I know my driver’s name, what he looks like, the model and color of the vehicle he is driving, and his rating (Vladislav, white Toyota Camry, 4.8). I see where he is on the map, how many minutes I must wait for his arrival. Sometimes his progress stalls, and my wait time increases by a minute, and this is painful to me.

I don’t want to be surprised. I prefer suffocation to surprise.

I give Vladislav five stars not because he deserves more than four stars but because I would like my life to be a series of five-star experiences.

I am standing in line in a building, waiting to get a matter taken care of, and in front of me is a sign requesting that I rate how I am feeling by pressing one of three emoji buttons. There is a smiling face and there is a frowning face, and between the two is a face with an expression of complete affectlessness. I choose the button in the middle, and immediately I feel my face go blank.

I like the fact that I can now check my credit rating without affecting my credit rating. I am no fan of the uncertainty principle.

I have come to realize that I learn more about other people by googling them than by meeting them and talking with them.

When a friend posts a new photo on Instagram, I give a lot of thought as to whether or not I should like it. These choices have effects on people, and they have ramifications for how people will judge me and my own offerings in the future. The likes I give, or withhold, say something about me as well as about the object or experience being rated. The generation of metadata should never be taken lightly.

Metadata is a kind of agony.

Everything that happens to me is time-stamped. My life is a series of transactions recorded in official ledgers. I am a clerk. I am a bureaucrat. I’m always on the job.

I know all the details. I know what just happened, and I know what happens next. Only the present escapes me.

Trump and Twitter

I have an essay on Donald Trump’s Twitter habit, and what it says about the times, in the new issue of Politico Magazine.

Here’s a bit:

In the early 1950s, the Canadian political economist Harold Innis suggested that every informational medium has a bias. By encouraging certain forms of speech and discouraging others, a popular medium not only influences how people converse; it also shapes a society’s institutions and values. Early types of media — tablets, scrolls, theaters — were “time-biased,” Innis wrote. Durable and largely stationary, they encouraged the long view and tended to underpin communities that were stable, hierarchical, and often deeply religious.

As communication technology advanced, new “space-biased” media came to the fore. Communication networks extended across great distances and reached mass audiences, and the messages the networks carried took on a more transactional and transitory character. Modern media, from post offices to telegraph lines to TV stations, encouraged the development of more dynamic societies built not on eternal verities but on commerce and trade.

By altering prevailing forms of communication, Innis argued, every new medium tends to upset the status quo. The recent arrival of social media fits this pattern. Thanks to the rise of networks like Twitter, Facebook, and Snapchat, the way we express ourselves, as individuals and as citizens, is in a state of upheaval. Radically biased toward space and against time, social media is inherently destabilizing. What it teaches us, through its whirlwind of fleeting messages, is that nothing lasts. Everything is disposable. Novelty rules. The disorienting sway that Trump’s tweets hold over us, the way they’ve blurred the personal and the public, the vital and the trivial, the true and the false, testifies to the power of the change, and the uncertainty of its consequences.

Read on.

Image: Politico.