For me, that unilluminating, sycophantic interview, during which Carlson never asked a challenging question and let Trump ramble on about whatever random subjects flitted through his mind, was hard to watch.
But as I write, it has racked up more than 256 million views, which suggests that more than a few people were interested in what Trump had to say. By comparison, Fox News says fewer than 13 million people watched its broadcast of the debate that Trump skipped.
X, in short, seems to be giving people what they want, which makes good business sense. One might also argue, as Carlson did, that "whatever you think of Trump…voters have an interest in hearing what he thinks," since he is the "indisputable, far-and-away front-runner in the Republican race."
Nix and Ellison do not see it that way. For the good of democracy, they think, social media platforms should be showing users political content only if it can be certified as accurate. That is, of course, an impossible challenge, one that is magnified by the difficulty of determining when speech, although not demonstrably false, nevertheless qualifies as "misinformation" because it is "misleading." Policing "hate speech," which Nix and Ellison also want the platforms to do, poses similar problems of interpretation and judgment.
The major platforms define their content moderation mission more narrowly than Nix and Ellison would like. "We remove content that misleads voters on how to vote or encourages interference in the democratic process," YouTube told the Post. "Additionally, we connect people to authoritative election news and information through recommendations and information panels." Meta, which owns Facebook, Instagram, and Threads, was vaguer. "Protecting the U.S. 2024 elections is one of our top priorities," it said, "and our integrity efforts continue to lead the industry."
No matter how they decide to flag or suppress content, the platforms will be pissing off a lot of people. There is "no winning," Katie Harbath, former director of public policy at Facebook, told the Post. "For Democrats, we weren't taking down enough, and for Republicans we were taking down too much." In light of those conflicting demands, Harbath said, Facebook decided "it's just not worth it anymore."
This situation becomes even more difficult and complicated when federal officials start demanding that social media companies do more to suppress speech those officials view as dangerous to democracy, public health, or national security. It also becomes constitutionally problematic—a point that Nix and Ellison do not even acknowledge. Instead they complain that "an aggressive legal battle over claims that the Biden administration pressured social media platforms to silence certain speech has blocked a key path to detecting election interference."
Those are not merely "claims." The Biden administration indisputably "pressured social media platforms," publicly and privately, "to silence certain speech." The legal question is whether that pressure amounted to government-directed censorship, in violation of the First Amendment. A federal judge concluded that it did.
Nix and Ellison probably disagree with that decision. But they do not even mention it, let alone explain why they think it was wrong. More generally, they seem completely untroubled by the free speech implications of not-so-subtly threatening social media companies with antitrust litigation, heavier regulation, and increased exposure to civil liability if they fail to follow the government's content moderation recommendations.
Nix and Ellison repeatedly raise the specter of foreign interference with U.S. elections. The "new approach" to content moderation, they say, "marks a sharp shift from the 2020 election, when social media companies expanded their efforts to police disinformation. The companies feared a repeat of 2016, when Russian trolls attempted to interfere in the U.S. presidential campaign, turning the platforms into tools of political manipulation and division."
Those sinister-sounding efforts were pretty pitiful, less than a drop in the bucket of the "misinformation" and "disinformation" that Americans themselves regularly produce. By invoking a foreign threat, Nix and Ellison distract readers from the central issue, which is whether democracy is better served by heavy-handed moderation that aims to shield social media users from false, misleading, and hateful speech or by the more freewheeling approach that Musk prefers. They think the answer is obvious, which is why they present their advocacy as straight news reporting.