On April 14th, 2022, the industrialist Elon Musk expressed his apparent intention to buy Twitter outright, a few days after purchasing a 9.2% stake in the company. In a letter to the Chairman of Twitter’s board, Musk made the following argument for buying the company:
“I invested in Twitter as I believe in its potential to be the platform for free speech around the globe, and I believe free speech is a societal imperative for a functioning democracy.
However, since making my investment I now realize the company will neither thrive nor serve this societal imperative in its current form. Twitter needs to be transformed as a private company.”
Musk’s offer may not be genuine, but this argument taps into a widely shared sense that something is deeply wrong with social media sites as they currently exist. Passing swiftly over Musk’s free speech absolutism, there is a cornucopia of possibilities for reimagining social media: platform socialism, small social media, decentralised social media, public ownership, collective blocklists, and a range of incremental fixes designed to support the online public sphere. This sense of the problems with social media is also an important part of the sales pitch for web3 companies and products—blockchains, non-fungible tokens, and their cousins—which offer a new kind of online existence in which users retain control and ownership of their data and digital labour.
What kinds of contributions might philosophers make to debates about the future of the internet? Philosophers are good at dealing with at least two things: mapping out complexity, and clarifying arguments and concepts. Both resources are in short supply in these debates.
Many sites have made a big deal of their efforts to ‘fight fake news’, either removing certain kinds of undesirable content or making it less salient. On one way of disambiguating ‘fake news’ and ‘misinformation’ talk, these efforts are aimed at reducing the number of posts on social media which assert or implicate false claims. Philosophers can remind us of the complexity of our intellectual lives. Our goal is not simply to reduce the number of false beliefs we have, but to balance the twin goals of minimising false beliefs and maximising true beliefs about questions of interest. In coverage of information pollution, we hear very little about the effects of these interventions on the dissemination of true beliefs.
It is also common for companies to construct their reform efforts around vague ideals, much as Musk does in his appeal to ‘free speech’. Twitter has framed its efforts to improve its site with the metaphor of ‘conversational health’. Conversational health sounds lovely, but it is a metaphor in dire need of explanation. In a post unpacking the notion, Jack Dorsey (Twitter’s CEO at the time) suggested the following metrics (taken from work by the social enterprise Cortico):
Shared Attention: Is there overlap in what we are talking about?
Shared Reality: Are we using the same facts?
Variety: Are we exposed to different opinions grounded in shared reality?
Receptivity: Are we open, civil, and listening to different opinions?
These are all desirable properties of public conversation, but as an operationalisation of the concept of conversational health, they will get us into trouble. Plausibly, conversations between people who share a rich background of beliefs will score highly on shared attention and shared reality, and low on variety and receptivity, whereas conversations between people with significant disagreements will score highly on variety and receptivity, but low on shared attention and shared reality. Bundling these four properties together leaves us unable to appreciate the role for both kinds of conversation in good public discourse.
In this context, a vital role for philosophers (working with sociologists, media theorists, and other researchers) is to get clear on the conceptual terrain: which values we ought to be making central in our social media imaginaries, and how these values interact and come into tension. In this blog post (which is a tl;dr of a much longer and more thorough handbook article), I want to restrict my attention considerably, focusing on the way in which the specifically epistemic (truth- and knowledge-focused) aims of social media sites interact with one another, and laying some groundwork for thinking about how the epistemic goals of social media come into conflict. How these values interact with political values—free speech, the minimisation of harm, privacy—is a difficult question for another day.
As we’ve seen in the examples above, companies tend to pick one epistemic desideratum and focus on changes to a site which would improve that property (minimising the number of false beliefs users have, for example). To correct this tendency, let’s consider the epistemic goals of social media sites in full generality. Social epistemologists have proposed three kinds of epistemic goals for institutions:
1. Promoting good epistemic states for people who are served by that institution (true beliefs, knowledge, understanding);
2. Realising epistemically good institutional properties (reliably reaching consensus, open discourse in which all people served by the institution contribute on an equal footing);
3. Respecting the epistemic agency of all people served by the institution, and protecting the epistemic interests of minority groups (ensuring that minority groups are not subject to harmful misrepresentations, or otherwise excluded from participating in the institution’s services).
Let’s call the first kind of value individual epistemic value, the second institutional epistemic value, and the third epistemic justice. These goals are fully general: one might find them in discussions of jury design, voting systems, or the social incentive structure of science.
Focusing in on social media sites and other internet-based institutions, we can find these different kinds of values lurking beneath the surface of social media criticism. Our concern with misinformation illustrates the desire for good individual outcomes; worries about polarisation and echo chambers can easily be framed as concerns about bad institutional properties; and concerns about algorithmic bias and the systematic misrepresentation of minority groups (for example by Google Search) demonstrate the importance of epistemic justice in the design of online institutions. Although there have been attempts to reduce these epistemic goals to one basic value (see Alvin Goldman’s Knowledge in a Social World), none have been particularly successful, so I will assume that they are all important epistemic design principles for institutions and social practices.
Conceptual clarity about the proper epistemic goals of internet institutions is useful for two reasons. First, having clear goals makes it clear what the designers and maintainers of social media sites ought to be aiming for, making it more difficult to obfuscate by using vague terms like ‘conversational health’. And second, it allows us to understand how the epistemic goals of institutions might come into conflict. Mapping out these conflicts is a little complex, so I’ll focus on illustrating them, rather than showing why the conflicts systematically occur.
Start with the conflict between promoting good epistemic states for the people served by an institution, and realising good epistemic properties at the institutional level. To see how these goals can come into conflict, let’s consider some mathematical results which concern the modelling of epistemic communities. The Condorcet Jury theorem states that in a community of people voting on a yes/no question, when all votes are independent and voters on average get things right more often than not, adding more people to the group increases the probability that the outcome of the vote will be correct, tending towards perfect reliability. This theorem is often used to motivate the use of large groups of people to address factual questions—the idea being that the group is more reliable than its average members—but it also contains a lesson about the value of isolation. If individuals are connected to one another and able to access their peers’ opinions, then the possibility of information cascades emerges. In this situation, people who share information early, or who are highly connected, have an outsized impact on the collective position, and the group will be less reliable than it would have been if everyone was isolated.
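For readers who want the formal statement, here is one standard formalisation of the theorem (my gloss, not part of the original post): for an odd number n of voters, each voting independently and correctly with probability p, the probability that the majority verdict is correct is

```latex
% Reliability of majority voting among n (odd) independent voters,
% each correct with probability p:
P_n = \sum_{k=(n+1)/2}^{n} \binom{n}{k}\, p^{k} (1-p)^{n-k},
\qquad p > \tfrac{1}{2} \implies \lim_{n \to \infty} P_n = 1.
```

For instance, with p = 0.6, P_11 is roughly 0.75 while P_101 is roughly 0.98: a modestly competent but independent crowd quickly becomes very reliable.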
Why is the Jury theorem relevant to the design of social media sites? Consider a case in which the users of a social media site meet the assumptions of the theorem with respect to some set of questions: their judgements are independent, and each is on average more likely to get the answers right than wrong. How should they be connected to one another? Purely for epistemic reasons, each user may well want to be connected to as many others as possible, to maximise her available evidence and her chance of getting the right answer. Although this highly connected network might be good for each individual, it would be bad for the collective: the more connected the network is, the more likely it is to be subject to information cascades, making it less likely to reach a correct consensus on answers to the questions. From the point of view of collective reliability, it might be best to isolate users from one another, perhaps voting without discussion. (We can see this kind of rationale at work in the use of betting in constructing prediction markets for political events.) The social dilemma posed by the degree of connectedness adds an extra dimension to the decision of sites like Facebook to ensure that people are connected to as many other people as possible. There’s at least a possibility that in encouraging users to become highly connected, the site was sacrificing the collective epistemic good to benefit individual users.[1]
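To see the dilemma concretely, here is a toy simulation (a sketch of my own, with a deliberately crude cascade rule; it is not a model drawn from the post or the literature it cites). Isolated voters answer independently, while in the connected group everyone after a small initial set of voters simply copies the running majority, a simple stand-in for an information cascade.

```python
import random

def independent_vote(n, p):
    """n isolated voters, each correct with probability p.
    Returns True if a strict majority votes correctly."""
    correct = sum(random.random() < p for _ in range(n))
    return correct > n / 2

def cascade_vote(n, p, k=5):
    """A crude cascade model: the first k voters answer independently;
    every later voter copies the current majority of earlier votes,
    falling back on a private signal only in the event of a tie."""
    correct = sum(random.random() < p for _ in range(k))
    total = k
    for _ in range(n - k):
        if correct * 2 > total:        # majority so far is correct: copy it
            correct += 1
        elif correct * 2 == total:     # tie: use an independent private signal
            correct += random.random() < p
        # otherwise the majority so far is wrong, so the copied vote is wrong
        total += 1
    return correct > n / 2

def reliability(vote_fn, trials=2000, **kwargs):
    """Estimate the probability that the group verdict is correct."""
    return sum(vote_fn(**kwargs) for _ in range(trials)) / trials

for n in (11, 101, 1001):
    print(n,
          round(reliability(independent_vote, n=n, p=0.6), 2),
          round(reliability(cascade_vote, n=n, p=0.6), 2))
```

With p = 0.6, the isolated group’s reliability climbs towards 1 as the group grows (roughly 0.75, 0.98, and 1.0), while the copying group stays stuck near the reliability of its first five voters (about 0.68) however large it gets: the extra members add no new information to the collective verdict.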
Next, consider the tension between good institutional properties and the goal of realising epistemic justice. A great deal of social media critique has focused on the idea that social media sites ought to resemble the ideal of the unified and open public sphere, in which people contribute to a common conversation on the basis of equal standing. Although this ideal is attractive in the abstract, in societies like ours—characterised by histories of oppression and enormous material inequalities—these institutional structures can actually undermine epistemic justice. This point is made most clearly by Nancy Fraser in her famous paper ‘Rethinking the Public Sphere’, in which she argues that in societies like ours, minority and oppressed groups are subject to informal exclusion and marginalisation in public debate. In Fraser’s view, what these groups need is not inclusion in the public sphere on grounds of apparent equality, but semi-autonomous subaltern counterpublics, in which minority and oppressed groups can develop conceptual resources and arguments which would be drowned out in the public sphere. The relevance of counterpublics for the design of social media is easy to see, and there is a rich media studies literature on online counterpublics.
Finally, consider the way in which good outcomes for individuals and epistemic justice might come into conflict. Karen Frost-Arnold discusses the dilemmas involved in online accountability practices. There is a decent case to be made that, in general, making users of a social media site more accountable will have a positive impact on individual-level epistemic outcomes, making false beliefs less likely to spread because of the higher cost of getting things wrong. However, this general positive effect can come along with more specific negative effects. Making users more accountable—say by removing anonymity, or by banning or suspending accounts which post false claims—will plausibly have an outsized effect on the speech of minority groups, whose speech is already subject to higher social costs if they get things wrong. It’s easy to imagine cases in which accountability practices produce a net increase in good epistemic outcomes, whilst making it less likely that true claims about the situation of minority groups are shared outside of those groups.
If we want to do better in imagining—and hopefully in building—a better internet, we need to appreciate the dilemmas and compromises which are involved in designing social institutions. Architects (even modernists) don’t design by considering only one value: they need to weigh aesthetics, utility, accessibility, and the possibility of re-use and redesign. A social media site designed only around the value of free speech—say—fails to reckon with the rich human messiness of the people who will use that site over time. To put the point a different way, the design of internet institutions is political, and rather than submerging these political choices, we need to bring them to the surface and make clear which compromises and decisions are being made. Although we might have wanted to isolate our pursuit of truth from political values, these dilemmas illustrate the political dimensions of epistemic institution design.
Notes
[1] For a much more in-depth discussion of this kind of network modelling, and the importance of network structure to misinformation on social media sites, see Cailin O’Connor and James Weatherall’s book The Misinformation Age, and for discussion of the separateness of individual and institutional evaluation see this paper by Conor Mayo-Wilson, Kevin Zollman, and David Danks.
Joshua Habgood-Coote is a research fellow on the GROUNDS project at the school of Philosophy, Religion, and History of Science at the University of Leeds.
Disclaimer: Any views or opinions expressed on The Public Ethics Blog are solely those of the post author(s) and not The Stockholm Centre for the Ethics of War and Peace, Stockholm University, the Wallenberg Foundation, or the staff of those organisations.