In May, researchers at IE University in Madrid found that 51% of Europeans support replacing at least some of their politicians with artificial intelligence. Chinese citizens are even more bullish about AI-driven governance, with 75% of Chinese respondents indicating support for this proposal. Skepticism about the role of AI in politics finds a wider audience in the US and UK, where only 40% and 31% respectively approve of this idea.
Which position on this question is more reasonable? Of course, much depends on how we specify the proposal. One option is to allow algorithms to run for office. Indeed, a chatbot named Alisa ran against Vladimir Putin for the Russian presidency in 2018, and a bot named SAM appears to be attempting something similar in New Zealand. Another possibility, proposed by data scientist César Hidalgo, is to grant each citizen a personal bot that votes directly on legislation, thus reducing or even bypassing the role of human representatives.
These proposals represent different ways of altering or augmenting democracy. But a more extreme proposal is to replace democratic processes entirely with sophisticated algorithms, resulting in an alternative political regime that we might call algocracy. (I borrow the term from John Danaher, who uses it slightly differently.) Different visions of algocracy, both utopian and dystopian, appear in books by Hiroki Azuma, Yuval Noah Harari, and Jamie Susskind. The basic idea is that an algorithm would sift through the vast troves of data that we create throughout our daily lives—the traffic patterns tracked by our personal devices, the purchase decisions we make online, our search and social media activity, and so on. It would use these data to infer citizens’ preferences about political questions and aggregate these preferences to impute a “general will” to the population in question. In addition to analyzing citizens’ political preferences, the algorithm would simultaneously be crunching all the available data on economic, social, and environmental indicators to form beliefs about the world. It would then legislate on the basis of these estimates of popular preferences and factual states.
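To make the aggregation step described above more concrete, here is a toy sketch in Python. It imagines that per-citizen preference scores over policy options have already been inferred from behavioral data, and imputes a "general will" by simple averaging. Everything here is hypothetical and deliberately simplistic: the function name, the policy options, and the choice of averaging as the aggregation rule are my own illustrative assumptions, not features of any actual proposal.

```python
# Toy sketch of the aggregation step of a hypothetical "algocracy".
# Each citizen's inferred preferences are scores in [0, 1] over policy options;
# the "general will" is imputed here by simple averaging (one toy choice
# among many possible aggregation rules).

def impute_general_will(inferred_preferences):
    """Average per-citizen scores for each policy option and rank the options."""
    totals = {}
    for citizen_scores in inferred_preferences:
        for option, score in citizen_scores.items():
            totals[option] = totals.get(option, 0.0) + score
    n = len(inferred_preferences)
    averages = {option: total / n for option, total in totals.items()}
    # Highest average score first.
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)

# Hypothetical preferences "inferred" from three citizens' behavioral data.
citizens = [
    {"expand_transit": 0.9, "cut_fuel_tax": 0.2},
    {"expand_transit": 0.6, "cut_fuel_tax": 0.7},
    {"expand_transit": 0.3, "cut_fuel_tax": 0.8},
]

ranking = impute_general_will(citizens)
```

Even this trivial sketch surfaces the philosophical questions at stake: whether behaviorally inferred scores track citizens' considered values at all, and which aggregation rule, if any, could legitimately be said to capture a "general will."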
The technology needed to fully automate political decision-making doesn’t yet exist, and perhaps it never will. But I believe extreme scenarios like algocracy are worth taking seriously. Analyzing scenarios like this one is helpful for clarifying what justifies processes of political decision-making and how we should evaluate different technological options for reforming current political systems.
One area of significant recent debate in political philosophy concerns the justification of democracy. Broadly defined, democracy is a rule-making system in which individuals decide matters collectively and on equal terms. Participants in this debate tend to agree that a well-functioning democracy is the most legitimate way of making coercively binding decisions: citizens governed by democratic processes have stronger reasons to comply with laws than citizens governed by alternative processes. But they disagree about the basis for this judgment.
Instrumentalists hold that democracy’s value hinges on its tendency to generate superior outcomes. Over time, democracies tend to be more stable, more prosperous, and more just. For many versions of this position, democracy is better than other decision-making processes because it can better extract and filter information. For noninstrumentalists, meanwhile, democracy is valuable apart from its consequences. The most influential recent version of this position holds that democracy is valuable because it is part of an ideal of social equality. Alternative systems—such as monarchies, aristocracies, and oligarchies—involve granting people different amounts of political power. This reflects the view that some people are inherently wiser or worthier of consideration than others. By rejecting inequalities in political power, democratic political processes instead affirm a commitment to a society marked by the absence of social hierarchy and subordination. Democracy is valuable as an important part of an ideal of living among social equals.
Much more could be said about the details and merits of these different positions. What I find interesting, however, is that the prospect of algocracy seems to put both positions on the defensive. On instrumentalist views of the justification of political authority, the regime to prefer is simply the one that gets the best outcomes. If algocracy proves to be superior to democracy at generating good outcomes, the consistent instrumentalist must concede that algocracy is the superior political regime. Meanwhile, the noninstrumentalist’s objection to nondemocratic systems stems from the fact that these systems allow some people to rule over others. The algocrat, however, is not a human person or someone with whom we have ongoing social relations. So, if algocracy gets better outcomes and accomplishes this feat without causing social inequalities, it’s not clear why democracy should still enjoy our allegiance.
One response to these observations is to conclude that the case for democracy’s justification is weaker than many have thought. If and when the technology for superseding democracy’s limitations becomes available, we ought to implement it with all deliberate speed. Democracy, according to this perspective, may soon become technically obsolete. Another response to these observations is to conclude that democracy is not obsolete but that the arguments in its support need an “upgrade.” My own sympathies lie with the latter position, but I find it difficult to say exactly why.
What would it take to upgrade democracy’s justification for the age of artificial intelligence? Certainly, there may be technical or instrumental reasons to consider. Two major challenges with AI today are bias (that algorithms reproduce and intensify biases in their underlying data and reflect the prejudices of their designers) and inscrutability (that algorithmic decisions are too complex for people to understand and verify). Perhaps democracy can be shown superior to AI because it isn’t susceptible to these problems. But I think we would do well to consider other elements of the democratic ideal that haven’t featured as prominently in recent debates. Perhaps democracy’s value has more to do with its recognition of human agency. As I have described it, algocracy relies on extracting preferences through our consumption behavior. But the impulses and inclinations revealed in our behavior are often poor indicators of our values. Democratic decision-making allows us to evaluate our preferences and determine which options rest on stronger reasons. Democracy might be valuable because it allows us to be ruled by our autonomous judgments rather than our unconscious behavior. Or, perhaps democracy’s value has more to do with public deliberation—that we don’t form judgments in a vacuum but instead seek to clarify these judgments through the public exchange of reasons.
Upgrading democracy’s justification for the age of artificial intelligence would also help guide us on ways of using AI to improve democratic processes or close democratic deficits. If democracy is valuable in part because it supports our autonomous judgments, we might explore technological innovations that help us improve the quality of these judgments. If democracy is valuable in part because it involves public deliberation, we might explore technological solutions that improve the quality of deliberative processes or make them more widely accessible.
Developments in artificial intelligence will increasingly confront us with ways of altering or revolutionizing political decision-making. What is clear, at least, is that we can’t make well-reasoned choices about these opportunities without deeper reflection on the values at stake, and philosophical literature on these questions can be a helpful guide. To serve this purpose, however, philosophers will need to grapple seriously with the possibilities that new technologies create.
Disclaimer: Any views or opinions expressed on The Public Ethics Blog are solely those of the post author(s) and not The Stockholm Centre for the Ethics of War and Peace, Stockholm University, the Wallenberg Foundation, or the staff of those organisations.