Worldwide there are more than three billion smartphone users and more than three million confirmed cases of covid-19. These two figures seem unrelated, but the development of tracing apps could turn smartphone use into a powerful weapon against the spread of covid-19. Wouldn’t that be a wonderful thing?
People’s smartphones (could be made to) contain and disseminate a lot of information about them, which is potentially useful for reducing the spread of covid-19. This could be done in various ways. Given traditional liberal-democratic concerns regarding privacy, abuse of state powers, and voluntariness, probably the most benign model is that people are offered the opportunity to freely download a covid-19 tracing app. If they choose not to, they will suffer no state-imposed sanctions. If they choose to, they will receive an auto-generated text message whenever they have been physically close to another app user (whose identity is not revealed) who has entered into his or her app the information that he or she is infected.[1] The user can then, say, (again: freely choose to) self-isolate. If everyone chose to download the app and use it as intended, there would be no need for costly lockdowns, and the state would not be involved in monitoring people’s private lives.
Why not, then, restrict our use of covid-19 tracing apps to uses compatible with this benign model? The problem is that for this model to be really effective in fighting the pandemic, enough people must download the tracing app, not turn off their phones, and, if relevant, register as ill. In a recent simulation, “researchers at the University of Oxford found that 80 per cent of smartphone users in the UK would need to install a contact-tracing app in order for it to be effective in suppressing an epidemic”.[2] Evidence suggests that the 80 per cent target will be far beyond reach.[3] We could voluntarily act in such a way that the benign model was (almost) a solution to the problem. However, we won’t.
If the benign model doesn’t work, perhaps we should adopt a non-benign one? To fix the discussion, I introduce another toy model in addition to the benign one – call it the non-benign model.[4] To reach something like 80% coverage, the state either ensures that the relevant tracking device is automatically installed on people’s smartphones in connection with software updates, or makes it illegal not to download it. Additionally, the state renders it impossible for citizens to perform certain acts involving a risk of contagion, e.g., buying a train ticket, if they don’t have the app on their smartphone or if it classifies them as likely carriers of covid-19.[5] If you’re treated for covid-19, health care authorities confiscate your smartphone and enter information regarding your health status. They even use the information provided by your app to identify and contact those whom you have been in contact with and force them to take a covid-19 test.
You can imagine worse things from the perspective of the above-mentioned liberal-democratic concerns, but to many this non-benign model already looks pretty bad. First, it looks bad because, commonsensically, it seems to violate individual rights, e.g., rights to privacy and to non-interference with private property and one’s own person. Second, it looks bad because of how it clashes with views about what sort of society it is desirable to live in, independently of concerns about the violation of individual rights; e.g., you might think that even if the non-benign model did not violate individual rights, it would be likely to instill undesirable dispositions in citizens, such as lack of initiative, blind trust in authorities, etc.[6] My focus here is the first cause for concern, i.e., whether citizens can complain that the non-benign model violates their rights. While I myself will take no stand on this matter, I will consider two arguments suggesting that the non-benign model does not violate individual rights, given how we – or at any rate many of us – think about what else can be done in the interest of avoiding threats of harm without violating people’s rights.
Consider first widespread views on the moral qualities of forced quarantines such as those recently imposed on several cruise ships around the world. A common view is that it involves no violation of rights to forcibly quarantine everyone on infected ships in the interest of preventing the further spread of the disease.[7] Quarantine involves surveillance of people to prevent them from escaping. Even worse, it involves confining people to small rooms for weeks. No doubt the risk that any particular quarantined person is infected is considerably higher than the risk that any particular smartphone user is, and this speaks to the gravity of the sort of means that can permissibly be used to stop the disease.[8] But this difference in risk seems unable to explain why installing a covid-19 tracing app on smartphone users’ devices against their will violates their rights, while surveilling people to prevent them from escaping and confining them to small rooms for weeks does not, since the interference in the latter case appears much more severe than in the former.[9] Hence, if we oppose the non-benign model because it violates the rights of individual citizens, we should perhaps revise our view that forced quarantines do not (if, indeed, we initially hold the view that they don’t).
This last claim might be denied on the ground that passengers who flee quarantine and then socialize with others in unsafe ways are so-called culpable threats, whereas smartphone users who refuse to install covid-19 tracing apps are not. The former are threats because they are likely to infect others, and they are culpable (despite having innocently – or, if they are medical personnel, even heroically – contracted the disease) because they voluntarily choose to expose others to a risk of infection in order to spare themselves the lesser harm of quarantine.[10] While ordinary smartphone users impose a risk on others – albeit typically a much smaller one than in the former case – they are not culpable simply in virtue of going about their daily business, one might think. In short: ordinary smartphone users are threats in some minimal sense, but they are innocent threats. Or so it may seem. This takes us to the second argument that I would like to consider.
There is a large body of discussion in the literature on defense against innocent threats. In his classic discussion, the American philosopher Robert Nozick imagines a person who is unpredictably caught by a gust of wind and thrown down a well. At the bottom of the well there is another person, who will be crushed to death if hit; the airborne person, however, will survive unharmed. According to Nozick and many others, the person at the bottom of the well could, without violating the falling person’s rights, shoot this innocent threat if that would somehow save his or her own life.[11] This would be so even more clearly if the falling person had culpably ventured outside in stormy weather fully aware of the significant risk of being blown down one of the many “occupied” wells in the area.
A smartphone user who is innocently unaware that she might infect others – because she is innocently unaware both that she has covid-19 and that, relative to what she knows, she might pass it on to others – is an innocent threat to others, just like Nozick’s person who is picked up by the wind and thrown down a well. As noted, many believe that the person at the bottom of the well could kill the falling, innocent threat without violating his or her rights, if that would save his or her own life. If so, might not potential victims of covid-19 – that is, almost all of us – acting through the state, also forcibly install tracking software on smartphone users’ devices without thereby violating their rights, if that would prevent the transmission of the deadly disease, even if smartphone users without such apps are merely innocent threats?
This question seems even more worrisome when we reflect on whether a smartphone user who resists the installation of a covid-19 tracking device and who – as is presently true of most smartphone users – knows that there is some risk that he or she might pass on the disease to others really is an innocent, rather than a culpable, threat.[12] Consider such a non-self-isolating smartphone user who resists the installation of a tracing app despite the known risk to others this involves. This person is more like someone in Nozick’s well example who has culpably chosen to venture outside for no important reason, refusing to install on his or her smartphone a “wind gust-meter” app that would prevent him or her from being turned into a human missile and, thus, from posing a risk to others. Or like a non-self-isolating cruise ship passenger who escapes quarantine well aware of the risks he or she imposes on others.
My purpose here is not to embrace the non-benign model. After all, I have explicitly set aside concerns about the abuse of such devices, which bear importantly on the acceptability of that model. My purpose is not even to suggest that the non-benign model does not violate individual rights. Rather, it is to point to the possible tension between our moral views about self-defense against threats in philosophers’ toy examples such as Nozick’s and in real-life cases like quarantines, on the one hand, and our views about whether the non-benign model involves rights violations, on the other. If, as I have argued, there is such a tension, one way of addressing it is to revise our strict views about illiberal policies regarding tracing apps and their compatibility with individual rights. However, another way is to revise our lax views that quarantine and self-defense against innocent threats involve no violation of individual rights.[13] I suspect that, like me, many will see this as a difficult choice.
References
[1] They will do so even if there was a 30 cm concrete wall between them, but, arguably, this is a case where false positives are worse than false negatives (though that problem exists with present devices also).
[2] https://www.newscientist.com/article/2241041-there-are-many-reasons-why-covid-19-contact-tracing-apps-may-not-work/#ixzz6KzhcXohW
[3] https://qz.com/1842200/singapore-wants-everyone-to-download-covid-19-contact-tracing-apps/
[4] Jay Stanley and Jennifer Stisa Granick, “The Limits of Location Tracking in an Epidemic”, ACLU, April 8, 2020.
[5] https://abcnews.go.com/International/china-rolls-software-surveillance-covid-19-pandemic-alarming/story?id=70131355
[6] https://www.amnesty.org/download/Documents/POL3020812020ENGLISH.pdf; “Emergency Powers and Civil Liberties Report”, Big Brother Watch, April 2020: 71-75.
[7] See Tom Douglas’ excellent ‘Flouting quarantine’ on this blog.
[8] https://edition.cnn.com/asia/live-news/coronavirus-outbreak-03-08-20-intl-hnk/h_eea45446f3d9ff663e53f3cf93f21bd6
[9] As Romy Eskens pointed out to me, in terms of duration the interference involved in the non-benign tracking device model (months if not years) is more severe than the interference involved in the typical forced quarantine (two weeks). I suspect, however, that the relevant asymmetric assessment will not disappear if we equalize duration across the two cases.
[10] For reasons of space, I ignore the distinction between fact-relative and evidence-relative threats. If I aim an unloaded gun at you with the apparent intention of pulling the trigger, I am an evidence-relative threat (from your perspective) even if I am not one in the fact-relative sense.
[11] Robert Nozick, Anarchy, State, and Utopia (Oxford: Basil Blackwell, 1974), p. 34. I have slightly modified Nozick’s original example, in which it is a villain who throws the bystander into the well, to avoid the moral complication the presence of a culpable human agent gives rise to.
[12] Typically but not always, this threat is smaller in the case of smartphone users (though some smartphone users intend to engage in risky behavior covid-19-wise) than in the case of quarantine escapees (though some quarantined cruise passengers, say, as a result of seasickness, did not venture out of their cabins during the cruise anyway).
[13] I am grateful to Romy Eskens, William Lippert, and Kira Vrist Rønn for helpful comments.