Humans are fantasizers. One of the things we fantasize about a lot is sex.
Although puritanical types won’t approve, there’s much to be said for sexual fantasy (and its corollary, masturbation). It’s an easy source of pleasure. It quells powerful desires. It’s relaxing. It allows one to explore one’s sexuality. It staves off boredom.
Sexual fantasies aren’t always OK, of course. Context and content are important. For instance, it’s probably wrong for men to fantasize about rape and for teachers to fantasize about students, even if these fantasies never directly materialize into wrongdoing. Besides being inherently objectionable, such fantasies may encourage inappropriate attitudes, configure relationships in problematic ways, or reinforce bad social structures.
Still, it’s surely sometimes OK to indulge in sexual fantasies, even about real people without their consent. Arguably, you don’t harm or wrong, say, Dwayne Johnson (or the woman you pass in the supermarket, for that matter) simply by fantasizing about them without their consent.
But what about technologically augmented fantasy? When does the use of technology make an otherwise innocuous fantasy wrong?
This question is pressing given the advent of deepfake pornography, which is used to extend fantasy into the digital realm.
Deepfake pornography is a form of realistic-looking video pornography that uses artificial intelligence to depict sexually explicit acts involving people who didn’t actually participate in those acts. Deepfake pornography is already widespread on the internet. The most high-profile videos depict celebrities (usually women), although many depicting noncelebrities are in circulation. Moreover, if you have access to images or a short video of someone, you can create (or commission) deepfake pornography yourself. For example, if you had access to a fifteen-second video clip of me lecturing, you could pay someone a small fee to superimpose my face onto an existing pornographic video, making it look like I’m the one having sex.
This technology is only going to become more common in the coming years. So, it’s crucial that we investigate its moral status.
In my discussion, I’m going to ignore pornography that depicts acts that are inherently degrading, inegalitarian, or otherwise wrong since this sort of pornography is objectionable independently of its status as a deepfake. So, for example, I’m going to assume we’re not talking about pornography that depicts rape or minors. This will allow us to focus on distinctive issues associated with deepfakes.
Creating, consuming, and distributing deepfake pornography of someone with their permission is presumably morally unproblematic. But most deepfake pornography is created, consumed, or distributed without permission. So, we need to focus on the ethical issues associated with doing these things nonconsensually.
It seems undeniable that distributing deepfake pornography of someone without their consent is normally seriously wrong since doing so can cause them significant distress, damage their reputation, adversely affect their relationships, harm their career, impede their ability to participate in discourse, and distort their social identity and sense of self, among other things.
But even this issue isn’t totally clearcut. For one thing, distributing deepfake pornography doesn’t always produce these effects. Moreover, distributing pornography that will foreseeably produce these effects isn’t always obviously wrong for that reason. Suppose I have an indistinguishable look-alike named ‘Danny’ in the pornography business. Danny’s videos are not deepfakes and do not depict me, but people who watch Danny’s videos mistakenly think they depict me. Consequently, the videos affect me in the ways just mentioned. It’s not obvious that Danny is obligated to stop distributing his pornography if he knows this, or that he wrongs me by distributing it without my consent. This suggests that nonconsensually distributing deepfake pornography isn’t wrong simply because it produces the aforementioned effects. There seems to be some additional element that contributes to its wrongness, one that is missing in Danny’s case. What is it?
One thing you might say is that Danny has a right to make a living, and this explains why it’s permissible for him to make his videos. This consideration doesn’t seem decisive, however, since some people make a living by distributing deepfakes nonconsensually, and that’s wrong.
Unlike in the case of revenge pornography, we can’t straightforwardly appeal to privacy, since deepfake pornography doesn’t depict actual details of someone’s private life any more than a painting of someone on the toilet does.
On the other hand, like revenge pornography, deepfakes are sometimes distributed with malicious intent, and such intentions are criticizable. But nonconsensually distributing deepfakes seems wrong when it’s done without malice, too.
Deepfake pornography is often deceptive in that it convinces people that someone did things they didn’t do. This can make its distribution wrong. Yet deepfake pornography is often explicitly flagged as fake. This doesn’t seem to make its distribution OK. So, it seems there are further factors in the mix here.
One such factor is that nonconsensually distributing deepfake pornography can contribute to unjust or harmful social structures. For example, when a man distributes deepfake pornography of women nonconsensually, this can reinforce the idea that women can be treated as sexual objects. It’s not obvious that distributing deepfake pornography is always wrong for this sort of reason, though. For instance, distributing deepfake pornography depicting loving sexual interactions between men could actually have a positive social impact by, say, combating pernicious narratives about homosexual relationships.
Similar things can be said about the overall impact of distributing deepfake pornography from an impartial consequentialist point of view. Undoubtedly, the nonconsensual distribution of deepfake pornography usually produces more bad than good. But the disutility produced by nonconsensually distributing a deepfake can theoretically be outweighed by the utility produced in viewers. This wouldn’t make it OK. Again, there seems to be some other factor in the mix.
I suspect that the most fundamental morally relevant difference between distributing deepfake pornography of me and distributing Danny’s pornography is that the deepfake pornography depicts me while Danny’s pornography, despite appearances, does not. Suppose you take a picture of me. That picture will look like it depicts my look-alike. But it doesn’t. It depicts me because the camera was pointed at me; I, not my look-alike, am the causal origin of the image. Now, if you use that picture to make a deepfake, then that deepfake depicts me, because it’s based on an image that depicts me. But when Danny makes pornography, that pornography depicts not me but him, because he is the causal origin of the pornographic images (and he doesn’t intend to depict me or anything like that). This seems to make a difference, perhaps because, although we don’t have a right that others not distribute depictions that coincidentally resemble us, we do have a right not to be depicted in certain ways in certain social contexts. Just as it would be wrong for someone to nonconsensually distribute a stick figure illustration of me having sex to students in my classroom, so it’s wrong for someone to nonconsensually distribute deepfake pornography of me on the internet. These wrongs may be differentially harmful, but they seem to be wrongs of the same basic kind.
Note that this proposal aligns with our intuitions about depiction in other types of fiction. If, say, Law and Order depicts you committing heinous crimes you didn’t commit, then you are entitled to object. But if Law and Order depicts a character who coincidentally resembles you committing such crimes, you haven’t been wronged, no matter how irritating it may be.
Assuming this is right, nonconsensually distributing deepfake pornography of someone is wrong even if it isn’t harmful, malicious, deceptive, etc. It seems we generally have strong moral reasons to refrain from nonconsensually distributing deepfake pornography.
However, just because we shouldn’t nonconsensually distribute deepfake pornography doesn’t mean it’s wrong to nonconsensually create and privately consume it. I foresee a future where people routinely create personalized deepfake pornography depicting public figures, friends, and acquaintances without their consent using accessible software designed for private consumption.
Is this sort of individualized, private, technologically augmented fantasy OK?
Scholars have noted that creating and privately consuming deepfake pornography of someone without their consent is in many ways analogous to mentally fantasizing about someone without their consent. If mental fantasy is OK (even sometimes good) so long as you keep it to yourself, perhaps creating and consuming deepfake pornography is OK so long as you don’t distribute it and you ensure, via encryption, that it cannot possibly be stolen. You might even argue that personalized deepfake pornography levels the pleasure playing field by enabling people who can’t voluntarily produce mental imagery to experience the fruits of fantasy that are naturally accessible to everyone else.
Is there any morally relevant difference between fantasizing and privately creating and using personalized deepfake pornography?
For most people, visual perceptions are more forceful and vivacious than imagination. You might argue that mental fantasy is OK because it involves hazy and unconvincing imagery, whereas personalized deepfake pornography is unacceptable because, phenomenologically speaking, it looks real. This line of response may be promising, but it could implausibly imply that it’s wrong for people with unusually vivid mental imagery to fantasize.
Another line of response turns on the fact that creating personalized deepfake pornography requires significantly more effort than conjuring sexually explicit mental images. Sometimes we think it's vicious to expend effort to do something that it would be OK to do if it required no effort. For example, there’s nothing depraved about rubbernecking at a car crash you happen to pass on the road, but intuitively there is something depraved about seeking out crashes to gawk at. Perhaps sexual fantasy is the same way. Even so, this objection probably has a shelf life because creating deepfake pornography may soon require little more than a quick Google search and a few mouse clicks.
A weightier line of response points to how personalized deepfake pornography might contribute to unjust or harmful social structures or, on an individual level, vicious attitudes or dispositions. If personalized deepfake pornography generally produces these effects, then it’s generally objectionable. This can generate a moral reason against creating and consuming it, at least absent any special reason for thinking it won’t produce these effects in one’s own case. However, whether personalized deepfake pornography (or conventional pornography, for that matter) has these effects is an open empirical question, which cannot be settled from the armchair. Studies are needed to determine its impact.
The upshot is that this issue is not clearcut. It’s possible that nonconsensually creating and privately consuming deepfake pornography falls within the category of the suberogatory, meaning it’s bad but not strictly speaking forbidden. There’s also a chance that using deepfake pornography in this way is straightforwardly impermissible, perhaps for reasons I haven’t mentioned.
Given this uncertainty, arguably the conscientious person should err on the side of caution and refrain from privately creating and using deepfake pornography. Then again, this might be excessively moralistic. Maybe we should not categorically deny ourselves the special pleasures associated with private deepfakes unless we can identify a strong case against them.
It’s only natural that we humans should want to augment our fantasies through technology. Although there’s nothing inherently wrong with this, we all have an obligation to fantasize responsibly. Deepfake pornography may—or may not—violate this obligation.
Acknowledgments
Thanks to Romy Eskens and Sam Zahn for very helpful comments on this piece. I owe special intellectual debts to Ryan Jenkins and Catelynn Kenner for rich discussion and assistance in the process of developing these ideas.