Picture Perfect Dishonesty

A few months ago, the /r/crossdressing subreddit took the fairly bold step of effectively banning “morphed” photos. The exact definition of morphed leaves some room for interpretation, but it refers mostly to photos processed by mobile apps like Snapchat and FaceApp, although it can apply to any photo which is too heavily software-transformed. Technically, morphed photos are still permitted, but are now relegated to a weekly comment thread, which in practice gets multiple orders of magnitude fewer eyeballs and correspondingly less karma 1.

I have no insight into the discussions that led to this decision by the subreddit’s moderators, but I imagine it had to do with the fact that before the ban, the subreddit was getting to a point where every one of its top submissions was morphed. Content with light or no editing was consistently relegated to the bottom of the heap.

Morphed photos vary somewhat in nature, but very often they’re more or less entirely computer generated – a man’s face run through FaceApp to be turned into an AI-generated composite of a woman’s, with only the barest traces of the original used as input. But because the end result was generated from the inputs of thousands of beautiful (genetic) women, it also ended up beautiful. Here’s a photo of my guy persona run through FaceApp with gender flip and some de-aging applied:

FaceApp sample
Cute right? I know :) Unfortunately, she doesn’t look too much like me.

The subreddit’s first order problem was the presence of a lot of AI-generated content, but that paled compared to its second order problem – there were fake photos everywhere, everyone knew it, and no one cared. If morphed content was present but not upvoted, it’d be one thing, but 99% synthetic FaceApp content was consistently being upvoted to top positions on a daily basis. Guys would snap a selfie, send it through FaceApp, and become beautiful women – all in three seconds flat. No fussing about with hair, makeup, feminine features, or any of the inconvenient details more difficult than pressing a filter button.

A big part of the reason was that the AI is quite good. But that in itself wasn’t an adequate explanation, because many very overtly fake photos would find their way to the top. Users seemed to be looking for just enough plausible deniability, in that it wasn’t a full computer render or a photo of a genetic girl passed off as a crossdresser, but only just enough – any level of fake up to that point was acceptable.

Misleading photography is of course not a new phenomenon – techniques for deceptive photos are as old as the art itself. They’ve improved with technology and their total ubiquity is a modern development, but they’ve been possible since the day we started exposing film to light.

From an older, more civilized digital age, a common misleading technique was the “MySpace photo”. Users would take a low resolution photo of themselves at an angle designed to make them look more attractive than they really were (and most often, quite a bit thinner). These were also often overexposed to blast away any inconvenient blemishes or imperfections. MySpace photos were so common that they became a meme worldwide.

MySpace photo
(This is me in something close to a MySpace photo, except not for MySpace. High angle, low resolution baby!)

Before that, we had Photoshop (and still have it). With varying effectiveness it could be used for anything from changing a subject’s identity to adding Nessie or a UFO.

And before any computer, we had simple, classic camera techniques. If you want a good photo of the Pyramids in Egypt, make sure to head to the front of the crowd so that it looks like it’s just you and the sand – minus the thousand other tourists there with you. Shoot the side of the beach without all the garbage on it, or do it at low tide below the garbage line. Take photos at 5 AM before everyone else arrives. Crop liberally.

The difference now is the ease with which it’s possible to produce a fake. It’s no longer necessary to develop a photographer’s eye or technical skill, or even get good with Photoshop – all you have to do is click a button. And a lower barrier to entry leads to a very predictable result: more fakes, and more people making them.

Distinguishing a fake photo is already so difficult that users subvert /r/crossdressing’s ban on them daily, and many fakes make it close to the top before a moderator intercepts them. This is pretty definitive proof that differentiating a fake photo from a real one is already out of reach for most people.

It is possible though, by looking for certain telling traces left by the software – soft hairlines and visual artifacts, for example (here’s a good post on how to spot fakes). These days I’m pretty good at looking for the tells in a fake image, but there are already some that I’m not completely sure about, and as technology continues to improve, we can only expect detection to get more difficult.

We should also expect these techniques to become available in real time. It might soon be possible to video call a person who’s using an app to transform themselves from boy to girl – one that, similar to Apple’s Memoji, convincingly mimics its user’s facial expressions and reflects them onto their female persona. The only step left would be to correct for voice, and that technology probably already exists.

And a truth that I personally find uncomfortable is that fake images might just be enough for most people. Before /r/crossdressing’s ban, they were unquestionably some of the most popular images around. The fakes were good, but not that good – in most cases the users upvoting them knew what they were doing.

It showed us that many people have no qualms about admiring and upvoting an altered photo. I find this a little surprising because if you know it’s generated by a computer anyway, why not just admire a photo of Lara Croft or Tifa Lockhart in glorious 4k ultra high definition? They’re more attractive, and absolutely perfect down to the most minute detail.

Tifa Lockhart from FF7 remake
Tifa Lockhart from FF7 Remake. Obviously this image is entirely rendered by a computer, so ... about the same as FaceApp.

I believe the answer is a deeply set human desire to bond with other humans, even if only a little. The simple fact that fake images still have a morsel of another person on the other side of them is enough for their viewers to feel like they’ve made a connection to another human being. It may be a small one, but still enough to satisfy an instinctual need.

There’s something poetic about that, but also something worryingly dystopian. Reaching out across the internet for a brief connection with a stranger is fundamentally easier and more comfortable than meeting someone in real life, but the quality of the relationship is lessened by orders of magnitude. The internet’s taught us that the former is adequate, and although we can expect it to produce many more brief, fleeting connections between people, those will likely come at the cost of fewer that are deep, meaningful, and lasting.

1 Less karma turns out to be an important thing in practice. It doesn’t make much sense logically, but people really do care about their internet points.

October 17, 2020 (3 years ago) by Frey·ja