The end of the reality privilege

As a general rule, the lower the threshold for creating fake information, the more fake information there will be, all else being equal. For example, if you see a photo of your house on fire, you can fairly safely judge that it's real, since faking it convincingly would take advanced CGI work. If you only receive a printed letter claiming that your house is on fire, your certainty is much lower, since letters are easy to fake.

Rapid advances in generative AI models are quickly lowering this threshold. It'll become harder and harder to tell what is real and what is fake. We are nearing the end of the “reality privilege” – the ability to consume information and reasonably judge its authenticity.

Ten years ago, fake information online was limited. Today, you can talk to chatbots that already pass the Turing test [1], and get stylistically accurate images and even 3D objects from text prompts. Where does this go from here? There's growing consensus in the AI community that – at the risk of oversimplification – once the model architecture is good enough, all you need is more training and data [2]. If anything, this trend will only accelerate. Fully photorealistic images, speech, music, video and more will come. It'll be possible to see footage of a politician delivering a speech they never actually gave.

The possibilities are wide, but also concerning, as humans tend to believe something more the more often they're exposed to it. Beliefs, opinions and knowledge in society will grow even more fragmented than they already are today. News reporting, evidence gathering in private disputes and even police investigations will become murkier. In general, it'll be harder to prove things.

All of this may mark the early 2020s as the end of the reality privilege era – and force us to come up with new ways of judging authenticity, as we won't be able to believe our eyes anymore.
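To make “new ways of judging authenticity” a bit more concrete: one direction that gets discussed (in the spirit of content-provenance efforts such as C2PA) is to cryptographically sign content at the moment of capture or publication, so that consumers can at least verify that the bytes haven't changed since a trusted party signed them. The sketch below is purely illustrative, not a description of any particular system: it assumes the Python cryptography package, uses placeholder bytes instead of a real photo, and skips the genuinely hard parts around key distribution and trust.

    # Illustrative sketch: sign content at creation, verify it later.
    # Assumes the "cryptography" package; trust/key distribution is out of scope.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # The capturing device or publisher holds the private key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Placeholder for the raw media bytes (in practice: the image/video file).
    photo_bytes = b"<raw image bytes would go here>"
    signature = private_key.sign(photo_bytes)  # attached as metadata alongside the file

    # A consumer who trusts this public key checks authenticity.
    try:
        public_key.verify(signature, photo_bytes)
        print("Signature valid: bytes unchanged since signing.")
    except InvalidSignature:
        print("Signature invalid: altered, or never signed by this key.")

Even then, a valid signature only proves that a specific key signed specific bytes – who controls that key, and whether the capture itself was genuine, remain open problems.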

Edit Apr 5, 2023: Early signs of this are appearing faster than I anticipated.

Notes

[1] For the unfamiliar: the Turing test measures a computer's ability to exhibit behavior indistinguishable from that of a real human being.

[2] Also see the universal approximation theorem, which states that a feedforward neural network with a single hidden layer – given enough hidden units – can approximate any continuous function on a compact domain to any desired accuracy. Loosely speaking, it's possible to solve an arbitrarily complex problem with neural nets, provided it can be modeled digitally and there's enough compute available.
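As a small numerical illustration of what the theorem is getting at (a sketch assuming only numpy – the target function, the layer width and the shortcut of fitting just the output weights by least squares are arbitrary choices for the demo, not part of the theorem): a single hidden layer of tanh units, made wide enough, can closely approximate a continuous function on a compact interval.

    # Sketch: approximate a continuous function with one hidden layer of tanh units.
    import numpy as np

    rng = np.random.default_rng(0)

    # A continuous target function on a compact interval.
    x = np.linspace(-np.pi, np.pi, 500).reshape(-1, 1)
    y = np.sin(3 * x) + 0.5 * np.cos(x)

    width = 200                                 # number of hidden units
    W1 = rng.normal(0.0, 2.0, (1, width))       # random input weights
    b1 = rng.uniform(-3.0, 3.0, width)          # random biases

    H = np.tanh(x @ W1 + b1)                    # hidden activations, shape (500, width)
    W2, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit only the output weights

    approx = H @ W2
    print("max abs error:", float(np.abs(approx - y).max()))

Increasing the width drives the error down further, which is the practical face of the theorem; it says nothing about how hard such weights are to find by actual training, though.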


December 31, 2022