The internet's latest obsession is Kate Middleton, specifically her whereabouts following an unexpected January surgery. Despite an initial statement that the princess wouldn't resume duties until Easter, the world couldn't stop speculating and theorizing about Kate's health and the state of her marriage to Prince William. It didn't help, of course, that the only images of the princess made public since then have been, let's say, less than definitive: grainy photos taken from afar, and, of course, an infamous family photo that was later discovered to be manipulated. (A post attributed to Kate Middleton later appeared on X, formerly Twitter, apologizing for the edited photo.)
Finally, on Monday, The Sun published a video of Kate and William walking through a farm shop, which should have put the matter to bed. Instead, the video did little to assuage the most fervent conspiracy theorists, who argue the footage is too low-quality to confirm whether the woman walking is really the princess.
In fact, some go so far as to suggest that what we can see proves this isn't Kate Middleton. To settle it, they've turned to AI photo-enhancement software to sharpen the pixelated frames of the video and discover once and for all who was walking with the future King of England:
[Embedded post from X: an AI-'enhanced' version of the farm shop footage. The embed is no longer available.]
There you have it, people: This woman is not Kate Middleton. It's...one of these three. Case closed! Or, wait, this is actually the woman in the video:
[Embedded post from X: another AI-'enhanced' take on the same footage. The embed is no longer available.]
Er, maybe not. Jeez, these results aren't consistent at all.
That's because these AI "enhancement" programs aren't doing what these users think they're doing. None of the results prove that the woman in the video isn't Kate Middleton. All they prove is that AI can't tell you what a pixelated person actually looks like.
I don't necessarily blame anyone who thinks AI has this power. After all, we've seen AI image and video generators do extraordinary things over the past year or so: If something like Midjourney can render a realistic landscape in seconds, and OpenAI's Sora can produce a realistic video of nonexistent puppies playing in the snow, why couldn't a program sharpen up a blurry image and show us who's really behind those pixels?
AI is only as good as the information it has
See, when you ask an AI program to "enhance" a blurry photo (or to generate extra parts of an image, for that matter), what you're really asking it to do is add information to the photo. Digital images, after all, are just 1s and 0s, and showing more detail in someone's face requires more information. But AI can't look at a blurry face and, through sheer computational power, "know" who is really there. All it can do is take the information it has and guess what should be there.
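If you want to see just how little information a tiny image carries, here's a minimal sketch in Python using Pillow and NumPy (the filename face.jpg is a placeholder for any portrait photo). It shrinks an image to a 16x16 thumbnail, scales it back up with the best classical interpolation available, and measures how far the result is from the original. The detail doesn't come back, because it's simply no longer in the pixels; an AI upscaler papers over that gap by inventing detail.

```python
import numpy as np
from PIL import Image

# Any portrait photo; "face.jpg" is a placeholder filename.
original = Image.open("face.jpg").convert("RGB")

# Simulate a grainy, distant shot: shrink the image to a 16x16 thumbnail.
tiny = original.resize((16, 16), Image.LANCZOS)

# "Enhance" it back to full size with high-quality classical interpolation.
enhanced = tiny.resize(original.size, Image.LANCZOS)

# Measure how much detail was lost along the way.
a = np.asarray(original, dtype=np.float32)
b = np.asarray(enhanced, dtype=np.float32)
print(f"mean per-pixel error: {np.abs(a - b).mean():.1f} out of 255")
# The error is large because the thumbnail no longer contains the original
# detail. Upscaling can smooth pixels; it cannot recover what was discarded.
```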
So, in the case of this video, AI programs take the pixels we have of the woman in question and, drawing on their training data, add detail based on what they predict should be there, not what really is. That's why you get wildly different (and often terrible) results every time. It's just guessing.
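Part of why the guesses can't agree is that many different high-resolution images collapse to exactly the same low-resolution pixels. Here's a small NumPy sketch (the "images" are random arrays, purely for illustration) that builds two visibly different 64x64 images whose 8x8 thumbnails are identical. Nothing in the thumbnail can tell you which original was really there, so any enhancer has to pick one arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two different 64x64 grayscale "originals".
img_a = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
img_b = img_a.copy()

# Perturb img_b inside each 8x8 block without changing the block's average:
# add noise, then subtract the noise's per-block mean.
noise = rng.normal(0, 40, size=(64, 64))
block_means = noise.reshape(8, 8, 8, 8).mean(axis=(1, 3))
noise -= np.kron(block_means, np.ones((8, 8)))
img_b += noise

def downscale(img):
    # Average-pool 8x8 blocks down to an 8x8 thumbnail.
    return img.reshape(8, 8, 8, 8).mean(axis=(1, 3))

print("originals differ by up to:", np.abs(img_a - img_b).max())  # clearly nonzero
print("thumbnails differ by up to:",
      np.abs(downscale(img_a) - downscale(img_b)).max())          # ~0
# Both originals are perfectly consistent with the same thumbnail. An upscaler
# that outputs one of them is guessing; another model (or run) guesses differently.
```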
404 Media's Jason Koebler offers a great demonstration of how these tools simply don't work. Not only did Koebler run programs like Fotor and Remini on The Sun's video, producing results as terrible as everyone else's, he also tried them on a blurry image of himself. The results, as you might guess, were not accurate. So, clearly, Jason Koebler is missing, and an imposter has taken over his role at 404 Media. #Koeblergate.
Now, some AI programs are better at this than others, but usually in specific use cases. Again, these programs add data based on what they predict should be there, so they work well when the answer is obvious. Samsung's "Space Zoom," for example, which the company advertised as being able to take high-quality images of the Moon, turned out to be using AI to fill in that missing data. Your Galaxy would snap a picture of a blurry Moon, and the AI would fill in the gaps with details from images of the actual Moon. Since the Moon looks essentially the same in every photo, the "right" detail to add is always the same.
But the Moon is one thing; a specific face is another. Sure, if you had a program like "KateAI" that was trained solely on images of Kate Middleton, it could likely turn any pixelated woman's face into Kate Middleton, but only because that's what it was trained to do. It certainly couldn't tell you whether the person in the photo actually was Kate Middleton. As it stands, there's no AI program that can "zoom and enhance" to reveal who a pixelated face really belongs to. If there's not enough data in the image for you to tell who's really there, there's not enough data for the AI, either.
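To make that concrete, here's a toy sketch of what a hypothetical "KateAI" might look like: a nearest-neighbor "enhancer" whose training set contains only one person. Every name and array here is made up for illustration. Whatever blurry face you feed it, the output is drawn from the Kate-only training set, so the result tells you about the training data, not about who's actually in the photo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "training set": five 32x32 photos of one person (random arrays here).
kate_training_set = [rng.random((32, 32)) for _ in range(5)]

def downscale(img, k=4):
    # Average-pool k x k blocks: a 32x32 image becomes an 8x8 thumbnail.
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def kate_ai_enhance(blurry_8x8):
    # "Enhance" by returning whichever training image best explains the
    # blurry input. By construction, the answer is always Kate.
    return min(kate_training_set,
               key=lambda img: np.sum((downscale(img) - blurry_8x8) ** 2))

# Feed it a blurry photo of someone else entirely:
someone_else = rng.random((32, 32))
result = kate_ai_enhance(downscale(someone_else))
print("output came from the Kate-only training set:",
      any(result is img for img in kate_training_set))  # always True
```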