Princess Kate had not been seen in public for weeks when Kensington Palace marked the United Kingdom’s Mother’s Day by releasing a photo of the princess of Wales surrounded by her three smiling children.
“Thank you for your kind wishes and continued support over the last two months. Wishing everyone a Happy Mother’s Day. C,” read the March 10 post that accompanied the image on the prince and princess of Wales’ X account.
The Associated Press withdrew the image hours later and issued a “kill order” for it, asking clients to remove it from their platforms over concerns it had been manipulated “by the source.” Other news agencies including Getty, Reuters, Agence France-Presse and Britain’s national news agency, PA, soon followed suit.
The palace announced in January that the princess was undergoing "planned abdominal surgery" that required a two-week hospital stay and a pause in her royal duties until Easter. She hasn't been seen in public since Christmas. Scant information about the surgery and the long recovery period fueled frantic conspiracy theories about Middleton's whereabouts and about whether the procedure was more serious than disclosed.
The new image sent the rumor mill into overdrive.
In a March 11 post on the Kensington X account, a message signed by Kate apologized for the doctored photo, saying that “like many amateur photographers, I do occasionally experiment with editing.”
“I wanted to express my apologies for any confusion the family photograph we shared yesterday caused. I hope everyone celebrating had a very happy Mother’s Day. C,” the post said.
The photo was credited to her husband, William, prince of Wales and heir to the throne, and the palace said it was taken earlier in the week in Windsor, where the family lives. A palace official told The New York Times that although William had taken the photo, Kate made the minor adjustments herself.
Forensic image experts pointed to several inconsistencies in the image, including details where sleeves and zippers don't line up. They agreed the image does not look AI-generated but had been subjected to more rudimentary photo editing.
“It looks to me like it is a composite image that may have been taken from multiple photos — a common technique used to get the best versions of individual subjects,” said Cole Whitecotton, senior professional research assistant for the National Center for Media Forensics at the University of Colorado, Denver. “Many new phones automatically do stuff like this for things like red-eye removal, making sure subjects aren’t blinking, etc. Nothing we have seen in the image seems like particularly malicious edits. Clearly, not much time was put into the edits as you can see the clear traces left behind.”
Mounir Ibrahim, executive vice president of TruePic, which creates tools to verify the authenticity of online content, said the photo appears to have been edited with a tool like Adobe Creative Cloud and looks like a "cheap fake," which typically involves rudimentary editing such as cropping, filtering and splicing existing pictures into an image.
“You can add in generative fill into an existing cheap fake, so it’s not impossible that pieces of this photo are synthetic, like the background or color,” he said. “But this does not appear to be a completely synthetic image; it seems as though pieces of an old image were used or recycled.”
One royal photographer told the BBC that editing photos isn’t an unusual practice in royal photography, but said the press’s withdrawal of the image “was definitely new.”
The AP said there was no suggestion that the image was fake, but the agency retracted it because it didn’t meet its photo standards, which state that images must be accurate and not altered. The news organization said its editors determined after further inspection that the image showed an “inconsistency in the alignment of Princess Charlotte’s left hand with the sleeve of her sweater.”
Minor editing, such as cropping and color adjustments, is acceptable when necessary for clear reproduction but should "maintain the authentic nature of the photograph," the news organization wrote in a March 11 article explaining the decision.
“Changes in density, contrast, color and saturation levels that substantially alter the original scene are not acceptable,” the story said. “Backgrounds should not be digitally blurred or eliminated by burning down or by aggressive toning. The removal of ‘red eye’ from photographs is not permissible.”
How to spot manipulated images
Knowing how these technologies work and maintaining a healthy amount of skepticism are important when looking at any information online.
“It is good to question sources (and) look deeper into multimedia that is shared with you,” Whitecotton said. “Having an understanding of how digital technologies work, understanding how the pipeline of content creation works, how these images are being made and put out into the world, etc.”
Editing tools can sometimes leave traces, experts said, so taking the time to zoom in and see if anything appears unusual can reveal whether content is doctored. In this case, internet users zeroed in on Princess Charlotte’s sleeve, which has a portion missing.
Reverse-image searches are helpful and typically easy using websites like Google Images or TinEye. These searches can reveal a photo’s original source and whether it has been edited or shared in the past.
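Reverse-image search engines such as Google Images and TinEye typically match photos using perceptual fingerprints that survive resizing and mild edits. As a rough illustration of the underlying idea, here is a minimal "average hash" sketch in pure Python on a toy grayscale grid; real search engines use far more robust fingerprinting, so treat this only as a simplified model of the concept.

```python
# Simplified "average hash": one of the perceptual-hashing ideas behind
# reverse-image search. Each pixel contributes one bit -- 1 if it is
# brighter than the image's mean, 0 otherwise -- so the fingerprint
# barely changes under brightness shifts or small retouches.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if v > mean else "0" for v in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [
    [200, 200, 50, 50],
    [200, 200, 50, 50],
    [50, 50, 200, 200],
    [50, 50, 200, 200],
]
# A lightly edited copy: brightness shifted up, one pixel retouched.
edited = [[min(255, v + 10) for v in row] for row in original]
edited[0][2] = 180

h1, h2 = average_hash(original), average_hash(edited)
print(hamming_distance(h1, h2))  # → 1 (small distance despite the edits)
```

Because only one bit flips here, a search engine comparing fingerprints would still treat the edited copy as a near-duplicate of the original, which is how earlier versions of a doctored photo can be found.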
Unnatural skin tones or blurred-out features are other indications that an image may be fake or altered. In the royal family's photo, Catherine's right hand appears blurry while her face and left hand are in focus. Experts also recommend examining shadows, reflections and perspective lines for irregularities; a pattern that doesn't line up where it should is a telltale sign of alteration.
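A selectively blurred region stands out because blurring smooths away pixel-to-pixel differences. The following pure-Python sketch makes that concrete with a crude local-contrast score on two toy grayscale patches (forensic tools apply the same idea with proper filters, such as the Laplacian, over full images; the patches and score here are illustrative inventions, not a real tool's method).

```python
# Toy illustration of why selective blurring is detectable: blur pulls
# neighboring pixel values toward each other, so a blurred region has
# much lower local contrast than an in-focus one.

def local_contrast(pixels):
    """Mean squared difference between horizontal neighbors --
    a crude sharpness score (higher = sharper)."""
    diffs = [
        (row[i + 1] - row[i]) ** 2
        for row in pixels
        for i in range(len(row) - 1)
    ]
    return sum(diffs) / len(diffs)

sharp_region = [
    [10, 240, 10, 240],
    [240, 10, 240, 10],
]
# The same checkerboard after heavy smoothing: values pulled toward the mean.
blurred_region = [
    [110, 140, 110, 140],
    [140, 110, 140, 110],
]

print(local_contrast(sharp_region) > local_contrast(blurred_region))  # → True
```

Comparing such scores across regions of one photo is one way an analyst could flag that a hand is mysteriously softer than the face beside it.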
As generative AI and other digital editing tools become more sophisticated, experts said it will be more difficult to spot fake or manipulated content.
The news industry, photo specialists and tech platforms are responding with new infrastructure such as content credentials: icons or watermarks that act as a kind of "digital nutrition label." The label would tell people where and when the content was created, what tools were used to make it, whether generative AI was involved and what edits were made along the way. As more platforms and tools adopt the approach, experts believe it could become a reliable standard.
Jevin West, associate professor and co-founder at the University of Washington’s Information School and co-director of its DataLab, said the response has extended to “the hardware level.”
Camera companies are starting to equip cameras with the ability to create digital watermarks. West said these initiatives are helpful and can “start to set some norms to producing images and stories that have public relevance and interest.”
While West called this progress positive, he warned people to stay watchful, especially in an election year.
“This is a big year. We need to bring public attention to it and consumers should remain extra vigilant,” he said. “There are really consequential decisions happening around the world, all at a time when AI is rising and becoming more sophisticated and making it harder to tell what’s real or not.”
This fact check was originally published by PolitiFact, which is part of the Poynter Institute.