In an era of infinite synthetic content, trusting your eyes to distinguish real from fake is getting harder. The default assumption about photographs and even short videos is about to shift — from treating them as evidence of events to treating them as likely fabrications.

"We're going to move from assuming what we see is real by default, to starting with skepticism. Paying attention to who is sharing something and why. This will be uncomfortable — we're genetically predisposed to believing our eyes." — Adam Mosseri, head of Instagram

In the same post, the head of the world's largest photo platform points to potential solutions: cryptographic camera signatures plus a chain of custody. That technology already exists and has been working for several years. The C2PA standard has attracted broad industry backing, and a growing list of cameras can cryptographically sign their images... And his own platform strips those signatures on upload.

The Provenance Bet

Solutions for verifying media authenticity range from invisible watermarks and cryptographic signatures to AI detectors. The latter look like obvious candidates for an endless arms race.

"Detection is probabilistic at best — we do not believe that you will get a detection mechanism where you can upload any image, video, or digital content and get 99.99 percent accuracy in real-time and at scale." — Mounir Ibrahim, Truepic, for The Verge

Watermarks are also vulnerable to tampering and loss during recompression. Cryptographic provenance — signing at the moment of capture — has proved the simplest and most reliable approach, gaining the most traction among hardware manufacturers. This is the basis of C2PA, an open standard governed by the Linux Foundation.

Here's how it works: the camera signs the image when the shutter is pressed, recording the device, time, and location. If you edit that image in Photoshop, the software adds its own entry: who changed what, and when. The signature chain grows. Editing in software without C2PA support breaks the chain. So does uploading to most social networks.
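To make the chain-of-custody idea concrete, here is a minimal Python sketch of the concept, not the actual C2PA manifest format: each step signs its own claim plus a hash of everything before it, so any removed, altered, or reordered step breaks verification. The raw Ed25519 keys and all field names below are invented for illustration; real C2PA binds signing keys to identities through X.509 certificates.

```python
# Toy provenance chain: each entry signs its claim plus a hash of the
# chain so far. Not the real C2PA binary format; field names invented.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def chain_hash(entries):
    return hashlib.sha256(json.dumps(entries, sort_keys=True).encode()).hexdigest()

def append_entry(chain, key, claim):
    prev = chain_hash(chain)  # commits this entry to everything before it
    payload = json.dumps({"claim": claim, "prev": prev}, sort_keys=True).encode()
    chain.append({"claim": claim, "prev": prev,
                  "sig": key.sign(payload).hex(),
                  "pub": key.public_key().public_bytes_raw().hex()})

def verify_chain(chain):
    for i, entry in enumerate(chain):
        payload = json.dumps({"claim": entry["claim"], "prev": entry["prev"]},
                             sort_keys=True).encode()
        try:
            Ed25519PublicKey.from_public_bytes(bytes.fromhex(entry["pub"])) \
                .verify(bytes.fromhex(entry["sig"]), payload)
        except InvalidSignature:
            return False  # the entry itself was tampered with
        if entry["prev"] != chain_hash(chain[:i]):
            return False  # a step was removed, inserted, or reordered
    return True

camera, editor = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
chain = []
append_entry(chain, camera, {"action": "captured", "device": "camera-model"})
append_entry(chain, editor, {"action": "edited", "tool": "editor-app"})
print(verify_chain(chain))   # True: intact chain
del chain[0]                 # strip the capture record, as a re-upload might
print(verify_chain(chain))   # False: the history no longer verifies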

You can verify a photo's authenticity using this system at contentcredentials.org/verify: upload the file and see the full history from camera to final edit.
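The same check can be scripted. A minimal sketch assuming the open-source c2pa-python SDK; the package's reader API has changed between releases, so treat the exact call below as an assumption and check the version you install:

```python
# pip install c2pa-python
# Assumption: the read_file helper from earlier SDK releases, which
# returns the manifest store as JSON; newer releases expose a Reader
# object instead. "signed.jpg" is a placeholder for any signed image.
import c2pa

manifest_json = c2pa.read_file("signed.jpg", "extracted")
print(manifest_json)  # capture claim, edit claims, signer identities
```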

As of January 2026, the coalition counts over 6,000 members, including Adobe, Microsoft, Google, Sony, the BBC, and OpenAI. Fourteen camera models can sign images under the standard, among them Sony Alpha bodies, Leica, the Canon EOS R1, Nikon Z6 III, and Fujifilm X-T50. Cloudflare preserves provenance metadata across its network, which serves roughly twenty percent of the web.

Apple is not part of the coalition. The world's most popular camera — the iPhone — cannot authenticate its own photos.

Samsung Galaxy, starting with the S25, officially supports the standard but only labels AI-edited images. The phones don't authenticate original photographs.

"There is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you're seeing], and it doesn't mean anything. There is no real picture, full stop." — Patrick Chomet, Samsung's Head of Customer Experience, for TechRadar

The Last Mile Problem

For social networks — the primary source of information for many people — everything described above works only until you upload media to the platform.

Instagram and Facebook read C2PA metadata on upload, extract what they need for their own "AI Info" label, then strip the full provenance chain. Meta sits on the C2PA steering committee while using verification for its own purposes: for moderation, not for users.

X strips metadata entirely. At the AI Safety Summit in 2023, Musk was asked to implement C2PA. "That sounds like a good idea, we should probably do it," he replied. X then left the Content Authenticity Initiative (CAI). YouTube launched "Captured by Camera," but the feature is buried in video descriptions and requires a complete provenance chain — nearly useless in practice.

"The average person should not worry about deepfake detection. It should be on platforms and trust and safety teams." — Ben Coleman, CEO Reality Defender, for The Verge

TikTok is a rare exception: it preserves C2PA and labels AI content. This may reflect the heightened regulatory scrutiny the company faces in Western markets.

LinkedIn is the only major social network displaying Content Credentials — a small "CR" icon next to images. But a series of audits by independent researcher Dr. Neal Krawetz (Hacker Factor) found serious implementation flaws. The platform displays the "issuer" field from certificates without verification. Anyone can create a self-signed certificate with the name "Reuters," and LinkedIn will display it as legitimate. Signature dates are incorrect. The platform also transcodes images on upload, making independent verification impossible.
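The issuer-display flaw is easy to reproduce in principle. A minimal sketch with the pyca/cryptography library: anyone can mint a certificate whose subject and issuer both read "Reuters", and only validation against a trusted CA chain, which the audit says LinkedIn skips, would expose it.

```python
# Self-signed certificate with an arbitrary name. Nothing here involves
# a real certificate authority; the name is entirely attacker-chosen.
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Reuters")])
now = datetime.now(timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                 # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=365))
    .sign(key, hashes.SHA256())
)
# A UI that prints this field without validating the chain to a trusted
# root will show "Reuters" for a certificate no CA ever issued.
print(cert.subject.rfc4514_string())   # CN=Reuters
```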

"No amount of patching will correct the fundamental design issues that permit 'authenticated' forgeries." — Neal Krawetz, Hacker Factor

The Incentive Vacuum

Legacy image processing pipelines destroy metadata as a byproduct of optimizations built over decades: compression, resizing, format conversion. Privacy gives platforms a second reason to strip it deliberately, since metadata can contain geolocation and device information.
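It's easy to see why a routine re-encode is fatal. In JPEG files, C2PA manifests travel in APP11 (JUMBF) marker segments, and a default re-save does not copy them. A small sketch, assuming Pillow is installed and "signed.jpg" is a placeholder for any C2PA-signed JPEG:

```python
import struct

from PIL import Image  # pip install Pillow

def app11_count(path: str) -> int:
    """Count APP11 (0xFFEB) segments, where JPEG C2PA/JUMBF manifests live."""
    data = open(path, "rb").read()
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    i, count = 2, 0
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:  # start-of-scan: header segments end here
            break
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xEB:
            count += 1
        i += 2 + seg_len
    return count

print("before:", app11_count("signed.jpg"))
# A typical pipeline step: decode and recompress.
Image.open("signed.jpg").save("recompressed.jpg", quality=85)
print("after: ", app11_count("recompressed.jpg"))  # expect 0: manifest gone
```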

This isn't new. IPTC has documented metadata stripping by platforms since 2013 — a decade before mass debates about AI content.

But technical reasons are only part of the story. The likelier core reason is that verified content does not increase engagement; if anything, the opposite. Platforms have little rational incentive to change their policies. The EU AI Act, which takes full effect in August 2026, may create external pressure, but until then the economics favor inaction.

Where someone pays directly for verification, though, the picture is different: authenticity checks are deployed and operational, to the extent current infrastructure permits.

Where It Actually Works

Verification works where someone pays for errors. It rarely makes headlines, operating where no one notices — until something goes wrong.

Insurance. Insurance fraud costs an estimated $308 billion a year in the US alone, and a cryptographically signed photo of a damaged vehicle translates into direct savings by reducing false claims. Truepic, a co-creator of C2PA, built an enterprise business on photo verification. In September 2025, the company launched Risk Network, a real-time fraud-signal-sharing platform for financial institutions.

Journalism. For news agencies, trust is central to their business model. AFP and Nikon jointly verify photojournalism, the BBC runs Project Origin, and France Télévisions broadcasts with C2PA daily.

Government sector. The US Embassy in Conakry has marked all its photos with C2PA signatures and Digimarc watermarks since April 2025, after a deepfake video fabricated a statement by the ambassador. In January 2025, the DoD and NSA issued Content Credentials guidelines for government agencies.

The common thread: someone specific pays for errors. In consumer social networks, no one does.

What About Blockchains?

During the 2018-2021 boom, blockchain projects promised to create economic incentives for verification by tokenizing it and making it profitable. Little came of it. The use cases that actually took off were different: stablecoins and prediction markets. Data verifiability drew far less interest, let alone willingness to pay for it.

But the latest advances in generative AI could boost demand for one of blockchain's old promises: tamper-proof content history. And some notable crypto projects keep working in this direction.

Media provenance. Numbers Protocol is building "Git for media", a decentralized registry on Avalanche: 67 million registered assets, only 150,000 fully verified; registering is easy, verifying harder. The project builds on C2PA, adding a blockchain layer with decentralized storage and verification. Reuters uses its ERC-7053 standard for indexing photo archives.

IP licensing. Story Protocol targets the $61 trillion intellectual property market. a16z and Samsung Next invested $54 million, followed by an $80 million Series B. The focus is on automated content licensing for AI training. Mainnet "Homer" launched in February 2025 with the $IP token.

Decentralized fact-checking. Fact Protocol combines AI detection with crowdsourcing and a tokenized incentive system for verifiers, but has not yet secured major partnerships at the Reuters or BBC level.

Academic archives. Starling Lab, a joint research initiative of Stanford University and the USC Shoah Foundation, stores 56,000 Holocaust survivor testimonies on Filecoin. The lab works with Reuters and Canon on end-to-end verification systems from camera to publication, using multiple blockchains — NEAR, Hedera — without being tied to any single one.

Notably, Truepic, a co-creator of C2PA and a pioneer in blockchain photo notarization, abandoned blockchain in 2024 and switched to PKI, classical public-key infrastructure. The reasons: speed, scalability, and enterprise requirements.

That's the point: C2PA, the standard with the widest adoption, runs on traditional PKI under the Linux Foundation. Blockchain projects primarily aim to provide an economic layer on top, handling rights, licenses, and payments.

Parallel Worlds

C2PA describes itself as a "nutrition label for digital content" — and like nutrition labels, it may have a limited impact on behavior. Apple's iOS privacy labels, for instance, barely changed how consumers act.

Technical solutions for verifying image authenticity exist, but adoption among hardware and software manufacturers remains limited. Building external trust infrastructure follows economic incentives, and technical capability alone doesn't create them.

Seen from the other side, the absence of provenance is itself a kind of value: it creates real economic incentives for fakes, AI slop, and engagement manipulation.

"Labeling is only part of the solution. We need to surface much more context about the accounts sharing content so people can make informed decisions. Who is behind the account?" "In a world of infinite abundance and infinite doubt, the creators who can maintain trust and signal authenticity - by being real, transparent, and consistent - will stand out." — Adam Mosseri on Instagram

Mass verification of consumer content may not be a goal worth pursuing. Journalism, insurance, government — where the cost of error is measurable, and someone pays for it — are building their own verification systems. Social networks will remain a territory of uncertainty. These two worlds will continue to coexist.

A longer view reframes this. For most of human history, images were not reliable evidence: paintings, engravings, retouched early photographs. The period when photos "didn't lie" — roughly from the mass adoption of film cameras to the advent of Photoshop — turns out to have been a short-lived exception. And even then, framing and focal length already imposed a perceptual frame. AI image generation returns us to the old "normal," just from the other side.

"For most of my life I could safely assume photographs or videos were largely accurate captures of moments that happened. This is clearly no longer the case and it's going to take us years to adapt." — Adam Mosseri