Cast your mind forward. It is November 8, 2028, the day after another presidential election. This one has gone smoothly: no allegations of rampant fraud, and no significant trace of deception, in large part because deepfakes, democracy's newest enemy, have been defeated.
Is this future possible? So far, neither the government nor the tech industry has agreed on effective guardrails against deepfakes. But this FAQ (written from five years in the future) shows that the events of 2024 may force the issue, and that a solution is possible.
Why has it taken so long to find an effective way to fight deepfakes?
In late 2022, sophisticated, low-cost artificial intelligence software emerged that made it easy to create realistic audio, video, and photographs: so-called deepfakes. As these generative AI programs rapidly improved, it became clear that deepfake content would pose a risk to democracy.
Deepfake political content, both audio and video, quickly emerged: President Biden announcing that Americans would be drafted to fight in Ukraine. A photo of Donald Trump hugging and kissing Dr. Anthony Fauci. Sen. Elizabeth Warren (D-Mass.) telling MSNBC that Republicans should not be allowed to vote in 2024. Eric Adams, the monolingual mayor of New York, speaking Spanish, Yiddish, and Mandarin in AI-generated robocalls.
Very quickly, the White House, the European Union, and major tech companies all launched wide-ranging proposals to regulate AI, which included “watermarking” AI content: inserting identification labels, a permanent piece of computer code, into the digital file of any AI-generated content to identify its artificial origin.
But setting AI rules proved complex, and labeling exemplified the dilemmas: Would AI watermarks be legally required? How would they be enforced? Even by 2023, some mobile phone cameras were already using AI in image processing. How much AI input into a piece of content requires a label? Does an Instagram beauty influencer need to watermark her face-tuned selfies?
The complexities were so great that no system was widely adopted.
What has changed?
The largest coordinated deepfake attack in history came the day after the November 2024 election. Every social media channel in the United States was flooded with fake audio, video, and still images depicting election fraud in twelve battleground states, hyperrealistic content that was viewed by millions within hours. Debunking efforts by the media and the government were hampered by a steady flow of new fakes, most of them manufactured in Russia, North Korea, China, and Iran. The attack generated legal and civil chaos that lasted well into the spring of 2025.
Yet even then, no authentication effort was adopted right away?
Correct. The breakthrough finally came in early 2026, from a working group of digital journalists from US and international news organizations. Their goal was to find a way to keep deepfakes out of news reports, to protect what credibility the mainstream media still retained. It was a logical mission: journalists are historically ruthless about punishing their peers for fabrication, breaking out the tar and feathers for even minor lapses in factual accuracy.
The journalism organizations formed the FAC Alliance, for “Authenticated Content,” based on a simple insight: there were already far too many AI fakes in the world to try to enforce a watermarking system for fakery and disinformation. Even the strictest labeling rules would simply be ignored by bad actors. But it was possible to watermark the content that wasn't deepfaked.
Thus, on May 1, 2026, the voluntary FACStamp was born.
What does a FACStamp look like?
For consumers, FACStamped content displays a small “FAC” icon in one corner of the screen or includes an audible FAC notification. Users can turn the signal off, or set it to appear for only five or ten seconds at the start of a media stream.
FACStamps are entirely voluntary. But every member of the FAC Alliance pledged that its online, broadcast, and print reporting would publish only FACStamped media in its news sections.
How does content qualify for a FACStamp?
The latest phones, tablets, cameras, recorders, and desktop computers all include software that automatically inserts FACStamp code into every piece of visual or audio content as it is captured, before any AI editing can be applied. This proves that the image, sound, or video was not generated by AI. You can also download a FAC app that does the same for older equipment. The FACStamp is what technologists call “fragile”: the first time an image, video, or audio file is altered by AI, the stamp disappears.
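The FACStamp is fictional, but fragile, capture-time signatures are a real cryptographic idea. Here is a minimal sketch in Python of how such a stamp could work, assuming a hypothetical per-device signing key; every name in it is illustrative, not the FAC Alliance's actual design.

```python
import hashlib
import hmac

DEVICE_KEY = b"hypothetical-per-device-signing-key"  # assumption for the sketch

def stamp(content: bytes) -> bytes:
    """Sign a hash of the raw bytes at capture time, before any editing."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify(content: bytes, tag: bytes) -> bool:
    """Fragile by design: a single altered byte makes verification fail."""
    return hmac.compare_digest(stamp(content), tag)

photo = b"raw image bytes straight from the sensor"
tag = stamp(photo)
assert verify(photo, tag)              # the untouched capture verifies
assert not verify(photo + b"x", tag)   # any alteration breaks the stamp
```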
AI is often used legitimately, though, to do things like reduce background noise in an audio recording. Can't FACStamped content be edited at all?
Sure it can. But to retain the FACStamp, your computer must be connected to the nonprofit FAC Verification Center. The center's computers detect whether an edit is minor, such as cropping or even cosmetic facial adjustment, in which case the stamp stays on. Any bigger manipulation, from swapping faces to faking backgrounds, voids the FACStamp.
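The article doesn't say how the center tells minor edits from major ones, so purely as a hedged sketch, here is one way that decision might be modeled, with an invented list of allowed edit types and a declared edit set standing in for real file comparison:

```python
from dataclasses import dataclass

# Hypothetical edit categories; a real verification service would compare
# the original and edited files directly rather than trust a declared list.
ALLOWED_EDITS = {"crop", "rotate", "color_balance", "cosmetic_retouch"}

@dataclass
class EditRequest:
    declared_edits: frozenset

def keeps_stamp(req: EditRequest) -> bool:
    """Minor edits keep the stamp; anything beyond the list voids it."""
    return req.declared_edits <= ALLOWED_EDITS

print(keeps_stamp(EditRequest(frozenset({"crop"}))))               # True
print(keeps_stamp(EditRequest(frozenset({"crop", "face_swap"}))))  # False
```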
How did FACStamps spread beyond the press?
It turned out that a lot of people could use the FACStamp. Online retailers embraced FACStamps for videos and photos of their products. Individuals soon followed, using FACStamps to sell goods online: when potential buyers are judging a used pickup truck or a secondhand sofa, it's reassuring to know the image wasn't generated or retouched by AI.
In 2027, the stamp began appearing on social media as well. Any parent can use AI to generate a perfectly realistic image of their happy family standing in front of the Eiffel Tower and post it or email it to envious friends. A FACStamp proves the family actually was there.
Dating app profiles without FACStamps are finally becoming rare. Video conferencing apps have FAC options to make sure everyone on the call is real. And for influencers, it's increasingly difficult to claim “authenticity” without at least occasionally using a FACStamp.
Even the AI industry supports the use of FACStamps. If, during a training run, an AI program ingests too much AI-generated data instead of original data, it can suffer “model collapse” and become wildly inaccurate. So the FACStamp helps AI companies train their models on reality only.
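As a small, assumed illustration of that filter, the sketch below keeps only training records whose fragile stamp (in the same invented scheme as the earlier sketch) still verifies; the record format and key are made up for the example.

```python
import hashlib
import hmac

KEY = b"hypothetical-signing-key"  # same fragile-stamp idea as the earlier sketch

def _stamp(content: bytes) -> bytes:
    return hmac.new(KEY, hashlib.sha256(content).digest(), hashlib.sha256).digest()

def filter_training_set(records: list) -> list:
    """Keep only records whose stamp still verifies; drop suspected AI output."""
    return [
        r for r in records
        if r.get("stamp") is not None
        and hmac.compare_digest(_stamp(r["content"]), r["stamp"])
    ]

corpus = [
    {"content": b"camera-original photo", "stamp": _stamp(b"camera-original photo")},
    {"content": b"synthetic image", "stamp": None},  # unstamped: excluded
]
assert len(filter_training_set(corpus)) == 1  # only stamped reality survives
```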
What's next?
A bipartisan group of senators and House members plans to introduce the Right to Fact Act when the next Congress opens in January 2029. It would mandate the use of FACStamps in multiple sectors, including local government, shopping sites, investment offerings, and real estate. Forging a FACStamp would become a criminal offense. Polls indicate widespread public support for the act, and the FAC Alliance has already begun a campaign to promote the brand.
The slogan: “Is this the FAC?”
Michael Rogers is an author and futurist, whose most recent book is Email from the Future: Notes from the Year 2084.