TRUEPIC BLOG

Visual risk is growing: Authenticity is the antidote

Confirm authenticity at the point of creation.


“AI has fully defeated most of the ways that people authenticate currently,” said OpenAI CEO Sam Altman as he warned a group of financial leaders last month that he is “very nervous that we have an impending, significant fraud crisis.” This isn’t hyperbole. While most public debate centers on AI’s societal and political risks, the economic threat is just as urgent and often less understood. Industries from insurance to banking are already facing deepfake impersonations, synthetic loan applications, and false claims that are cheaper and easier to manufacture than ever before.

It is no coincidence that the industries striving to digitally transform are the ones at greatest risk of AI-fueled fraud. This risk will only grow as AI development outpaces its safeguards and becomes central to great-power competition between the West and China. Without authenticity in the businesses, identities, assets, and property we transact, our economic and national security could be engulfed by systematic and uncontrollable digital fraud.

No going back 

Fraud has always existed. However, we are entering a new paradigm as AI gives fraudsters hyper-realistic tools that allow them to operate at scale with minimal effort. According to Forbes contributors, “in 2024 alone, U.S. consumers lost over $12.5 billion to fraud, a 25% increase year over year. Many of these scams were powered by AI-generated deepfakes, spoofed documents and synthetic identities.” These events are well documented: everyday people, often from vulnerable populations like the elderly, are deceived by AI.

The problem also exists for businesses, though it is less discussed. Industries rushing to digitize are often the most exposed, trading security for speed and convenience. E-commerce, for example, faces a growing challenge in return fraud. In 2024, U.S. retailers faced an estimated $103 billion in fraudulent returns, part of a staggering $685 billion in total merchandise returned, according to a report by Appriss Retail and Deloitte. By 2027, enterprises and media consumers are projected to spend nearly $5 billion a year attempting to screen roughly 10 billion deepfake or synthetic videos.

Lending and banking have also digitized, but face vulnerabilities, as Altman noted. According to a recent Socure report, international fraud rings and bad actors were responsible for 2%-12% of all federal government business loan applications, at a scale made possible by AI’s speed and reach. The report noted that fraudsters used the same techniques against commercial lenders too. These are staggering numbers, highlighting how quickly bad actors can move with AI. Insurers and underwriters face similar risk exposure since both rely heavily on unverified digital content and its supporting metadata, such as the time, date, and location of content creation. That reliance creates openings for manipulation, contributing to the estimated $308 billion lost annually in the U.S. to insurance fraud across all sectors, according to the Coalition Against Insurance Fraud.

Even the White House, one of the strongest proponents of AI development, acknowledged the risks it brings to less obvious industries trying to adapt to the digital world, highlighting in particular the risks to the legal field. Its ambitious AI Action Plan notably featured a section titled “Combatting Synthetic Media in the Legal System,” which instructed NIST’s forensic evidence experts and the Department of Justice to address this growing threat.

This will accelerate 

These challenges are arriving faster than businesses can counter them. AI is now a global race, especially between the U.S. and China. The White House’s new AI Action Plan makes that clear, pointedly titled “Winning the Race.” Its first priority? Speed up innovation by cutting red tape. Author Nina Schick argues that the U.S. and China have a competitive advantage in AI development and can accelerate production faster than Europe because they can set cohesive industrial and energy strategies; Europe still needs consensus across a continent with divergent goals for AI. That means AI tools will keep evolving rapidly in the U.S. and China, along with the open-source technologies that fraudsters are already exploiting.

AI continues to astonish and alarm. The most recent release of Google’s Veo 3 model has once again stunned the world with lifelike yet fully synthetic videos. An NBC report highlights how realistic these videos can be, generated from simple prompts. If these tools are still developing and their acceleration is incentivized through the prism of global competition, how will industry keep up? Will companies abandon digital transformation and return to in-person verification, such as insurance adjusters reviewing each fender-bender before issuing a claim? Or can we identify a reliable method to discern the authentic from the synthetic?

Authenticity and the necessary future 

To securely implement digital transformation across enterprises, we must be able to distinguish authentic from synthetic content across industries. Early efforts have focused on labeling AI-generated material using interoperable and cryptographic standards like Content Credentials from the Coalition for Content Provenance and Authenticity (C2PA). This is a notable and important start, but labeling AI content alone will not be enough, because bad actors will always use non-transparent AI platforms to deceive.

A more reliable approach is to confirm authenticity at the point of creation. The same technology designed to label AI can also label authentic content at the creator’s request. This has already been proven through pilots and implementations with Leica and Sony cameras, Qualcomm-powered phones, and other mechanisms. Most recently, Google announced that its Pixel 10 smartphones will natively embed provenance and authenticity marks into images and videos at the moment of capture, using C2PA Content Credentials. This marks a significant turning point: authenticity is being scaled and integrated directly into consumer devices.
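To make the point-of-creation idea concrete, here is a deliberately simplified sketch of how capture-time provenance works in principle. It is not the actual C2PA Content Credentials format (which uses signed manifests embedded in the file and certificate chains tied to hardware keys); the function names and metadata fields are illustrative assumptions. The core mechanism is the same, though: hash the captured pixels, bind the capture metadata to that hash, and sign both with a device-held key, so that any later alteration of the image or its metadata is detectable.

```python
# Illustrative sketch only: a minimal stand-in for C2PA-style provenance,
# not the real Content Credentials format. Shows the core idea of signing
# a hash of the content plus its capture metadata at the moment of creation.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_at_capture(image_bytes: bytes, metadata: dict,
                    device_key: Ed25519PrivateKey) -> dict:
    """Create a signed provenance record binding metadata to the exact pixels."""
    payload = json.dumps(
        {"sha256": hashlib.sha256(image_bytes).hexdigest(), **metadata},
        sort_keys=True,
    ).encode()
    return {"payload": payload, "signature": device_key.sign(payload)}


def verify(image_bytes: bytes, record: dict, public_key) -> bool:
    """Check the signature, then check that the image hash still matches."""
    try:
        public_key.verify(record["signature"], record["payload"])
    except InvalidSignature:
        return False
    claimed = json.loads(record["payload"])["sha256"]
    return claimed == hashlib.sha256(image_bytes).hexdigest()


# A device signs at capture; any verifier with the public key checks later.
key = Ed25519PrivateKey.generate()
photo = b"...raw capture bytes..."  # hypothetical image data
record = sign_at_capture(
    photo, {"time": "2025-01-15T10:30:00Z", "device": "example-cam"}, key
)
assert verify(photo, record, key.public_key())                 # untouched: passes
assert not verify(photo + b"edit", record, key.public_key())   # altered: fails
```

In the real standard, the signing key lives in secure hardware on the device and the record travels inside the file itself, but the verification logic follows this same hash-and-signature pattern.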

Businesses, particularly in financial services, were among the first to integrate authentic content and data into their workflows. This move enabled them to drive significant operational efficiency while continuing digital transformation securely in the age of AI. Industries such as business credentialing, auto warranty, and insurance quickly leveraged these tools to accelerate processes by establishing a trusted digital connection with the clients and partners submitting content. 

Lawmakers are beginning to respond and to recognize that similar benefits are available to the general public and to government itself. California’s AB 853 would require new cameras to offer users the option of embedding secure provenance data at capture, signaling a shift toward verifiable media as a pillar of economic and informational integrity. This notable piece of legislation will likely spur similar efforts in other states and jurisdictions, with forward-looking companies having already reached the same conclusion. Federal and public-sector agencies are also taking note, recognizing content authenticity as a critical enabler of government efficiency, fraud prevention, and improved service delivery.

With a single click on a smartphone, users could securely verify the legitimacy of business transactions, insurance claims, government documents, and more. Businesses could in turn quickly distinguish authentic from synthetic, allowing digital transformation to continue with confidence. This will become critical to secure digital transformation in the AI era. 
