How often have you come across an image online and wondered: “Real or artificial”? Have you ever felt trapped in a reality where AI-created and human-made content blur together? Can we even tell them apart anymore?
Artificial intelligence has opened up a world of innovative possibilities, but it has also brought new challenges, changing the way we perceive online content. From AI-generated images, music and videos flooding social media to deepfakes and bots defrauding users, artificial intelligence now touches a huge part of the internet.
A study by Graphite found that by the end of 2024, the amount of content created by artificial intelligence had exceeded the amount created by humans, driven largely by the launch of ChatGPT in 2022. Another study suggests that as of April 2025, 74.2% of the pages in the sample contained AI-generated content.
As AI-generated content becomes increasingly sophisticated and almost indistinguishable from human-made works, humanity faces a pressing question: How well can users truly recognize what is real as we enter 2026?
AI content fatigue is starting to set in: demand for human-made content is growing
After several years of excitement around the “magic” of AI, internet users are increasingly experiencing AI content fatigue: a collective exhaustion in response to the relentless pace of AI innovation.
According to a Pew Research Center survey conducted in spring 2025, a median of 34% of adults worldwide were more concerned than excited about the increased use of artificial intelligence, while 42% were equally concerned and excited.
“Many studies have pointed to AI content fatigue as the novelty of AI-generated content slowly wears off and, in its current form, often appears predictable and available in large quantities,” Adrian Ott, director of AI at EY Switzerland, told Cointelegraph.
“In some ways, AI-generated content can be compared to processed food,” he said, drawing parallels between the evolution of the two.
“When it became possible, it flooded the market. But over time, people started to return to local, high-quality food, known for its provenance,” Ott said, adding:
“It could go in a similar direction with content. You could argue that people like to know who is behind the thoughts they read, and an image is judged not only by its quality, but also by the story behind the artist.”
Ott suggested that labels such as “human-made” could emerge as trust signals for online content, much like “organic” does for food.
Managing AI content: Certifying real content emerges as a workable approach
While many may argue that most people can spot AI-generated text or images effortlessly, detecting AI-generated content is more complicated than it seems.
A September Pew Research Center survey found that at least 76% of Americans say it is important to be able to detect AI-generated content, yet only 47% are confident in their ability to do so accurately.
“While some people are fooled by fake photos, videos or news, others may believe nothing at all or conveniently dismiss real footage as ‘AI-generated’ if it doesn’t fit their narrative,” EY’s Ott said, highlighting the problems of managing AI content on the internet.

According to Ott, global regulators seem to be moving towards flagging AI content, but “there will always be a way around it.” Instead, he suggested the opposite approach, where real content is certified at the moment of capture, so that authenticity can be traced back to the actual event, rather than trying to detect fakes after the fact.
Blockchain’s role in providing “proof of origin”
“As synthetic media becomes increasingly difficult to distinguish from real footage, relying on after-the-fact authentication is no longer effective,” said Jason Crawforth, founder and CEO of Swear, a startup that develops video authentication software.
“Protection will be provided by systems that place trust in content from the outset,” Crawforth said, describing Swear’s core concept: using blockchain technology to ensure that digital media is trustworthy from the moment it is created.

Swear’s authentication software uses a blockchain-based fingerprinting method in which each piece of content is anchored to a blockchain ledger to provide proof of origin: a verifiable “digital DNA” that cannot be altered without detection.
“Any modification, no matter how discreet, becomes traceable by comparing the content to a blockchain-verified original on the Swear platform,” Crawforth said, adding:
“Without built-in authenticity, all media, past and present, are at risk of doubt […] Swear doesn’t ask, ‘Is it fake?’ but proves, ‘It’s real.’ This change makes our solution both proactive and future-proof in the fight to protect the truth.”
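Swear has not published the internals of its platform, but the general capture-time fingerprinting pattern Crawforth describes can be sketched in a few lines of Python. In the illustrative example below, a plain dictionary stands in for the blockchain ledger, and the function names (certify_at_capture, verify) are hypothetical, not Swear’s actual API: a hash is recorded when footage is captured, and any later edit changes the hash and fails verification.

```python
# Illustrative sketch only: Swear has not published its implementation.
# A plain dict stands in for an append-only blockchain ledger.
import hashlib
import json
import time

ledger: dict[str, dict] = {}  # digest -> provenance record

def fingerprint(content: bytes) -> str:
    """Derive the content's 'digital DNA' as a SHA-256 hash."""
    return hashlib.sha256(content).hexdigest()

def certify_at_capture(content: bytes, device_id: str) -> str:
    """Record the fingerprint at the moment of capture (proof of origin)."""
    digest = fingerprint(content)
    ledger[digest] = {
        "device_id": device_id,      # e.g. a body camera or drone
        "captured_at": time.time(),  # capture timestamp
    }
    return digest

def verify(content: bytes, claimed_digest: str) -> bool:
    """Recompute the hash later and compare it to the certified original.

    Any modification, however small, changes the hash, so tampering is
    detectable by comparison with the ledger entry.
    """
    return claimed_digest in ledger and fingerprint(content) == claimed_digest

# Usage: certify a clip at capture, then check an allegedly identical copy.
original = b"raw video bytes from a body camera"
digest = certify_at_capture(original, device_id="bodycam-17")
print(verify(original, digest))                 # True: untouched footage
print(verify(original + b" (edited)", digest))  # False: modification detected
print(json.dumps(ledger[digest], indent=2))     # provenance metadata
```

The key design point mirrors the article’s argument: authenticity is established once, at capture, rather than inferred after the fact, so verification reduces to a simple hash comparison instead of an arms race with ever-better fakes.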
So far, Swear’s technology has been used by digital creators and business partners, primarily for visual and audio media captured by devices such as body cameras and drones.
“While social media integration is a long-term vision, our current focus is on the security and surveillance industry where video integrity is critical,” Crawforth said.
Outlook for 2026: Platform liability and inflection points
As we enter 2026, internet users are increasingly concerned about the growing volume of AI-generated content and about their own ability to distinguish synthetic from human-made media.
While AI experts emphasize the importance of clearly labeling “real” content as distinct from AI-created media, it is uncertain how quickly online platforms will recognize the need to prioritize trusted human-made content as AI continues to flood the internet.

“Ultimately, it is the responsibility of platform providers to provide users with the tools to filter AI content and extract high-quality material. If they don’t do that, people will leave,” Ott said. “At the moment, there is little that individuals can do to remove AI-generated content from their feeds – control largely rests with the platforms.”
As the need for tools to identify human-made media increases, it is important to realize that often the main problem is not the AI content itself, but the intentions behind its creation. Deepfakes and disinformation are not entirely new phenomena, although artificial intelligence has dramatically increased their scale and speed.
Related: The Texas grid is heating up again, this time from artificial intelligence rather than Bitcoin miners
With only a few startups focused on identifying authentic content in 2025, the problem has not yet reached the point where platforms, governments, or users take urgent, coordinated action.
According to Swear’s Crawforth, humanity has not yet reached the tipping point where manipulated media causes visible, undeniable harm:
“Whether in legal matters, investigations, corporate governance, journalism or public safety, waiting for this moment would be a mistake; the foundations for authenticity must be laid now.”
