At the beginning of April, Meta said its final goodbye to the use of fact-checkers in the United States across all its platforms (Facebook, Threads and Instagram). It did so in favor of community notes, following in the wake of the approach driven by Elon Musk’s X.
This move, which is not unique (other platforms remain silent and expectant), highlights the delicate stage on which society’s main information referents are operating in 2025: an inflection point where technology, politics and regulation converge, clearly redefining how information is produced, distributed, verified and consumed.
On the specific issue of verification, the answer is not unanimous. Many experts suggest that the future of fact-checking will be a hybrid of AI-assisted verification (systems that identify and prioritize misinformation according to its potential harm and virality), expert verification and structured community input.
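The AI-assisted part of that hybrid is, at heart, a triage problem: machines rank, humans decide. A minimal sketch of the idea in Python follows; the fields, weights and scores are illustrative assumptions, not any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class FlaggedItem:
    text: str
    harm: float      # model-estimated potential harm, normalized to 0.0-1.0 (assumed)
    virality: float  # model-estimated spread velocity, normalized to 0.0-1.0 (assumed)

def triage_score(item: FlaggedItem, harm_weight: float = 0.6) -> float:
    """Blend harm and virality into a single review priority (weights are illustrative)."""
    return harm_weight * item.harm + (1 - harm_weight) * item.virality

def review_queue(items: list[FlaggedItem]) -> list[FlaggedItem]:
    """Order flagged items so the riskiest reach human reviewers first."""
    return sorted(items, key=triage_score, reverse=True)

if __name__ == "__main__":
    for item in review_queue([
        FlaggedItem("Miracle cure claim", harm=0.9, virality=0.4),
        FlaggedItem("Misattributed quote", harm=0.3, virality=0.8),
    ]):
        print(f"{triage_score(item):.2f}  {item.text}")
```

A human fact-checker would work down this queue, with expert verdicts and community notes feeding back into it; the design choice is simply that machines order the work while people still decide.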
In the face of this move by the technology companies that own the large social platforms, traditional media are adapting to an increasingly fragmented and extremely volatile digital ecosystem. The situation is complex: in addition to user-generated content, there is the immense amount of content being produced by AI, automatically and without oversight, which overwhelms ranking algorithms and dulls our critical capacity. Slop content; in short, hogwash.
For this reason, the media have decided to be cautious with generative AI. Yes to AI, but with clear rules of use and transparency.
That is the conclusion of a recent study on media self-regulation in the use of artificial intelligence, prepared jointly by the University of Valladolid (Spain) and the University of Beira Interior (Portugal), which reveals how media outlets in 18 countries are working to set clear limits on generative AI. The study, which analyzes 45 style guides and internal policies published over the last two years, is unambiguous: 87% of the outlets limit the use of generative AI. The guidelines of outlets such as Agencia EFE, El País, BBC, Al Jazeera, Jot Down, The New York Times, The Guardian, USA Today, Wired, DPA, DJV, Verdens Gang and Groupe Les Echos Le Parisien, among many others, reflect three fundamental ethical commitments: transparency (96% of the documents require identifying AI-generated content), verification (76% emphasize the importance of fact-checking, in a world traveling in the opposite direction) and human supervision (98% stress the need for human control in the process).
The study, which is well worth reading, examines four major questions: What types of AI are allowed, and with what limitations? What rules govern multimodal content? What ethical commitments are established? And how important is human supervision?
Regulation:
– Most media show greater concern for generative AI than for analytical AI.
– 87% of the analyzed documents limit the use of generative AI, reducing its role to a “support tool and never a substitute for the journalist”.
– Only 7% allow wider use.
Multimodal AI:
– 71% of the documents limit AI text generation to tasks such as translation, transcription or headline suggestions.
– On the creation of audiovisual content there is less consensus: 20% explicitly prohibit it, while 53% allow it with restrictions.
Ethical commitment:
– 96% of the documents include transparency measures, such as identification of AI-generated content.
– 76% believe verification and fact-checking are necessary.
– 64% explicitly mention respect for copyright.
– 62% believe it is necessary to protect personal data.
Human supervision:
– 98% of the documents highlight the importance of human supervision in the process (human-in-the-loop); a minimal sketch of this gate follows the list.
– All consider that the professional brings critical judgment and cultural context that AI cannot replicate.
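Taken together, these commitments describe a gate in the editorial pipeline rather than a ban. Here is a minimal sketch of how such a gate might look in code, assuming hypothetical names and a deliberately simplified workflow:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    body: str
    ai_assisted: bool
    fact_checked: bool = False
    approved_by: Optional[str] = None  # supervising editor (hypothetical field)

def publish(draft: Draft) -> str:
    # Verification: AI-assisted copy must pass fact-checking first.
    if draft.ai_assisted and not draft.fact_checked:
        raise ValueError("AI-assisted drafts must be fact-checked before publication")
    # Human-in-the-loop: nothing publishes without an editor's sign-off.
    if draft.approved_by is None:
        raise ValueError("No draft publishes without human approval")
    # Transparency: AI-assisted content is labeled for the reader.
    label = "[AI-assisted] " if draft.ai_assisted else ""
    return label + draft.body

if __name__ == "__main__":
    draft = Draft(body="Translated wire copy.", ai_assisted=True)
    draft.fact_checked = True        # verification step
    draft.approved_by = "editor_1"   # human supervision step
    print(publish(draft))            # -> "[AI-assisted] Translated wire copy."
```

The point of the design is that the label, the fact-check and the sign-off are enforced in a single place, so no path to publication can quietly skip any of the three commitments.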
Sailing into the wind
The media’s efforts are titanic, because the current political context means sailing against the wind. On the one hand, there is the Trump administration, historically hostile toward the traditional media: during his first term, more than 600 attacks on journalists were recorded, and attempts to politicize federal institutions such as the Department of Justice and the Federal Communications Commission continue unabated.
On the other hand, there is a strong fragmentation of platforms and a growing reliance on media driven either by AI (new AI-based semantic engines) or by influencers who dominate the public’s attention (TikTok is the star platform of populist politics, as happened recently in Romania with Calin Georgescu, with a strong echo among younger voters).
This situation makes it necessary to redouble efforts around reputation, as the Reuters Institute for the Study of Journalism points out in its latest report: “Companies and governments must redouble their transparency, using independent verification and partnerships with reliable media to counteract disinformation”. It also calls for more regulation, as the European Union is doing with the Digital Services Act and the European Media Freedom Act (fully applicable from August 2025), which seek to create a framework of greater transparency that protects editorial independence.
But we are not alone. Australia is taking a cautious but progressive approach to regulating artificial intelligence in journalism and the media, balancing innovation with the protection of core values. The ACMA’s Media in Australia 2025 report reveals that Australians’ trust in news is declining, which has put the government on alert and underscored the importance of maintaining robust journalistic standards in the age of AI.
The country currently has no AI-specific legislation, leaving the field to existing privacy, data, consumer protection and cybersecurity laws, though all indications are that the upcoming May elections could significantly influence its regulation. In the meantime, Australia has taken matters into its own hands on social networks, banning children under 16 from accessing them. The comparison with which Julie Inman Grant, Australia’s eSafety Commissioner, describes the law helps calm the winds at sea: “We want to keep children swimming between the flags where there is supervision, so that they don’t go into darker waters where there is no supervision”.