South Africa has decided to withdraw its draft national AI policy after it was revealed that some regulations were generated using AI and cited fabricated sources, UNN reports, citing The Independent.
Details
The country's Minister of Communications, Solly Malatsi, withdrew the draft policy after it emerged that at least 6 of its 67 academic citations were AI-generated and linked to non-existent scientific papers.
"The most plausible explanation is that AI-generated references were included without proper verification. This should not have happened," Malatsi said.
"This failure is not just a technical issue, but a compromise of the integrity and credibility of the draft policy," he wrote in a post on X.
The draft policy was presented for public consultation and aimed to position the country as a leader in AI innovation while attempting to address ethical, social, and economic challenges associated with AI use.
It outlined plans to establish new institutions in the country to oversee AI use, including a national AI commission, an AI ethics council, and an AI regulatory body.
The draft regulations also outlined plans to provide tax incentives, grants, and subsidies to encourage private sector collaboration in building AI infrastructure in the country.
The draft is expected to be revised before being republished for public consultation.
The issue surfaced when the South African publication News24 found that at least 6 of the document's 67 academic references did not exist, although the journals in which they supposedly appeared were real.
Editors of journals, including the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy, independently confirmed that the cited articles were fake.
The Minister of Communications stated that there would be consequences for those responsible for drafting the policy.
"This unacceptable lapse proves why vigilant human oversight of AI use is crucial. It is a lesson we are learning with humility," he wrote on X.
Addendum
The incident highlights the growing problem of academics and administrators using generative AI for research and document drafting, the publication notes.
A study published in the journal Nature showed that over 2.5% of scientific papers published in 2025 contained at least one potentially fabricated reference, compared to 0.3% in 2024.
This amounts to over 110,000 papers published in 2025 containing invalid references "hallucinated" by artificial intelligence.
These are confident but fabricated results that AI models produce when relevant information is missing from their training data, the publication writes.
Large language models, such as OpenAI's ChatGPT and Google's Gemini, are designed to predict the most likely next word in a sequence, not to verify facts.