South Africa withdraws national AI policy after at least 6 of 67 academic citations found to be AI-generated hallucinations



TL;DR

South Africa’s Communications Minister Solly Malatsi withdrew the country’s draft national AI policy after News24 discovered that at least 6 of its 67 academic citations were AI-generated hallucinations: fake articles attributed to real journals. The policy was approved by Cabinet in March and published for public comment. Malatsi called it an “unacceptable mistake” and promised consequence management. The scandal leaves South Africa without an AI governance framework and raises questions about its institutional capacity to regulate the technology.

South Africa’s Department of Communications and Digital Technologies spent months drafting a national artificial intelligence policy. It proposed a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsman, a National AI Safety Institute, and an AI Insurance Superfund. It outlined five pillars of AI governance: skills capability, responsible governance, ethical and inclusive AI, cultural preservation, and human-centered deployment. It adopted a risk-based approach inspired by the EU AI Act. Cabinet approved the draft on March 25, and the Government Gazette published it on April 10 for public comment.

Then News24, the South African news outlet, reviewed the bibliography and discovered that at least six of the document’s 67 academic citations did not exist. The journals were real. The articles were not. The authors credited with foundational research on AI governance had never written the papers attributed to them. The editors of the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy independently confirmed to News24 that the cited articles had never appeared in their pages. The most plausible explanation, according to Communications Minister Solly Malatsi, is that the drafters used a generative AI tool and published the result without checking a single reference. A government policy designed to govern artificial intelligence was undone by artificial intelligence that no one governed.

The withdrawal

Malatsi announced the withdrawal on April 27, calling the fabricated citations an “unacceptable lapse” that “compromised the integrity and credibility of the policy project.” He said consequence management would follow for those responsible for drafting and quality control. “This failure is not a mere technical issue,” the minister said. The chair of the parliamentary portfolio committee offered a more concise assessment, suggesting that the department “omit the use of ChatGPT this time” when rewriting. The document will be revised before being republished for public comment, but no timeline has been set. South Africa now lacks a formal AI governance framework at a time when governments around the world are grappling with how to regulate AI, and the country’s credibility as a serious participant in that conversation has taken a blow that will outlast the policy revision.

The scandal is not simply that fake citations appeared in a government document. They appeared in a government document about artificial intelligence, written by the department responsible for the country’s digital technology strategy, during the exact period when the world’s most consequential AI governance debates are playing out in Brussels, Washington, and Beijing. The EU AI Act, the most ambitious regulatory framework for artificial intelligence, is grappling with lagging standards and an implementation timeline that has been pushed back to 2027 for high-risk systems. The United States has no federal AI legislation and is watching states legislate independently while the White House tries to preempt their efforts. China has enacted AI regulations but applies them selectively. Against that backdrop, South Africa offered a policy that could not survive a literature review.

The pattern

South Africa’s hallucinated citations are an extreme case of a problem that is quietly spreading through institutions that use generative AI for research and writing. A study reported in Nature found that 2.6 percent of academic articles published in 2025 contained at least one potentially hallucinated citation, up from 0.3 percent in 2024. If that rate holds for the roughly seven million academic publications in 2025, more than 110,000 articles contain invalid references. GPTZero, a Canadian AI-detection startup, analyzed more than 4,000 research papers accepted at NeurIPS 2025, one of the world’s largest AI conferences, and found more than 100 hallucinated citations across at least 53 papers. In a separate multi-model study, only 26.5 percent of AI-generated bibliographic references were completely correct. The problem is structural: large language models generate citations by probabilistic token prediction rather than information retrieval. They don’t look up papers. They predict what a citation should look like based on patterns in their training data, and when the prediction is confident enough, they produce a reference that reads as authoritative but points to nothing.
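The countermeasure is exactly the step the drafting process skipped: checking each reference against a bibliographic source before publication. A minimal sketch of that pass is below. In practice a checker would query a service such as Crossref or OpenAlex; here a small local index stands in for the database, and every title and journal name is an illustrative placeholder, not a real citation.

```python
# Sketch of a citation-verification pass. The index below is a stand-in
# for a real bibliographic database; all entries are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    title: str
    journal: str

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical strings match."""
    return " ".join(text.lower().split())

def verify(citations, known_index):
    """Split citations into (verified, unverified) against a known index.

    known_index maps a normalized journal name to the set of normalized
    titles actually published there. Anything not found is flagged for
    manual review, not declared fake: absence from a local index proves
    nothing on its own.
    """
    verified, unverified = [], []
    for c in citations:
        titles = known_index.get(normalize(c.journal), set())
        (verified if normalize(c.title) in titles else unverified).append(c)
    return verified, unverified

# Illustrative index and reference list (placeholder entries).
index = {
    "ai & society": {"governing algorithms in practice"},
}
refs = [
    Citation("Governing Algorithms in Practice", "AI & Society"),
    Citation("A Paper That Was Never Written", "AI & Society"),
]
ok, suspect = verify(refs, index)
print(len(ok), len(suspect))  # 1 verified, 1 flagged for manual review
```

The point of the design is that the check is cheap and mechanical: a reference either resolves to a real record or it lands on a review list for a human to confirm, which is the workflow a Cabinet-bound document never went through.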

The South African case is distinctive not because the technology hallucinated, which is an inherent and well-documented limitation of generative AI, but because the hallucinations were published in an official government policy document that went through Cabinet approval without anyone checking the references. The drafting process included public officials, subject-matter consultations, and ministerial review. Dumisani Sondlo, the department’s AI policy lead, had previously described the policy’s development as “an act of recognition that we don’t know enough.” That recognition did not extend to recognizing that the tool used to help draft the policy was not itself reliable. The six fake citations News24 identified are the ones that were detected. It has not been publicly confirmed whether the remaining entries among the document’s 67 references are genuine. The entire reference list is now under suspicion and, by extension, so is the analytical basis on which the policy’s proposals were built.

The implications

The immediate consequence is that South Africa’s AI governance timeline has been reset. The draft policy, which aimed to position the country as a leader in responsible AI adoption on the African continent, will need to be redrafted, consulted on, and resubmitted. The damage to institutional credibility extends beyond the policy itself. If the department responsible for governing AI cannot verify whether the sources in its own policy document are real, the question is whether it has the capacity to evaluate the AI systems it proposes to regulate. The policy envisioned a multi-regulator model in which AI governance and human oversight would be integrated into existing oversight frameworks rather than centralized under a single authority. That model requires each participating regulator to have enough technical expertise to evaluate AI systems in its sector. The hallucination scandal does not inspire confidence that the coordinating department will meet that threshold.

The broader lesson is not that governments should avoid using AI in policy development. It is that AI’s failure mode is not dramatic. It doesn’t crash. It displays no error message. It produces fluent, well-formatted, confident text that looks exactly like the output of a competent researcher. The fake citations in South Africa’s AI policy were not obviously wrong. They were plausible. They cited real journals. They attributed work to real people. They followed the formatting conventions of academic references. The only way to detect them was to check whether each one actually existed, a task that demands exactly the kind of methodical human verification AI is supposed to make unnecessary. Growing public distrust of AI is not irrational. It is a response to a technology that is both powerful enough to write national policy and unreliable enough to fabricate the evidence on which that policy rests. South Africa’s embarrassment is singular, but the underlying failure, using AI without the capacity to verify its output, is not. It is happening in universities, law firms, newsrooms, and government departments around the world. South Africa is simply the first government to publish the receipts. The challenges of implementing AI regulation are real, but they start with a prerequisite the South African department failed to meet: understanding what the technology does before trying to write the rules for it.


