The AI policy that AI broke

South Africa’s draft national AI policy is not the first hallucination-fuelled blunder of its kind – and it won’t be the last. But it does offer the department of communications & digital technologies an opportunity to redevelop the policy guided more by practical experience and less by abstract theory.

Communications minister Solly Malatsi over the weekend withdrew the policy after News24 revealed that the document’s reference list cited sources that do not exist. Malatsi promised “consequence management” for those who let the errors through.

The local reaction has been a predictable mix of outrage, mockery and political point-scoring. Khusela Diko, chair of parliament’s portfolio committee on communications, quipped that the redraft should be done “without using ChatGPT this time”. But hallucinations of this kind are becoming more prevalent. The pattern that played out in Pretoria has already played out in Sydney and Manhattan.

Last year, Deloitte was forced to refund part of a fee to the Australian federal government after a report it produced for the department of employment and workplace relations was found to contain fabricated academic references and a quote attributed to a court judgment that did not exist. Deloitte conceded that generative AI had been used in the drafting and republished a corrected version. The original had cleared the firm’s own quality assurance process.

In the US, the precedent is older. In 2023, a Manhattan federal judge sanctioned two lawyers after they filed a brief that cited six entirely invented court decisions, complete with realistic-sounding case names and judicial reasoning. The lead lawyer had used ChatGPT and even asked the chatbot whether the cases were real. It assured him they were. Since then, similar sanctions have been handed down against attorneys in multiple US states. Public databases now track the growing list of court filings caught using hallucinated citations.

Blunders

South Africa has had its own version. Fabricated case law surfaced in two local matters, prompting the Legal Practice Council in 2025 to begin developing a framework to govern AI use among legal professionals.

These blunders offer an interesting insight into the AI vs humans debate, which typically centres on jobs and whether human livelihoods are at risk of being cannibalised by the emergent technology. The disciplinary action that follows a hallucination-driven fiasco could lead to dismissal for the professionals involved – not because AI did someone’s job better than they could, but because it did so poorly and the human failed to pick it up. In these cases, it is the people using the technology who are a threat to themselves.

Read: Why AI chatbots are a legal liability waiting to happen

But pinning the blame squarely on individuals also misses a bigger opportunity. The withdrawn draft proposed seven new institutions to govern AI in South Africa – a National AI Commission, an AI Ethics Board, an AI Safety Institute and others – yet the department drafting it had no apparent controls on its own use of the technology. The firing of an individual in the procedural chain – whether they hold a clerical or senior position – will not magically produce a sound governance framework. Grappling with the error and asking what could and should have been done differently will.

The author, TechCentral’s Nkosinathi Ndlovu

Three responses have started to work elsewhere:

  • The first is disclosure: A growing number of US federal courts now require lawyers to certify whether AI was used in preparing a filing and to confirm that every citation has been independently verified. The EU’s AI Act imposes provenance and transparency requirements on certain categories of AI output. South African cabinet submissions and gazetted policy drafts have no equivalent obligation. They should.
  • The second is verification: The cost of checking every citation in a policy document – confirming that the journal exists, the article exists and the article says what it is claimed to say – is trivial. It can be partly automated and partly handled by a junior official in an afternoon. The reason it does not happen is that nobody is required to do it. It should be a requirement, applied uniformly across departments and entities, with sign-off recorded in writing before any document goes to cabinet or the Government Gazette.
  • The third is procurement transparency: Consultants writing policy for the South African government should be required to disclose which AI tools they used in drafting and how their work was checked. If the state does not want a domestic Deloitte moment, this is the cheapest insurance available.
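The verification step above is indeed easy to partly automate. As a rough illustration (a minimal sketch, not an official tool — the function names and the sample references are hypothetical), a short script could pull every DOI out of a reference list and build a checklist of entries still awaiting confirmation, with anything lacking a DOI flagged for manual review:

```python
import re

# Matches DOI-like strings, e.g. 10.1000/xyz123 (prefix 10.NNNN, then a suffix).
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b")

def extract_dois(reference_text: str) -> list[str]:
    """Return every DOI-like string found in a block of reference text."""
    return DOI_PATTERN.findall(reference_text)

def verification_checklist(reference_text: str) -> dict[str, str]:
    """Mark each extracted DOI as 'unverified' until someone (or a follow-up
    script resolving it via doi.org) confirms the cited work exists.
    References with no DOI at all won't appear here and need manual checking."""
    return {doi: "unverified" for doi in extract_dois(reference_text)}

# Hypothetical reference list, as might appear in a policy document's annex.
refs = """
[1] Smith, J (2021). AI governance frameworks. doi:10.1000/xyz123
[2] A report cited with no DOI at all -- flag for manual review.
"""
print(verification_checklist(refs))
```

Resolving each extracted DOI against doi.org (or querying a bibliographic database such as Crossref) would complete the loop; the point is that even this crude first pass costs minutes, not days.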

Read: Malatsi withdraws AI policy after fictitious sources scandal

The withdrawn draft is yesterday’s problem. The technology that produced its hallucinated references is now in every government department and parastatal, and in every private-sector consultancy that writes for the state. The sooner the communications department learns from this mistake, the faster it – as the lead policy department on digital matters – can transfer those lessons to the rest of government. Until that happens, South Africa will keep adding entries to the global list.  – © 2026 NewsCentral Media
