What to Look For When South Africa's AI Policy Drops
Apr 7, 2026
South Africa’s draft AI policy marks an important step, but its success will depend on more than good intentions. The real test lies in whether it clearly defines AI and enforces accountability. Without this, high-impact systems shaping decisions in finance, healthcare, and employment risk falling outside regulation. To be effective, the policy must ensure that any system influencing people’s life outcomes is subject to meaningful oversight, independent auditing, and accessible redress. Otherwise, it risks repeating a familiar pattern of strong policy with limited real-world impact.

Cabinet has approved the publication of South Africa's draft AI policy for public comment. For anyone who works at the intersection of technology and society, this is a significant moment, and an overdue one. While the policy was being developed, the technology was not waiting: credit decisions, hiring processes, healthcare triage, and public service delivery have all been quietly reshaped by AI systems that no formal South African framework governed. The policy's arrival is welcome. Whether it arrives with sufficient force is the question.
But South Africans have been here before. We have world-class legislation that does not translate into world-class outcomes. The Protection of Personal Information Act is a sophisticated piece of law. It has not stopped the daily flood of unsolicited marketing calls — the Information Regulator issued its first enforcement notice only in February 2024, more than three years after the Act took effect. Nor has it reversed the accelerating identity theft crisis, with impersonation fraud surging 400% between 2023 and 2024 while the law sat on the books. The gap between what a policy says and what it produces is not a new problem in this country. It is the central problem.
So when the full policy text is released for public comment in the coming days, the question worth asking is not whether it makes the right commitments. Based on what the Cabinet has announced, it likely does. The question is whether it has sufficient teeth to make those commitments real.
Here is the specific test we will be applying at AlgoViva.
The definition of AI will tell you everything
How a policy defines AI determines who falls inside its reach and who does not. A narrow technical definition — one that captures only machine learning models and generative AI systems — creates an immediate and exploitable gap.
Consider financial services. Major banks in South Africa use sophisticated algorithmic models to make credit decisions that determine whether individuals can access home loans, business finance, or credit cards. Many of these models predate the current AI moment. They are statistical, rule-based, and in the strict technical sense, not AI. A narrow policy definition would place them outside the scope of regulation entirely — even as their consequences fall hardest on the people least able to challenge them.

In February 2025, the CEOs of South Africa's biggest banks were called before a joint parliamentary committee to answer allegations of racial profiling in credit decisions. Standard Bank's CEO, defending his institution, told Parliament that the bank was satisfied its lending criteria and decision processes were objective and fair, and specifically cited its AI systems. The models exist. The outcomes are racially skewed. The question of which systems produced those outcomes, and whether they fall within any regulatory framework, remains entirely unresolved.
This is precisely the gap that the concept of regulatory arbitrage describes: the tendency of institutions to define their systems in whatever way keeps them outside the boundary of oversight. As AI tools become more capable, that gap becomes easier to exploit. A bank need not deploy AI directly in its credit scoring model to benefit from it. It can use AI to identify which variables are most predictive, refine the weightings in its traditional model, and then retire the AI system before any decision is made. The deployed model is technically traditional. Its logic was shaped by AI. The people affected cannot tell the difference, and under a narrow policy definition, neither can the regulator.
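To make the arbitrage mechanism concrete, here is a toy Python sketch of the pattern described above. Everything in it is hypothetical: the data, the feature names, and the use of a simple correlation as a stand-in for a real machine-learning model. The point is the shape of the workflow, not the statistics: a learning step picks the weights, then is retired, leaving only a "traditional" rule-based scorecard in production.

```python
# Hypothetical illustration: distilling an "AI-assisted" analysis into a
# traditional scorecard before deployment. All data and names are invented.

# --- Step 1 (offline, "AI" phase) -------------------------------------
# A learning step identifies which variables predict repayment. Here a
# simple correlation stands in for a real ML model (e.g. gradient boosting).
applicants = [
    # (income_band, years_banked, missed_payments) -> repaid (1) / defaulted (0)
    ((3, 10, 0), 1),
    ((1,  2, 4), 0),
    ((2,  6, 1), 1),
    ((1,  1, 3), 0),
    ((3,  8, 0), 1),
    ((2,  3, 2), 0),
]

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

outcomes = [y for _, y in applicants]
# Learned weights: one per feature, frozen after this step.
weights = [
    round(correlation([features[i] for features, _ in applicants], outcomes), 2)
    for i in range(3)
]

# --- Step 2 (deployment) ----------------------------------------------
# The learned weights become a fixed, rule-based scorecard. The learning
# system is retired; only this arithmetic remains in production, and it is
# "not AI" in the narrow technical sense.
def scorecard(income_band, years_banked, missed_payments, threshold=0.5):
    score = (weights[0] * income_band
             + weights[1] * years_banked
             + weights[2] * missed_payments)
    return "approve" if score >= threshold else "decline"
```

A regulator inspecting only the deployed `scorecard` function would see ordinary weighted arithmetic. The AI's influence lives entirely in how `weights` were chosen, which is exactly why a scope definition keyed to the deployed technology, rather than to the decision being made, misses it.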
The EU AI Act, which represents the most developed regulatory framework currently in force, addresses this partly by classifying credit scoring as a high-risk application regardless of the underlying technology. South Africa's policy should meet that standard at minimum. But it should go further by explicitly closing the arbitrage gap: if an algorithmic system makes or materially influences consequential decisions about people's access to financial services, employment, healthcare, or public resources, it should fall within the policy's scope. The trigger should be what the system does to people, not what it is technically called.
The pillars are promising but the mechanism is the question
Minister in The Presidency Khumbudzo Ntshavheni notes that “the policy is structured around six core pillars aimed at promoting the responsible development and ethical deployment of AI,” including pillars on responsible governance and ethical and inclusive AI. These are the right areas of focus. The concern is not the pillars themselves but whether the policy will specify who is responsible for enforcing them, with what powers, and through what process an affected person can seek redress when things go wrong.
South Africa has a particular reason to take this seriously. According to Statistics South Africa's Quarterly Labour Force Survey, the official unemployment rate stood at 31.9% in the fourth quarter of 2024. In an economy under that kind of strain, the decisions most likely to be automated first are not the decisions of executives. They are the decisions about whether someone gets a loan, whether a job application advances, whether a grant is approved. These are the decisions that shape life chances for the majority of South Africans, and they are already being made, in part, by systems that no current framework adequately governs.
A policy with genuine teeth will name a lead institution with investigative authority, not merely advisory functions. It will require independent auditing for high-risk applications rather than relying on self-regulation by the industries with the most to lose from scrutiny. And it will provide accessible redress mechanisms for individuals affected by automated decisions, not only compliance pathways for the organisations deploying them.
What we are watching for
When the policy is published, read the definition of AI first. Then read the enforcement provisions. Everything else follows from those two things. A policy that defines AI broadly enough to capture consequential algorithmic systems, and that backs its ethical commitments with independent oversight and real accountability, would be genuinely world-leading. It would also be a meaningful departure from the pattern of excellent intentions and insufficient follow-through that has characterised too much of South Africa's technology governance.
The policy is out for comment precisely because it is not yet finished. That is the opportunity. AlgoViva will be making a formal submission when the full text is released. We invite others who care about getting this right to do the same.
