South Africa chose flexibility over control in its new AI policy

As many countries rush to centralise AI oversight, South Africa is taking a different approach, spreading responsibility across existing agencies and favouring coordination over top-down control.

On April 2, 2026, the South African cabinet published a draft version of the policy, dated 24 October 2024, for public comment. Spearheaded by the Department of Communications and Digital Technologies (DCDT), the policy is expected to be fully implemented in the 2027/2028 financial year.

“The AI policy aims to ensure that both the benefits and risks brought by AI are evenly distributed across society and generations,” the South African Cabinet said in a statement.

No super-regulator

In countries like Nigeria and Kenya, policymakers are moving toward centralised AI oversight. Dedicated agencies, commissioners, and top-down structures are becoming the standard. Nigeria’s proposed National Digital Economy and E-Governance Bill follows a prescriptive, risk-based approach inspired by the EU AI Act. High-risk AI systems—especially in surveillance, finance, and public administration—would need licensing, audits, and annual impact assessments.

Kenya’s 2026 AI bill takes a similar risk-based path but adds a strong political dimension. With elections approaching, it targets synthetic media and AI-driven manipulation, imposing criminal penalties for non-consensual deepfakes. At the same time, it maintains flexibility for innovation through regulatory sandboxes, allowing startups to test new AI products under lighter oversight.

South Africa is doing the opposite.

Instead of creating a new regulator, South Africa’s AI policy leans on institutions already embedded within each sector. The Financial Sector Conduct Authority (FSCA) and the South African Reserve Bank will oversee financial AI systems. The South African Health Products Regulatory Authority (SAHPRA) will handle AI in medical diagnostics. The Information Regulator retains its role as the primary enforcer of data privacy under the Protection of Personal Information Act (POPIA).

The logic is that regulators closest to the problem are best placed to manage it. A mining regulator understands mining risks. A financial regulator understands financial systems. Why build a new bureaucracy when expertise already exists?

Regulating by risk

The backbone of South Africa’s AI framework is risk-tiered regulation. Not all AI systems are treated equally. Instead, they are grouped into four categories: unacceptable, high, limited, and minimal risk.

At the top end, certain applications, such as manipulative behavioural systems or forms of mass surveillance, are banned outright. High-risk systems, such as those used in hiring, lending, or healthcare, face stricter scrutiny, including audits, impact assessments, and requirements for human oversight. Lower-risk applications operate with lighter-touch rules.

The idea is to focus regulatory firepower where it matters most. Rather than blanket restrictions, the system sends a clear signal: the higher the potential harm, the heavier the compliance burden.

In theory, this creates space for innovation while maintaining safeguards. In practice, it depends heavily on execution.

To hold this distributed system together, the policy proposes a web of coordinating bodies. A National AI Coordination Office would guide implementation and set standards. Inter-departmental forums would align ministries. Advisory panels and multi-stakeholder groups would feed in technical and ethical expertise.

At the centre sits an AI Advisory Council, a non-executive body bringing together researchers, industry leaders, legal experts, and civil society. Its role is to advise, not enforce.

And that is the crux of the approach: none of these bodies has binding powers. They can guide, recommend, and coordinate, but they cannot compel action.

The enforcement gap

This design introduces a fundamental tension. Distributed oversight offers flexibility and sector-specific insight, but it also risks fragmentation.

The framework is clear on what needs to be done: classify risk, conduct audits, ensure transparency. It is less clear on who ultimately ensures compliance. Enforcement is left to existing regulators, each with different capacities, priorities, and levels of technical expertise.

The result could be uneven oversight. Financial regulators, often well-resourced, may enforce rules rigorously. Other sectors could lag. Gaps and overlaps may emerge. Companies, in turn, may learn to navigate these inconsistencies, exploiting weaker links in the system.

Capacity is another constraint. Risk-tiered regulation is technically demanding. It requires the ability to assess evolving AI systems, monitor real-world performance, and adapt rules as technologies change. Many regulators are already stretched. Building these capabilities will take time—and money.

Even the act of classification is not straightforward. AI systems evolve. A chatbot that begins as a low-risk tool can become a high-stakes decision engine as it scales or integrates new data. Determining risk levels requires constant reassessment, raising the possibility of inconsistent rulings across sectors.

For businesses, that creates uncertainty. A product deemed compliant today could face stricter rules tomorrow.

Beyond governance, the framework is also an industrial strategy. It emphasises the need for local datasets, African language processing, and the integration of indigenous knowledge systems.

The goal is to make AI systems more relevant and less biased. Models trained on foreign data often fail to capture local realities, reinforcing exclusion rather than solving it. By investing in local data infrastructure, South Africa hopes to build a more inclusive AI ecosystem.

But this ambition adds another layer of complexity. Data governance, privacy, and data-sharing frameworks must now be coordinated across the same fragmented system that governs AI itself.
