Minister warns against both blind enthusiasm and panic: AI may draft, but teachers must decide, to protect thinking and human judgment.
Artificial intelligence (AI) is no longer coming to education. It is already here.
It is in lesson preparation, homework, assessment, administration and the growing number of digital tools finding their way into schools. Teachers are using it. Pupils are using it. Education systems are beginning to feel its effects.
The question is, therefore, not whether AI will enter our classrooms. It already has. The real question is whether we will govern it deliberately or allow it to shape education by default.
As minister of basic education, I am clear on one point: South Africa must not drift into AI in education. We must lead with purpose.
That means rejecting two equally flawed instincts. The first is blind enthusiasm – the belief that every new tool is progress. The second is panic – the belief that the answer is simply to ban, block or fear what we do not yet fully understand.
Neither is good enough. Our task is harder and more important: to use AI where it strengthens education and to stop it where it weakens the very human capacities education is meant to build.
Because education is not just about producing answers. It is about developing the ability to think, question, reason, create and exercise judgment.
In a world where a machine can generate a plausible answer in seconds, that purpose becomes even more important, not less.
AI can help teachers recover time, support lesson planning, improve access to multilingual content, reduce repetitive administrative work and provide useful educational support if used well.
Used badly, it can encourage shallow learning, erode academic integrity, compromise privacy and deepen inequality between schools that are already advantaged and those that are not.
That is why we are taking practical steps. We are strengthening national coordination on AI in education so that South Africa’s response is coherent rather than fragmented.
In a system like ours, fragmentation is not innovation. It is a recipe for widening inequality.
One pilot here, another initiative there, different tools, different standards and no common direction do not add up to progress. They add up to confusion.
We are also reviewing our digital education policy framework so that it speaks to the realities of today. The world has changed profoundly. Generative AI has forced us to rethink teaching, learning and assessment. Policy must now do two things at once: enable innovation and protect the integrity of learning.
We are also developing practical guidance for schools. Teachers do not need abstract reassurance. They need clarity.
They need to know what kinds of AI use are appropriate, where the real opportunities are, what the red flags look like and what safeguards must apply.
One of the most useful principles in this work is simple: not all AI use in education is equal.
AI that helps a teacher draft a lesson, simplify a text, translate material or reduce administrative burden is not the same as AI that interacts directly with a pupil, shapes a pupil’s pathway or informs a high-stakes judgment.
That distinction matters. In many cases, the wisest place to begin is with teacher-facing uses of AI, where the value is immediate and the risk is lower.
The principle is straightforward: AI may draft, but teachers must decide.

AI is also exposing weaknesses in how we design some learning tasks. If a pupil can complete a homework exercise meaningfully by copying from a machine, then we should ask whether the task itself required enough thought in the first place.
The answer to AI cannot be surveillance alone. It must also be better pedagogy. We need tasks that require pupils to apply, reflect, evaluate and connect knowledge to their own understanding.
We must be equally firm about the lines that cannot be crossed. AI may help flag patterns, support consistency or assist with low-risk processes.
But it must not make high-stakes decisions about pupils on its own. It must not decide who is disciplined, who progresses, who is labelled at risk or who is denied opportunity without human oversight.
Human judgment, fairness and accountability must remain at the centre. The same is true of data. Schools should not adopt AI tools casually without understanding what data is being collected, where it goes, who can access it and how it may be reused. If there is no clarity, there should be no use.
South Africa should not be anti-AI. But we must be firmly against the unintentional adoption of AI. If we get this right, AI can help us build a stronger, more responsive and inclusive education system. It can give teachers time back. It can support pupils more effectively. It can improve access.
But the goal is not technological adoption for its own sake. The goal is to protect and strengthen human agency: the capacity of every pupil to think, question, create and act with judgment. That is the work of education. And that we must never outsource.