The double-edged sword of AI in cybersecurity

Artificial intelligence has long lurked in the shadows of cybersecurity. It has filtered abnormal logons, flagged odd traffic and traced malware signatures. Recently, though, with OpenAI, Google and others pushing out large language models, its claws have sharpened.

Now AI fights on two fronts. It defends us and attacks through us.

In South Africa, we’re feeling both sides. On one hand, defenders can lean on AI to sift noise, to spot threats early. On the other, attackers mine the same tools. They scan for weak spots, conjure phishing that reads like human speech and spin up deepfakes that fool both brains and systems.

If we want to stay ahead, we need clarity. We need to see how malefactors are using AI. We need to understand where it truly empowers defenders. Only then can we build defences that last, not patchwork that fails when the next wave hits.

Threat actors on AI's payroll

Generative AI lowers the barrier to entry. You no longer need a deep bench of coders to craft malware or to launch phishing at scale. AI becomes the craftsman. Social engineering emails today mimic a manager's tone. They use your jargon and they sound plausible.

Deepfakes are growing in sophistication. A few well-chosen photos and some hours of video or voice are enough to fabricate a false meeting or a phony instruction, and the damage can ripple outward: false evidence, political manipulation and reputational ruin.

A case in point was the AI-assisted impersonation of US Secretary of State Marco Rubio via Signal to mislead foreign ministers. This isn't sci-fi. It's now. If we don't verify content fast, decisions will be made on distortions.

AI as our shield

There is hope. The same technologies that let attackers hone their tools can let defenders see through deception. AI can detect glitches in video, audio anomalies and irregular behaviour across systems. It can sort through millions of alerts, pointing analysts toward what actually matters.
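
To make that concrete, here is a minimal sketch of how unsupervised anomaly detection can triage logon events, using scikit-learn's IsolationForest. The features, thresholds and values are illustrative assumptions, not a prescription for any particular product.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical logon features: hour of day, failed attempts, megabytes moved.
    # In practice these would come from a SIEM or log pipeline.
    rng = np.random.default_rng(0)
    baseline = np.column_stack([
        rng.normal(13, 2, 500),   # logons cluster around business hours
        rng.poisson(1, 500),      # the odd failed attempt is normal
        rng.normal(50, 10, 500),  # typical data volumes
    ])

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(baseline)

    # A 03:00 logon with a dozen failures and a large transfer stands out.
    suspicious = np.array([[3, 12, 900]])
    print(model.predict(suspicious))  # -1 means "flag for an analyst"

The point is not this particular algorithm. It is that a model trained on normal behaviour can surface the one event in a million that deserves a human's attention.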

In South African firms, especially those with resource constraints, this matters. AI-powered threat detection can elevate junior teams. It can automate pattern recognition and can help manage vulnerabilities before they become crises.

AI also brings nuance: not every device or user behaving oddly is malicious. AI learns context. It suggests responses based on past incidents. It helps build resilience in environments that are fluid and complex.

Keeping people in the loop

Technology cannot run on autopilot. Human oversight is not optional. Someone must own the decision and the risk. They must understand the trade-offs because AI will make mistakes. Bias seeps in and context is ignored if data is skewed (or poisoned).

AI tools need interpretable designs. Security teams must stay educated, and they must be diverse. South Africa's own diversity makes the point: different languages, different cultures and different threat models. If design ignores that, the risk multiplies.

Laws, norms and what needs fixing

Globally, regulation of AI is moving faster than many expect. The European Union's AI Act is already enforcing risk categories. High-risk uses require strict governance, prohibited practices are banned outright and companies are held to standards.

In South Africa, the draft National AI Policy Framework was released in late 2024 and has since gone through public consultation. With that process having closed in April 2025, South Africa is setting the stage for enforceable, ethical AI law by 2026.

Its goal is balance: harnessing AI’s benefits while weighing ethical, social and economic impact. The framework calls for human-centred AI and sets out pillars that matter: skills, infrastructure, ethics and privacy. It also leans on stakeholder input to shape the policy that will anchor future legislation.

But we do not yet have a specific AI law.

What local businesses must do now

  1. Mandate oversight in AI systems used in sensitive sectors: Financial, government and healthcare systems using AI must have audit trails, bias review and explainability (a sketch of such an audit trail follows this list).
  2. Board accountability: Directors must understand that AI deployment is not just technical, but ethical, regulatory and reputational.
  3. Certify AI products: There must be minimum standards, with security and privacy built in. AI agents need identity management of their own. Treat AI systems as actors, because they are.
  4. Strengthen legal frameworks so transgressions have consequences: Policy frameworks must become law. Enforcement must follow.
  5. Data governance and consent: The Protection of Personal Information Act gives us tools. However, in AI contexts we need dynamic and transparent consent. Users need to know what they sign up for. Bias and fairness audits must be regular.
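
As a sketch of what the audit trail in point 1 might look like in practice, here is one assumed design: every model decision is logged with its inputs and a digest, so reviewers can later reconstruct and challenge it. The function names and record fields are hypothetical, not a standard.

    import hashlib
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("ai_audit")

    def audited(model_name, model_fn):
        # Wrap a model call so every decision leaves a reviewable record.
        def wrapper(features):
            decision = model_fn(features)
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model": model_name,
                "input": features,
                "decision": decision,
            }
            # A digest makes tampering with stored records detectable;
            # production systems would append to write-once storage.
            record["digest"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            audit_log.info(json.dumps(record))
            return decision
        return wrapper

    # Hypothetical loan-screening model wrapped for auditability.
    screen = audited("credit-risk-v1",
                     lambda f: "approve" if f["income"] > 20000 else "review")
    print(screen({"income": 35000, "province": "Gauteng"}))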

The future of defence in South Africa

We are at a crossroads. The AI that threatens us can also protect us. The question is: will we build walls, or durable shields? Will we legislate and regulate, or lag and react?

If South African organisations embed security, ethics and human values into every AI deployment, we will not just survive the coming years; we will shape how AI is used across Africa. We can show that progress and responsibility are not incompatible.

Because in cybersecurity, as in national identity, keeping what is precious depends on what we defend, how we defend and who we let define the terms.