Censorship Has Changed, But Have Our Rights?

By Tanatswa Murewi, Judith Kama Asomelo, & Kudakwashe Chitapi

Censorship used to be obvious; textbook, if you will. Governments banned books. Newspapers were shut down. Radio stations were silenced. If a protest was planned, police blocked the streets. Suppression was visible, direct, and often loud.

Today, censorship looks different.

In Africa’s digital age, social movements trend online before they ever reach the streets. From Nigeria’s #EndSARS protests to Kenya’s finance bill demonstrations, technology has transformed how citizens organise, speak, and demand accountability. But as activism has evolved, so too has control.

Modern censorship is no longer just about banning content. It operates quietly through internet shutdowns, digital surveillance, algorithmic filtering, and biometric tracking. Instead of confiscating pamphlets, authorities can now monitor hashtags. Instead of arresting organisers after a rally, digital tools can identify them before one even begins.

This shift changes more than methods; it reshapes society itself. People begin to self-censor when they know they are being watched. Online spaces that once empowered communities can become tools of intimidation. Technology that promises security can also narrow the space for dissent.

The question is no longer whether censorship exists. It is how it has adapted.

As our political participation moves online, our constitutional rights must travel with us. If freedom of expression and peaceful assembly are to survive the digital turn, we must confront how power now operates—not through burning books, but through controlling data, visibility, and digital space.

Freedom of expression remains one of the most protected rights in modern constitutions. Traditionally, it has been understood as protection against government interference: arrests, bans, or direct state control of speech. But today, much of the control over what we say online is not exercised by governments in courtrooms. It is exercised by private platforms using artificial intelligence.

This shift raises difficult constitutional questions.

If a government bans a newspaper, citizens can challenge that decision in court. But if an algorithm removes a post or suspends an account, who is responsible? Is it the company? The programmer? Or the state that may have pressured the platform behind the scenes? More importantly, does constitutional protection even apply in these digital spaces?

Unlike judges, algorithms do not consider context in a human way. They rely on patterns, keywords, and automated rules. While this makes moderation faster and more efficient, it can also make it less transparent. Users often receive generic notices without clear explanations or meaningful avenues for appeal.
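To make the point concrete, here is a minimal sketch of the kind of keyword-based filtering described above. It is a toy illustration, not any platform's actual system; the blocklist and posts are invented for the example. Notice that it cannot distinguish a news report from a threat, and that the user receives only a generic outcome with no explanation.

```python
# Hypothetical blocklist for illustration only.
BLOCKED_KEYWORDS = {"attack", "riot"}

def moderate(post: str) -> str:
    """Flag a post if it contains any blocked keyword, regardless of context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    if words & BLOCKED_KEYWORDS:
        return "removed"  # generic notice: no reason given, no avenue for appeal
    return "allowed"

# A factual news report and a call for peaceful assembly fare very differently:
print(moderate("Police used force to stop the riot downtown."))  # removed
print(moderate("Organisers called for a peaceful march."))       # allowed
```

Real moderation pipelines are far more sophisticated, but the structural problem is the same: a rule fires on a pattern, not on meaning, and the affected user sees only the outcome.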

In such cases, the right to freedom of expression may feel distant, even if it technically still exists.

Across many African constitutional systems, freedom of expression was drafted to restrain state authority. Yet today, private technology companies shape public debate on a scale no state broadcaster ever could. If censorship has evolved from visible state action to invisible algorithmic control, then constitutional thinking must also evolve.

The question is not whether free speech still exists, but whether its protections are strong enough for this new reality.

Ultimately, AI moderation exists because the internet is vast and people want it to be safer. In simple terms, platforms use computer programs, sometimes with people checking the results, to spot and remove harmful content such as violent images, scams, or hate speech. These systems help manage billions of posts quickly and spare human reviewers from seeing disturbing material every day.

However, they are not perfect. The software learns from examples, and if those examples are biased or incomplete, it can make unfair or mistaken decisions.
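This learning-from-examples dynamic can be sketched in a few lines. The toy "training set" below is invented for illustration: a slang word happens to appear only in the posts labelled harmful, so the system inherits that association and penalises an innocuous greeting that uses the same word. Real systems use statistical models, not raw word counts, but the bias mechanism is analogous.

```python
from collections import Counter

# Hypothetical, deliberately biased training examples: the slang word "fam"
# appears only in posts that were labelled harmful.
harmful_examples = ["we will wreck them fam", "wreck everything fam"]
benign_examples = ["lovely weather today", "great match today"]

harmful_counts = Counter(w for p in harmful_examples for w in p.split())
benign_counts = Counter(w for p in benign_examples for w in p.split())

def score(post: str) -> int:
    """Positive score leans 'harmful', based purely on word co-occurrence."""
    return sum(harmful_counts[w] - benign_counts[w] for w in post.split())

# An innocuous greeting using the same slang inherits the bias:
print(score("hello fam"))  # positive, flagged only because of the word "fam"
```

The unfairness here comes entirely from the examples, not from any rule anyone wrote down, which is why audits of training data matter as much as audits of the software itself.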

The way forward is a combination of better technology, clear rules, and public oversight. Technologists must make moderation tools more transparent and explainable so users understand why a post was removed. Independent researchers should be able to audit these systems and publish their findings. Companies should also create easy appeal processes so people can challenge mistakes.

Governments have a role to play, but it must be exercised carefully. Smart laws can protect basic rights such as notification, a fair opportunity to appeal, and regular reporting by platforms. However, heavy-handed regulations risk stifling innovation or imposing a single set of values worldwide.

For African democracies navigating both technological growth and fragile institutional trust, the challenge is especially urgent. A sensible middle path combines legal minimums such as transparency, audits, and redress with industry standards and international cooperation to avoid conflicting rules across countries.

The goal is straightforward: to build systems that reduce real harm, respect freedom of expression, and give people clear ways to challenge wrongful moderation. This must be guided by evidence, public input, and continuous improvement.

If Africa’s digital public square is to remain a space for dissent, organisation, and democratic participation, then constitutional protections must evolve alongside the technologies that now shape them.
