Media Monitoring Africa tackles the use of AI in media newsrooms and politics
Media Monitoring Africa and a human rights lawyer worked on guidelines for media organisations and politicians.
Media Monitoring Africa (MMA) hosted a discussion around AI, disinformation, and election integrity at the Goethe-Institut on April 9.
The discussion opened with an overview of the AI situation in South Africa, led by Dimitri Martinis, a member of the Forum on Information and Democracy.
He launched the South African report, Artificial Intelligence in the Information and Communication Space, and gave an overview of its findings.
According to the forum’s website, “The rapid development and advancements in artificial intelligence (AI) continue to transform the global information and communication space at a pace nearly unseen among recent technological innovations. It transforms how we create, disseminate, and consume information and poses critical challenges to democratic processes and fundamental rights. We believe democratic institutions must lead the development and implementation of democratic principles and rules to govern the development, deployment, and use of all aspects of AI in the information space.”
The forum established an international policy working group led by 14 diverse experts to elaborate recommendations to safeguard our democratic information space.
Martinis explained that the following factors contributed to setting up the narrative of the report:
- The philosophical orientation – a human rights-based approach to policy and regulation
- Development perspective – SA in Africa and the global south
- Understanding of AI systems – focus on those enabling and emerging in the ICT space, specifically media, including public interest media
- Approach to regulation – G5, concerning the regulation of networks, services, and competition.
Human rights lawyer Tina Power said she and MMA established the guidelines for media organisations and political parties using generative AI, so that AI can be integrated into different spaces in a transparent way.
“One of the foundational purposes of this is we have seen many guidelines pop up but these are from the global north and there is a pressing need for the global south to have a voice and we wanted this to be from an African perspective.”
Core principles for media organisations:
- Links to existing journalistic principles – Linking AI guidelines to broader statements of ethics is good practice, showing they are continuous with the values of journalism.
- Transparency – In the interest of fostering public trust, disclosing as much detail as possible on the use of AI across the journalistic process is advisable.
- Human oversight – This is vital for maintaining accuracy and public trust.
- Allowed and prohibited uses – Several guidelines explicitly flag in which instances they will and will not use AI systems.
- Privacy and confidentiality – It is often recognised that organisations must be careful when providing sensitive information to third-party platforms that operate AI systems, to ensure that sources are adequately protected.
- Algorithmic bias – As part of the efforts to counter algorithmic bias, consideration should be given to using AI systems that have been fine-tuned for local contexts.
- Training and literacy – Media organisations are encouraged to educate their workforce on how to best use AI systems.
- Cooperation and dependency – There is a need for both internal and external collaboration to ensure that AI systems are used ethically.
- Enforcement – Few existing guidelines discuss their enforceability. Thought ought to be given to what actions may follow from their violation, such as being sanctioned by the relevant media associations.
- Media diversity – Media organisations ought to be mindful of their audience, and of retaining their distinct editorial voices, to avoid the risk that using AI tools will homogenise their outputs.
Power added that MMA decided it was not enough for these guidelines to apply only to journalists; political parties are among the primary purveyors of information during an election season, so it made sense to get them on board. “The purpose of the political parties’ guidelines was to maximise trust and credibility. We are not forcing these on newsrooms or political parties, but these are guidelines recognising AI, and we need to engage with it meaningfully.”
Core principles of political parties:
- Links to existing principles – Political parties should consider linking any AI policy they develop to the existing principles that guide their operations.
- Transparency – Most guidelines emphasise the importance of transparency on the part of organisations in how they use AI.
- Human oversight – Political parties should specify exactly who within them is responsible for overseeing the different AI systems being used.
- Allowed and prohibited uses – Political parties should enumerate exactly what they do and do not use AI for, to foster greater trust.
- Privacy and confidentiality – Organisations must be careful with providing sensitive information to third-party platforms that operate AI systems.
- Algorithmic bias – As part of the efforts to counter algorithmic bias, consideration should be given to using AI systems that have been fine-tuned for local contexts.
- Training and literacy – Political parties are encouraged to explicitly plan for how they can educate their members on how to make use of AI systems in the course of their work.
- Cooperation and dependency – Most political parties that make use of AI systems have not developed them in-house, and so are implicitly collaborating with external parties by making use of their systems.
- Enforcement – Political parties should give thought to what consequences may arise from their non-compliance with their codes of principle.
The vice-chairperson of the Electoral Commission, Janet Love, weighed in, agreeing that regulation can have pitfalls but arguing that without a legislative framework that includes regulation and the possibility of consequence management, we are headed nowhere.
“There is a deficit in understanding the issues around digital literacy and I think we cannot discount it. There is a generational unevenness that we must balance and a geographical imbalance that we have to recognise.”
Love added that there was an inequity around the digital environment, and the idea that we are using tools like AI to promote equity is important in many ways.
“If we do it outside the context of the reality that just the digital infrastructure reflects the problems we have socially of areas where there is poverty which are areas where we do not have the speed and quality of data that will enable a basic video or lesson to be accessed by a learner.”