The Global South, AI, and journalism
“I’m a sceptic.”
I declared this to a room full of media practitioners who had come to listen to a panel about AI and journalism. I heard a few nervous titters, spotted some smiles. But I also sensed a collective metaphorical groaning, and imagined a few people internally rolling their eyes. At that moment, I felt inadequate. Later on I wondered, why did I feel that way?
On 18 June 2024, I attended the Media Freedom Lab in London, a one-day programme organised by One World Media that focused on the threats and opportunities of AI. The Lab was held in conjunction with the One World Media Awards, which recognises excellence in reporting in the Global South. Our story A Woman’s World: Creating spaces for joy, leisure, and resistance in South and Southeast Asia had been nominated for an award.
Kontinentalist had been invited to speak on a panel titled “Decolonising AI”. Initially, I was unsure whether we would be able to participate meaningfully, because we have not really woven AI into our workflows. In fact, a number of us are against generative AI—often discussing (or ranting about) how it is built off the unpaid labour of creators, disrespecting intellectual property rights, and how it relies on the labour of poorly paid workers. Because much of generative AI is trained on the existing corpus of data online, it perpetuates existing biases in the world, including harmful stereotypes—something that Kontinentalist has actively pushed back against since our establishment.
Thankfully, after speaking to someone on the One World Media team, I was reassured that it would be alright to express a contrarian view as a media organisation that did not already use AI in its day-to-day business or have an AI-related policy. I accepted the invitation, but there was still a part of me that wondered if we were going to be the only ones with such views on this panel.
Before I flew, I facilitated a discussion with my teammates to figure out our collective stance. There were some clear takeaways:
- The use of generative AI in our company is quite dependent on one’s job scope. The developers rely on it quite a lot, especially in the extensive coding work required for building Lapis, our data storytelling tool. The writers and designers were more concerned about the implications of AI, and thus more reluctant to use it in their work.
- Kontinentalist clearly needs to think seriously about its stance on AI and the harm it can bring, because we’re a data-driven company and AI is built on data.
- How AI, especially generative AI, works is inherently harmful to creators, and as a values-driven company, we need to ensure our use of AI is aligned with our values.
At the panel, I shared these pointers: that Big Tech is spearheading the AI revolution, which essentially is a kind of “technological colonialism” as it spreads dominant narratives, views, and languages, and with it, ableism, racism, and sexism. That this “technological colonialism” goes against what we’re trying to do at Kontinentalist. And because we don’t do breaking news, we can decide if and when we use AI, and perhaps use it in a way that involves more consideration.
To my relief, the other panellists—ex-BBC editor Mark Frankel and Tshepo Tshabalala from JournalismAI, alongside moderator Jenny Romano from AI start-up The NewsroomAI—had a healthy discussion with me on the harms of AI, such as its environmental toll. We also talked about how certain concerns are slowly being addressed, such as the fact that Large Language Models (LLMs) are being trained on less mainstream languages in Indonesia to create more inclusive forms of generative AI.
I also shared about “refusal”, a concept we’ve been discussing in Konti, referring to a political act of refusing to partake in harmful structures, especially those that can be used to justify or perpetuate harm for you and your community down the line. I tried to explain that there may be people who simply do not want to participate in the AI conversation at all, because the structures that AI is built on are inherently harmful. Refusal to partake should be okay too, right?
Honestly, while trying to explain my perspective, I felt like I had to justify why I didn’t want to be part of this new AI-transformed world, while being expected to provide solutions too. At the same time, I felt quite alone. Looking at the room, which was predominantly filled with people from the Global North, I felt small, but with opinions that were “too big”. I couldn’t keep my voice from shaking. It wasn’t that I didn’t believe in what I was saying. So why did I feel like a minority (I was one of two Southeast Asians in the room), ranting too passionately about being a luddite?
To my surprise, a few people rushed up to me after the panel and thanked me for sharing my thoughts. One of them was the director of One World Media, Vivienne, who said that my stance was the reason they invited Kontinentalist to the panel—to offer a perspective from the Global South that would challenge what most of the panellists and participants thought.
For the rest of the day, I was pleased to see that a number of people—mostly brown women—continued to speak up about their perspectives, which were often different from the mainstream opinion in the room. In fact, there was a moment when four different women, including myself and Filipino journalist Raizza P. Bello, started bringing up counterpoints of our own, while the mostly-white panellists listened. One of the panellists shared that her takeaway for the day was to learn how to set aside her privilege and listen to others.
Of course, while the event highlighted the distinction between the Global South and Global North, I didn’t mean to suggest that the Global South has the moral high ground when it comes to the conversations about AI. In fact, I am highly aware of Singapore’s own technocratic position, and how privileged we are as a well-resourced country. At Konti, we’re constantly trying to share our own privilege with others, by spotlighting narratives that might not get attention, revealing inequality, and showing the rich cultural nuances that exist throughout Asia.
At the same time, I think there are things we can learn from Singapore. During the Lab, it was mentioned several times that “the toothpaste is out of the tube” or “the genie is out of the bottle”, suggesting that even though generative AI is harmful, we can’t do anything about it because it’s already part of our lives. So I was glad when I could share how a bunch of Singaporean writers said no to the government’s plan to train LLMs using local literature. This example proves that refusal is not just a theory; we have the ability to say no, and be able to voice the reasons why. (Side thought: Perhaps that’s why I enjoy the ability to vote?). A “no” is a way to reclaim our power.
I like refusal as an alternative to trying to work with the system. When we try to figure out how and why journalism can use AI in its workflows, the underlying assumption is that we have to. I’m no scholar of decolonial theory, but perhaps this is what civil rights activist and feminist writer Audre Lorde meant when she said:
“For the master’s tools will never dismantle the master’s house. They may allow us to temporarily beat him at his own game, but they will never enable us to bring about genuine change.”
While we’re still trying to articulate Konti’s stance on AI, I frequently return to this quote. Yes, some say that AI is just a tool, neither inherently good nor bad. But it also represents something larger, an imperialising force we’re not unfamiliar with in our corner of the world. If I want to take my time to figure out whether I want to use it, and if I’m given a choice whether to be part of this new, AI-dominated world, then I think it’s okay for now that my answer is an intentional no.