This article is written by Kyle Nicole Marcelino as part of EngageMedia’s Youth Advocacy and Communications for Internet Freedom project, which aims to expand awareness and engagement with digital rights issues among youth advocates in the Asia-Pacific.
Kyle Nicole Marcelino is a young digital native with a Bachelor’s Degree in Journalism from the Polytechnic University of the Philippines. He is an active fact-check contributor under Rappler’s #FactsFirstPH initiative to debunk online claims about political figures, government policies, and societal issues in the Philippines.
As a fact-check contributor for the online news outlet Rappler, I usually browse popular social media websites and video platforms to spot potential false claims. One practice I've observed is the emergence of YouTube videos that use cleverly manipulated images and footage to narrate the story of the notorious Marcos gold – the myth that the family of the late Philippine dictator Ferdinand E. Marcos supposedly owned a vast treasure trove. Other controversial topics, ranging from religion to politics, have been given the same treatment, and it seems that the power of artificial intelligence (AI) is slowly being introduced to disinformation disseminators in the Philippines.
AI-generated content has proliferated and evolved into a tool for manipulating public perception. Because such content can be hard to distinguish from real, unedited digital material, journalists and truth advocates face a much harder fight against the ever-growing misinformation and disinformation in the Philippines, a country often tagged "patient zero" in the global plague of information disorder.
As veteran Filipino journalist Inday Varona puts it in my interview with her: “Social media is the world on steroids, AI makes it a world on crack.”
The Philippines’ leader believes that the country is “ready” for “the promise of a future” with AI. However, the disinformation culture in the Philippines paints a different tomorrow for this so-called AI utopia and its impact on democracy.
Dangers of AI Disinformation
In the broader category of AI, machine learning refers to enabling computer systems to learn from examples, using algorithms to analyse data and make informed decisions. Generative AI models can create “something entirely new” based on the information provided, according to Google. These models can generate and edit text, images, videos, audio, and personas with just a few prompts.
Researchers at the Copenhagen Institute for Future Studies believe that AI may soon be responsible for 99% of internet content if AI models are widely adopted. However, security experts have raised the alarm over the potential dangers of AI-powered disinformation to democracy as it can easily influence people’s perception of policies and governance with just a few text prompts.
Unlike with Photoshopped or manually edited images, disinformation disseminators can now instantly create realistic-looking fabrications from scratch. This threat can be seen in recent cases of fabricated images, such as Pope Francis wearing a white puffer jacket, former US president Donald Trump being forcefully arrested, and model Bella Hadid supposedly retracting her statements against Israel's actions in Gaza. The New York Times has also reported how AI chatbots like OpenAI's ChatGPT are being used to generate believable statements that push certain narratives at a much faster rate.
These developments have significant implications for the information landscape in the Philippines, considering how vulnerable the country is to misinformation and disinformation online.
In my fact-checking work, I have come across several YouTube channels that use AI-generated images to create attention-grabbing thumbnails, such as the channel Sa Iyong Araw (330,000 subscribers). This channel has been fact-checked multiple times for erroneous or dubious claims regarding disasters, Philippine politics, and religion.
Manipulating audio and video to the point where their authenticity is difficult to ascertain can be used to push particular narratives and political propaganda. For instance, an audio clip that circulated last year supposedly featured the late dictator Marcos denouncing former president Corazon Aquino for "wasting" the country's Bataan Nuclear Power Plant. According to fact-checking body VERA Files, the audio has never been authenticated as Marcos's, yet it is still being used to call for the revival of the power plant or to attack the Aquinos. The same claim has been amplified since the AI boom.
With AI being used to generate realistic digital content, even the youth – often considered digital natives who are more tech-savvy and media-literate – are not safe from being duped by AI disinformation. TikTok is notorious for AI-altered videos and deepfakes, such as a manipulated clip from the 1998 José Rizal film. Several users in the comments could not tell that the clip had been AI-edited to make the actors look more like their historical counterparts.
Lack of AI Regulations
The Philippine government has not ignored the growing use and potential of AI, with the Department of Trade and Industry releasing a national roadmap in 2021 for the country’s AI readiness, outlining strategic priorities and responsibilities for the government, industry, and academia.
However, the roadmap focuses primarily on innovation and development in the workforce and industries; its points on the ethical use and implementation of AI tools are not fleshed out.
House Bill No. 7396 or the Artificial Intelligence Development Authority Act was filed under the 19th Congress in 2023, seeking to establish an agency responsible for managing AI development in the country and preventing bad actors from taking advantage of the technology. As of writing, the bill remains pending with the House Committee on Science and Technology.
The lack of adequate regulations on the ethical use of AI must be addressed sooner rather than later, especially now that AI is entering mass consciousness, as evidenced by the use of deepfakes in a segment of the Philippine noontime show Showtime. There has never been a better time to teach Filipinos about the dangers that the unethical use of AI poses to media literacy amid the country's disinformation culture.
Countering AI Disinformation
This is not to say that there have been no efforts to suppress AI-powered disinformation in the Philippines. Advocacy groups and journalists have launched several online seminars for young Filipinos to teach them about the dangers of AI and further enhance media literacy.
The National Union of Journalists of the Philippines has called for more conversations on the country's vulnerability to the impact of AI in an age of digital information. Groups like The AI Revolution in Media and Communication, MovePH, and DCN Global have launched several webinars this year alone to discuss the dangers of AI integration in disinformation tactics and how to combat it. The Break the Fake Movement and the University of the Philippines have also worked with local and international journalists to help young digital users discern the proper uses of AI.
Several journalists have also discussed how AI can be used proactively in journalism to find ways to combat the growing disinformation culture. Meanwhile, the University of the Philippines published a joint recommendation for guiding principles on the use of AI, including the responsibility to “predict consequences, mitigate risks, and avoid harmful consequences.”
While these initiatives help spread awareness about the threat of AI-powered disinformation, online users can also proactively fight its dissemination by following the basic tenets of fact-checking. Even though AI-generated disinformation can be harder to spot, digital literacy skills remain crucial to guard against false narratives.
Verification is the essence of truthful journalism, and the same principle applies to the posts we see on social media. Check whether the source of the content is reputable, such as a government website or an official company or news site. Look for inconsistencies and misshapen elements in images, like extra fingers or smudged figures in the background. Reverse-searching images and videos can also help determine their authenticity. There are also AI-powered sites like Copyleaks and Scribbr that can be used to check whether a text was generated by AI.
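For readers comfortable with a little code, the verification mindset can be illustrated with a minimal sketch. The example below uses only Python's standard library to fingerprint a file with a cryptographic hash, which catches byte-identical copies of a known original. This is deliberately simplistic: real reverse-image-search services match images perceptually, so a resized or re-encoded copy would need those tools, not this one. The byte strings here are placeholders, not real image data.

```python
# A minimal sketch of one verification aid: exact-duplicate detection
# via cryptographic hashing. This only catches byte-identical copies;
# reverse-image-search services use perceptual matching that survives
# resizing and re-encoding.
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest identifying this exact sequence of bytes."""
    return hashlib.sha256(data).hexdigest()

# Placeholder bytes standing in for image files (hypothetical example).
original = b"bytes of the verified original image"
suspect = b"bytes of the verified original image"       # identical copy
edited = b"bytes of a re-encoded or manipulated copy"   # altered version

print(file_fingerprint(original) == file_fingerprint(suspect))  # True: exact copy
print(file_fingerprint(original) == file_fingerprint(edited))   # False: altered
```

Matching fingerprints prove only that two files are the same bytes; a mismatch tells you the file has been changed in some way, which is a prompt for deeper checking, not proof of manipulation by itself.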
It may seem like a lot of effort just to verify whether an image, video, or text is authentic and human-made, but that extra effort helps ensure that no AI tool can ever silence the truth.