Shita Laksmi, Executive Director of the Tifa Foundation, chats with EngageMedia about internet intermediary liability, particularly with regard to disinformation about the COVID-19 pandemic and the role of the state in regulating content. We also ask: How can internet intermediaries balance people’s freedom of expression with their right to be protected from online harms and manipulation?
Relevant Links:
- What is internet intermediary liability? The Association for Progressive Communications answers frequently asked questions on the topic, including how and why users should learn about it.
- For further reading, check out this primer on the topic by ARTICLE19.
- Also check out the work of Daphne Keller, director of the Program on Platform Regulation at Stanford’s Cyber Policy Center, who specialises in intermediary liability.
- In the United States, the Electronic Frontier Foundation regards Section 230 of the Communications Decency Act as “one of the most valuable tools for protecting freedom of expression and innovation on the Internet”.
- How responsible should Big Tech be for the content on its platforms? In the aftermath of recent terrorist attacks being played out online, these companies clearly play a part.
- Shita’s research, as mentioned in the podcast, compares the laws of Singapore and Germany on internet intermediary liability, particularly when it comes to content on corporate social media platforms. The COVID-19 pandemic, however, has raised the question of whether these laws should be revised.
- Last year, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI), which list seven key requirements for achieving trustworthy AI, including transparency of AI systems and of how they are applied by internet intermediaries.
Pretty Good Podcast by EngageMedia is produced remotely and primarily using FOSS tools. We’ve documented our process and learnings at Video4Change.org.