top of page

Theodora Skeadas: AI Governance and Pathways to the Sector

Ms. Theodora Skeadas
Ms. Theodora Skeadas

In May 2024, the Saint Pierre International Security Center (SPCIS) launched the “Global Tech Policy at the Forefront” series, featuring conversations with leading experts on the impact of emerging technologies—such as AI, blockchain, biometrics, and robotics—on global governance and public policy.


On June 13, 2025, we had the pleasure of interviewing Ms. Theodora Skeadas. With over 13 years of professional experience in Tech policy field, Theo is the Community Policy Manager at DoorDash and Chief of Staff at Humane Intelligence, where she leads efforts to assess AI’s societal impact. A Harvard College alumna (Class of 2012) with a degree in Philosophy and Government, she also earned a Master’s in Public Policy from the Harvard Kennedy School in 2016. She is now pursuing her PhD in applied philosophy at King's College London Department of War Studies. In this interview, we delve into Theo’s professional perspective on AI safety and the pathways to building a career in tech policy.

 


SPCIS: Your career trajectory is incredibly inspiring, spanning philosophy, public policy, and now technology policy and AI safety. To start, could you briefly share what drew you to focus on trust, safety, and the societal impact of AI, including your current work at DoorDash and your role at Humane Intelligence?


Theodora Skeadas: I became interested in online speech governance, content moderation, and trust and safety through my years of work and study in the Middle East and North Africa. Witnessing firsthand how social media advanced both political movements for democracy and harmful behavior, disinformation, and violence deeply impacted me.


During my time in Morocco, Turkey, and Greece, I observed the complexities surrounding online governance. I lived in Morocco during the 2011 revolutions in Tunisia and Egypt, which led to proactive constitutional reform in Morocco. Later, I was in Turkey during the 2013 Gezi Park protests, a pivotal moment of political unrest. Platforms like Twitter played transformational roles in advancing critical political and social conversations. Yet I also saw governments block internet access, online violence against women proliferate, disinformation spread unchecked, and systemic biases in platform algorithms emerge.


In Morocco, my work with nonprofits focused on education, youth empowerment, poverty alleviation, and conflict resolution. I supported immigrant women from Francophone African countries and provided educational services to underprivileged children and women in Casablanca. In Turkey, I taught at Akdeniz University and researched barriers to employment for Syrian refugee youth in southeast Turkey and Kurdish Iraq.


Later, at Booz Allen Hamilton, I spent six years analyzing public sentiment, social movements, and disinformation through social media for the U.S. Federal Government. My work spanned countering violent extremism, cybersecurity, and counter-terrorism. I used tools like sentiment analysis, natural language processing, and econometrics to study issues such as ISIS’ recruitment strategies, Al Shabaab’s use of radio for propaganda, and reactions to NATO’s military expansion. Over time, I observed increasingly sophisticated use of social media by both violent non-state actors and non-violent civic protestors.


At Twitter, I managed the Trust and Safety Council, a global consultative body, and supported journalists and human rights defenders through programs aimed at combating impersonation, fraud, human trafficking, and terrorism. I also developed global policies, coordinated consultations, and drove initiatives like the Content Governance Initiative and the Moderation Research Consortium. Through this work, I saw how online harms directly translate into offline consequences. For example, people in Turkey lost their jobs after content depicting them drinking during Ramadan was posted online. Others faced imprisonment, torture, harassment, or threats.


Ultimately, these experiences solidified my commitment to ensuring that internet-based services remain free of illegal and harmful material while upholding freedom of expression.



SPCIS: You’ve worked extensively on addressing the societal impact of AI in real time. Based on your experience, how can companies like DoorDash and others in the private sector better contribute to identifying and mitigating AI-related risks, particularly in enhancing AI safety?


Theodora Skeadas: Private sector companies can enhance AI safety through several strategies:


  • Keeping humans in the loop during critical decision-making processes.

  • Assessing AI model performance through evaluations such as red teaming.

  • Bolstering transparency by clearly communicating how AI systems operate.


These steps are essential to mitigating risks and ensuring responsible AI deployment.



SPCIS: How do you see proposed auditing and monitoring projects influencing the development and deployment of AI systems, especially when risks only become apparent after deployment?


Theodora Skeadas: Nimble approaches are crucial in this context. As risks emerge post-deployment, companies must be willing to reassess their AI implementations and invest in strengthening guardrails or other mitigation measures. Flexibility and adaptability are key to addressing these evolving challenges.



SPCIS: As someone deeply involved in AI governance, what do you think is the next frontier for governance frameworks? Are there specific areas beyond transparency and accountability that require urgent attention to ensure responsible AI development?


Theodora Skeadas: The proliferation of AI agents will undoubtedly complicate this space. Governance frameworks must prioritize issues such as maintaining system integrity, addressing algorithmic bias, and ensuring meaningful human oversight at scale.



SPCIS: I understand you recently started a PhD program in applied philosophy at King’s College London. Your research on using just war theory to analyze online harms is fascinating, especially given the increasing role of social media and AI in fueling hate speech, violence, and political instability. How do you think your work can reshape our understanding of these issues?


Theodora Skeadas: Extensive research has explored the moral considerations surrounding physical harms, particularly through frameworks like just war theory. However, there has been less focus on the governance of online harms, especially in the context of war.


Online harms—ranging from hate speech and disinformation to extremist narratives—have been linked to real-world violence, including mass shootings, ethnic cleansing, and rising hate crimes. Women journalists, in particular, face escalating online attacks, contributing to reduced political participation and increased violence against women in what has been termed the “shadow pandemic.”


My research seeks to bridge the gap between online and offline harms. Specifically, I aim to explore how just war theory, traditionally applied to offline conflicts, can inform our understanding of online violence. Questions I hope to address include:


  • How do online and offline harms relate to one another in the context of conflict?

  • Can traditional just war principles, such as proportionality and non-combatant immunity, be applied to the online realm?

  • What is the threshold for online terrorism, and how does it differ from offline terrorism?


By using a decolonial lens, I hope to expand the ethical frameworks available for governing digital technologies and mitigating the harms they facilitate. This is a critical step toward developing better safeguards for both online and offline ecosystems.

 

 

 

 

 

 

 
 
 

Comments


Join our mailing list for updates on publications and events

Thanks for submitting!

© 2023 by Saint Pierre Center for International Security

bottom of page