#IamthefutureofAI: Emine Ozge Yildirim-Vranckaert

Come along as we delve into Emine Ozge Yildirim-Vranckaert’s story.

This interview is part of the Women in AI Ethics (WAIE) “I am the future of AI” story series, which highlights multidisciplinary talent in AI by featuring the inspiring career journeys of women and non-binary people from historically underrepresented groups in tech.

Learn how Emine found her passion for advocating for freedom of expression. Join us as we explore her journey and hear her insights into this vital space, where she works to address a notable gap in existing literature and legal precedent, particularly in the algorithm-driven era.

Can you share an incident that inspired you to join this space?

It all started with a personal experience in a society where freedom of expression and access to information were severely restricted. I witnessed firsthand how narratives were tightly controlled, and dissenting voices were silenced, leaving a lasting impact on me. This early exposure set the stage for my heightened awareness during the 2016 U.S. election. The strategies employed in propaganda and personalized targeting vividly showcased the challenges of preserving mental autonomy in the digital age. These experiences made it clear that technology and fundamental human rights, such as freedom of thought and expression, were in a precarious balance. This realization became a turning point, driving me to immerse myself in AI ethics as a means to confront these urgent issues.

How did you land your current role?

I first started as an advocate for online freedom of expression while at Georgetown Law in Washington, DC. This advocacy deepened during my work with a non-profit online encyclopedia, where I was immersed in the ethos of the open-source and open-access communities. In these environments, I became acutely aware of the importance of free, unmanipulated access to information as a cornerstone of true freedom of expression. This experience revealed a notable gap in existing literature and legal precedents regarding the right to freedom of thought, which is especially crucial in the algorithm-driven era. This realization propelled me to pursue my current doctoral research.

What kind of issues in AI do you tackle in your day-to-day work?

In my daily work, I focus on the intersection of law, ethics, and technology, with a special emphasis on freedom of expression and digital mental autonomy. My responsibilities include assessing platform regulations and the impact of AI on online speech and digital freedom of thought while advocating for ethical AI practices that uphold individual rights. Complementing this, my doctoral research, which is in the final stages of completion, delves into how digital technologies have transformed propaganda methods, further influencing mental autonomy and freedom of thought. This research takes a legal and philosophical approach, analyzing the societal implications of AI within the framework of international human rights law.

If you have a non-traditional or non-technical background, what barriers did you encounter and how did you overcome them?

Navigating AI as a legal scholar presents unique challenges, particularly in understanding the intricate technology behind its rights implications. My strategy has been to collaborate with experts from various fields, fostering a shared understanding. Interdisciplinary research projects have been instrumental in this journey, offering invaluable insights. However, given the ever-evolving nature of technology, I recognize the importance of continuous learning to stay abreast of new developments in AI.

Why is more diversity — gender, race, orientation, socio-economic background, other — in the AI ethics space important?

In a world marked by historical inequalities and narratives dominated by the powerful, the lack of diversity in voices risks repeating past failures. My Ph.D. research on propaganda narratives underscores this; it questions whether information can remain unbiased without diverse contributions. Diversity in AI ethics is not just about representation but also about enabling unimpeded mental autonomy, allowing individuals from all backgrounds to develop their mental faculties and make informed life decisions. This ensures that AI, and society at large, benefits from a multitude of perspectives, making it more equitable and just.

What is your advice to those from non-traditional backgrounds who want to do meaningful work in this space on how to overcome barriers like tech bro culture, lack of ethical funding/opportunities, etc.?

I must admit that I’m still navigating these challenges myself, and I’m not sure I have all the answers. However, I believe that providing concrete evidence of how these barriers hinder societal progress, and patiently explaining the consequences of these issues until they are heard, are strategies worth trying. It’s an ongoing journey, and through our collective strength and resilience, by standing up for each other and trying together, we can break down those barriers.

Emine holds an LL.B. from Marmara University in Turkey and an LL.M. degree from Georgetown University Law Center in Washington, DC. Presently, she serves as a Doctoral Researcher at the KU Leuven Center for IT & IP Law (CiTiP). She is deeply committed to human rights and animal rights activism and is admitted to practice law in New York State.

You can connect with her on LinkedIn.

Sign up for the Women in AI Ethics mailing list to stay connected with this community of AI pioneers, experts, and emerging talent.