#IamthefutureofAI: Irene Solaiman

Irene Solaiman, Policy Director at Hugging Face, joins us today to share her own journey and personal experiences as a person of color, her views on AI as a means for bridging divides and creating a better future for humanity, and thoughtful advice for those interested in exploring the world of AI.

This interview is part of Women in AI Ethics (WAIE)’s “I am the future of AI” campaign launched with support from the Ford Foundation and Omidyar Network to showcase multidisciplinary talent in this space by featuring career journeys and work of women as well as non-binary folks from diverse backgrounds building the future of AI. By raising awareness about the different pathways into AI and making it more accessible, this campaign aspires to inspire participation from historically underrepresented groups for a more equitable and ethical tech future.

Can you share an incident that inspired you to join this space?

Prior to working on AI, I worked on human rights policy and crisis intervention. I found these roles, while greatly meaningful, were not sustainable for my mental health. I went to grad school to reflect on how to have that level of impact on people’s well-being in a different medium. There, I learned to code and learned about generative modeling — an area that needed new perspectives and was full of unanswered questions. That’s my favorite challenge: asking questions that have never been asked before about systems that have never before existed. I continue to deeply admire people doing human rights work and see our roles as complementary; AI research needs to take lessons from parallel fields.

How did you land your current role?

My career has been a series of being open to novel challenges and largely creating my own roles and responsibilities. I think this is more feasible in flexible environments, such as startups, but it depends on the culture. And I do this with major appreciation for leadership and colleagues who encourage taking risks and serve as my thought partners. AI as a research field moves so quickly that I often have to trust my gut about which work is most impactful. When I started doing AI bias research, it was not nearly as well-resourced as it is today. I’m so glad I charged forward on this work and that incredible researchers fought to make this area more prominent.

What kind of issues in AI do you tackle in your day-to-day work?

In a startup, the issues change and move quickly. However, to anchor yourself, it’s important to craft priority lists that are informed by both urgency and your personal expertise. I have strong expertise in AI policy, which is often not treated as urgent given its pace relative to AI progress. This should change. My personal passions are in understanding the values and cultural contexts of AI systems, measuring their social impacts, and updating systems so they work for the many groups they affect. This research is necessarily interdisciplinary, requiring work on bias, languages, and how people engage with AI systems, in addition to understanding the performance and capability side of models, such as the more standard performance benchmarks.

If you have a non-traditional or non-technical background, what barriers did you encounter and how did you overcome them?

While a computer science background is not absolutely necessary for a social scientist, it is immensely helpful. And while social sciences are not absolutely necessary for a computer scientist, this knowledge will help systems work better for different people. Coming from my non-traditional background, I became accustomed to often being the only person in the room with a social science perspective. This should change. What was most helpful was understanding the specific AI system I was working on, such as a language model, and how to apply my unique skills in a way that other experts could engage with my thoughts and research. My research paper on a Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets (https://arxiv.org/abs/2106.10328) is an example.

Why is more diversity — gender, race, orientation, socio-economic background, other — in the AI ethics space important?

Being from an underrepresented background, I am attuned to the need for fields to not only represent the groups they strive to benefit but also actively include voices most often overlooked. My personal experiences living and working across four continents give me deep insight into the disparate levels of attention given to systems that are deployed globally; for example, dominant research on generative modeling is largely conducted in English and from a Western lens. People affected by AI are not one entity, but many groups. Interdisciplinary expertise is needed to adapt systems to the many groups in our society.

What is your advice to those from non-traditional backgrounds who want to do meaningful work in this space on how to overcome barriers like tech bro culture, lack of ethical funding/opportunities, etc.?

There are so many invisible ways people from underrepresented groups and non-traditional backgrounds are at a disadvantage, beyond the more visible discrimination. Mentorship was truly life-changing for me. I am infinitely grateful to the experienced people in the field who taught me about salary negotiation and finding funding opportunities, and who believed in me even when I did not believe in myself. Mentorship and support can also come from a peer network; my friend circles of women of color keep me sane on days I desperately need grounding.

Irene Solaiman is an AI safety and policy expert. She is Policy Director at Hugging Face, where she is conducting social impact research and building public policy. She also advises responsible AI initiatives at OECD and IEEE. Her research includes AI value alignment, responsible releases, and combating misuse and malicious use.

Irene formerly initiated and led bias and social impact research at OpenAI, where she also led public policy. Her research on adapting GPT-3 behavior received a spotlight at NeurIPS 2021. She also built AI policy at Zillow Group and advised policymakers on responsible autonomous decision-making and privacy as a fellow at Harvard’s Berkman Klein Center.

Outside of work, Irene enjoys her ukulele, making bad puns, and mentoring underrepresented people in tech. Irene holds a B.A. in International Relations from the University of Maryland and a Master in Public Policy from the Harvard Kennedy School.

You can find her on LinkedIn, Twitter, and Mastodon.