Q & A with Dr. Alex Hanna, Google Social Scientist and AI Ethics Researcher
Written for WAIE 2020 by Danielle Trierweiler 8/28/20
*Please note: questions and answers have been paraphrased for the purposes of this piece and are not verbatim.
Public (Mis)Perceptions of AI
Q1: AI research is very powerful, and it can be quite nuanced. What have you found effective when explaining the importance of ethical and inclusive algorithms and AI to the layperson?
A: First, I try to clarify what AI is. There is a common perception that AI is like 2001: A Space Odyssey or Star Trek, but in practice a lot of AI is more mundane, such as probability and statistical models; it is not quite as sensational as imagined. Mundane isn’t to say inconsequential, though. When broken down, you can communicate with specific examples such as credit scoring or Facebook ad targeting. With ad targeting, for example, what you will (or won’t) see is influenced by AI. This may seem somewhat benign, but when you apply similar AI targeting to other spaces such as healthcare or criminal justice, there are much more concerning implications.
Creating Change within Companies
Q2: What are the biggest challenges you currently face around providing guidance within a large private company like Google? In contrast, what do you think are the unique opportunities?
A: Companies [like Google] are trying to understand where AI theory meets business practice. Discussions around technical elements and engineering solutions are common, yet AI ethics as a concept is non-technical in nature [it is social, political, philosophical, etc.]. AI has social implications, so one challenge of working within companies is pulling a narrower company view of AI back into those broader views. When a company has hundreds of different products, understanding where to intervene for influence and impact can be challenging as well. I look for where there might be opportunity (for governance, for change). If challenging the company status quo becomes necessary, what decisions will the company likely make?
On unique opportunities: companies like Google offer access to resources such as product teams, processes, and policies. These can be beneficial in efforts to influence technology design decisions and move the needle on the more sociological aspects of AI within the company. It is important to note that operations within a private company do come with some expected realities such as capitalism, politics, and prevailing industry culture. These can either help or hinder AI ethics work, depending on the situation.
AI Work
Q3: What would you want those who are interested in pursuing work in the AI and ethics space but are hesitant to know, particularly individuals coming from underrepresented communities?
A: The most important work in the space is often contributed by individuals from underrepresented communities, because they bring with them the experiences of their respective communities and how those experiences manifest in their AI work. This implies that a lot of the important work to be done lies in how to engage. For folks who wish to engage, I recommend engaging with socio-cultural theory and philosophy as part of their AI work, not simply the technical AI. The AI space needs members of underrepresented groups. Their value lies in the context of larger conversations about the world we already see, in dialogue with the world we want to see.
Personal Take
Q4: What excites you most about the AI Ethics space? What makes you optimistic about the future of AI?
A: More people are looking at AI ethics work now, so there are more voices in the space (and, unfortunately, also opportunists). One of the virtues of the field opening up, however, is that we now have unprecedented contributions from different fields. For example, this year the American Sociological Association opened its conference with a discussion about tech and ethics, which was a first. New contributors offer an opportunity for interdisciplinary and cross-disciplinary takes on AI and ethics.
Special thanks to Dr. Alex Hanna for agreeing to be interviewed. Dr. Hanna presented at #WAIE2020 on how gender bias in AI harms trans and gender nonconforming people.
WAIE 2020 — Dr. Alex Hanna — Gender & AI
Author: Danielle Trierweiler is an information professional and librarian-at-heart who is passionate about information equity, social justice, and dachshunds. Her areas of interest include user-centered design, information architecture, and content strategy. https://www.linkedin.com/in/danielle-trierweiler-0519b213/