#IamthefutureofAI Series: Arathi Sethumadhavan

Today we will hear from Arathi Sethumadhavan, a Principal Research Manager at Microsoft who was introduced to the field of human factors and ergonomics during her undergraduate days as a computer science major.

This interview is part of Women in AI Ethics (WAIE)’s “I am the future of AI” campaign launched with support from Ford Foundation and Omidyar Network to showcase multidisciplinary talent in this space by featuring career journeys and work of women as well as non-binary folks from diverse backgrounds building the future of AI. By raising awareness about the different pathways into AI and making it more accessible, this campaign aspires to inspire participation from historically underrepresented groups for a more equitable and ethical tech future.

You can listen to the podcast or read through their conversation below.

Hey everyone, I’m Arathi, I’m a Principal Research Manager at Microsoft. During my undergraduate days as a computer science major, I was introduced to the field of human factors and ergonomics. The field, for those who are unaware, is really one where you draw from various disciplines like psychology, anthropometry, interaction design, biomechanics, sociology, visual design, et cetera, with the end goal of reducing human error, increasing productivity, and enhancing safety.

I was really drawn to this field because there has been a history of a disturbing share of technological disasters caused by incompatibilities between the way products are designed and the way users actually perceive, think, and act. These disasters can be catastrophic, whether they take place in a cockpit, in a nuclear reactor, or in an operating room.

Oftentimes the blame is placed on the human operator when the fault actually lies in poor system integration; in other words, the system is not engineered to match the capabilities and limitations of its users. And that’s what I wanted to study. So I pursued a Ph.D. in the field, and I’ve had several amazing role models whose work I’ve been inspired by: the late Dr. Raja Parasuraman, Dr. Christopher Wickens, and my own advisor, Dr. Frank Durso, to name a few.

I worked on several human factors issues in aviation during graduate school and later on in healthcare. Again, my goal was to make these domains safer and more effective. A role at Microsoft, exploring the impact of emerging technologies on people, was a natural career progression for me because I felt that I had the ability to impact the lives of so many people in positive ways.

After completing my undergrad in computer science in India, I worked at Siemens India as a systems engineer. I had always wanted to move to the United States to pursue graduate education, so less than two years into working at Siemens, I moved to Lubbock, Texas to pursue my Ph.D. in experimental psychology with a specialization in human factors and ergonomics at Texas Tech University.

Graduate school, I have to say, was really transformative. Here I learned a lot, from technical writing to presenting to an analytical audience, breaking down complex problems, hypothesis testing, designing experiments, and working in complex, safety-critical domains.

During my Ph.D. years, in collaboration with the Federal Aviation Administration, I studied everything from display design to air traffic controller training to automation support in the airspace. My dissertation was in fact focused on understanding how different types of automation in the airspace impacted the performance, situation awareness, and workload of air traffic controllers. During grad school, I also interned with Dr. Mica Endsley, who is considered one of the leading voices on situation awareness, and I had the opportunity to develop situation awareness training programs for the US Marine Corps and HAZMAT teams. In short, I have to say that during this time, I studied concepts like trust in automation, system reliability, and meaningful human control, all of which we hear even today in the context of AI ethics. That, I would say, was my introduction to responsible product development.

Now, after graduation, I started working as a scientist at Medtronic, one of the leading medical device manufacturers in the world, in the cardiac rhythm and disease management business. Later on, I also spent time consulting for several pharmaceutical and medical device companies. Throughout this time, I focused very much on the human factor; in other words, how do you design medical devices that are safe and effective for end users? The development process very much involved combining analytical approaches like failure modes and effects analysis with more empirical approaches like qualitative and quantitative studies with actual humans, to make sure that the residual risk is as low as possible. My time in the healthcare space was very rewarding, I have to say, where I got to work on several life-changing and life-saving devices. Fast forward to today, I work on the Ethics and Society team at Microsoft, where I lead research. We are embedded within an engineering organization in the cloud and AI business.

I work on several AI and emerging technologies, such as computer vision, mixed reality, speech, and natural language models. You name it. My work at Microsoft predominantly focuses on bringing in the voices of impacted communities, including traditionally disempowered communities, to help shape the AI experience. I work on a wide range of technologies and therefore tackle a wide range of topics, from privacy and consent to fairness and bias, inclusion, accountability, et cetera.

Let me give you a few examples. Last year, Microsoft released a new service called Custom Neural Voice. This is a text-to-speech feature that allows you to create a highly natural-sounding voice by providing your own audio samples as training data. In fact, you only need 500 utterances to create a realistic, natural-sounding voice. It was very important for us to acknowledge that the voice acting industry is directly impacted by this technology. So we brought in the perspectives of voice actors: we interviewed them to understand their perceptions of this technology. They expressed a desire for transparency and clarity about what their voice likeness could be used for in this kind of technology, for how long, how it might impact future recording opportunities, and so forth. These concerns were reflected in the terms and conditions of use of the technology. So Custom Neural Voice customers must actually obtain explicit written permission to use a voice actor’s voice, or to create a synthetic voice, and provide them with a disclosure.

I’m going to give you another example. A few years ago, we worked on Touchless Access Control, which is a facial recognition-based building access system. With the technology, you can do identity verification using face data. So we asked ourselves a question: what do people need in order to feel safe if the technology were to be deployed at their workplace? So we iterated on design and testing with representative users, including people with visual impairments, mobility impairments, as well as individuals belonging to racial minority groups. We designed an enrollment experience that people actually felt good about. And we did that by prioritizing four main aspects of consent: awareness, understanding, freedom of choice, and giving users control of their data.

Let me walk you through another example. Last year, we previewed Dynamics 365 Connected Spaces, which uses data from video cameras to help retailers improve store efficiency and customer service. Now, because Connected Spaces uses cameras to make inferences about the physical environment, we needed to be especially thoughtful in how we would protect the privacy of both shoppers and employees who might be captured in the video footage. So we designed Connected Spaces to preserve the anonymity of individuals as much as possible by focusing on movement patterns and locations, rather than actually identifying individuals or even their facial characteristics. We also wanted to make sure that shoppers and employees can actually understand, in a meaningful way, how the system works, what observations are being captured by the cameras, and how these observations might be used. So we invested significant time to understand people’s questions about the technology and the privacy protections in place. We then used these insights to create a disclosure strategy for store employees as well as shoppers.

Lastly, I have worked on quite a few fairness and representation problems as well, such as how do we systematically design avatars, or datasets, face datasets or speech datasets, that actually capture the spectrum of human diversity. Depending on the technology, diversity, as you can imagine, can be anything from skin pigmentation to speech patterns to hairstyles. So never a dull day. And then diversity across various parameters like age, gender, ancestry, socioeconomic status; all of these matter because diverse teams bring in diverse perspectives. They address the needs of a wide range of people, challenge assumptions, and bring about more innovation. And studies, in fact, have shown that companies with employees and leaders who are diverse across traits like gender, ethnicity, and cross-cultural experiences are 70% more likely to report capturing new markets. So if you want to create products that work for lots and lots of people, you should care about diversity.

I want to touch on two aspects of diversity that I’m personally passionate about and that perhaps are not spoken about as much. One is diversity of disciplines, and I believe that should not be compromised. Engineers and AI scientists cannot solve these problems on their own, because you’re no longer solving just technical problems; you’re solving sociotechnical problems. For example, let’s say you want to create an automatic speech recognition system that works well for different kinds of English speakers. To do this well, one needs to think through a composition of the training data that accounts for different kinds of speakers, because there is no one way of speaking English. So how do we go about doing that? This requires an understanding of speech and linguistic variations, and the factors that impact these variations, from the age at which the language (in this case, English) was acquired, to one’s educational level, to speech impediments, and so on.

The next step is really translating this understanding into a data collection strategy that works for the majority of speakers. This is not an easy thing to do, and no one discipline, let me tell you, can do this well. Multidisciplinary teams involving social scientists, data scientists, engineers, project managers, and legal experts are all important. In some situations, we have even brought in external experts like bioethicists, sociolinguists, and military ethicists to help us with novel problem spaces. So diversity can even extend beyond your immediate team composition to external partnerships with specialists, domain experts, and civil society actors.

Now, the second kind of diversity that I would like to emphasize is including a diverse pool of end users and impacted stakeholders as part of your product development process. This means including those whose perspectives are typically excluded or forgotten, such as those from the LGBTQ+ community, racial minorities, those whose jobs are impacted by AI tools, people with speech and visual impairments, and bystanders, to name a few.

One, I would say, do not let the AI jargon intimidate you, or the fact that you’re not familiar with concepts like machine learning, deep learning, et cetera. There are many ways to learn and contribute as you begin your journey. Two, depending on your area, whatever that might be, demonstrate thought leadership through talks, blogs, and publications. The so-called tech bros are more likely to listen to the voice of a publicly recognized expert in the field. Three, form a solid network of peers that you can leverage for career opportunities, mentorship, and constant learning. Lastly, I would say that it’s important to realize that changing conventional ways of thinking and doing things, conventional ways of developing technology, and the engineering culture around them takes time. So be patient, work hard, and enjoy the journey.

#IamthefutureofAI campaign is sponsored by the Ford Foundation and the Omidyar Network. You can watch this and other inspiring career stories on our YouTube channel.

Arathi Sethumadhavan is the Head of Research for Ethics & Society at Microsoft, where she works at the intersection of research, ethics, and product innovation. She has brought in the perspectives of more than 13,000 people, including traditionally disempowered communities, to help shape the ethical development of AI and emerging technologies such as computer vision, Natural Language Processing, and mixed reality. She was a recent Fellow at the World Economic Forum, where she worked on unlocking opportunities for positive impact with AI, to address the needs of a globally aging population. Prior to joining Microsoft, she worked on creating human-machine systems that enable individuals to be effective in complex environments in aviation and healthcare. During her tenure at Medtronic, she worked on several life-changing and life-saving technologies, holds patents, and received the Star of Excellence Award for innovation. She has also advised several pharmaceutical and medical device companies on their human factors and regulatory strategy. Her collaboration with the Federal Aviation Administration culminated in several areas of empirical research, including shift change, workload, situation awareness, and performance in air traffic control. She has delivered more than 80 talks at national and international venues, edited a book on health, authored 60+ articles, and taken on Adjunct Faculty roles. She has been cited by the Economist and the American Psychological Association and was included in LightHouse3’s 2022 100 Brilliant Women in AI Ethics list. She is currently editing a book on Collaborative Intelligence. Arathi has a PhD in Experimental Psychology with a specialization in Human Factors from Texas Tech University and an undergraduate degree in Computer Science.