“As a society, everyone should be educated to be aware of this concept of responsible AI until we get to the stage where, any time anyone talks about AI, it’s automatically responsible. And that’s how the world and society thinks about it.” Anusha Sethuraman, VP of Marketing at Fiddler AI.
I had the pleasure of interviewing Anusha, and we talked about the upcoming Women in AI Ethics Event. We are honored that she is speaking at the event. The interview has been edited for length and clarity.
Sharvari: How did you hear about the Women in AI Ethics (WAIE) initiative, and being part of it, what does it symbolize to you?
Anusha: I think I heard about Women in AI Ethics through
’s Twitter. I came across it there, started following her, reached out to her on LinkedIn, connected, and then we just started chatting. What does it symbolize to me? I’m a big proponent of diversity and inclusion, especially in the STEM and technology world. Very often we’ve seen so many stories of women’s voices, and especially the voices of women of color, being shut down or left out of the conversation.
There’s just so much in this space that happens that’s negative toward women. So I think this is a great, positive initiative to recognize all of the women who are working in this space, and to continuously update and talk about the work that’s being done. This is a huge deal, and it’s a great initiative that the team puts on every year.
Sharvari: WAIE has been known to be different or unique compared to other initiatives like Women in AI, Women in Tech, and others. What resonated with you the most about the initiative?
Anusha: Yeah, I think ethics is the biggest part of it, right? That’s what’s different. It’s this concept of responsible AI: the focus is on people, especially women, who are working in the responsible AI space, and on ensuring that their voices are heard and the work they’re doing is recognized.
Ultimately, you want to get to a point where AI and responsible AI are not differentiated: when people think about AI, they automatically think about responsibility.
But since we’re not there yet, it’s important to call this out and educate everyone on this and bring awareness to this concept of ethics and responsibility.
Sharvari: Why do events like Rethink AI matter, and why should folks attend it?
Anusha: I keep coming back to responsible AI. Rethink AI is one of those events that helps people understand more about what they can use to build technology and AI that’s more responsible, and to get an idea of what’s going on in the space. What are the things you should be thinking about? Who is responsible for this? I feel like everyone in an organization should be thinking about responsible AI; it’s not just the data scientists who are building these tools. How do you think more broadly about it? If you’re working in this field, I think this would be a good way to get some insight into that.
Sharvari: Support for women, non-binary, and BIPOC communities has been highlighted of late. How should organizations encourage this internally?
Anusha: I’m part of a startup, which is a small company, right? So it’s not that we do anything specific here, but culturally, how can you be thinking about this all the time, whether it’s in your hiring? You might go out of your way to find diverse candidates and bring them into your process, so you’re giving an opportunity to multiple different voices. That could be a start. And then there’s participating in initiatives like the Heritage Months for different communities: how do you contribute to some of this and show it, not just to your employees but beyond them?
I think that’s one way to show them that, hey, we care about these things, and we’re doing what we can even as a small company. Beyond that, it’s about trying to find ways to support organizations like Women in AI Ethics, because I think it’s a completely self-funded, nonprofit initiative. Building a team with diverse backgrounds has been one of the main obstacles for a small group like us, but once you start bringing diverse candidates and diverse voices into your organization, you get access to so many perspectives you might not have thought about, which can also help you build a better product and a better business.
Sharvari: How is Fiddler AI rethinking AI?
Anusha: One of the core things we do at Fiddler AI is rethink AI by powering everything AI-related with explainability. If you think about the entire model performance management lifecycle, starting with training and going all the way to monitoring and the feedback loop, everything is powered by explainability. That’s how we’re rethinking AI: by bringing explainability and responsibility to the whole lifecycle.
Sharvari: What are the initiatives towards ethical and responsible AI development achieved through the Fiddler AI platform?
Anusha: If you have models out in production, the data they encounter there can be very different from the specific datasets they were trained on. With the model monitoring that Fiddler AI provides, you are constantly maintaining a level of performance: you are looking for bias, drift in the data, and outliers, and you are ensuring that high performance is maintained when the model encounters data different from what it was trained on. You can catch these anomalies and see why the models are drifting. Furthermore, you can understand the why and how behind these issues, and analyze and compare local and global data. How do you slice and dice these results to make sure there are no discrepancies or inherent unfairness inside your model? We give all of this information to the data scientists or creators of the model, so they can look at it, drill down, and see what’s happening. You, as the human in the loop, then make the decision about whether you need to retrain the model.
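The interview doesn’t describe Fiddler AI’s internal implementation, but the kind of data-drift check Anusha mentions is often sketched with a standard metric such as the population stability index (PSI), which compares the distribution of a feature in production against its training baseline. Everything below is an illustrative sketch under that assumption, not Fiddler’s API.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a production sample against the training baseline.

    A common rule of thumb: PSI < 0.1 suggests little drift,
    0.1-0.25 moderate drift, and > 0.25 significant drift.
    """
    # Bin edges come from the training (baseline) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)

    # Convert counts to proportions; a small epsilon avoids log(0).
    eps = 1e-6
    exp_pct = exp_counts / max(exp_counts.sum(), 1) + eps
    act_pct = act_counts / max(act_counts.sum(), 1) + eps

    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical data: a baseline feature and a mean-shifted production sample.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
shifted = rng.normal(0.5, 1.0, 10_000)

print(population_stability_index(baseline, baseline))  # no drift: 0.0
print(population_stability_index(baseline, shifted))   # drift: larger PSI
```

In a monitoring pipeline, a PSI above some threshold would raise an alert for the human in the loop, who then drills down and decides whether retraining is warranted.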
Come hear Anusha and Mary at RETHINK AI: ETHICAL AI TECH & TOOLS SUMMIT on May 25th, where they will share more of their insights along with other multi-disciplinary experts from the Mozilla Foundation, Omidyar Network, ACLU, Access Now, DuckDuckGo, Dataiku, Fiddler AI, and many others building a diverse and ethical future.