In today’s podcast, we have invited Beth Rudden, Distinguished Engineer and Principal Data Scientist, Cognitive and AI Services at IBM, to talk about her fascinating career journey from anthropology to AI, explore the role of trust in AI, and discuss how we can deploy responsible AI at scale.
Note: The views on this podcast are those of the person being interviewed and don’t necessarily represent IBM’s positions, strategies or opinions.
Mia Dand: Let’s start with the first question we ask all the brilliant women working in responsible AI. How did you get started in this space and land your current role?
Beth Rudden: I started in this space many years ago, and I think part of the reason I see things differently is that I’m a trained archaeologist and cultural anthropologist, so I think about data, and the way data is put in place, in a very different way. I see data as artifacts of human behavior.
And when I was in my former business unit, which is now Kyndryl, I was the acting Chief Data Officer, and I was in charge of a workforce transformation where we had 16 months to quickly up-skill and re-skill the entire population in order to prepare the business unit to become Kyndryl. That was about 110,000 humans across 175 countries.
And it was an incredible journey that we took in about 16 months, and I got hooked on how we can use data to inspire and delight our people, to enable and empower them to up-skill and re-skill themselves. And I will tell you that the things that I find in data are these stories, and I’m a huge fan of Brené Brown, who says, “Data is a story without a soul.” And I think it’s our job to really start to understand either the human that put the data in the system or the human that created the system that generates the data. And I think that we have so many more tools than we think we do that we can apply to data to start to understand it, but it’s very difficult for people to trust the data. When I first started out my career, I remember doing reporting where I would show a summary to someone and they’d be like, “Well, how’d you get there?” And then, “Oh, okay. So I don’t show just a summary; I show it with a drill-down.” And then they’d be like, “Well, why did you filter it this way or that way?” That taught me the importance of showing your work and really making sure that you can explain anything that you do, whether you’re applying a convolutional neural network, doing a linear regression, or just doing data prep.
I was trained as an anthropologist to actually write down what I ate in the morning when I was doing an ethnography with a human being. It’s important. And the intent by which we look at data is incredibly important in how we see that data and how that data really goes into the algorithms that can truly affect people’s lives.
Today, my role is in consulting. I am the Talent Domain Leader, and most of my peers, I have to say, look at me and say, “I’m so glad you’re doing that squishy human data science, Beth, because that’s not something that I like to do.” So I’m very much on the soft side of science, in qualitative analysis. I love understanding the many, many different impacts that you can get with small data. My latest adventure is in causal inference, so I’m deeply immersed in exploring all of my curiosity about what data can tell us about people and how we can really create human-centered service leadership.
Mia Dand: I am fascinated by your background and your journey. It speaks to the need for more multidisciplinary approaches to AI, as for far too long it has been the domain of the tech brothers. It’s highly technical, but the squishiness that you mentioned is so critical to making sure that we are considering all the important aspects, which were often ignored in the past.
So I have a question about trust for you, but before we go into that, for those of us who are not as familiar with IBM’s approach to AI, can you just walk us through in broad strokes, what are the principles that IBM follows in the development of AI?
Beth Rudden: So, being that we are a more than 110-year-old company, we have seen ebbs and flows, economies decline. We’ve seen so many different things, and we have the bones and the structure and the foundation to really be an adult in the industry. And I don’t say that unfairly, because I look at my teenage son, who gets his paycheck and says, “Wait a second, I have to pay taxes?”
And there’s a responsibility that corporations have. And I am a very proud IBMer because we create jobs in countries and in places where you wouldn’t think that jobs are there. So in Brno, in Central and Eastern Europe, in Belarus, in Budapest. We have a new CIC center opening up in Buffalo, New York, and one in Louisiana.
And we’re always looking to have that diversity of thought that was so important to our founder. And I will say that that’s the ethos, the atmosphere, that we have created in order to establish our ethical principles; remember, ethics and ethos are inextricably linked. And I think one of the things that I’m most proud of is that we came out with these ethical principles several years before AI ethics became a thing and before people started talking about algorithmic justice. This is part of our culture because it’s part of our responsibility as a corporation to really engender social good in the communities that we inhabit, and that obviously includes our employees.
So we have three ethical principles.
First, the purpose of AI is to augment human intelligence. If you are creating AI to control humans, to make decisions on behalf of humans, or to make decisions on behalf of machines, you really need to think through what your intent is. And we have an incredible number of tools and exercises and workshops in order to establish this intent.
I talked about intent with data earlier. The intent in everything that we build, in all of our solutions for our clients, is to augment human intelligence, not replace it.
Second, data and insights belong to their creator. This is another incredible one that started with our data responsibility work all the way back in 2016, which Ginni Rometty established.
And if you think about it, if you are using AI with data that you have acquired without the consent of that human being, is that kind of like catfishing, maybe? I think that the experience that you will have when you actually use AI, when you have given consent to use your data and use that AI, is ten times better. It is like going on a date with the spouse or partner that you’ve had for a long time. It’s that different an experience. And that’s the type of change management that we put into our solutions in order to get the adoption we need, because we believe we are indemnifying these principles into our solutions: data and insights belong to the creator.
And then the third one, which is always a lot more difficult than people realize, but also a lot more accessible when you choose the right trained data scientists: new technology, including AI systems, must be transparent and explainable.
So within these three principles, we also have five pillars: explainability, fairness, robustness, transparency, and privacy. But on the understanding that every AI system must be transparent and explainable, this is where I’m so glad, Mia, that you had me on this show, because I think there is a very important message out there. When you do AI properly, when you are creating and establishing your machine learning or your black-box algorithm, it is your responsibility, in setting up that experiment, to make sure that it meets or exceeds human-level performance, or a proxy for the Bayes error. And when we are thinking about the machine learning program, we are feeding it training data, segmented training data, and we make decisions on how to segment that training data, how to filter it, what to use, and what not to use.
And then, since 2013, our IBM Research division has built amazing tools that can see inside the piles of linear algebra, the neural nets, and the black-box algorithms, to get at the tokens or the features or the predictors, in order to see that this is the predictor for this neural net against this training set. And then we can use orthogonalization or tuning parameters in order to tune which features we want to use. This is really a basic understanding of how to set up a machine learning program.
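To make that workflow concrete, here is a minimal sketch, not one of IBM’s tools, of the steps Beth describes: train a model, compare it to an assumed human-level baseline as a rough proxy for the Bayes error, and surface which features the model actually relies on. The dataset, the baseline figure, and the model choice are illustrative assumptions only.

```python
# pip install scikit-learn
# Illustrative sketch only: the dataset, the model, and the human baseline
# figure are assumptions, not anything from the interview or from IBM.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Compare model accuracy with a hypothetical human-level baseline,
# used here as a rough proxy for the Bayes (irreducible) error.
HUMAN_LEVEL_ACCURACY = 0.95  # assumed figure for illustration
model_accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Model accuracy: {model_accuracy:.3f} (human baseline: {HUMAN_LEVEL_ACCURACY})")

# Surface the features (predictors) the model leans on most heavily,
# so the experiment can be explained rather than hand-waved.
importances = sorted(
    zip(X.columns, model.feature_importances_), key=lambda p: p[1], reverse=True
)
for name, weight in importances[:5]:
    print(f"{name:30s} {weight:.3f}")
```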
What people forget is that there’s an entirely different realm of AI: natural language processing, understanding text analytics, using semantics in addition to statistics. And I’m going to get back to your multidisciplinary comment: it takes a holistic perspective. I get to work with my social science peers and my I/O psychologists on the workforce side, in addition to my data engineers and data architects and DevOps engineers, in order to put it into a robust and governed CI/CD implementation. All of these things have to come together, and you have to prepare, and you have to also make sure that you… I think I’m going to summarize this with a quote from one of my peer distinguished engineers. She said, “You know what, Beth, there’s really no hand-waving in math.” And the people who tell you that they can’t explain something are not really working that hard.
And so I really… in some ways I don’t want to discount how hard it is to do, because sometimes the answer is not to use AI. Sometimes the answer is to use a rules-based engine or some sort of decision support system. So I think that we need to come off the hype curve a little bit on the whole machine learning and black box thing, and remember that Tim Berners-Lee, who established the World Wide Web, proposed the Semantic Web all the way back in the early two thousands. And that’s something that has full lineage and provenance. It has full traceability. I’m not sure why we don’t combine a lot of these efforts.
And the more social scientists I can get into the data science arena, the more balanced I hope we get in being able to use all of these various tools, the toolkits that we have at IBM and that we have donated to the Linux Foundation as well as the tools that are out there, and use them appropriately, so that we can actually have that transparency and explainability as well as robustness and privacy, and so that we can really work with our clients to understand: is this algorithm treating humans fairly? Who is benefiting from the output of the algorithm? Those are the types of questions that we ask that truly differentiate IBM.
Mia Dand: What I appreciate about what you’ve just shared is the human centricity of the approach, including why AI needs to be multidisciplinary and why it has to be cross-functional, because what AI is suffering from right now is a lot of ethical blind spots. And you can only solve these with what you described as a more holistic approach, by introducing more “squishiness.” In your writings, you’ve mentioned the need for building trust. What are some different ways that you’ve found we can engender trust in our AI systems? What are the different ways that we can actually start addressing this trust deficit?
Beth Rudden: So I think that when we use the word trust, not everybody has the same definition and I’m an anthropologist so I define my terms. I’m a huge fan of Charles Feltman, and this again comes via Brené Brown, who says that “Trust is making something you value vulnerable to somebody else’s actions.”
And if you think of that in terms of AI, you are making yourself vulnerable to a machine’s actions if you’re not following our principles for trust and transparency. You’re using that to make a decision, and I know that there are so many negative examples, but there are even more amazing, delightful examples of well-designed implementations where we have enabled and empowered people. When I was doing the workforce transformation, the worst-case scenario was that I told somebody they had a propensity to be a data scientist, or a propensity to be a DevOps engineer or a site reliability engineer. And the worst case is the person would go out and take a class and learn something.
So when you are making these types of decisions, where you are inferring things like the propensity to learn or the propensity for a particular skill, think about it in ways where you’re measuring the positive. And I always think about trying to invert the model. So instead of looking at AI in isolation, and this comes from some of my really good friends who are designers at IBM, you design the relationship that you want a human being to have with AI. And frankly, as software engineers, or as any engineer, you should measure your value on consumption. How well did you do, based on how many people are consuming your output? Not buying it and putting it on a shelf, but consuming the output of the AI model, which means that you understood the problem well enough to put in a model and you can say, this person really needs this information at this time to make this decision. That’s not a solved problem. And so when we do that, we’re designing the relationship that that person has with the AI model.
And I look at it like children, like my kids. I would never teach my children to quit relationships. I would never teach my children all of the toxic words that they could say in all of the languages. Instead I would teach my kids why people stay in relationships and how to build those relationships. Why do people stay at companies? That’s a much better question to ask, and a much better question to ask of AI. Or, thinking about it another way: what are all the good ways that people can support and enable good behavior?
This is frankly Positive Psychology 101, and I’m not going to take credit for it because it’s so huge, but it’s also so simple. Instead of using AI to predict what kind of sentence a human being should have and giving that sentence to a judge, or predicting whether somebody will become a drug addict and giving that prediction to a pharmacist or a doctor, use AI to augment that judge and say, “Hey, you make really bad decisions when you don’t eat lunch by 11:30. Look at this trend.” That’s what AI is good at. And if we could start to use AI to augment people, which requires trust, which requires really deepening that relationship and making sure that that person is going to make something that they value vulnerable to somebody else’s action, to the AI’s action, that means that that person really needs to understand what that AI is doing. And to me, that means that we as data scientists must do better to show our work and stop waving our hands around.
I have to say, I see a lot of new data scientists using all of these tools, and I’ve watched data scientists shove data into all kinds of different types of models until they get to the pattern or the feature or the level of accuracy that they want to get to. And it strikes me as odd, because I always ask the question: where did the data come from? Who put that data there? What were they doing at that time? Did they have lunch? I think there are so many more exploratory questions we could be asking, but in order to engender trust, we want to get the business value. We want to get that return on the investment. We want to get those quarterly returns so that we can increase the stock value. And I think that AI, and the new development of AI software that is happening on the front line of services, is a much more profound mission. When you really understand that you are making something that you value vulnerable to somebody else’s decisions and actions, you need to understand how to trust that. So we all need to have the same definition of trust, and we all need to understand: what are we making vulnerable to somebody else’s decisions? Who is paying that bill? And who’s benefiting?
Mia Dand: So many great questions. It’s one of the things we try to address with Women in AI Ethics. The reason we look at diversity, and think it’s absolutely critical in the development of these systems, is because when you include more folks like yourself and people from different marginalized communities, they start asking the right questions. When you think about vulnerability, it’s about bringing those communities, those voices, in, because they are the most likely to experience those harms. They are the most at risk, and they are also more likely to ask the right questions. We cannot overstate the importance of asking the right questions, which is what you’ve just eloquently shared with us.
So I have two questions related to what you just walked us through. One is that IBM has done a lot of amazing research in this space, and you have also created tools to help organizations and their consumers better understand these systems and make better decisions. Can you walk us through, at a high level, what those tools and systems are that you’ve built?
Beth Rudden: So we have AI Explainability 360, AI Fairness 360, Adversarial Robustness 360, FactSheets, Uncertainty Quantification 360. All of these are open-source toolkits that developers can use. We also have a series of services that we bundle around some of these toolkits, because many of our clients really need to use them to get to that token that I mentioned before: with many machine learning algorithms, you do need to understand what your feature is. So even if you’re not using privileged information, let’s say you are redirecting compute from one application to another, you really want to know what that feature is. Is that a compute-heavy application or a memory-heavy application? You can use these things to tune, and it’s very critical to know what’s going on in machine learning. So when you have these toolkits, they can be applied in order to get those tokens and the features and the predictors.
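As one concrete illustration of what these open-source toolkits expose, here is a minimal sketch using AI Fairness 360 to compute fairness metrics on a tiny, made-up hiring table before any model is trained. The column names, values, and group definitions are invented for illustration and are not from any IBM engagement.

```python
# pip install aif360 pandas
# Illustrative sketch only: the data below is a toy, invented example.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# A tiny, hypothetical table of hiring outcomes (1 = hired) with a
# protected attribute "gender" (1 = the privileged group in this toy case).
df = pd.DataFrame({
    "gender":           [1, 1, 1, 0, 0, 0, 1, 0],
    "years_experience": [5, 3, 7, 6, 2, 8, 1, 4],
    "hired":            [1, 1, 1, 0, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (1.0 means parity).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: gap in favorable-outcome rates (0.0 means parity).
print("Statistical parity difference:", metric.statistical_parity_difference())
```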
The services that we wrap around them also include a lot of education. I would say that many people are not trained to think probabilistically and to understand that algorithms are only probably correct some of the time, and people forget that. So when we go in with our services, we show what test-retest reliability looks like. We show what making different choices for some of the hyperparameters would look like. We do simulations. And in the area that I am very interested in, ontologies, we show people all of the different connections that they can make when they understand the language that people are using in the business.
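Here is a small sketch of what showing test-retest reliability and the effect of hyperparameter choices can look like in practice, using generic open-source tooling rather than IBM’s services: rerun the same model across multiple data splits and a few hyperparameter settings, and report a spread instead of a single number. The dataset and the values of C below are illustrative assumptions.

```python
# pip install scikit-learn
# Illustrative sketch only: dataset and hyperparameter values are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# "Test-retest": the same model evaluated on ten different data splits
# yields a range of scores, not a single authoritative number.
for C in (0.01, 1.0, 100.0):  # regularization strength, a hyperparameter choice
    scores = cross_val_score(LogisticRegression(C=C, max_iter=5000), X, y, cv=10)
    print(f"C={C:>6}: mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```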
And I think that we at IBM are really investing in making sure that we’re preparing the market. But I also think it’s much more about an educational understanding of what happens when we put a confidence interval in front of somebody. Do they understand what that means? What happens when we tell grandma that she’s not going to get her surgery, even though she is qualified, because the algorithm was only 80% correct that day, and sorry, grandma, that’s what the other 20% looks like? How do we do this? That’s what we are doing in services: working with our clients and showing them, trying to explain in plain English, and not mathsplain, what these algorithms are doing and what the data really is about. And that’s why the approach I always love to think about is that customer service desk person who put the data in that field at that time. Were they having a bad day? What did they have to do on the screen in addition to whatever they were talking about with the client or customer? How did they resolve that problem? I know that this sounds very simplistic, but I think that that’s where our stories are. And that’s where we can inspire and delight people into thinking differently about data, and about the assumption that there’s so much data everywhere that we can’t really understand it. The important things in life are not necessarily in Google, or in the world of data, or even digitized.
I have always liked the feel of the callus of my husband’s hand. There is information and data that is beyond our wildest imagination, and we should use data and every augmented experience to truly start to explore these new worlds. And I think that one of the hardest things I do is rein myself back, because I find so many interesting things in our clients’ and customers’ data, and it always tends to boil down to somebody not understanding that somebody else had a different understanding. And it goes back to that ontological, philosophical question of how you know what you know.
Mia Dand: I like the systematic approach that you’re using, and I think it’s because of your training, your unique background, which also brings this fresh perspective. In your consulting role, you have a big challenge, right? You’re working with these large companies, large organizations, and they’ve done things a certain way. So when they have to change their thinking or the way they’re developing these models, how do you work with them? How do you influence them to do things in different ways, like build more ethical systems, when they may not even know that they have these issues or that they’re not doing things the right way?
Beth Rudden: I would say that the secret is simple. It’s hard work and we’re not always successful.
I mean, I think that — if people say they are, maybe you should think about that, but the track that I like to take is to inspire and delight. And when we are working with our design organization, we have a series of exercises that really go through how to communicate to people and how to get people in the same room.
And there was this one client where I had the opportunity to work with the C-suite. And I asked the CHRO, the head of HR, “How much do your people in Costa Rica cost on average? What’s the total cost of Costa Rica?” She’s like, “Oh, I don’t know. Go ask the CFO.” So I go ask the CFO and I say, “If a hurricane hit Costa Rica, how much would that impact your business? How would that impact your people?” He goes, “Oh, I don’t know. Go ask the CHRO.” Okay. I think you guys need to communicate together. Imagine what it would be like if you all walked into the room with a trusted source of data, with the same information. You could actually get down to the business of doing business, correct? Those are the types of things that we do, and like I said, it’s not easy.
We do have many different types of opportunities to try to get people to think a little bit differently. And you mentioned this earlier about understanding data in a little bit of a different way and tackling cognitive biases. There are over 185, and counting, different cognitive biases that human beings have. And we run Academy of Technology studies in which we can get cross-disciplinary teams; it’s all give-back, with employees who want to participate in a study so that they can work with their peers and meet different people as well as do different things.
And one of the most recent ones was a study about pretending to be in the era of the Titanic, and understanding that the people who were in steerage would not have gotten a boat no matter what, because it was part of the culture that only the people with first-class tickets would have been looked at as worth saving, per se, enough to get into a boat.
And it gives people a very different perspective, when you think about the equanimity that you can feel when you’re looking over history. And I love that the word stochastic speaks to the changing and randomness of history and time. I think this hopefully offers a bit of a model for how we can inspire and delight people, and our clients and customers, to think differently about what information they have, what information is completely unknown, and what biases they might not even hear or see.
And back to your point, Mia, about diversity, I’m always like, “Alexa, could you please understand me and not my husband?” I’m trying to lower my voice, because if you just think about these common everyday things that we’re intersecting with, there is so much that is socialized that makes us blind to things that we need to be able to see. Thus, I’m trying to put as diverse a set of neocortices in the room as possible, so that we can see all of those unknowns, or at least have the people who are underprivileged have a voice in there. So these are some of the design exercises, and these are workshops that we do. We try to train our clients and customers to think of these workshops and the garage sessions that we offer them as a tool for prioritizing and thinking differently about what they are doing, in order to really get the usability and the consumability as well as the business outcome.
And I think that’s the secret sauce: aim for the business outcome. Pull as many diverse neocortices into the room as you can. Start with some exercises, and start really understanding the intent with which you want to create AI, or really any responsible system.
Mia Dand: I really like how you make things real for people, like those analogies from back in history, because everybody thinks, oh, if I was there, I would’ve done things differently. It’s taking it into the real world, applying it to their jobs and professions and saying, okay, here’s the opportunity, how would you do things differently? I find that so much more effective than some theory about how you need to do the right things. Along with your tools and your perspective, I can definitely see how that’s a much more effective approach than some esoteric examples of what ethics looks like. So, very practical.
So, taking a step back. There’s so much changing in the AI landscape, and one of the big forces in the space is regulation. That’s been on a lot of people’s minds, and for good reason, because the regulatory landscape keeps evolving and changing, and these changes affect organizations and individuals. So what is your guidance? How are you seeing this space evolve, and what kind of guidance are you providing to the organizations and companies that you work with?
Beth Rudden: So in addition to the Garage sessions and the enterprise design thinking and getting people in the room and really making sure that they have established priorities, we’re doing a lot of work with NGOs across the board, and with the different standards committees that are really effecting some of these changes and some of these policies. I was also part, in principle, of working on GDPR implementation as it rolled through most companies. And again, IBM being a global, adult company, we really came to understand what implementing GDPR takes.
And I think that as these regulations roll through (New York just came through with a rule that any algorithm impacting an employee must be explainable and transparent), there’s a reason that we have these principles out on our AI ethics site. There’s a reason that this is public knowledge. Everything I’m talking about is part of our public domain.
We want people to understand that the companies that treat their employees and their employees’ data well are going to be the companies that have the right talent to push them into the next future, whatever that may be. And I think it’s that trust in the employees that is something we build, and we like to make sure that our clients understand that that’s going to be a game changer. Think about an employee as your partner, as part of your company and your corporation. Credit unions are a great example: every credit union employee is a member of that credit union. And I think that’s where we’re going to go, because this human construct of trust is going to require us to step back from trying to apply these algorithms to automate away all of the humans. Instead, let’s automate all of the robot out of the humans.
If a human being is doing a repeatable task, let’s automate that, to liberate that person from having to do those things. We have had incredible successes, like taking an onboarding process from 15 days to 15 seconds using AI. And we’ve done so many different things where we’re reducing cycle time so that employees are freed from having to look in 17 different places to get the information they need, or from having to take paper and put it into a digital construct.
There are so many applications, Mia. I don’t understand why we’re always talking about the negative ones when there are so many positive ones. And I have certainty about the regulations and the policies and the charters: we help companies create their own AI ethics boards, create their own charters, create their own policies.
Many of those pillars are non-functional requirements that we turn into functional requirements, with components that have system context and that can show 30 or 40 KPIs and metrics to say, this is how we know it is auditable and explainable. So there are so many things that we are doing and trying to get through.
I think that maybe things need to get broken a little bit more, but the easy button needs to stop being hit in our world, because we have a lot of work to do. And I think that that work could be fun if we start to think about AI in a way that augments the human being, that uses data with consent, and that really makes sure that anything we are doing algorithmically is explainable and transparent. Explainable and transparent in a way that is like GDPR in plain English, in plain speech, so that a five-year-old can understand it.
Mia Dand: Amen to everything you just said. I especially like the emphasis on simplicity, because when you have these massive, complex documents that only a few people can parse, it’s like you need a PhD to make sense of them. Right there, you’re creating this massive barrier so that no one can actually take action. So I love the focus on simplicity. And I do like what you said, and appreciate that we should be looking at more positive applications.
I publish a weekly newsletter on AI ethics, and as I look at the latest developments, you’re absolutely right. There are so many headlines about the negative outcomes from models built and unleashed that we need to get back to the basics, back to the promise of this technology and the good we could be doing.
Beth Rudden: Thank you so much for your time as well and thank you for singing the song, because I think that your podcast and how we can reach people, if even one person listens to this and they start thinking about how they can use AI to explore history or to explore their understanding of something, that is worth every minute.
Join us again next week, when we’ve invited Saishruthi Swaminathan, Advisory Data Scientist, AI Strategy & Innovation at IBM, to discuss how we can take responsible AI from talk to action with the tools and toolkits she has been actively developing at IBM.