Read the compelling story of Nidhi Sudhan, Co-founder of Citizen Digital Foundation.
This interview is part of Women in AI Ethics (WAIE) “I am the future of AI” story series, which highlights multidisciplinary talent in AI by featuring inspiring career journeys of women and non-binary people from historically underrepresented groups in tech.
An expert in media and communications, she reached her turning point when perverse incentives compromised the integrity of the fourth pillar of democracy.
Here, she recounts her career path and the challenges she faced, and offers valuable insights for those seeking encouragement to pursue an ethical career in AI.
Can you share an incident that inspired you to join this space?
My work in AI ethics has primarily been in my current capacity as Co-founder of Citizen Digital Foundation (CDF) India. As a media and communications expert, my inflection point came when perverse incentives hijacked the fourth pillar of democracy.
CDF took shape in August 2021 out of my love for media’s transformative power, developed over 17 years of practising it, and the disheartening realisation of its consequent harms and misuse. It disturbed me to witness how tech-driven media, meant to connect and inform, was increasingly used to manipulate and fragment individuals and democracies, and to distort information and knowledge systems.
Through CDF, I advocate for AI governance and responsible tech/AI to a cross-section of audiences, including higher education, startups, business leaders, media organisations, policymakers, and government entities.
How did you land your current role?
After working in the media for 17 years, I did a responsible-leadership-focused MBA in the UK from 2018 to 2020. I studied and researched hyper-personalised communication’s impact on consumer autonomy, evaluated content policies, dark patterns, behaviour manipulation, and the overall value chains of digital media businesses, and arrived at a point where it was difficult to un-see what was going on.
As I applied to digital media companies for content moderation and editorial integrity roles to influence the system from within, conversations with my lecturers led me to start on my own. The decision to advocate AI ethics in India was driven by the awareness lacuna among end-users, high levels of exploitative tech business practices in a development-driven economy, and a surveillance-driven regulatory environment.
What kind of issues in AI do you tackle in your day-to-day work?
At CDF, we address the entire spectrum of techno-social challenges and work on influencing systems change and circularity towards safer, responsible, equitable AI in digital ICTs.
We cover data privacy, misinformation and polarisation, behaviour manipulation, bias and discrimination in AI, AI governance, online child safety, surveillance, frauds and scams, etc., from a systems perspective, in our training and awareness sessions and other interventions. We are also working towards facilitating responsible AI development by channelling funds and support in that direction.
If you have a non-traditional or non-technical background, what barriers did you encounter and how did you overcome them?
My background is in media and communications. It’s not uncommon to come across a conflation of AI ethics with primarily technical or policy expertise. In an ecosystem dominated largely by men, paternalistic and condescending approaches tend to be more commonplace than open, equitable inclusion or conversation.
In a country like India, which is grappling with many other socioeconomic priorities, it is also difficult to find funding for ‘Responsible Tech’, which is still considered a niche and is not sufficiently understood.
These barriers are not going away anytime soon, and we navigate them with tact, assertiveness, and doggedness as the occasion demands. We conceal any exasperation and continue to facilitate more representation in tech to even things out for those who come after us.
Why is more diversity — gender, race, orientation, socio-economic background, other — in the AI ethics space important?
The current AI ecosystem suffers from a ‘blinders’ problem. Far too often, tools, services, and products are conceived and built by a select few, for a few. This limited perspective hinders progress for diverse communities and exacerbates existing inequalities, preventing economies from flourishing holistically. Unless technologies become equitably accessible and beneficial for all, purely profit and power-driven entities will continue to set the course.
The lack of lived experiences among developers, often disconnected from the people, communities, and cultures they aim to serve, creates blind spots that lead to exclusion, discrimination, and division when AI is integrated into critical systems like education, law enforcement, justice, employment, healthcare, and finance. This issue is further amplified by the current global AI arms race, fuelled by a multi-polar trap, where half-baked AI solutions are rapidly shipped out, creating entrenched economic dependencies that are difficult to untangle. Diverse, representative data and perspectives in the design, development, and deployment of AI are a must to course-correct many of the past misses and prevent large-scale exacerbation of inequities and injustices.
What is your advice to those from non-traditional backgrounds who want to do meaningful work in this space on how to overcome barriers like tech bro culture, lack of ethical funding/opportunities, etc.?
No voice is too small. If you are a journalist, agriculturist, social worker, educator, artist, healthcare professional, or even a student, and feel strongly about the need for equitable tech development and how AI is impacting your field, then you can step up and make your voice heard and be the diversity, representation, and leadership that’s missing in AI ethics right now.
If you are already part of a system, then start by questioning present approaches and recommending responsible AI (RAI) practices to your institutions/organisations. If you are outside, then join RAI communities and forums online, apply for fellowships and courses at the intersection of your field and AI, or join organisations and civil society bodies working on AI ethics and lend your skills to develop the narrative.
Nidhi is the Co-founder of Citizen Digital Foundation (CDF), a non-profit based in Trivandrum, India. CDF’s mission is to foster information literacy and advocate for responsible technology, promoting safe and mindful navigation and innovation in the digital ecosystem. CDF also urges informed action among media professionals, governance bodies, and the bureaucracy, shaping a more responsible digital landscape.
Learn more about her and connect with her on LinkedIn.
Sign up for the Women in AI Ethics mailing list to stay connected with this community of AI pioneers, experts, and emerging talent.