BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Women in AI Ethics™ - ECPv6.8.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Women in AI Ethics™
X-ORIGINAL-URL:https://womeninaiethics.org
X-WR-CALDESC:Events for Women in AI Ethics™
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Halifax
BEGIN:DAYLIGHT
TZOFFSETFROM:-0400
TZOFFSETTO:-0300
TZNAME:ADT
DTSTART:20250309T060000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0300
TZOFFSETTO:-0400
TZNAME:AST
DTSTART:20251102T050000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20250828T110000
DTEND;TZID=America/Halifax:20250828T120000
DTSTAMP:20260409T184043Z
CREATED:20250126T042110Z
LAST-MODIFIED:20250827T072043Z
UID:3048-1756378800-1756382400@womeninaiethics.org
SUMMARY:Webinar – Panel Discussion: Promise and Perils of AI in Education\, Thursday\, August 28
DESCRIPTION:Big tech is making massive investments in education through institutional partnerships\, increased funding\, and new initiatives to expand the adoption of AI and other digital tools in classrooms. Governments and educational institutions are partnering with tech companies to introduce AI into schools and universities\, with the purported goal of preparing students for the future workforce. \nThe accelerated adoption of AI and digital technologies with little or no oversight\, along with a lack of transparency in how collected information is used\, has created a data privacy challenge. This rapid expansion has also been met with concerns from educators who believe overreliance on AI technologies undermines teaching objectives and can diminish the learner’s critical thinking skills. \nExperts say that laws crafted without input from educators\, administrators\, parents\, and students often fail to address these and other emerging issues. The backlash against the negative impact of AI on learners has included educators resisting the use of AI in classrooms and education departments blocking access to Generative AI tools like ChatGPT\, amid concerns about the safety and accuracy of AI-generated content. \nJoin us on Thursday\, August 28 at 11a ET for a timely and relevant discussion with experts on the promise and pitfalls of AI and digital tools in education. \n \nSPEAKER/S PROFILE: \n \nAmelia Vance\, a globally recognized expert in child and student privacy\, is president of the Public Interest Privacy Center\, an organization that equips stakeholders with the insights\, training\, and tools needed to cultivate effective\, ethical\, and equitable privacy safeguards for all children and students. PIPC staffs the new Student & Child Privacy Center at AASA\, the School Superintendents Association. 
Amelia is also an adjunct professor at William & Mary Law School\, the co-chair of the Federal Education Privacy Coalition\, and the founder of Public Interest Privacy Consulting\, LLC. \nAmelia is a regular speaker at privacy and education conferences in the U.S. and abroad\, has testified before Congress and several state legislatures\, and has presented at events hosted by the U.S. Department of Education and the Federal Trade Commission. She currently serves on the Maryland Student Data Privacy Council. Amelia has published several resources on child and student privacy and is regularly cited in the press. \nRead more about Amelia’s work on LinkedIn. \n  \n \n Munenyashaishe (Ishe) Hove is a Data Scientist and AI Researcher with a deep commitment to building ethical\, transparent\, and socially responsible technology. Originally trained in Accounting & Finance\, she transitioned into AI through rigorous self-directed learning and global fellowships such as the Women Techsters Fellowship and WorldQuant University. Her work sits at the intersection of machine learning\, public interest technology\, and algorithmic accountability. \nIshe has explored both technical and human-centered domains—from applying AI to dynamic particle motion systems\, to researching how algorithmic systems impact people in public services\, especially in low-resource and marginalized contexts. Ishe believes that AI must serve people\, not just problems. That’s why she advocates transparency\, community participation\, and justice at every stage of the AI lifecycle. She is also the founder of the Data Science & AI Community Hub – Botswana\, a platform dedicated to mentoring aspiring African data scientists and advancing STEM education for women and girls. \nRead more about Ishe’s work on LinkedIn. 
\n  \nLinks & Resources: \n\nhttps://openletter.earth/an-open-letter-from-educators-who-refuse-the-call-to-adopt-genai-in-education-cb4aee75?limit=0\nhttps://www.nannainie.com/_files/ugd/cf986a_96612c9ab2bb4864be2bbbf3b73f416b.pdf\nhttps://www.unicef.org/innocenti/stories/when-schools-rush-innovate\nhttps://www.chalkbeat.org/newyork/2023/1/3/23537987/nyc-schools-ban-chatgpt-writing-artificial-intelligence/\nhttps://www.bestcolleges.com/news/schools-colleges-banned-chat-gpt-similar-ai-tools/#schoolswithdrawn\nhttps://thescreentimeconsultant.com/resources/blog/a-rude-and-necessary-awakening-what-a-recent-ftc-amicus-brief-means-for-edtech-and-what-parents-and-schools-need-to-know
URL:https://womeninaiethics.org/event/webinar-panel-discussion-promise-and-perils-of-ai-in-education/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/png:https://womeninaiethics.org/wp-content/uploads/2025/01/IMG_8075.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250829T110000
DTEND;TZID=America/New_York:20250829T120000
DTSTAMP:20260409T184043Z
CREATED:20250427T073308Z
LAST-MODIFIED:20250825T180425Z
UID:3522-1756465200-1756468800@womeninaiethics.org
SUMMARY:Reading Circle – Programmed Inequality by Mar Hicks\, Friday\, August 29
DESCRIPTION:Join us on the last Friday of each month at 11a ET for our monthly AI Ethics Reading Circle\, where we discuss critical works by authors and scholars from marginalized and underrepresented communities in tech. \n\nBOOK TITLE: Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing\, MIT Press\, 2017 \nAUTHOR: Mar Hicks \nBUY THE BOOK: WAIE affiliate bookstore (support independent bookstores) \n\nIn 1944\, Britain led the world in electronic computing. By 1974\, the British computer industry was all but extinct. What happened in the intervening thirty years holds lessons for all postindustrial superpowers. As Britain struggled to use technology to retain its global power\, the nation’s inability to manage its technical labor force hobbled its transition into the information age. \nIn Programmed Inequality\, Mar Hicks explores the story of labor feminization and gendered technocracy that undercut British efforts to computerize. That failure sprang from the government’s systematic neglect of its largest trained technical workforce simply because they were women. Women were a hidden engine of growth in high technology from World War II to the 1960s. As computing experienced a gender flip\, becoming male-identified in the 1960s and 1970s\, labor problems grew into structural ones\, and gender discrimination caused the nation’s largest computer user–the civil service and sprawling public sector–to make decisions that were disastrous for the British computer industry and the nation as a whole. \nDrawing on recently opened government files\, personal interviews\, and the archives of major British computer companies\, Programmed Inequality takes aim at the fiction of technological meritocracy. Hicks explains why\, even today\, possessing technical skill is not enough to ensure that women will rise to the top in science and technology fields. 
Programmed Inequality shows how the disappearance of women from the field had grave macroeconomic consequences for Britain\, and why the United States risks repeating those errors in the twenty-first century. \n  \nAUTHOR BIO: \n \nMar Hicks is a historian of technology who investigates how gender and sexuality change what we think we know about technological progress and the global “computer revolution.” \nThey are currently an Associate Professor at The University of Virginia’s School of Data Science\, in Charlottesville\, where they do research and teach courses on the history of technology\, computing and society\, and the larger implications of powerful and widespread digital infrastructures. \nTheir award-winning book\, Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing\, investigates why the proportion of women declined as electronic computing matured\, and how this labor situation had grave effects on the technological aspirations of that waning superpower. It shows what lessons this holds for other nations\, especially the United States\, and how history can help us make sense of the present and the future by focusing not just on technological success stories\, but also stories of technological failure. 
\nMargot Shetterly\, author of Hidden Figures\, has called it an “important lesson for scholars and policymakers seeking ways to improve inclusion in STEM fields.” Maria Klawe\, President of Harvey Mudd College and an expert on diversity in STEM\, has described the book as “one of the best researched and most compelling examples of the negative impact of gender and class discrimination on a country’s economy.” \n  \nRELATED LINKS: \n\nLinkedIn\nAuthor website\nProgrammed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing\nYour Computer is on Fire (MIT Press)\nResearch Affiliate\, Centre for Democracy and Technology\, University of Cambridge\nMember\, Scholars’ Council\, Center for Critical Internet Inquiry\, UCLA\n\nAssociate Editor\, IEEE Annals of the History of Computing
URL:https://womeninaiethics.org/event/reading-circle-programmed-inequality-by-mar-hicks/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/png:https://womeninaiethics.org/wp-content/uploads/2025/04/Reading-Circle-August-29-2025-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20250925T110000
DTEND;TZID=America/Halifax:20250925T120000
DTSTAMP:20260409T184043Z
CREATED:20250126T042403Z
LAST-MODIFIED:20250918T105240Z
UID:3050-1758798000-1758801600@womeninaiethics.org
SUMMARY:Webinar: From Talk to Practice: Operationalizing AI Ethics in Tech
DESCRIPTION:For all the talk and media buzz around it\, incorporating AI Ethics into technology development continues to be challenging. Part of the difficulty lies in reconciling ethical principles with company culture\; another part is aligning ethical AI practices with business processes and objectives. In our September AI Expert webinar\, we will go beyond rhetoric to dive into the practical challenges of implementing AI Ethics principles in tech creation and how to address them. \nFor this important and timely discussion\, we have invited Dr. Stacy Hobson\, Director\, Responsible Tech Research\, IBM Research. Dr. Hobson will share some of the work done by her team\, including research to uncover ongoing challenges of practically embedding and operationalizing ethics\, especially in large organizations. We will also discuss the practical methods and tools that her team has developed to support responsible technology creation. \nJoin us on Thursday\, September 25 at 11a ET for this much-needed discussion on how to start redefining AI practices and culture to foster ethical technology development. \n \nSPEAKER/S PROFILE: \n \nDr. Stacy Hobson is Director of the Responsible Technologies Research group at IBM Research. Her group’s research focuses on understanding the societal impacts of technology and promoting tech practices that minimize harms\, biases\, and other negative outcomes. Her team also develops practical methods and tools to support responsible technology creation. Her prior research has spanned multiple areas\, including addressing social inequities through technology\, AI transparency\, data sharing platforms for governmental crisis management\, and risk management techniques for financial services. \nStacy has authored more than 20 peer-reviewed publications and holds 16 US patents. 
Stacy earned a Bachelor of Science degree in Computer Science from South Carolina State University\, a Master of Science degree in Computer Science from Duke University and a PhD in Neuroscience and Cognitive Science from the University of Maryland at College Park. Stacy is fortunate to have the opportunity to use her expertise as a scientist in an area that she is very passionate about – addressing problems that matter for the world. \n  \nLINKS & RESOURCES\n\nResearch at IBM – People\nA Responsible and Inclusive Technology Framework for Attending to Business-to-Business Contexts\nTowards Labor Transparency in Situated Computational Systems Impact Research\nHistorical Methods for AI Evaluations\, Assessments and Audits\nRethinking AI Safety: Provocations from the History of Community-based Safety Practices\nCan LLMs Recommend More Responsible Prompts?\nSPRI: Aligning Large Language Models with Context-Situated Principles\nUnlocking AI Opportunities with the Responsible Generation and Use of Synthetic Data
URL:https://womeninaiethics.org/event/webinar-from-talk-to-practice-operationalizing-ai-ethics-in-tech/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Virtual.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250926T110000
DTEND;TZID=America/New_York:20250926T120000
DTSTAMP:20260409T184043Z
CREATED:20250427T073428Z
LAST-MODIFIED:20250925T085502Z
UID:3524-1758884400-1758888000@womeninaiethics.org
SUMMARY:Reading Circle: Nannie Helen Burroughs: A Tower of Strength in the Labor World
DESCRIPTION:BOOK TITLE: Nannie Helen Burroughs: A Tower of Strength in the Labor World \nAUTHOR: Danielle Phillips-Cunningham \nOVERVIEW: \n \nBlack girls and women were at the forefront of the labor movements of the twentieth century. However\, many were relegated to the footnotes of history\, and their contributions minimized or erased. \nIn her recent book\, Dr. Danielle Phillips-Cunningham\, associate professor in the School of Management and Labor Relations at Rutgers University\, shares the story of Nannie Helen Burroughs\, whose leadership as an educator and civil rights leader was revolutionary in transforming the economic landscape for Black girls and women. \nNannie Burroughs established the National Training School for Women and Girls (NTS) in Washington\, DC\, which\, along with her work in the National Association of Colored Women’s Clubs\, was integral to a powerful labor movement. Dr. Phillips-Cunningham’s book is the first time that Nannie Burroughs’ story has been told\, and it definitively establishes her as one of America’s most influential labor leaders of the twentieth century. \nFor our September Women in AI Ethics™ AI Ethics Reading Circle\, we will discuss this insightful and timely book at the intersection of race\, gender\, and labor\, and the crucial lessons from history for tech labor organizers and racial justice activists. \n\nJoin the discussion on September 26\, 2025 at 11a ET: https://us02web.zoom.us/webinar/register/8017397968639/WN_SwUgaJBpROCJ6BVOn1M0DQ \nBuy the book: Nannie Helen Burroughs: A Tower of Strength in the Labor World
URL:https://womeninaiethics.org/event/reading-circle-nannie-helen-burroughs-a-tower-of-strength-in-the-labor-world/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/png:https://womeninaiethics.org/wp-content/uploads/2025/04/IMG_8307.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20251030T110000
DTEND;TZID=America/Halifax:20251030T120000
DTSTAMP:20260409T184043Z
CREATED:20250126T042625Z
LAST-MODIFIED:20251027T090409Z
UID:3052-1761822000-1761825600@womeninaiethics.org
SUMMARY:Webinar: AI & Digital Safety: Preventing Generative AI Data Leakages
DESCRIPTION:AI safety is an interdisciplinary field focused on the prevention of AI-driven harms. However\, many of these initiatives focus on future existential risks\, while current AI and digital threats are growing exponentially and need to be addressed urgently. These include emotional manipulation\, deepfake impersonation\, fraud\, spam\, data leakage\, and deceptive behaviors. \nA study analyzing tens of thousands of prompts for ChatGPT\, Copilot\, Gemini\, Claude\, and Perplexity found that 8.5% of employee prompts into Generative AI include sensitive data. Customer data\, including billing and authentication data\, accounted for 45.77% of sensitive prompts\, while employee data\, including payroll and personally identifiable information (PII)\, accounted for 27%. Access keys and proprietary source code accounted for 5.6%. Generative AI data leakage continues to challenge organizations trying to secure their systems. \nFor our monthly AI Expert webinar on Thursday\, October 30 at 11a ET\, we have invited Alexandra (Alex) Robinson and Annette Tamakloe to discuss preventing sensitive data leakages in GenAI systems. Both have worked closely together on testing and evaluating GenAI systems\, with Alex serving as a governance strategist and team leader\, and Annette as a Responsible GenAI expert who bridges governance solutions into code. \nRegistration is free but required.\nhttps://us02web.zoom.us/webinar/register/6617048052185/WN_iDbKXgdSSh-AbrDiZFduZw \n \nSPEAKERS: \n\nHost: Mia Shah-Dand is the CEO of Lighthouse3\, a technology advisory firm with operations in Asia\, North America\, and Europe. Mia is the creator of the “100 Brilliant Women in AI Ethics™” annual list and founder of Women in AI Ethics™\, a crucial platform for inclusion and ethics in the AI sector. In 2025\, Mia also launched WAIE+\, an AI expert community and media network by and for 99% of humanity. 
She is part of UNESCO’s AI Ethics Experts Without Borders (AIEB) network and Women 4 Ethical AI platform. \nSource for data leakage statistics: https://www.harmonic.security/resources/from-payrolls-to-patents-the-spectrum-of-data-leaked-into-genai \n \n\nSpeaker: Alexandra Robinson is an internationally recognized expert in responsible AI\, governance\, and cybersecurity with more than 15 years of experience advising executives and boards across federal\, nonprofit\, and private sectors worldwide. Named one of the 100 Brilliant Women in AI Ethics (2024)\, she has led enterprise AI strategies in high-stakes contexts—from building forensic counter-trafficking systems in Nepal and advancing Ebola recovery in West Africa to directing AI governance and security at the U.S. Department of Homeland Security. Alexandra has developed impact strategies and global AI frameworks\, risk assessments\, and curricula adopted across 40+ countries and advised organizations like GitHub\, Mercy Corps\, and Omidyar Group philanthropies. A sought-after speaker and published thought leader\, she brings a unique perspective at the intersection of technology\, ethics\, and leadership—helping organizations innovate responsibly while safeguarding security\, trust\, and human rights. \nLinkedIn: https://www.linkedin.com/in/alexandra-l-robinson \n \n\nSpeaker: Annette Tamakloe is a Data Scientist who creates secure\, governable AI systems for federal agencies and enterprises. For the past six years\, she’s worked across DHS\, the State Department\, DoD\, and the Department of Labor\, tackling everything from real-time evacuation dashboards during international crises to AI governance frameworks that align with NIST standards and federal security protocols. Annette’s current work focuses on retrieval-augmented generation (RAG) architectures\, using tools within cloud platforms such as AWS and Azure to build agentic pipelines as well as testing and evaluation frameworks. 
She believes AI should be transparent and explainable—not a black box. She thinks about how decisions get made\, whether users can trace where information comes from\, and if systems can be audited when needed. \nLinkedIn: https://www.linkedin.com/in/annette-tamakloe
URL:https://womeninaiethics.org/event/webinar-ai-digital-safety-preventing-generative-ai-data-leakages/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/01/WAIE-October-Monthly-AI-Experts-webinar.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251031T110000
DTEND;TZID=America/New_York:20251031T120000
DTSTAMP:20260409T184043Z
CREATED:20250427T073507Z
LAST-MODIFIED:20250806T074137Z
UID:3526-1761908400-1761912000@womeninaiethics.org
SUMMARY:Reading Circle: AI + Privacy - Obfuscation | Helen Nissenbaum
DESCRIPTION:BOOK TITLE: Obfuscation: A User’s Guide for Privacy and Protest \nAUTHOR: Prof. Helen Nissenbaum \nDESCRIPTION: (Source: Bookshop.org) \n \nWith Obfuscation\, Finn Brunton and Helen Nissenbaum mean to start a revolution. They are calling us not to the barricades but to our computers\, offering us ways to fight today’s pervasive digital surveillance—the collection of our data by governments\, corporations\, advertisers\, and hackers. To the toolkit of privacy-protecting techniques and projects\, they propose adding obfuscation: the deliberate use of ambiguous\, confusing\, or misleading information to interfere with surveillance and data collection projects. \nBrunton and Nissenbaum provide tools and a rationale for evasion\, noncompliance\, refusal\, even sabotage—especially for average users\, those of us not in a position to opt out or exert control over data about ourselves. Obfuscation will teach users to push back\, software developers to keep their user data safe\, and policy makers to gather data without misusing it. \nBrunton and Nissenbaum present a guide to the forms and formats that obfuscation has taken and explain how to craft its implementation to suit the goal and the adversary. They describe a series of historical and contemporary examples\, including radar chaff deployed by World War II pilots\, Twitter bots that hobbled the social media strategy of popular protest movements\, and software that can camouflage users’ search queries and stymie online advertising. They go on to consider obfuscation in more general terms\, discussing why obfuscation is necessary\, whether it is justified\, how it works\, and how it can be integrated with other privacy practices and technologies. \n \nAUTHOR BIO \n \nHelen Nissenbaum is the Andrew H. and Ann R. Tisch Professor at Cornell Tech and in the Information Science Department at Cornell University. 
She is also Director of the Digital Life Initiative\, which was launched in 2017 at Cornell Tech to explore societal perspectives surrounding the development and application of digital technology\, focusing on ethics\, policy\, politics\, and quality of life. Her own research takes an ethical perspective on policy\, law\, science\, and engineering relating to information technology\, computing\, digital media and data science. Topics have included privacy\, trust\, accountability\, security\, and values in technology design.  \nHer books include Obfuscation: A User’s Guide for Privacy and Protest\, with Finn Brunton (MIT Press\, 2015) and Privacy in Context: Technology\, Policy\, and the Integrity of Social Life (Stanford\, 2010). \nRELATED LINKS: \n\nAuthor website\nWhy Data Privacy Based on Consent Is Impossible\nPrivacy in Context | Stanford University Press\nPrivacy as Contextual Integrity\nObfuscation: A User’s Guide for Privacy and Protest | MIT Press eBooks | IEEE Xplore\nFinn Brunton and Helen Nissenbaum: Obfuscation: a user’s guide for privacy and protest
URL:https://womeninaiethics.org/event/reading-circle-ai-privacy-obfuscation-helen-nissembaum/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/04/Obfuscation_-A-Users-Guide-for-Privacy-and-Protest.jpg
END:VEVENT
END:VCALENDAR