BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Women in AI Ethics™ - ECPv6.8.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Women in AI Ethics™
X-ORIGINAL-URL:https://womeninaiethics.org
X-WR-CALDESC:Events for Women in AI Ethics™
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Halifax
BEGIN:DAYLIGHT
TZOFFSETFROM:-0400
TZOFFSETTO:-0300
TZNAME:ADT
DTSTART:20250309T060000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0300
TZOFFSETTO:-0400
TZNAME:AST
DTSTART:20251102T050000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250925T110000
DTEND;TZID=America/New_York:20250925T120000
DTSTAMP:20260409T171616Z
CREATED:20250126T042403Z
LAST-MODIFIED:20250918T105240Z
UID:3050-1758798000-1758801600@womeninaiethics.org
SUMMARY:Webinar: From Talk to Practice: Operationalizing AI Ethics in Tech
DESCRIPTION:For all the talk and media buzz around it\, incorporating AI ethics into technology development continues to be challenging. Part of the difficulty lies in reconciling ethical principles with company culture. Another is aligning ethical AI practices with business processes and objectives. In our September AI Expert webinar\, we will go beyond rhetoric to dive into the practical challenges of implementing AI ethics principles in tech creation and how to address them. \nFor this important and timely discussion\, we have invited Dr. Stacy Hobson\, Director\, Responsible Tech Research\, IBM Research. Dr. Hobson will share some of the work done by her team\, including research to uncover the ongoing challenges of practically embedding and operationalizing ethics\, especially in large organizations. We will also discuss the practical methods and tools that her team has developed to support responsible technology creation. \nJoin us on Thursday\, September 25 at 11a ET for this much-needed discussion on how to start redefining AI practices and culture to foster ethical technology development. \n \nSPEAKER PROFILE: \nDr. Stacy Hobson is Director of the Responsible Technologies Research group at IBM Research. Her group’s research focuses on understanding the societal impacts of technology and promoting tech practices that minimize harms\, biases\, and other negative outcomes. Her team also develops practical methods and tools to support responsible technology creation. Her prior research has spanned multiple areas\, including addressing social inequities through technology\, AI transparency\, data sharing platforms for governmental crisis management\, and risk management techniques for financial services. \nStacy has authored more than 20 peer-reviewed publications and holds 16 US patents. 
Stacy earned a Bachelor of Science degree in Computer Science from South Carolina State University\, a Master of Science degree in Computer Science from Duke University and a PhD in Neuroscience and Cognitive Science from the University of Maryland at College Park. Stacy is fortunate to have the opportunity to use her expertise as a scientist in an area that she is very passionate about – addressing problems that matter for the world. \n  \nLINKS & RESOURCES\n\nResearch at IBM – People\nA Responsible and Inclusive Technology Framework for Attending to Business-to-Business Contexts\nTowards Labor Transparency in Situated Computational Systems Impact Research\nHistorical Methods for AI Evaluations\, Assessments and Audits\nRethinking AI Safety: Provocations from the History of Community-based Safety Practices\nCan LLMs Recommend More Responsible Prompts?\nSPRI: Aligning Large Language Models with Context-Situated Principles\nUnlocking AI Opportunities with the Responsible Generation and Use of Synthetic Data
URL:https://womeninaiethics.org/event/webinar-from-talk-to-practice-operationalizing-ai-ethics-in-tech/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Virtual.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250926T110000
DTEND;TZID=America/New_York:20250926T120000
DTSTAMP:20260409T171616Z
CREATED:20250427T073428Z
LAST-MODIFIED:20250925T085502Z
UID:3524-1758884400-1758888000@womeninaiethics.org
SUMMARY:Reading Circle: Nannie Helen Burroughs: A Tower of Strength in the Labor World
DESCRIPTION:BOOK TITLE: Nannie Helen Burroughs: A Tower of Strength in the Labor World \nAUTHOR: Danielle Phillips-Cunningham \nOVERVIEW: \nBlack girls and women were at the forefront of the labor movements of the twentieth century. However\, many were relegated to the footnotes of history\, and their contributions were minimized or erased. \nIn her recent book\, Dr. Danielle Phillips-Cunningham\, associate professor in the School of Management and Labor Relations at Rutgers University\, shares the story of Nannie Helen Burroughs\, whose leadership as an educator and civil rights leader was revolutionary in transforming the economic landscape for Black girls and women. \nNannie Burroughs established the National Training School for Women and Girls (NTS) in Washington\, DC\, which\, along with her work in the National Association of Colored Women’s Clubs\, was integral to\, and resulted in\, a powerful labor movement. Dr. Phillips-Cunningham’s book is the first time that Nannie Burroughs’ story has been told\, and it definitively establishes her as one of America’s most influential labor leaders of the twentieth century. \nFor our September Women in AI Ethics™ Reading Circle\, we will discuss this insightful and timely book at the intersection of race\, gender\, and labor\, and the crucial lessons from history for tech labor organizers and racial justice activists. \nJoin the discussion on September 26\, 2025 at 11a ET: https://us02web.zoom.us/webinar/register/8017397968639/WN_SwUgaJBpROCJ6BVOn1M0DQ \nBuy the book: Nannie Helen Burroughs: A Tower of Strength in the Labor World
URL:https://womeninaiethics.org/event/reading-circle-nannie-helen-burroughs-a-tower-of-strength-in-the-labor-world/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/png:https://womeninaiethics.org/wp-content/uploads/2025/04/IMG_8307.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251030T110000
DTEND;TZID=America/New_York:20251030T120000
DTSTAMP:20260409T171616Z
CREATED:20250126T042625Z
LAST-MODIFIED:20251027T090409Z
UID:3052-1761822000-1761825600@womeninaiethics.org
SUMMARY:Webinar: AI & Digital Safety: Preventing Generative AI Data Leakages
DESCRIPTION:AI safety is an interdisciplinary field focused on the prevention of AI-driven harms. However\, many of these initiatives focus on future existential risks\, while current AI and digital threats are growing exponentially and need to be addressed urgently. These include emotional manipulation\, deepfake impersonation\, fraud\, spam\, data leakage\, and deceptive behaviors. \nA study analyzing tens of thousands of prompts for ChatGPT\, Copilot\, Gemini\, Claude\, and Perplexity found that 8.5% of employee prompts into generative AI include sensitive data. 45.77% of that sensitive data was customer data\, including billing and authentication data\, while employee data\, including payroll and personally identifiable information (PII)\, accounted for 27% of sensitive prompts. Access keys and proprietary source code accounted for 5.6%. Generative AI data leakage continues to challenge organizations trying to secure their systems. \nFor our monthly AI Expert webinar on Thursday\, October 30 at 11a ET\, we have invited Alexandra (Alex) Robinson and Annette Tamakloe to discuss preventing sensitive data leakages in GenAI systems. The two have worked closely together on testing and evaluating GenAI systems\, with Alex serving as a governance strategist and team leader\, and Annette as a responsible GenAI expert who bridges governance solutions into code. \nRegistration is free but required.\nhttps://us02web.zoom.us/webinar/register/6617048052185/WN_iDbKXgdSSh-AbrDiZFduZw \n \nSPEAKERS: \nHost: Mia Shah-Dand is the CEO of Lighthouse3\, a technology advisory firm with operations in Asia\, North America\, and Europe. Mia is the creator of the “100 Brilliant Women in AI Ethics™” annual list and founder of Women in AI Ethics™\, a crucial platform for inclusion and ethics in the AI sector. In 2025\, Mia also launched WAIE+\, an AI expert community and media network by and for 99% of humanity. 
She is part of UNESCO’s AI Ethics Experts Without Borders (AIEB) network and the Women 4 Ethical AI platform. \nStudy cited: https://www.harmonic.security/resources/from-payrolls-to-patents-the-spectrum-of-data-leaked-into-genai \n \nSpeaker: Alexandra Robinson is an internationally recognized expert in responsible AI\, governance\, and cybersecurity with more than 15 years of experience advising executives and boards across federal\, nonprofit\, and private sectors worldwide. Named one of the 100 Brilliant Women in AI Ethics (2024)\, she has led enterprise AI strategies in high-stakes contexts—from building forensic counter-trafficking systems in Nepal and advancing Ebola recovery in West Africa to directing AI governance and security at the U.S. Department of Homeland Security. Alexandra has developed impact strategies and global AI frameworks\, risk assessments\, and curricula adopted across 40+ countries\, and has advised organizations like GitHub\, Mercy Corps\, and Omidyar Group philanthropies. A sought-after speaker and published thought leader\, she brings a unique perspective at the intersection of technology\, ethics\, and leadership—helping organizations innovate responsibly while safeguarding security\, trust\, and human rights. \nLinkedIn: https://www.linkedin.com/in/alexandra-l-robinson \n \nSpeaker: Annette Tamakloe is a data scientist who creates secure\, governable AI systems for federal agencies and enterprises. For the past six years\, she’s worked across DHS\, the State Department\, DoD\, and the Department of Labor—tackling everything from real-time evacuation dashboards during international crises to AI governance frameworks that align with NIST standards and federal security protocols. Annette’s current work focuses on retrieval-augmented generation (RAG) architectures\, utilizing tools within cloud platforms such as AWS and Azure to build agentic pipelines\, as well as testing and evaluation frameworks. 
She believes AI should be transparent and explainable—not a black box. She thinks about how decisions get made\, whether users can trace where information comes from\, and if systems can be audited when needed. \nLinkedIn: https://www.linkedin.com/in/annette-tamakloe
URL:https://womeninaiethics.org/event/webinar-ai-digital-safety-preventing-generative-ai-data-leakages/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/01/WAIE-October-Monthly-AI-Experts-webinar.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251031T110000
DTEND;TZID=America/New_York:20251031T120000
DTSTAMP:20260409T171616Z
CREATED:20250427T073507Z
LAST-MODIFIED:20250806T074137Z
UID:3526-1761908400-1761912000@womeninaiethics.org
SUMMARY:Reading Circle: AI + Privacy - Obfuscation | Helen Nissenbaum
DESCRIPTION:BOOK TITLE: Obfuscation: A User’s Guide for Privacy and Protest \nAUTHOR: Prof. Helen Nissenbaum \nOVERVIEW: (Source: Bookshop.org) \nWith Obfuscation\, Finn Brunton and Helen Nissenbaum mean to start a revolution. They are calling us not to the barricades but to our computers\, offering us ways to fight today’s pervasive digital surveillance—the collection of our data by governments\, corporations\, advertisers\, and hackers. To the toolkit of privacy-protecting techniques and projects\, they propose adding obfuscation: the deliberate use of ambiguous\, confusing\, or misleading information to interfere with surveillance and data collection projects. \nBrunton and Nissenbaum provide tools and a rationale for evasion\, noncompliance\, refusal\, even sabotage—especially for average users\, those of us not in a position to opt out or exert control over data about ourselves. Obfuscation will teach users to push back\, software developers to keep their user data safe\, and policy makers to gather data without misusing it. \nBrunton and Nissenbaum present a guide to the forms and formats that obfuscation has taken and explain how to craft its implementation to suit the goal and the adversary. They describe a series of historical and contemporary examples\, including radar chaff deployed by World War II pilots\, Twitter bots that hobbled the social media strategy of popular protest movements\, and software that can camouflage users’ search queries and stymie online advertising. They go on to consider obfuscation in more general terms\, discussing why obfuscation is necessary\, whether it is justified\, how it works\, and how it can be integrated with other privacy practices and technologies. \nAUTHOR BIO: \nHelen Nissenbaum is the Andrew H. and Ann R. Tisch Professor at Cornell Tech and in the Information Science Department at Cornell University. 
She is also Director of the Digital Life Initiative\, which was launched in 2017 at Cornell Tech to explore societal perspectives surrounding the development and application of digital technology\, focusing on ethics\, policy\, politics\, and quality of life. Her own research takes an ethical perspective on policy\, law\, science\, and engineering relating to information technology\, computing\, digital media and data science. Topics have included privacy\, trust\, accountability\, security\, and values in technology design.  \nHer books include Obfuscation: A User’s Guide for Privacy and Protest\, with Finn Brunton (MIT Press\, 2015) and Privacy in Context: Technology\, Policy\, and the Integrity of Social Life (Stanford\, 2010). \nRELATED LINKS: \n\nAuthor website\nWhy Data Privacy Based on Consent Is Impossible\nPrivacy in Context | Stanford University Press\nPrivacy as Contextual Integrity\nObfuscation: A User’s Guide for Privacy and Protest | MIT Press eBooks | IEEE Xplore\nFinn Brunton and Helen Nissenbaum: Obfuscation: a user’s guide for privacy and protest
URL:https://womeninaiethics.org/event/reading-circle-ai-privacy-obfuscation-helen-nissembaum/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/04/Obfuscation_-A-Users-Guide-for-Privacy-and-Protest.jpg
END:VEVENT
END:VCALENDAR