BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Women in AI Ethics™ - ECPv6.8.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://womeninaiethics.org
X-WR-CALDESC:Events for Women in AI Ethics™
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Halifax
BEGIN:DAYLIGHT
TZOFFSETFROM:-0400
TZOFFSETTO:-0300
TZNAME:ADT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0300
TZOFFSETTO:-0400
TZNAME:AST
DTSTART:20251102T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20251030T110000
DTEND;TZID=America/Halifax:20251030T120000
DTSTAMP:20260409T135932Z
CREATED:20250126T042625Z
LAST-MODIFIED:20251027T090409Z
UID:3052-1761822000-1761825600@womeninaiethics.org
SUMMARY:Webinar: AI & Digital Safety: Preventing Generative AI Data Leakages
DESCRIPTION:AI safety is an interdisciplinary field focused on the prevention of AI-driven harms. However\, many of these initiatives focus on future existential risks\, while current AI and digital threats are growing exponentially and need to be addressed urgently. These include emotional manipulation\, deepfake impersonation\, fraud\, spam\, data leakage\, and deceptive behaviors. \nA study analyzing tens of thousands of prompts submitted to ChatGPT\, Copilot\, Gemini\, Claude\, and Perplexity found that 8.5% of employee prompts to Generative AI tools included sensitive data. Customer data\, including billing and authentication data\, made up 45.77% of that sensitive data\, while employee data\, including payroll and personally identifiable information (PII)\, accounted for 27% of sensitive prompts. Access keys and proprietary source code accounted for 5.6%. Generative AI data leakage continues to challenge organizations trying to secure their systems. \nFor our monthly AI Expert webinar on Thursday\, October 30 at 11 a.m. ET\, we have invited Alexandra (Alex) Robinson and Annette Tamakloe to discuss preventing sensitive data leakage in GenAI systems. Both have worked closely together on testing and evaluating GenAI systems\, with Alex serving as a governance strategist and team leader\, and Annette as a Responsible GenAI expert who bridges governance solutions into code. \nRegistration is free but required.\nhttps://us02web.zoom.us/webinar/register/6617048052185/WN_iDbKXgdSSh-AbrDiZFduZw \n  \nSPEAKERS: \n\nHost: Mia Shah-Dand is the CEO of Lighthouse3\, a technology advisory firm with operations in Asia\, North America\, and Europe. Mia is the creator of the “100 Brilliant Women in AI Ethics™” annual list and founder of Women in AI Ethics™\, a crucial platform for inclusion and ethics in the AI sector. In 2025\, Mia also launched WAIE+\, an AI expert community and media network by and for 99% of humanity. 
She is part of UNESCO’s AI Ethics Experts Without Borders (AIEB) network and the Women 4 Ethical AI platform. \nSource: https://www.harmonic.security/resources/from-payrolls-to-patents-the-spectrum-of-data-leaked-into-genai \n  \n\nSpeaker: Alexandra Robinson is an internationally recognized expert in responsible AI\, governance\, and cybersecurity with more than 15 years of experience advising executives and boards across federal\, nonprofit\, and private sectors worldwide. Named one of the 100 Brilliant Women in AI Ethics (2024)\, she has led enterprise AI strategies in high-stakes contexts—from building forensic counter-trafficking systems in Nepal and advancing Ebola recovery in West Africa to directing AI governance and security at the U.S. Department of Homeland Security. Alexandra has developed impact strategies\, global AI frameworks\, risk assessments\, and curricula adopted across 40+ countries and advised organizations like GitHub\, Mercy Corps\, and Omidyar Group philanthropies. A sought-after speaker and published thought leader\, she brings a unique perspective at the intersection of technology\, ethics\, and leadership—helping organizations innovate responsibly while safeguarding security\, trust\, and human rights. \nLinkedIn: https://www.linkedin.com/in/alexandra-l-robinson \n  \n\nSpeaker: Annette Tamakloe is a Data Scientist who creates secure\, governable AI systems for federal agencies and enterprises. For the past six years\, she’s worked across DHS\, the State Department\, DoD\, and the Department of Labor—tackling everything from real-time evacuation dashboards during international crises to AI governance frameworks that align with NIST standards and federal security protocols. Annette’s current work focuses on retrieval-augmented generation (RAG) architectures\, using tools within cloud platforms such as AWS and Azure to build agentic pipelines as well as testing and evaluation frameworks. 
She believes AI should be transparent and explainable—not a black box. She thinks about how decisions get made\, whether users can trace where information comes from\, and whether systems can be audited when needed. \nLinkedIn: https://www.linkedin.com/in/annette-tamakloe
URL:https://womeninaiethics.org/event/webinar-ai-digital-safety-preventing-generative-ai-data-leakages/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/01/WAIE-October-Monthly-AI-Experts-webinar.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251031T110000
DTEND;TZID=America/New_York:20251031T120000
DTSTAMP:20260409T135932Z
CREATED:20250427T073507Z
LAST-MODIFIED:20250806T074137Z
UID:3526-1761908400-1761912000@womeninaiethics.org
SUMMARY:Reading Circle: AI + Privacy - Obfuscation | Helen Nissenbaum
DESCRIPTION:BOOK TITLE: Obfuscation: A User’s Guide for Privacy and Protest \nAUTHOR: Prof. Helen Nissenbaum \nDESCRIPTION: (Source: Bookshop.org) \nWith Obfuscation\, Finn Brunton and Helen Nissenbaum mean to start a revolution. They are calling us not to the barricades but to our computers\, offering us ways to fight today’s pervasive digital surveillance—the collection of our data by governments\, corporations\, advertisers\, and hackers. To the toolkit of privacy protecting techniques and projects\, they propose adding obfuscation: the deliberate use of ambiguous\, confusing\, or misleading information to interfere with surveillance and data collection projects. \nBrunton and Nissenbaum provide tools and a rationale for evasion\, noncompliance\, refusal\, even sabotage—especially for average users\, those of us not in a position to opt out or exert control over data about ourselves. Obfuscation will teach users to push back\, software developers to keep their user data safe\, and policy makers to gather data without misusing it. \nBrunton and Nissenbaum present a guide to the forms and formats that obfuscation has taken and explain how to craft its implementation to suit the goal and the adversary. They describe a series of historical and contemporary examples\, including radar chaff deployed by World War II pilots\, Twitter bots that hobbled the social media strategy of popular protest movements\, and software that can camouflage users’ search queries and stymie online advertising. They go on to consider obfuscation in more general terms\, discussing why obfuscation is necessary\, whether it is justified\, how it works\, and how it can be integrated with other privacy practices and technologies. \nAUTHOR BIO: \nHelen Nissenbaum is the Andrew H. and Ann R. Tisch Professor at Cornell Tech and in the Information Science Department at Cornell University. 
She is also Director of the Digital Life Initiative\, which was launched in 2017 at Cornell Tech to explore societal perspectives surrounding the development and application of digital technology\, focusing on ethics\, policy\, politics\, and quality of life. Her own research takes an ethical perspective on policy\, law\, science\, and engineering relating to information technology\, computing\, digital media and data science. Topics have included privacy\, trust\, accountability\, security\, and values in technology design.  \nHer books include Obfuscation: A User’s Guide for Privacy and Protest\, with Finn Brunton (MIT Press\, 2015) and Privacy in Context: Technology\, Policy\, and the Integrity of Social Life (Stanford\, 2010). \nRELATED LINKS: \n\nAuthor website\nWhy Data Privacy Based on Consent Is Impossible\nPrivacy in Context | Stanford University Press\nPrivacy as Contextual Integrity\nObfuscation: A User’s Guide for Privacy and Protest | MIT Press eBooks | IEEE Xplore\nFinn Brunton and Helen Nissenbaum: Obfuscation: a user’s guide for privacy and protest
URL:https://womeninaiethics.org/event/reading-circle-ai-privacy-obfuscation-helen-nissembaum/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/04/Obfuscation_-A-Users-Guide-for-Privacy-and-Protest.jpg
END:VEVENT
END:VCALENDAR