BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Women in AI Ethics™ - ECPv6.8.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Women in AI Ethics™
X-ORIGINAL-URL:https://womeninaiethics.org
X-WR-CALDESC:Events for Women in AI Ethics™
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Halifax
BEGIN:DAYLIGHT
TZOFFSETFROM:-0400
TZOFFSETTO:-0300
TZNAME:ADT
DTSTART:20250309T060000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0300
TZOFFSETTO:-0400
TZNAME:AST
DTSTART:20251102T050000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20251030T110000
DTEND;TZID=America/Halifax:20251030T120000
DTSTAMP:20260409T135730Z
CREATED:20250126T042625Z
LAST-MODIFIED:20251027T090409Z
UID:3052-1761822000-1761825600@womeninaiethics.org
SUMMARY:Webinar: AI & Digital Safety: Preventing Generative AI Data Leakages
DESCRIPTION:AI safety is an interdisciplinary field focused on preventing AI-driven harms. However\, many of these initiatives focus on future existential risks\, while current AI and digital threats are growing exponentially and need to be addressed urgently. These include emotional manipulation\, deepfake impersonation\, fraud\, spam\, data leakage\, and deceptive behaviors. \nA study analyzing tens of thousands of prompts to ChatGPT\, Copilot\, Gemini\, Claude\, and Perplexity found that 8.5% of employee prompts into generative AI tools included sensitive data. Customer data\, including billing and authentication data\, made up 45.77% of that sensitive data\, while employee data\, including payroll and personally identifiable information (PII)\, accounted for 27% of sensitive prompts. Access keys and proprietary source code accounted for 5.6%. \nSource: https://www.harmonic.security/resources/from-payrolls-to-patents-the-spectrum-of-data-leaked-into-genai \nGenerative AI data leakage continues to challenge organizations trying to secure their systems. \nOn Thursday\, October 30\, at 11 a.m. ET\, for our monthly AI Expert webinar\, we have invited Alexandra (Alex) Robinson and Annette Tamakloe to discuss preventing sensitive data leakages in GenAI systems. Both have worked closely together on testing and evaluating GenAI systems\, with Alex serving as a governance strategist and team leader and Annette as a Responsible GenAI expert who bridges governance solutions into code. \nRegistration is free but required.\nhttps://us02web.zoom.us/webinar/register/6617048052185/WN_iDbKXgdSSh-AbrDiZFduZw \n  \nSPEAKERS: \n\nHost: Mia Shah-Dand is the CEO of Lighthouse3\, a technology advisory firm with operations in Asia\, North America\, and Europe. Mia is the creator of the “100 Brilliant Women in AI Ethics™” annual list and founder of Women in AI Ethics™\, a crucial platform for inclusion and ethics in the AI sector. In 2025\, Mia also launched WAIE+\, an AI expert community and media network by and for 99% of humanity. 
She is part of UNESCO’s AI Ethics Experts Without Borders (AIEB) network and the Women 4 Ethical AI platform. \n  \n\nSpeaker: Alexandra Robinson is an internationally recognized expert in responsible AI\, governance\, and cybersecurity with more than 15 years of experience advising executives and boards across federal\, nonprofit\, and private sectors worldwide. Named one of the 100 Brilliant Women in AI Ethics (2024)\, she has led enterprise AI strategies in high-stakes contexts—from building forensic counter-trafficking systems in Nepal and advancing Ebola recovery in West Africa to directing AI governance and security at the U.S. Department of Homeland Security. Alexandra has developed impact strategies and global AI frameworks\, risk assessments\, and curricula adopted across 40+ countries and advised organizations such as GitHub\, Mercy Corps\, and Omidyar Group philanthropies. A sought-after speaker and published thought leader\, she brings a unique perspective at the intersection of technology\, ethics\, and leadership—helping organizations innovate responsibly while safeguarding security\, trust\, and human rights. \nLinkedIn: https://www.linkedin.com/in/alexandra-l-robinson \n  \n\nSpeaker: Annette Tamakloe is a Data Scientist who creates secure\, governable AI systems for federal agencies and enterprises. For the past six years\, she has worked across DHS\, the State Department\, DoD\, and the Department of Labor—tackling everything from real-time evacuation dashboards during international crises to AI governance frameworks that align with NIST standards and federal security protocols. Annette’s current work focuses on retrieval-augmented generation (RAG) architectures\, using tools within cloud platforms such as AWS and Azure to build agentic pipelines as well as testing and evaluation frameworks. 
She believes AI should be transparent and explainable—not a black box. She thinks about how decisions get made\, whether users can trace where information comes from\, and whether systems can be audited when needed. \nLinkedIn: https://www.linkedin.com/in/annette-tamakloe
URL:https://womeninaiethics.org/event/webinar-ai-digital-safety-preventing-generative-ai-data-leakages/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/01/WAIE-October-Monthly-AI-Experts-webinar.jpg
END:VEVENT
END:VCALENDAR