BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Women in AI Ethics™ - ECPv6.8.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://womeninaiethics.org
X-WR-CALDESC:Events for Women in AI Ethics™
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/Halifax
BEGIN:DAYLIGHT
TZOFFSETFROM:-0400
TZOFFSETTO:-0300
TZNAME:ADT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0300
TZOFFSETTO:-0400
TZNAME:AST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0400
TZOFFSETTO:-0300
TZNAME:ADT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0300
TZOFFSETTO:-0400
TZNAME:AST
DTSTART:20251102T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231201T080000
DTEND;TZID=America/New_York:20231201T170000
DTSTAMP:20260403T152136Z
CREATED:20240109T063439Z
LAST-MODIFIED:20240125T075127Z
UID:2283-1701417600-1701450000@womeninaiethics.org
SUMMARY:Women in AI Ethics™ Annual Summit – 2023
DESCRIPTION:
URL:https://womeninaiethics.org/event/women-in-ai-ethics-annual-summit-2023/
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2023/11/Collage-Speaker-Post_LI-09-1-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20240125T110000
DTEND;TZID=America/Halifax:20240125T120000
DTSTAMP:20260403T152136Z
CREATED:20240107T074404Z
LAST-MODIFIED:20240314T060735Z
UID:2263-1706180400-1706184000@womeninaiethics.org
SUMMARY:WAIE Monthly AI Expert Webinar Series
DESCRIPTION:Join Women in AI Ethics™ (WAIE) on January 25th\, as we kick off our AI expert webinar series for 2024 with Merve Hickok\, the globally renowned expert on AI policy\, ethics\, and governance. We will hear Merve’s insights on the latest developments in AI policy and global regulations\, discuss her upcoming book on turning trustworthy AI principles into public procurement practices\, and get her predictions for the AI regulatory space in 2024 and beyond.
URL:https://womeninaiethics.org/event/waie-monthly-ai-expert-webinar-series-4/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Web_MonthlyWebinar_Merve-1-scaled-e1705812151860.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240131T173000
DTEND;TZID=America/New_York:20240131T193000
DTSTAMP:20260403T152136Z
CREATED:20240107T072055Z
LAST-MODIFIED:20240120T031835Z
UID:2251-1706722200-1706729400@womeninaiethics.org
SUMMARY:WAIE Monthly Networking Happy Hour
DESCRIPTION:Join Women in AI Ethics on January 31st for our first monthly networking happy hour of 2024. On the last Wednesday of each month\, we bring together our community of diverse AI Ethics experts and rising stars to unwind\, meet\, and get inspired by other brilliant peers in this critical space. \nThe venue will alternate between Manhattan and Brooklyn. The location will be shared with all registered attendees via email.
URL:https://womeninaiethics.org/event/member-networking-happy-hour/
LOCATION:New York
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Happy-Hour.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20240229T110000
DTEND;TZID=America/Halifax:20240229T120000
DTSTAMP:20260403T152136Z
CREATED:20240109T073806Z
LAST-MODIFIED:20240314T060719Z
UID:2307-1709204400-1709208000@womeninaiethics.org
SUMMARY:Women in AI Ethics™ Monthly AI Expert Webinar
DESCRIPTION:Speaker: Mar Hicks \nDescription: On the eve of Women’s History Month\, we have invited Data Science Professor\, Author\, and Historian Mar Hicks to discuss their book ‘Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing.’ We will also discuss their co-edited volume\, ‘Your Computer is on Fire\,’ which presents a variety of case studies from leading scholars of technology and society to help connect the history of technology to our current\, pressing problems with high tech. Join us on February 29 as the leading historian of our times shares urgent lessons from Britain’s missteps for other nations\, especially the United States as it seeks to retain its leadership position in AI amidst a growing diversity crisis.
URL:https://womeninaiethics.org/event/women-in-ai-ethics-monthly-ai-expert-webinar/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Monthly-Webinar-Series_Mar-Hicks_D4_LinkedIn-02-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240307T190000
DTEND;TZID=America/New_York:20240307T200000
DTSTAMP:20260403T152136Z
CREATED:20240225T040918Z
LAST-MODIFIED:20240226T171711Z
UID:2542-1709838000-1709841600@womeninaiethics.org
SUMMARY:Building AI for the Greater Good (hosted by Grace Farms)
DESCRIPTION:Event: Building AI for the Greater Good (hosted by Grace Farms) \nSpeaker/s: Moderator Karen Kariuki\, Mia Shah-Dand\, Liz Grennan\, Stephanie Dinkins. \nDescription: Hear from leaders exploring the vast field of AI as a tool for fostering positive outcomes and advancing equity. Moderator Karen Kariuki\, Grace Farms Senior Program Officer\, will be joined onstage by Liz Grennan\, Global Co-Lead for McKinsey’s Digital Trust service line\, Mia Shah-Dand\, Founder of Women in AI Ethics and Lighthouse3\, a consultancy focusing on responsible AI and data governance\, and Stephanie Dinkins\, Kusama Endowed Chair in Art at Stony Brook University and advocate for inclusive AI.
URL:https://womeninaiethics.org/event/building-ai-for-the-greater-good/
LOCATION:Grace Farms\, 365 Lukes Wood Road\, New Canaan\, CT\, 06840\, United States
ATTACH;FMTTYPE=image/png:https://womeninaiethics.org/wp-content/uploads/2024/02/Building-AI-for-the-Greater-Good.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240318T163000
DTEND;TZID=America/New_York:20240318T193000
DTSTAMP:20260403T152136Z
CREATED:20240109T074027Z
LAST-MODIFIED:20240305T084354Z
UID:2315-1710779400-1710790200@womeninaiethics.org
SUMMARY:Beyond the AI Hype
DESCRIPTION:Brief Description: On March 18\, join us at the Brooklyn Public Library to celebrate Women’s History Month with a special AI literary event hosted by Women in AI Ethics™ where leading AI authors and scholars will demystify algorithms\, debunk popular myths about AI\, and share how we can ensure these powerful technologies benefit all of us and not just a select few.
URL:https://womeninaiethics.org/event/beyond-the-ai-hype/
LOCATION:Info Commons Lab\, Central Library\, 10 Grand Army Plaza\, Brooklyn\, NY\, 11238\, United States
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Events_BeyondAIHype_Website-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240326T160000
DTEND;TZID=America/New_York:20240326T180000
DTSTAMP:20260403T152136Z
CREATED:20240215T193541Z
LAST-MODIFIED:20240325T171254Z
UID:2530-1711468800-1711476000@womeninaiethics.org
SUMMARY:Disinformation\, Deepfakes\, and other AI-Generated Threats to Women and Democracy
DESCRIPTION:Brief Description: \nJoin Women in AI Ethics™ on March 26 for Women’s History Month\, as we host a fireside chat with the Honorable Yvette D. Clarke\, Member of Congress\, at the Info Commons Lab\, Central Library. We will discuss Rep. Clarke’s latest bill to require content provenance measures like digital watermarking on AI-generated videos and images and to give victims the ability to seek recourse\, the urgent need to protect women from image abuse\, as well as the importance of AI literacy programs in keeping our communities safe in the AI age.
URL:https://womeninaiethics.org/event/disinformation-deepfakes-and-other-ai-generated-threats-to-women-and-democracy/
LOCATION:Info Commons Lab\, Central Library\, 10 Grand Army Plaza\, Brooklyn\, NY\, 11238\, United States
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/02/EDITED-Events_Disinfo-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20240328T110000
DTEND;TZID=America/Halifax:20240328T120000
DTSTAMP:20260403T152136Z
CREATED:20240109T073837Z
LAST-MODIFIED:20240321T180219Z
UID:2309-1711623600-1711627200@womeninaiethics.org
SUMMARY:WAIE Monthly AI Expert Webinar Series
DESCRIPTION:Speaker: Hilke Schellmann\nTitle: Freelance Reporter/NYU Journalism Professor\nPronouns: She/Her\n\nOrganization: NYU/Freelance Reporter \n\nJoin us on Thursday\, March 28th for a conversation with Hilke Schellmann\, an Emmy award-winning investigative reporter and assistant professor of journalism at New York University. In this not-to-be-missed virtual event\, we will discuss her book\, The Algorithm: How AI Decides Who Gets Hired\, Monitored\, Promoted\, and Fired\, And Why We Need To Fight Back (Hachette). As a contributor to The Wall Street Journal and The Guardian\, Schellmann writes about holding artificial intelligence (AI) accountable\, and in her book she investigates the rise of AI in the world of work. Drawing on exclusive information from whistleblowers\, internal documents\, and real‑world tests\, Schellmann discovers that many of the algorithms making high‑stakes decisions are biased\, racist\, and do more harm than good. \n\nHeadshot: Credit: Jennifer S. Altman\n\nSpeaker profile: https://www.linkedin.com/in/hilkeschellmann\n\nBuy the book: https://www.hilkeschellmann.com
URL:https://womeninaiethics.org/event/waie-monthly-ai-expert-webinar-series-2/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Monthly-Webinar-Series_HilkeS_Web-640x360px-07-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20240425T110000
DTEND;TZID=America/Halifax:20240425T120000
DTSTAMP:20260403T152136Z
CREATED:20240109T073855Z
LAST-MODIFIED:20240314T060758Z
UID:2311-1714042800-1714046400@womeninaiethics.org
SUMMARY:WAIE Monthly AI Expert Webinar Series
DESCRIPTION:Join Women in AI Ethics (WAIE) on the last Thursday of every month for a virtual chat with diverse experts in the responsible and ethical AI space.
URL:https://womeninaiethics.org/event/waie-monthly-ai-expert-webinar-series/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Virtual.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20240725T110000
DTEND;TZID=America/Halifax:20240725T120000
DTSTAMP:20260403T152136Z
CREATED:20240507T054631Z
LAST-MODIFIED:20240507T054631Z
UID:2740-1721905200-1721908800@womeninaiethics.org
SUMMARY:WAIE Monthly AI Expert Webinar Series
DESCRIPTION:Join Women in AI Ethics (WAIE) on the last Thursday of every month for a virtual chat with diverse experts in the responsible and ethical AI space.
URL:https://womeninaiethics.org/event/waie-monthly-ai-expert-webinar-series-6/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Virtual.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20240829T110000
DTEND;TZID=America/Halifax:20240829T120000
DTSTAMP:20260403T152136Z
CREATED:20240507T054643Z
LAST-MODIFIED:20240507T054643Z
UID:2741-1724929200-1724932800@womeninaiethics.org
SUMMARY:WAIE Monthly AI Expert Webinar Series
DESCRIPTION:Join Women in AI Ethics (WAIE) on the last Thursday of every month for a virtual chat with diverse experts in the responsible and ethical AI space.
URL:https://womeninaiethics.org/event/waie-monthly-ai-expert-webinar-series-7/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Virtual.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20240926T110000
DTEND;TZID=America/Halifax:20240926T120000
DTSTAMP:20260403T152136Z
CREATED:20240507T054656Z
LAST-MODIFIED:20240924T040401Z
UID:2742-1727348400-1727352000@womeninaiethics.org
SUMMARY:WAIE Monthly AI Expert Webinar Series
DESCRIPTION:This week\, world leaders convened for the ‘Summit of the Future’ at the United Nations in New York to adopt the Pact for the Future\, including a Global Digital Compact and a Declaration on Future Generations. For our monthly webinar\, Women in AI Ethics™ has invited global experts and AI leaders to share their key insights and takeaways from this summit. \nRegister for this timely and highly relevant online event at:
URL:https://womeninaiethics.org/event/waie-monthly-ai-expert-webinar-series-8/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/05/ethical-AI-space.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241024T160000
DTEND;TZID=America/New_York:20241024T193000
DTSTAMP:20260403T152136Z
CREATED:20240822T032711Z
LAST-MODIFIED:20241022T031702Z
UID:2842-1729785600-1729798200@womeninaiethics.org
SUMMARY:Responsible AI Ecosystem Summit
DESCRIPTION:Women in AI Ethics™ (WAIE) is delighted to welcome you to our inaugural Responsible AI Ecosystem summit. Co-hosted with the Canadian Consulate in New York\, this summit will bring together startup founders\, funders\, researchers\, corporate leaders\, academic scholars\, and other innovators building the responsible and trustworthy AI ecosystem. \nSince 2018\, WAIE has elevated the diverse voices and expertise of underrepresented experts in the critical space of AI ethics. This event reflects our core mission and key role as a catalyst in the global movement towards sustainable and responsible AI. We will showcase innovators across Canada building innovative AI solutions\, highlight new emerging roles in the responsible AI ecosystem\, and foster connections between funders\, founders\, and the next generation of AI talent. \nWAIE believes that a thriving responsible AI ecosystem is the pathway to new opportunities\, economic growth\, prosperity\, and vibrant technological futures that include all of us. \nPROGRAM DETAILS:\nOpening remarks: 4.00PM – 4.20PM (20mins)\nHessie Jones\, Altitude Accelerator\nMia Shah-Dand\, Lighthouse3\, Women in AI Ethics\nConsul General – Tom Clark \nBREAK 4.20PM – 4.25PM (5mins)\nSession 1: TBD 4.25PM – 4.45PM (20mins)\nTBD \nBREAK 4.45PM – 4.50PM (5mins)\nSession 2: AI Safety & Security 4.50PM – 5.10PM (20mins)\nSaima Fancy\, Ontario Health\nAakansha\, Cohere \nBREAK 5.10PM – 5.15PM (5mins)\nSession 3: AI Privacy 5.15PM – 5.35PM (20mins)\nJurgita Miseviciute\, Proton\nPatricia Thaine\, Private AI \nBREAK 5.35PM – 5.40PM (5mins)\nSession 4: VC Chat – Funding Diverse AI Founders 5.40PM – 6.00PM (20mins)\nHessie Jones\, Altitude Accelerator\nGayatri Sarker\, Advaita Capital\nGiselle Melo\, MATR Ventures \nCLOSING REMARKS 6.00PM – 6.05PM (5mins)\nNETWORKING RECEPTION 6.05PM – 7.30PM (85mins)\nEVENT CLOSE 7.30PM \n\nThis event is sold out\, but a waiting list is available.
URL:https://womeninaiethics.org/event/responsible-ai-ecosystem-summit/
LOCATION:New York
ATTACH;FMTTYPE=image/png:https://womeninaiethics.org/wp-content/uploads/2024/08/Responsible-AI-Ecosystem-Summit.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20241031T110000
DTEND;TZID=America/Halifax:20241031T120000
DTSTAMP:20260403T152136Z
CREATED:20240507T054755Z
LAST-MODIFIED:20241024T041811Z
UID:2746-1730372400-1730376000@womeninaiethics.org
SUMMARY:AI\, Disinformation\, & Democracy
DESCRIPTION:With the rise of deepfake technology and its use in disinformation campaigns\, the need for proactive measures to counter its harms has never been more urgent. The mass adoption of Generative AI has introduced unprecedented risks as conspiracy theories and lies have spread like wildfire online. Researchers have found that women politicians on average receive more abuse on social media platforms like Facebook and Twitter (X) than their male counterparts. This is a disturbing statistic given the historic nature of the upcoming US presidential election with Kamala Harris\, the first Black woman and Asian American to lead a major party’s presidential ticket. We have invited tech and democracy activist Yael Eisenstat to discuss solutions for countering the harmful effects of social media\, AI-powered algorithms\, and Generative AI on our political discourse and democracy. Join us for this timely and highly relevant discussion on Thursday\, October 31st at 11a ET.
URL:https://womeninaiethics.org/event/waie-monthly-ai-expert-webinar-series-9/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/05/Eisenstat_headshot.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20241128T110000
DTEND;TZID=America/Halifax:20241128T120000
DTSTAMP:20260403T152136Z
CREATED:20240507T054803Z
LAST-MODIFIED:20240507T054803Z
UID:2747-1732791600-1732795200@womeninaiethics.org
SUMMARY:WAIE Monthly AI Expert Webinar Series
DESCRIPTION:Join Women in AI Ethics (WAIE) on the last Thursday of every month for a virtual chat with diverse experts in the responsible and ethical AI space.
URL:https://womeninaiethics.org/event/waie-monthly-ai-expert-webinar-series-10/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Virtual.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20241226T110000
DTEND;TZID=America/Halifax:20241226T120000
DTSTAMP:20260403T152136Z
CREATED:20240507T054817Z
LAST-MODIFIED:20240507T054817Z
UID:2748-1735210800-1735214400@womeninaiethics.org
SUMMARY:WAIE Monthly AI Expert Webinar Series
DESCRIPTION:Join Women in AI Ethics (WAIE) on the last Thursday of every month for a virtual chat with diverse experts in the responsible and ethical AI space.
URL:https://womeninaiethics.org/event/waie-monthly-ai-expert-webinar-series-11/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Virtual.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250130T110000
DTEND;TZID=America/New_York:20250130T120000
DTSTAMP:20260403T152136Z
CREATED:20250109T100834Z
LAST-MODIFIED:20250109T100834Z
UID:2982-1738234800-1738238400@womeninaiethics.org
SUMMARY:AI and Mental Health - The good\, the bad\, and the ugly.
DESCRIPTION:In 2023\, a Belgian man committed suicide after chatting with an AI chatbot. In October 2024\, a Florida mom sued an AI chat platform\, which she blames for her teenage son’s suicide. The site Character.ai is the second-most popular AI tool after ChatGPT and known to be highly addictive for young people. \nAs AI is increasingly used in healthcare from diagnosis to mental health support\, it raises serious ethical questions about these technological interventions. The lack of accountability and high potential for harm has overshadowed the potential benefits of AI. Researchers caution against use of AI in mental health without appropriate safeguards and have proposed policies to address privacy protection\, bias mitigation\, and self-harm prevention. \nWe have invited Romina Torres\, Associate Professor of Engineering and Science at Adolfo Ibañez University\, Chile. She will explain the components of a hypothetical system for supporting clinicians in the area of mental health and identify potential dangers (ethical issues) and AI vulnerabilities. \nJoin us for this timely and highly relevant discussion on Thursday\, January 30th at 11a ET.
URL:https://womeninaiethics.org/event/ai-and-mental-health-the-good-the-bad-and-the-ugly/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Virtual.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250131T110000
DTEND;TZID=America/New_York:20250131T120000
DTSTAMP:20260403T152136Z
CREATED:20250122T071411Z
LAST-MODIFIED:20250123T051425Z
UID:3008-1738321200-1738324800@womeninaiethics.org
SUMMARY:Monthly AI Ethics Reading Circles | AI & Mental Health
DESCRIPTION:Join WAIE on the last Friday of each month at 11a ET for an informal discussion about AI ethics books and research papers authored by women and non-binary experts. The theme for each month will be announced in advance.
URL:https://womeninaiethics.org/event/monthly-ai-ethics-reading-circles-ai-mental-health/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2024/01/Virtual.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20250227T110000
DTEND;TZID=America/Halifax:20250227T120000
DTSTAMP:20260403T152136Z
CREATED:20250126T035628Z
LAST-MODIFIED:20250224T051243Z
UID:3036-1740654000-1740657600@womeninaiethics.org
SUMMARY:WAIE Monthly AI Expert Webinar Series
DESCRIPTION:For Black History Month\, we have invited Digital Rights Specialist Emsie Erastus to discuss how historically marginalized communities are more likely to be left out of the Responsible AI discourse\, how this exclusion is reflected in the outcomes of algorithmic systems\, and best practices for nation states to undertake ethical deployments of AI.
URL:https://womeninaiethics.org/event/waie-monthly-ai-expert-webinar-series-3/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/01/speaker.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250228T110000
DTEND;TZID=America/New_York:20250228T120000
DTSTAMP:20260403T152136Z
CREATED:20250122T071538Z
LAST-MODIFIED:20250527T095144Z
UID:3010-1740740400-1740744000@womeninaiethics.org
SUMMARY:Monthly AI Ethics Reading Circles | AI & Race
DESCRIPTION:BOOK TITLE: Algorithms of Oppression: How Search Engines Reinforce Racism \nAUTHOR: Dr. Safiya Umoja Noble \nDESCRIPTION: (Source: Bookshop.org) \nRun a Google search for “Black girls”–what will you find? “Big Booty” and other sexually explicit terms are likely to come up as top search terms. But\, if you type in “white girls\,” the results are radically different. The suggested porn sites and un-moderated discussions about “why Black women are so sassy” or “why Black women are so angry” present a disturbing portrait of Black womanhood in modern society. \nIn Algorithms of Oppression\, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas\, identities\, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites\, along with the monopoly status of a relatively small number of Internet search engines\, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color\, specifically women of color. \nThrough an analysis of textual and media searches as well as extensive research on paid online advertising\, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance–operating as a source for email\, a major vehicle for primary and secondary school learning\, and beyond–understanding and reversing these disquieting trends and discriminatory practices is of utmost importance. \nAUTHOR BIO \nPhoto credit: Stella Kallnina \nDr. Safiya U. Noble is the David O. Sears Presidential Endowed Chair of Social Sciences and Professor of Gender Studies\, African American Studies\, and Information Studies at the University of California\, Los Angeles (UCLA). 
 She is the Director of the Center on Resilience & Digital Justice and Co-Director of the Minderoo Initiative on Tech & Power at the UCLA Center for Critical Internet Inquiry (C2i2). She currently serves as a Director of the UCLA DataX Initiative\, leading work in critical data studies for the campus. Professor Noble is the author of the best-selling book on racist and sexist algorithmic harm in commercial search engines\, entitled Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press)\, which has been widely reviewed in scholarly and popular publications. In 2021\, she was recognized as a MacArthur Foundation Fellow for her ground-breaking work on algorithmic discrimination. \nRELATED LINKS: \n\nLinkedIn\nAuthor website \nBuy the book “Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press)” \nImagining a Future Free from Algorithms of Oppression \nTime 100 Talks \nTEDx talk\n\nUSC Annenberg talks  Data X
URL:https://womeninaiethics.org/event/monthly-ai-ethics-reading-circles-ai-race/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/01/Algorithms-of-Oppression_-How-Search-Engines-Reinforce-Racism.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20250327T110000
DTEND;TZID=America/Halifax:20250327T120000
DTSTAMP:20260403T152136Z
CREATED:20250126T035803Z
LAST-MODIFIED:20250806T073619Z
UID:3038-1743073200-1743076800@womeninaiethics.org
SUMMARY:Webinar: RACE\, GENDER\, AND LABOR
DESCRIPTION:BOOK TITLE: Nannie Helen Burroughs: A Tower of Strength in the Labor World \nAUTHOR: Danielle Phillips-Cunningham \nOVERVIEW: \nBlack girls and women were at the forefront of the labor movements of the twentieth century. However\, many were relegated to the footnotes of history and their contributions minimized or erased. \nIn her recent book\, Dr. Danielle Phillips-Cunningham\, associate professor in the School of Management and Labor Relations at Rutgers University\, shares the story of Nannie Helen Burroughs\, whose leadership as an educator and civil rights leader was revolutionary in transforming the economic landscape for Black girls and women. \nNannie Burroughs established the National Training School for Women and Girls (NTS) in Washington\, DC\, which\, along with her work in the National Association of Colored Women’s Clubs\, was integral to a powerful labor movement. Dr. Phillips-Cunningham’s book is the first time that Nannie Burroughs’ story has been told\, and it definitively establishes her as one of America’s most influential labor leaders in the twentieth century. \nFor our March Women in AI Ethics™ monthly AI Experts webinar\, we have invited Dr. Phillips-Cunningham to discuss her insightful book\, the intersectionality of race\, gender\, and labor\, and the crucial lessons from history for tech labor organizers and racial justice activists. \nZoom registration link: https://us02web.zoom.us/webinar/register/WN_iDbKXgdSSh-AbrDiZFduZw#/registration \nHost: Mia Shah-Dand\, Founder – Women in AI Ethics™\, CEO – Lighthouse3 \nSpeaker: Dr. Danielle Phillips-Cunningham\, associate professor – School of Management and Labor Relations\, Rutgers University \nAUTHOR BIO: \nDanielle Phillips-Cunningham is an associate professor of Labor Studies and Employment Relations at Rutgers University. 
 She is the author of the award-winning book ‘Putting Their Hands on Race: Irish Immigrant and Southern Black Domestic Workers.’ She is also the author of the book ‘Nannie Helen Burroughs: A Tower of Strength in the Labor World’ (Georgetown University Press\, February 2025). \nDr. Phillips-Cunningham is a creative professor and researcher who integrates a deep interrogation of race\, labor\, and love for women’s history into her teaching\, research\, and service. She is experienced in developing and leading social justice initiatives in academic settings while working collaboratively with community organizations\, students\, professors\, and university administrators. She is interested in expanding institutional initiatives and connecting with women’s labor researchers. \nRELATED LINKS: \nWebsite: https://www.daniellephillips-cunningham.com/ \nLinkedIn: https://www.linkedin.com/in/danielle-phillips-cunningham-675209135/ \nNannie Helen Burroughs: A Tower of Strength in the Labor World https://press.georgetown.edu/Book/Nannie-Helen-Burroughs \nMy Texas public history project: \n\nhttps://www.youtube.com/watch?v=8MubIekbIQo\nhttps://www.youtube.com/watch?v=MNOnf4jwmM8\n\nAuthor of Putting Their Hands on Race: Irish Immigrant and Southern Black Domestic Workers \n*2020 Sara A. Whaley Book Prize\, National Women’s Studies Association \nhttps://www.rutgersuniversitypress.org/putting-their-hands-on-race/9781978800465/ \nOther: \nhttps://www.cambridge.org/core/journals/du-bois-review-social-science-research-on-race/article/intersectionality/1E5E73E8E54A487B4CCFE85BB299D0E6 \nhttps://supportny.org/wp-content/uploads/2018/04/mapping-the-margins.pdf
URL:https://womeninaiethics.org/event/webinar-race-gender-and-labor/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/01/Nannie-Helen-Burroughs_-A-Tower-of-Strength-in-the-Labor-World.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250328T110000
DTEND;TZID=America/New_York:20250328T180000
DTSTAMP:20260403T152136Z
CREATED:20250109T090552Z
LAST-MODIFIED:20250806T073648Z
UID:2961-1743159600-1743184800@womeninaiethics.org
SUMMARY:Reading Circle: AI Ethics Book Festival - 2025
DESCRIPTION:Women in AI Ethics™ (WAIE) is delighted to announce our first virtual AI Ethics book festival. This festival reflects our core mission and key role as a catalyst in the global movement towards inclusive and ethical AI. While AI presents many benefits\, there is an urgent need to elevate voices that ensure the harms to society are minimized and benefits from AI are distributed equitably. \n🎙️Join us to hear from pioneers and scholars whose work reflects a deep commitment to diverse and vibrant technological futures that include all of us! \n  \n👇🏾 DETAILS: \nType : Virtual – Online\nDate: Friday | March 28\, 2025\nTime: 11:00 AM – 6:00 PM ET\n\n \nPROGRAM AGENDA (Eastern Time) \n11:00 AM – Event Open\n11:15 AM – The AI Mirror: Reclaiming Our Humanity in an Age of Machine Thinking\, Shannon Vallor\n12:10 PM – Obfuscation: A User’s Guide for Privacy and Protest\, Helen Nissenbaum\n1:00 PM – Algorithms of Oppression: How Search Engines Reinforce Racism\, Safiya Noble\n2:00 PM – Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge In Computing\, Mar Hicks\n3:00 PM – Data Conscience: Algorithmic Siege on our Humanity\, Brandeis Marshall\n4:00 PM – Cyborg\, Laura Forlano\n5:00 PM – Social Hour & Audience Book Discussion\n6:00 PM – Event Close \n 
URL:https://womeninaiethics.org/event/reading-circle-ai-ethics-book-festival-2025/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/gif:https://womeninaiethics.org/wp-content/uploads/2025/01/unnamed.gif
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20250424T110000
DTEND;TZID=America/Halifax:20250424T120000
DTSTAMP:20260403T152136Z
CREATED:20250418T185629Z
LAST-MODIFIED:20250806T073714Z
UID:3498-1745492400-1745496000@womeninaiethics.org
SUMMARY:Webinar: AI\, Surveillance & Filmmaking | Nidhi Sinha
DESCRIPTION:Leading scholars and experts have warned about the increasing threat of surveillance to our autonomy and democracy. Relentless data collection has increased the likelihood of sensitive information being misused\, eroding public trust\, and putting the most vulnerable users at risk. Across the United States\, surveillance is being used to restrict reproductive rights and enforce abortion bans. Facial recognition technology is used at airports around the world to enforce restrictions\, often against individuals from marginalized and targeted groups. A resurgence in cyberattacks and the global rise of authoritarianism have renewed concerns about privacy and highlighted the urgent need for increased user protection from surveillance technologies. There is a growing number of documentaries and films advocating for greater transparency and accountability for these invasive technologies. \nFor our monthly AI Expert webinar on Thursday\, April 24 at 11a ET\, we have invited Nidhi Sinha to discuss her current project\, “Under Surveillance”\, a documentary about surveillance in the San Francisco Bay Area. Join us for a timely conversation with Nidhi on the pervasiveness of surveillance in our everyday lives and the role of films in helping us take back power from those who surveil us. \nRegistration link: https://us02web.zoom.us/webinar/register/8817428498455/WN_iDbKXgdSSh-AbrDiZFduZw \nHost: Mia Shah-Dand\, Founder – Women in AI Ethics™\, CEO – Lighthouse3 \nSpeaker: Nidhi Sinha\, Filmmaker\, Analyst \n\nSPEAKER BIO: \nNidhi Sinha works at the intersection of human rights and technology. Born and raised in the Bay Area\, Nidhi has a degree in Math and Computer Science from NYU. She has been at the center of tech and\, more importantly\, tech culture since a young age. Her own intersection of identities has shaped her understanding of the world. 
At the center of all of her work is the core question of who is being left out of the conversation and how to bring them in. \nAs a lover of art\, dance\, shapes\, colors\, and trees\, Nidhi’s goal is to help build a world of innovation\, kindness\, and collaboration.
URL:https://womeninaiethics.org/event/webinar-ai-surveillance-filmmaking-nidhi-sinha/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/png:https://womeninaiethics.org/wp-content/uploads/2025/04/Nidhi-Sinha-Montly-Webinar-1-e1745002650433.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250425T110000
DTEND;TZID=America/New_York:20250425T120000
DTSTAMP:20260403T152136
CREATED:20250427T072458Z
LAST-MODIFIED:20250806T073734Z
UID:3514-1745578800-1745582400@womeninaiethics.org
SUMMARY:Reading Circle: AI + Values - The AI Mirror | Shannon Vallor
DESCRIPTION:BOOK TITLE: The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking \nAUTHOR: Prof. Shannon Vallor \nBOOK OVERVIEW: (Source: Bookshop.org) \n\n \nFor many\, technology offers hope for the future—that promise of shared human flourishing and liberation that always seems to elude our species. Artificial intelligence (AI) technologies spark this hope in a particular way. They promise a future in which human limits and frailties are finally overcome—not by us\, but by our machines. \nYet rather than open new futures\, today’s powerful AI technologies reproduce the past. Forged from oceans of our data into immensely powerful but flawed mirrors\, they reflect the same errors\, biases\, and failures of wisdom that we strive to escape. Our new digital mirrors point backward. They show only where the data say that we have already been\, never where we might venture together for the first time. \nTo meet today’s grave challenges to our species and our planet\, we will need something new from AI\, and from ourselves. \nShannon Vallor makes a wide-ranging\, prophetic\, and philosophical case for what AI could be: a way to reclaim our human potential for moral and intellectual growth\, rather than lose ourselves in mirrors of the past. Rejecting prophecies of doom\, she encourages us to pursue technology that helps us recover our sense of the possible\, and with it the confidence and courage to repair a broken world. Prof. Vallor calls us to rethink what AI is and can be\, and what we want to be with it. \n \nAUTHOR BIO: \n \n \nProf. Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh\, where she is also appointed in Philosophy. She directs EFI’s Centre for Technomoral Futures and is co-Director of the UKRI’s BRAID (Bridging Responsible AI Divides) programme. 
Professor Vallor’s research explores how AI\, robotics\, and data science reshape human character\, habits\, and practices. Her work includes advising policymakers and industry on the ethical design and use of AI\, and she is a former AI Ethicist at Google. She is the author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press\, 2016) and The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford University Press\, 2024). The book is a Finalist for the 2025 PROSE Awards and was shortlisted for the Al-Rodhan Book Prize of the Royal Institute of Philosophy. \nRELATED LINKS: \n\n\n\nLinkedIn\nBook website\nHow philosopher Shannon Vallor delivered the year’s best critique of AI (Fast Company)\nShannon Vallor says AI does present an existential risk — but not the one you think (Vox) \nIn the Age of A.I.\, What Makes People Unique? (New Yorker)\nHow to be human in an age of AI (The New Statesman)\nA Faustian fable (TLS)\n\n\n\n\nThe AI Mirror — how technology blocks human potential (Financial Times)\n\nAI Is the Black Mirror (Nautilus)
URL:https://womeninaiethics.org/event/reading-circle-ai-values-the-ai-mirror-shannon-vallor/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/04/The-AI-Mirror_-How-to-Reclaim-Our-Humanity-in-an-Age-of-Machine-Thinking.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20250529T110000
DTEND;TZID=America/Halifax:20250529T120000
DTSTAMP:20260403T152136
CREATED:20250126T040712Z
LAST-MODIFIED:20250806T073825Z
UID:3042-1748516400-1748520000@womeninaiethics.org
SUMMARY:Webinar: AI & Labor
DESCRIPTION:TITLE: THE QUANTIFIED WORKER\, CAMBRIDGE UNIVERSITY PRESS\, 2023 \n(Buy the book from our affiliate store and support independent bookstores) \n  \nOVERVIEW: \n  \n \nThe information revolution has ushered in a data-driven reorganization of the workplace. Big data and AI are used to surveil workers and shift risk. Workplace wellness programs appraise our health. Personality job tests calibrate our mental state. The monitoring of social media and surveillance of the workplace measure our social behavior. With rich historical sources and contemporary examples\, The Quantified Worker explores how the workforce science of today goes far beyond increasing efficiency and threatens to erase individual personhood. With exhaustive detail\, Ifeoma Ajunwa shows how different forms of worker quantification are enabled\, facilitated\, and driven by technological advances. Timely and eye-opening\, The Quantified Worker advocates for changes in the law that will mitigate the ill effects of the modern workplace. \nTo join us for this and future WAIE webinars\, register at Webinar Registration – Zoom \n  \nAuthor: Ifeoma Ajunwa \n \nhttps://ifeomaajunwa.com/ \nIfeoma Ajunwa\, J.D.\, LL.M.\, Ph.D.\, is an award-winning tenured law professor and author of the highly acclaimed book\, The Quantified Worker\, published by Cambridge University Press. At Emory University School of Law\, she is the Asa Griggs Candler Professor of Law\, Associate Dean for Projects and Partnerships\, and founding director of the AI and the Future of Work Program. Ajunwa was recruited from the University of North Carolina School of Law\, where she was a tenured law professor and the founding director of the Artificial Intelligence Decision-Making Research (AI-DR) Program at UNC Law. 
Ajunwa is currently a Senior Correspondence Fellow at the Center for the Study of Private Law at Yale Law School and an Affiliate Fellow at Yale Law School’s Information Society Project (ISP). She has been a faculty associate at the Berkman Klein Center at Harvard University since 2017. She was a 2019 recipient of the NSF CAREER Award and a 2018 recipient of the Derrick A. Bell Award from the Association of American Law Schools (AALS). She is an elected member of the American Law Institute and a Life Fellow of the American Bar Foundation. Ajunwa’s research interests focus on global A.I. law and regulation\, A.I. and discrimination issues\, Privacy Law\, Business Law\, Health Law\, Labor Law\, and Law and Film\, among other areas.
URL:https://womeninaiethics.org/event/webinar-ai-labor/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/01/AI-Workers-_-Ifeoma-Ajunwa.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250530T110000
DTEND;TZID=America/New_York:20250530T120000
DTSTAMP:20260403T152136
CREATED:20250427T072731Z
LAST-MODIFIED:20250806T073845Z
UID:3516-1748602800-1748606400@womeninaiethics.org
SUMMARY:Reading Circle: AI + Labor - The Quantified Worker | Ifeoma Ajunwa
DESCRIPTION:TITLE: THE QUANTIFIED WORKER\, CAMBRIDGE UNIVERSITY PRESS\, 2023 \n(Buy the book from our affiliate store and support independent bookstores) \n  \nOVERVIEW: \n  \n \nThe information revolution has ushered in a data-driven reorganization of the workplace. Big data and AI are used to surveil workers and shift risk. Workplace wellness programs appraise our health. Personality job tests calibrate our mental state. The monitoring of social media and surveillance of the workplace measure our social behavior. With rich historical sources and contemporary examples\, The Quantified Worker explores how the workforce science of today goes far beyond increasing efficiency and threatens to erase individual personhood. With exhaustive detail\, Ifeoma Ajunwa shows how different forms of worker quantification are enabled\, facilitated\, and driven by technological advances. Timely and eye-opening\, The Quantified Worker advocates for changes in the law that will mitigate the ill effects of the modern workplace. \n  \nAuthor: Ifeoma Ajunwa \n \nhttps://ifeomaajunwa.com/ \nIfeoma Ajunwa\, J.D.\, LL.M.\, Ph.D.\, is an award-winning tenured law professor and author of the highly acclaimed book\, The Quantified Worker\, published by Cambridge University Press. At Emory University School of Law\, she is the Asa Griggs Candler Professor of Law\, Associate Dean for Projects and Partnerships\, and founding director of the AI and the Future of Work Program. Ajunwa was recruited from the University of North Carolina School of Law\, where she was a tenured law professor and the founding director of the Artificial Intelligence Decision-Making Research (AI-DR) Program at UNC Law. Ajunwa is currently a Senior Correspondence Fellow at the Center for the Study of Private Law at Yale Law School and an Affiliate Fellow at Yale Law School’s Information Society Project (ISP). 
She has been a faculty associate at the Berkman Klein Center at Harvard University since 2017. She was a 2019 recipient of the NSF CAREER Award and a 2018 recipient of the Derrick A. Bell Award from the Association of American Law Schools (AALS). She is an elected member of the American Law Institute and a Life Fellow of the American Bar Foundation. Ajunwa’s research interests focus on global A.I. law and regulation\, A.I. and discrimination issues\, Privacy Law\, Business Law\, Health Law\, Labor Law\, and Law and Film\, among other areas.
URL:https://womeninaiethics.org/event/reading-circle-ai-labor-the-quantified-worker-ifeoma-ajunwa/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/04/original-40E36518-57A6-4D6C-835F-ABFFE41E4C5A.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20250626T110000
DTEND;TZID=America/Halifax:20250626T120000
DTSTAMP:20260403T152136
CREATED:20250126T040604Z
LAST-MODIFIED:20250623T095125Z
UID:3040-1750935600-1750939200@womeninaiethics.org
SUMMARY:Webinar: AI & Socio-Technical Values\, Carla Vieira
DESCRIPTION:As AI becomes increasingly embedded in our daily lives\, it is shifting society’s core socio-technical values and leaving many grappling with the ethical implications of its use. Concepts like privacy\, accountability\, and fairness are being reshaped by AI-driven applications. For instance\, the recent wave of AI-generated Ghibli-style images has sparked debates over privacy\, copyright\, artistic integrity\, and the nature of creativity itself. The machine learning models that power AI don’t just reflect existing values—they actively shape them\, influencing the way we navigate an AI-driven world. \nFor this month’s AI Expert webinar\, we have invited Senior Data Engineer and Google Developer Expert in Machine Learning\, Carla Vieira\, to explore these critical intersections\, examine how AI challenges and redefines our ethical and societal norms and how sector-specific interests shape ethical considerations\, and highlight best-case examples where societies are actively safeguarding human values in the age of AI. \nJoin us for this timely and highly relevant session on Thursday\, June 26th at 11a ET. \nRegister at: https://us02web.zoom.us/webinar/register/WN_iDbKXgdSSh-AbrDiZFduZw \nSpeaker profile: \n \nCarla is a Brazilian Senior Data Engineer and Google Developer Expert in Machine Learning. She holds a master’s degree in Artificial Intelligence\, with research focused on building trustworthy and explainable AI systems. She was listed as a Rising Star (2021) in Women in AI Ethics. She is an active speaker and mentor\, dedicated to promoting diversity and inclusion in tech. Carla’s work focuses on the ethical implications of AI\, ensuring that advancements benefit all sectors of society. Outside of her professional life\, she enjoys doing CrossFit\, reading\, and travelling.
URL:https://womeninaiethics.org/event/webinar-ai-socio-technical-values-carla-vieira/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/01/AI-Socio-Technical-Values-Carla-Vieira.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250627T110000
DTEND;TZID=America/New_York:20250627T120000
DTSTAMP:20260403T152136
CREATED:20250427T073032Z
LAST-MODIFIED:20250623T095208Z
UID:3518-1751022000-1751025600@womeninaiethics.org
SUMMARY:Reading Circle: AI + Feminism - Cyborg | Laura Forlano
DESCRIPTION:BOOK TITLE: Cyborg \nAUTHOR: Dr. Laura Forlano \nDESCRIPTION: (source: Bookshop.org) \n \nThis introduction to cyborg theory provides a critical vantage point for analyzing the claims around emerging technologies like automation\, robots\, and AI. Cyborg analyzes and reframes popular and scholarly conversations about cyborgs from the perspective of feminist cyborg theory. Drawing on their combined decades of training\, teaching\, and research in the social sciences\, design\, and engineering education\, Laura Forlano and Danya Glabau introduce an approach called critical cyborg literacy. Critical cyborg literacy foregrounds power dynamics and pays attention to the ways that social and cultural factors such as gender\, race\, and disability shape how technology is imagined\, developed\, used\, and resisted. \nForlano and Glabau offer critical cyborg literacy as a way of thinking through questions about the relationship between humanity and technology in areas such as engineering and computing\, art and design\, and health care and medicine\, as well as the social sciences and humanities. Cyborg examines whether modern technologies make us all cyborgs—if we consider\, for instance\, the fact that we use daily technologies at work\, have technologies embedded into our bodies in health care applications\, or use technology to critically explore possibilities as artists\, designers\, activists\, and creators. Lastly\, Cyborg offers perspectives from critical race\, feminist\, and disability thinkers to help chart a path forward for cyborg theory in the twenty-first century. \n  \nAUTHOR BIO:  \n  \n \nLaura Forlano\, a Fulbright award-winning and National Science Foundation-funded scholar\, is a disabled writer\, social scientist\, and design researcher. She is a Professor in the departments of Art + Design and Communication Studies in the College of Arts\, Media\, and Design and a Senior Fellow at The Burnes Center for Social Change at Northeastern University. 
Forlano is also an Affiliated Fellow at the Information Society Project at Yale Law School. She received her Ph.D. in communications from Columbia University. \nLinkedIn: https://www.linkedin.com/in/laura4lano/  \nRELATED LINKS \n\nLinkedIn \nAuthor website \n[PDF] TechnoFeminism by Judy Wajcman \nInfrastructuring as Critical Feminist Technoscientific Practice – spheres\nFeminist Hacking/Making: Exploring new gender horizons of possibility » The Journal of Peer Production\nA New AI Lexicon: Smart – AI Now Institute\n\n 
URL:https://womeninaiethics.org/event/reading-circle-ai-feminism-cyborg-laura-forlano/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/04/Cyborg.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250725T110000
DTEND;TZID=America/New_York:20250725T120000
DTSTAMP:20260403T152136
CREATED:20250427T073213Z
LAST-MODIFIED:20250806T073930Z
UID:3520-1753441200-1753444800@womeninaiethics.org
SUMMARY:Reading Circle: AI + Governance - Data Conscience | Brandeis Marshall
DESCRIPTION:BOOK TITLE: Data Conscience: Algorithmic Siege on our Humanity \nAUTHOR: Dr. Brandeis Marshall \nDESCRIPTION: (Source: Bookshop.org) \n \nData has enjoyed ‘bystander’ status as we’ve attempted to digitize responsibility and morality in tech. In fact\, data’s importance should earn it a spot at the center of our thinking and strategy around building a better\, more ethical world. Its use―and misuse―lies at the heart of many of the racist\, gendered\, classist\, and otherwise oppressive practices of modern tech. \nIn Data Conscience: Algorithmic Siege on our Humanity\, computer science and data inclusivity thought leader Dr. Brandeis Hill Marshall delivers a call to action for rebel tech leaders who acknowledge and are prepared to address the current limitations of software development. In the book\, Dr. Marshall discusses how the philosophy of “move fast and break things” is\, itself\, broken and requires change. \nYou’ll learn about the ways that discrimination rears its ugly head in the digital data space and how to address them with several known algorithms\, including social network analysis and linear regression. \nA can’t-miss resource for junior-level to senior-level software developers who have gotten their hands dirty with at least a handful of significant software development projects\, Data Conscience also provides readers with: \n\n\n\nDiscussions of the importance of transparency\nExplorations of computational thinking in practice\nStrategies for encouraging accountability in tech\nWays to avoid double-edged data visualization\n\n\n\n\nSchemes for governing data structures with law and algorithms\n\n  \nAUTHOR BIO: \n \nDr. Brandeis Marshall is the founder and CEO of DataedX Group\, LLC. DataedX provides learning and development training to help educators\, scholars\, and practitioners in developing more responsible data/AI practices. \nDr. 
Marshall speaks\, writes\, and consults on how to move slower and build better people-first tech. She has been a Stanford PACS Practitioner Fellow and Partner Research Fellow at Siegel Family Endowment. \nMarshall has served as faculty at both Purdue University and Spelman College. Her scholarly work in data literacy\, data science\, and computing has been supported by the National Science Foundation and philanthropy organizations. She is the author of Data Conscience: Algorithmic Siege on our Humanity (Wiley\, 2022)\, co-editor of Mitigating Bias in Machine Learning (McGraw-Hill\, 2024)\, and a contributing author in The Black Agenda (Macmillan\, 2022). \nRELATED LINKS: \n\n\n\nLinkedIn\nAuthor website\nExplain Which AI You Mean\, please? \nWhat’s UnAI-able?\n\n\n\n\nAI Ethics with Dr. Brandeis Marshall\n\n 
URL:https://womeninaiethics.org/event/reading-circle-ai-governance-data-conscience-brandeis-marshall/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/04/Data-Conscience_-Algorithmic-Siege-on-our-Humanity.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20250731T110000
DTEND;TZID=America/Halifax:20250731T120000
DTSTAMP:20260403T152136
CREATED:20250126T041811Z
LAST-MODIFIED:20250806T073942Z
UID:3046-1753959600-1753963200@womeninaiethics.org
SUMMARY:Webinar: AI & Human Rights | Shahla Naimi
DESCRIPTION:The wide-scale deployment of Artificial Intelligence (AI) has serious implications for human rights and dignity. Popular AI models are plagued with pervasive biases\, resulting in discrimination against individuals from marginalized groups such as women and people of color. As these technologies are increasingly used in surveillance and autonomous weapons\, they pose additional risks to vulnerable segments of our society. There is an urgent imperative to ensure these powerful technologies are developed ethically and deployed responsibly to minimize harmful outcomes and maximize positive benefits for humanity. \nOn July 31\, for our AI Expert Webinar\, we have invited Shahla Naimi\, Policy Director\, AI & Human Rights at Salesforce\, to discuss how we can build technological safeguards to ensure protection of human rights in the AI age. \nRegister to join us for this and future Women in AI Ethics™ (WAIE) webinars. \n\n  \nSpeaker: \nShahla Naimi\, Policy Director\, AI & Human Rights at Salesforce\nLinkedIn: https://www.linkedin.com/in/snaimi \n\nBased in New York\, Shahla Naimi is the Policy Director in the Office of Ethical and Humane Use at Salesforce\, where she focuses on issues around AI and human rights. Her broader team – the Ethical Use Policy team – develops and implements policies to ensure the company’s technology is not abused or used for harm in society. Prior to joining Salesforce\, Shahla helped lead Google’s human rights program on the Government Affairs and Public Policy team. She previously worked for the Aga Khan Trust for Culture in Afghanistan on conservation and intangible heritage projects and for international nonprofits and organizations like the World Bank\, Oxfam\, CARE\, and others. She received her master’s in sociology and anthropology from the Graduate Institute of International and Development Studies in Geneva\, and she earned her bachelor’s degree in political science and philosophy from Yale University. 
She sits on the board of the Arab American Family Support Center. Shahla is a trained pro-bono doula and the co-founder of Kolba\, a nonprofit supporting Afghan refugees resettling in the United States. \n\nOther links:\nhttps://www.salesforce.com/blog/author/shahla-naimi/
URL:https://womeninaiethics.org/event/webinar-ai-human-rights-shahla-naimi/
LOCATION:Virtual – Online
ATTACH;FMTTYPE=image/jpeg:https://womeninaiethics.org/wp-content/uploads/2025/01/Virtual-1.jpg
END:VEVENT
END:VCALENDAR