Recordings and Session Details

Day One |  Tuesday, February 3, 2026 from 1:00 to 4:30 PM ET

Synthetic Media, GenAI, Gender and the Law in Canada

Presented By: Suzie Dunn, Interim Director of the Law and Technology Institute and Assistant Professor, Dalhousie University’s Schulich School of Law


Click here for presentation slides

Presentation Description: This session looks at some of the trends in AI and its connection to gender-based violence with a specific focus on sexualized images. It will provide a brief legal overview of Canada’s current and proposed laws to address AI and gender-based harms.

Learning Objectives:

  1. Understand the landscape of AI and its connection to GBV, including emerging technologies and trends.
  2. Gain an understanding of the current and proposed laws and highlight potential gaps in the legal system.
  3. Understand how AI is impacting sexual integrity and autonomy.

What We Heard: A Snapshot of Perspectives on AI & GBV

Presented By: Learning Network


Click here for presentation slides

Presentation Description: This brief presentation shares a high-level snapshot from a Learning Network survey that invited people across the GBV and allied sectors to reflect on how artificial intelligence (AI) is showing up in their work and in the lives of those they support. It highlights where respondents are noticing AI in practice, the risks and concerns they are naming, and early ways AI is intersecting with prevention, response, and service delivery. The findings are presented at a high level and are intended to provide orientation and a shared context for Forum discussions.


Spotlight Session: Intersectional Systems, Harms, and Responses in AI & GBV

Disrupting AI: Addressing AI-enabled forms of GBV

Presented By: Mitzie Hunter, President and CEO, Canadian Women’s Foundation


Click here for presentation slides

Presentation Description: Gendered digital harm is widespread in Canada and rising with the arrival of AI. For women and gender-diverse people, the consequences are serious. The Canadian Women’s Foundation research study Challenging Gendered Digital Harm revealed that the majority of women and gender-diverse people have experienced online harm, including harassment, hate speech, stalking, and non-consensual sharing of intimate images. Harms are intersectional and disproportionate: Black, Indigenous, racialized, 2SLGBTQIA+, youth (18-25), and people with disabilities are most frequently targeted, and the impacts are significant. Survivors reported major mental health effects, including stress, anxiety, and depression. Women and gender-diverse people with intersecting identities reported higher levels of trauma, isolation, and safety concerns. Many reduce their online activity or leave platforms altogether.

Foundation research explicitly recognized generative AI as an emerging amplifier of gendered digital harm. AI expands tech-facilitated gender-based violence exponentially by increasing the scale, speed, and anonymity of abuse, as evidenced in non-consensual explicit images, automated harassment, fabricated content, and biased facial recognition systems. With AI transformation disrupting ways of working unlike anything since the advent of the internet and cloud computing, how do we prevent the replication and deepening of inequities? What can we do to ensure that an intersectional gender lens is applied across AI policies? What levers can disrupt AI’s role in tech-facilitated gender-based violence?

Preventing AI-Enabled Harms Through Digital Media Literacy

Presented By: Dr. Kara Brisson-Boivin, Director of Research, MediaSmarts


Click here for presentation slides

Presentation Description: This presentation will highlight MediaSmarts’ prevention-focused approach to AI-enabled harms (e.g., gender-based violence) through digital media literacy education. Drawing on the Resilience through DigitalSmarts program and MediaSmarts’ AI literacy resources, we will examine how to better support youth, parents, educators, and other adults as they navigate AI systems, better understand harms such as deepfakes and image-based abuse, and build skills to respond with resilience. We will emphasize that AI literacy is an equity issue: access to AI tools does not necessarily mean people have the critical skills needed to identify risks or engage safely.

The Algorithm Is Not Neutral: How Racialized Care, Control, and Criminalization Are Coded Into Everyday Life

Presented By: Dr. Gifty Asare, Director of Research & Community Impact, WomenatthecentrE


Click here for presentation slides

Presentation Description: Algorithmic systems are often framed as objective, efficient, and impartial. Yet for Black women, girls, gender-diverse and trans people in Canada, these technologies frequently reproduce the same racial profiling, surveillance, and criminalization long embedded in institutions—and increasingly, in everyday life. Drawing on WomenatthecentrE’s Truth and Transformation: Advancing Gender Equity project and the Amourgynoir Framework, this presentation connects lived realities of gender-based violence to the logics underlying algorithmic decision-making. It asks not only how bias operates in systems, but what it would mean to refuse carceral responses altogether and instead root care in revolutionary love, community accountability, and transformative justice.

Learning Objectives:

  1. Understand how algorithmic bias is embedded within broader systems of racial profiling, criminalization, and gender-based violence affecting Black women, girls, gender-diverse and trans people.
  2. Explore how insights from the Truth and Transformation project illuminate the real-world impacts of AI and data-driven systems on everyday life.
  3. Be introduced to the Amourgynoir Framework as a lens for imagining non-carceral, care-centered responses to harm grounded in transformative justice and community accountability.

Using AI Safely in Survivor Advocacy

Presented By: Rhiannon Wong, Project Manager of the Technology Safety Canada initiative with the BC Society of Transition Houses and Women’s Shelters Canada 


Click here for presentation slides

Presentation Description: This session looks at how technology, including emerging AI tools, can affect safety, privacy, and confidentiality in the context of intimate partner violence. Using established tech safety and anti-violence principles, the presentation explores how digital tools may be helpful in some situations, while also creating new risks for survivors.

Participants will consider practical AI examples relevant to shelter and frontline work and learn how to approach these tools thoughtfully and responsibly. The session focuses on ethical use, legal responsibilities, and clear boundaries to support survivor safety and maintain trust in anti-violence services.



Day Two | Wednesday, February 4, 2026 from 1:00 to 4:30 PM ET

Artificial Intelligence and Gender-Based Technology-Facilitated Abuse: Harms, Limits, and Possibilities

Presented By: Dr. Nicola Henry, Professor of Global Studies and Deputy Director of the Social Equity Research Centre at RMIT University, Australia 


Click here for presentation slides

Presentation Description: Rapid advancements in digital technologies have contributed to new and evolving manifestations of gender-based violence. This includes a range of behaviors such as cyberstalking, image-based sexual abuse, digital dating abuse, online sexual harassment, and sexualized threats. Among these developments, artificial intelligence (AI) has emerged as both a new facilitator of harm and a proposed site of intervention. AI-enabled tools have intensified existing forms of abuse, including AI-generated image-based sexual abuse and sextortion, while also being promoted as solutions through detection systems, automated moderation, and survivor-support tools.

Drawing on international empirical research, policy analysis, and the development of survivor-centred digital tools, this keynote critically examines the dual role of AI in the GBV landscape. Using a design justice framework, the session explores whose harms are made visible, whose voices shape technological responses, and what ethical, effective, and trauma-informed AI interventions might look like in practice.

Learning Objectives:

  1. Gain a clearer understanding of how AI is being used to facilitate emerging forms of gender-based technology-facilitated abuse, including AI-generated image-based sexual abuse.
  2. Understand key tensions in the use of AI as both a source of harm and a proposed solution in GBV prevention and response.
  3. Be introduced to the concept of "design justice" as a useful lens for thinking about ethical, survivor-centred digital interventions.

The presentation will be followed by a fireside chat featuring Dr. Nicola Henry and Deepa Mattoo, Executive Director of the Barbra Schlifer Commemorative Clinic. 


Spotlight Session: Navigating AI Engagement in GBV Prevention and Response

Reclaim Your Digital Space: A Course for Ending Online GBV

Presented By: Aliina Vaisanen, Canadian Women's Foundation


Click here for presentation slides

Presentation Description: Digital harm is a serious and growing issue that affects women, girls, and gender-diverse people across Canada, particularly those with intersecting marginalized identities. Research shows that 61% of women and gender-diverse people in Canada have experienced gendered digital harm, compared to 53% of the general population. Black, Indigenous, racialized, 2SLGBTQIA+, youth, and people with disabilities are targeted most frequently. This alarming trend impacts people’s safety, mental health, and ability to participate fully and confidently in online spaces.

Addressing digital harm requires a collaborative response from organizations, governments, technology companies, schools, and communities. This session explores how hate, abuse, and harassment have been normalized online, and how gender equality organizations can play a critical role in pushing back against digital harm and advocating for safer, more inclusive digital environments.

Grounded in research and insights from women and gender-diverse people with lived experience, this session highlights the Canadian Women’s Foundation’s Reclaim Your Digital Space online learning resources. Designed for individuals experiencing digital harm, organizations concerned about online safety for women and gender-diverse people, and anyone interested in creating safer digital spaces, the session focuses on opportunities to learn, act, and drive change. By sharing practical tools, building knowledge and skills for safer online engagement, and inspiring collective action, Reclaim Your Digital Space supports efforts to address the rising prevalence of gender-based digital harm and strengthen digital safety for all.

The Cost of Virtual Connection: Risks and Opportunities for A.I. Relationships

Presented By: Dr. Ellen M. Kaufman, Kinsey Institute


Click here for presentation slides

Presentation Description: In step with recent advances in artificial intelligence (AI), AI-enabled “companions”—chatbots designed to offer on-demand intimacy and emulate interpersonal connection—have risen to the fore. These technologies are often viewed as a solution for the growing “loneliness epidemic,” offering an outlet for those yearning for platonic, romantic or sexual connection. But they also raise critical questions about how these relationships might challenge or reinforce harmful attitudes and behaviors that are ultimately inconsistent with existing relationship norms.

Building on nearly a decade of pioneering research in this rapidly growing area of relationship science, this talk explores the ongoing risks and benefits of AI relationships, parsing the potential for these technologies to meet individuals’ emotional, psychological, and sexual needs while also acknowledging the prospective harms, particularly in terms of gender-based violence, that these dynamics introduce and reinforce. Focusing on key areas of consent and agency, this talk ultimately offers a vision forward for users, developers, and policymakers to consider how we can shape the use of AI to meet our needs without corroding our relationships with each other.

Lessons from our Survivor AI: What does building a feminist AI look like?

Presented By: Eva Blum-Dumontet, Head of Movement Building and Policy at Chayn


Click here for presentation slides

Presentation Description: In October 2025, Chayn launched Survivor AI, a letter-generation tool designed to support survivors of image-based abuse in asserting their rights and reclaiming control. But Survivor AI was never just about shipping a product. From the start, we asked a bigger question: what would a feminist AI look like? And what would it mean to build an AI grounded in our trauma-informed design principles (accountability, agency, equity, hope, plurality, power sharing, privacy, and safety)? This presentation aims to get us thinking about how AI can be built differently.

Drawing on our process and on the work of academics who have been observing and documenting the building of Survivor AI, this talk shares the lessons, tensions, and trade-offs we encountered while attempting to create a feminist AI in practice. We will unpack how our trauma-informed design principles were translated into practice and offer insights for those wanting to move beyond “ethical AI” as rhetoric and towards building systems that genuinely serve those most impacted by harm.


AI in Gender-Based Violence Programming: Perils and Potentials

Presented By: Caroline Masboungi, UNICEF, Global GBViE Technology and Innovation


Click here for presentation slides

Presentation Description: This session explores how Artificial Intelligence can be leveraged to enhance Gender-Based Violence programming. Through participatory, scenario-based discussions, participants will examine AI's opportunities and risks, and explore survivor-centered design principles.

Learning Objectives:

  1. Deepen understanding of AI's potential, limitations, and risks in GBV and humanitarian programming, with actionable insights for ethical, survivor-centered integration.
  2. Engage in scenario-based discussions and collaborative problem-solving to critically examine AI use cases and co-design ethical solutions with cross-sectoral relevance.
  3. Gain practical tools for risk management and decision-making applicable across the humanitarian sector.