Glossary of AI-Related Terms

About this Glossary

This glossary was developed to support participants of the Learning Network’s 2026 Virtual Forum (AI & GBV: Harms, Impacts and Emerging Practices in Prevention & Response). It provides plain-language definitions of key AI-related terms that may arise in discussions of AI and GBV prevention, response, research, and advocacy.

Definitions have been adapted from external sources, and each entry includes references to original materials.

This glossary is intended as a living resource and will continue to evolve as language, technology, and GBV-informed practice develop. It is not intended to be a comprehensive or exhaustive glossary of AI terms, but rather a focused resource designed to support shared understanding in this Forum context.

How to Use this Glossary

This glossary is meant to support shared understanding and dialogue during the Forum, rather than to prescribe fixed or universal definitions. Participants may encounter these terms differently in their own work or communities. We encourage readers to use the glossary as a reference and starting point, and to reflect on how these concepts show up in their own practice, questions, and concerns.


Artificial Intelligence (AI)

Artificial intelligence (AI) refers to the ability of machines and computer systems to carry out tasks that are often associated with human intelligence, such as understanding language, recognizing patterns, learning from information, and solving problems. [1] [2]

AI is often described as an interdisciplinary field (commonly associated with computer science) that focuses on developing models and systems that can perform functions typically linked to human intelligence. [2]

In everyday contexts, AI can include technologies that support decision-making, automate tasks, and generate outputs such as text or images. Some AI systems are designed for specific purposes (like translating language or filtering content), while others may be used more broadly across different settings and sectors. [1] [3]

The term “AI” may be used to refer both to the technology itself and to the wider systems around it, such as how AI is trained, how it is deployed, what data it relies on, and how its impacts can reinforce or worsen social inequities (including gender-based violence). [1] [3]

Footnotes:
[1] UN Women. (n.d.). Glossary: Gender and Technology. Retrieved from https://www.unwomen.org/en/how-we-work/innovation-and-technology/glossary

[2] Government of Canada. (n.d.). Artificial intelligence terminology and concept map. Retrieved from https://www.noslangues-ourlanguages.gc.ca/en/artificial-intelligence-terminology-concept-map-eng

[3] Government of Canada. (n.d.). Guide on the use of generative artificial intelligence. Retrieved from https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-use-generative-ai.html 

Algorithm

An algorithm is a set of step-by-step instructions or rules used to solve a problem or complete a task. In computing, algorithms often take one or more inputs (such as data), follow a defined process, and produce an output (such as a decision, ranking, prediction, or recommendation). [1] [2]

Algorithms are used throughout digital systems and information technologies, including search engines, social media feeds, and automated decision-making tools. People often interact with algorithms without realizing it, because the steps and decision rules are usually built into software and happen behind the scenes. [3]

In the context of AI, algorithms are one of the building blocks that allow systems to learn patterns from data or generate outputs. Importantly, algorithms are not neutral on their own: the choices made by designers, and the data and assumptions an algorithm relies on, can shape outcomes in ways that may reinforce inequities or harms—especially for communities experiencing gender-based violence. [3] [4]
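
To make the input–steps–output idea concrete, here is a deliberately simple sketch (not drawn from any of the cited sources) of a toy ranking algorithm, written in Python. The scoring rule is invented purely for illustration, and it also shows how a designer's choices shape the outcome:

```python
# A toy algorithm: takes inputs (posts with engagement counts),
# follows defined steps (compute a score, sort), and produces an
# output (a ranked list). The scoring rule is a made-up example.
def rank_posts(posts):
    def score(post):
        # A design choice baked in by the developer: a share counts
        # three times as much as a like. Choices like this are one
        # reason algorithms are not neutral.
        return post["likes"] + 3 * post["shares"]
    return sorted(posts, key=score, reverse=True)

posts = [
    {"title": "A", "likes": 100, "shares": 2},
    {"title": "B", "likes": 10, "shares": 40},
]
ranked = rank_posts(posts)
print([p["title"] for p in ranked])  # B outranks A: score 130 vs. 106
```

Changing the weighting in `score` would produce a different ranking from the same inputs, which is exactly the sense in which the steps and decision rules built into software "happen behind the scenes."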

Footnotes:
[1] UN Women. (n.d.). Glossary: Gender and Technology. Retrieved from https://www.unwomen.org/en/how-we-work/innovation-and-technology/glossary

[2] National Institute of Standards and Technology (NIST). (n.d.). Glossary. Retrieved from https://csrc.nist.gov/glossary/term/algorithm

[3] Network of the National Library of Medicine (NNLM). (n.d.). Data Glossary. Retrieved from https://www.nnlm.gov/guides/data-glossary/algorithm

[4] George Brown College. (n.d.). Generative AI Glossary. Retrieved from https://www.georgebrown.ca/teaching-and-learning-exchange/teaching-resources/generative-ai/glossary

Bias

Bias in artificial intelligence (AI) refers to systematic patterns in data, models, or outputs that reflect and reproduce existing social, cultural, or structural inequalities. These biases often arise because AI systems are trained on large datasets that reflect historical and present-day power imbalances, such as gender, racial, cultural, or socioeconomic inequities. As a result, AI systems may generate outputs or make decisions that unfairly advantage some groups while disadvantaging others. [1] [2]

Bias in AI can appear in multiple ways, including gender bias, racial or ethnic bias, cultural bias, and socioeconomic bias. For example, if training data overrepresents certain groups or perspectives, AI systems may reinforce stereotypes, exclude marginalized voices, or normalize dominant viewpoints. Bias can also be shaped by how data is selected, filtered, labeled, or framed, as well as by how users interact with AI systems through prompts and inputs. [2] [3]

In the context of gender-based violence (GBV), bias is a critical concern because AI tools may be used in high-stakes contexts such as information provision, risk assessment, content moderation, resource prioritization, or decision-support. When bias is embedded in these systems, it can contribute to discrimination, misrepresentation, or harm, particularly for survivors who are Indigenous, Black, racialized, disabled, 2SLGBTQIA+, or otherwise marginalized. Bias in AI may therefore exacerbate existing inequities rather than reduce them, unless actively identified and addressed. [1] [4]

Bias can also refer to the behavior of algorithms themselves: a computer system’s decision-making may produce outcomes that are systematically less favorable to individuals within a particular group, even when there is no relevant difference between groups that would justify such outcomes. In this sense, algorithmic bias includes both the influence of biased training data and the ways in which AI systems weigh or act on information in ways that reproduce or amplify social inequities. [5]

Footnotes:
[1] UNESCO. (n.d.). Recommendation on the Ethics of Artificial Intelligence. Retrieved from https://unesco.org.uk/site/assets/files/14137/unesco_recommendation_on_the_ethics_of_artificial_intelligence_-_key_facts.pdf

[2] Government of Canada. (n.d.). Guide on the use of generative artificial intelligence. Retrieved from https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-use-generative-ai.html

[3] University of British Columbia Centre for Teaching, Learning and Technology. (n.d.). Glossary of Generative AI Terms. Retrieved from https://ai.ctlt.ubc.ca/resources/glossary-of-genai-terms/

[4] University of Saskatchewan Library. (n.d.). Generative Artificial Intelligence: Glossary of AI-related Terms. Retrieved from https://libguides.usask.ca/gen_ai/glossary#section_L

[5] UN Women. (n.d.). Glossary: Gender and Technology. Retrieved from https://www.unwomen.org/en/how-we-work/innovation-and-technology/glossary

Chatbots / Conversational AI

Conversational AI refers to artificial intelligence systems designed to understand, process, and respond to human language in a natural, conversational way. These systems allow people to interact with technology using text or speech, rather than technical commands. Conversational AI draws on techniques such as natural language processing (NLP) and machine learning to interpret meaning and generate responses. [1] [4]

A chatbot is a common application of conversational AI. Chatbots are AI-powered software programs designed to simulate conversation with human users, often through text-based interfaces (such as messaging platforms or websites), and sometimes through voice. They can answer questions, provide information, or help users navigate services in real time. Chatbots range from simple, rule-based systems to more advanced AI-driven tools capable of handling complex or context-dependent interactions. [1] [2] [3]

In GBV and allied contexts, chatbots and conversational AI may be used in settings such as information provision, resource navigation, translation, or automated support tools. [2] [4]

Footnotes:
[1] Government of British Columbia. (n.d.). AI terms to know. Retrieved from https://www2.gov.bc.ca/assets/gov/education/administration/kindergarten-to-grade-12/ai-in-education/ai-terms-to-know.pdf

[2] University of Saskatchewan Library. (n.d.). Generative Artificial Intelligence: Glossary of AI-related terms. Retrieved from https://libguides.usask.ca/gen_ai/glossary#section_P

[3] Syracuse University Libraries. (n.d.). Generative AI glossary. Retrieved from https://researchguides.library.syr.edu/c.php?g=1341750&p=10367071

[4] University of British Columbia CTLT. (n.d.). Glossary of GenAI terms. Retrieved from https://ai.ctlt.ubc.ca/resources/glossary-of-genai-terms/  

Data Privacy

Data privacy refers to the protection and responsible handling of personal information in digital systems. It involves practices and safeguards that ensure data about individuals is collected, used, stored, and shared in ways that respect consent, confidentiality, and ethical obligations, and that reduce the risk of misuse or harm. [1] [2]

In the context of artificial intelligence, data privacy includes protecting personal or sensitive information used to train or operate AI systems, as well as information that may be inferred, generated, or indirectly captured through AI-enabled technologies. Privacy risks can arise at multiple stages of the AI lifecycle, including data collection, processing, retention, and disclosure, particularly when individuals are unaware of or unable to meaningfully consent to how their data is used. [2] [3]

Data privacy is especially critical in gender-based violence contexts, where personal information may relate to survivors’ identities, communications, locations, or experiences of harm. Weak data privacy protections can increase risks of surveillance, re-identification, coercion, or retaliation, even when data is anonymized or aggregated. For this reason, strong data privacy practices are foundational to survivor safety, trust, and ethical AI use in this sector. [2] [3]

Footnotes:
[1] Network of the National Library of Medicine (NNLM). (n.d.). Data Glossary. Retrieved from https://www.nnlm.gov/guides/data-glossary/data-privacy

[2] UNESCO. (n.d.). Recommendation on the Ethics of Artificial Intelligence. Retrieved from https://unesco.org.uk/site/assets/files/14137/unesco_recommendation_on_the_ethics_of_artificial_intelligence_-_key_facts.pdf

[3] Office of the Privacy Commissioner of Canada. (n.d.). Privacy and artificial intelligence. Retrieved from https://www.priv.gc.ca/en/privacy-topics/technology/artificial-intelligence/

Dataset

A dataset is a structured collection of related data used to train, test, or evaluate artificial intelligence and machine learning systems. Datasets may include text, images, audio, video, or numerical information, and are often organized in tables, lists, or databases. In many AI systems, datasets contain labelled examples that help the system learn patterns and make predictions or classifications. [1]

The quality, composition, and governance of datasets play a critical role in how AI systems perform. Datasets that are incomplete, unrepresentative, outdated, or shaped by historical inequities can result in systems that produce inaccurate, biased, or harmful outputs. For this reason, diverse, well-documented, and responsibly curated datasets are widely recognized as essential to ethical and reliable AI development. [1] [2]

In gender-based violence contexts, datasets may include sensitive information about survivors, service users, or communities, or may be used to inform decisions related to risk assessment, service delivery, prevention, or content moderation. Poor data quality or weak governance can increase the risk of privacy breaches, misclassification, exclusion, or harm, particularly for marginalized groups. Careful attention to dataset design, consent, and accountability is therefore central to survivor-centred and equity-informed AI use. [2]

Footnotes:
[1] University of Saskatchewan Library. (n.d.). Glossary of AI Related Terms: Generative Artificial Intelligence. Retrieved from https://libguides.usask.ca/gen_ai/glossary

[2] UNESCO. (n.d.). Recommendation on the Ethics of Artificial Intelligence. Retrieved from https://unesco.org.uk/site/assets/files/14137/unesco_recommendation_on_the_ethics_of_artificial_intelligence_-_key_facts.pdf

Deepfakes

“Deepfakes are fake videos made through the use of advanced technology. They make it appear as though individuals are in videos they never took part in. Production of a deepfake requires photos or videos that could be taken in-person, from social media, or otherwise found online.” [1]

The term “non-consensual sexual deepfakes” is often used to capture “the non-consensual use of adults’ images and videos in the production and distribution of sexual deepfakes.” [1] It is a form of image-based sexual abuse. 

Footnotes:
[1] Learning Network. (2019). What you need to know about non-consensual sexual deepfakes. Retrieved from https://www.gbvlearningnetwork.ca/our-work/infographics/nonconsensualsexualdeepfakes/What%20You%20Need%20to%20Know%20About%20Non-Consensual%20Sexual%20Deepfakes.pdf

Generative AI

Generative artificial intelligence (generative AI, or GenAI) refers to AI systems that can create new content such as text, images, audio, video, or code in response to a user’s instructions (often called a “prompt”). These systems generate outputs based on patterns learned from large amounts of training data. [1] [2]

Generative AI is different from many other types of AI because it does not only analyze or classify existing information; it can produce new material that may appear original or human-made. For example, a generative AI tool might create an image from a text prompt, draft an email, summarize a document, or generate realistic-sounding dialogue. [2] [3]

Some generative AI systems are built using large language models (LLMs), which generate text by predicting likely word sequences based on patterns learned from large volumes of text. [1] [3]

Generative AI is discussed both for its potential uses (such as drafting, summarizing, and translation) and for its risks and harms in the context of gender-based violence, including the creation of synthetic or deceptive content that can be used to harass, threaten, impersonate, or misinform. [1] [4]

Footnotes:
[1] University of British Columbia, Centre for Teaching, Learning and Technology (CTLT). (n.d.). Glossary of GenAI terms: Generative AI. Retrieved from https://ai.ctlt.ubc.ca/resources/glossary-of-genai-terms/

[2] Network of the National Library of Medicine (NNLM). (n.d.). Data Glossary. Retrieved from https://www.nnlm.gov/guides/data-thesaurus/generative-artificial-intelligence

[3] University of Saskatchewan Library. (n.d.) Generative Artificial Intelligence: Glossary of AI Related Terms. Retrieved from https://libguides.usask.ca/gen_ai/glossary#section_G

[4] Government of Canada. (n.d.) Artificial Intelligence (AI) Terminology Concept Map. Retrieved from https://www.btb.termiumplus.gc.ca/tpv2alpha/alpha-eng.html?lang=eng&i=1&srchtxt=10470706&codom2nd_wet=1#resultrecs   

Hallucination

In the context of generative artificial intelligence (GenAI), a hallucination refers to an output that appears coherent, confident, or well-formed but is factually inaccurate, misleading, incomplete, or not supported by reliable source data. Hallucinations can include fabricated facts, incorrect explanations, nonexistent citations, or illogical responses that are presented as plausible or authoritative. [1] [2] [3]

Hallucinations can occur for several reasons, including limitations in training data, gaps in a model’s knowledge, biases in the data used to train the system, or the way a prompt is framed. Because many generative AI models generate responses by predicting likely patterns rather than verifying facts, they may produce outputs that sound reasonable even when they are incorrect. Content may also be outdated or uneven in quality across languages, depending on how and when the model was trained. [1] [3]

In gender-based violence (GBV) and allied contexts, hallucinations pose particular risks because AI-generated inaccuracies may affect safety-related decisions, legal understanding, access to services, or public-facing information. When used without careful review, hallucinated content can contribute to misinformation, reinforce harm, or undermine trust, especially in high-stakes or survivor-facing settings where accuracy and accountability are essential. [1] [4]

Footnotes:
[1] Government of Canada. (n.d.). Guide on the use of generative artificial intelligence. Retrieved from https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai/guide-use-generative-ai.html

[2] University of Saskatchewan Library. (n.d.). Glossary of AI Related Terms: Generative Artificial Intelligence. Retrieved from https://libguides.usask.ca/gen_ai/glossary

[3] MIT Sloan Teaching & Learning Technologies. (n.d.). Glossary of Terms: Generative AI Basics. Retrieved from https://mitsloanedtech.mit.edu/ai/basics/glossary/

[4] University of British Columbia Centre for Teaching, Learning and Technology. (n.d.). Glossary of generative AI terms. Retrieved from https://ai.ctlt.ubc.ca/resources/glossary-of-genai-terms/

Human Oversight / Human-in-the-Loop

Human oversight, often described as human-in-the-loop, refers to the principle that AI systems should not operate entirely autonomously in ways that affect people. Instead, meaningful human involvement must be maintained so that humans can monitor, interpret, intervene in, or override AI system outputs and decisions when needed. [1] [2]

Human oversight includes practices such as reviewing AI-generated outputs, setting limits on how AI systems are used, conducting impact assessments, and ensuring systems are auditable and traceable. These mechanisms help support accountability, transparency, and alignment with human rights, particularly in contexts where errors or bias could cause harm. [1] [3]

In gender-based violence (GBV) and allied contexts, human oversight is especially important because AI tools may be used in high-stakes environments involving safety, privacy, legal processes, or access to services. Maintaining human-in-the-loop approaches helps ensure that AI does not replace professional judgment, survivor choice, or trauma- and violence-informed decision-making, and that risks can be identified and addressed before harm occurs. [1] [2] [4]

Footnotes:
[1] UNESCO. (n.d.). Recommendation on the Ethics of Artificial Intelligence – key facts. Retrieved from https://unesco.org.uk/site/assets/files/14137/unesco_recommendation_on_the_ethics_of_artificial_intelligence_-_key_facts.pdf

[2] Government of British Columbia. (n.d.). AI terms to know. Retrieved from https://www2.gov.bc.ca/assets/gov/education/administration/kindergarten-to-grade-12/ai-in-education/ai-terms-to-know.pdf

[3] OECD. (2019). Recommendation of the Council on Artificial Intelligence. Retrieved from https://www.oecd.org/en/topics/ai-principles.html 

[4] Government of Canada. (n.d.). Guide on the use of generative artificial intelligence. Retrieved from https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai/guide-use-generative-ai.html 

Large Language Model (LLM)

A large language model (LLM) is a type of AI system designed to understand and generate human language. LLMs are trained on very large collections of text (such as books, articles, websites, and other written material), which helps them recognize patterns in language and produce responses that sound human-like. [1] [2]

LLMs can be used for more than just writing new text. They can support tasks such as summarizing and revising content, translating text, answering questions, and drafting messages or documents. [1] [3]

LLMs are commonly used in generative AI tools that people interact with through chat-style interfaces. While these tools can be helpful, their outputs may sound confident even when they are incomplete or incorrect, and they may reflect biases present in the data they were trained on. For this reason, LLM outputs should be approached thoughtfully. [3]
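
The idea of "predicting likely word sequences" can be illustrated with a deliberately tiny sketch (not from the cited sources, and vastly simpler than a real LLM): a predictor that counts which word follows which in a small sample of text, then always suggests the most frequent follower. Note that it predicts what is *likely*, not what is *true*, which is why confident-sounding output is not the same as accurate output:

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in a
# tiny "training" text, then predict the most frequent follower.
def build_model(text):
    words = text.split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(model, word):
    # Returns the most common word seen after `word`, or None if the
    # word never appeared in the training text.
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = build_model(
    "support is available support is confidential help is available"
)
print(predict_next(model, "is"))  # "available" (seen twice vs. once)
```

A real LLM works over billions of examples and far longer contexts, but the core behaviour sketched here is the same: the output reflects patterns in the training text, not verified facts.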

Footnotes:
[1] Government of Canada. (n.d.). Artificial Intelligence (AI) Terminology Concept Map. Retrieved from https://www.btb.termiumplus.gc.ca/tpv2alpha/alpha-eng.html?lang=eng&i=1&srchtxt=10469252&codom2nd_wet=1#resultrecs

[2] Encyclopaedia Britannica. (n.d.). Large language model. Retrieved from https://www.britannica.com/topic/large-language-model

[3] Government of British Columbia. (n.d.). AI terms to know. Retrieved from https://www2.gov.bc.ca/assets/gov/education/administration/kindergarten-to-grade-12/ai-in-education/ai-terms-to-know.pdf

Machine Learning (ML)

Machine learning (ML) is a type of artificial intelligence (AI) that enables computer systems to learn from data and improve their performance over time, without being explicitly programmed with a fixed set of rules for every situation. [1]

Machine learning systems identify patterns in information and use those patterns to make predictions, classifications, or decisions. Machine learning is used in many everyday technologies, including speech recognition, facial recognition, recommendation systems, and automated sorting or filtering of content. [2] [3]

Machine learning can be used in ways that support helpful goals (such as finding patterns in large datasets), but it can also create or amplify harms when the data used to train the system reflects bias or inequities. In the context of gender-based violence, this matters because machine learning tools may be used in high-stakes environments (such as risk assessment, prioritizing services, content moderation, or fraud detection), where errors or bias can have serious consequences. [1] [4]
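
As a deliberately simplified illustration of "learning from data" rather than being explicitly programmed (a toy invented for this glossary, not a real ML method or library), the sketch below fits a single number, a cutoff, from labelled examples. The spam-word counts and labels are made up:

```python
# A toy "model" that learns one parameter from labelled data: the
# spam-word cutoff that makes the fewest mistakes on the examples.
# No rule is hand-coded; the cutoff comes from the data itself.
def train_threshold(examples):
    # examples: list of (spam_word_count, is_spam) pairs.
    best_cutoff, best_errors = 0, len(examples)
    for cutoff in range(0, 11):
        errors = sum(
            1 for count, is_spam in examples
            if (count >= cutoff) != is_spam
        )
        if errors < best_errors:
            best_cutoff, best_errors = cutoff, errors
    return best_cutoff

data = [(0, False), (1, False), (4, True), (6, True)]
cutoff = train_threshold(data)
print(cutoff)  # cutoff learned from the examples, not hand-coded
```

The same mechanism is also where bias enters: if the labelled examples reflect inequities, the learned cutoff (or, in real systems, millions of learned parameters) will reproduce them.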

Footnotes:
[1] UNESCO. (n.d.). Recommendation on the Ethics of Artificial Intelligence – key facts. Retrieved from https://unesco.org.uk/site/assets/files/14137/unesco_recommendation_on_the_ethics_of_artificial_intelligence_-_key_facts.pdf

[2] UN Women. (n.d.). Glossary: Gender and Technology. Retrieved from https://www.unwomen.org/en/how-we-work/innovation-and-technology/glossary

[3] Network of the National Library of Medicine (NNLM). (n.d.). Data Glossary. Retrieved from https://www.nnlm.gov/guides/data-glossary/machine-learning

[4] University of Saskatchewan Library. (n.d.). Generative Artificial Intelligence: Glossary of AI Related Terms. Retrieved from https://libguides.usask.ca/gen_ai/glossary#section_L

Prompt

A prompt is the input, instruction, or set of directions given to a generative artificial intelligence (GenAI) system to guide what it produces. Prompts can take the form of a question, sentence, phrase, or detailed set of instructions, and they shape the content, tone, format, and level of detail in the AI’s output. [1] [2]

Prompts are important because generative AI systems do not independently decide what task to perform; instead, they respond to what the user asks. Clear, specific prompts with appropriate context are more likely to produce relevant and useful outputs, while vague or poorly constructed prompts may result in inaccurate, biased, or unhelpful responses. This relationship is often described as “garbage in, garbage out,” meaning the quality of the output depends heavily on the quality of the input. [2] [3]

In GBV and allied contexts, prompts matter because they can influence whether AI-generated content is accurate, trauma-informed, inclusive, and appropriate for high-stakes situations. The wording of prompts may affect how AI systems frame issues related to violence, safety, identity, or harm, and poorly designed prompts can unintentionally reproduce stereotypes or misinformation. As a result, prompts should be used thoughtfully and reviewed critically, rather than treated as neutral or automatic instructions. [2] [4]

Footnotes:
[1] George Brown College. (n.d.). Generative AI glossary. Retrieved from https://www.georgebrown.ca/teaching-and-learning-exchange/teaching-resources/generative-ai/glossary

[2] University of Saskatchewan Library. (n.d.). Generative AI: Glossary. Retrieved from https://libguides.usask.ca/c.php?g=739123&p=5334202

[3] MIT Sloan Teaching & Learning Technologies. (n.d.). Glossary of Generative AI basics. Retrieved from https://mitsloanedtech.mit.edu/ai/basics/glossary/

[4] University of Saskatchewan Library. (n.d.). Generative Artificial Intelligence: Crafting GenAI Prompts. Retrieved from https://libguides.usask.ca/c.php?g=739123&p=5334202 

Training data

Training data refers to the information (such as text, images, audio, video, code, or other datasets) that is used to teach an AI system how to perform a task. Training data helps a model learn patterns and relationships so that it can produce an output, such as a prediction, classification, or generated content. [1] [2]

The quality, diversity, and relevance of training data can strongly affect how an AI system performs—and whether it produces biased, inaccurate, or harmful outcomes. In the context of generative AI, training data often includes very large collections of human-created content, which can raise important questions about privacy, consent, safety, and intellectual property. [1] [2]

Footnotes:
[1] University of Saskatchewan Library. (n.d.). Generative Artificial Intelligence: Glossary of AI Related Terms. Retrieved from https://libguides.usask.ca/gen_ai/glossary#section_L

[2] Syracuse University Libraries. (n.d.). Glossary of AI Terms. Retrieved from https://researchguides.library.syr.edu/c.php?g=1341750&p=10367071