| Term | Definition |
|---|---|
| Artificial Intelligence (AI) | The ability of a digital computer or computer-controlled system to perform tasks commonly associated with intelligent beings. This includes learning from experience, reasoning, problem-solving, understanding language, recognizing patterns, and adapting to new situations. |
| AI Literacy | The ability to understand how AI tools work, evaluate their outputs for accuracy, bias, and limitations, and use AI responsibly, ethically, and appropriately. |
| Prompt Engineering | The practice of designing and refining inputs given to an AI model to guide its responses. It involves choosing clear instructions, examples, and context to improve accuracy and relevance. Effective prompt engineering enables users to obtain more reliable and useful outputs from AI systems. |
| Context Window | The maximum amount of text or information an AI model can consider at one time when generating a response. It defines how much prior conversation or input the model can “remember” while responding. A larger context window allows the model to better understand long documents or extended conversations. |
| Generative Models | A category of machine learning models that create new content by learning from training data and generating outputs that mimic the patterns and structures found in the data. Examples include text generation (ChatGPT), image synthesis (DALL·E), and music composition models. |
| Generative Pre-trained Transformer (GPT) | A deep learning-based language model that generates human-like text by predicting the next word in a sequence. It is pre-trained on large datasets and can be fine-tuned for specific tasks such as translation, summarization, and conversational AI. |
| Machine Learning (ML) | A subset of AI that focuses on developing algorithms that allow computers to learn from data, identify patterns, and make decisions with minimal human intervention. ML is broadly classified into supervised, unsupervised, and reinforcement learning. |
| Deep Learning (DL) | A subset of machine learning that uses multi-layered artificial neural networks to model complex data representations. Deep learning powers applications such as speech recognition, image analysis, and large-scale NLP models. |
| Natural Language Processing (NLP) | A branch of AI that enables computers to understand, interpret, and generate human language in a meaningful way. Applications include chatbots, language translation, sentiment analysis, and speech recognition. |
| Language Model (LM) | A type of AI model designed to understand and generate human-like text by predicting the probability of word sequences. These models are fundamental to tasks such as text generation, summarization, and dialogue systems. |
| Responsible AI | The design, development, and use of AI systems in a manner that is ethical, fair, transparent, and accountable. It emphasizes the protection of privacy, reduction of bias, and minimization of potential harm, ensuring that AI technologies benefit people and society while respecting human values and rights. |
| Token | In NLP, a token is a fundamental unit of text processing, such as a word, subword, or character. Tokenization breaks down text into smaller components for analysis and modeling. |
| Guardrails | Rules, constraints, and safety measures built into or applied around AI systems to guide their behavior. They help prevent harmful, unethical, or inappropriate outputs by setting limits on what the AI can generate or do. Guardrails also support responsible AI use by promoting safety, compliance, and alignment with human values. |
| Fine-Tuning | A method of adapting a pre-trained AI model to a specific task by further training it on a smaller, domain-specific dataset. Fine-tuning improves performance for targeted applications while leveraging the model’s existing knowledge. |
| Text Classification | The process of automatically assigning text to predefined categories or labels. It is commonly used for tasks such as spam detection, sentiment analysis, and topic classification. By analyzing patterns in language, AI models can quickly organize and interpret large volumes of text. |
| Transfer Learning | A machine learning technique where a model trained on one task is repurposed for a different but related task, reducing the need for large amounts of new training data. |
| Prompt | The input, such as a question, instruction, or example, given to a language model to guide its response. |
| Training | The process of feeding labeled or unlabeled data into an AI model to enable it to learn patterns, relationships, and decision-making rules for various tasks. |
| Bias in AI | Systematic errors in AI models that lead to unfair or discriminatory outcomes. Bias can originate from training data, model architecture, or real-world deployment, impacting social fairness and decision-making. |
| Ethical AI | The development and use of AI systems that prioritize fairness, transparency, privacy, and accountability to prevent harm and ensure inclusivity. |
| Hallucination | An output from a generative model that is false or misleading and does not align with reality. Hallucinations are common in large language models, which can generate fabricated facts. |
| Alignment | The process of ensuring that an AI system’s behavior and outputs align with human values, ethical guidelines, and intended objectives to prevent harmful or misleading responses. |
| Overfitting | Occurs when a model learns the training data too closely, so it struggles to generalize and performs poorly on new or unseen data, reducing its real-world usefulness. |
| Reinforcement Learning from Human Feedback (RLHF) | A training technique that incorporates human feedback to improve AI model behavior, often used to make AI outputs more aligned with human preferences and ethical standards. |
| Agent | In AI, a system that can independently perform tasks on behalf of a user or another system. Agents can observe information, make decisions, and take actions to achieve specific goals, often across multiple steps; they may use tools, interact with other systems, and adjust their behavior in response to feedback or changing conditions. |
| Automation | The use of AI to perform repetitive or routine tasks with minimal human intervention, increasing efficiency, reducing errors, and saving time on predictable work so people can focus on more complex, creative, or decision-based tasks. |
| Embedding | A numerical representation of words, sentences, or concepts in a continuous vector space that allows AI models to understand relationships and similarities between different pieces of text. |
| Workflow Integration | The process of embedding AI tools and systems into existing workflows and daily processes so tasks are completed more efficiently without disruption. Effective workflow integration enables organizations to derive practical value from AI by supporting, rather than replacing, human decision-making. |
| Zero-Shot Learning | The ability of an AI model to perform a task without prior training on specific examples, relying instead on general knowledge and reasoning. |
| Few-Shot Learning | A model’s ability to perform a new task using only a small number of examples. Instead of being retrained, the model relies on its existing knowledge and the examples provided in the prompt to understand what is expected. This approach makes AI systems more flexible and efficient, especially when large labeled datasets are not available. |
| Multimodal AI | AI systems that process and generate multiple types of data simultaneously, such as text, images, audio, and video. An example is OpenAI’s GPT-4, which can analyze both text and images. |
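The Token entry above can be made concrete with a minimal sketch. This is a toy word-level tokenizer using a simple regular expression; real LLM tokenizers use learned subword schemes such as byte-pair encoding (BPE), so the splits below are an illustrative assumption, not how any particular model tokenizes:

```python
import re

def tokenize(text: str) -> list[str]:
    # Toy word-level tokenizer: lowercase the text, then match either
    # runs of word characters or single punctuation marks.
    return re.findall(r"\w+|[^\w\s]", text.lower())

tokens = tokenize("AI models read tokens, not words!")
print(tokens)  # ['ai', 'models', 'read', 'tokens', ',', 'not', 'words', '!']
```

Note that punctuation becomes its own token, which is why token counts usually exceed word counts.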
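The Text Classification entry describes assigning text to predefined labels. A minimal rule-based sketch, assuming hypothetical keyword sets rather than a trained model, shows the input/output shape of the task:

```python
def classify(text: str, categories: dict[str, set[str]]) -> str:
    # Toy rule-based classifier: pick the category whose keyword set
    # overlaps the input text the most. Real systems learn these
    # patterns from labeled data instead of using hand-written lists.
    words = set(text.lower().split())
    return max(categories, key=lambda c: len(words & categories[c]))

# Hypothetical keyword sets for a spam/ham example.
categories = {
    "spam": {"winner", "free", "prize", "click"},
    "ham": {"meeting", "report", "tomorrow", "agenda"},
}
print(classify("Click now to claim your free prize", categories))  # "spam"
```

A learned classifier replaces the keyword overlap with patterns inferred during training, but the interface, text in and label out, is the same.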
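The Embedding entry says vectors let models measure similarity between pieces of text. A common measure is cosine similarity; this sketch uses made-up 3-dimensional vectors (real embedding models produce hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine of the angle between two vectors: 1.0 means same direction
    # (very similar), 0.0 means orthogonal (unrelated).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings chosen by hand for illustration only.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
car = [0.1, 0.0, 0.95]
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```

Related concepts end up close together in the vector space, which is what powers semantic search and retrieval.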
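The Prompt and Few-Shot Learning entries fit together: a few-shot prompt packs labeled examples into the input so the model infers the task from them. A sketch of how such a prompt might be assembled, using an invented sentiment-labeling format:

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    # Assemble labeled examples followed by the new input, so the model
    # can infer the task pattern from the examples alone.
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # End with the unlabeled query; the model is expected to fill in
    # the label after the trailing "Sentiment:".
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    [("I loved this film.", "positive"), ("Terribly boring.", "negative")],
    "An absolute delight from start to finish.",
)
print(prompt)
```

A zero-shot version would omit the examples entirely and rely on an instruction alone.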
