Chapman University

Artificial Intelligence (AI) Hub

AI Key Terms

Artificial Intelligence (AI): The ability of a digital computer or computer-controlled system to perform tasks commonly associated with intelligent beings. This includes learning from experience, reasoning, problem-solving, understanding language, recognizing patterns, and adapting to new situations.

Generative Models: A category of machine learning models that create new content by learning from training data and generating outputs that mimic the patterns and structures found in the data. Examples include text generation (ChatGPT), image synthesis (DALL·E), and music composition models.

Generative Pre-trained Transformer (GPT): A deep learning-based language model that generates human-like text by predicting the next word in a sequence. It is pre-trained on large datasets and can be fine-tuned for specific tasks such as translation, summarization, and conversational AI.

Machine Learning (ML): A subset of AI that focuses on developing algorithms that allow computers to learn from data, identify patterns, and make decisions with minimal human intervention. ML is broadly classified into supervised, unsupervised, and reinforcement learning.
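
As a quick illustration of the supervised case, the sketch below fits a classifier to a handful of labeled examples using scikit-learn; the toy data and labels are invented for the example.

# Minimal supervised learning sketch (toy data invented for illustration).
from sklearn.tree import DecisionTreeClassifier

# Each example is [hours_studied, hours_slept]; labels: 1 = passed the exam, 0 = failed.
X = [[8, 7], [1, 4], [6, 8], [2, 3], [9, 6], [0, 5]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)                 # the model learns a decision rule from the labeled data
print(model.predict([[7, 7]]))  # predicts a label for a new, unseen example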

Deep Learning (DL): A more advanced subset of machine learning that uses multi-layered artificial neural networks to model complex data representations. Deep learning powers applications such as speech recognition, image analysis, and large-scale NLP models.
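
The "multi-layered" idea can be made concrete with a small network definition; the sketch below uses PyTorch, and the layer sizes are arbitrary choices for illustration.

# A small multi-layer neural network in PyTorch (layer sizes chosen arbitrarily).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),  # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(256, 64),   # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer, e.g. scores for 10 classes
)

x = torch.randn(1, 784)   # one random input vector
print(model(x).shape)     # torch.Size([1, 10])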

Natural Language Processing (NLP): A branch of AI that enables computers to understand, interpret, and generate human language in a meaningful way. Applications include chatbots, language translation, sentiment analysis, and speech recognition.

Language Model (LM): A type of AI model designed to understand and generate human-like text by predicting the probability of word sequences. These models are fundamental to tasks such as text generation, summarization, and dialogue systems.
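
A toy bigram model shows what "predicting the probability of word sequences" means in practice; this is a pure-Python sketch, and the miniature corpus is invented for the example.

# Toy bigram language model: estimate P(next word | current word) from a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def next_word_probs(word):
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}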

Token: In NLP, a token is a fundamental unit of text processing, such as a word, subword, or character. Tokenization breaks down text into smaller components for analysis and modeling.
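
A rough word-level tokenizer can be written in a few lines of Python; real systems typically use subword schemes such as byte-pair encoding, so this is only a simplified sketch.

# Very simplified word-and-punctuation tokenization (production models use subword tokenizers).
import re

text = "Chapman University explains key AI terms."
tokens = re.findall(r"\w+|[^\w\s]", text)
print(tokens)  # ['Chapman', 'University', 'explains', 'key', 'AI', 'terms', '.']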

Fine-Tuning: A method of adapting a pre-trained AI model to a specific task by further training it on a smaller, domain-specific dataset. Fine-tuning improves performance for targeted applications while leveraging the model's existing knowledge.
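
A common fine-tuning pattern is to freeze most of a pre-trained network and train only a new output layer on the domain-specific data. The PyTorch sketch below illustrates this; the backbone here is a stand-in defined inline, not an actual released model.

# Fine-tuning sketch: freeze a pre-trained backbone, train only a new task-specific head.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stand-in for a pre-trained model
for param in backbone.parameters():
    param.requires_grad = False       # keep the pre-trained weights fixed

head = nn.Linear(64, 2)               # new layer for the target task (e.g. 2 classes)
model = nn.Sequential(backbone, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(8, 128), torch.randint(0, 2, (8,))  # one toy batch of domain data
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())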

Text Classification: The process of categorizing text into predefined labels, such as spam detection, sentiment analysis, or topic classification.
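
A minimal sentiment-style classifier can be sketched with scikit-learn on a few invented sentences; it counts words and learns which counts go with which label.

# Tiny text classification example: bag-of-words features + Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["I loved this movie", "Fantastic and fun", "Terrible acting", "I hated the plot"]
labels = ["positive", "positive", "negative", "negative"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(texts, labels)
print(classifier.predict(["What a fun movie"]))  # likely ['positive']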

Transfer Learning: A machine learning technique where a model trained on one task is repurposed for a different but related task, reducing the need for large amounts of new training data.

Prompt: In AI, a prompt is an input query or instruction given to a language model to generate a response. Prompt engineering optimizes how prompts are structured to achieve better outputs.
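
The difference between a bare prompt and an engineered one is easiest to see side by side; the strings below are purely illustrative and assume no particular model or API.

# Two ways of phrasing the same request to a language model (illustrative strings only).
bare_prompt = "Explain machine learning."

engineered_prompt = (
    "You are a tutor for first-year university students.\n"
    "Explain machine learning in three short bullet points,\n"
    "using one everyday example and no jargon."
)

print(engineered_prompt)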

Training: The process of feeding labeled or unlabeled data into an AI model to enable it to learn patterns, relationships, and decision-making rules for various tasks.
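
At its core, training is the repeated adjustment of a model's parameters to reduce error on the data. A minimal example is fitting the line y = w * x by gradient descent; the data below are toy values chosen so the true answer is w = 2.

# Minimal training loop: fit y = w * x to toy data by gradient descent.
data = [(1, 2), (2, 4), (3, 6)]  # the underlying relationship is y = 2x
w = 0.0                          # model parameter, initially uninformed
lr = 0.01                        # learning rate

for step in range(500):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad               # nudge the parameter to reduce the squared error

print(round(w, 3))               # close to 2.0 after training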

Bias in AI: Systematic errors in AI models that lead to unfair or discriminatory outcomes. Bias can originate from training data, model architecture, or real-world deployment, impacting social fairness and decision-making.

Ethical AI: The development and use of AI systems that prioritize fairness, transparency, privacy, and accountability to prevent harm and ensure inclusivity.

Hallucination: In AI, a hallucination occurs when a generative model produces false or misleading outputs that do not align with reality. Large language models are especially prone to this, for example by fabricating facts, sources, or citations that sound plausible.

Alignment: The process of ensuring that an AI system's behavior and outputs align with human values, ethical guidelines, and intended objectives to prevent harmful or misleading responses.

Reinforcement Learning from Human Feedback (RLHF): A training technique that incorporates human feedback to improve AI model behavior, often used to make AI outputs more aligned with human preferences and ethical standards.

Embedding: A numerical representation of words, sentences, or concepts in a continuous vector space that allows AI models to understand relationships and similarities between different pieces of text.
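
The key property is that similar meanings end up as nearby vectors, which can be measured with cosine similarity. The three-dimensional vectors below are invented for illustration; real embeddings have hundreds or thousands of dimensions.

# Cosine similarity between toy word embeddings (vectors invented for illustration).
import math

embeddings = {
    "cat": [0.90, 0.80, 0.10],
    "dog": [0.85, 0.75, 0.20],
    "car": [0.10, 0.20, 0.95],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

print(cosine(embeddings["cat"], embeddings["dog"]))  # high: related concepts
print(cosine(embeddings["cat"], embeddings["car"]))  # low: unrelated concepts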

Zero-Shot Learning: The ability of an AI model to perform a task without prior training on specific examples, relying instead on general knowledge and reasoning.
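
For example, a zero-shot prompt simply states the task without any worked examples; the string below is illustrative only and assumes no particular model.

# Zero-shot: the task is described, but no solved examples are given.
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'The battery died within an hour.'"
)
print(zero_shot_prompt)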

Few-Shot Learning: A model's ability to learn a new task with minimal training data, often using a few examples provided in a prompt.
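
By contrast, a few-shot prompt embeds a handful of solved examples ahead of the new case; again, the string is purely illustrative.

# Few-shot: a few solved examples are included in the prompt itself.
few_shot_prompt = (
    "Review: 'Great picture quality.' Sentiment: positive\n"
    "Review: 'Stopped working after a week.' Sentiment: negative\n"
    "Review: 'The battery died within an hour.' Sentiment:"
)
print(few_shot_prompt)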

Multimodal AI: AI systems that process and generate multiple types of data, such as text, images, audio, and video, within a single model. An example is OpenAI's GPT-4, which can analyze both text and images.