What’s up everyone!
Today’s post is here to help you better understand some of the terminology you’ve likely encountered in books, the news, videos, and beyond.
The field of Artificial Intelligence is vast, and the sheer amount of terminology can be overwhelming—even for me.
To make things simpler, I’ve put together an "umbrella" of key terms. This guide will give you a solid foundation, making it easier to grasp the basics and navigate the most commonly used terms when exploring educational resources.
Artificial Intelligence (AI): Machines that try to think or act like humans.
Machine Learning (ML): When computers learn from data instead of being programmed step-by-step.
Neural Network: A computer system that works like a tiny, fake brain.
Algorithm: A step-by-step recipe that a computer follows to solve a problem.
Data: Pieces of information that computers use to learn and make decisions.
Training: Teaching a computer by showing it lots of examples.
Model: A trained program that makes predictions or decisions.
Chatbot: A computer program that talks with people using text or voice.
Deep Learning: A special kind of machine learning with many layers in its "fake brain."
Natural Language Processing (NLP): Teaching computers to understand, interpret, and generate human language.
Supervised Learning: Learning with the help of labeled examples (like a teacher giving answers).
Unsupervised Learning: Learning by looking for patterns without labeled examples.
Reinforcement Learning: Learning by trial and error to earn rewards.
Computer Vision: Teaching computers to "see" and understand images or videos.
Bias: When a computer makes unfair decisions because its training data was skewed or unrepresentative.
Artificial General Intelligence (AGI): A machine that's as smart as a human in every way.
Artificial Superintelligence (ASI): A machine that's way smarter than any human.
Overfitting: When a model memorizes its training examples too closely (noise and all) and struggles with new data it hasn’t seen before.
Underfitting: When a model is too simple or undertrained to capture the patterns in the data, so it does poorly even on familiar examples.
Transfer Learning: Using what a model learned from one problem to solve another.
Hyperparameters: Settings you choose before training (like the learning rate) that control how a model learns.
Feature Engineering: Picking or creating the most helpful input variables (features) for a model.
Embeddings: Turning words or other things into lists of numbers (vectors) so computers can compare and work with them.
Turing Test: A test to see if a computer can act so human-like that people can’t tell it’s a machine.
Gradient Descent: A way for a model to improve step by step by nudging its settings in whichever direction shrinks its errors (there’s a tiny sketch after this list).
Loss Function: A score that tells the model how bad its guesses are.
Epoch: One round of teaching a model using all the training data.
Backpropagation: The technique a model uses during training to trace its mistakes back through the network so it knows how to correct them.
Generative AI: AI that creates new things like pictures, text, or music.
Fine-Tuning: Adjusting a pre-trained model to make it work better on new tasks.
Tokenization: Breaking text into small pieces called tokens (like words or word fragments) that a computer can process (see the example after this list).
Attention Mechanism: A way for models to weigh which parts of the input matter most for the task at hand (a small example follows this list).
Explainability: Helping humans understand how AI made a decision.
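
To make a few of these terms concrete, here is a minimal Python sketch of training: a tiny made-up model (y = w * x + b) fit to three invented data points using gradient descent. The data, the learning rate, and the number of epochs are all assumptions chosen purely for illustration, not a recipe from any particular library.

```python
# A made-up example: a tiny model (y = w * x + b) trained with gradient
# descent. It ties together data, training, model, loss function,
# gradient descent, hyperparameters, and epoch from the list above.

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # made-up (x, y) pairs following y = 2x + 1

w, b = 0.0, 0.0        # the model's parameters start out as rough guesses
learning_rate = 0.1    # a hyperparameter: how big each learning step is

for epoch in range(1000):               # one epoch = one pass over all the data
    grad_w, grad_b = 0.0, 0.0
    loss = 0.0
    for x, y in data:
        prediction = w * x + b          # the model makes a guess
        error = prediction - y
        loss += error ** 2              # loss function: how bad the guesses are
        grad_w += 2 * error * x         # gradients say which way the loss grows...
        grad_b += 2 * error
    w -= learning_rate * grad_w / len(data)   # ...so we step the opposite way
    b -= learning_rate * grad_b / len(data)

print(f"learned w = {w:.2f}, b = {b:.2f}")    # should land close to w = 2, b = 1
```

If you run it, the learned w and b end up very close to 2 and 1, the pattern hidden in the made-up data.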
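
Tokenization and embeddings can also be shown in a few lines. This is just a toy: real tokenizers split text into learned sub-word pieces, and real embeddings are learned from data, so every token and number below is invented for illustration.

```python
# A toy look at tokenization and embeddings. All the vectors here are
# made up; in a real model they are learned from huge amounts of text.

sentence = "cats chase mice"

# Tokenization: break the text into small pieces (tokens), here just plain words.
tokens = sentence.split()        # ['cats', 'chase', 'mice']

# Embeddings: map each token to a list of numbers so the computer can
# compare and combine them.
embeddings = {
    "cats":  [0.9, 0.1, 0.4],
    "chase": [0.2, 0.8, 0.5],
    "mice":  [0.7, 0.3, 0.6],
}

vectors = [embeddings[token] for token in tokens]
print(tokens)
print(vectors)
```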
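
And here is a toy version of the idea behind attention: made-up relevance scores are turned into weights with softmax, so the highest-scoring word gets the most focus. The words and scores are invented; real models compute these scores from the data itself.

```python
import math

# A toy attention step: softmax turns pretend relevance scores into
# weights, so the model "pays more attention" to the highest-scoring word.

words  = ["the", "cat", "sat"]
scores = [0.1, 2.0, 0.5]                       # pretend relevance scores

exps = [math.exp(s) for s in scores]
weights = [e / sum(exps) for e in exps]        # softmax: weights now sum to 1

for word, weight in zip(words, weights):
    print(f"{word}: {weight:.2f}")             # 'cat' gets the largest weight
```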
Take your AI education further with these posts…
Have a great day!