Vectors and Vector Operations
Learn what vectors are, why they are the fundamental building blocks of machine learning, and how to manipulate them.
Why This Matters for AI
Every piece of data that goes into an AI model — every image pixel, every word in a sentence, every user preference — gets turned into a vector. When a model like GPT-3 reads your prompt, it converts each token (roughly a word or word piece) into a vector of 12,288 numbers. When Spotify recommends a song, it compares vectors representing your taste to vectors representing songs. Vectors aren't just a math concept — they are literally the language that AI speaks. If you don't understand vectors, you can't understand AI. Let's fix that.
The Intuition (No Math Yet)
Think of a vector as a list of numbers that describes something. That's it. A list of numbers. If you describe a house with 3 numbers — [square feet, number of bedrooms, price] — that's a 3-dimensional vector. If you describe a movie with 100 attributes, that's a 100-dimensional vector.

The beautiful thing about vectors is that once you represent things as lists of numbers, you can do math on them. You can measure how similar two things are (dot product). You can add them together (vector addition). You can stretch or shrink them (scalar multiplication). An arrow on a piece of paper is just a visual way to think about a 2D vector. But real AI vectors live in hundreds or thousands of dimensions. Don't worry about visualizing those — just think of them as lists of numbers where each number means something.

Here's the key intuition: vectors that are "close" to each other (measured by operations we'll learn) represent things that are similar. The word "king" and the word "queen" have vectors that are close to each other in a language model. A photo of a cat and a photo of a dog have vectors that are closer to each other than either is to a photo of a car. This is the foundation of everything in AI.
The Formal Math
What is a vector?
Vector Addition
Scalar Multiplication
Dot Product
Vector Magnitude (Length)
Cosine Similarity
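For reference, the standard definitions the subsections above cover, collected in one place:

```latex
% A vector in R^n is an ordered list of n numbers
\mathbf{v} = (v_1, v_2, \dots, v_n)

% Vector addition (element-wise)
\mathbf{a} + \mathbf{b} = (a_1 + b_1,\; a_2 + b_2,\; \dots,\; a_n + b_n)

% Scalar multiplication (stretch or shrink)
c\,\mathbf{v} = (c v_1,\; c v_2,\; \dots,\; c v_n)

% Dot product (a single number measuring alignment)
\mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^{n} a_i b_i

% Magnitude (Euclidean length)
\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2} = \sqrt{\mathbf{v} \cdot \mathbf{v}}

% Cosine similarity (dot product normalized by the lengths)
\cos\theta = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|\,\|\mathbf{b}\|}
```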
Interactive Visualization
Interactive: Vector Operations
Drag the sliders to change the vectors and see how operations work visually.
Interactive: Dot Product
The dot product measures how much two vectors point in the same direction. Change the angle and see how it affects the value.
● Projection (yellow dashed) shows how much of b aligns with a
● Positive dot = same direction
● Negative dot = opposite direction
● Zero dot = perpendicular (90°)
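The three cases in the legend above can be checked directly in NumPy. The vectors here are made up purely for illustration:

```python
import numpy as np

a = np.array([1.0, 0.0])

same = np.array([2.0, 0.0])       # points the same way as a
opposite = np.array([-2.0, 0.0])  # points the opposite way
perp = np.array([0.0, 3.0])       # perpendicular to a

print(a @ same)      # 2.0  -> positive: same direction
print(a @ opposite)  # -2.0 -> negative: opposite direction
print(a @ perp)      # 0.0  -> zero: perpendicular (90°)
```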
Math → Code Bridge
See the math and its Python equivalent side by side. Same concept, two languages.
Creating Vectors
In NumPy, vectors are 1D arrays. Each element is a component of the vector.
import numpy as np
# A vector is just a NumPy array
v = np.array([3, 7, 1])
print(v) # [3 7 1]
print(v.shape) # (3,) — a 3-dimensional vector
Vector Addition
NumPy adds vectors element-by-element, exactly like the math formula.
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
# Addition is element-wise
result = a + b
print(result) # [5 7 9]
Dot Product
The @ operator is the modern way to compute dot products in Python. It maps directly to the mathematical dot product.
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
# Two ways to compute dot product
dot1 = np.dot(a, b) # 32
dot2 = a @ b # 32 (@ operator)
dot3 = sum(a * b) # 32 (manual)
print(dot1) # 32
Cosine Similarity
np.linalg.norm computes the vector magnitude. This pattern is used everywhere in AI for comparing embeddings.
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
# Cosine similarity
cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"{cos_sim:.3f}") # 0.975
# These vectors are very similar in direction!
# (cos_sim close to 1 = similar)
Practice
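Scalar multiplication and magnitude are named in the formal-math section but not shown above; they follow the same pattern. A small sketch with a made-up vector:

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])

# Scalar multiplication is element-wise: every component gets scaled
scaled = 2 * v
print(scaled)  # [2. 4. 4.]

# Magnitude (Euclidean length) via np.linalg.norm
print(np.linalg.norm(v))       # 3.0 (sqrt(1 + 4 + 4))
print(np.linalg.norm(scaled))  # 6.0 (direction unchanged, length doubled)
```

Note that scaling a vector never changes its direction, which is why cosine similarity ignores magnitude entirely.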
Practice Problems
Apply what you learned to real AI/ML scenarios.
In a movie recommendation system, a user is represented as [0.8, 0.2, 0.9] (loves action, dislikes romance, loves sci-fi) and a movie is represented as [0.7, 0.1, 0.95] (action, romance, sci-fi scores).
Compute the dot product of the user preference vector and the movie feature vector. What does the result tell you about whether this user would like this movie?
In a word embedding space, the word "neural" is represented as [0.5, 0.8, 0.3, 0.9] and the word "network" is [0.4, 0.7, 0.2, 0.85].
Compute the cosine similarity between the two word embedding vectors. Are these words semantically similar?
You have a data point represented as the vector [3, -1, 4, 2]. You want to normalize and then scale the data.
If you scale this feature vector by 2, what is the new vector? Does the direction change? Does the magnitude change?
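Once you have worked the problems by hand, the three computations can be checked with a few lines of NumPy (numbers taken from the problems above):

```python
import numpy as np

# Problem 1: user/movie dot product
user = np.array([0.8, 0.2, 0.9])
movie = np.array([0.7, 0.1, 0.95])
print(user @ movie)  # large positive value -> good match

# Problem 2: cosine similarity of two word embeddings
neural = np.array([0.5, 0.8, 0.3, 0.9])
network = np.array([0.4, 0.7, 0.2, 0.85])
cos_sim = neural @ network / (np.linalg.norm(neural) * np.linalg.norm(network))
print(f"{cos_sim:.3f}")  # close to 1 -> semantically similar

# Problem 3: scaling by 2 keeps the direction, doubles the magnitude
x = np.array([3, -1, 4, 2])
print(2 * x)
print(np.linalg.norm(2 * x) / np.linalg.norm(x))  # ratio is 2.0
```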
Summary
Summary Card
Key Formulas
Key Intuitions
- A vector is a list of numbers that describes something — every data point in AI is a vector.
- The dot product measures similarity: positive = similar direction, zero = unrelated, negative = opposite.
- Cosine similarity normalizes the dot product so vector length does not matter — only direction counts.
- In AI, "close" vectors = similar things (words, images, users, etc.).
AI/ML Connections
- Word embeddings (Word2Vec, GPT): every word is a vector; similar words have similar vectors.
- Recommendation systems: users and items are vectors; dot product predicts ratings.
- Image recognition: images become feature vectors; cosine similarity finds similar images.
- Transformers use dot products in attention mechanisms to decide which words to focus on.