What are word vectors?

Large language models, and earlier research projects before them, map words to dense vectors: arrays of hundreds of numbers. Each dimension encodes a latent feature discovered during training. Vectors that point in a similar direction belong to words that appear in similar contexts.
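As a minimal sketch of what "similar direction" means, here is cosine similarity on toy vectors (the numbers and the 4-dimensional size are made up for illustration; real embeddings have hundreds of dimensions):

```python
import numpy as np

# Toy 4-dimensional "word vectors" (purely illustrative values).
cat = np.array([0.8, 0.1, 0.6, 0.2])
dog = np.array([0.7, 0.2, 0.5, 0.3])
car = np.array([0.1, 0.9, 0.0, 0.7])

def cosine(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(cat, dog))  # high: words used in similar contexts
print(cosine(cat, car))  # lower: different contexts
```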

Why do analogies work?

The training objectives of GloVe and word2vec nudge words with similar co-occurrence statistics toward each other. Because the resulting space is roughly linear, we can compose known words with vector arithmetic, so king - man + woman lands near queen. WordLab normalises the vectors, sums the requested directions, and searches for the closest neighbours using cosine similarity or Euclidean distance.
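A sketch of that normalise-sum-search pipeline, assuming an embeddings dict that maps each word to a numpy array (the dict and the helper are hypothetical, not WordLab's actual code):

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def analogy(embeddings, positive, negative, top_k=5):
    # Normalise each requested vector, then sum the directions:
    # positive=["king", "woman"], negative=["man"] encodes king - man + woman.
    query = sum(unit(embeddings[w]) for w in positive)
    query -= sum(unit(embeddings[w]) for w in negative)
    query = unit(query)

    # Rank every other word by cosine similarity to the composed query.
    scores = {
        word: float(unit(vec) @ query)
        for word, vec in embeddings.items()
        if word not in positive and word not in negative
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Usage, with a hypothetical embeddings dict:
# analogy(embeddings, positive=["king", "woman"], negative=["man"])
# -> ["queen", ...]
```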

What do the charts show?

3D trajectory

The orange polyline traces the path from the anchor word to the final suggestion, projected onto up to three principal axes that best explain the change.
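One way to build such a projection is principal component analysis over the step vectors; a minimal numpy sketch, assuming the steps are stacked into a matrix (an illustration, not WordLab's actual implementation):

```python
import numpy as np

def trajectory_3d(points):
    # points: (n_steps, n_dims) array, e.g. anchor, intermediate sums, result.
    centered = points - points.mean(axis=0)
    # SVD yields the principal axes; keep the top three.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T  # (n_steps, 3) coordinates for the polyline

# Example: four 300-dimensional steps collapse to four 3D points.
steps = np.random.rand(4, 300)
print(trajectory_3d(steps).shape)  # (4, 3)
```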

Cosine vs distance

The scatter plot compares every candidate neighbour. A higher cosine similarity means a smaller angle between the vectors, while a lower Euclidean distance means the vectors nearly coincide.
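The two metrics can be computed side by side for each candidate; a sketch of the raw data behind such a scatter plot (the candidates dict is hypothetical):

```python
import numpy as np

def compare_metrics(query, candidates):
    # candidates: dict of word -> vector.
    # Returns one (word, cosine, euclidean) row per candidate.
    q = query / np.linalg.norm(query)
    rows = []
    for word, vec in candidates.items():
        cos = float(q @ (vec / np.linalg.norm(vec)))
        dist = float(np.linalg.norm(query - vec))
        rows.append((word, cos, dist))
    return rows
```

Note that when both vectors are unit length, the two measures are linked exactly by d^2 = 2 - 2*cos, so any spread in the scatter reflects differences in vector magnitude.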

Vector inspector

Curious about an individual word? Inspect it to see which dimensions dominate and how those dimensions influence analogies.
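A minimal sketch of how dominant dimensions might be surfaced, ranking components by absolute value (the helper name is hypothetical):

```python
import numpy as np

def dominant_dimensions(vec, top_k=10):
    # Indices and values of the largest-magnitude components.
    idx = np.argsort(np.abs(vec))[::-1][:top_k]
    return [(int(i), float(vec[i])) for i in idx]

# Example with a random 300-dimensional vector:
print(dominant_dimensions(np.random.randn(300), top_k=5))
```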

Playful research ideas

Want to dive deeper?

The original papers on GloVe and word2vec explain how co-occurrence statistics turn into vectors. WordLab embraces those classics so you can reason about analogies with transparency.