Large language models and earlier embedding models map words to dense vectors: arrays of hundreds of numbers. Each dimension encodes a latent feature discovered during training, and vectors that point in a similar direction belong to words used in similar contexts.
The training objectives behind GloVe and word2vec nudge words with similar co-occurrence statistics toward each other. Linear algebra then lets us compose known word vectors, so king - man + woman lands near queen. WordLab normalises the vectors, sums the requested directions, and searches for the closest neighbours by cosine similarity or Euclidean distance.
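A minimal sketch of that arithmetic, assuming the embeddings are loaded into a plain word-to-array dictionary from a pretrained GloVe or word2vec file (the function names are illustrative, not WordLab's API):

```python
import numpy as np

def normalise(v):
    """Scale a vector to unit length so cosine similarity reduces to a dot product."""
    return v / np.linalg.norm(v)

def analogy(vectors, positive, negative, top_k=5):
    """Sum the requested directions and rank neighbours by cosine similarity.

    vectors: dict mapping word -> np.ndarray, all with the same dimensionality.
    positive / negative: words to add and subtract, e.g. ["king", "woman"] and ["man"].
    """
    query = sum(normalise(vectors[w]) for w in positive) \
          - sum(normalise(vectors[w]) for w in negative)
    query = normalise(query)

    scores = []
    for word, vec in vectors.items():
        if word in positive or word in negative:
            continue  # skip the input words themselves
        scores.append((word, float(np.dot(query, normalise(vec)))))
    return sorted(scores, key=lambda kv: kv[1], reverse=True)[:top_k]

# Example: "king - man + woman" should land near "queen"
# (vectors would come from a pretrained GloVe or word2vec file).
# analogy(vectors, positive=["king", "woman"], negative=["man"])
```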
The orange polyline walks through up to three principal axes that explain the change from the anchor word to the final suggestion.
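One plausible way to compute those steps (a sketch only; WordLab's actual projection may differ): fit PCA on the surrounding neighbourhood, then split the anchor-to-suggestion difference into components along the top axes.

```python
import numpy as np
from sklearn.decomposition import PCA

def axis_walk(anchor_vec, target_vec, neighbour_vecs, n_axes=3):
    """Break the anchor -> target change into steps along the top principal axes.

    Returns intermediate points: the anchor, then the anchor shifted along
    axis 1, axis 2, and axis 3 in turn. Assumes at least n_axes neighbour
    vectors so PCA can fit that many components.
    """
    pca = PCA(n_components=n_axes).fit(np.vstack(neighbour_vecs))
    delta = target_vec - anchor_vec
    points = [anchor_vec.copy()]
    for axis in pca.components_:           # each principal axis is a unit vector
        step = np.dot(delta, axis) * axis  # component of the change along this axis
        points.append(points[-1] + step)
    return points
```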
The scatter plot compares every candidate neighbour: a higher cosine similarity means a smaller angle between the vectors, while a lower Euclidean distance means the vectors nearly coincide.
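For unit-length vectors the two metrics carry the same information: the squared Euclidean distance equals 2 * (1 - cosine similarity). A quick sanity check (a sketch, independent of WordLab):

```python
import numpy as np

def compare_metrics(a, b):
    """For unit vectors, ||a - b||^2 == 2 * (1 - cos(a, b))."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    cosine = float(np.dot(a, b))
    euclidean = float(np.linalg.norm(a - b))
    assert np.isclose(euclidean ** 2, 2 * (1 - cosine))
    return cosine, euclidean
```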
Curious about an individual word? Inspect it to see which dimensions dominate and how those dimensions influence analogies.
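A dominant-dimension report can be as simple as sorting the coordinates by magnitude; this helper is hypothetical, not WordLab's inspector:

```python
import numpy as np

def dominant_dimensions(vec, top_k=5):
    """Return the indices and values of the dimensions with the largest magnitude."""
    idx = np.argsort(-np.abs(vec))[:top_k]
    return [(int(i), float(vec[i])) for i in idx]
```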
The original papers—GloVe and word2vec—explain how co-occurrence statistics turn into vectors. WordLab embraces those classics so you can reason about analogies with transparency.