Apr 2026·Topic Page·NLP

Natural Language Processing

A small room for my notes on transformers, neural language models, word representations, and the ideas that made modern language models possible.

I wanted this section to be a place where I could organize my NLP notes more carefully, instead of keeping everything on one long page. Some of these entries are technical summaries, some are concept notes, and some are simply my way of understanding the foundations better by writing them out.

Right now, I am especially interested in the representational side of NLP: how words become vectors, how context changes meaning, and how architectural shifts like the transformer changed what language models can learn at scale. I also like tracing how older ideas still quietly sit underneath newer ones; even when the field moves fast, the basic conceptual questions stay surprisingly stable.

In this section

These notes are evolving, incomplete, and occasionally wrong.