Context Window Demo
About This Demo
This demonstration shows how token context windows work in Large Language Models:
- Each message shows its token count
- Messages outside the context window appear faded
- Only the most recent tokens that fit within the context window remain available for processing; older tokens "fade" from memory, just as they do in an LLM's active context
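The fading behavior described above can be sketched as a simple sliding-window filter. This is a minimal illustration, not the demo's actual implementation: the message list, per-message token counts, and the `visible_messages` helper are all hypothetical, and the 100-token budget mirrors the window size shown in the demo.

```python
CONTEXT_WINDOW = 100  # token budget, matching the demo's window size (assumed)

def visible_messages(messages, window=CONTEXT_WINDOW):
    """Return the most recent messages whose combined token count fits the window.

    `messages` is a list of (text, token_count) pairs, oldest first.
    """
    kept = []
    total = 0
    for text, tokens in reversed(messages):  # walk newest to oldest
        if total + tokens > window:
            break  # this message and everything older falls outside the window
        kept.append((text, tokens))
        total += tokens
    return list(reversed(kept)), total

# Hypothetical conversation history with made-up token counts:
history = [
    ("Hello!", 20),
    ("Tell me about tokens.", 40),
    ("What is a context window?", 30),
    ("And how do messages fade?", 25),
]

visible, used = visible_messages(history)
# 25 + 30 + 40 = 95 tokens fit; adding the oldest message (20 more)
# would exceed 100, so it "fades" out of the visible context.
```

Walking newest-to-oldest guarantees the most recent messages are always preserved, which is why the oldest messages are the ones that appear faded in the demo.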
Want to learn more about tokens and how different LLMs tokenize?
Check out the Tokenizer Playground.