
Understanding Context Windows Through Creatures, Not Code

Not long ago, my Japanese Spitz, Strela, barked furiously at a paper bag fluttering in the breeze. It took me a moment to realise she was the culprit. She had dragged the bag out of the recycling bin herself five minutes earlier. Somehow, in her boundless energy and zero remorse, she’d entirely forgotten her role in the saga.

That moment—equal parts chaos and comedy—sparked a rather strange thought. How does Strela’s memory, her ability to “keep things in mind,” compare to that of a large language model? Consider GPT-4 with its massive 128,000-token context window.

This comparison is more than a party trick. It is a useful, even delightful, way to understand how different types of intelligence, natural and artificial, manage information. More importantly, it suggests how you might apply those differences when building intelligent systems or designing your organization.


What Is a Context Window?

Let’s start simple.

  • For GPT-4-128k, the context window is how much information it can “see” and reason about at once.
  • 128,000 tokens corresponds to roughly 300+ pages of text. The model can consider that much content without forgetting what came at the beginning.
  • It can reference, relate, and reason across the entire stretch without having to “relearn” anything midstream.
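That “300+ pages” figure can be sanity-checked with back-of-the-envelope arithmetic. The ratios below (about 0.75 words per token, about 300 words per printed page) are rough rules of thumb rather than exact tokenizer figures, so treat the result as an estimate:

```python
# Rough estimate: how many pages fit in a 128k-token context window?
# Assumes ~0.75 words per token and ~300 words per printed page --
# both common rules of thumb, not exact tokenizer numbers.
TOKENS = 128_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

words = TOKENS * WORDS_PER_TOKEN   # about 96,000 words
pages = words / WORDS_PER_PAGE     # about 320 pages

print(f"~{words:,.0f} words, ~{pages:,.0f} pages")
```

Around 320 pages, which lands comfortably in the “300+ pages” range.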

Instead of thinking in computer terms like hard drives or memory slots, let’s compare this ability to something more familiar. Let’s consider something alive: real-world creatures.


Comparing Context Windows in the Animal Kingdom

Think of a context window as an animal’s mental focus field: how much it can track, hold, and work with at once. Push past that limit, and it starts forgetting, misfiring, or improvising wildly.

Let’s see where GPT-4-128k fits among familiar minds.
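The “focus field” idea can be sketched as a fixed-size sliding window: once new items exceed capacity, the oldest ones fall out, which is the mechanical analogue of Strela forgetting the paper bag. The capacities here are illustrative, not measured values:

```python
from collections import deque

def make_mind(capacity):
    """Model a mind as a fixed-size window: oldest items are evicted
    automatically as new ones arrive (deque with maxlen)."""
    return deque(maxlen=capacity)

strela = make_mind(2)  # can track one or two things at a time
for event in ["dragged bag from bin", "chased squirrel", "heard bag rustle"]:
    strela.append(event)

# The original bag incident has already slid out of the window.
print(list(strela))  # ['chased squirrel', 'heard bag rustle']
```

Swap the capacity for 7 and you get a rough human; make it large enough to hold 128,000 tokens and you get GPT-4-128k, which is the whole point of the comparison below.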

Strela the Japanese Spitz

  • Working memory: Roughly 5–15 seconds for unrewarded stimuli.
  • Focus capacity: Can track one or two things (you, a squirrel, or a chicken sandwich—pick any two).
  • Behaviour: Highly reactive, emotionally tuned, lives entirely in the moment unless a snack is involved.
  • Analogy: A tennis ball launched across the lawn—immediate, joyful, and quickly forgotten in favour of the next sensation.

A Typical Human

  • Working memory: 4–7 chunks of information (Miller’s famous “magical number seven, plus or minus two”; modern estimates lean toward the lower end).
  • Contextual reading: Can hold and actively reason about a few pages of information at a time.
  • Analogy: A lively dinner party conversation—rich with context, but likely to derail when someone brings up taxes or politics.

An Elephant

  • Long-term memory: Outstanding; elephants remember paths, humans, and events from decades ago.
  • Real-time context: Like most creatures, still limited in moment-to-moment working memory.
  • Analogy: A wise elder who remembers your grandfather’s face, but misplaces the banana she just picked up.

An Octopus

  • Intelligence distribution: Most neurons live in the arms.
  • Problem-solving: Excellent at puzzles and adapting on the fly.
  • Analogy: A multitasker in flow—brilliant in the moment, but unlikely to write things down.

GPT-4-128k

  • Context capacity: Tracks 300+ pages of nuanced content in a single conversational thread.
  • Performance: Can summarise, analyse, and cross-reference details from beginning to end—without losing the thread.
  • Analogy: A team of monks with perfect memory editing a 19th-century novel, keeping track of every character arc, footnote, and hidden metaphor in real time.

Why Does This Matter?

Understanding context windows isn’t just a fun thought experiment. It’s key to designing better AI systems, human-AI collaboration, and even organizational structures.

A dog like Strela may forget the bag she dragged out five minutes ago, but she’ll remember your scent, your tone, and the exact drawer where you hide the treats.

GPT-4, on the other hand, won’t feel anything for you, but it will remember what you said ten chapters ago and point out which part of your pitch contradicts itself. That makes it brilliant at continuity, structure, and recall—but utterly uninterested in your feelings about squirrels.


Final Thoughts

So is GPT-4-128k’s context window comparable to a dog’s?

Only if your dog can read 300 pages, critique a screenplay, follow a technical whitepaper, and still remember what you said at the start of your presentation. Strela, for all her charm, excels in intuition, loyalty, and glorious mischief—but she’s not writing essays anytime soon.

The truth is, AI and animals think differently—and that’s the point. Where Strela brings warmth, instinct, and presence, GPT brings consistency, memory, and logic. Together, they represent different forms of intelligence, both essential in a world that demands heart and clarity.

To build AI systems—or organizations—that handle complexity, start by understanding where each type of mind excels.

And never underestimate a creature with no memory but an excellent nose.