
Unpredictable upsides of cognitive offloading

Inspired by this essay, collecting examples of cognitive offloading to LLMs that produced (ideally unpredictable) upsides:

  • Expanding reachable research space: “genAI has expanded the scope of what I seriously consider worth pursuing. It has also brought back lines of inquiry I had set aside for years because they did not seem tractable (or useful) enough relative to the effort required… So I do think this may eventually change the nature of the work we do. I’m less convinced it has already done so. For now, it still looks mostly like task offloading, even if that offloading is already expanding the surrounding research space.”

Some ideas sparked by chatting with Claude about how I might apply this essay’s concept to deeply understanding fundamentals right now.

Deeply understanding fundamentals can be broken down into:

  • Offloadable
    • Encountering the formal structure (definitions, proofs, derivations)
  • Unclear
    • Building intuition for why the structure is the way it is
    • Connecting it to other things you know
  • Not offloadable
    • Finding the weak joints — where assumptions are fragile or where the framework breaks
    • Generating your own questions from inside the framework

Takeaways:

  • Aggressively offload formal structure aspects – I could def do this more. Stop trying to solve integrals or do algebra by hand.
  • Building intuition – worth experimenting, this could be dope
    • I use Anki to try to do this over time… probably not the best medium
    • Experiments
      • “Intuition Builder” Claude project: enter a concept/formalism, and the project instructions tell it to generate examples, counterexamples, alternative formulations (“explain this differently, now differently again, now show me where this breaks”), animations, pressure tests, boundary conditions, etc. This could be EXTREMELY useful (see the sketch after this list).
  • Connecting to other things – worth experimenting
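
A minimal sketch of what the “Intuition Builder” could look like driven through the Anthropic Python SDK rather than the web UI. The system prompt wording and the model id are placeholders of mine, not a tested configuration:

```python
import anthropic

# Hypothetical first-draft project instructions; the system prompt does the work.
INTUITION_BUILDER = """You are an intuition builder. Given a concept or
formalism, respond with: (1) concrete examples, (2) counterexamples,
(3) an alternative formulation, (4) the same idea explained two different
ways, (5) where it breaks (boundary conditions, fragile assumptions), and
(6) one pressure-test question for me to answer."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def build_intuition(concept: str) -> str:
    """Run one round of the intuition loop on a concept or formalism."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=2000,
        system=INTUITION_BUILDER,
        messages=[{"role": "user", "content": concept}],
    )
    return message.content[0].text

print(build_intuition("chaotic itinerancy in high-dimensional dynamics"))
```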

Structured experiments I could try (generated by Claude, ha):

1. Offload literature synthesis, protect question generation. You already worry about LLMs contaminating your original thinking. So: deliberately split your workflow. Use Claude aggressively for summarizing papers, extracting formal results, mapping citation networks — all the “known structure” work. But keep your probe generation process entirely LLM-free for a fixed period (say 3 months). Then compare: did the freed-up time actually produce new probes you wouldn’t have generated otherwise? This directly tests the essay’s claim that offloading creates room for emergent practices. Your probe log gives you a natural measurement instrument.

2. Let the Anki pipeline get more aggressive. You’ve been carefully designing your spaced repetition system with heavy human oversight (interrogating understanding before card generation). Try a branch where you let Claude generate cards with less gatekeeping on a contained topic — maybe a textbook chapter you’re less invested in. Track whether the time savings let you engage more deeply with material you care about, or whether comprehension on the offloaded topic actually degrades in ways that matter downstream. The essay predicts you’ll discover the answer isn’t obvious in advance.
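
One concrete way to run the low-gatekeeping branch: have Claude emit (front, back) pairs for the contained chapter and package them directly with the genanki library, skipping the interrogation step. The deck/model ids, names, and sample cards below are arbitrary placeholders:

```python
import genanki

# Unreviewed card text as it might arrive from a Claude pass over the chapter.
# In the real experiment these pairs come straight from the model, ungated.
cards = [
    ("What does the Jacobian's largest eigenvalue tell you near a fixed point?",
     "Its sign determines local stability along the dominant direction."),
    ("Define a limit cycle.",
     "An isolated closed trajectory in phase space."),
]

# Arbitrary but fixed ids so re-imports update cards rather than duplicate them.
model = genanki.Model(
    1607392319,
    "LLM Basic",
    fields=[{"name": "Front"}, {"name": "Back"}],
    templates=[{
        "name": "Card 1",
        "qfmt": "{{Front}}",
        "afmt": '{{FrontSide}}<hr id="answer">{{Back}}',
    }],
)
deck = genanki.Deck(2059400110, "Offloading experiment: chapter X")

for front, back in cards:
    deck.add_note(genanki.Note(model=model, fields=[front, back]))

genanki.Package(deck).write_to_file("offloading_experiment.apkg")
```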

3. Run parallel research threads with different offloading levels. Pick two comparably difficult open questions in your research (e.g., one in predictive coding convergence, one in chaotic itinerancy). On one, use LLMs maximally — literature search, mathematical exploration, even hypothesis generation. On the other, work the old way. After some fixed period, evaluate not just progress but the character of the ideas you generated. The essay’s prediction is that the high-offloading thread might produce something qualitatively different, not just faster versions of what you’d have done anyway. The interesting finding would be if you can’t even compare them on the same axis.

4. Use AI to explore your own knowledge graph structure. Feed your probes, attractors, and traces into an LLM and ask it to find structural gaps or unexpected connections. This is a practice that literally couldn’t exist before LLMs — it’s the kind of emergent use the essay claims we can’t predict in advance. The experiment is whether this produces genuine new research directions or just plausible-sounding noise. Your independent researcher advantage (no field allegiance) means you’re unusually well-positioned to follow unexpected connections if they appear.
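
A sketch of how this could work in practice: concatenate the note files and ask Claude for gaps and connections, with the prompt explicitly asking it to mark confidence so the “plausible-sounding noise” is easier to discard. The directory layout, prompt, and model id are all assumptions:

```python
from pathlib import Path

import anthropic

NOTES_DIR = Path("notes")  # hypothetical layout: one markdown file per probe/trace

# Assumes the whole corpus fits in one context window; a real version would chunk.
corpus = "\n\n".join(
    f"## {path.name}\n{path.read_text()}"
    for path in sorted(NOTES_DIR.glob("*.md"))
)

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model id
    max_tokens=2000,
    system=(
        "These are one researcher's probes, attractors, and traces. "
        "Identify structural gaps and unexpected connections between notes. "
        "Mark each suggestion with your confidence so weak ones are easy to discard."
    ),
    messages=[{"role": "user", "content": corpus}],
)
print(message.content[0].text)
```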

5. Offload mathematical verification to see if it changes what you attempt. When working through proofs or derivations in computational neuroscience papers, use an LLM or proof assistant to check steps you’d normally verify by hand. The essay’s hypothesis is that this won’t just save time — it might change which problems you’re willing to engage with, because the cost of exploring a speculative mathematical direction drops. Track whether your problem selection actually shifts.
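
For individual derivation steps (as opposed to full proofs), symbolic checking with SymPy is a lighter-weight version of the same offload. A minimal sketch, with a made-up example step:

```python
import sympy as sp

x = sp.symbols("x")

# A step you might otherwise grind through by hand:
# claim that d/dx [x * exp(-x**2)] = (1 - 2*x**2) * exp(-x**2).
claimed = (1 - 2 * x**2) * sp.exp(-(x**2))
derived = sp.diff(x * sp.exp(-(x**2)), x)

# simplify(difference) == 0 is a robust equality test for symbolic expressions.
assert sp.simplify(derived - claimed) == 0
print("step checks out")
```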

6. Build a tutoring tool for your child and observe what emerges. You’ve been thinking about privacy-preserving tutoring. Actually build a prototype, but treat it as an experiment in the essay’s sense: you won’t know what the valuable pedagogical patterns are until you’ve tried. The essay would predict that the interesting insights won’t be about efficiency (“my kid learned multiplication faster”) but about entirely new modes of interaction you didn’t design for.


My previous thinking on this topic, framed much more defensively around LLM use after my experience with the Aliveline project:

For topics you enjoy thinking about and areas where you want to develop your own views, it’s worth not letting LLMs replace your thinking. It’s too easy for a conversation where an LLM is helping you work through settled material to drift into using it for idea generation or question framing.

Strategies to avoid this (ironically generated by an LLM):

  1. Think first, read to challenge. Form your own views on a question before reading literature. Write down predictions so you can’t unconsciously revise them. Read to stress-test, not to absorb.

  2. Separate tool-knowledge from idea-knowledge. Learn math, techniques, and experimental results efficiently from literature. Be cautious with interpretations, framings, and research programs — that’s where you lose your independence.

  3. Find adversarial collaborators. A small number of people who will genuinely break your ideas, not survey the field for you or validate your thinking.

  4. LLMs as librarians only. Fact retrieval and technical lookup, not idea generation or thinking partnership.

  5. Protect long unstructured thinking time. This is your actual edge. Resist the industry-trained instinct to make things feel productive. The unit of progress is question quality, not output.

  6. Moratorium periods. For your core questions, spend extended time (month+) thinking without reading, building only your own ideas. Then engage literature with your own framework as the anchor.

  7. Accept reinvention as a feature. Independently arriving at known results gives deeper understanding. Arriving somewhere different is where the value is.

  8. Weekly “great thoughts” day. Dedicated time for big questions, meta-questions about what makes good science, and problem-creation work specifically.


Last updated 2026-03-17