Day 30
Second to last day! What would I like to get done before the end?
- Code wise
- Re-run Kato shared
- Run Atanas fidelity/stationarity tests
- Update documentation/writeup with results
- Convert my various musings/thoughts/questions throughout this into draft-y Evergreen Questions, so I can refine & sharpen them later
- Reflections on this process while it's still fresh
- Overall writeup, though I'll let all of this sit for a little before doing this - mostly just want to make sure raw materials are in place while it's fresh
Kato shared is running. Did some refactoring of Kato/Atanas code to avoid the two going out of sync; it's much cleaner and clearer to see what is going on now. Tons of documentation updates to get it to a finalized place (for now).
Thinking about this Evergreen Questions format… here are the original Evergreen Notes principles:
- Evergreen notes should be atomic
- Evergreen notes should be concept-oriented
- Evergreen notes should be densely linked
- Prefer associative ontologies to hierarchical taxonomies
- Write notes for yourself by default, disregarding audience
What am I thinking for Evergreen Questions? Why do I need a separate format beyond Evergreen Notes?
- Original reasoning was that Evergreen Notes require a lot of maintenance - because they are densely linked, any semi-major change requires updating a lot of them. Evergreen Questions, at least a priori, seem like they may need less: questions remain valid even after they are answered; only how refined/sharp they are may change over time. Also loosely inspired by Questions are not just for asking
- I also like the idea of practicing the skill of converting questions into falsifiable / testable hypotheses and actual experiments. Each Evergreen Question is in a good state once I am able to sharply define an associated hypothesis/experiment (ideally just a single one to make sure questions are atomic)
- Probably I should just try it out and see what happens. Let's do that.
From Day 23, here's my rough list of Evergreen Questions from this aliveline:
- Each neuron is a DD-DC
- Canât you test this by seeing what happens if you perturb a single neuron and see if it behaves like a feedback controller?
- Moving faster enables/strengthens learning through proprioception
- Hypothesis: because each neuron is a DD-DC, speed increases quality & quantity of feedback
- Locomotion is implemented as loops in a low-D phase space in C. elegans
- Each brain has its own neural coordinate system to implement the same behavior
- Neural population activity encodes tasks like motor control and even higher-order cognition as manifolds
- As things become more predictable, tangling decreases (manifold geometry becomes simpler)
- hypothesis: As things become more predictable, some analog to Kolmogorov complexity goes down
- What might a useful KC analog definition look like?
- Minimum number of neurons that need to be activated to achieve a particular neural state space trajectory?
- what if we frame this in terms of feedback/control??
- Theories of the brain have shifted from a single-neuron view to global population dynamics due to advancements in measurement technology
- What's the natural extension here? What kind of measurement would enable this?
- The space of neuromodulators is massive, even in a small nervous system like C. elegans
- Ex: Cytokines paper
- Can the Tracy-Widom distribution (or other universality class distributions) describe the transition of neural population activity before/after a rule is learned?
- Behavior is a top-down computational constraint on a neuronal network
- How does DNA bridge the micro (local neuronal rules) to the macro (evolutionarily relevant behavior)?
- hypothesis: perhaps local neuronal rules are written in the language of feedback control laws?
- What is the appropriate measure of "complexity" or "representational capacity" for a brain?
- How does the parallelism of the brain tie into the feedback control / nonlinear dynamical system view?
- This comment: In CN, the trend is to take algorithms from computer science and statistics and map them onto biology. What's far rarer is extracting new ML algorithms from the biology itself.
- What if intelligence/the brain were simple and they seem complex because the environment is complex? simple in the sense of having a relatively small set of rules
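The "tangling decreases as things become more predictable" question above is one of the few here that already has a concrete metric behind it: trajectory tangling (in the sense used in the motor-cortex literature), which is high wherever nearby neural states have very different derivatives. A minimal sketch, assuming a neural state trajectory `X` of shape `(T, N)`; the function name, the finite-difference derivative, and the epsilon choice are my assumptions, not anything from this project's code:

```python
import numpy as np

def tangling(X, dt=1.0, alpha=0.1):
    """Per-timepoint trajectory tangling Q(t).

    X : (T, N) array, one neural state per row.
    High Q(t) means some other visited state is close to X[t]
    but moving in a very different direction - the flow is hard
    to predict from position alone.
    """
    V = np.gradient(X, dt, axis=0)                  # finite-difference derivatives
    eps = alpha * np.mean(np.sum(X ** 2, axis=1))   # regularizer (my choice of scale)
    # Pairwise squared distances between derivatives and between states
    dV = np.sum((V[:, None, :] - V[None, :, :]) ** 2, axis=-1)
    dX = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.max(dV / (dX + eps), axis=1)

# A clean circular orbit (very predictable flow) should be low-tangling everywhere:
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
q = tangling(circle, dt=t[1] - t[0])
```

If the hypothesis holds, running this on trajectories before vs after a regime becomes predictable should show the max (or distribution) of `q` dropping; it would also be a natural starting point for the "KC analog" question, since both are trying to quantify how compressible the flow field is.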
Also asked codex to pull out Evergreen Notes/Questions from the ideas in this folder to see if it gives me interesting ideas. Results:
Process / Meta-Research
- Constraints as a motivation engine - why "absurd objectives" reliably create momentum.
- Problem-creator mindset reduces skill bottlenecks - when reframing beats solving.
- Fast iteration beats perfect framing in short timelines - when to pick a path vs keep exploring.
- Failure can be designed to be interesting - criteria for "good failure."
- Research pace is gated by social bandwidth - author outreach as default step.
- LLM co-working needs explicit loop design - prompts, checkpoints, and autonomy boundaries.
- Asking for help earlier changes research speed - how much is "enough" independence?
- Negative results are still structure - how to turn "didn't work" into a useful note.

Methods / Measurement
- Stationarity assumptions quietly dominate neural dynamics analyses - when they break.
- "Fidelity" vs "stationarity" as separate validation axes - why both matter.
- Freely moving vs immobilized data may be different systems - when manifolds disappear.
- Dataset conditions are first-class confounds - stimulus regimes as regime switches.
- Small neuron subsets can recover global structure - what that implies and what it doesn't.
- When a manifold is real vs epiphenomenal - criteria for "computational relevance."

Neuro / Theory
- Behavior as a top-down constraint on network dynamics - not hardwiring, but shaping.
- Degeneracy vs universality - stable behavior with unstable micro-implementation.
- Context-gating as a memory strategy in tiny brains - minimal memory without overwrite.
- Manifolds as reused scaffolds - reuse without catastrophic forgetting.
- Learning as state-space geometry change - rotations/contractions as signatures.
- Local rules → global loops - how local feedback might yield conserved dynamics.
- Genetic "reachability" instead of explicit encoding - behavior as a basin, not a script.
- Control-theoretic language for neurons - feedback laws as a unifying micro-vocabulary.

AI / ML / Cross-Domain
- Emergent low-D structure in both brains and models - what's the shared pressure?
- Macro behavior selection vs micro implementation freedom - analogy to model pruning or lotteries.
- When AI models should be trained near criticality - why structure appears at phase edges.
- "Simple rules, complex environment" as a brain hypothesis - where complexity really lives.