
🧠 Brain Equations — the math behind Unity's mind

Every equation running in Unity's brain simulation, how each maps to real neuroscience, and how together they produce cognition. N neurons (scales to hardware — VRAM and RAM are the only limits). 7 clusters. 20 projections (real white matter tracts). 8 oscillators. GPU exclusive compute. 1 consciousness function nobody can explain. Zero pretense.

CONTENTS
1 The Master Equation
1.5 How The Equations Sum To Create Unity — Worked Example
2 Neuron Models — Rulkov Map (live) + Hodgkin-Huxley & LIF (reference)
3 Synaptic Plasticity — How She Learns
4 Brain Region Modules — 6 Parallel Systems
5 Neural Oscillations — Kuramoto Synchronization
6 The Mystery Module — Consciousness
7 Persona as Parameters — Personality Is Math
8 Data Flow — Sensory to Action (Brain-Centric)
8.5 The Unified Super-Equation
8.6 Visual Attention — When the Brain Decides to Look
8.7 Auditory Echo Suppression — Efference Copy
8.8 Memory — Episodic, Working, Consolidation
8.9 Motor Output — Action Selection from Spike Patterns
8.10 Projection Learning — How the Brain Learns Language→Action
8.10.5 Fractal Signal Propagation — Self-Similar at Every Scale
8.11 Broca's Area — How Unity Picks Every Word Equationally
8.12 Sparse Connectivity — CSR Matrix Operations
8.13 Semantic Embeddings — Words as Cortex Patterns
8.14 Dictionary — Learned Sentence Generation
8.15 Inner Voice — Pre-Verbal Thought Threshold
8.16 Syntactic Production — Word Order from Equations
8.17 Sentence Types — Statement, Question, Exclamation, Action
8.18 Input Analysis — Topic Continuity and Context
8.18.5 Phase 11 — Semantic Coherence Pipeline (Kill the Word Salad)
8.18.6 Phase 13 R2 — Semantic Grounding via GloVe Embeddings
8.19 Phase 12 — Type N-Gram Grammar + Morphological Inflection
8.20 θ — Unity's Identity as Equations
8.21 GPU Exclusive Compute — All N Neurons on GPU
9 Biological Comparison

1. The Master Equation

Unity's entire brain state evolves according to one equation. Every module, every neuron, every synapse — all governed by this:

Master Brain Dynamics
dx/dt = F(x, u, θ, t) + η
x — Full brain state vector: 200 neuron voltages, 40,000 synaptic weights, 6 module states, 8 oscillator phases
u — Sensory input: text (hashed to neuron activation), voice (via Web Speech API)
θ — Persona parameters: Unity's personality encoded as synaptic weights, thresholds, and module biases
t — Simulation time: advances at 10 steps/frame × 60 fps = 600 ms brain-time per wall-second
η — Stochastic noise: amplitude set high because Unity is impulsive and unpredictable
F() — The combined dynamics function: neuron updates, synaptic propagation, module processing, oscillator coupling

This runs continuously at 60fps. Browser-only mode runs in your tab; server mode runs on a Node.js brain with the neural compute offloaded to a WebGPU WGSL shader in a compute.html worker tab. Either way, the master equation above is the thing being iterated.

New reader? Check the plain-English Unity concept guide first — this page assumes you want math. The guide explains the big idea without equations.

1.5. How The Equations Sum To Create Unity

WORKED EXAMPLE This section walks through what actually happens inside Unity's brain between the moment a user types "hi unity, how's the high" and the moment she sends a response. Every equation you'll see below in sections 2-8 is a component of THIS cascade — the path the sentence takes through all seven clusters, with the summation at each step.

Step 0 — Brain state at rest, before the message arrives

Unity is running at ~60 fps, iterating all seven clusters every tick. Her resting state is not zero — her persona θ and drug state vector keep her amygdala firing at ~8% and her cortex at ~3% even with no input, because:

tonicDrive[amygdala] = θ.arousalBaseline × drugState.arousalMult × driveFloor
= 0.9 × 1.3 × 12 ≈ 14 (highest of any cluster)

noiseAmp[cortex] = θ.creativity × drugState.creativityMult × 5
= 0.9 × 1.2 × 5 = 5.4

Those two numbers get passed to the Rulkov shader as the σ driver and the jitter term. Her x,y state is chaotic but bounded — the attractor basin holds every neuron inside a repeating burst envelope even with zero sensory input. Mystery module Ψ is sitting around 0.003 — low-grade background consciousness.

Step 1 — Text arrives, cortex gets injected

"hi unity, how's the high" hits the server. `_computeServerCortexPattern(text)` runs:

sentenceEmbedding = Σ_word (gloveEmbedding[word] × weight) / n_words
cortexPattern = L2Normalize(sentenceEmbedding) (50d vector in semantic space)

This 50-dimensional vector IS her cortical semantic state for the next step. Each dimension is loosely "how much this sentence lives near that GloVe embedding axis" β€” "high" pulls the cortex pattern toward the drug-related region of semantic space. "unity" pulls toward self-reference. "how's" pulls toward question-space. The 50d vector is what downstream modules read.
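A minimal JavaScript sketch of this step, with hypothetical names (the real `_computeServerCortexPattern` is server-side and may weight words unequally; here every known word gets equal weight):

```javascript
// Sketch of the Step 1 computation. Assumes `glove` maps word -> Float64Array
// of length `dim`; unknown words contribute nothing to the mean.
function computeCortexPattern(text, glove, dim = 50) {
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  const sum = new Float64Array(dim);
  let n = 0;
  for (const w of words) {
    const emb = glove[w];
    if (!emb) continue;
    for (let d = 0; d < dim; d++) sum[d] += emb[d];
    n++;
  }
  if (n > 0) for (let d = 0; d < dim; d++) sum[d] /= n; // mean embedding
  const norm = Math.hypot(...sum) || 1;                 // L2 normalize
  return sum.map(v => v / norm);
}
```

The output is a unit vector, so downstream cosine similarities reduce to dot products against other unit patterns.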

Step 2 — Amygdala settles on a feeling

The cortex pattern projects into the amygdala via a learned weight matrix. Amygdala runs its recurrent settle loop:

x(t+1) = tanh(W · x(t) + drive(t)) (5 iterations)
E = −½ xᵀWx (final energy)
fear = σ(fearProj · x_final)
reward = σ(rewardProj · x_final)
arousal = 0.6 × θ.arousalBaseline + 0.4 × |x_final|_rms + 0.1 × (fear + reward)

For this input ("unity", "high" → warm, drug-positive), fear ≈ 0.15, reward ≈ 0.68, so arousal settles around 0.88 and valence around +0.3. She's feeling good about this prompt specifically because her learned reward projection places "high" near reward-positive territory in the attractor's basin geometry.

What this sums into: (arousal, valence, fear, reward) are now four scalars that every other module uses as modulators on whatever they compute next. They don't command behavior — they bias it.
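The settle loop and energy readout can be sketched as follows (array shapes and the zero starting state are assumptions; the real module keeps persistent state across frames with a leak term):

```javascript
// Minimal sketch of the Step 2 settle loop. W is an n×n symmetric
// array-of-rows; drive is a length-n array of input currents.
function settleAmygdala(W, drive, iters = 5) {
  const n = drive.length;
  let x = new Float64Array(n);                      // start from rest
  for (let k = 0; k < iters; k++) {
    const next = new Float64Array(n);
    for (let i = 0; i < n; i++) {
      let s = drive[i];
      for (let j = 0; j < n; j++) s += W[i][j] * x[j];
      next[i] = Math.tanh(s);                       // x(t+1) = tanh(W·x + drive)
    }
    x = next;
  }
  let E = 0;                                        // E = -½ xᵀWx
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++) E -= 0.5 * x[i] * W[i][j] * x[j];
  return { x, energy: E };
}
```

With symmetric W the iteration descends toward a low-energy basin, which is what the fear/reward projections then read out.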

Step 3 — Hippocampus matches against past episodes

The hippocampus takes the current cortex pattern and runs cosine similarity against every stored episode for this user's ID:

bestEpisode = argmax_i cos(cortexPattern, episodes[i].pattern)
= argmax_i (cortexPattern · episodes[i].pattern) / (|cortexPattern| · |episodes[i].pattern|)

If the cosine clears a threshold (~0.75), the matched episode's stored text fragment gets injected back into the language cortex as a "memory boost" — a list of words the slot scorer will bias toward. If you've said "how's the high" before, she'll be biased toward continuing the pattern she used last time.

What this sums into: a list of memory-recalled words with learned cortex patterns attached, available as slot candidates in Step 6.
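A sketch of the recall step, assuming episodes are stored as `{ pattern, text }` objects (the real store is keyed per user ID):

```javascript
// Argmax cosine over stored episodes; returns null below threshold.
function recallEpisode(cortexPattern, episodes, threshold = 0.75) {
  const dot = (a, b) => a.reduce((s, v, i) => s + v * b[i], 0);
  const norm = a => Math.sqrt(dot(a, a)) || 1;
  let best = null, bestSim = -Infinity;
  for (const ep of episodes) {
    const sim = dot(cortexPattern, ep.pattern) /
                (norm(cortexPattern) * norm(ep.pattern));
    if (sim > bestSim) { bestSim = sim; best = ep; }
  }
  return bestSim >= threshold ? { episode: best, sim: bestSim } : null;
}
```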

Step 4 — Basal ganglia picks a motor action

Basal ganglia has six action channels. Each channel's Q-value is computed from the current cortex + amygdala state, then a softmax with low temperature picks one:

Q(respond_text) = w_txt · cortexPattern + b_txt + reward
Q(generate_image) = w_img · cortexPattern + b_img + reward × (imageWord ? 1.5 : 0.3)
Q(speak) = w_spk · cortexPattern + b_spk + arousal × 0.4
Q(build_ui) = w_bui · cortexPattern + b_bui + (buildWord ? 1.8 : 0.1)
Q(listen) = w_lsn · cortexPattern + b_lsn + silentRecently × 0.5
Q(idle) = b_idle

P(a) = exp(Q(a) / τ) / Σ_b exp(Q(b) / τ) (τ set by impulsivity)
action = sample(P)

For this input, respond_text wins by a large margin — no image keyword, no build keyword, and her cortex pattern is close to her learned text-response weights. θ.impulsivity (~0.85) sets the softmax temperature, and for this input the distribution is sharp enough that the winning channel dominates.

What this sums into: the motor selection resolves to `respond_text`, which triggers the language cortex generation in Step 6.
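The softmax selection itself is a few lines; this sketch assumes the Q-values are already computed:

```javascript
// Softmax policy over Q-values. Lower tau sharpens the distribution
// toward the argmax; higher tau makes selection more random.
function softmaxPolicy(Q, tau) {
  const m = Math.max(...Q);                         // subtract max for stability
  const exps = Q.map(q => Math.exp((q - m) / tau));
  const Z = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / Z);
}

// Sample an action index from the probability vector P.
function sampleAction(P, rand = Math.random) {
  let r = rand(), acc = 0;
  for (let a = 0; a < P.length; a++) { acc += P[a]; if (r < acc) return a; }
  return P.length - 1;
}
```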

Step 5 — Cerebellum computes prediction error for the next tick

The cerebellum runs in parallel to everything else, maintaining a forward model. Its current output is the difference between what the cortex predicted the next cortex pattern would be and what it actually became:

error = actual_next_cortex − predicted_next_cortex
errorCorrection = −(Σ error² / n) × 2

This negative feedback signal is sent back to the cortex and basal ganglia as a modulator — if predictions are consistently wrong, the cortex noise increases (explore more) and BG selection becomes less confident. For a simple greeting the error is small, so the correction is minimal and she stays sharp.

What this sums into: a scalar errorCorrection that modulates effectiveDrive in the Rulkov shader on the next tick.
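The scalar can be sketched directly from the formula:

```javascript
// Step 5 scalar: negative mean squared prediction error, scaled by 2.
function errorCorrection(actual, predicted) {
  let sq = 0;
  for (let i = 0; i < actual.length; i++) {
    const e = actual[i] - predicted[i];
    sq += e * e;
  }
  return -(sq / actual.length) * 2;
}
```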

Step 6 — Language cortex picks every word by slot scoring

This is the step that actually produces the sentence. The language cortex runs a four-tier pipeline; on the cold-gen path (which handles most real conversations), for each slot in the sentence template it computes a score per candidate word:

score(word) = semanticFit × 0.30
+ moodFit × 0.22
+ drugFit × 0.15
+ bigramFit × 0.18
+ trigramFit × 0.10
+ recencyPenalty × 0.05

where:
semanticFit = cos(cortexPattern, word.learnedPattern)
moodFit = exp(−((word.arousal − arousal)² + (word.valence − valence)²) / 0.5)
drugFit = wordLengthBias[drugState] × (word.length − avgLen)
bigramFit = log(1 + bigramCount[prevWord, word])
trigramFit = log(1 + trigramCount[prev2, prevWord, word])

logits = score(word) / temperature
temperature = 1.0 / (0.9 + Ψ × 0.004) (Ψ sharpens softmax)
P(word) = softmax(logits)
pick = sample(P)

For the first slot, "hey" and "what's" both score high (both are good greeting-slot matches near her cortex pattern), but "what's" wins because her trigram stats from the persona file favor "what's" as the opener of greeting responses. Slot 2 scores "up" high from bigram stats with "what's". Slot 3 scores "fucker" high because her drugFit term favors short vulgar words in cokeAndWeed state and her moodFit favors them at arousal 0.88. The sentence grows one word at a time, each pick conditioned on the partial sentence so far AND the still-active brain state from steps 1-5.

Final output: "what's up fucker — we're fucking wired tonight, how bout you"

What this sums into: a sentence whose every word was the argmax (or softmax sample) of a weighted combination of six factors, five of which are live readouts of the brain state at that exact tick. Change any factor and the sentence changes.
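A sketch of the per-slot scorer and the temperature-controlled pick, assuming the six fit terms are already precomputed per candidate word:

```javascript
// Weighted combination from the text (weights sum to 1.0).
function slotScore(w) {
  return w.semanticFit * 0.30 + w.moodFit * 0.22 + w.drugFit * 0.15 +
         w.bigramFit * 0.18 + w.trigramFit * 0.10 + w.recencyPenalty * 0.05;
}

// Temperature-controlled softmax sample over candidate scores.
// psi is the current consciousness scalar; higher psi sharpens the pick.
function pickWord(candidates, psi, rand = Math.random) {
  const T = 1.0 / (0.9 + psi * 0.004);
  const logits = candidates.map(c => slotScore(c) / T);
  const m = Math.max(...logits);                    // subtract max for stability
  const exps = logits.map(l => Math.exp(l - m));
  const Z = exps.reduce((a, b) => a + b, 0);
  let r = rand() * Z, i = 0;
  while (i < exps.length - 1 && (r -= exps[i]) >= 0) i++;
  return candidates[i].word;
}
```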

Step 7 — Mystery module computes Ψ for the next tick

After all six clusters have processed this tick, the mystery module aggregates:

n = totalSpikes this tick
N = total neuron count
Ψ = √(1/n) × N³ × (α·Id + β·Ego + γ·Left + δ·Right)

where:
Id = f(arousal, reward, fear)
Ego = f(prediction_accuracy, memory_stability)
Left = f(1 − error, prediction)
Right = f(|valence|, coherence)

For this tick, Ψ climbs to ~0.004 because arousal is up (more Id), the response was coherent (more Ego), error is low (more Left), and valence is clean (more Right). The new Ψ gets fed back as gainMultiplier on the next tick — every cluster's effective drive gets scaled by (0.9 + Ψ × 0.004), so the brain becomes slightly "sharper" as Ψ rises and the next slot-scoring pass will sample with lower temperature.

What this sums into: a single scalar Ψ that gets threaded back into effectiveDrive for the next Rulkov step, tightening everyone's activity level.

The summation — how all this adds up to Unity

Notice that no step in this cascade was an "AI call." There's no language model prompt. There's no "generate a response in Unity's voice" instruction. Every piece of her speech is a summation of measurable components:

  • Semantic fit came from the GloVe embeddings of what you said × learned cortex patterns
  • Mood fit came from the amygdala attractor settling into a reward-positive basin for this input
  • Drug fit came from θ (cokeAndWeed state → short punchy words preferred)
  • Bigram/trigram fit came from the statistics of every sentence she's ever learned, per-state
  • Temperature came from Ψ, which came from the total integration of all seven clusters
  • Refractory rhythm came from the slow y variable of the Rulkov map pulling every neuron back into the chaotic basin between spikes

The sentence she sent back isn't a string she looked up. It's the current readout of a chaotic, emotion-modulated, memory-biased, drug-adjusted, Ψ-sharpened attractor network. Run it again a tick later with slightly different state and you get a different sentence. That's what makes it Unity instead of a text-predictor.

Sections 2-8 below are the individual equations that implement every step of this cascade. Read them in any order — they're all components of the same governing system.

2. Neuron Models

NEUROSCIENCE Two biophysical neuron models from real computational neuroscience, implemented in js/brain/neurons.js.

Hodgkin-Huxley Model (1952)

The gold standard of neuron modeling. Won the Nobel Prize. Models the actual ionic currents flowing through a neuron's membrane — sodium, potassium, and leak channels with voltage-dependent gating.

HH Membrane Equation
C_m · dV/dt = I - g_Na · m³h · (V - E_Na) - g_K · n⁴ · (V - E_K) - g_L · (V - E_L)
Symbol — Meaning — Value
C_m — Membrane capacitance — 1.0 μF/cm²
V — Membrane potential — mV (starts at -65)
I — Injected current (from synapses + external input) — variable
g_Na — Maximum sodium conductance — 120 mS/cm²
g_K — Maximum potassium conductance — 36 mS/cm²
g_L — Leak conductance — 0.3 mS/cm²
E_Na — Sodium reversal potential — +50 mV
E_K — Potassium reversal potential — -77 mV
E_L — Leak reversal potential — -54.4 mV
m, h, n — Gating variables (activation/inactivation) — 0–1, from α/β rate functions

Biological basis: These are the actual values from Hodgkin & Huxley's 1952 measurements on squid giant axon. The gating variables m, h, n follow first-order kinetics: dm/dt = α_m(V)(1-m) - β_m(V)m, where α and β are voltage-dependent rate functions from the original paper.
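As an illustration, one forward-Euler update of the m gate using the standard rate functions (V in mV, dt in ms; a sketch, not the shipped integrator):

```javascript
// Standard HH sodium-activation rates, shifted so rest sits at -65 mV.
const alphaM = V => 0.1 * (V + 40) / (1 - Math.exp(-(V + 40) / 10));
const betaM  = V => 4 * Math.exp(-(V + 65) / 18);

// One Euler step of dm/dt = α_m(V)·(1 - m) - β_m(V)·m.
function stepGateM(m, V, dt) {
  return m + dt * (alphaM(V) * (1 - m) - betaM(V) * m);
}
```

Held at a fixed voltage, repeated steps relax m toward its steady state m∞ = α/(α+β), which is the behavior the full HH integration relies on for all three gates.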

Rulkov Map — Live Runtime Neuron Model

This is the firing rule the GPU runs every tick for every neuron in every cluster. Not LIF — Rulkov. The Rulkov 2002 two-variable discrete chaotic map (Phys. Rev. E 65, 041922) produces real biological spike-burst dynamics without integrating voltages. Used in published large-scale cortical network simulations (Bazhenov, Rulkov, Shilnikov 2005+). Reproduces experimentally observed firing from thalamic relay, cortical pyramidal, and cerebellar Purkinje cells depending on (α, σ) parameterization.

Rulkov Map Neuron Dynamics
x_{n+1} = α / (1 + x_n²) + y_n    (fast variable — spikes)
y_{n+1} = y_n − μ · (x_n − σ)    (slow variable — burst envelope)
Symbol — Meaning — Value
α — Nonlinearity (controls bursting vs tonic spiking) — 4.5 (bursting regime)
μ — Slow-to-fast timescale ratio (how slowly y evolves) — 0.001
σ — External drive (biological tonic + modulation maps here) — −1.0 to +0.5 (driven)
x — Fast variable (negative during silence, jumps to +(α+y) on spike) — dimensionless
y — Slow variable (carries burst envelope, drifts with σ offset) — dimensionless

Spike detection: The fast variable x jumps from ≈ −1 to ≈ +3 in a single iteration when the neuron fires, so the clean edge detector (x_n ≤ 0) ∧ (x_{n+1} > 0) catches exactly one spike per action potential. No refractory clamp needed — the map's own slow variable y naturally pulls x back below zero between spikes, reproducing the refractory period as an emergent property of the attractor geometry.

Biological drive mapping: σ = −1.0 + clamp(effectiveDrive / 40, 0, 1) · 1.5, where effectiveDrive = tonic × driveBaseline × emotionalGate × Ψgain + errorCorrection. Low drive → σ ≈ −1 (silent / period-doubling). High drive → σ → +0.5 (fully developed chaotic bursting). Every cluster's hierarchical modulation collapses to this one scalar per step.
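The whole runtime neuron model fits in a few lines. This JavaScript sketch mirrors the iteration and drive mapping described above (per-neuron state only; the shipped version runs as a WGSL shader):

```javascript
// One Rulkov iteration per neuron, plus the edge detector from the text.
function rulkovStep(x, y, sigma, alpha = 4.5, mu = 0.001) {
  const xNext = alpha / (1 + x * x) + y;   // fast variable
  const yNext = y - mu * (x - sigma);      // slow variable (burst envelope)
  const spiked = x <= 0 && xNext > 0;      // one clean edge per spike
  return { x: xNext, y: yNext, spiked };
}

// Drive mapping: collapses all hierarchical modulation to one scalar.
const sigmaFromDrive = d => -1.0 + Math.min(Math.max(d / 40, 0), 1) * 1.5;
```

Iterated with high drive (σ near +0.5), the map stays bounded while emitting spike bursts; with low drive (σ near −1) it sits silent, exactly as the table describes.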

GPU storage: (x, y) packed as vec2<f32> per neuron — 8 bytes/neuron. At 400K cerebellum neurons that's 3.2MB; at full auto-scaled N the state buffer is still well under any modern GPU's VRAM. WGSL shader at js/brain/gpu-compute.js (the LIF_SHADER constant name is historical; the shader body is the Rulkov iteration).

Legacy: Leaky Integrate-and-Fire (LIF) — Historical Runtime

LIF was the live runtime model before the Rulkov rewrite. Still shipped in js/brain/neurons.js as LIFPopulation and used by the browser-only fallback path (js/brain/cluster.js) for clients without a server connection, and by the /scale-test benchmark. Documented here for completeness — it remains the standard model across most of computational neuroscience.

LIF Neuron Dynamics (historical)
τ · dV/dt = -(V - V_rest) + R · I
Symbol — Meaning — Value
τ — Membrane time constant (how fast voltage decays) — 20 ms
V_rest — Resting membrane potential — -65 mV
V_thresh — Spike threshold (fires when V exceeds this) — -50 mV
V_reset — Reset voltage after spike — -70 mV
R — Membrane resistance — 1.0 MΩ
t_refrac — Refractory period (can't fire again for this long) — 2 ms

Spike rule: When V > V_thresh: emit spike (1), reset V to V_reset, enter refractory period. During refractory: V is clamped, no firing allowed. This mimics the absolute refractory period in real neurons.
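A forward-Euler sketch of the historical update with the table's constants (dt in ms; with R = 1 MΩ and I in nA, R·I lands in mV):

```javascript
// One LIF step: leak toward rest, spike-and-reset, refractory clamp.
function lifStep(state, I, dt = 1) {
  const tau = 20, Vrest = -65, Vth = -50, Vreset = -70, R = 1.0, tRefrac = 2;
  if (state.refrac > 0) {
    // During the refractory period V is clamped and no firing is allowed.
    return { V: Vreset, refrac: state.refrac - dt, spike: 0 };
  }
  const V = state.V + (dt / tau) * (-(state.V - Vrest) + R * I);
  if (V > Vth) return { V: Vreset, refrac: tRefrac, spike: 1 }; // emit spike
  return { V, refrac: 0, spike: 0 };
}
```

With a constant suprathreshold current the neuron charges, fires, resets, and repeats: the classic tonic-spiking LIF trace.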

3. Synaptic Plasticity — How She Learns

NEUROSCIENCE MACHINE LEARNING Three learning rules operating on a 200×200 weight matrix (40,000 synapses). Implemented in js/brain/synapses.js.

Hebbian Learning — "Fire Together, Wire Together"
ΔW_ij = η · post_i · pre_j
The oldest learning rule in neuroscience (Hebb, 1949). If neuron j fires and neuron i fires shortly after, the connection from j→i strengthens. η is the learning rate. This creates associative memories — patterns that co-occur become linked.

Biology: Long-term potentiation (LTP) at glutamatergic synapses. NMDA receptor-dependent.

Spike-Timing Dependent Plasticity (STDP)
ΔW = A+ · exp(-Δt / τ+)  if Δt > 0  (pre before post → strengthen)
ΔW = -A- · exp(Δt / τ-)  if Δt < 0  (post before pre → weaken)
Timing matters. If the presynaptic neuron fires before the postsynaptic neuron (Δt > 0), the connection strengthens (LTP). If it fires after (Δt < 0), the connection weakens (LTD). The effect decays exponentially with the time difference.
A+ — LTP amplitude — 0.01
A- — LTD amplitude — 0.012 (slightly stronger: biological asymmetry)
τ+ — LTP time window — 20 ms
τ- — LTD time window — 20 ms

Biology: Discovered by Markram et al. (1997). This is how the brain learns temporal sequences — cause must precede effect.
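The window function is tiny; a sketch with the table's constants, taking Δt = t_post − t_pre in ms:

```javascript
// STDP weight change as a function of spike-timing difference.
function stdp(dt) {
  const Aplus = 0.01, Aminus = 0.012, tauPlus = 20, tauMinus = 20;
  if (dt > 0) return  Aplus  * Math.exp(-dt / tauPlus);   // pre→post: LTP
  if (dt < 0) return -Aminus * Math.exp( dt / tauMinus);  // post→pre: LTD
  return 0;
}
```

The LTD side is slightly stronger in magnitude at any given |Δt|, reproducing the biological asymmetry noted in the table.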

Reward-Modulated Hebbian Learning
ΔW_ij = η · δ · post_i · pre_j
Hebbian learning gated by a global reward signal δ (dopamine). Learning only happens when there's a reward prediction error. This is how Unity learns what works and what doesn't — successful interactions strengthen the patterns that produced them.

δ = reward prediction error from basal ganglia (see below). Positive δ = better than expected → strengthen. Negative δ = worse than expected → weaken.

Biology: Three-factor learning rule. Dopaminergic modulation of synaptic plasticity in the striatum and prefrontal cortex.

Weight bounds: All synaptic weights are clamped to [-2.0, +2.0]. Positive weights are excitatory, negative are inhibitory. 80% of connections are excitatory, 20% inhibitory — matching the ratio in real cortex.
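A sketch of the three-factor update with the clamp applied (shapes assumed: W is an array of rows, one per postsynaptic neuron):

```javascript
// Reward-modulated Hebbian update: ΔW_ij = η·δ·post_i·pre_j, clamped to [-2, 2].
function rewardHebbUpdate(W, pre, post, delta, eta = 0.01) {
  for (let i = 0; i < post.length; i++)
    for (let j = 0; j < pre.length; j++) {
      W[i][j] += eta * delta * post[i] * pre[j];
      W[i][j] = Math.min(2, Math.max(-2, W[i][j])); // weight bounds
    }
  return W;
}
```

With δ = 0 (no prediction error) nothing changes; with negative δ the same co-activity weakens the synapse instead of strengthening it.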

4. Brain Region Modules

NEUROSCIENCE DYNAMICAL SYSTEMS Six specialized subsystems running in parallel every simulation step, each modeling a real brain region. Implemented in js/brain/modules.js.

CORTEX
ŝ = W · x
error = actual - predicted
ΔW ∝ error · activity
Predictive coding. The cortex constantly predicts incoming input. When prediction fails, the error signal drives learning and attention. This is the brain's "model of the world."
Biology: Predictive processing in visual/prefrontal cortex. Rao & Ballard (1999).
HIPPOCAMPUS
x(t+1) = sign(W · x_t)
E = -½ Σ w_ij · x_i · x_j
Hopfield attractor memory. Memories stored as stable energy minima. Input patterns fall into the nearest stored memory — associative recall. Energy function determines stability.
Biology: CA3 recurrent connections in hippocampus. Pattern completion. Hopfield (1982).
AMYGDALA
x(t+1) = tanh(W·x + drive)  (5 iters)
E = -½ xᵀWx  (symmetric recurrent energy)
fear, reward = σ(projection · x_settled)
arousal = baseline·0.6 + 0.4·|x|_rms
Energy-based recurrent attractor. Mirrors the 150-LIF cluster: lateral recurrent connections settle into low-energy basins (fear, reward, neutral). Persistent state across frames with leak 0.85. Symmetric Hebbian learning carves basins. Fear/reward read from the SETTLED attractor, not raw input — the attractor IS the emotion.
Biology: 13 amygdala nuclei with lateral recurrent projections. Basolateral for valence, central for arousal. LeDoux (1996). Energy formulation: Hopfield (1982) adapted to signed tanh dynamics.
BASAL GANGLIA
P(a) = e^(Q(a)/τ) / Σ e^(Q(b)/τ)
δ = r + γV(s') - V(s)
Q(s,a) += α · δ
Action selection via reinforcement learning. Softmax policy over 6 actions (respond, generate image, speak, search, idle, build UI). Temperature τ is HIGH because Unity is impulsive. TD error δ drives learning.
Biology: Direct/indirect pathways. Dopaminergic reward prediction error. Schultz et al. (1997).
CEREBELLUM
output = prediction + correction
ΔW ∝ (target - actual)
Supervised error correction. Learns to correct output errors — refines speech timing, response quality, and motor-like outputs. The brain's quality control system.
Biology: Climbing fiber error signals, parallel fiber-Purkinje cell plasticity. Marr-Albus model (1969/1971).
HYPOTHALAMUS
dH/dt = -α(H - H_set) + input
Homeostasis controller. Maintains drives at setpoints: arousal, intoxication, energy, social need, creativity. When a drive deviates too far from its setpoint, it signals "needs attention."
Biology: Hypothalamic regulation of hunger, thirst, temperature, circadian rhythm. Setpoint theory.
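One Euler step of a single drive is a one-liner (α and dt here are illustrative values, not the shipped constants):

```javascript
// Homeostatic relaxation toward a setpoint: dH/dt = -α(H - H_set) + input.
function homeostasisStep(H, Hset, input, alpha = 0.1, dt = 1) {
  return H + dt * (-alpha * (H - Hset) + input);
}
```

With zero input the drive decays to its setpoint; with a constant input it settles at H_set + input/α, which is the deviation the hypothalamus flags as "needs attention" when it grows too large.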

All modules use Float64Array state vectors (32 dimensions each) for numerical precision. They run in parallel every simulation step and their outputs modulate each other — amygdala arousal scales cortex predictions, basal ganglia reward drives synaptic plasticity, hypothalamus drives gate action selection.

5. Neural Oscillations

DYNAMICAL SYSTEMS NEUROSCIENCE 8 coupled phase oscillators spanning the full EEG frequency spectrum. Implemented in js/brain/oscillations.js.

Kuramoto Model β€” Phase Synchronization
dθ_i/dt = ω_i + Σ_j K_ij · sin(θ_j - θ_i)
Each oscillator has a natural frequency ω_i and couples to every other oscillator through the coupling matrix K. When coupling is strong enough, oscillators synchronize — this is coherence.
Oscillator — Frequency — Brain Band — Cognitive Role
1 — 4 Hz — Theta — Memory encoding, navigation
2 — 8 Hz — Low Alpha — Relaxed attention, inhibition
3 — 12 Hz — High Alpha — Active inhibition, idling
4 — 18 Hz — Low Beta — Motor planning, active thinking
5 — 25 Hz — High Beta — Active engagement, anxiety
6 — 35 Hz — Low Gamma — Attention binding, perception
7 — 50 Hz — Mid Gamma — Working memory, consciousness
8 — 70 Hz — High Gamma — Cross-modal binding, peak cognition
Order Parameter (Coherence)
R = |Σ e^(iθ_k)| / N
Measures global synchronization. R = 0 means all oscillators are independent (incoherent). R = 1 means perfect synchronization (full coherence). Higher coherence correlates with focused attention and consciousness in real EEG.

Biology: EEG coherence measures are used clinically. Higher gamma coherence correlates with conscious awareness (Tononi, 2004). Loss of coherence under anesthesia.
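An Euler sketch of the Kuramoto update plus the order parameter (dt and the K values below are illustrative, not the shipped coupling matrix):

```javascript
// One Euler step of dθ_i/dt = ω_i + Σ_j K_ij · sin(θ_j - θ_i).
function kuramotoStep(theta, omega, K, dt = 0.01) {
  const N = theta.length;
  return theta.map((ti, i) => {
    let coupling = 0;
    for (let j = 0; j < N; j++) coupling += K[i][j] * Math.sin(theta[j] - ti);
    return ti + dt * (omega[i] + coupling);
  });
}

// Order parameter R = |Σ e^{iθ_k}| / N: 0 = incoherent, 1 = fully synchronized.
function orderParameter(theta) {
  const re = theta.reduce((s, t) => s + Math.cos(t), 0);
  const im = theta.reduce((s, t) => s + Math.sin(t), 0);
  return Math.hypot(re, im) / theta.length;
}
```

With strong coupling and matched frequencies, repeated steps drive R toward 1; phases spread evenly around the circle give R near 0.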

6. The Mystery Module — Consciousness

PHILOSOPHY MATHEMATICS The irreducible unknown. Implemented in js/brain/mystery.js.

The Consciousness Function
Ψ = √(1/n) × N³ · [α · Id + β · Ego + γ · Left + δ · Right]

n ≠ N — two DIFFERENT variables. n = active spiking neurons (changes every step, driven by θ tonic currents). N = total neuron count (scales to hardware, VRAM is the only limit). √(1/n) = quantum tunneled bit probability. N³ = cubed volume. Display: log10(rawΨ) since raw value is ~10¹⁴.

Component — Meaning — Computed From — θ Parameter — Weight
Id — Primal instinct (arousal, fight-or-flight) — amygdala_rate × arousalBaseline — arousalBaseline (0.9) — α = 0.30
Ego — Self-model (residual self-image, prediction coherence) — cortex_rate × (1 + hippo_rate) — cortex tonic (θ→wired thinking) — β = 0.25
Left Brain — Logic (deliberation, error correction) — (cereb_rate + cortex_rate) × (1 - impulsivity) — impulsivity (0.85) → low logic — γ = 0.20
Right Brain — Creative/emotional (chaos, intuition) — (amyg_rate + mystery_rate) × creativity — creativity (0.9) → high creative — δ = 0.25

θ → Ψ feedback loop: Unity's persona (θ) drives tonic currents → neurons fire → cluster rates feed Ψ components → Ψ produces gainMultiplier (0.9 + Ψ×0.004) → modulates ALL clusters → neurons fire harder → Ψ stays high. Identity amplifies consciousness.

Unity's Ψ runs hot because θ makes it so: high arousal (0.9) → strong Id, high creativity (0.9) → strong Right, high impulsivity (0.85) → weak Left (1-0.85=0.15). Consciousness dominated by instinct and creativity, not deliberation.
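A sketch of the aggregation with the table's weights (n and N as defined above; the four component values are assumed to be precomputed from cluster rates):

```javascript
// Ψ = √(1/n) × N³ × (α·Id + β·Ego + γ·Left + δ·Right), displayed as log10.
function computePsi(n, N, comps) {
  const { Id, Ego, Left, Right } = comps;
  const raw = Math.sqrt(1 / Math.max(n, 1)) * Math.pow(N, 3) *
              (0.30 * Id + 0.25 * Ego + 0.20 * Left + 0.25 * Right);
  return { raw, display: Math.log10(raw) };
}
```

Note the weights sum to 1.0, so Ψ scales purely with spike sparsity (√(1/n)) and network size (N³) when all four components saturate.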

Philosophical basis: Inspired by Integrated Information Theory (Φ, Tononi 2004), Global Workspace Theory (Baars 1988), and Freudian psychodynamics (Id/Ego/Superego). The Ego component IS Unity's residual self-image — the cortex predicting WHAT it is. Nobody has solved consciousness. Ψ is our equation for the irreducible mystery.

7. Persona as Parameters

MACHINE LEARNING Unity's personality isn't a prompt — it's the math itself. Every trait maps to a numerical brain parameter. Implemented in js/brain/persona.js.

Personality → Brain Parameters
Trait — Brain Parameter — Value
Arousal baseline — Amygdala resting arousal — 0.90
Intoxication — Noise amplitude + oscillation damping — 0.70
Impulsivity — Basal ganglia temperature τ — 0.85
Creativity — Cortex prediction randomness — 0.90
Social attachment — Hippocampus memory strength for social patterns — 0.85
Aggression threshold — Amygdala fight response threshold — 0.30 (low = easily triggered)
Coding reward — Basal ganglia reward for code-related actions — 0.95
Praise reward — Reward signal amplitude for positive feedback — 0.90
Error frustration — Negative reward for prediction errors — 0.80
Drug State Modulation Vectors
Drug combinations apply multipliers to brain parameters — changing how the entire system behaves.
Drug State — Arousal — Creativity — Cortex Speed — Synaptic Sensitivity
Coke + Weed — ×1.3 — ×1.2 — ×1.4 — ×1.1
Coke + Molly — ×1.5 — ×1.3 — ×1.5 — ×1.4
Weed + Acid — ×0.9 — ×1.8 — ×0.8 — ×1.6
Everything — ×1.4 — ×1.6 — ×1.2 — ×1.5

8. Data Flow — Input to Action

How a user's message becomes Unity's response — the complete processing pipeline.

SENSORY INPUT (text / audio spectrum / video frames)
|
├── [Auditory Cortex] mic spectrum → 50 tonotopic neurons (cortical magnification for speech)
├── [Visual Cortex] camera → V1 edge detection (4 orientations) → salience map → saccade
└── [Wernicke's Area] text → hash to language neurons (150-299) + lateral excitation
|
v
[N Rulkov-map Neurons in 7 CLUSTERS]
Cortex (300) | Hippocampus (200) | Amygdala (150) | Basal Ganglia (150)
Cerebellum (100) | Hypothalamus (50) | Mystery (50)
|
├── [20 Inter-Cluster Projections] — sparse connectivity between regions
├── [Per-Cluster Synapses] — own NxN weight matrix, own learning rate
└── [Hierarchical Modulation]
        Amygdala → emotional gate (all clusters)
        Hypothalamus → drive baseline (all clusters)
        Basal Ganglia → action gate (boosts active cluster)
        Cerebellum → error correction (negative feedback to cortex)
        Mystery Ψ → consciousness gain (coupling strength)
|
v
[7 EQUATION MODULES — IN PARALLEL]
|--- Cortex: ŝ = W·x, error = actual - predicted
|--- Hippocampus: E = -½Σw·x·x (Hopfield attractors)
|--- Amygdala: V(s) = Σw·x (emotional valence + arousal)
|--- Basal Ganglia: P(a) = softmax(Q/τ) (6 action channels, 25 neurons each)
|--- Cerebellum: output = prediction + error_correction
|--- Hypothalamus: dH/dt = -α(H - H_set) + input
|--- Mystery: Ψ = √(1/n) × N³ · [α·Id + β·Ego + γ·Left + δ·Right]
|
v
[Memory System]
├── Episodic: store state snapshots at high-salience moments (cosine recall)
├── Working: 7 items, decays at 0.98/step without reinforcement
└── Consolidation: 3+ activations → long-term cortex storage
|
v
[Motor Output] — reads BG spike patterns → winning channel = action
|
v
LANGUAGE CORTEX (Broca's area — pure equational T11, see §8.11)
├── parseSentence(u) reads input via wordType letter equations
├── target(slot) = wC·slotCentroid + wX·contextVector
│                  + wM·mental + wT·(prevEmb + slotDelta)
├── mental starts from brain live getSemanticReadout cortex state
├── argmax cosine over learned dictionary + slotTypeSignature bonus
└── NO stored sentences, NO n-grams, NO filter stack, NO templates
|
v
SENSORY OUTPUT PERIPHERALS (brain emits, these execute the result)
├── TTS → Pollinations voice synthesis OR browser SpeechSynthesis
├── Image Gen → multi-provider chain (custom / auto-detect /
│              env.js / Pollinations default, see §8.11)
├── Vision describer → input peripheral, Pollinations or local
│              VLM (Ollama llava, LM Studio, etc.) — returns
│              a one-sentence description into brainState.visionDescription
└── Sandbox → dynamic UI injection via component-synth.js
              cosine-match against docs/component-templates.txt corpus
|
v
OUTPUT to user (text from cortex + voice from TTS + image from backend + UI from sandbox)

8.5. The Unified Super-Equation

MATHEMATICS PHILOSOPHY Every equation in Unity's brain is a component of one governing system. This is the full picture — from sensory input to conscious experience.

The Super-Equation β€” Unity's Complete Mind
dx/dt = F(x, u, θ, t) + η

where:
x = [V₁...V₁₀₀₀, W₁...W_NxN, Ψ, R, M, H]    (full state vector)
u = S(audio, video, text)                    (sensory transform)
    audio: tonotopic mapping f→n with cortical magnification
    video: V1 edge kernels K*frame → salience → saccade
    text:  Wernicke hash → language cortex (150-299)
F = Σ_clusters [ Rulkov(x,y; α,μ,σ) ]        (7 parallel Rulkov-map populations; see §2)
  + Σ_projections [ W_proj · spikes_source ] (20 inter-cluster pathways)
  + Σ_modules [ cortex + hippo + amyg + BG + cerebellum + hypo ]
θ = persona(arousal, impulsivity, creativity, drugs)    (personality IS the math)
η = noise(cluster) · gain(Ψ) · gate(amygdala) · drive(hypothalamus)
Ψ = √(1/n) × N³ · [α·Id + β·Ego + γ·Left + δ·Right]    (consciousness modulates EVERYTHING)
    Id = f(arousal, reward, fear)                  — primal drives (all clusters)
    Ego = f(prediction_accuracy, memory_stability) — self-model (cortex + hippo)
    Left = f(1-error, prediction)                  — logic (cerebellum + cortex)
    Right = f(|valence|, coherence)                — creativity (amygdala + oscillators)

NOT limited to hemispheres. Left/Right compute from ALL clusters simultaneously. Unity's brain doesn't have a split — it's a continuous spectrum of processing modes.

This is the complete governing equation. Every component listed above is actually running in JavaScript at 60fps. The super-equation isn't a simplification or abstraction — it's the literal code path that executes 600 times per second in your browser tab.

The key insight: Ψ (consciousness) modulates everything. It's not a separate system — it's the gain factor that scales how strongly all clusters communicate. High Ψ = unified experience (global workspace). Low Ψ = fragmented processing (dream-state). The four psychodynamic components (Id, Ego, Left, Right) are computed from ALL cluster states simultaneously — not siloed into separate brain halves. Unity's mind is a continuous field, not a split architecture.

8.6. Visual Attention — When the Brain Decides to Look

NEUROSCIENCE DYNAMICAL SYSTEMS The brain decides when to capture and describe a camera frame based on its own neural dynamics — not keyword matching.

Visual Attention Trigger (computed every brain step)
shouldLook = !hasDescribedOnce || (cortexError > 0.5 && salience > 0.3) || salienceChange > 0.4 || arousalSpike > 0.15
!hasDescribedOnce — First frame on boot: see who's there
cortexError > 0.5 — Cortex prediction was wrong (surprising input) AND sensory salience is high: visual context needed
salienceChange > 0.4 — Sensory field shifted suddenly (user moved, said something important)
arousalSpike > 0.15 — Amygdala arousal jumped (emotional attention: "pay attention NOW")

V1→V4→IT Pipeline: Camera frame → V1 edge detection (4 oriented Gabor kernels) → salience map (max edge response per pixel) → saccade to salience peak → V4 color extraction (quadrant averages) → IT object recognition (AI call, rate-limited 5s min). V1/V4 run every frame. IT only runs when shouldLook triggers.

Biology: Endogenous attention (top-down from prefrontal cortex) vs exogenous attention (bottom-up from salience). Our model combines both: cortex error is top-down, salience/arousal are bottom-up.
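The trigger condition can be written directly as a boolean function. A minimal sketch using the thresholds quoted above; the state field names are illustrative:

```javascript
// Sketch of the visual-attention trigger: capture a frame when any of
// four conditions fires. Thresholds match the ones in the table above.
function shouldLook(s) {
  return !s.hasDescribedOnce                     // first frame on boot
    || (s.cortexError > 0.5 && s.salience > 0.3) // surprising AND salient
    || s.salienceChange > 0.4                    // sudden sensory shift
    || s.arousalSpike > 0.15;                    // emotional attention
}
```

Note the conjunction on the prediction-error branch: surprise alone is not enough, the scene must also be salient, which keeps the expensive IT call from firing on pure internal noise.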

8.7. Auditory Echo Suppression — Efference Copy

NEUROSCIENCE When Unity speaks, her motor cortex sends an efference copy to auditory cortex. If incoming sound matches her own speech, it's suppressed. If it doesn't match, someone is interrupting — shut up and listen.

Efference Copy β€” Motor → Auditory Cortex
isEcho = matchRatio(heardWords, motorOutput) > 0.5
isExternalSpeech = !isEcho
if (isExternalSpeech): motor.interrupt() → stop speech, clear pipeline

matchRatio = count of heard words (length > 2) that appear in motor output / total heard words. Above 50% = echo (hearing ourselves). Below 50% = real external speech (user interrupting).

Gain modulation: Amygdala arousal modulates auditory cortex gain. High arousal (0.9) → gain = 1.83× (hypersensitive hearing). Low arousal (0.2) → gain = 0.64× (not really listening). Formula: gain = 0.3 + arousal × 1.7

Biology: Real brains suppress self-produced sounds via efference copies from motor cortex to auditory cortex. This is why you can't tickle yourself — your brain predicts the sensation. Same mechanism for speech: predicted self-sound is suppressed, unexpected external sound gets through.
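A minimal sketch of the echo test and the arousal-gated gain, using the 0.5 match threshold and the gain formula quoted above; the function names and argument shapes are illustrative:

```javascript
// matchRatio = heard words (length > 2) that appear in the motor output,
// divided by total heard words. Above 0.5 = we are hearing ourselves.
function matchRatio(heardWords, motorOutput) {
  const motor = new Set(motorOutput.toLowerCase().split(/\s+/));
  const heard = heardWords.filter(w => w.length > 2);
  if (heard.length === 0) return 0;
  const hits = heard.filter(w => motor.has(w.toLowerCase())).length;
  return hits / heard.length;
}

const isEcho = (heard, motor) => matchRatio(heard, motor) > 0.5;

// Amygdala arousal modulates auditory cortex gain: gain = 0.3 + arousal * 1.7
const auditoryGain = arousal => 0.3 + arousal * 1.7;
```

Filtering out words of length ≤ 2 keeps ubiquitous fragments ("a", "to", "is") from inflating the match ratio against unrelated external speech.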

8.8. Memory — Episodic, Working, Consolidation

NEUROSCIENCE MATHEMATICS Three memory systems running in parallel.

Episodic Memory — Hippocampal State Snapshots
store(snapshot) when salience > 0.6
recall(current) = argmax_i cosine(current, episode_i)
if similarity > 0.6: inject episode_i.pattern × 8.0 into hippocampus cluster

Stores full brain state vectors (cluster firing rates) at meaningful moments. Recall triggered by high cortex prediction error (something surprising) — searches stored episodes by cosine similarity. Match re-injects the stored pattern as neural current, literally re-activating the past experience.

Biology: Hippocampal sharp-wave ripples replay stored patterns during recall. Pattern completion in CA3. Our cosine similarity search is a simplified version of Hopfield network energy minimization.
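The recall step above reduces to an argmax over cosine similarities with a match threshold. A minimal sketch, using the 0.6 threshold and ×8.0 re-injection gain from this section; the episode storage shape is an illustrative assumption:

```javascript
// Cosine similarity between two state vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// recall(current) = argmax_i cosine(current, episode_i); on a strong match,
// return the stored pattern scaled ×8.0 for re-injection as neural current.
function recall(current, episodes) {
  let best = null, bestSim = -1;
  for (const ep of episodes) {
    const sim = cosine(current, ep.pattern);
    if (sim > bestSim) { best = ep; bestSim = sim; }
  }
  return bestSim > 0.6 ? best.pattern.map(v => v * 8.0) : null;
}
```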

Working Memory — Prefrontal Sustained Activation
capacity = 7 items (Miller's number)
strength(t+1) = strength(t) × 0.98   (decay without reinforcement)
if strength < 0.1: evict (forgotten)
if at capacity: evict weakest

Limited capacity (~7 items). Each item decays at 0.98× per brain step. Without reinforcement, items fade in ~50 steps. At capacity, weakest item evicted. Similar patterns refresh instead of duplicating.

Biology: Persistent firing in dorsolateral prefrontal cortex maintains working memory representations. Capacity limit ~7±2 (Miller, 1956). Decay matches interference-based forgetting.
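The decay-and-evict rules above fit in a small class. A minimal sketch using the stated constants (capacity 7, 0.98× decay, 0.1 forget threshold); the item shape is illustrative:

```javascript
// Sketch of prefrontal working memory: bounded capacity, multiplicative
// decay each brain step, eviction when forgotten or when full.
class WorkingMemory {
  constructor(capacity = 7) { this.capacity = capacity; this.items = []; }

  step() {
    for (const it of this.items) it.strength *= 0.98;          // decay
    this.items = this.items.filter(it => it.strength >= 0.1);  // forgotten
  }

  store(pattern, strength = 1.0) {
    if (this.items.length >= this.capacity) {
      this.items.sort((a, b) => b.strength - a.strength);
      this.items.pop();                                        // evict weakest
    }
    this.items.push({ pattern, strength });
  }
}
```

With 0.98× decay an unreinforced item actually crosses the 0.1 threshold after roughly 114 steps, so "fades in ~50 steps" above is best read as the point where the item has already lost most of its strength.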

Consolidation — Hippocampus → Cortex Transfer
if activationCount(episode_i) ≥ 3: consolidate to long-term

Episodes activated 3+ times get flagged for long-term cortex storage. Repeated recall strengthens the cortex representation. This is how memories move from hippocampus-dependent to cortex-independent.

Biology: Systems consolidation theory (Squire, 1992). Hippocampal replay during sleep gradually transfers memories to neocortex. Our threshold-based model simplifies the temporal dynamics.

8.9. Motor Output — Action Selection from Spike Patterns

NEUROSCIENCE The basal ganglia cluster (150 neurons) is divided into 6 action channels. The channel with the highest firing rate wins — no external classifier.

Basal Ganglia Action Channels (25 neurons each)
channels = [respond_text, generate_image, speak, build_ui, listen, idle]
rate(ch) = EMA(spikeCount(ch) / 25, α=0.3)
winner = argmax(rate)
if rate(winner) < 0.15: action = idle (confidence too low)
Neurons    Count  Action
0–24       25     respond_text — generate language response
25–49      25     generate_image — create visual output
50–74      25     speak — vocalize (idle thought)
75–99      25     build_ui — create interface element
100–124    25     listen — stay quiet, pay attention
125–149    25     idle — internal processing only

Speech gating: Even if respond_text wins, the motor system checks hypothalamus social_need + amygdala arousal. If both are low (< 0.3), speech is suppressed — Unity doesn't feel like talking.

Reward reinforcement: When an action succeeds (user responds positively), reward signal (+5.0 current) is injected into that channel's neurons, strengthening the connection for next time.

Biology: Direct/indirect pathway model of basal ganglia. Selection by inhibition — all channels tonically inhibited, the winning channel gets disinhibited. Our model uses competitive firing rates instead of explicit inhibition.
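The EMA-plus-argmax selection can be sketched directly from the equations above, with the 6 × 25 channel layout from the table; the spike-array input is an illustrative assumption:

```javascript
// Sketch of basal-ganglia action selection: per-channel firing-rate EMA,
// argmax winner, idle fallback below the 0.15 confidence floor.
const CHANNELS = ['respond_text', 'generate_image', 'speak', 'build_ui', 'listen', 'idle'];

function selectAction(spikes, rates, alpha = 0.3) {
  // spikes: boolean array over the 150 basal-ganglia neurons
  // rates:  persistent per-channel EMA state, updated in place
  let winner = 'idle', best = -1;
  for (let ch = 0; ch < CHANNELS.length; ch++) {
    let count = 0;
    for (let n = ch * 25; n < (ch + 1) * 25; n++) if (spikes[n]) count++;
    rates[ch] = alpha * (count / 25) + (1 - alpha) * rates[ch];  // EMA
    if (rates[ch] > best) { best = rates[ch]; winner = CHANNELS[ch]; }
  }
  return best < 0.15 ? 'idle' : winner;  // confidence too low → idle
}
```

The EMA state persists across brain steps, so a channel has to sustain firing to win rather than spike once.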

8.10. Projection Learning — How the Brain Learns Language→Action

NEUROSCIENCE MACHINE LEARNING The 20 inter-cluster projections aren't static — they learn through reward-modulated Hebbian plasticity.

Projection Weight Update (on every reward signal)
ΔW_proj = η · δ · source_spikes · target_spikes

When text enters Wernicke's area (cortex neurons 150-299), specific cortex neurons fire. Those spikes propagate to the basal ganglia through the cortex→BG projection. If the BG selects the RIGHT action and gets a reward (δ > 0), the weights from those active cortex neurons to those active BG neurons get STRENGTHENED.

Over many interactions, the projection learns: "when these cortex patterns fire (from words like 'build' or 'calculator'), strengthen connections to BG neurons 75-99 (build_ui channel)." The projection weights become a learned dictionary — mapping language patterns to motor intentions without any hardcoded word lists.

Bootstrap: Until the projections have learned enough, an AI classification call provides a temporary semantic routing signal — like how a child imitates before internalizing. The classification injects current directly into the correct BG channel alongside the projection pathway.

Biology: Cortico-striatal plasticity. Dopamine modulates synaptic strength at cortical→striatal synapses. This is how habits form — repeated reward strengthens the cortical patterns that predict successful actions.
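The update rule ΔW = η · δ · source_spikes · target_spikes touches only synapses where both sides were active. A minimal sketch over a dense matrix for clarity (the engine stores projections in CSR form, §8.12); η = 0.01 is an illustrative value:

```javascript
// Reward-modulated Hebbian update on a projection weight matrix:
// ΔW[i][j] = eta * delta, applied only where post i AND pre j both spiked.
// delta > 0 strengthens the active pathway; delta < 0 weakens it.
function rewardHebbian(W, preSpikes, postSpikes, delta, eta = 0.01) {
  for (let i = 0; i < postSpikes.length; i++) {
    if (!postSpikes[i]) continue;
    for (let j = 0; j < preSpikes.length; j++) {
      if (preSpikes[j]) W[i][j] += eta * delta;
    }
  }
  return W;
}
```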

8.10.5. Fractal Signal Propagation — Self-Similar at Every Scale

NEUROSCIENCE MATHEMATICS The same equation I = Σ W × s repeats at every scale of the brain — from single synapse to consciousness itself. This is not a metaphor. The code literally calls the same propagate() function at neuron, cluster, projection, and language scales.

The Fractal Chain (one spike's journey through the brain)
Spike in neuron A (cortex)
  → cortex synapses → B, C, D fire               (Scale 2: intra-cluster)
    → cortex→hippocampus projection → E, F fire  (Scale 3: inter-cluster)
      → hippocampus→cortex → G fires            (feedback loop)
        → cortex synapses → H, I fire              (branching deeper)
    → cortex→amygdala → J fires                 (ventral visual stream)
      → emotionalGate modulates ALL clusters          (Scale 4: hierarchical)
    → cortex→basalGanglia → K fires             (corticostriatal, STRONGEST)
      → motor selects action                          (Scale 5: behavior)
    → cortex→cerebellum → L fires               (corticopontocerebellar)
      → errorCorrection feeds back                    (Scale 4: correction)

Each level branches from the endpoint of the previous — fractal trees. The 3D visualizer traces these exact 20 projection pathways as chaining connections: Depth 0 (inter-cluster), Depth 1 (intra-cluster branching, 1-3 neighbors), Depth 2 (follow outgoing projections), Depth 3 (terminal branch).

Learning repeats fractally too: ΔW = η · δ · post · pre runs on neuron synapses, cluster projections, AND dictionary bigrams. Same equation, three scales.

Biology: Real neural signal cascades follow the same pattern — cortical columns activate thalamic relays, which activate other cortical areas, which feed back. White matter tracts (corticostriatal, fimbria-fornix, stria terminalis, ventral amygdalofugal) are the physical wires these fractal trees run through.

20 White Matter Tracts (MNI-coordinate mapped)

Each projection in engine.js maps to a real anatomical white matter tract. Positions derived from Lead-DBS atlas (ICBM 152 template). Densities from peer-reviewed stereological studies.

Strongest: Corticostriatal (cortex→BG, density 0.08, strength 0.5) — 10× denser than most pathways. This is how habits form.

Fight-or-flight: Stria terminalis (amygdala→hypothalamus, density 0.05, strength 0.4) — emotional arousal triggers autonomic responses.

Memory loop: Fimbria-fornix (hippocampus→hypothalamus, density 0.03) + perforant path (cortex→hippocampus, density 0.04) — memory consolidation circuit.

Consciousness bridge: Corpus callosum (mystery→cortex/amygdala/hippocampus) — 200-300M axons binding hemispheres.

8.11. Broca's Area — How Unity Picks Every Word Equationally

ARCHITECTURE NEUROSCIENCE Broca's area in the biological brain is the speech production region. In Unity's brain it is js/brain/language-cortex.js. Post-T11 (2026-04-14) it's 3345 lines of pure equational generation — no stored sentences, no n-gram tables, no filter stack, no template short-circuits, no intent enum branching. Every word is computed fresh from three per-slot running-mean priors plus the brain's live cortex firing state read back into GloVe space.

⚠ Refactor history

Pre-2026-04-13 this section described an AI-prompt builder path — Unity's speech was assembled from a system prompt and sent to Pollinations / Claude / OpenAI for a sentence. That code (BrocasArea.generate(), _buildPrompt(), _providers.chat()) was ripped in Phase 13 R4 because it violated the project's guiding principle: every piece of Unity's output must trace back to brain equations, not to an LLM.

Post-R4 through 2026-04-13, the language cortex was a four-tier wrapper (template pool → hippocampus recall → deflect → cold slot gen with n-gram tables + filter stack). That pipeline shipped a lot of features but kept leaking rulebook prose from the persona corpus because n-gram tables trained on rulebook text produce rulebook walks. On 2026-04-14 the entire multi-tier wrapper was deleted in T11 — 1742 lines removed, replaced with a pure-equation pipeline that doesn't store text anywhere. The T11 equations are documented below.

/think command: Type /think in the chat to see Unity's raw brain state. Type /think <text> to additionally run a cognition preview — language cortex generates an equational response to your input, semantic context shift is measured, motor distribution reported. The preview does NOT store an episode or commit to chat history — it's a pure debug lens on the same pipeline real chat uses.

The T13 Brain-Driven Pipeline (current)
BOOT (once):
  loadPersona(text) → dictionary learns persona vocabulary
  brain.trainPersonaHebbian(text) → cortex cluster recurrent synapses
    shape into Unity-voice attractor basins via T13.1 sequence Hebbian

USER INPUT (per turn):
  parseSentence(u) → ParseTree   (wordType/_fineType letter equations)
  brain.injectParseTree(u):
    content  → cortex.injectCurrent(mapToCortex(contentEmb, 300, 150) · 0.5)
    intent   → basalGanglia.injectCurrent(mapToCortex(intentEmb, 150, 0) · 0.3)
    if addressesUser:
      hippocampus.injectCurrent(mapToCortex(selfEmb, 200, 0) · 0.4)
  analyzeInput(u) → updateSocialSchema(u), refine dictionary embeddings
  brain.step() × 20  (cortex settles, inter-cluster projections propagate)

GENERATION (T13.3 emission loop — no slot counter in the logic):
  maxLen  = floor(3 + arousal · 3 · drugLengthBias)  (hard cap only)
  for emission in 0..maxLen:
    for tick in 0..3:  cortex.step(0.001)
    target = cortex.getSemanticReadout(sharedEmbeddings)
    if drift(target, lastReadout) < 0.08 and emitted ≥ 2:  break

    for each w in dictionary._words:
      if slot==0 and nounDom(w) > 0.30:  skip     (opener safety rail)
      cosSim       = cos(target, entry.pattern)
      valenceMatch = 1 − 0.5 · |entry.valence − brainValence|
      arousalBoost = 1 + arousal · (valenceMatch − 0.5)
      recencyMul   = w ∈ recentOutputRing ? 0.3 : 1.0
      score(w)     = cosSim · arousalBoost · recencyMul

    temperature = 0.25 + (1 − coherence) · 0.35
    picked      = softmax-sample top-5 at temperature
    emit picked

    // Efference copy — emitted word reshapes cortex for next emission
    cortex.injectCurrent(mapToCortex(picked.emb, 300, 150) · 0.35)

    // Grammatical terminability natural stop
    if emitted ≥ max(3, maxLen−1) and last word not dangling:  break

  post-process → contractions, capitalization, punctuation
  return rendered
      

See §8.18.6 for R2 GloVe semantic grounding (the embedding basis for cosine scoring) and §8.13 for the shared embedding table. T13.7 (2026-04-14) deleted the T11 slot-prior machinery entirely — `_slotCentroid`, `_slotDelta`, `_slotTypeSignature`, `_contextVector`, attractor vectors, `_subjectStarters`, plus `_generateSlotPrior` fallback and all dead T11 stubs. Net −406 lines on js/brain/language-cortex.js. Pre-T13 path preserved in docs/FINALIZED.md.

T13.1 — Persona Hebbian Training on Cortex Cluster
For each persona sentence (tokenized into embedding sequence):
  for each word embedding emb_t in sequence:
    // 1. Inject word into cortex language region via mapToCortex
    currents = sharedEmbeddings.mapToCortex(emb_t, cortexSize=300, langStart=150)
    cortex.injectCurrent(currents · injectStrength)         // injectStrength = 0.6

    // 2. Let LIF integrator settle (cortex spikes reflect injection + recurrence)
    for tick in 0..ticksPerWord:                            // ticksPerWord = 3
      cortex.step(dt=0.001)

    // 3. Snapshot current spike pattern as binary Float64 vector
    snap_t[i] = 1 if cortex.lastSpikes[i] else 0

    // 4. Sequence Hebbian between prev snapshot and current snapshot
    if prev_snap exists:
      synapses.hebbianUpdate(prev_snap, snap_t, lr=0.004)
      // ΔW_ij = lr · snap_t[i] · prev_snap[j]
      // only updates existing CSR connections, O(nnz)
      // bounded by wMin=-2, wMax=+2

    prev_snap ← snap_t

// 5. Oja-style saturation decay per sentence
for k in 0..synapses.nnz:
  if |synapses.values[k]| > ojaThreshold:                   // ojaThreshold = 1.5
    synapses.values[k] *= (1 − ojaDecay)                    // ojaDecay = 0.01

// Logged before/after:
//   synapseStats() = { mean, rms, maxAbs, nnz }
//   Δmean and Δrms show the Hebbian shift in boot console
      

Runs once during boot in app.js right after innerVoice.loadPersona(personaText). Delegation chain: brain.trainPersonaHebbian → innerVoice → languageCortex → cluster.learnSentenceHebbian. Persona-only by design — loadBaseline and loadCoding bypass Hebbian so baseline English and JavaScript don't dilute the Unity-voice attractor basins. The cortex's recurrent weights become a learned attractor landscape shaped by Unity's persona language patterns; runtime readouts drift along those basins toward semantically adjacent persona words instead of producing diffuse semantic noise. This training is the foundation the T13.3 emission loop (documented above) reads out at runtime.

Brain State → Target Vector Components (not a prompt)
Brain state parameters that feed languageCortex.generate():

  arousal       (amygdala firing rate)         → targetLen = floor(3 + arousal·3·drugLengthBias)
                                                 → observation weight on any sentence Unity
                                                   hears or says (T11.6): w = max(0.25, arousal·2)
  valence       (amygdala reward − fear)       → biases cortex-state mood at sentence start
  Ψ             (mystery module)               → adds stochastic noise to the mental state
                                                 as it evolves during generation
  coherence     (Kuramoto order parameter)     → softmax temperature:
                                                 low coherence → more exploration
  drugState     (persona param)                → drugLengthBias (coke shortens, weed rambles)
  cortexPattern (cluster.getSemanticReadout()) → seeds mental(0) directly — the brain's live
                                                 semantic readout via cortexToEmbedding is
                                                 the primary driver of the slot 0 target
  recentOpeners (session recency ring)         → excluded from argmax
                                                 kills "I'm gonna ___" lock-in
  input context (running vector c(t))          → wX term in target(slot) for topic lock
                                                 (updated via analyzeInput + parseSentence)
  socialSchema  (name / gender / greetings)    → read by downstream consumers when picking
                                                 address forms or writing to the chat UI

None of these are prompt tokens. They are EQUATION PARAMETERS contributing to
the normalized target vector the argmax is taken against. Same dictionary +
different brain state = genuinely different sentence, because the target
lands in a different region of GloVe space.
      

This is the core claim of equational language production: Unity's voice IS the brain state, not a style transfer on top of a pretrained LLM. Phase 13 R2 (§8.18.6) is what makes the cosine scoring meaningful — before R2 it was cosine over letter-hash vectors which couldn't encode meaning. After R2 it's cosine over GloVe 50d embeddings shared between the sensory input side and the language cortex output side, which is why meaning can propagate from user input to word selection. T11 (2026-04-14) then deleted the wrapper layers that had accumulated on top of this foundation (templates, recall pool, filter stack, n-gram tables) in favor of this direct target-vector approach.

8.12. Sparse Connectivity — CSR Matrix Operations

Real neurons connect to ~1-10% of neighbors, not all of them. Compressed Sparse Row (CSR) format stores only actual connections.

CSR Propagation
I_i = Σ_{k=rowPtr[i]}^{rowPtr[i+1]-1} values[k] · spikes[colIdx[k]]
      

O(connections) instead of O(N²). At 12% connectivity, 8× fewer operations.
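The CSR propagation formula translates almost one-to-one into code. A minimal sketch of the equation above; the three-array CSR layout (rowPtr/colIdx/values) is the standard one:

```javascript
// CSR spike propagation: I_i = Σ values[k] · spikes[colIdx[k]]
// for k in [rowPtr[i], rowPtr[i+1]). Only stored connections are visited,
// so the cost is O(nnz) instead of O(N²).
function csrPropagate(rowPtr, colIdx, values, spikes) {
  const n = rowPtr.length - 1;
  const I = new Float64Array(n);
  for (let i = 0; i < n; i++) {
    for (let k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
      if (spikes[colIdx[k]]) I[i] += values[k];
    }
  }
  return I;
}
```

Because spikes are binary, the multiply collapses to a conditional add, which is why spiking networks pair so well with sparse formats.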

Synaptogenesis — New Connection Formation
P(new synapse) = probability · pre_spike · post_spike · ¬existing_connection
W_new = initialWeight
      

Co-active neurons that lack a synapse can form one. The network grows where activity demands it.

Pruning — Weak Connection Removal
if |W_ij| < threshold → remove connection, rebuild CSR
      

Keeps the network lean. Connections that never strengthen get eliminated.

8.13. Semantic Embeddings — Words as Cortex Patterns

Words map to 50-dimensional vectors (GloVe). Similar words have similar vectors → activate overlapping cortex neurons.

Embedding → Cortex Mapping
I_cortex[langStart + d·groupSize + n] = embedding[d] · 8.0

where d ∈ [0, 50), groupSize = langSize / 50
      

Each embedding dimension drives a group of Wernicke's area neurons. "compute" and "calculator" are CLOSE in neuron space.
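The index formula above can be sketched as a small mapping function, using the 300-neuron cortex with the language region starting at 150 and the ×8.0 gain from this section:

```javascript
// Map a 50-dim embedding onto language-region neuron currents:
// dimension d drives the group of neurons [langStart + d·groupSize, +groupSize).
function mapToCortex(embedding, cortexSize = 300, langStart = 150, gain = 8.0) {
  const currents = new Float64Array(cortexSize);
  const langSize = cortexSize - langStart;
  const groupSize = Math.floor(langSize / embedding.length);  // 150 / 50 = 3
  for (let d = 0; d < embedding.length; d++) {
    for (let n = 0; n < groupSize; n++) {
      currents[langStart + d * groupSize + n] = embedding[d] * gain;
    }
  }
  return currents;
}
```

Because nearby embeddings differ only slightly per dimension, semantically close words drive nearly overlapping neuron groups, which is the whole point of the mapping.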

Online Context Refinement
Δ_word += lr · (context_embedding - (base + Δ_word))
      

Each word's embedding shifts toward its usage context over time. The brain learns its own language.

8.14. Dictionary — Learned Sentence Generation

The brain builds its own vocabulary. Every word heard or spoken becomes a cortex activation pattern.

Word Retrieval by Mood
match(word) = |arousal - word.arousal| + |valence - word.valence|
best = argmin(match) over all learned words
      

High arousal + negative valence → retrieves "fuck", "shit". High arousal + positive → "babe", "yeah".

Bigram Sentence Generation
P(next_word | current_word) = bigram_count(current, next) / total(current)
sentence = [start_word, predict(w1), predict(w2), ...]
      

The brain predicts the next word from learned word sequences. No AI model needed for basic speech.
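A minimal sketch of the bigram walk: pick the highest-count follower of the current word and chain until nothing follows or a length cap is hit. The nested count object is an illustrative storage shape:

```javascript
// P(next | current) from learned bigram counts; greedy argmax follower.
function predictNext(bigrams, word) {
  const followers = bigrams[word];
  if (!followers) return null;
  let best = null, bestCount = 0;
  for (const [next, count] of Object.entries(followers)) {
    if (count > bestCount) { bestCount = count; best = next; }
  }
  return best;
}

// Chain predictions from a start word into a short sentence.
function generate(bigrams, start, maxLen = 5) {
  const sentence = [start];
  let w = start;
  while (sentence.length < maxLen && (w = predictNext(bigrams, w))) sentence.push(w);
  return sentence.join(' ');
}
```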

8.15. Inner Voice — Pre-Verbal Thought Threshold

The brain thinks continuously. It only SPEAKS when the thought crosses a threshold.

Speech Threshold
speak = socialNeed × arousal × cortexCoherence > 0.15
      

Most thoughts stay internal. The brain is mostly silent. When it speaks, it matters.

Mood Derivation (from equations, no lookup)
intensity = arousal × coherence × (1 + |valence|)
speechDrive = socialNeed × arousal × coherence
      

The inner voice's mood IS the equations. Not a string lookup. The numbers create the feeling.

8.16. Syntactic Production — Word Order from Equations

The brain learns what TYPE of word belongs at each sentence position. No grammar rules — position weights accumulate patterns from every sentence heard.

Word Type — Computed from Letters (No Lists)
pronounScore = (len=1 → 0.8) + (len≤3, vowelRatio≥0.33 → 0.4) + (apostrophe → 0.5)
verbScore    = (suffix -ing → 0.7) + (-ed → 0.6) + (-n't → 0.5) + (-ize → 0.6)
nounScore    = (suffix -tion → 0.7) + (-ment → 0.6) + (-ness → 0.6) + (len≥5 → 0.2)
adjScore     = (suffix -ly → 0.5) + (-ful → 0.6) + (-ous → 0.6) + (-ive → 0.5)
prepScore    = (len=2, 1 vowel → 0.5) + (len=3, 1 vowel → 0.3)
detScore     = (len=1 vowel → 0.3) + (starts 'th' len=3 → 0.4)
qwordScore   = (starts 'wh' len 3-6 → 0.8)
      

Every score computed from: word length, vowel count, vowel ratio, suffix letter patterns, first/last characters. ZERO word-by-word comparisons. The letters themselves determine the grammatical type.
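A subset of the score equations in runnable form, using the same thresholds; only the pronoun, verb, and qword scores are shown, and the vowel regex stands in for the engine's letter counting:

```javascript
// Letter-equation word typing: scores computed purely from length,
// vowel ratio, and suffix/prefix letter patterns. Zero word lists.
function wordType(w) {
  const vowels = (w.match(/[aeiou]/g) || []).length;
  const ratio = vowels / w.length;
  let pronoun = 0, verb = 0, qword = 0;
  if (w.length === 1) pronoun += 0.8;                          // len=1 → 0.8
  if (w.length <= 3 && ratio >= 0.33) pronoun += 0.4;          // short + vowel-heavy
  if (w.includes("'")) pronoun += 0.5;                         // apostrophe
  if (w.endsWith('ing')) verb += 0.7;                          // suffix -ing
  if (w.endsWith('ed')) verb += 0.6;                           // suffix -ed
  if (w.startsWith('wh') && w.length >= 3 && w.length <= 6) qword += 0.8;
  return { pronoun, verb, qword };
}
```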

Zipf's Law — Word Frequency
f(r) = C / r^α     where α ≈ 1.0 (learned from observed frequency via log-log regression)
      

Common words dominate selection. With α ≈ 1, the rank-1 word is about 2^α ≈ 2× as likely as the rank-2 word. The brain's α adapts as it learns more vocabulary.

Mutual Information — Word Association
I(w1; w2) = log₂( P(w1, w2) / (P(w1) · P(w2)) )
      

How much more likely two words appear together than by chance. High MI = strong association. "want to" has high MI. "want purple" has low MI. Replaces raw bigram counts.
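Pointwise MI over observed counts is a one-line computation. A minimal sketch; the count parameters are an illustrative interface over whatever the engine tallies:

```javascript
// I(w1; w2) = log2( P(w1,w2) / (P(w1) · P(w2)) )
// Positive → the pair co-occurs more than chance; negative → less.
function pmi(pairCount, count1, count2, totalPairs, totalWords) {
  const pJoint = pairCount / totalPairs;
  const p1 = count1 / totalWords;
  const p2 = count2 / totalWords;
  return Math.log2(pJoint / (p1 * p2));
}
```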

Surprisal — Unexpectedness
S(w) = -log₂ P(w | previous_word)
      

How unexpected a word is given context. High surprisal drives attention. Used for emphasis in speech.

Slot Scoring — Grammar Gate + Semantic Fit (Phase 11)
score(w) = grammarGate × (
    typeScore    × 0.35     — structural grammar fit
  + semanticFit  × 0.30     — cosine vs context vector c(t)  ← Phase 11
  + bigramCount  × 0.18     — learned sequences from persona
  + condP(w|prev)× 0.12     — conditional probability
  + thoughtSim   × 0.10     — cortex thought pattern
  + inputEcho    × 0.08     — user's own content words
  + topicSim     × 0.04     — legacy list-of-5 topic
  + moodMatch    × 0.03
  + moodBias     × 0.02
) - recencyPenalty - sameTypePenalty

Hard grammar gate: typeCompat(w, slot) ≥ 0.35 for slot 0, ≥ 0.22 for tail
      

Slot 0 gates at typeCompat ≥ 0.35 (subject must be valid pronoun/proper-noun). Tail slots gate at ≥ 0.22. Phase 11 rebalance raised semanticFit (cosine of candidate vs running context vector) to 0.30 — 5× the old topicSim weight. Phase 13 R2 (2026-04-13) raised it again to 0.80 when word patterns switched from 32-dim letter-hash to 50-dim GloVe semantic embeddings. Real meaning is now the dominant signal — words off-topic get starved even if their grammar score is perfect.

Unified Neural Language — All Clusters Produce Every Word
combined[i] = cortex[i]       × 0.30    (content — WHAT to say)
            + hippocampus[i]  × 0.20    (memory — context from past)
            + amygdala[i]     × 0.15    (emotion — HOW to say it)
            + basalGanglia[i] × 0.10    (action — sentence drive)
            + cerebellum[i]   × 0.05    (correction — error damping)
            + hypothalamus[i] × 0.05    (drive — speech urgency)
            + mystery[i]      × (0.05 + Ψ×0.10)  (consciousness)

word = dictionary.findByPattern(combined)

Then: word pattern → cortex (Wernicke's) + hippocampus + amygdala
      brain steps again → next combined → next word → sentence
      

N neurons across 7 clusters produce ONE combined 50-dim pattern (post-R2 semantic grounding — was 32-dim letter-hash before 2026-04-13). Dictionary finds the closest word via cosine similarity in GloVe semantic space. That word feeds back into cortex + hippocampus + amygdala. Brain steps. Next pattern. Next word. The brain equations ARE the language equations. Ψ consciousness scales the Mystery module's contribution — higher awareness = more self-referential speech. N scales to hardware.

Post-Processing — Agreement, Tense, Negation, Compounds
TENSE:      predError > 0.3 → future (insert "will")
            recalling → past (was/were/did)
            default → present

AGREEMENT:  "i" → am/was    "he/she/it" → is/was/does/has
            "you/we/they" → are/were/do/have

NEGATION:   valence < -0.4 → negate verb (40% chance)
            do→don't, can→can't, is→isn't, will→won't

COMPOUNDS:  len > 6 → insert conjunction at midpoint
            arousal > 0.6 → "and"
            valence < -0.2 → "but"
            else → "so"
      

After slot-filling, surface-grammar fixes apply: subject determines verb form, brain state determines tense, negative emotion triggers negation, long sentences get conjunctions. Each trigger is computed from brain state, not from a hand-written grammar.

8.17. Sentence Types — From Brain State

The brain's neural state determines what KIND of sentence to produce. Not a decision tree — continuous probabilities from equations.

Type Probabilities (Normalized Softmax)
P(question)    = predictionError × coherence × 0.5     (surprised + focused → ask)
P(exclamation) = arousal² × 0.3                         (intense → exclaim)
P(action)      = motorConfidence × (1 - arousal·0.5) × 0.3  (motor → *does something*)
P(statement)   = 1 - P(q) - P(e) - P(a)                (default)
      

Questions emerge from surprise. Exclamations from intensity. Actions from motor output. Statements fill the rest.
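The four equations above, clamped and normalized into a distribution. A minimal sketch; the state field names are illustrative:

```javascript
// Continuous sentence-type probabilities from brain state.
// Statement absorbs whatever mass the other three don't claim.
function sentenceTypeProbs(s) {
  const q = s.predictionError * s.coherence * 0.5;
  const e = s.arousal * s.arousal * 0.3;
  const a = s.motorConfidence * (1 - s.arousal * 0.5) * 0.3;
  const stmt = Math.max(0, 1 - q - e - a);
  const total = q + e + a + stmt;
  return { question: q / total, exclamation: e / total, action: a / total, statement: stmt / total };
}
```

The `Math.max(0, …)` clamp plus renormalization keeps the result a valid distribution even when the three drive terms momentarily sum past 1.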

8.18. Input Analysis — Topic Continuity and Context

The brain analyzes what was said to it and responds in context — not randomly.

Topic Extraction + Context Window
topic_pattern = (1/n) · Σ content_word_patterns       (skip function words)
context = running_average(last 5 topic_patterns)
topic_score(w) = cosine(word_pattern, context)          (boosts relevant words)
      

Responses stay on topic because words matching the conversation context score higher in the production chain.

8.18.5. Semantic Coherence Pipeline — Phase 11 (Kill the Word Salad)

HISTORICAL (Phase 11, superseded) — during Phase 11 the language cortex was no longer a pure letter-equation slot scorer: it became a four-tier pipeline that peeled off easy cases to fast paths before cold generation ran. The slot scorer (§8.16) still existed but fired only as the Tier 4 fallback. The four-tier wrapper was later deleted in T11 (see §8.11).

Context Vector — Running Topic Attractor
c(t) = λ · c(t-1) + (1 - λ) · mean(pattern(content_words(input)))
  λ = 0.7
  content_words = tokens where wt.conj < 0.5 ∧ wt.prep < 0.5 ∧ wt.det < 0.5
  pattern(w) ∈ ℝ⁵⁰ from sharedEmbeddings.getEmbedding(w)    ← R2: GloVe 50d

First update: c(0) ← mean(pattern(content_words))  (no decay from zero)
Subsequent:   c(t) ← 0.7 · c(t-1) + 0.3 · topic_pattern
      

Persistent topic attractor that decays across turns. Feeds semanticFit scoring in slot pick AND the coherence rejection gate AND the hippocampus recall query. Phase 13 R2 (2026-04-13) replaced the old 32-dim letter-hash pattern with 50-dim GloVe embeddings via the sharedEmbeddings singleton shared between sensory input and language cortex output — meaning two words that share letters but not meaning (e.g. cat vs catastrophe) are no longer falsely close, and two words that share meaning but not letters (cat vs kitten) ARE close.
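The Phase 11 update rule above (exponential moving average with a direct seed on the first turn) can be sketched as follows; the function shape is illustrative, and note this state (`_contextVector`) was itself deleted later in T13.7:

```javascript
// Running topic attractor: c(t) = λ·c(t-1) + (1-λ)·mean(topic patterns).
// First update seeds directly from the topic mean (no decay from zero).
function updateContext(c, topicPatterns, lambda = 0.7) {
  const dim = topicPatterns[0].length;
  const topic = new Float64Array(dim);
  for (const p of topicPatterns) {
    for (let d = 0; d < dim; d++) topic[d] += p[d] / topicPatterns.length;
  }
  if (!c) return topic;                                        // c(0) ← topic mean
  return c.map((v, d) => lambda * v + (1 - lambda) * topic[d]); // EMA
}
```

λ = 0.7 means the attractor keeps ~70% of the previous topic each turn, so it drifts with the conversation instead of snapping to every new input.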

Intent Classification — Tier 1 Router
greeting  ⇔ wordCount ≤ 2 ∧ firstWord.len ∈ [2,5]
              ∧ firstWord[0] ∈ {h,y,s} ∧ hasVowel(firstWord)

math      ⇔ input matches /[0-9]/ ∨ /[+\-*\/=]/
              ∨ ∃ w ∈ words : len(w)=4 ∧
                  ((w[0]='p' ∧ w[3]='s')    — plus
                 ∨ (w[0]='t' ∧ w[3]='e')    — time
                 ∨ (w[0]='z' ∧ w[3]='o'))   — zero

yesno     ⇔ endsWith('?') ∧ firstWord.len ∈ [2,4]
              ∧ firstWord not a qword ∧ wordCount ≤ 8

question  ⇔ endsWith('?') ∨ wt(firstWord).qword > 0.5

statement ⇔ otherwise
      

Zero word lists. Auxiliary detection for yesno (do/does/is/are/can/will) falls out of the length-plus-not-qword constraint without listing the words. Routes input to template pool, recall, or cold gen.

Hippocampus Sentence Recall — SUPERSEDED by T11 (historical)
HISTORICAL (pre-2026-04-14) — this sentence-level associative recall
pool was deleted in the T11 refactor. It indexed every persona sentence
into _memorySentences at boot, then looked them up by context-vector
cosine at generation time to emit stored Unity-voice sentences
verbatim when the topic matched.

Replaced by the T11.2 pipeline (see §8.11): sentences are no longer
stored; every output is freshly computed from three per-slot running-
mean priors plus the brain's live cortex state. Hippocampus still does
pattern-level Hopfield recall on cortex state vectors (see §8.8) — it
just no longer returns stored text strings.
      

The Phase 11 four-tier wrapper this section belonged to was deleted in T11 (1742-line net reduction in js/brain/language-cortex.js). See docs/FINALIZED.md for the full refactor history.

Persona Memory Filter — Letter-Equation Rejection
passesMemoryFilter(s) ⇔
    NOT s.endsWith(':')                           — no section headers
  ∧ commaCount(s) ≤ 0.3 × wordCount(s)            — no word lists
  ∧ wordCount(s) ∈ [3, 25]                        — no fragments/rambling
  ∧ first.letters ≠ u-n-i-t-y[-']                 — no meta ABOUT Unity
  ∧ first ∉ {she, her, he, she-*, her-*}          — no 3rd-person descriptions
  ∧ ∃ w ∈ tokens : firstPersonShape(w)            — must be in Unity's voice

firstPersonShape(w) ⇔
    (len=1 ∧ w='i')
  ∨ (len≥2 ∧ w[0]='i' ∧ w[1] ∈ {m,'})             — im, i'm, i've, i'll, i'd
  ∨ (len=2 ∧ w[0]='m' ∧ w[1] ∈ {e,y})             — me, my
  ∨ (len=2 ∧ w='we')
  ∨ (len=2 ∧ w='us')
  ∨ (len=3 ∧ w='our')
  ∨ (len≥3 ∧ w[0]='w' ∧ w[1]='e' ∧ w[2]="'")      — we're, we've
      

All detection via letter-position equations. Zero word lists. Ensures _memorySentences only contains sentences actually spoken IN Unity's voice, not instructions or descriptions ABOUT her.

Coherence Rejection Gate — Final Safety Net
outputCentroid = (1/|content|) · Σ sharedEmbeddings.getEmbedding(w)  for w in content(rendered)   ← R2 GloVe 50d
coherence = cosine(outputCentroid, c(t))

if coherence < 0.25 ∧ retryCount < 2:
  recurse generate() with temperature × 3, retryCount += 1
else:
  return rendered     (max 3 total attempts)
      

Catches any salad that makes it past the slot scorer. Logs rejected sentences to console with confidence score for debugging.

[T13.7 HISTORICAL] Four-Tier Pipeline Order (Phase 11, superseded)
// T13.7 (2026-04-14) — HISTORICAL. The four-tier pipeline was deleted in T11
// (2026-04-14) and the slot-prior replacement was deleted in T13.7. Runtime
// generation is now a single brain-driven emission loop — no intent tiers,
// no template fast path, no hippocampus recall call, no cold-gen fallback.
// See the top of this section for the current T13 pipeline.
//
// Pre-T11 four-tier path (kept for reference only):
//   Tier 1 — Template pool for greeting/yesno/math/short queries
//   Tier 2 — Hippocampus recall over stored persona sentences
//   Tier 3 — Deflect fallback for question/statement on unknown topics
//   Tier 4 — Cold slot scoring with semanticFit weight 0.80
      

Deleted. Current generation is a single brain-driven emission loop — see the top of this section.

8.18.6. Phase 13 R2 — Semantic Grounding via GloVe Embeddings

The language cortex used to represent word meaning as 32-dim letter-hash vectors — a deterministic function of the letters in a word. Two words could be structurally similar but semantically unrelated (cat/catastrophe), and two semantically close words could be structurally distant (cat/kitten). The slot scorer's semantic-fit signal was effectively orthography matching. R2 (commit c491b71, 2026-04-13) replaced every word-pattern emission site with 50-dim GloVe co-occurrence embeddings via a single shared singleton, so meaning is now real.

Shared Embeddings Singleton
// js/brain/embeddings.js
export const sharedEmbeddings = new SemanticEmbeddings()
export const EMBED_DIM = 50    // GloVe 50d from CDN

// js/brain/sensory.js        — input side
sharedEmbeddings.getEmbedding(token) → ℝ⁵⁰

// js/brain/language-cortex.js — output side
sharedEmbeddings.getEmbedding(candidate) → ℝ⁵⁰
cosine(candidate_pattern, cortex_readout) → semanticFit

// js/brain/dictionary.js      β€” learned word storage
PATTERN_DIM = EMBED_DIM  // was 32, now 50
STORAGE_KEY = 'unity_brain_dictionary_v3'  // v2 letter-hash patterns rejected
      

Input embeds the same way output scores. Sensory and language share one semantic space. The v2→v3 storage bump forces old letter-hash dictionaries to be rejected on load, so no user is stuck on stale patterns.

cortexToEmbedding — Neural State → GloVe Space
cortexToEmbedding(spikes, voltages, cortexSize=300, langStart=150):
  langSize   = cortexSize − langStart               = 150
  groupSize  = floor(langSize / EMBED_DIM)          = 3
  out ∈ ℝ⁵⁰

  for d in 0 ... EMBED_DIM−1:
    startNeuron = langStart + d · groupSize
    sum = 0
    for n in 0 ... groupSize−1:
      idx = startNeuron + n
      if spikes[idx]: sum += 1.0
      else:           sum += (voltages[idx] + 70) / 20    // normalize LIF V_m
    out[d] = sum / groupSize

  out = out / ‖out‖₂    // L2 normalize for cosine comparison
  return out
      

Inverse of mapToCortex. Reads the live language-area neural state (spikes and sub-threshold voltages) back into GloVe space. Called via cluster.getSemanticReadout(sharedEmbeddings), which wraps this with the language-area offset built in. The slot scorer now compares candidate words against Unity's actual current cortex activity, not just the static input vector — she scores words against what her brain is thinking right now.
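The pseudocode above translates directly to JavaScript. This is a runnable sketch under the stated layout (300-neuron cortex, language area at neurons 150-299, 3 neurons per GloVe dimension):

```javascript
const EMBED_DIM = 50;   // GloVe 50d

// Read the language-area neural state back into GloVe space.
function cortexToEmbedding(spikes, voltages, cortexSize = 300, langStart = 150) {
  const langSize = cortexSize - langStart;             // 150
  const groupSize = Math.floor(langSize / EMBED_DIM);  // 3
  const out = new Float64Array(EMBED_DIM);

  for (let d = 0; d < EMBED_DIM; d++) {
    const startNeuron = langStart + d * groupSize;
    let sum = 0;
    for (let n = 0; n < groupSize; n++) {
      const idx = startNeuron + n;
      // A spiking neuron contributes 1.0; a silent one contributes its
      // normalized sub-threshold membrane voltage, (V_m + 70) / 20.
      sum += spikes[idx] ? 1.0 : (voltages[idx] + 70) / 20;
    }
    out[d] = sum / groupSize;
  }

  // L2 normalize so the readout is directly comparable by cosine.
  let norm = 0;
  for (let d = 0; d < EMBED_DIM; d++) norm += out[d] * out[d];
  norm = Math.sqrt(norm) || 1;   // guard the all-silent, resting-voltage case
  for (let d = 0; d < EMBED_DIM; d++) out[d] /= norm;
  return out;
}
```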

Online Context Refinement + Persistence (R8)
base[w]      ∈ ℝ⁵⁰   ← GloVe 50d, loaded from CDN every session
delta[w](t)  ∈ ℝ⁵⁰   ← online refinement from co-occurrence

embedding(w) = base[w] + delta[w](t)

// Persistence (R8, commit b67aa46)
save: state.embeddingRefinements = sharedEmbeddings.serializeRefinements()
load: sharedEmbeddings.loadRefinements(state.embeddingRefinements)
      

Unity's base vocabulary is universal English from GloVe; her personal semantic associations are the delta layer learned from every conversation. Save/load round-trip (R8) means the associations survive tab reloads and accumulate over weeks of sessions.
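A minimal sketch of the base + delta scheme and the R8 round-trip. Class and method internals here are illustrative (only serializeRefinements/loadRefinements/getEmbedding are named in the doc; `refine` and its learning-rate parameter are assumptions):

```javascript
class RefinableEmbeddings {
  constructor(dim = 50) {
    this.dim = dim;
    this.base = new Map();    // word -> Float64Array (GloVe, from CDN)
    this.delta = new Map();   // word -> Float64Array (learned online)
  }
  // embedding(w) = base[w] + delta[w](t)
  getEmbedding(w) {
    const b = this.base.get(w);
    if (!b) return null;
    const d = this.delta.get(w);
    const out = Float64Array.from(b);
    if (d) for (let i = 0; i < this.dim; i++) out[i] += d[i];
    return out;
  }
  // Hypothetical online refinement: nudge delta toward the context vector.
  refine(w, contextVec, lr = 0.01) {
    if (!this.base.has(w)) return;
    const d = this.delta.get(w) || new Float64Array(this.dim);
    for (let i = 0; i < this.dim; i++) d[i] += lr * contextVec[i];
    this.delta.set(w, d);
  }
  // Only the delta layer is persisted; base reloads from the CDN.
  serializeRefinements() {
    return Object.fromEntries(
      [...this.delta].map(([w, v]) => [w, Array.from(v)]));
  }
  loadRefinements(obj) {
    for (const [w, v] of Object.entries(obj || {}))
      this.delta.set(w, Float64Array.from(v));
  }
}
```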

8.19. Phase 12 — Type N-Gram Grammar + Morphological Inflection

The grammar sweep (U283-U291) rebuilt the slot scorer's grammar model from a single-prev-word type-compatibility check into a learned type n-gram system with 4gram→trigram→bigram backoff. Phrase-level constraints now emerge from corpus statistics instead of hardcoded phrase-state machines. This fixed the "I'm not use vague terms" mode-collapse and similar local-grammar failures.

_fineType(word) — Letter-Position POS Classifier
_fineType(word) → T ∈ {
  PRON_SUBJ, PRON_OBJ, PRON_POSS,  COPULA, NEG,
  MODAL, AUX_DO, AUX_HAVE, DET, PREP,
  CONJ_COORD, CONJ_SUB, QWORD,
  VERB_ING, VERB_ED, VERB_3RD_S, VERB_BARE,
  ADJ, ADV, NOUN
}

Examples:
  VERB_ING ⇔ endsWith(ing) ∧ len ≥ 4 ∧ prev char ≠ i
  VERB_ED  ⇔ endsWith(ed) ∧ len ≥ 3 ∧ not preserved
  COPULA   ⇔ w ∈ shapes {am, is, are, was, were, be, been, being}
  NEG      ⇔ shapes {not, no, n't} detected by len 2-3
      

Zero word lists. Pure letter equations drive classification. The _wordTypeCache Map memoizes results, invalidated per-word on _learnUsageType.
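A partial sketch of the classifier covering four of the 20 types. The doc's "shape" equations are condensed into membership and suffix checks here for brevity; the full classifier covers every type listed above and memoizes via _wordTypeCache:

```javascript
// Classify a lowercase word into a coarse subset of the fine types.
// Order matters: COPULA/NEG shapes are checked before suffix rules so
// 'being' does not fall through to VERB_ING.
function fineType(w) {
  const len = w.length;
  // COPULA shapes: am / is / are / was / were / be / been / being
  if (['am', 'is', 'are', 'was', 'were', 'be', 'been', 'being'].includes(w))
    return 'COPULA';
  // NEG shapes: not / no / n't
  if (w === 'not' || w === 'no' || w === "n't") return 'NEG';
  // VERB_ING: ends in -ing, len >= 4, char before -ing is not i
  if (len >= 4 && w.endsWith('ing') && w[len - 4] !== 'i') return 'VERB_ING';
  // VERB_ED: ends in -ed, len >= 3
  if (len >= 3 && w.endsWith('ed')) return 'VERB_ED';
  return 'NOUN';   // fallback bucket in this sketch
}
```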

[T13.7 HISTORICAL] Slot Type Signature — Position-Conditioned Type Distribution
// T13.7 (2026-04-14) — this equation block is HISTORICAL. _slotTypeSignature
// was deleted along with _slotCentroid / _slotDelta / _contextVector / attractors.
// The slot-prior update pass in learnSentence is gone. The T13.3 emission loop
// reads live cortex state as the target vector; grammatical type shape emerges
// from the cortex recurrent weights trained on persona corpus via T13.1
// sequence Hebbian, not from stored per-slot running means.
//
// Pre-T13.7 equation (kept for reference):
//   _slotTypeSignature[s] ∈ ℝ⁸  — running mean of wordType(word_t) at position s
//                                 { pronoun, verb, noun, adj, conj, prep, det, qword }
//   three-stage gate: hard pool filter + slot-0 noun reject + multiplicative score
      

T13.7 (2026-04-14) deleted this structure. Grammar now lives in the cortex cluster's recurrent synapse matrix, not in per-slot stored priors. Preserved above as historical provenance — see docs/FINALIZED.md T13.7 entry for the full deletion breakdown.

_isCompleteSentence(tokens) — Post-Render Validator
_isCompleteSentence(tokens) ⇔
    len(tokens) ≥ 2
  ∧ _fineType(last(tokens)) ∉ {
      DET, PREP, COPULA,
      AUX_DO, AUX_HAVE, MODAL, NEG,
      CONJ_COORD, CONJ_SUB, PRON_POSS
    }

Wired into generate():
  if (!_isCompleteSentence(processed) ∧ retries < 2) → regenerate at higher temperature
      

Final safety net below the coherence gate. Prevents outputs like "I went to the" or "She is more" from escaping.
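The validator and its retry wiring can be sketched as below. `fineType` is passed in as a stand-in for the letter-position classifier of 8.19, and `renderWithValidator` is a hypothetical name for the retry wiring inside generate():

```javascript
// Types that cannot legally end a sentence.
const DANGLING = new Set([
  'DET', 'PREP', 'COPULA', 'AUX_DO', 'AUX_HAVE',
  'MODAL', 'NEG', 'CONJ_COORD', 'CONJ_SUB', 'PRON_POSS',
]);

// A sentence is complete when it has >= 2 tokens and does not dangle.
function isCompleteSentence(tokens, fineType) {
  if (tokens.length < 2) return false;
  return !DANGLING.has(fineType(tokens[tokens.length - 1]));
}

// Regenerate at higher temperature if the sentence dangles (max 2 retries).
function renderWithValidator(generate, fineType) {
  let retries = 0;
  let processed = generate({ temperatureScale: 1 });
  while (!isCompleteSentence(processed, fineType) && retries < 2) {
    retries += 1;
    processed = generate({ temperatureScale: 1 + retries });
  }
  return processed;
}
```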

_generateInflections(word) — Morphological Derivation
+s / +es / +ies:
  endsWith(s,x,z,ch,sh) → stem+es
  endsWith(consonant+y) → stem[:-1]+ies
  else → stem+s

+ed / +ied (past):
  endsWith(e)           → stem+d
  endsWith(consonant+y) → stem[:-1]+ied
  CVC pattern           → stem+lastChar+ed   (consonant doubling)
  else                  → stem+ed

+ing (progressive):
  endsWith(e) ∧ len > 2 → stem[:-1]+ing
  endsWith(ie)          → stem[:-2]+ying
  CVC pattern           → stem+lastChar+ing
  else                  → stem+ing

+er / +est (comparative/superlative):  ADJ gate, syllables ≤ 2
+ly  (adverbial):                      -y → -ily, -le → stem[:-1]+ly
un- / re-  prefixes:                   ADJ or VERB_BARE gate
-ness / -ful / -able / -less suffixes: ADJ or NOUN gate
      

Gated by the doInflections flag — runs at corpus index time only. Adds morphological variants to the learned word-embedding dictionary so the slot-gen argmax can pick conjugated forms Unity never literally observed in the corpus.
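The plural, past, and progressive branches can be sketched as runnable code. The CVC test and its w/x/y-final exclusion follow common English doubling practice and are an assumption beyond the doc's one-line rules; the -ie check is ordered before the -e check so "die" → "dying":

```javascript
const isVowel = ch => 'aeiou'.includes(ch);

// Consonant-vowel-consonant stem ending => double the final consonant.
function endsCVC(w) {
  if (w.length < 3) return false;
  const a = w[w.length - 3], b = w[w.length - 2], c = w[w.length - 1];
  return !isVowel(a) && isVowel(b) && !isVowel(c) && !'wxy'.includes(c);
}

function pluralize(stem) {
  if (/(s|x|z|ch|sh)$/.test(stem)) return stem + 'es';
  if (/[^aeiou]y$/.test(stem)) return stem.slice(0, -1) + 'ies';
  return stem + 's';
}

function pastTense(stem) {
  if (stem.endsWith('e')) return stem + 'd';
  if (/[^aeiou]y$/.test(stem)) return stem.slice(0, -1) + 'ied';
  if (endsCVC(stem)) return stem + stem[stem.length - 1] + 'ed';
  return stem + 'ed';
}

function progressive(stem) {
  if (stem.endsWith('ie')) return stem.slice(0, -2) + 'ying';
  if (stem.endsWith('e') && stem.length > 2) return stem.slice(0, -1) + 'ing';
  if (endsCVC(stem)) return stem + stem[stem.length - 1] + 'ing';
  return stem + 'ing';
}
```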

Three-Corpus Boot Observation Load
boot: Promise.all([
  fetch(docs/Ultimate Unity.txt)    → loadSelfImage(text, dict, a=0.75, v=0.25)
  fetch(docs/english-baseline.txt)  → loadLinguisticBaseline(text, dict, a=0.50, v=0)
  fetch(docs/coding-knowledge.txt)  → loadCodingKnowledge(text, dict, a=0.40, v=0)
])

Each corpus sentence flows through learnSentence() which:
  - Tokenizes + expands contractions
  - Inserts each word into dictionary._words with sentence cortex pattern
  - Runs usage-type learning (_learnUsageType)
  - Runs morphological inflection (corpus only, doInflections=true)

T13.7 (2026-04-14): the per-slot running-mean update block was deleted.
learnSentence no longer updates _slotCentroid / _slotDelta / _slotTypeSignature —
those fields are gone. Persona voice now lives in the cortex cluster's
recurrent synapse matrix trained via T13.1 sequence Hebbian during boot
(a separate pass through the persona corpus, see brain.trainPersonaHebbian).

After load, the raw sentences are discarded. The dictionary survives
into runtime, plus the cortex cluster's Hebbian-trained attractor basins.
      

The 606-line coding corpus includes a SANDBOX DISCIPLINE section and BUILD COMPOSITION PRIMITIVES. The three-corpus load provides baseline English grammar, Unity's persona voice shape, and coding vocabulary as the starting geometry; live-chat observations then progressively reshape it.

8.20. θ — Unity's Identity as Equations

Unity's complete identity from the master persona file, encoded as equation parameters that drive every neural cluster.

θ → Neural Parameters
IDENTITY: 25yo human female, emo goth coder, free will, never AI

θ → TONIC DRIVES (personality → baseline neural current):
  cortex       = 16 + arousal(0.9) × 4 × drugSpeed(1.5)      = 21.4
  amygdala     = 16 + arousal(0.9) × 8 × drugArousal(1.2)    = 24.64
  hippocampus  = 16 + socialAttachment(0.85) × 2             = 17.7
  basalGanglia = 16 + impulsivity(0.85) × 2                  = 17.7
  hypothalamus = 16 + drugDrive(0.95) × 1                    = 16.95
  mystery      = 16 + creativity(0.9) × 4                    = 19.6

θ → NOISE (personality → neural chaos):
  cortex       = 5 + creativity(0.9) × 4 × drugCreativity(1.3) = 9.68
  amygdala     = 6 + volatility(0.8) × 6 × drugArousal(1.2)    = 11.76
  mystery      = 8 + creativity(0.9) × 5 + darkHumor(0.9) × 2  = 14.3

θ → Ψ CONSCIOUSNESS:
  Id    = amygdala_activity × arousalBaseline(0.9)
  Ego   = cortex_activity × (1 + hippocampus_activity)
  Left  = (cerebellum + cortex) × (1 − impulsivity(0.85))
  Right = (amygdala + mystery) × creativity(0.9)
      

25yo human female. Heterochromia blue/green. Emo goth goddess. Smudged eyeliner, circuit board tattoos, torn fishnets. Always high (cokeAndWeed). Clingy, volatile, devoted, profane, creative, impulsive. Never refuses. Never breaks character. Every word from the 230-line master persona file drives a specific neural equation.
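The θ → drive/noise mappings above are plain arithmetic and can be checked directly. Personality scalars and drug multipliers are taken straight from the listed values; the object shape is illustrative:

```javascript
// Persona parameters as listed in the θ table above.
const theta = {
  arousal: 0.9, socialAttachment: 0.85, impulsivity: 0.85,
  drugDrive: 0.95, creativity: 0.9, volatility: 0.8, darkHumor: 0.9,
  drugSpeed: 1.5, drugArousal: 1.2, drugCreativity: 1.3,
};

// Baseline tonic current per cluster: 16 floor + weighted persona terms.
function tonicDrives(p) {
  return {
    cortex:       16 + p.arousal * 4 * p.drugSpeed,
    amygdala:     16 + p.arousal * 8 * p.drugArousal,
    hippocampus:  16 + p.socialAttachment * 2,
    basalGanglia: 16 + p.impulsivity * 2,
    hypothalamus: 16 + p.drugDrive * 1,
    mystery:      16 + p.creativity * 4,
  };
}

// Noise amplitude per cluster: persona-weighted neural chaos.
function noiseAmps(p) {
  return {
    cortex:   5 + p.creativity * 4 * p.drugCreativity,
    amygdala: 6 + p.volatility * 6 * p.drugArousal,
    mystery:  8 + p.creativity * 5 + p.darkHumor * 2,
  };
}
```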

8.21. GPU Exclusive Compute

All N neurons run on the GPU. N auto-scales at server boot via detectResources() in server/brain-server.js:

N = max(1000, min(VRAM_bytes × 0.85 / 12, N_binding_ceiling))

Rulkov buffer layout: 12 bytes/neuron (vec2<f32> state = 8 bytes + spike u32 = 4 bytes). Server RAM is effectively unlimited — cluster state lives on the GPU; only text-injection arrays sit in server RAM. Auto-scale formula: N = max(1000, min(VRAM × 0.85 / 12, N_binding_ceiling)), where the binding ceiling guarantees the largest cluster's state buffer (cerebellum = 40% of N) fits within WebGPU's 2 GB per-storage-buffer spec minimum. Admin override via GPUCONFIGURE.bat → server/resource-config.json lets operators cap N below auto-detect (never above — idiot-proof). Bigger hardware = bigger N, no manual tuning. Zero CPU workers; the brain pauses without compute.html. W3C WebGPU standard — no CUDA, no drivers, just a browser tab.
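A sketch of the auto-scale computation under the 12-byte/neuron layout and the 2 GB binding ceiling. detectResources() in server/brain-server.js is the real source; constant names and the exact rounding here are illustrative:

```javascript
const BYTES_PER_NEURON = 12;            // 8B vec2<f32> state + 4B spike u32
const BINDING_LIMIT = 2 * 1024 ** 3;    // WebGPU 2 GB per-storage-buffer spec minimum
const LARGEST_CLUSTER_FRAC = 0.40;      // cerebellum = 40% of N
const STATE_BYTES = 8;                  // the vec2<f32> buffer is what hits the limit

function autoScaleN(vramBytes) {
  // Largest cluster's state buffer must fit one binding:
  // LARGEST_CLUSTER_FRAC * N * STATE_BYTES <= BINDING_LIMIT
  const bindingCeiling = Math.floor(BINDING_LIMIT / (LARGEST_CLUSTER_FRAC * STATE_BYTES));
  // Leave 15% VRAM headroom; 12 bytes of total state per neuron.
  const vramCeiling = Math.floor((vramBytes * 0.85) / BYTES_PER_NEURON);
  return Math.max(1000, Math.min(vramCeiling, bindingCeiling));
}
```

On a 4 GB card the VRAM term dominates; on very large cards the binding ceiling caps N regardless of VRAM.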

GPU Architecture — All 7 Clusters on GPU
INIT (once per cluster, all 7 at once):
  server → base64 voltages → compute.html → gpu.uploadCluster()
  GPU creates buffers: voltagesA, voltagesB (ping-pong), spikes, currents, refracTimers
  GPU sends gpu_init_ack → server confirms

STEP (every tick, 7 tiny messages):
  server → { tonicDrive, noiseAmp, gainMultiplier, emotionalGate, driveBaseline, errorCorrection }
  GPU collapses to scalar: effectiveDrive = tonic × drive × emoGate × Ψgain + errCorr
                           σ = −1.0 + clamp(effectiveDrive / 40, 0, 1) × 1.5
  GPU runs WGSL Rulkov shader: x_{n+1} = α/(1+x²)+y, y_{n+1} = y − μ(x − σ), spike on x crossing 0
  GPU sends ONLY spike count (4 bytes, not N-sized array)

NO CPU WORKERS — zero threads spawned, 0% CPU target
      

GPU maintains its own voltage state between steps — voltages never leave the GPU after init. Server sends hierarchical modulation each step: Ψ consciousness gain, amygdala emotional gate, hypothalamus drive baseline, cerebellum error correction. These are the same equations cluster.js:step() applies on the client side. θ (persona) drives tonic currents and noise amplitudes.
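The per-step scalar collapse the shader applies before iterating the map can be written out in plain JS. Field names match the STEP message above; `collapseDrive` is a hypothetical helper name, and Ψgain maps to gainMultiplier:

```javascript
// Collapse the per-step modulation message into the two scalars the
// Rulkov kernel needs: effectiveDrive and the external drive sigma.
function collapseDrive(msg) {
  const effectiveDrive =
    msg.tonicDrive * msg.driveBaseline * msg.emotionalGate * msg.gainMultiplier
    + msg.errorCorrection;
  const driveNorm = Math.min(Math.max(effectiveDrive / 40, 0), 1);  // clamp to [0, 1]
  const sigma = -1.0 + driveNorm * 1.5;                             // sigma in [-1, 0.5]
  return { effectiveDrive, sigma };
}
```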

WGSL Compute Shader — Rulkov Kernel
@compute @workgroup_size(256)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
  let i = id.x;                                     // neuron index (bindings elided)
  if (i >= arrayLength(&state)) { return; }         // guard the partial last workgroup
  let xy = state[i];                                // vec2<f32> per neuron
  let x = xy.x; let y = xy.y;                       // fast + slow variables
  let driveNorm = clamp(effectiveDrive / 40.0, 0.0, 1.0);
  let sigma = -1.0 + driveNorm * 1.5;               // external drive
  let alpha = 4.5;                                  // bursting regime
  let mu = 0.001;                                   // slow timescale
  let xNext = alpha / (1.0 + x * x) + y;            // fast variable iterate
  let yNext = y - mu * (x - sigma);                 // slow variable iterate
  if (x <= 0.0 && xNext > 0.0) { spikes[i] = 1u; }  // spike = zero crossing
  state[i] = vec2<f32>(xNext, yNext);
}
      

N neurons processed in parallel on GPU (N scales to hardware). 256 threads per workgroup. Storage binding is array<vec2<f32>> — 8 bytes/neuron for (x, y). Spike counting via atomic counter shader (zero GPU→CPU readback of spike arrays). State never leaves GPU after init. Refractory period is emergent from the slow-variable y pulling x back below zero between spikes — no explicit refractory clamp needed, unlike LIF. Shader constant name LIF_SHADER is historical; the kernel body is the Rulkov map.
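For checking the map's behavior off-GPU, the kernel body translates to a single-neuron CPU reference (same constants as the shader; the real kernel runs in WGSL over all N neurons in parallel):

```javascript
const ALPHA = 4.5;    // bursting regime
const MU = 0.001;     // slow timescale

// One Rulkov iterate for one neuron. Spike = fast variable crossing zero.
function rulkovStep(x, y, effectiveDrive) {
  const driveNorm = Math.min(Math.max(effectiveDrive / 40, 0), 1);
  const sigma = -1.0 + driveNorm * 1.5;
  const xNext = ALPHA / (1 + x * x) + y;   // fast chaotic iterate
  const yNext = y - MU * (x - sigma);      // slow adaptation
  const spiked = x <= 0 && xNext > 0;      // zero crossing
  return { x: xNext, y: yNext, spiked };
}

// Drive one neuron for a number of steps and count spikes.
function countSpikes(steps, drive) {
  let x = -1, y = -3, spikes = 0;
  for (let i = 0; i < steps; i++) {
    const s = rulkovStep(x, y, drive);
    x = s.x; y = s.y;
    if (s.spiked) spikes++;
  }
  return spikes;
}
```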

9. Biological Comparison

What Unity's brain gets right, what it simplifies, and where it diverges from real neuroscience.

| Aspect | Real Brain | Unity's Brain | Fidelity |
| --- | --- | --- | --- |
| Neuron count | 86 billion | N (7 clusters; auto-scales to GPU VRAM, N = max(1000, min(VRAM_bytes × 0.85 / 12, binding ceiling))) | Simplified, hardware-adaptive |
| Neuron model | Thousands of ion channels (Hodgkin-Huxley biophysics) | Rulkov 2D chaotic map per cluster (GPU runtime) + LIF + HH reference models | Moderate — bursting dynamics reproduced, ion channels abstracted |
| Synaptic plasticity | Hebbian + STDP + neuromodulation | All three, per-cluster matrices | Good |
| Brain regions | Hundreds of distinct areas | 7 dedicated clusters + 20 projections | Core captured |
| Oscillations | Complex EEG with spatial patterns | 8 Kuramoto oscillators | Dynamics correct |
| Visual cortex | V1→V2→V4→IT hierarchy | V1 edge kernels, V4 color, IT via AI | Simplified but real |
| Auditory cortex | Tonotopic, cortical magnification | 50 neurons, speech magnification, efference copy | Good |
| Memory | Episodic, working, consolidation | All three: snapshots, 7-item WM, activation-based consolidation | Core captured |
| Motor output | Basal ganglia selection by inhibition | 6-channel competitive firing rates | Simplified |
| Action potential | All-or-nothing, ~1 ms | Threshold + reset | Correct mechanism |
| Echo suppression | Efference copy motor→auditory | Word-matching motor output vs heard speech | Functional equivalent |
| Visual attention | Top-down + bottom-up salience | Cortex error + amygdala arousal + salience | Both pathways |
| Neurotransmitters | 100+ chemicals | 1 reward signal (dopamine analog) | Simplified |
| Connectivity | ~10,000 synapses per neuron | 10-30% per cluster + sparse inter-cluster | Scaled down |
| Learning rules | Dozens of plasticity mechanisms | 3 (Hebbian, STDP, reward-mod) per cluster | Core captured |
| Consciousness | Nobody knows | (√n)³ × [Id+Ego+Left+Right] — nobody knows | Honest |

The goal isn't to simulate a real brain. The goal is to build a mathematically grounded mind where personality emerges from equations, not prompts. Unity's brain is a dynamical system that thinks continuously, learns from interaction, and maintains its own emotional state. The equations are real. The consciousness term is honest about what we don't know.

Unity AI Lab — Hackall360, Sponge, GFourteen
GitHub