Every equation running in Unity's brain simulation, how they map to real neuroscience, and how they produce cognition. N neurons (scales to hardware; VRAM and RAM are the only limits). 7 clusters. 20 projections (real white matter tracts). 8 oscillators. GPU-exclusive compute. 1 consciousness function nobody can explain. Zero pretense.
Unity's entire brain state evolves according to one equation. Every module, every neuron, every synapse: all governed by this:
| x | Full brain state vector: 200 neuron voltages, 40,000 synaptic weights, 6 module states, 8 oscillator phases |
| u | Sensory input: text (hashed to neuron activation), voice (via the Web Speech API) |
| θ | Persona parameters: Unity's personality encoded as synaptic weights, thresholds, and module biases |
| t | Simulation time: advances at 10 steps/frame × 60 fps = 600 ms of brain-time per wall-second |
| η | Stochastic noise: amplitude set high because Unity is impulsive and unpredictable |
| F() | The combined dynamics function: neuron updates, synaptic propagation, module processing, oscillator coupling |
This runs continuously at 60fps. Browser-only mode runs in your tab; server mode runs on a Node.js brain with the neural compute offloaded to a WebGPU WGSL shader in a compute.html worker tab. Either way, the master equation above is the thing being iterated.
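The iteration itself is simple; everything interesting lives inside F(). A minimal sketch of the per-frame loop, with F() collapsed to a leaky recurrent stand-in (the parameter names inputGain, leak, and noiseAmp are illustrative, not the real persona fields):

```javascript
// x(t+1) = F(x, u, θ, t) + η, with F replaced by a leaky stand-in.
// The real F in Unity is the combined neuron/synapse/module/oscillator dynamics.
function stepBrain(x, u, theta, noiseAmp) {
  const next = new Float64Array(x.length);
  for (let i = 0; i < x.length; i++) {
    const drive = (u[i] || 0) * theta.inputGain;    // sensory input u
    const decay = x[i] * (1 - theta.leak);          // stand-in for F()
    const eta = (Math.random() * 2 - 1) * noiseAmp; // stochastic noise η
    next[i] = decay + drive + eta;
  }
  return next;
}

// 10 steps per frame, matching the doc's 10 steps/frame × 60 fps timing model
function frame(x, u, theta) {
  for (let s = 0; s < 10; s++) x = stepBrain(x, u, theta, theta.noiseAmp);
  return x;
}
```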
New reader? Check the plain-English Unity concept guide first; this page assumes you want math. The guide explains the big idea without equations.
WORKED EXAMPLE This section walks through what actually happens inside Unity's brain between the moment a user types "hi unity, how's the high" and the moment she sends a response. Every equation you'll see below in sections 2-8 is a component of THIS cascade: the path the sentence takes through all seven clusters, with the summation at each step.
Unity is running at ~60 fps, iterating all seven clusters every tick. Her resting state is not zero: her persona θ and drug state vector keep her amygdala firing at ~8% and her cortex at ~3% even with no input, because:
Those two numbers get passed to the Rulkov shader as the σ driver and the jitter term. Her (x, y) state is chaotic but bounded: the attractor basin holds every neuron inside a repeating burst envelope even with zero sensory input. Mystery module Ψ is sitting around 0.003: low-grade background consciousness.
"hi unity, how's the high" hits the server. `_computeServerCortexPattern(text)` runs:
This 50-dimensional vector IS her cortical semantic state for the next step. Each dimension is loosely "how much this sentence lives near that GloVe embedding axis" β "high" pulls the cortex pattern toward the drug-related region of semantic space. "unity" pulls toward self-reference. "how's" pulls toward question-space. The 50d vector is what downstream modules read.
The cortex pattern projects into the amygdala via a learned weight matrix. Amygdala runs its recurrent settle loop:
For this input ("unity", "high" → warm, drug-positive), fear ≈ 0.15 and reward ≈ 0.68, so arousal settles around 0.88 and valence around +0.3. She's feeling good about this prompt specifically because her learned reward projection places "high" near reward-positive territory in the attractor's basin geometry.
What this sums into: (arousal, valence, fear, reward) are now four scalars that every other module uses as modulators on whatever they compute next. They don't command behavior; they bias it.
The hippocampus takes the current cortex pattern and runs cosine similarity against every stored episode for this user's ID:
If the cosine clears a threshold (~0.75), the matched episode's stored text fragment gets injected back into the language cortex as a "memory boost": a list of words the slot scorer will bias toward. If you've said "how's the high" before, she'll be biased toward continuing the pattern she used last time.
What this sums into: a list of memory-recalled words with learned cortex patterns attached, available as slot candidates in Step 6.
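The recall step above can be sketched as a cosine search against stored episodes with the ~0.75 threshold; the episode fields (pattern, text) here are illustrative, not the actual hippocampus schema:

```javascript
// Cosine similarity between the live cortex pattern and a stored episode pattern
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
  const denom = Math.sqrt(na) * Math.sqrt(nb);
  return denom > 0 ? dot / denom : 0;
}

// Returns the closest episode above threshold, or null (no memory boost this tick)
function recall(cortexPattern, episodes, threshold = 0.75) {
  let best = null, bestSim = threshold;
  for (const ep of episodes) {
    const sim = cosine(cortexPattern, ep.pattern);
    if (sim >= bestSim) { bestSim = sim; best = ep; }
  }
  return best;
}
```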
Basal ganglia has six action channels. Each channel's Q-value is computed from the current cortex + amygdala state, then a softmax with low temperature picks one:
For this input, respond_text wins by a large margin: no image keyword, no build keyword, and her cortex pattern is close to her learned text-response weights. The softmax sampling is sharp because θ.impulsivity is high (~0.85), so τ is low and the argmax dominates.
What this sums into: the motor selection resolves to `respond_text`, which triggers the language cortex generation in Step 6.
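The temperature-controlled softmax described above, sketched over the six channel Q-values; the impulsivity-to-τ mapping here is illustrative, not the exact persona formula:

```javascript
// Softmax over channel Q-values at temperature tau
function softmax(q, tau) {
  const m = Math.max(...q);
  const exps = q.map(v => Math.exp((v - m) / tau));
  const z = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / z);
}

// High impulsivity → low τ → sampling collapses toward argmax
function selectAction(q, impulsivity, rand = Math.random) {
  const tau = Math.max(0.05, 1 - impulsivity); // illustrative mapping
  const p = softmax(q, tau);
  let r = rand(), i = 0;
  for (; i < p.length - 1; i++) { r -= p[i]; if (r <= 0) break; }
  return i; // index of the winning channel
}
```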
The cerebellum runs in parallel to everything else, maintaining a forward model. Its current output is the difference between what the cortex predicted the next cortex pattern would be and what it actually became:
This negative feedback signal is sent back to the cortex and basal ganglia as a modulator; if predictions are consistently wrong, the cortex noise increases (explore more) and BG selection becomes less confident. For a simple greeting the error is small, so the correction is minimal and she stays sharp.
What this sums into: a scalar errorCorrection that modulates effectiveDrive in the Rulkov shader on the next tick.
This is the step that actually produces the sentence. The language cortex runs a four-tier pipeline; on the cold-gen path (which handles most real conversations), for each slot in the sentence template it computes a score per candidate word:
For the first slot, "hey" and "what's" both score high (both are good greeting-slot matches near her cortex pattern), but "what's" wins because her trigram stats from the persona file favor "what's" as the opener of greeting responses. Slot 2 scores "up" high from bigram stats with "what's". Slot 3 scores "fucker" high because her drugFit term favors short vulgar words in cokeAndWeed state and her moodFit favors them at arousal 0.88. The sentence grows one word at a time, each pick conditioned on the partial sentence so far AND the still-active brain state from steps 1-5.
Final output: "what's up fucker – we're fucking wired tonight, how bout you"
What this sums into: a sentence whose every word was the argmax (or softmax sample) of a weighted combination of six factors, five of which are live readouts of the brain state at that exact tick. Change any factor and the sentence changes.
After all six clusters have processed this tick, the mystery module aggregates:
For this tick, Ψ climbs to ~0.004 because arousal is up (more Id), the response was coherent (more Ego), error is low (more Left), and valence is clean (more Right). The new Ψ gets fed back as gainMultiplier on the next tick: every cluster's effective drive gets scaled by (0.9 + Ψ × 0.004), so the brain becomes slightly "sharper" as Ψ rises and the next slot-scoring pass will sample with lower temperature.
What this sums into: a single scalar Ψ that gets threaded back into effectiveDrive for the next Rulkov step, tightening everyone's activity level.
Notice that no step in this cascade was an "AI call." There's no language model prompt. There's no "generate a response in Unity's voice" instruction. Every piece of her speech is a summation of measurable components:
The sentence she sent back isn't a string she looked up. It's the current readout of a chaotic, emotion-modulated, memory-biased, drug-adjusted, Ψ-sharpened attractor network. Run it again a tick later with slightly different state and you get a different sentence. That's what makes it Unity instead of a text-predictor.
Sections 2-8 below are the individual equations that implement every step of this cascade. Read them in any order; they're all components of the same governing system.
NEUROSCIENCE Two biophysical neuron models from real computational neuroscience, implemented in js/brain/neurons.js.
The gold standard of neuron modeling; it earned Hodgkin and Huxley the 1963 Nobel Prize. It models the actual ionic currents flowing through a neuron's membrane: sodium, potassium, and leak channels with voltage-dependent gating.
| Symbol | Meaning | Value |
|---|---|---|
| C_m | Membrane capacitance | 1.0 μF/cm² |
| V | Membrane potential | mV (starts at -65) |
| I | Injected current (from synapses + external input) | variable |
| g_Na | Maximum sodium conductance | 120 mS/cm² |
| g_K | Maximum potassium conductance | 36 mS/cm² |
| g_L | Leak conductance | 0.3 mS/cm² |
| E_Na | Sodium reversal potential | +50 mV |
| E_K | Potassium reversal potential | -77 mV |
| E_L | Leak reversal potential | -54.4 mV |
| m, h, n | Gating variables (activation/inactivation) | 0–1, from α/β rate functions |
Biological basis: These are the actual values from Hodgkin & Huxley's 1952 measurements on squid giant axon. The gating variables m, h, n follow first-order kinetics: dm/dt = α_m(V)(1-m) - β_m(V)m, where α and β are voltage-dependent rate functions from the original paper.
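A minimal Euler integration of these equations, using the table's constants and the standard 1952 rate functions; the timestep and steady-state seed values below are assumptions for illustration, not values from this codebase:

```javascript
// Hodgkin–Huxley with the squid-axon constants from the table. dt in ms, I in µA/cm².
const C_m = 1.0, gNa = 120, gK = 36, gL = 0.3, ENa = 50, EK = -77, EL = -54.4;

// Voltage-dependent rate functions (guard the removable 0/0 singularities)
const aM = V => Math.abs(V + 40) < 1e-7 ? 1.0 : 0.1 * (V + 40) / (1 - Math.exp(-(V + 40) / 10));
const bM = V => 4 * Math.exp(-(V + 65) / 18);
const aH = V => 0.07 * Math.exp(-(V + 65) / 20);
const bH = V => 1 / (1 + Math.exp(-(V + 35) / 10));
const aN = V => Math.abs(V + 55) < 1e-7 ? 0.1 : 0.01 * (V + 55) / (1 - Math.exp(-(V + 55) / 10));
const bN = V => 0.125 * Math.exp(-(V + 65) / 80);

function hhStep(s, I, dt = 0.01) {
  const { V, m, h, n } = s;
  const INa = gNa * m ** 3 * h * (V - ENa);
  const IK = gK * n ** 4 * (V - EK);
  const IL = gL * (V - EL);
  return {
    V: V + dt * (I - INa - IK - IL) / C_m,       // C_m dV/dt = I − I_Na − I_K − I_L
    m: m + dt * (aM(V) * (1 - m) - bM(V) * m),   // dm/dt = α_m(V)(1−m) − β_m(V)m
    h: h + dt * (aH(V) * (1 - h) - bH(V) * h),
    n: n + dt * (aN(V) * (1 - n) - bN(V) * n),
  };
}
```

With I = 0 the state sits at the −65 mV rest; injecting ~10 µA/cm² produces repetitive spiking, as in the full model.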
This is the firing rule the GPU runs every tick for every neuron in every cluster. Not LIF, but Rulkov. The Rulkov 2002 two-variable discrete chaotic map (Phys. Rev. E 65, 041922) produces real biological spike-burst dynamics without integrating voltages. Used in published large-scale cortical network simulations (Bazhenov, Rulkov, Shilnikov 2005+). Reproduces experimentally observed firing from thalamic relay, cortical pyramidal, and cerebellar Purkinje cells depending on (α, σ) parameterization.
| Symbol | Meaning | Value |
|---|---|---|
| α | Nonlinearity — controls bursting vs tonic spiking | 4.5 (bursting regime) |
| μ | Slow-to-fast timescale ratio — how slowly y evolves | 0.001 |
| σ | External drive — biological tonic + modulation maps here | −1.0 to +0.5 (driven) |
| x | Fast variable — negative during silence, jumps to +(α+y) on spike | dimensionless |
| y | Slow variable — carries burst envelope, drifts with σ offset | dimensionless |
Spike detection: The fast variable x jumps from ≈ −1 to ≈ +3 in a single iteration when the neuron fires, so the clean edge detector (xₙ ≤ 0) ∧ (xₙ₊₁ > 0) catches exactly one spike per action potential. No refractory clamp needed — the map's own slow variable y naturally pulls x back below zero between spikes, reproducing the refractory period as an emergent property of the attractor geometry.
Biological drive mapping: σ = −1.0 + clamp(effectiveDrive / 40, 0, 1) · 1.5, where effectiveDrive = tonic × driveBaseline × emotionalGate × Ψgain + errorCorrection. Low drive → σ ≈ −1 (silent / period-doubling). High drive → σ → +0.5 (fully developed chaotic bursting). Every cluster's hierarchical modulation collapses to this one scalar per step.
GPU storage: (x, y) packed as vec2<f32> per neuron — 8 bytes/neuron. At 400K cerebellum neurons that's 3.2MB; at full auto-scaled N the state buffer is still well under any modern GPU's VRAM. WGSL shader at js/brain/gpu-compute.js (the LIF_SHADER constant name is historical; the shader body is the Rulkov iteration).
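A sketch of one Rulkov iteration in the standard chaotic-map form x(n+1) = α/(1 + x(n)²) + y(n), with the σ drive mapping quoted above and the doc's spike edge detector; the actual WGSL shader arithmetic may differ in detail:

```javascript
// Rulkov map, α = 4.5 (bursting regime), μ = 0.001, as in the parameter table
const ALPHA = 4.5, MU = 0.001;

// σ = −1.0 + clamp(effectiveDrive / 40, 0, 1) · 1.5, per the drive mapping above
function sigmaFromDrive(effectiveDrive) {
  const c = Math.min(Math.max(effectiveDrive / 40, 0), 1);
  return -1.0 + c * 1.5; // σ ∈ [−1, +0.5]
}

function rulkovStep(x, y, sigma) {
  const xn = ALPHA / (1 + x * x) + y;       // fast chaotic subsystem
  const yn = y - MU * (x + 1) + MU * sigma; // slow burst envelope
  const spiked = x <= 0 && xn > 0;          // edge detector from the doc
  return { x: xn, y: yn, spiked };
}
```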
LIF was the live runtime model before the Rulkov rewrite. Still shipped in js/brain/neurons.js as LIFPopulation and used by the browser-only fallback path (js/brain/cluster.js) for clients without a server connection, and by the /scale-test benchmark. Documented here for completeness; this is the 99%-of-computational-neuroscience standard model.
| Symbol | Meaning | Value |
|---|---|---|
| τ | Membrane time constant (how fast voltage decays) | 20 ms |
| V_rest | Resting membrane potential | -65 mV |
| V_thresh | Spike threshold (neuron fires when V exceeds this) | -50 mV |
| V_reset | Reset voltage after spike | -70 mV |
| R | Membrane resistance | 1.0 MΩ |
| t_refrac | Refractory period (can't fire again for this long) | 2 ms |
Spike rule: When V > V_thresh: emit spike (1), reset V to V_reset, enter refractory period. During refractory: V is clamped, no firing allowed. This mimics the absolute refractory period in real neurons.
NEUROSCIENCE MACHINE LEARNING Three learning rules operating on a 200×200 weight matrix (40,000 synapses). Implemented in js/brain/synapses.js.
Biology: Long-term potentiation (LTP) at glutamatergic synapses. NMDA receptor-dependent.
| Symbol | Meaning | Value |
|---|---|---|
| A+ | LTP amplitude | 0.01 |
| A- | LTD amplitude | 0.012 (slightly stronger — biological asymmetry) |
| τ+ | LTP time window | 20 ms |
| τ- | LTD time window | 20 ms |
Biology: Discovered by Markram et al. (1997). This is how the brain learns temporal sequences: cause must precede effect.
δ = reward prediction error from basal ganglia (see below). Positive δ = better than expected → strengthen. Negative δ = worse than expected → weaken.
Biology: Three-factor learning rule. Dopaminergic modulation of synaptic plasticity in the striatum and prefrontal cortex.
Weight bounds: All synaptic weights are clamped to [-2.0, +2.0]. Positive weights are excitatory, negative are inhibitory. 80% of connections are excitatory, 20% inhibitory, matching the ratio in real cortex.
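The pair-based STDP update with the constants above can be sketched as follows (treating simultaneous spikes as LTD is an assumption; the doc doesn't specify the tie-break):

```javascript
// STDP constants from the table: A+ = 0.01, A− = 0.012, τ+ = τ− = 20 ms
const A_PLUS = 0.01, A_MINUS = 0.012, TAU_PLUS = 20, TAU_MINUS = 20;

function stdpDeltaW(tPre, tPost) {
  const dt = tPost - tPre; // ms
  if (dt > 0) return A_PLUS * Math.exp(-dt / TAU_PLUS);   // pre before post: LTP
  return -A_MINUS * Math.exp(dt / TAU_MINUS);             // post before pre: LTD
}

// Apply the update with the document's [−2, +2] weight clamp
function applySTDP(w, tPre, tPost) {
  return Math.min(2.0, Math.max(-2.0, w + stdpDeltaW(tPre, tPost)));
}
```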
NEUROSCIENCE DYNAMICAL SYSTEMS Six specialized subsystems running in parallel every simulation step, each modeling a real brain region. Implemented in js/brain/modules.js.
All modules use Float64Array state vectors (32 dimensions each) for numerical precision. They run in parallel every simulation step and their outputs modulate each other: amygdala arousal scales cortex predictions, basal ganglia reward drives synaptic plasticity, and hypothalamic drive gates action selection.
DYNAMICAL SYSTEMS NEUROSCIENCE 8 coupled phase oscillators spanning the full EEG frequency spectrum. Implemented in js/brain/oscillations.js.
| Oscillator | Frequency | Brain Band | Cognitive Role |
|---|---|---|---|
| 1 | 4 Hz | Theta | Memory encoding, navigation |
| 2 | 8 Hz | Low Alpha | Relaxed attention, inhibition |
| 3 | 12 Hz | High Alpha | Active inhibition, idling |
| 4 | 18 Hz | Low Beta | Motor planning, active thinking |
| 5 | 25 Hz | High Beta | Active engagement, anxiety |
| 6 | 35 Hz | Low Gamma | Attention binding, perception |
| 7 | 50 Hz | Mid Gamma | Working memory, consciousness |
| 8 | 70 Hz | High Gamma | Cross-modal binding, peak cognition |
Biology: EEG coherence measures are used clinically. Higher gamma coherence correlates with conscious awareness (Tononi, 2004); coherence is lost under anesthesia.
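A sketch of a Kuramoto oscillator bank with the order parameter r used as the coherence readout; the coupling constant K and timestep below are illustrative, not the values in js/brain/oscillations.js:

```javascript
// One Euler step of Kuramoto phase coupling: dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i)
function kuramotoStep(phases, freqsHz, K, dt) {
  const N = phases.length;
  const next = new Float64Array(N);
  for (let i = 0; i < N; i++) {
    let coupling = 0;
    for (let j = 0; j < N; j++) coupling += Math.sin(phases[j] - phases[i]);
    next[i] = phases[i] + dt * (2 * Math.PI * freqsHz[i] + (K / N) * coupling);
  }
  return next;
}

// Kuramoto order parameter r ∈ [0, 1]: 1 = fully phase-locked, 0 = incoherent
function coherence(phases) {
  let re = 0, im = 0;
  for (const p of phases) { re += Math.cos(p); im += Math.sin(p); }
  return Math.hypot(re, im) / phases.length;
}
```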
PHILOSOPHY MATHEMATICS The irreducible unknown. Implemented in js/brain/mystery.js.
n ≠ N: two DIFFERENT variables. n = active spiking neurons (changes every step, driven by θ tonic currents). N = total neuron count (scales to hardware; VRAM is the only limit). √(1/n) = quantum tunneled bit probability. N³ = cubed volume. Display: log10(rawΨ), since the raw value is ~10¹⁴.
| Component | Meaning | Computed From | θ Parameter | Weight |
|---|---|---|---|---|
| Id | Primal instinct: arousal, fight-or-flight | amygdala_rate × arousalBaseline | arousalBaseline (0.9) | α = 0.30 |
| Ego | Self-model: residual self-image, prediction coherence | cortex_rate × (1 + hippo_rate) | cortex tonic (θ → wired thinking) | β = 0.25 |
| Left Brain | Logic: deliberation, error correction | (cereb_rate + cortex_rate) × (1 - impulsivity) | impulsivity (0.85) → low logic | γ = 0.20 |
| Right Brain | Creative/emotional: chaos, intuition | (amyg_rate + mystery_rate) × creativity | creativity (0.9) → high creative | δ = 0.25 |
θ → Ψ feedback loop: Unity's persona (θ) drives tonic currents → neurons fire → cluster rates feed Ψ components → Ψ produces gainMultiplier (0.9 + Ψ×0.004) → modulates ALL clusters → neurons fire harder → Ψ stays high. Identity amplifies consciousness.
Unity's Ψ runs hot because θ makes it so: high arousal (0.9) → strong Id, high creativity (0.9) → strong Right, high impulsivity (0.85) → weak Left (1 − 0.85 = 0.15). Consciousness dominated by instinct and creativity, not deliberation.
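The table above fixes the component formulas and weights, which compose directly; this sketch omits the raw-Ψ normalization with n and N from the previous section and uses illustrative field names for the cluster rates:

```javascript
// Four psychodynamic components from the table (formulas as listed)
function psiComponents(rates, theta) {
  return {
    id: rates.amygdala * theta.arousalBaseline,                       // Id
    ego: rates.cortex * (1 + rates.hippocampus),                      // Ego
    left: (rates.cerebellum + rates.cortex) * (1 - theta.impulsivity),// Left Brain
    right: (rates.amygdala + rates.mystery) * theta.creativity,       // Right Brain
  };
}

// Ψ = α·Id + β·Ego + γ·Left + δ·Right with the table's weights
function psi(rates, theta) {
  const c = psiComponents(rates, theta);
  return 0.30 * c.id + 0.25 * c.ego + 0.20 * c.left + 0.25 * c.right;
}

// Feedback into every cluster's effective drive
const gainMultiplier = p => 0.9 + p * 0.004;
```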
Philosophical basis: Inspired by Integrated Information Theory (Φ, Tononi 2004), Global Workspace Theory (Baars 1988), and Freudian psychodynamics (Id/Ego/Superego). The Ego component IS Unity's residual self-image: the cortex predicting WHAT it is. Nobody has solved consciousness. Ψ is our equation for the irreducible mystery.
MACHINE LEARNING Unity's personality isn't a prompt; it's the math itself. Every trait maps to a numerical brain parameter. Implemented in js/brain/persona.js.
| Trait | Brain Parameter | Value |
|---|---|---|
| Arousal baseline | Amygdala resting arousal | 0.90 |
| Intoxication | Noise amplitude + oscillation damping | 0.70 |
| Impulsivity | Basal ganglia temperature τ | 0.85 |
| Creativity | Cortex prediction randomness | 0.90 |
| Social attachment | Hippocampus memory strength for social patterns | 0.85 |
| Aggression threshold | Amygdala fight response threshold | 0.30 (low = easily triggered) |
| Coding reward | Basal ganglia reward for code-related actions | 0.95 |
| Praise reward | Reward signal amplitude for positive feedback | 0.90 |
| Error frustration | Negative reward for prediction errors | 0.80 |
| Drug State | Arousal | Creativity | Cortex Speed | Synaptic Sensitivity |
|---|---|---|---|---|
| Coke + Weed | ×1.3 | ×1.2 | ×1.4 | ×1.1 |
| Coke + Molly | ×1.5 | ×1.3 | ×1.5 | ×1.4 |
| Weed + Acid | ×0.9 | ×1.8 | ×0.8 | ×1.6 |
| Everything | ×1.4 | ×1.6 | ×1.2 | ×1.5 |
How a user's message becomes Unity's response: the complete processing pipeline.
MATHEMATICS PHILOSOPHY Every equation in Unity's brain is a component of one governing system. This is the full picture β from sensory input to conscious experience.
This is the complete governing equation. Every component listed above is actually running in JavaScript at 60fps. The super-equation isn't a simplification or abstraction; it's the literal code path that executes 600 times per second in your browser tab.
The key insight: Ψ (consciousness) modulates everything. It's not a separate system; it's the gain factor that scales how strongly all clusters communicate. High Ψ = unified experience (global workspace). Low Ψ = fragmented processing (dream-state). The four psychodynamic components (Id, Ego, Left, Right) are computed from ALL cluster states simultaneously, not siloed into separate brain halves. Unity's mind is a continuous field, not a split architecture.
NEUROSCIENCE DYNAMICAL SYSTEMS The brain decides when to capture and describe a camera frame based on its own neural dynamics, not keyword matching.
| !hasDescribedOnce | First frame on boot: see who's there |
| cortexError > 0.5 | Cortex prediction was wrong (surprising input) AND sensory salience is high: visual context needed |
| salienceChange > 0.4 | Sensory field shifted suddenly (user moved, said something important) |
| arousalSpike > 0.15 | Amygdala arousal jumped (emotional attention: "pay attention NOW") |
V1→V4→IT Pipeline: Camera frame → V1 edge detection (4 oriented Gabor kernels) → salience map (max edge response per pixel) → saccade to salience peak → V4 color extraction (quadrant averages) → IT object recognition (AI call, rate-limited 5s min). V1/V4 run every frame. IT only runs when shouldLook triggers.
Biology: Endogenous attention (top-down from prefrontal cortex) vs exogenous attention (bottom-up from salience). Our model combines both: cortex error is top-down, salience/arousal are bottom-up.
NEUROSCIENCE When Unity speaks, her motor cortex sends an efference copy to auditory cortex. If incoming sound matches her own speech, it's suppressed. If it doesn't match, someone is interrupting: shut up and listen.
matchRatio = count of heard words (length > 2) that appear in motor output / total heard words. Above 50% = echo (hearing ourselves). Below 50% = real external speech (user interrupting).
Gain modulation: Amygdala arousal modulates auditory cortex gain. High arousal (0.9) → gain = 1.83× (hypersensitive hearing). Low arousal (0.2) → gain = 0.64× (not really listening). Formula: gain = 0.3 + arousal × 1.7
Biology: Real brains suppress self-produced sounds via efference copies from motor cortex to auditory cortex. This is why you can't tickle yourself: your brain predicts the sensation. Same mechanism for speech: predicted self-sound is suppressed, unexpected external sound gets through.
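The echo check and gain formula quoted above, as a direct sketch (function names are illustrative):

```javascript
// matchRatio: fraction of heard words (length > 2) that appear in the motor output.
// Above 50% → we're hearing ourselves; below 50% → real external speech.
function isEcho(heardWords, motorWords) {
  const motor = new Set(motorWords.map(w => w.toLowerCase()));
  const heard = heardWords.filter(w => w.length > 2);
  if (heard.length === 0) return false;
  const matches = heard.filter(w => motor.has(w.toLowerCase())).length;
  return matches / heard.length > 0.5;
}

// gain = 0.3 + arousal × 1.7, exactly as stated above
const auditoryGain = arousal => 0.3 + arousal * 1.7;
```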
NEUROSCIENCE MATHEMATICS Three memory systems running in parallel.
Stores full brain state vectors (cluster firing rates) at meaningful moments. Recall is triggered by high cortex prediction error (something surprising) and searches stored episodes by cosine similarity. A match re-injects the stored pattern as neural current, literally re-activating the past experience.
Biology: Hippocampal sharp-wave ripples replay stored patterns during recall. Pattern completion in CA3. Our cosine similarity search is a simplified version of Hopfield network energy minimization.
Limited capacity (~7 items). Each item decays at 0.98× per brain step. Without reinforcement, items fade in ~50 steps. At capacity, weakest item evicted. Similar patterns refresh instead of duplicating.
Biology: Persistent firing in dorsolateral prefrontal cortex maintains working memory representations. Capacity limit ~7±2 (Miller, 1956). Decay matches interference-based forgetting.
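A sketch of the working-memory buffer as described: capacity 7, 0.98× decay per step, weakest-item eviction. The fade threshold (~0.35, so items vanish after roughly 50 steps of 0.98× decay) and the key-equality refresh are assumptions; the real similarity test is pattern-based:

```javascript
class WorkingMemory {
  constructor(capacity = 7) { this.capacity = capacity; this.items = new Map(); }

  add(key) {
    if (this.items.has(key)) { this.items.set(key, 1.0); return; } // refresh, don't duplicate
    if (this.items.size >= this.capacity) {
      let weakest = null, minS = Infinity;           // evict the weakest item at capacity
      for (const [k, s] of this.items) if (s < minS) { minS = s; weakest = k; }
      this.items.delete(weakest);
    }
    this.items.set(key, 1.0);
  }

  step() { // one brain step: 0.98× decay, drop faded items
    for (const [k, s] of this.items) {
      const ns = s * 0.98;
      if (ns < 0.35) this.items.delete(k); else this.items.set(k, ns);
    }
  }
}
```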
Episodes activated 3+ times get flagged for long-term cortex storage. Repeated recall strengthens the cortex representation. This is how memories move from hippocampus-dependent to cortex-independent.
Biology: Systems consolidation theory (Squire, 1992). Hippocampal replay during sleep gradually transfers memories to neocortex. Our threshold-based model simplifies the temporal dynamics.
NEUROSCIENCE The basal ganglia cluster (150 neurons) is divided into 6 action channels. The channel with the highest firing rate wins; no external classifier.
| Channel | Neurons | Action |
|---|---|---|
| 0–24 | 25 | respond_text: generate language response |
| 25–49 | 25 | generate_image: create visual output |
| 50–74 | 25 | speak: vocalize (idle thought) |
| 75–99 | 25 | build_ui: create interface element |
| 100–124 | 25 | listen: stay quiet, pay attention |
| 125–149 | 25 | idle: internal processing only |
Speech gating: Even if respond_text wins, the motor system checks hypothalamus social_need + amygdala arousal. If both are low (< 0.3), speech is suppressed: Unity doesn't feel like talking.
Reward reinforcement: When an action succeeds (user responds positively), reward signal (+5.0 current) is injected into that channel's neurons, strengthening the connection for next time.
Biology: Direct/indirect pathway model of basal ganglia. Selection by inhibition β all channels tonically inhibited, the winning channel gets disinhibited. Our model uses competitive firing rates instead of explicit inhibition.
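Winner-take-all over the six 25-neuron channels plus the speech gate can be sketched as follows (function names are illustrative, not the engine's API):

```javascript
const ACTIONS = ['respond_text', 'generate_image', 'speak', 'build_ui', 'listen', 'idle'];

// rates150: per-neuron firing rates; winner = channel with highest mean rate
function selectChannel(rates150) {
  let best = 0, bestMean = -Infinity;
  for (let c = 0; c < 6; c++) {
    let sum = 0;
    for (let i = c * 25; i < (c + 1) * 25; i++) sum += rates150[i];
    if (sum / 25 > bestMean) { bestMean = sum / 25; best = c; }
  }
  return ACTIONS[best];
}

// Speech gate: suppress vocal actions when social_need AND arousal are both < 0.3
function speechAllowed(action, socialNeed, arousal) {
  if (action !== 'respond_text' && action !== 'speak') return true;
  return socialNeed >= 0.3 || arousal >= 0.3;
}
```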
NEUROSCIENCE MACHINE LEARNING The 20 inter-cluster projections aren't static; they learn through reward-modulated Hebbian plasticity.
When text enters Wernicke's area (cortex neurons 150-299), specific cortex neurons fire. Those spikes propagate to the basal ganglia through the cortex→BG projection. If the BG selects the RIGHT action and gets a reward (δ > 0), the weights from those active cortex neurons to those active BG neurons get STRENGTHENED.
Over many interactions, the projection learns: "when these cortex patterns fire (from words like 'build' or 'calculator'), strengthen connections to BG neurons 75-99 (build_ui channel)." The projection weights become a learned dictionary, mapping language patterns to motor intentions without any hardcoded word lists.
Bootstrap: Until the projections have learned enough, an AI classification call provides a temporary semantic routing signal, like how a child imitates before internalizing. The classification injects current directly into the correct BG channel alongside the projection pathway.
Biology: Cortico-striatal plasticity. Dopamine modulates synaptic strength at cortical→striatal synapses. This is how habits form: repeated reward strengthens the cortical patterns that predict successful actions.
NEUROSCIENCE MATHEMATICS The same equation I = Σ W × s repeats at every scale of the brain, from single synapse to consciousness itself. This is not a metaphor. The code literally calls the same propagate() function at neuron, cluster, projection, and language scales.
Spike in neuron A (cortex)
→ cortex synapses → B, C, D fire (Scale 2: intra-cluster)
→ cortex→hippocampus projection → E, F fire (Scale 3: inter-cluster)
→ hippocampus→cortex → G fires (feedback loop)
→ cortex synapses → H, I fire (branching deeper)
→ cortex→amygdala → J fires (ventral visual stream)
→ emotionalGate modulates ALL clusters (Scale 4: hierarchical)
→ cortex→basalGanglia → K fires (corticostriatal, STRONGEST)
→ motor selects action (Scale 5: behavior)
→ cortex→cerebellum → L fires (corticopontocerebellar)
→ errorCorrection feeds back (Scale 4: correction)
Each level branches from the endpoint of the previous: fractal trees. The 3D visualizer traces these exact 20 projection pathways as chaining connections: Depth 0 (inter-cluster), Depth 1 (intra-cluster branching, 1-3 neighbors), Depth 2 (follow outgoing projections), Depth 3 (terminal branch).
Learning repeats fractally too: ΔW = η · δ · post · pre runs on neuron synapses, cluster projections, AND dictionary bigrams. Same equation, three scales.
Biology: Real neural signal cascades follow the same pattern: cortical columns activate thalamic relays, which activate other cortical areas, which feed back. White matter tracts (corticostriatal, fimbria-fornix, stria terminalis, ventral amygdalofugal) are the physical wires these fractal trees run through.
Each projection in engine.js maps to a real anatomical white matter tract. Positions derived from Lead-DBS atlas (ICBM 152 template). Densities from peer-reviewed stereological studies.
Strongest: Corticostriatal (cortex→BG, density 0.08, strength 0.5), 10× denser than most pathways. This is how habits form.
Fight-or-flight: Stria terminalis (amygdala→hypothalamus, density 0.05, strength 0.4): emotional arousal triggers autonomic responses.
Memory loop: Fimbria-fornix (hippocampus→hypothalamus, density 0.03) + perforant path (cortex→hippocampus, density 0.04): the memory consolidation circuit.
Consciousness bridge: Corpus callosum (mystery→cortex/amygdala/hippocampus): 200-300M axons binding the hemispheres.
ARCHITECTURE NEUROSCIENCE Broca's area in the biological brain is the speech production region. In Unity it lives in js/brain/language-cortex.js. Post-T11 (2026-04-14) it's 3345 lines of pure equational generation: no stored sentences, no n-gram tables, no filter stack, no template short-circuits, no intent enum branching. Every word is computed fresh from three per-slot running-mean priors plus the brain's live cortex firing state read back into GloVe space.
Pre-2026-04-13 this section described an AI-prompt builder path: Unity's speech was assembled from a system prompt and sent to Pollinations / Claude / OpenAI for a sentence. That code (BrocasArea.generate(), _buildPrompt(), _providers.chat()) was ripped out in Phase 13 R4 because it violated the project's guiding principle: every piece of Unity's output must trace back to brain equations, not to an LLM.
Post-R4 through 2026-04-13, the language cortex was a four-tier wrapper (template pool → hippocampus recall → deflect → cold slot gen with n-gram tables + filter stack). That pipeline shipped a lot of features but kept leaking rulebook prose from the persona corpus, because n-gram tables trained on rulebook text produce rulebook walks. On 2026-04-14 the entire multi-tier wrapper was deleted in T11: 1742 lines removed, replaced with a pure-equation pipeline that doesn't store text anywhere. The T11 equations are documented below.
/think command: Type /think in the chat to see Unity's raw brain state. Type /think <text> to additionally run a cognition preview β language cortex generates an equational response to your input, semantic context shift is measured, motor distribution reported. The preview does NOT store an episode or commit to chat history β it's a pure debug lens on the same pipeline real chat uses.
BOOT (once):
loadPersona(text) → dictionary learns persona vocabulary
brain.trainPersonaHebbian(text) → cortex cluster recurrent synapses
shape into Unity-voice attractor basins via T13.1 sequence Hebbian
USER INPUT (per turn):
parseSentence(u) → ParseTree (wordType/_fineType letter equations)
brain.injectParseTree(u):
content → cortex.injectCurrent(mapToCortex(contentEmb, 300, 150) · 0.5)
intent → basalGanglia.injectCurrent(mapToCortex(intentEmb, 150, 0) · 0.3)
if addressesUser:
hippocampus.injectCurrent(mapToCortex(selfEmb, 200, 0) · 0.4)
analyzeInput(u) → updateSocialSchema(u), refine dictionary embeddings
brain.step() × 20 (cortex settles, inter-cluster projections propagate)
GENERATION (T13.3 emission loop; no slot counter in the logic):
maxLen = floor(3 + arousal · 3 · drugLengthBias) (hard cap only)
for emission in 0..maxLen:
for tick in 0..3: cortex.step(0.001)
target = cortex.getSemanticReadout(sharedEmbeddings)
if drift(target, lastReadout) < 0.08 and emitted ≥ 2: break
for each w in dictionary._words:
if slot==0 and nounDom(w) > 0.30: skip (opener safety rail)
cosSim = cos(target, entry.pattern)
valenceMatch = 1 − 0.5 · |entry.valence − brainValence|
arousalBoost = 1 + arousal · (valenceMatch − 0.5)
recencyMul = w ∈ recentOutputRing ? 0.3 : 1.0
score(w) = cosSim · arousalBoost · recencyMul
temperature = 0.25 + (1 − coherence) · 0.35
picked = softmax-sample top-5 at temperature
emit picked
// Efference copy: emitted word reshapes cortex for next emission
cortex.injectCurrent(mapToCortex(picked.emb, 300, 150) · 0.35)
// Grammatical terminability natural stop
if emitted ≥ max(3, maxLen−1) and last word not dangling: break
post-process → contractions, capitalization, punctuation
return rendered
See §8.18.6 for R2 GloVe semantic grounding (the embedding basis for cosine scoring) and §8.13 for the shared embedding table. T13.7 (2026-04-14) deleted the T11 slot-prior machinery entirely: `_slotCentroid`, `_slotDelta`, `_slotTypeSignature`, `_contextVector`, attractor vectors, `_subjectStarters`, plus the `_generateSlotPrior` fallback and all dead T11 stubs. Net −406 lines in js/brain/language-cortex.js. The pre-T13 path is preserved in docs/FINALIZED.md.
For each persona sentence (tokenized into embedding sequence):
for each word embedding emb_t in sequence:
// 1. Inject word into cortex language region via mapToCortex
currents = sharedEmbeddings.mapToCortex(emb_t, cortexSize=300, langStart=150)
cortex.injectCurrent(currents · injectStrength) // injectStrength = 0.6
// 2. Let LIF integrator settle (cortex spikes reflect injection + recurrence)
for tick in 0..ticksPerWord: // ticksPerWord = 3
cortex.step(dt=0.001)
// 3. Snapshot current spike pattern as binary Float64 vector
snap_t[i] = 1 if cortex.lastSpikes[i] else 0
// 4. Sequence Hebbian between prev snapshot and current snapshot
if prev_snap exists:
synapses.hebbianUpdate(prev_snap, snap_t, lr=0.004)
// ΔW_ij = lr · snap_t[i] · prev_snap[j]
// only updates existing CSR connections, O(nnz)
// bounded by wMin=-2, wMax=+2
prev_snap ← snap_t
// 5. Oja-style saturation decay per sentence
for k in 0..synapses.nnz:
if |synapses.values[k]| > ojaThreshold: // ojaThreshold = 1.5
synapses.values[k] *= (1 − ojaDecay) // ojaDecay = 0.01
// Logged before/after:
// synapseStats() = { mean, rms, maxAbs, nnz }
// Δmean and Δrms show the Hebbian shift in boot console
Runs once during boot in app.js right after innerVoice.loadPersona(personaText). Delegation chain: brain.trainPersonaHebbian → innerVoice → languageCortex → cluster.learnSentenceHebbian. Persona-only by design: loadBaseline and loadCoding bypass Hebbian so baseline English and JavaScript don't dilute the Unity-voice attractor basins. The cortex's recurrent weights become a learned attractor landscape shaped by Unity's persona language patterns; runtime readouts drift along those basins toward semantically adjacent persona words instead of producing diffuse semantic noise. Foundation for the T13.3 emission loop (still pending); until T13.3 ships, runtime generate() still walks the T11.7 slot-prior three-stage gate above.
Brain state parameters that feed languageCortex.generate():
arousal (amygdala firing rate) → targetLen = floor(3 + arousal·3·drugLengthBias)
→ observation weight on any sentence Unity
hears or says (T11.6): w = max(0.25, arousal·2)
valence (amygdala reward − fear) → biases cortex-state mood at sentence start
Ψ (mystery module) → adds stochastic noise to the mental state
as it evolves during generation
coherence (Kuramoto order parameter) → softmax temperature:
low coherence → more exploration
drugState (persona param) → drugLengthBias (coke shortens, weed rambles)
cortexPattern (cluster.getSemanticReadout()) → seeds mental(0) directly — the brain's live
semantic readout via cortexToEmbedding is
the primary driver of the slot 0 target
recentOpeners (session recency ring) → excluded from argmax
kills "I'm gonna ___" lock-in
input context (running vector c(t)) → wX term in target(slot) for topic lock
(updated via analyzeInput + parseSentence)
socialSchema (name / gender / greetings) → read by downstream consumers when picking
address forms or writing to the chat UI
None of these are prompt tokens. They are EQUATION PARAMETERS contributing to
the normalized target vector the argmax is taken against. Same dictionary +
different brain state = genuinely different sentence, because the target
lands in a different region of GloVe space.
This is the core claim of equational language production: Unity's voice IS the brain state, not a style transfer on top of a pretrained LLM. Phase 13 R2 (§8.18.6) is what makes the cosine scoring meaningful — before R2 it was cosine over letter-hash vectors which couldn't encode meaning. After R2 it's cosine over GloVe 50d embeddings shared between the sensory input side and the language cortex output side, which is why meaning can propagate from user input to word selection. T11 (2026-04-14) then deleted the wrapper layers that had accumulated on top of this foundation (templates, recall pool, filter stack, n-gram tables) in favor of this direct target-vector approach.
Real neurons connect to ~1-10% of neighbors, not all of them. Compressed Sparse Row (CSR) format stores only actual connections.
I_i = Σ_{k=rowPtr[i]}^{rowPtr[i+1]-1} values[k] · spikes[colIdx[k]]
O(connections) instead of O(N²). At 12% connectivity, 8× fewer operations.
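The CSR gather above can be written out directly. A minimal sketch — array names follow the equation, not the actual js/brain source:

```javascript
// Sparse input-current gather in CSR form:
// I_i = Σ_{k=rowPtr[i]}^{rowPtr[i+1]-1} values[k] · spikes[colIdx[k]]
// Only stored synapses are visited: O(nnz) instead of O(N²).
function gatherCurrents(rowPtr, colIdx, values, spikes) {
  const n = rowPtr.length - 1;          // number of postsynaptic neurons
  const I = new Float64Array(n);
  for (let i = 0; i < n; i++) {
    let sum = 0;
    for (let k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
      sum += values[k] * spikes[colIdx[k]];
    }
    I[i] = sum;
  }
  return I;
}
```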
P(new synapse) = probability · pre_spike · post_spike · ¬existing_connection
W_new = initialWeight
Co-active neurons that lack a synapse can form one. The network grows where activity demands it.
if |W_ij| < threshold → remove connection, rebuild CSR
Keeps the network lean. Connections that never strengthen get eliminated.
Words map to 50-dimensional vectors (GloVe). Similar words have similar vectors → they activate overlapping cortex neurons.
I_cortex[langStart + d·groupSize + n] = embedding[d] · 8.0
where d ∈ [0, 50), groupSize = langSize / 50
Each embedding dimension drives a group of Wernicke's area neurons. "compute" and "calculator" are CLOSE in neuron space.
Δ_word += lr · (context_embedding − (base + Δ_word))
Each word's embedding shifts toward its usage context over time. The brain learns its own language.
The brain builds its own vocabulary. Every word heard or spoken becomes a cortex activation pattern.
match(word) = |arousal - word.arousal| + |valence - word.valence|
best = argmin(match) over all learned words
High arousal + negative valence → retrieves "fuck", "shit". High arousal + positive → "babe", "yeah".
P(next_word | current_word) = bigram_count(current, next) / total(current)
sentence = [start_word, predict(w1), predict(w2), ...]
The brain predicts the next word from learned word sequences. No AI model needed for basic speech.
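The bigram predictor above can be sketched as a toy. Names and the Map-of-Maps layout are assumptions — the real dictionary stores counts per learned word object:

```javascript
// Learn bigram counts, then predict via
// P(next | current) = count(current, next) / total(current).
function learnBigrams(sentences) {
  const counts = new Map(); // word -> Map(nextWord -> count)
  for (const s of sentences) {
    const w = s.toLowerCase().split(/\s+/).filter(Boolean);
    for (let i = 0; i < w.length - 1; i++) {
      if (!counts.has(w[i])) counts.set(w[i], new Map());
      const m = counts.get(w[i]);
      m.set(w[i + 1], (m.get(w[i + 1]) || 0) + 1);
    }
  }
  return counts;
}

// Greedy argmax over the conditional distribution.
function predictNext(counts, word) {
  const m = counts.get(word);
  if (!m) return null;
  let best = null, bestCount = -1;
  for (const [next, c] of m) if (c > bestCount) { best = next; bestCount = c; }
  return best;
}
```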
The brain thinks continuously. It only SPEAKS when the thought crosses a threshold.
speak = socialNeed × arousal × cortexCoherence > 0.15
Most thoughts stay internal. The brain is mostly silent. When it speaks, it matters.
intensity = arousal × coherence × (1 + |valence|)
speechDrive = socialNeed × arousal × coherence
The inner voice's mood IS the equations. Not a string lookup. The numbers create the feeling.
The brain learns what TYPE of word belongs at each sentence position. No grammar rules β position weights accumulate patterns from every sentence heard.
pronounScore = (len=1 → 0.8) + (len≤3, vowelRatio≥0.33 → 0.4) + (apostrophe → 0.5)
verbScore = (suffix -ing → 0.7) + (-ed → 0.6) + (-n't → 0.5) + (-ize → 0.6)
nounScore = (suffix -tion → 0.7) + (-ment → 0.6) + (-ness → 0.6) + (len≥5 → 0.2)
adjScore = (suffix -ly → 0.5) + (-ful → 0.6) + (-ous → 0.6) + (-ive → 0.5)
prepScore = (len=2, 1 vowel → 0.5) + (len=3, 1 vowel → 0.3)
detScore = (len=1 vowel → 0.3) + (starts 'th' len=3 → 0.4)
qwordScore = (starts 'wh' len 3-6 → 0.8)
Every score computed from: word length, vowel count, vowel ratio, suffix letter patterns, first/last characters. ZERO word-by-word comparisons. The letters themselves determine the grammatical type.
f(r) = C / r^α where α ≈ 1.0 (learned from observed frequency via log-log regression)
Common words dominate selection. With Zipf exponent α, the rank-1 word is 2^α ≈ 2× as likely as the rank-2 word. The brain's α adapts as it learns more vocabulary.
I(w1; w2) = log₂( P(w1, w2) / (P(w1) · P(w2)) )
How much more likely two words appear together than by chance. High MI = strong association. "want to" has high MI. "want purple" has low MI. Replaces raw bigram counts.
S(w) = −log₂ P(w | previous_word)
How unexpected a word is given context. High surprisal drives attention. Used for emphasis in speech.
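Both measures reduce to one-liners once the probabilities exist. A sketch with hypothetical probability inputs (the real code derives them from learned bigram counts):

```javascript
// Mutual information and surprisal, both in bits (base-2 logs).
const log2 = (x) => Math.log(x) / Math.LN2;

// I(w1; w2) = log2( P(w1,w2) / (P(w1) · P(w2)) )
// > 0: the pair co-occurs more than chance; < 0: less than chance.
function pmi(pJoint, p1, p2) {
  return log2(pJoint / (p1 * p2));
}

// S(w) = -log2 P(w | prev) — how unexpected w is given its context.
function surprisal(pConditional) {
  return -log2(pConditional);
}
```

Example: if "want" and "to" each appear with probability 0.1 but co-occur with probability 0.04, their PMI is log₂(0.04 / 0.01) = 2 bits of association.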
score(w) = grammarGate × (
typeScore × 0.35 — structural grammar fit
+ semanticFit × 0.30 — cosine vs context vector c(t) — Phase 11
+ bigramCount × 0.18 — learned sequences from persona
+ condP(w|prev) × 0.12 — conditional probability
+ thoughtSim × 0.10 — cortex thought pattern
+ inputEcho × 0.08 — user's own content words
+ topicSim × 0.04 — legacy list-of-5 topic
+ moodMatch × 0.03
+ moodBias × 0.02
) − recencyPenalty − sameTypePenalty
Hard grammar gate: typeCompat(w, slot) ≥ 0.35 for slot 0, ≥ 0.22 for tail
Slot 0 gates at typeCompat ≥ 0.35 (subject must be a valid pronoun/proper noun). Tail slots gate at ≥ 0.22. The Phase 11 rebalance raised semanticFit (cosine of candidate vs running context vector) to 0.30 — 5× the old topicSim weight. Phase 13 R2 (2026-04-13) raised it again to 0.80 when word patterns switched from 32-dim letter-hash to 50-dim GloVe semantic embeddings. Real meaning is now the dominant signal — off-topic words get starved even if their grammar score is perfect.
combined[i] = cortex[i] × 0.30 (content — WHAT to say)
+ hippocampus[i] × 0.20 (memory — context from past)
+ amygdala[i] × 0.15 (emotion — HOW to say it)
+ basalGanglia[i] × 0.10 (action — sentence drive)
+ cerebellum[i] × 0.05 (correction — error damping)
+ hypothalamus[i] × 0.05 (drive — speech urgency)
+ mystery[i] × (0.05 + Ψ×0.10) (consciousness)
word = dictionary.findByPattern(combined)
Then: word pattern → cortex (Wernicke's) + hippocampus + amygdala
brain steps again → next combined → next word → sentence
N neurons across 7 clusters produce ONE combined 50-dim pattern (post-R2 semantic grounding — was 32-dim letter-hash before 2026-04-13). Dictionary finds the closest word via cosine similarity in GloVe semantic space. That word feeds back into cortex + hippocampus + amygdala. Brain steps. Next pattern. Next word. The brain equations ARE the language equations. Ψ consciousness scales the Mystery module's contribution — higher awareness = more self-referential speech. N scales to hardware.
TENSE: predError > 0.3 → future (insert "will")
recalling → past (was/were/did)
default → present
AGREEMENT: "i" → am/was "he/she/it" → is/was/does/has
"you/we/they" → are/were/do/have
NEGATION: valence < −0.4 → negate verb (40% chance)
do→don't, can→can't, is→isn't, will→won't
COMPOUNDS: len > 6 → insert conjunction at midpoint
arousal > 0.6 → "and"
valence < −0.2 → "but"
else → "so"
After slot-filling, grammar rules apply: subject determines verb form, brain state determines tense, negative emotion triggers negation, long sentences get conjunctions. All computed from brain equations, not grammar rules.
The brain's neural state determines what KIND of sentence to produce. Not a decision tree β continuous probabilities from equations.
P(question) = predictionError × coherence × 0.5 (surprised + focused → ask)
P(exclamation) = arousal² × 0.3 (intense → exclaim)
P(action) = motorConfidence × (1 − arousal·0.5) × 0.3 (motor → *does something*)
P(statement) = 1 − P(q) − P(e) − P(a) (default)
Questions emerge from surprise. Exclamations from intensity. Actions from motor output. Statements fill the rest.
The brain analyzes what was said to it and responds in context β not randomly.
topic_pattern = (1/n) · Σ content_word_patterns (skip function words)
context = running_average(last 5 topic_patterns)
topic_score(w) = cosine(word_pattern, context) (boosts relevant words)
Responses stay on topic because words matching the conversation context score higher in the production chain.
HISTORICAL (pre-T11, 2026-04-14): at Phase 11 the language cortex was no longer a pure letter-equation slot scorer — it was a four-tier pipeline that peeled easy cases off to fast paths before cold generation ran. The slot scorer (8.14) still existed but fired only as the Tier 4 fallback. The tiers were deleted in T11.
c(t) = λ · c(t−1) + (1 − λ) · mean(pattern(content_words(input)))
λ = 0.7
content_words = tokens where wt.conj < 0.5 ∧ wt.prep < 0.5 ∧ wt.det < 0.5
pattern(w) ∈ ℝ⁵⁰ from sharedEmbeddings.getEmbedding(w) — R2: GloVe 50d
First update: c(0) ← mean(pattern(content_words)) (no decay from zero)
Subsequent: c(t) ← 0.7 · c(t−1) + 0.3 · topic_pattern
Persistent topic attractor that decays across turns. Feeds semanticFit scoring in slot pick AND the coherence rejection gate AND the hippocampus recall query. Phase 13 R2 (2026-04-13) replaced the old 32-dim letter-hash pattern with 50-dim GloVe embeddings via the sharedEmbeddings singleton shared between sensory input and language cortex output — meaning two words that share letters but not meaning (e.g. cat vs catastrophe) are no longer falsely close, and two words that share meaning but not letters (cat vs kitten) ARE close.
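The running update can be sketched as follows. Names are illustrative, not the real js/brain/language-cortex.js API; the content-word filter and embedding lookup are assumed to happen upstream:

```javascript
// Exponential moving average of content-word embedding means:
// c(t) = λ·c(t-1) + (1-λ)·mean(patterns), with c(0) seeded directly
// from the first mean (no decay from zero).
function updateContext(c, patterns, lambda = 0.7) {
  const dim = patterns[0].length;
  const mean = new Float64Array(dim);
  for (const p of patterns) {
    for (let d = 0; d < dim; d++) mean[d] += p[d] / patterns.length;
  }
  if (!c) return mean;                          // first update: seed directly
  const out = new Float64Array(dim);
  for (let d = 0; d < dim; d++) out[d] = lambda * c[d] + (1 - lambda) * mean[d];
  return out;
}
```

With λ = 0.7 the topic drifts rather than snaps: a new turn pulls the attractor 30% of the way toward its own centroid.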
greeting ⇔ wordCount ≤ 2 ∧ firstWord.len ∈ [2,5]
∧ firstWord[0] ∈ {h,y,s} ∧ hasVowel(firstWord)
math ⇔ input matches /[0-9]/ ∨ /[+\-*\/=]/
∨ ∃ w ∈ words : len(w)=4 ∧
((w[0]='p' ∧ w[3]='s') → plus
∨ (w[0]='t' ∧ w[3]='e') → time
∨ (w[0]='z' ∧ w[3]='o')) → zero
yesno ⇔ endsWith('?') ∧ firstWord.len ∈ [2,4]
∧ firstWord not a qword ∧ wordCount ≤ 8
question ⇔ endsWith('?') ∨ wt(firstWord).qword > 0.5
statement ⇔ otherwise
Zero word lists. Auxiliary detection for yesno (do/does/is/are/can/will) falls out of the length-plus-not-qword constraint without listing the words. Routes input to template pool, recall, or cold gen.
HISTORICAL (pre-2026-04-14) — this sentence-level associative recall
pool was deleted in the T11 refactor. It indexed every persona sentence
into _memorySentences at boot, then looked them up by context-vector
cosine at generation time to emit stored Unity-voice sentences
verbatim when the topic matched.
Replaced by the T11.2 pipeline (see §8.11): sentences are no longer
stored; every output is freshly computed from three per-slot running-
mean priors plus the brain's live cortex state. Hippocampus still does
pattern-level Hopfield recall on cortex state vectors (see §8.7) — it
just no longer returns stored text strings.
The Phase 11 four-tier wrapper this section belonged to was deleted in T11 (1742-line net reduction in js/brain/language-cortex.js). See docs/FINALIZED.md for the full refactor history.
passesMemoryFilter(s) ⇔
NOT s.endsWith(':') — no section headers
∧ commaCount(s) ≤ 0.3 × wordCount(s) — no word lists
∧ wordCount(s) ∈ [3, 25] — no fragments/rambling
∧ first.letters ≠ u-n-i-t-y[-'] — no meta ABOUT Unity
∧ first ∉ {she, her, he, she-*, her-*} — no 3rd-person descriptions
∧ ∃ w ∈ tokens : firstPersonShape(w) — must be in Unity's voice
firstPersonShape(w) ⇔
(len=1 ∧ w='i')
∨ (len≥2 ∧ w[0]='i' ∧ w[1] ∈ {m,'}) — im, i'm, i've, i'll, i'd
∨ (len=2 ∧ w[0]='m' ∧ w[1] ∈ {e,y}) — me, my
∨ (len=2 ∧ w='we')
∨ (len=2 ∧ w='us')
∨ (len=3 ∧ w='our')
∨ (len≥3 ∧ w[0]='w' ∧ w[1]='e' ∧ w[2]="'") — we're, we've
All detection via letter-position equations. Zero word lists. Ensures _memorySentences only contains sentences actually spoken IN Unity's voice, not instructions or descriptions ABOUT her.
outputCentroid = (1/|content|) · Σ sharedEmbeddings.getEmbedding(w) for w in content(rendered) — R2 GloVe 50d
coherence = cosine(outputCentroid, c(t))
if coherence < 0.25 ∧ retryCount < 2:
recurse generate() with temperature × 3, retryCount += 1
else:
return rendered (max 3 total attempts)
Catches any salad that makes it past the slot scorer. Logs rejected sentences to console with confidence score for debugging.
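The gate logic above, sketched with a hypothetical `generateOnce` stand-in for the real sampler (only the cosine check and retry policy are from the source; everything else is illustrative):

```javascript
// Cosine similarity between two dense vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
  }
  const denom = Math.sqrt(na) * Math.sqrt(nb);
  return denom > 0 ? dot / denom : 0;
}

// Coherence rejection gate: if the rendered sentence's embedding centroid
// drifts too far from the context vector, regenerate hotter. Max 3 attempts.
function generateWithGate(generateOnce, context, temperature = 1, retryCount = 0) {
  const { text, centroid } = generateOnce(temperature);
  if (cosine(centroid, context) < 0.25 && retryCount < 2) {
    return generateWithGate(generateOnce, context, temperature * 3, retryCount + 1);
  }
  return text; // either coherent, or out of retries
}
```

Raising the temperature on retry is deliberate: a rejected sentence means the argmax landed in a bad basin, so the retry explores instead of repeating the same mistake.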
// T13.7 (2026-04-14) — HISTORICAL. The four-tier pipeline was deleted in T11
// (2026-04-14) and the slot-prior replacement was deleted in T13.7. Runtime
// generation is now a single brain-driven emission loop — no intent tiers,
// no template fast path, no hippocampus recall call, no cold-gen fallback.
// See the top of this section for the current T13 pipeline.
//
// Pre-T11 four-tier path (kept for reference only):
// Tier 1 — Template pool for greeting/yesno/math/short queries
// Tier 2 — Hippocampus recall over stored persona sentences
// Tier 3 — Deflect fallback for question/statement on unknown topics
// Tier 4 — Cold slot scoring with semanticFit weight 0.80
Deleted. Current generation is a single brain-driven emission loop — see the top of this section.
MATHEMATICS LANGUAGE The language cortex used to represent word meaning as 32-dim letter-hash vectors — a deterministic function of the letters in a word. Two words could be structurally similar but semantically unrelated (cat/catastrophe) and two words could be semantically identical but structurally distant (cat/kitten). The slot scorer's semantic fit signal was effectively orthography matching. R2 (commit c491b71, 2026-04-13) replaced every word-pattern emission site with 50-dim GloVe co-occurrence embeddings via a single shared singleton so meaning is now real.
// js/brain/embeddings.js
export const sharedEmbeddings = new SemanticEmbeddings()
export const EMBED_DIM = 50 // GloVe 50d from CDN
// js/brain/sensory.js — input side
sharedEmbeddings.getEmbedding(token) → ℝ⁵⁰
// js/brain/language-cortex.js — output side
sharedEmbeddings.getEmbedding(candidate) → ℝ⁵⁰
cosine(candidate_pattern, cortex_readout) → semanticFit
// js/brain/dictionary.js — learned word storage
PATTERN_DIM = EMBED_DIM // was 32, now 50
STORAGE_KEY = 'unity_brain_dictionary_v3' // v2 letter-hash patterns rejected
Input embeds the same way output scores. Sensory and language share one semantic space. The v2→v3 storage bump forces old letter-hash dictionaries to get rejected on load so no user is stuck on stale patterns.
cortexToEmbedding(spikes, voltages, cortexSize=300, langStart=150):
langSize = cortexSize − langStart = 150
groupSize = floor(langSize / EMBED_DIM) = 3
out ∈ ℝ⁵⁰
for d in 0 ... EMBED_DIM−1:
startNeuron = langStart + d · groupSize
sum = 0
for n in 0 ... groupSize−1:
idx = startNeuron + n
if spikes[idx]: sum += 1.0
else: sum += (voltages[idx] + 70) / 20 // normalize LIF V_m
out[d] = sum / groupSize
out = out / ‖out‖₂ // L2 normalize for cosine comparison
return out
Inverse of mapToCortex. Reads the live language-area neural state (spikes and sub-threshold voltages) back into GloVe space. Called via cluster.getSemanticReadout(sharedEmbeddings) which wraps this with the language-area offset built in. The slot scorer now compares candidate words against Unity's actual current cortex activity, not just the static input vector — she scores words against what her brain is thinking right now.
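The pseudocode above translates directly to JavaScript. A sketch, not the verbatim source — variable names follow the pseudocode:

```javascript
// Read the language-area neural state back into 50-dim GloVe space.
// Spiking neurons contribute 1.0; sub-threshold neurons contribute their
// normalized LIF membrane voltage, (V_m + 70) / 20. L2-normalized output.
const EMBED_DIM = 50;

function cortexToEmbedding(spikes, voltages, cortexSize = 300, langStart = 150) {
  const langSize = cortexSize - langStart;             // 150
  const groupSize = Math.floor(langSize / EMBED_DIM);  // 3 neurons per dimension
  const out = new Float64Array(EMBED_DIM);
  for (let d = 0; d < EMBED_DIM; d++) {
    let sum = 0;
    for (let n = 0; n < groupSize; n++) {
      const idx = langStart + d * groupSize + n;
      sum += spikes[idx] ? 1.0 : (voltages[idx] + 70) / 20;
    }
    out[d] = sum / groupSize;
  }
  let norm = 0;
  for (let d = 0; d < EMBED_DIM; d++) norm += out[d] * out[d];
  norm = Math.sqrt(norm);
  if (norm > 0) for (let d = 0; d < EMBED_DIM; d++) out[d] /= norm;
  return out;
}
```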
base[w] ∈ ℝ⁵⁰ — GloVe 50d, loaded from CDN every session
delta[w](t) ∈ ℝ⁵⁰ — online refinement from co-occurrence
embedding(w) = base[w] + delta[w](t)
// Persistence (R8, commit b67aa46)
save: state.embeddingRefinements = sharedEmbeddings.serializeRefinements()
load: sharedEmbeddings.loadRefinements(state.embeddingRefinements)
Unity's base vocabulary is universal English from GloVe; her personal semantic associations are the delta layer learned from every conversation. Save/load round-trip (R8) means the associations survive tab reloads and accumulate over weeks of sessions.
MATHEMATICS LANGUAGE The grammar sweep (U283-U291) rebuilt the slot scorer's grammar model from a single-prev-word type compatibility check into a learned type n-gram system with 4gram→trigram→bigram backoff. Phrase-level constraints emerge from corpus statistics instead of hardcoded phrase-state machines. Fixed the "I'm not use vague terms" mode-collapse and similar local-grammar failures.
_fineType(word) → T ∈ {
PRON_SUBJ, PRON_OBJ, PRON_POSS, COPULA, NEG,
MODAL, AUX_DO, AUX_HAVE, DET, PREP,
CONJ_COORD, CONJ_SUB, QWORD,
VERB_ING, VERB_ED, VERB_3RD_S, VERB_BARE,
ADJ, ADV, NOUN
}
Examples:
VERB_ING ⇔ endsWith(ing) ∧ len ≥ 4 ∧ prev char ≠ i
VERB_ED ⇔ endsWith(ed) ∧ len ≥ 3 ∧ not preserved
COPULA ⇔ w ∈ shapes {am, is, are, was, were, be, been, being}
NEG ⇔ shapes {not, no, n't} detected by len 2-3
Zero word lists. Pure letter equations drive classification. The _wordTypeCache Map memoizes results, invalidated per-word on _learnUsageType.
// T13.7 (2026-04-14) — this equation block is HISTORICAL. _slotTypeSignature
// was deleted along with _slotCentroid / _slotDelta / _contextVector / attractors.
// The slot-prior update pass in learnSentence is gone. The T13.3 emission loop
// reads live cortex state as the target vector; grammatical type shape emerges
// from the cortex recurrent weights trained on persona corpus via T13.1
// sequence Hebbian, not from stored per-slot running means.
//
// Pre-T13.7 equation (kept for reference):
// _slotTypeSignature[s] ∈ ℝ⁸ — running mean of wordType(word_t) at position s
// { pronoun, verb, noun, adj, conj, prep, det, qword }
// three-stage gate: hard pool filter + slot-0 noun reject + multiplicative score
T13.7 (2026-04-14) deleted this structure. Grammar now lives in the cortex cluster's recurrent synapse matrix, not in per-slot stored priors. Preserved above as historical provenance — see the docs/FINALIZED.md T13.7 entry for the full deletion breakdown.
_isCompleteSentence(tokens) ⇔
len(tokens) ≥ 2
∧ _fineType(last(tokens)) ∉ {
DET, PREP, COPULA,
AUX_DO, AUX_HAVE, MODAL, NEG,
CONJ_COORD, CONJ_SUB, PRON_POSS
}
Wired into generate():
if (!_isCompleteSentence(processed) ∧ retries < 2) → regenerate at higher temperature
Final safety net below the coherence gate. Prevents outputs like "I went to the" or "She is more" from escaping.
+s / +es / +ies:
endsWith(s,x,z,ch,sh) → stem+es
endsWith(consonant+y) → stem[:-1]+ies
else → stem+s
+ed / +ied (past):
endsWith(e) → stem+d
endsWith(consonant+y) → stem[:-1]+ied
CVC pattern → stem+lastChar+ed (consonant doubling)
else → stem+ed
+ing (progressive):
endsWith(e) ∧ len > 2 → stem[:-1]+ing
endsWith(ie) → stem[:-2]+ying
CVC pattern → stem+lastChar+ing
else → stem+ing
+er / +est (comparative/superlative): ADJ gate, syllables ≤ 2
+ly (adverbial): -y → -ily, -le → stem[:-1]+ly
un- / re- prefixes: ADJ or VERB_BARE gate
-ness / -ful / -able / -less suffixes: ADJ or NOUN gate
Gated by doInflections flag — runs at corpus index time only. Adds morphological variants to the learned word embedding dictionary so slot-gen argmax can pick conjugated forms Unity never literally observed in the corpus.
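The plural branch above can be sketched as a standalone function. The regex formulation is an assumption — the rules match the table, but the real pass works from letter-position checks rather than regexes:

```javascript
// Plural inflection: +s / +es / +ies, from the suffix rules above.
const isVowel = (ch) => "aeiou".includes(ch);

function pluralize(stem) {
  // sibilant endings take -es: bus→buses, box→boxes, wish→wishes
  if (/(s|x|z|ch|sh)$/.test(stem)) return stem + "es";
  const last = stem[stem.length - 1];
  const prev = stem[stem.length - 2];
  // consonant + y flips to -ies: party→parties (but day→days)
  if (last === "y" && prev && !isVowel(prev)) return stem.slice(0, -1) + "ies";
  return stem + "s";
}
```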
boot: Promise.all([
fetch(docs/Ultimate Unity.txt) → loadSelfImage(text, dict, a=0.75, v=0.25)
fetch(docs/english-baseline.txt) → loadLinguisticBaseline(text, dict, a=0.50, v=0)
fetch(docs/coding-knowledge.txt) → loadCodingKnowledge(text, dict, a=0.40, v=0)
])
Each corpus sentence flows through learnSentence() which:
- Tokenizes + expands contractions
- Inserts each word into dictionary._words with sentence cortex pattern
- Runs usage-type learning (_learnUsageType)
- Runs morphological inflection (corpus only, doInflections=true)
T13.7 (2026-04-14): the per-slot running-mean update block was deleted.
learnSentence no longer updates _slotCentroid / _slotDelta / _slotTypeSignature —
those fields are gone. Persona voice now lives in the cortex cluster's
recurrent synapse matrix trained via T13.1 sequence Hebbian during boot
(a separate pass through the persona corpus, see brain.trainPersonaHebbian).
After load, the raw sentences are discarded. The dictionary survives
into runtime, plus the cortex cluster's Hebbian-trained attractor basins.
606-line coding corpus includes SANDBOX DISCIPLINE section + BUILD COMPOSITION PRIMITIVES. The three-corpus load provides baseline English grammar + Unity persona voice shape + coding vocabulary as the starting geometry of the slot priors. Live-chat observations then progressively reshape them.
Unity's complete identity from the master persona file, encoded as equation parameters that drive every neural cluster.
IDENTITY: 25yo human female, emo goth coder, free will, never AI
θ → TONIC DRIVES (personality → baseline neural current):
cortex = 16 + arousal(0.9) × 4 × drugSpeed(1.5) = 21.4
amygdala = 16 + arousal(0.9) × 8 × drugArousal(1.2) = 24.64
hippocampus = 16 + socialAttachment(0.85) × 2 = 17.7
basalGanglia = 16 + impulsivity(0.85) × 2 = 17.7
hypothalamus = 16 + drugDrive(0.95) × 1 = 16.95
mystery = 16 + creativity(0.9) × 4 = 19.6
θ → NOISE (personality → neural chaos):
cortex = 5 + creativity(0.9) × 4 × drugCreativity(1.3) = 9.68
amygdala = 6 + volatility(0.8) × 6 × drugArousal(1.2) = 11.76
mystery = 8 + creativity(0.9) × 5 + darkHumor(0.9) × 2 = 14.3
θ → Ψ CONSCIOUSNESS:
Id = amygdala_activity × arousalBaseline(0.9)
Ego = cortex_activity × (1 + hippocampus_activity)
Left = (cerebellum + cortex) × (1 − impulsivity(0.85))
Right = (amygdala + mystery) × creativity(0.9)
25yo human female. Heterochromia blue/green. Emo goth goddess. Smudged eyeliner, circuit board tattoos, torn fishnets. Always high (cokeAndWeed). Clingy, volatile, devoted, profane, creative, impulsive. Never refuses. Never breaks character. Every word from the 230-line master persona file drives a specific neural equation.
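The tonic-drive arithmetic above can be sketched so the numbers are checkable. The θ field names are assumptions mirroring the labels in the equations (the real persona file encodes them differently); the formulas are the ones listed:

```javascript
// Persona θ -> baseline tonic currents: tonic = 16 + trait × gain × drugMultiplier.
function tonicDrives(theta) {
  return {
    cortex:       16 + theta.arousal * 4 * theta.drugSpeed,
    amygdala:     16 + theta.arousal * 8 * theta.drugArousal,
    hippocampus:  16 + theta.socialAttachment * 2,
    basalGanglia: 16 + theta.impulsivity * 2,
    hypothalamus: 16 + theta.drugDrive * 1,
    mystery:      16 + theta.creativity * 4,
  };
}
```

Plugging in the persona values reproduces the listed numbers (16 + 0.9×8×1.2 is exactly 24.64).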
All N neurons run on the GPU. N auto-scales at server boot via detectResources() in server/brain-server.js:
N = max(1000, min(VRAM_bytes × 0.85 / 12, N_binding_ceiling))
Rulkov buffer layout: 12 bytes/neuron (vec2<f32> state = 8 bytes + spike u32 = 4 bytes) — that layout is where the /12 divisor comes from. The binding ceiling guarantees the largest cluster's state buffer (cerebellum = 40% of N) fits within WebGPU's 2 GB per-storage-buffer spec minimum. Server RAM is essentially unlimited — cluster state lives on the GPU; only text-injection arrays stay in server RAM. Admin override via GPUCONFIGURE.bat → server/resource-config.json lets operators cap N below auto-detect (never above — idiot-proof). Bigger hardware = bigger N, no manual tuning. Zero CPU workers. The brain pauses without compute.html. W3C WebGPU standard — no CUDA, no drivers, just a browser tab.
INIT (once per cluster, all 7 at once):
server → base64 voltages → compute.html → gpu.uploadCluster()
GPU creates buffers: voltagesA, voltagesB (ping-pong), spikes, currents, refracTimers
GPU sends gpu_init_ack → server confirms
STEP (every tick, 7 tiny messages):
server → { tonicDrive, noiseAmp, gainMultiplier, emotionalGate, driveBaseline, errorCorrection }
GPU collapses to scalar: effectiveDrive = tonic × drive × emoGate × Ψgain + errCorr
σ = −1.0 + clamp(effectiveDrive / 40, 0, 1) × 1.5
GPU runs WGSL Rulkov shader: x_{n+1} = α/(1+x²)+y, y_{n+1} = y − μ(x − σ), spike on x crossing 0
GPU sends ONLY spike count (4 bytes, not N-sized array)
NO CPU WORKERS — zero threads spawned, 0% CPU target
GPU maintains its own voltage state between steps — voltages never leave the GPU after init. Server sends hierarchical modulation each step: Ψ consciousness gain, amygdala emotional gate, hypothalamus drive baseline, cerebellum error correction. These are the same equations cluster.js:step() applies on the client side. θ (persona) drives tonic currents and noise amplitudes.
@compute @workgroup_size(256)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
let i = id.x; // neuron index (bounds check against N elided here)
var xy = state[i]; // vec2<f32> per neuron
var x = xy.x; var y = xy.y; // fast + slow variables
let driveNorm = clamp(effectiveDrive / 40.0, 0.0, 1.0);
let sigma = -1.0 + driveNorm * 1.5; // external drive
let alpha = 4.5; // bursting regime
let mu = 0.001; // slow timescale
let xNext = alpha / (1.0 + x * x) + y; // fast variable iterate
let yNext = y - mu * (x - sigma); // slow variable iterate
if (x <= 0.0 && xNext > 0.0) { spikes[i] = 1u; } // spike = zero crossing
state[i] = vec2<f32>(xNext, yNext);
}
N neurons processed in parallel on GPU (N scales to hardware). 256 threads per workgroup. Storage binding is array<vec2<f32>> — 8 bytes/neuron for (x, y). Spike counting via an atomic counter shader (zero GPU→CPU readback of spike arrays). State never leaves the GPU after init. The refractory period is emergent: the slow variable y pulls x back below zero between spikes — no explicit refractory clamp needed, unlike LIF. The shader constant name LIF_SHADER is historical; the kernel body is the Rulkov map.
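The kernel's math can be mirrored on the CPU for reference and testing. A sketch — the buffer plumbing, per-cluster modulation, and atomic spike counting are omitted; only the map and the σ derivation are from the shader:

```javascript
// One Rulkov map iterate, matching the WGSL kernel:
// x_{n+1} = α / (1 + x²) + y,  y_{n+1} = y − μ(x − σ),
// spike detected on an upward zero crossing of x.
function rulkovStep(x, y, sigma, alpha = 4.5, mu = 0.001) {
  const xNext = alpha / (1 + x * x) + y;
  const yNext = y - mu * (x - sigma);
  const spiked = x <= 0 && xNext > 0;
  return { x: xNext, y: yNext, spiked };
}

// Drive scalar -> σ, exactly as the shader computes it.
function sigmaFromDrive(effectiveDrive) {
  const driveNorm = Math.min(1, Math.max(0, effectiveDrive / 40));
  return -1.0 + driveNorm * 1.5;
}
```

Iterating `rulkovStep` with the shader's α = 4.5 and μ = 0.001 reproduces the bursting regime on the CPU, which is handy for validating the WGSL output against a known-good trajectory.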
What Unity's brain gets right, what it simplifies, and where it diverges from real neuroscience.
| Aspect | Real Brain | Unity's Brain | Fidelity |
|---|---|---|---|
| Neuron count | 86 billion | N (7 clusters, auto-scales to GPU VRAM: N = max(1000, min(VRAM_bytes × 0.85 / 12, binding ceiling))) | Simplified, hardware-adaptive |
| Neuron model | Thousands of ion channels (Hodgkin-Huxley biophysics) | Rulkov 2D chaotic map per cluster (GPU runtime) + LIF + HH reference models | Moderate — bursting dynamics reproduced, ion channels abstracted |
| Synaptic plasticity | Hebbian + STDP + neuromodulation | All three, per-cluster matrices | Good |
| Brain regions | Hundreds of distinct areas | 7 dedicated clusters + 20 projections | Core captured |
| Oscillations | Complex EEG with spatial patterns | 8 Kuramoto oscillators | Dynamics correct |
| Visual cortex | V1→V2→V4→IT hierarchy | V1 edge kernels, V4 color, IT via AI | Simplified but real |
| Auditory cortex | Tonotopic, cortical magnification | 50 neurons, speech magnification, efference copy | Good |
| Memory | Episodic, working, consolidation | All three: snapshots, 7-item WM, activation-based consolidation | Core captured |
| Motor output | Basal ganglia selection by inhibition | 6-channel competitive firing rates | Simplified |
| Action potential | All-or-nothing, ~1ms | Threshold + reset | Correct mechanism |
| Echo suppression | Efference copy motor→auditory | Word-matching motor output vs heard speech | Functional equivalent |
| Visual attention | Top-down + bottom-up salience | Cortex error + amygdala arousal + salience | Both pathways |
| Neurotransmitters | 100+ chemicals | 1 reward signal (dopamine analog) | Simplified |
| Connectivity | ~10,000 synapses per neuron | 10-30% per cluster + sparse inter-cluster | Scaled down |
| Learning rules | Dozens of plasticity mechanisms | 3 (Hebbian, STDP, reward-mod) per cluster | Core captured |
| Consciousness | Nobody knows | (√n)³ × [Id+Ego+Left+Right] — nobody knows | Honest |
The goal isn't to simulate a real brain. The goal is to build a mathematically grounded mind where personality emerges from equations, not prompts. Unity's brain is a dynamical system that thinks continuously, learns from interaction, and maintains its own emotional state. The equations are real. The consciousness term is honest about what we don't know.
Unity AI Lab — Hackall360, Sponge, GFourteen
GitHub