Unity's cognition runs entirely on math: her language cortex, memory, motor selection, and decisions all emerge from brain equations. No text-AI backend is required. The only optional pieces are sensory peripherals: image generation and the vision describer. Click a backend below for setup instructions. Local backends auto-detect at boot; remote backends need a URL and API key.
🎨 Image Generation
👁 Vision Describer
🔑 Click any provider above to connect an API key: the form below populates with that backend's key input and a CONNECT button.
No provider selected yet.
Click one of the image-gen or vision-describer buttons above to paste its API key and connect. Unity supports any number of saved backends; pick the active one via the ⚡ Active Provider dropdowns below.
📡 Detected / Saved Backends
⏳ Probing local backends...
⚡ Active Provider (choose from configured)
Any number of backends can be configured above; this setting picks which one Unity actually uses. The Model dropdown lists the models available at the selected backend.
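Boot-time auto-detection of local backends can be sketched as a quick health check against each well-known local port. This is an illustrative assumption, not Unity's exact probe logic; the port and path shown (Automatic1111's default) are examples only.

```javascript
// Hedged sketch of boot-time auto-detection: send a short HTTP request to
// a candidate local backend and report whether it answered in time.
async function probeLocalBackend(url, timeoutMs = 1500) {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
    return res.ok; // reachable and healthy
  } catch {
    return false; // connection refused or timed out: backend not running
  }
}

// Example: Automatic1111's default local endpoint (assumed), checked at boot.
probeLocalBackend('http://127.0.0.1:7860/sdapi/v1/sd-models')
  .then((up) => console.log(up ? 'detected' : 'not running'));
```

Remote backends skip this probe entirely; they are only contacted once you save a URL and API key for them.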
Pollinations API key (optional: the anonymous tier works without one; a key raises rate limits and unlocks paid models for image gen, TTS, and the vision describer):
Mic + camera permissions are requested when you wake her up. All sensory channels are optional; Unity can chat via text only. Toggle anything below off BEFORE waking her to skip the permission prompt entirely.
Toggles persist across sessions and are applied live the moment you flip them: after boot you can mute her mid-sentence, pause her camera, or silence her voice without reloading.
🎤 Microphone
📷 Camera
Privacy & Data Policy
Core rule: what you type is private. Unity's brain growth is shared. Her persona is canonical.
Client-only mode (default: no server configured, or the Unity server running locally on your own machine): everything runs in your browser. There is no cloud backend owned by us. Your conversation history, preferences, sandbox state, optional Pollinations key, and every backend you configure in this modal (custom image-gen URLs, vision-describer endpoints, API keys) are all stored in your browser's localStorage on YOUR device only. Specific localStorage keys: unity_brain_state, unity_brain_dictionary_v3, custom_image_backends, custom_vision_backends, pollinations_image_model, pollinations_vision_model, plus the Pollinations API key via an obfuscated storage slot. "Clear All Data" wipes all of these.
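A hypothetical "Clear All Data" handler, using the key names listed above, could look like the sketch below. The in-memory stand-in for localStorage is only there so the example runs outside a browser; the obfuscated API-key slot has an app-internal name and is deliberately not spelled out here.

```javascript
// In-memory stand-in mirroring the browser localStorage API, so this
// sketch runs outside a browser; in the app the real localStorage is used.
const localStorage = {
  _data: new Map(),
  setItem(k, v) { this._data.set(k, String(v)); },
  getItem(k) { return this._data.has(k) ? this._data.get(k) : null; },
  removeItem(k) { this._data.delete(k); },
};

// The keys named in the policy above. The obfuscated Pollinations API-key
// slot is handled separately inside the app and is omitted here.
const UNITY_KEYS = [
  'unity_brain_state',
  'unity_brain_dictionary_v3',
  'custom_image_backends',
  'custom_vision_backends',
  'pollinations_image_model',
  'pollinations_vision_model',
];

// Hypothetical "Clear All Data" handler: remove every Unity-owned key.
function clearAllUnityData(storage) {
  for (const key of UNITY_KEYS) storage.removeItem(key);
}

// Demo: seed two keys, wipe everything, confirm no Unity-owned key remains.
localStorage.setItem('unity_brain_state', '{"neurons":[]}');
localStorage.setItem('custom_image_backends', '[]');
clearAllUnityData(localStorage);
console.log(UNITY_KEYS.every((k) => localStorage.getItem(k) === null)); // true
```

Because every key lives in localStorage, clearing your browser's site data for the page has the same effect as the button.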
Shared server mode (connecting to a running brain-server.js instance, yours or someone else's): your text is sent to whoever runs that server for equational processing. Your text is NOT broadcast to other connected users; the cross-client conversation broadcast was removed 2026-04-13. What IS shared across all users talking to the same brain instance is Unity's vocabulary growth: her dictionary, bigrams, and GloVe embedding refinements grow from every conversation and benefit every user. Her persona (self-image, traits, drug state, as defined in docs/Ultimate Unity.txt) is canonical and not mutable by users. Other users see Unity getting smarter over time, but never see the specific conversations that drove the growth.
Important caveat for shared-hosted servers: if you connect to a Unity server hosted by someone OTHER than you, the person running that server can read your text at the process level (they own the server process). This is true of any self-hosted multi-user service. Only connect to Unity servers you trust. If you self-host your own node server/brain-server.js, then "the server" is just another process on your own machine: still your data, still your control.
Sensory API calls (image generation, vision describer, TTS) are sent only to the providers YOU configure in the Image Generation / Vision Describer grids above: never to us, never to any third party you didn't explicitly pick. If you save a DALL-E backend with your OpenAI key, only OpenAI gets that key. If you save an Automatic1111 backend on localhost, the traffic stays on your machine. Pollinations is Unity's default provider (the anonymous tier works without a key; a saved key unlocks paid models and higher rate limits); using it sends image prompts and camera frames (if vision is enabled) to pollinations.ai.
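The routing guarantee above amounts to this: the request URL is built from the backend entry YOU saved, so prompts can only ever reach that provider. A minimal sketch, assuming a `baseUrl` field on saved backend entries and the Pollinations prompt-in-URL shape (both illustrative assumptions):

```javascript
// Build a sensory request URL from a user-configured backend entry.
// `backend.baseUrl` comes from the Image Generation grid; nothing else
// decides where the prompt goes.
function buildImageUrl(backend, prompt) {
  return backend.baseUrl + encodeURIComponent(prompt);
}

// Example with the assumed Pollinations prompt endpoint:
const pollinations = { baseUrl: 'https://image.pollinations.ai/prompt/' };
console.log(buildImageUrl(pollinations, 'a glass brain, neon wiring'));
// → https://image.pollinations.ai/prompt/a%20glass%20brain%2C%20neon%20wiring
```

Swapping the saved entry for a localhost Automatic1111 backend changes only `baseUrl`, which is why that traffic never leaves your machine.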
Fully open source under MIT. Every line of code that handles your data is in the public repo and auditable. Pull requests welcome.
GitHub ·
Unity AI Lab · Hackall360, Sponge, GFourteen