Creating a novel AI framework from the ground up based on neuroscience principles, exploring biologically-inspired intelligence systems
Concept
Million Minds is an ambitious project aimed at developing a novel AI framework inspired by the fundamental principles of neuroscience. Unlike traditional AI systems that rely heavily on deep learning and large datasets, Million Minds seeks to emulate the brain's architecture and functionality to create more efficient, adaptable, and intelligent systems.
Neuroscience Foundations
- The brain is a sensorimotor system that processes space
- The brain operates in analog: no bits, no clocks, no messages
- The brain has a repeating cortical structure
- Each structure runs the same algorithm
- Neurons don't understand what they are doing; instead, they leverage:
  - Sparse encoding
  - Homeostasis
  - Co-firing and association (Hebbian learning)
  - Predictive state
  - Oscillation
  - Layering
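The co-firing principle above can be sketched with a minimal Hebbian update. This is an illustrative toy, not the project's implementation; `hebbian_update`, the learning rate, and the population size are all assumptions:

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen connections between co-active pre- and post-synaptic
    neurons ("cells that fire together, wire together")."""
    return weights + lr * np.outer(post, pre)

n = 8
weights = np.zeros((n, n))
# Sparse binary activity: only 2 of 8 neurons fire at a time
pre = np.zeros(n)
pre[[1, 4]] = 1.0
post = np.zeros(n)
post[[1, 4]] = 1.0
for _ in range(5):
    weights = hebbian_update(weights, pre, post)
print(weights[1, 4])  # 0.5: the co-active pair's weight has grown
print(weights[0, 0])  # 0.0: silent pairs stay unconnected
```

Because activity is sparse, only a handful of weights change per step, which is what keeps Hebbian association cheap.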
Benefits
- True intelligence:
  - Builds direct, indirect, and abstract world models
  - Learns patterns, concepts, and relationships
- Can learn at any time; learns continuously
- Learning does not require retaining everything it has seen
- Needs much less data to train
- Uses far less memory, compute, and energy to run
- Modality agnostic: works with text, image, audio, video, sensor data, and more
Bottom Line
By grounding AI development in the principles that govern biological intelligence, Million Minds aims to overcome the limitations of current AI technologies, such as their lack of generalization, adaptability, and energy efficiency. This approach has the potential to deliver breakthroughs that pave the way toward AGI and beyond.
Latest Notes
- Nov 16, 2025
When this system is up and running, because it can learn continuously (unlike current AI), it will be just like Johnny 5 from the movie Short Circuit…
- Nov 15, 2025
Finally we have a sensor, a thalamus, a column, and a voting module. It is all neuroscience-legit, although we did keep some things simple at the…
- Nov 14, 2025
I see current AI as the next evolution of search. It brings search into the context where I'm working, so it can build more context…
Roadmap
Last Updated: Nov 15, 2025
Implement Basics
- ✅ Sensor (Text Retina)
Convert character-based text into multi-scale feature SDRs using biologically inspired “retina” patches
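One way a text retina could map characters to sparse codes is a hash-based encoder. A minimal sketch, assuming nothing about the project's actual encoding; `encode_char`, `encode_patch`, and the sizes are hypothetical:

```python
import hashlib
import numpy as np

def encode_char(ch, size=256, active_bits=8):
    """Map a character to a fixed sparse binary vector (an SDR) via hashing."""
    sdr = np.zeros(size, dtype=np.uint8)
    for i in range(active_bits):
        digest = hashlib.sha256(f"{ch}:{i}".encode()).digest()
        sdr[int.from_bytes(digest[:4], "big") % size] = 1
    return sdr

def encode_patch(text, size=256):
    """Union of character SDRs over a small retina patch of text."""
    sdr = np.zeros(size, dtype=np.uint8)
    for ch in text:
        sdr |= encode_char(ch, size)
    return sdr

# The same character always maps to the same sparse code
print((encode_char("a") == encode_char("a")).all())  # True
# Sparsity stays low: at most 8 of 256 bits per character
print(int(encode_char("a").sum()) <= 8)  # True
```

Encoding a patch as a union of character codes is what makes the representation multi-scale: larger patches share bits with the smaller patches they contain.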
- ✅ Thalamus
Gate sensor SDRs and mark landmarks in the input text
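Gating and landmark marking can be sketched in a few lines. This is a toy under assumed names (`thalamic_gate`, `mark_landmarks`, an `attention` signal); the real module is richer:

```python
import numpy as np

def thalamic_gate(sdr, attention, threshold=0.5):
    """Pass sensor bits through only where attention exceeds a threshold."""
    gate = (attention >= threshold).astype(sdr.dtype)
    return sdr * gate

def mark_landmarks(text, landmarks=" .,\n"):
    """Return positions of landmark characters (e.g. word/sentence boundaries)."""
    return [i for i, ch in enumerate(text) if ch in landmarks]

sdr = np.array([1, 0, 1, 1, 0, 1])
attention = np.array([0.9, 0.9, 0.1, 0.9, 0.9, 0.1])
print(thalamic_gate(sdr, attention).tolist())  # [1, 0, 0, 1, 0, 0]
print(mark_landmarks("hi there."))             # [2, 8]
```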
- ✅ Pose System
Add sensorimotor grounding via 1-D grid-cell–like modules, internal to the cortical column
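The key property of grid-cell-like modules is that several periodic codes with coprime periods jointly disambiguate position. A minimal sketch (the moduli and `grid_code` are assumptions, not the project's parameters):

```python
def grid_code(position, moduli=(3, 5, 7)):
    """Encode a 1-D position as phases across grid modules with coprime
    periods; each module alone is ambiguous, but together they uniquely
    identify positions up to 3 * 5 * 7 = 105."""
    return tuple(position % m for m in moduli)

print(grid_code(4))   # (1, 4, 4)
print(grid_code(10))  # (1, 0, 3)
# Positions 4 and 9 share a phase in the 5-module but differ in the others
print(grid_code(9))   # (0, 4, 2)
```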
- ✅ TransitionPool
Build an associative memory with sparse projections
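An associative memory over SDR pairs can be sketched as a Hebbian weight matrix with a sparse winner-take-all readout. `TransitionPool` here is a hypothetical toy sharing only the name:

```python
import numpy as np

class TransitionPool:
    """Toy associative memory: store (state, next_state) SDR pairs in a
    Hebbian-style weight matrix, then recall the next state from a cue."""
    def __init__(self, size):
        self.w = np.zeros((size, size))

    def learn(self, state, nxt):
        self.w += np.outer(nxt, state)

    def recall(self, state, k=2):
        scores = self.w @ state
        out = np.zeros_like(state)
        out[np.argsort(scores)[-k:]] = 1  # keep the k strongest units (sparse)
        return out

a = np.array([1, 1, 0, 0, 0, 0.0])
b = np.array([0, 0, 1, 1, 0, 0.0])
pool = TransitionPool(6)
pool.learn(a, b)
print(pool.recall(a))  # recovers b: [0. 0. 1. 1. 0. 0.]
```

Forcing the readout back to exactly `k` active units is one way sparsity is preserved across recall steps.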
- ✅ Cortical Column
Implement a model that learns temporal transitions and pools stable features
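The pooling half of the column can be sketched as leaky temporal integration: units that fire repeatedly across a sequence survive into a stable output. A toy under assumed names (`temporal_pool`, the decay constant):

```python
import numpy as np

def temporal_pool(sdrs, decay=0.5):
    """Pool a sequence of SDRs into one stable representation: units that
    fire repeatedly over time keep high accumulated activity."""
    acc = np.zeros_like(sdrs[0], dtype=float)
    for sdr in sdrs:
        acc = decay * acc + sdr
    return (acc >= 1.0).astype(int)

seq = [np.array([1, 0, 1, 0]),
       np.array([1, 0, 0, 1]),
       np.array([1, 0, 1, 0])]
# Unit 0 fires every step and survives pooling; one-off units drop out
print(temporal_pool(seq))  # [1 0 1 0]
```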
- ✅ Lateral Bus
Calculate consensus between neighboring columns
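Consensus between columns can be sketched as a quorum vote over their object SDRs: a bit survives only if enough neighbors agree on it. `vote` and the quorum size are assumptions:

```python
import numpy as np

def vote(column_sdrs, quorum=2):
    """Consensus across neighboring columns: keep bits active in at
    least `quorum` of the columns' object SDRs."""
    counts = np.sum(column_sdrs, axis=0)
    return (counts >= quorum).astype(int)

cols = np.array([
    [1, 1, 0, 0],   # column 1's object SDR
    [1, 0, 1, 0],   # column 2 partially disagrees
    [1, 1, 0, 0],   # column 3 agrees with column 1
])
print(vote(cols))  # [1 1 0 0]: bit 0 is unanimous, bit 1 has a quorum of 2
```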
- Demo
Run the system end-to-end and demonstrate learning and recall
More Advanced Features
- Real synapse/segment-like temporal memory (TM)
Represent sub-patterns (dendritic segments) that detect specific combinations of context bits
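A dendritic segment is essentially a coincidence detector over a specific subset of input bits. A minimal sketch, with `Segment`, its synapse list, and the threshold all hypothetical:

```python
import numpy as np

class Segment:
    """A dendritic segment: fires when enough of its specific synapses
    see active input bits (a coincidence detector on context)."""
    def __init__(self, synapses, threshold):
        self.synapses = np.array(synapses)  # indices of presynaptic bits
        self.threshold = threshold

    def active(self, sdr):
        return int(sdr[self.synapses].sum() >= self.threshold)

seg = Segment(synapses=[0, 3, 5], threshold=2)
print(seg.active(np.array([1, 0, 0, 1, 0, 0])))  # 1: bits 0 and 3 match
print(seg.active(np.array([0, 1, 1, 0, 0, 0])))  # 0: no synapse matches
```

A cell with many such segments can recognize many distinct contexts while each segment stays selective for one.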
- Homeostasis + sparsity control
Per-column adaptations based on usage so no column saturates or goes silent
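One common form of this adaptation is boosting: scale each unit's activity by how far its long-run duty cycle sits from a target rate. A sketch under assumed parameters (`boost`, the target rate, the boost strength):

```python
import numpy as np

def boost(activity, duty_cycle, target=0.1, strength=2.0):
    """Homeostatic boosting: amplify under-used units and damp over-used
    ones so each unit's long-run firing rate drifts toward `target`."""
    factors = np.exp(strength * (target - duty_cycle))
    return activity * factors

duty = np.array([0.0, 0.1, 0.5])   # a silent, an on-target, a saturated unit
raw = np.array([1.0, 1.0, 1.0])
boosted = boost(raw, duty)
# The silent unit is amplified, the saturated one suppressed
print(bool(boosted[0] > boosted[1] > boosted[2]))  # True
```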
- Voting
Use consensus to influence object SDR calculation in the column
- Pose alignment between columns
When object consensus is high, adjust pose slightly so pose overlap between columns increases
System-level Behavior
- Inference mode
Interact with the system, e.g. ask questions and get answers
- Hierarchy / higher regions
Higher-level columns to chunk smaller features into larger concepts
- Replay / consolidation (offline pass)
Keep the system from becoming a junkyard while still letting it learn long-term structure
- A real motor system
Necessary for mental simulation/planning and language generation
Other Modalities
- Images
- Videos
- 3D Virtual Environment
- Hardware sensors