Simulation of Large-Scale Brain Models

From Eugene Izhikevich's website:

On October 27, 2005 I finished simulation of a model that has the size of the human brain. The model has 100,000,000,000 neurons (hundred billion, or 10¹¹) and almost 1,000,000,000,000,000 (one quadrillion, or 10¹⁵) synapses. It represents 300×300 mm² of mammalian thalamo-cortical surface, specific, non-specific, and reticular thalamic nuclei, and spiking neurons with firing properties corresponding to those recorded in the mammalian brain. The model exhibited alpha and gamma rhythms, moving clusters of neurons in up- and down-states, and other interesting phenomena.
One second of simulation took 50 days on a Beowulf cluster of 27 processors (3 GHz each). Why did I do that?
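The quoted figures invite a rough sanity check. A minimal back-of-envelope sketch, assuming only the numbers given above (10¹⁵ synapses, 27 processors at 3 GHz, 50 days of wall-clock time per simulated second) and ignoring memory bandwidth and inter-node communication:

```python
# Back-of-envelope check of the simulation's scale.
# All inputs are the figures quoted in the post; the derived quantities
# are illustrative estimates, not measurements from the actual run.
synapses = 1e15                  # one quadrillion synapses
cores = 27                       # Beowulf cluster size
clock_hz = 3e9                   # 3 GHz per processor
wall_seconds = 50 * 24 * 3600    # 50 days per simulated second

slowdown = wall_seconds          # wall-clock seconds per simulated second
total_cycles = cores * clock_hz * wall_seconds
cycles_per_synapse = total_cycles / synapses

print(f"slowdown factor: {slowdown:.2e}x real time")
print(f"total CPU cycles per simulated second: {total_cycles:.2e}")
print(f"cycles per synapse per simulated second: {cycles_per_synapse:.0f}")
```

The result, roughly 350 clock cycles per synapse per simulated second across the whole cluster, is a useful way to see why a model of this size ran about four million times slower than real time.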

This sounds like a great achievement, but I've always been interested in modeling the smallest, simplest, and slowest brain that exhibits intelligence. How many neurons and connections would it have? Which modules and components are essential, and which are not? How much intelligence would be lost because of its size and simplicity? How quickly could it be trained to do something useful? Which properties would need to be wired in, and which would emerge?


What will you say?