Biologically Accurate Modeling

Chris Chatman, on his weblog, paints a gloomy picture of what awaits biologically accurate neural modeling:

By most estimates, there are 100 billion neurons in the brain. Some neurons are known to have more than 1,000 dendrites, and up to about 1,000 different branchings of their axons. There are some 50 known neurotransmitters, and who knows how many other neuromodulators may exist (hormones, neural growth factors, neurosteroids). There are also many different receptor types for each of the neurotransmitters. A conservative estimate of the number of interactions you'd have to model to be biologically accurate is somewhere around 225,000,000,000,000,000 (225 million billion).
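The quote doesn't spell out how the 225-million-billion figure is derived, but a back-of-envelope check on the numbers it does give lands in the same order of magnitude. A quick sanity-check sketch (my own crude combination of the quoted quantities, not Chatman's actual derivation):

```python
# Back-of-envelope check on the quoted numbers; the exact derivation
# behind "225 million billion" isn't given, so this only shows the
# order of magnitude implied by the parts.
neurons = 100e9        # ~10^11 neurons in the brain
dendrites = 1000       # some neurons have more than 1,000 dendrites
axon_branches = 1000   # up to ~1,000 branchings of their axons

# Crudely multiplying neurons by dendritic and axonal fan-out gives
# ~10^17 potential interaction sites -- the same ballpark as the
# quoted 225,000,000,000,000,000 (2.25e17).
interactions = neurons * dendrites * axon_branches
print(f"{interactions:.0e}")
```

Factoring in 50 transmitters and multiple receptor types per transmitter only pushes the count higher, which is rather the point.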

While this is definitely true, I'm almost equally split between two positions. On one side, the situation may be even more hopeless if you agree with Eugene Izhikevich's polychronization theory:

...spiking networks with delays have more groups than neurons. Thus, the system has potentially enormous memory capacity and will never run out of groups, which could explain how networks of mere 10^11 neurons (the size of the human neocortex) could have such a diversity of behavior. (p. 270)
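Izhikevich's point is essentially combinatorial: once axonal delays matter, a group is identified by which neurons participate and in what time-locked pattern, so the number of potential groups grows far faster than the number of neurons. A toy illustration of that combinatorics (my own simplification, not his simulation), counting just unordered 3-neuron subsets and ignoring delay patterns entirely:

```python
from math import comb

# Toy illustration only: even counting bare 3-neuron subsets (before
# multiplying by the distinct delay patterns each subset can realize),
# the candidate-group count dwarfs the neuron count.
for n in (10, 100, 1000):
    groups = comb(n, 3)  # number of 3-neuron subsets
    print(f"{n} neurons -> {groups} candidate groups")
```

For n = 1000 that is already over 166 million candidate triplets, and real polychronous groups multiply this further by the admissible delay patterns.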

On the other side, I would not set myself the goal of building/modeling/simulating something that exhibits general-purpose intelligence; after all, you don't expect to have a conversation about the weather with one of the participants in the Grand Challenge. I'm more interested in building brains that demonstrate special-purpose intelligence. After all, there are "only" 20,000 nerve cells in Aplysia, which is a much more manageable size to model.

I'm keenly interested in modeling the smallest and simplest brain that shows signs of intelligence (defined in Cotterill's terms).
