A few days ago I attended one of the Google Seattle Tech Talks; this one was given by Helen Greiner, iRobot's co-founder and chairman. She was visiting Seattle to sign the licensing agreement for UW's Seagliders and stopped at Google Kirkland to talk about iRobot and its robots.
Her talk was interesting, not so much for the descriptions of the robots that iRobot develops, but mostly for the history of the company and what they learned as they tried to design and build robots, find their market, and sell enough products to survive. This somewhat reminded me of the story of the Webmind AI Engine (Waking Up from the Economy of Dreams), albeit with a happy ending, as iRobot is alive and profitable (although its stock has recently been trading close to its 52-week low).
There was a brief Q-and-A session at the end, and I had an opportunity to ask a few questions, one of which was "When do you see your robots becoming more intelligent?" Helen's response was that intelligence is a complex term, but their robots are intelligent to some degree: they don't fall down stairs, can sense objects around them, interact with virtual walls, can re-dock themselves, and, most importantly, they clean carpets and floors and do a good job at that. Essentially -- what follows is obviously my interpretation -- they are well designed for a particular task, and there is no obvious need or easy way for them to become more intelligent.
As I own several generations of Roombas, I'm familiar with how they operate and what they are capable of. Still, I struggled to counter Helen's definition of intelligence without falling prey to Tesler's Theorem -- "AI is whatever hasn't been done yet" (Gödel, Escher, Bach: an Eternal Golden Braid, p.601). Do Roombas possess even limited intelligence? I do not believe so, and here is why.
It's not so much the ability to solve a problem that is critical to intelligence as it is the ability to learn how to solve problems. Most tests for intelligence seem to focus on the ability to solve problems, relying on the assumption that if you can solve those problems, you must have learned the ability to solve them, hence you're intelligent. A better way to test intelligence would be to have some sort of dynamic test in which you learn to solve problems as part of the test, and the progress you make would define your level of intelligence. Otherwise we don't really test intelligence so much as we test experience.
Roombas don't learn; they don't improve their actions. For example, there is a 5-inch space at the floor transition between my kitchen and the thick carpet in my living room, and my Roomba keeps getting stuck there (literally; it can't move and needs to be picked up). With more intelligence, it could learn to avoid those spots, or perhaps approach them from a different direction. Roombas also don't acquire new actions, and don't generate actions to do their job more effectively. Even their initial "experience" is not acquired but hard-wired, based on the experience of their designers. While there are noticeable differences between the first and latest generations of Roomba -- more autonomy, the ability to dock itself, the ability to sense objects and slow down before hitting an obstacle, and other things -- this is still not enough to call it intelligent.
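To make concrete what even minimal learning would look like here, consider a toy sketch (this is my own illustration, not iRobot's actual navigation algorithm, and the grid world, hazard cells, and class name are all hypothetical): a robot that wanders randomly, but remembers the cells where it got stuck and excludes them from future moves.

```python
import random

class LearningRobot:
    """Toy illustration of one-shot avoidance learning: the robot
    remembers grid cells where it got trapped and never revisits them."""

    def __init__(self, hazards, grid_size=5):
        self.hazards = set(hazards)   # cells that trap the robot (hypothetical map)
        self.learned_no_go = set()    # cells it has learned to avoid
        self.grid_size = grid_size
        self.pos = (0, 0)

    def step(self):
        x, y = self.pos
        # Candidate moves: in-bounds neighbors not already marked as no-go.
        moves = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < self.grid_size
                 and 0 <= y + dy < self.grid_size
                 and (x + dx, y + dy) not in self.learned_no_go]
        if not moves:
            return
        nxt = random.choice(moves)
        if nxt in self.hazards:
            # "Got stuck": a human rescues it, and it remembers the cell.
            self.learned_no_go.add(nxt)
            return
        self.pos = nxt

random.seed(0)
robot = LearningRobot(hazards={(2, 2)})
for _ in range(1000):
    robot.step()
# After wandering long enough, the trap cell ends up in learned_no_go,
# and the robot never enters it again.
```

An unlearning Roomba, by contrast, behaves as if `learned_no_go` were always empty: it blunders into the same 5-inch gap on every cleaning pass.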
The more difficult question is: if we added more sensors, mapping capabilities, working memory, and other components, would that make them intelligent? I will not attempt to answer this question here, but will write a separate post on how I define intelligence and what it may take to build an intelligent robot.