I was listening to one of my regular podcasts on Saturday: 60-Second Science from Scientific American. [iTunes] This particular snippet of science in the news was titled Box Jellyfish Eyes Aim at the Trees. It seems that box jellyfish have 24 eyes, and four of them point above the water’s surface. They use those four exclusively to navigate. They live in mangrove swamps and use the tree limbs as navigation markers.
Box jellyfish, like most jellyfish, have minimal brains. Multiple dedicated visual sensors allow for less central processing: rather than a brain having to pull apart a variety of signals, redirect sensors to different targets when desired, or screen peripheral information to change the task of a particular sensing organ, each eye handles one narrow job.
For years, human language recognition tried to model language the way linguists understand it; solving the problem the way we thought humans solved it seemed like a good direction. The large breakthroughs came when massive computation became more available and we started treating it more like an engineering problem. Early AI research has many similarities to this.
There have been a lot of robotic devices with one or two cameras for “eyes”, and we have done some work on image recognition from them, feeding that information (perhaps along with data from infrared or pressure sensors) to a central processing location. Perhaps an attempt to decentralize robot computation and increase the number of sensors would lead to some interesting uses and solutions.
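To make the idea concrete, here is a minimal sketch of that jellyfish-style arrangement. All names, thresholds, and readings here are hypothetical, invented for illustration: each sensor node does its own local processing and forwards only a compact summary, so the central controller deals with pre-digested signals rather than raw feeds.

```python
# Hypothetical sketch of decentralized sensing: each node summarizes
# its own raw readings locally; the "brain" sees only the summaries.

from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorNode:
    """One sensor with its own local processing, like one jellyfish eye."""
    name: str
    task: str  # the narrow job this sensor is dedicated to

    def summarize(self, raw_readings: list[float]) -> dict:
        # Local processing: reduce many raw samples to one small summary.
        return {
            "sensor": self.name,
            "task": self.task,
            "signal": mean(raw_readings),
            "alert": max(raw_readings) > 0.8,  # hypothetical threshold
        }

def central_controller(summaries: list[dict]) -> str:
    # The central "brain" reacts only to alerts, not to raw data.
    alerts = [s["sensor"] for s in summaries if s["alert"]]
    return f"steer using {alerts}" if alerts else "hold course"

# Usage: four upward-pointing "eyes" tracking landmarks above the water.
eyes = [SensorNode(f"eye{i}", "track treeline") for i in range(4)]
summaries = [eye.summarize([0.1, 0.2, 0.9 if i == 2 else 0.3])
             for i, eye in enumerate(eyes)]
print(central_controller(summaries))
```

The design point is just the division of labor: adding more eyes costs almost nothing centrally, because each node screens its own peripheral information before anything reaches the controller.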
Just a thought.