Book Review: Robots, Reasoning, and Reification
This new book, published by Springer and written by Louise and James Gunderson, is targeted at artificial intelligence researchers and professionals.
Here is a description from the publisher:
Robots, Reasoning, and Reification focuses on a critical obstacle that is preventing the development of intelligent, autonomous robots: the gap between the ability to reason about the world and the ability to sense the world and translate that sensory data into a symbolic model.
This ability is what enables living systems to look at the world and perceive the things in it. In addition, intelligent living systems can extrapolate from their mental models and predict the effects of their actions in the real world. The authors call this bi-directional mapping of sensor data to symbols and symbolic manipulation onto real world effects reification. After exploring the gulf between bottom-up and top-down approaches to autonomous robotics, the book develops the concepts of reification from biologically based premises, and follows the development into the necessary components and structures that can be used to provide equivalent capabilities for intelligent robots. It continues by demonstrating how the reification engine supports both learning from experience and creating new behaviors and representations of the world.
I like the way this book begins…
Where is my robot?
You know – the one that acts like the ones in the movies; the one that I just tell
what to do, and it goes out and does it. If it has problems, it overcomes them; if
something in the world changes, it deals with the changes. The robot that we can
trust to do the dirty, dangerous jobs out in the real world – where is that robot? What
is preventing us from building and deploying robots like this? While there are a
number of non-trivial and necessary hardware issues, the critical problem does not
seem to be hardware related. We have many examples of small, simple systems that
will (more or less) vacuum a floor, or mow a lawn, or pick up discarded soda cans
in an office. But these systems have a hard time dealing with new situations, like a
t-shirt tossed on the floor, or the neighbor’s cat sunning itself in the yard. We also
have lots of teleoperated systems, from Predator aircraft, to deep sea submersibles,
to bomb disposal robots, to remote controlled inspection systems. These systems
can deal with changes to the world and significant obstacles provided that one or
more humans are in the loop to tell the robot what to do.
So, what happens when a person takes over the joystick, and looks through the
low-resolution, narrow field of view camera of a perimeter-patrol security robot?
Suddenly, where the robot was confounded by simple obstacles and easy-to-fix situations,
the teleoperated system is able to achieve its goals and complete its mission.
This is despite the fact that in place of a tight sensor-effector loop, we now have a
long delay between taking an action and seeing the results (very long in the case of
NASA’s Mars rovers). We have the same sensor data, we have the same effector capabilities,
we have added a massive delay, yet the system performs better. Of course,
it is easy to say that the human is just more intelligent (whatever that means), but
that does not really answer the question. What is it that the human operator brings
to the system?
We believe that a major component of the answer is the ability to reify: the ability
to turn sensory data into symbolic information, which can be used to reason about
the situation, and then to turn a symbolic solution back into sensor/effector actions
that achieve a goal. This bridging process from sensor to symbol and back is the
focus of this book. Since it is the addition of a human to the system that seems to
enable success, we draw heavily from current research into what biological systems (primarily vertebrates) do to succeed in the world, and how they do what they do.
We look at some research into cognition on a symbolic level, and research into the
physiology of biological entities on a physical (sensor/effector) level. From these
investigations we derive a computational model of reification, and an infrastructure
to support the mechanism. Finally, we detail the architecture that we have developed
to add reification to existing robotic systems.
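The bidirectional mapping the excerpt describes — sensor data up to symbols, reasoning at the symbolic level, and the symbolic result back down to effector actions — can be sketched in a few lines. This is only an illustrative toy, assuming a simple patrol robot; none of the function names or rules below come from the Gundersons' actual architecture.

```python
# Hypothetical sketch of a reification loop: sense -> symbol -> plan -> effector.
# All names and rules are illustrative, not the book's actual implementation.

def reify(sensor_data):
    """Map raw sensor readings up to symbolic facts (sensor -> symbol)."""
    symbols = set()
    if sensor_data.get("range_cm", 999) < 30:
        symbols.add("obstacle_ahead")
    if sensor_data.get("floor_clear", True):
        symbols.add("path_clear")
    return symbols

def plan(symbols, goal):
    """Reason over the symbolic model to choose an abstract action."""
    if "obstacle_ahead" in symbols:
        return "avoid"
    return "advance" if goal == "patrol" else "idle"

def ground(action):
    """Map the symbolic action back down to effector commands (symbol -> effector)."""
    commands = {"avoid": ("turn", 45), "advance": ("drive", 10), "idle": ("stop", 0)}
    return commands[action]

# One pass through the loop: a close-range reading yields an avoidance turn.
reading = {"range_cm": 20, "floor_clear": True}
action = plan(reify(reading), goal="patrol")
print(ground(action))  # ('turn', 45)
```

The point of the sketch is the round trip: the planner never sees raw range values, and the effector never sees symbols — each side works only in its own vocabulary, with reification bridging the two.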
One thing: I am not sure what the publisher is referring to with the statement
One of the obvious deficits of this model, is that the robot can not learn.
What they are really saying is that learning capability still needs to be worked out.
Visit us at www.Roslyn-Robot.com