In my first post ever, I discussed how specialized circuits in the spinal cord (called "central pattern generators," or CPGs) coordinate the intricate motions and muscle patterns involved in running and walking, without significant input from the brain. The autonomy of these circuits allows animals to run and walk while their mental efforts are otherwise engaged; for example, we can talk on the phone while walking to dinner, and decapitated chickens can still run away.
One of the most important features of CPGs is their adaptability. Whether running through a forest, walking on an oily surface, or dribbling a soccer ball, we need to continuously modify our movements. Thus, rather than generating rigid action patterns, CPGs provide a flexible template for coordinating our muscles and joints. This template interacts with sensory information, allowing us to elegantly adapt to our unpredictable world. Flexibility, however, poses a challenging computational problem; we must decipher not only how circuits of neurons coordinate hundreds of muscles, but also how their output can be refined by incoming sensory information.
Without understanding these fundamental issues, it is difficult to produce machines that can move as intelligently as we do. Honda's ASIMO, "The World's Most Advanced Humanoid Robot," is capable of executing an astounding range of human-like movements (running, walking smoothly, reaching for objects), but it has previously stumbled and fallen down stairs. A recent article in PLoS Computational Biology describes a new and improved droid named RunBot, which is capable of adapting to unfamiliar terrain in an animal-like way.
Although not nearly as cute as ASIMO, RunBot's motor circuitry is more "intelligent" (i.e. more human). As I mentioned in my earlier post, the motor system is arranged in a hierarchy: the "higher" control centers give the signal to initiate a movement, recruiting the "lower" CPGs to take care of the details. These lower circuits respond to the environment reflexively, incorporating localized feedback to generate intricate adjustments in muscle tone. This responsiveness allows us to immediately compensate for small perturbations, such as unnoticed rocks on a trail. When we need to significantly modify our gait, however, such as stepping over a baby, we must enlist the higher centers, which will generate more dramatic modifications to the CPGs.
ASIMO lacks this hierarchy, requiring it to continuously calculate the position and motion of every joint. RunBot, however, has been engineered with several levels of control, allowing it to adapt to changes in terrain in a more computationally efficient manner. RunBot interprets the environment with an infrared sensor, which communicates with the lower circuits to regulate their activity. Thus, when RunBot encounters an alteration to its terrain and becomes unbalanced, this sensor modifies the pattern of the lower circuits, allowing the bot to change its gait.
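To make the contrast concrete, here is a minimal sketch of that hierarchical arrangement. All class and parameter names here are my own invention for illustration, not from the RunBot paper: a lower circuit runs a local, reflex-driven step cycle, and an upper level carrying the infrared sensor's signal only nudges that circuit's parameters when the terrain changes, rather than recomputing every joint position the way ASIMO must.

```python
class LowerCircuit:
    """Reflex-driven step generator (hypothetical): swings the leg in
    response to a local ground-contact signal, with no global
    computation of joint positions."""
    def __init__(self, stride=1.0, speed=1.0):
        self.stride = stride   # how far each step reaches
        self.speed = speed     # how fast the leg swings

    def step(self, ground_contact):
        # Local reflex: touching the ground triggers the next swing phase.
        phase = "swing" if ground_contact else "extend"
        return phase, self.stride * self.speed


class UpperLevel:
    """Sensor layer (hypothetical): modulates the lower circuit's
    parameters instead of micromanaging each joint."""
    def modulate(self, lower, slope_detected):
        if slope_detected:
            # Terrain change: shorten and slow the gait.
            lower.stride *= 0.7
            lower.speed *= 0.8


lower = LowerCircuit()
upper = UpperLevel()
upper.modulate(lower, slope_detected=True)   # infrared sensor reports a slope
phase, amplitude = lower.step(ground_contact=True)
```

The design point is the division of labor: the lower circuit handles the moment-to-moment reflex loop on its own, and the upper level intervenes only with coarse parameter adjustments, which is far cheaper than ASIMO's continuous whole-body calculation.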
However, like humans, RunBot must learn how to modify its movements with respect to sensory input. When we learn how to walk, our brains "train" our CPGs until they can execute the movement relatively independently. These same mechanisms come into play when a runner learns to hurdle or a soccer player learns a new move; these behaviors initially require significant concentration, but with practice can be executed with little mental effort. To replicate this learning process, RunBot's circuitry includes, according to the authors, "online learning mechanisms based on simulated synaptic plasticity." Thus, when RunBot first attempts to climb a slope, it falls over like poor ASIMO. With trial and error, however, its circuitry learns to properly compensate for the relevant sensory input, shortening and slowing its steps just like a human.
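That trial-and-error process can be caricatured with a simple correlation-based weight update. This is my own toy learning rule, not the plasticity equations from the paper: each fall acts as an error signal that strengthens the connection from the "slope ahead" sensor signal to the gait-shortening response, until the response is strong enough that the bot no longer falls.

```python
def climbs_slope(response_strength, required=1.0):
    # Toy stand-in for the physics: a strong enough gait
    # adjustment means the bot makes it up the slope.
    return response_strength >= required


def train(weight=0.0, learning_rate=0.25, max_trials=20):
    """Hypothetical online plasticity: strengthen the sensor-to-gait
    synapse after every fall until the slope is climbed."""
    falls = 0
    sensor_signal = 1.0              # infrared sensor sees the slope
    while falls < max_trials:
        response = weight * sensor_signal
        if climbs_slope(response):
            break                    # learned: the gait change prevents the fall
        # A fall is the error signal; sensor activity correlated with
        # the failure strengthens the synapse.
        weight += learning_rate * sensor_signal
        falls += 1
    return weight, falls


final_weight, falls = train()
```

After a handful of falls the weight saturates at an effective value and the behavior runs without further correction, which is the robotic analogue of a practiced movement no longer demanding concentration.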