Kelly wanders enthusiastically over a wide landscape of cool techno-bio projects in the Wired-typical ecstasy over scary change that will likely do some people a lot of good. I find it easy to grant that most of the projects he describes might do a lot of people a lot of good, but the bland acceptance of the risks puts me off. However, it's no worse than the magazine, and it has plain old legible typography and an extensive bibliography.

Brooks wrote the journal article Fast, Cheap and Out of Control from which Kelly gets his title. Flesh and Machines is principally about his research developing robots whose interaction with the physical world drives their behavior almost directly. His specific discussion of how very simple states in each of a robot's six legs can produce successful walking is delightful. From this he derives the subsumption architecture, building complex behavior out of unchanged elements of simpler behavior (instead of merging overlapping small programs into one big program), and a belief, as much philosophical as practical, that intelligence requires embodied existence. The book ends with understandable pseudocode for one of his early robots.
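The flavor of that pseudocode is easy to suggest, though what follows is my own minimal sketch, not Brooks' actual code: independent behavior layers each propose an action from raw sensor readings, and a higher layer, when it is active, subsumes (overrides) everything below it. The layer names and the `obstacle` sensor key here are invented for illustration.

```python
def wander(sensors):
    """Lowest layer: always active, just drift forward."""
    return "forward"

def avoid(sensors):
    """Higher layer: active only when an obstacle is sensed."""
    if sensors.get("obstacle"):
        return "turn"
    return None  # inactive; lower layers keep control

# Layers listed from highest priority to lowest.
LAYERS = [avoid, wander]

def act(sensors):
    # The first active layer subsumes every layer below it.
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({}))                   # wander is in charge
print(act({"obstacle": True}))   # avoid subsumes wander
```

The point of the architecture is visible even at this toy scale: `wander` is never edited or merged into `avoid`; complexity comes from stacking unchanged simple behaviors.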
There are huge social implications in Brooks' discussions of his work, though. He and his students have already had practical results, ranging from the Sojourner mission to Mars to emotion-aping toys aimed at the mass market. He expects more, from autonomous exploration of the solar system to cheap autonomous housecleaning robots to widespread international labor markets using telepresence to avoid immigration. His imagined housecleaning robots would make the subsumption architecture obvious: each of them deeply stupid and nearly random in its motion, coming in a range of sizes none very big, yet reliably keeping a house clean through their interactions. On the other hand, those haven't been built yet, and his description of trying to use robotic lawn mowers makes rabbits seem safer, simpler, cheaper, and more usefully controlled.
One of the oddities of Brooks' book is guessing what his attitude toward humans is; I was somewhat taken aback by a comment, early on, that he yearned to see HAL exist even though that computer went mad and murderous. He also describes human behavior as automatic in a way that I associate with politics that consider human happiness irrelevant. I suspect this of being a subtle joke, though; late in the book, after several anecdotes of people who had programmed emotion-imitations reacting to their robot with (human and therefore presumably) real emotions, he branches into a discussion of the philosophical arguments about whether "real life" could ever exist in a machine. He summarizes most of the arguments against as versions of There Must Be Some Special Stuff In Us Because We're Special, descendants of vitalism, and himself strongly states that he thinks of people as machines. However, he also points out that although he believes his children to be fundamentally machines, he loves them dearly and not for their biochemistry; he wants robots to be treated well when they can feel well or ill, even though we may have made them differently than our born children are made. He also has some practical arguments about why autonomous robots are not likely to take over the world, and an interesting discussion of Asimov's Three Laws.
After deflating Searle and other robot-pessimists, he deflates the extropian freeze-me-for-later techno-eschatologists, citing a history of such predictions that put the date just about when the predictor would turn 70. The belief that we live in exactly the right Special Time is about as irrational as the belief that we contain exactly the right Special Stuff, he implies.
I'm not convinced that telepresence work - for instance, staffing Japanese hospitals over phone lines from the Philippines - would actually be a boon for the poor workers. Much depends on the relative value of autonomy and physical safety; while sewing machines are inherently less dangerous than unmechanized farm labor, working in a sweatshop can be much more dangerous than working one's own farm. Also, if the rich never meet their poor help, there's less likelihood that the poor will be paid enough for the infrastructure that prevents
So wrote clew in