I've been spending a lot of time looking at industrial automation the past few days, and had an idle thought:
I've touched on this before, if only obliquely, when writing about MFG.com's role in manufacturing logistics. Much attention is being paid to companies that want to simplify (or circumvent) some part of the product development value chain. Many of these are companies I admire, and I think they're doing really valuable things. Take Within, whose 3D design software generates structures driven directly by functional constraints (but can't, as far as I can tell, deal well with thin-walled structures). Or Willow Garage's PR2, the really slick research robot (that takes charmingly long - 20 minutes per bath towel - to fold laundry).
Each of these is an incredibly impressive feat, and one that follows an ambitious (and I would argue honorable) line of thinking:
If we can encode all of the information needed to complete a routine yet complex task, then we can use machines to automate the process, freeing up our minds to do other (presumably more important) things.
But consider an alternate proposal:
If we can get machines to mimic a series of behaviors that humans can plan and execute with relative ease, then we can decrease the amount of rote mechanical work that humans need to do.
This is the tack taken by Baxter, the admittedly not-too-serious (but cool nonetheless) humanoid task robot built by Rethink Robotics. Baxter learns by having his movements physically trained, presumably by the technician he's "collaborating" with:
Even the traditional robotics companies, like Kuka, are moving in the direction of using robots simply to execute the complex tasks that humans calculate and perform with ease. Here, a Kuka robot is trained by a BMW employee to clean a permanent mold:
Both of these robots' use cases share a key feature: There's still a human doing the "hard" planning and calculation about how the task will be completed. In each case the robot doesn't understand the physical constraints or goals per se. Baxter has some awareness of his surroundings for sure, but all he knows is that his arms hit something; he doesn't have the vision or awareness of why that happened or how to correct for it.
Similarly, the Kuka bot doesn't understand that he's cleaning a mold, or have the faculties to learn how to do better work. He's just repeating a toolpath that a human walked him through. Which, in this case, is good enough - and a hell of a lot faster than waiting for a computer vision expert to give him the intelligence required to do better.
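To make the contrast concrete, here's a minimal sketch of that "teach and repeat" model: record the arm's joint positions while a human guides it, then replay them verbatim. The function names (`read_joint_angles`, `move_to`) are hypothetical stand-ins for whatever a real controller's API provides - nothing here is specific to Baxter or Kuka.

```python
import time

def record_toolpath(read_joint_angles, duration_s=30.0, sample_hz=20.0):
    """Sample the arm's joint angles while a technician guides it by hand."""
    waypoints = []
    interval = 1.0 / sample_hz
    end = time.time() + duration_s
    while time.time() < end:
        waypoints.append(read_joint_angles())  # e.g. a tuple of joint angles
        time.sleep(interval)
    return waypoints

def replay_toolpath(move_to, waypoints, sample_hz=20.0):
    """Replay the recorded waypoints verbatim. The robot has no model of the
    task - no notion of "mold" or "towel" - it just retraces the path."""
    interval = 1.0 / sample_hz
    for joints in waypoints:
        move_to(joints)
        time.sleep(interval)
```

All of the "hard" work - deciding where the arm should go and why - happens in the human's head before the recording starts; the robot only stores and retraces the result.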
I'm not sure what the implications of this are for the companies working to automate the design and supply chain. But the philosophical difference is striking, and I must say that the more hands-on model is very compelling - and I expect it to remain so for the foreseeable future.
Parenthetically: the entire industrial 3D printing market currently runs on this same model: an intelligent, experienced technician makes manual edits to 3D CAD data in order to get a part to print within its design constraints. Anyone who suggests that build optimization is "right around the corner" is, in my opinion, *not* to be trusted. We're still in a world of basic research, and an automated design-print-postprocess chain is many years away.
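For a sense of how crude the automated side of that process still is, here's an illustrative sketch of the kind of rule-of-thumb checks a build tool can run today - the thresholds and feature fields are made-up placeholders, not any real build-preparation tool's API. Everything these rules miss is exactly what the technician's judgment covers.

```python
MIN_WALL_MM = 0.8        # assumed minimum printable wall thickness
MAX_OVERHANG_DEG = 45.0  # assumed maximum unsupported overhang angle

def printability_issues(features):
    """Flag features that violate simple printability rules.
    `features` is a list of dicts like
    {"name": "rib_3", "wall_mm": 0.5, "overhang_deg": 60.0}."""
    issues = []
    for f in features:
        if f["wall_mm"] < MIN_WALL_MM:
            issues.append(f"{f['name']}: wall {f['wall_mm']} mm is too thin")
        if f["overhang_deg"] > MAX_OVERHANG_DEG:
            issues.append(f"{f['name']}: overhang {f['overhang_deg']} deg needs support")
    return issues
```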