From Artificial Intelligence to Artificial Biology?
By Claire Tristram

The point of autonomic computing—and by extension, of self-healing software—is to give networks and computer systems the ability to regulate and repair things that now require human thought and intervention.

For example, servers need to be rebooted now and then to keep them working. That need can arise because of "memory leaks" created by software bugs, explains Steve White, who heads up IBM's autonomic-computing research from the T. J. Watson Research Center in Hawthorne, NY. "A program will take up more and more memory to run," he says, "so eventually it breaks. Start over, and it will work." At the moment, users need to recognize problems themselves and physically reboot their systems. But with autonomic computing, "You can make it possible to reboot a system easily and automatically," says White.

In the future, the biological metaphor may even affect the way we program to begin with. Software could eventually "heal" some of its own bugs, supplementing catch-all fixes—like automatic rebooting—that don't get at the core problem. But that will require an entirely new approach to programming. "We need to move towards a programming philosophy where we look at the global system and understand what properties it needs to have, rather than thinking about programming as a sequence of instructions," says David Evans, who is pursuing biologically inspired programming methods as a computer science professor at the University of Virginia. "It's really a different way of approaching problems."

Evans notes that software today is written linearly, with each step depending on the previous one, more or less guaranteeing that bugs will wreak havoc: in biological terms, organisms with no redundancy don't survive long if one means of accomplishing a task fails. More robust software would include many independent components, so that it would continue to work even if several of those components failed.

Even today, programs such as Microsoft's Windows XP operating system are beginning to exhibit the biologically inspired ability to detect problems and to fix them, albeit in a simple way, by storing models of their original configurations. The programs can then be restored to their original states if bugs corrupt them later. And good compilers—the programs that translate human-readable languages into machine-readable code—will identify potential errors and return error messages along with suggested fixes. But these methods still require programmers to predict problems and write code that guards against them to begin with—and we predict flaws in our software about as well as Dr. Frankenstein predicted the flaws of his artificial man.

How close have we come to writing software that, like the human body, can identify and correct problems we haven't thought of? "We haven't developed anything that is very persuasive yet for healing unanticipated conditions," says Tom DeMarco, longtime software pundit and principal with the Atlantic Systems Guild, an international software training and consulting group. "You have to remember that software doesn't break. It is flawed to begin with. So for software to self-heal, you have to find a way to have the program create things that were not there when the program was written.

"We'll get there someday."
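White's memory-leak example can be made concrete with a small sketch. The Python below is a hypothetical illustration, not IBM's autonomic-computing design: the worker, the leak rate, and the memory threshold are invented. A supervisor watches a leaking worker process and restarts it automatically, turning the "reboot it and it works" fix into something no human has to notice.

```python
# A minimal sketch of an automatic "reboot": a supervisor restarts a leaking
# worker before it exhausts memory. Threshold and leak are illustrative only.
import multiprocessing as mp
import resource  # Unix-only; on Linux ru_maxrss is in KB, on macOS in bytes
import time

MEMORY_LIMIT_KB = 200_000  # illustrative limit (~200 MB on Linux)

def leaky_worker(report_queue: mp.Queue) -> None:
    """Does its job but never frees what it allocates (a memory leak)."""
    hoard = []
    while True:
        hoard.append(bytearray(1_000_000))  # leak roughly 1 MB per iteration
        rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        report_queue.put(rss)  # report current memory use to the supervisor
        time.sleep(0.1)

def supervise() -> None:
    """Restart the worker whenever its reported memory crosses the limit."""
    while True:
        queue: mp.Queue = mp.Queue()
        worker = mp.Process(target=leaky_worker, args=(queue,), daemon=True)
        worker.start()
        while True:
            if queue.get() > MEMORY_LIMIT_KB:
                worker.terminate()  # the automated equivalent of a reboot
                worker.join()
                print("worker exceeded memory limit; restarting")
                break

if __name__ == "__main__":
    supervise()
```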
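Evans's point about redundancy can be sketched the same way. The three "parser" components below are made up for illustration; the pattern is simply to keep several independent ways of doing the same task, so the program keeps working even when some of them fail.

```python
# A minimal sketch of redundant, independent components: try each one in
# turn and fail only if every one of them fails. The components are toys.
from typing import Callable, List

def parse_with_library(text: str) -> int:
    raise RuntimeError("simulated failure: buggy component")

def parse_by_splitting(text: str) -> int:
    return int(text.strip().split()[0])

def parse_character_by_character(text: str) -> int:
    return int("".join(ch for ch in text if ch.isdigit()))

def robust_parse(text: str, components: List[Callable[[str], int]]) -> int:
    """Try each independent component; the task fails only if all of them do."""
    errors = []
    for component in components:
        try:
            return component(text)
        except Exception as exc:  # a failed component is recorded, not fatal
            errors.append(f"{component.__name__}: {exc}")
    raise RuntimeError("all components failed: " + "; ".join(errors))

if __name__ == "__main__":
    result = robust_parse("  42 widgets", [parse_with_library,
                                           parse_by_splitting,
                                           parse_character_by_character])
    print(result)  # prints 42 even though the first component failed
```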
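And the simple self-repair the article attributes to Windows XP, storing a model of a known-good configuration and restoring it when the live copy is corrupted, might look roughly like this; the configuration fields and the validity rule are invented for the sketch.

```python
# A minimal sketch of restore-from-known-good-state: keep a stored model of
# the original configuration and fall back to it when the live copy is bad.
import copy

KNOWN_GOOD_CONFIG = {"max_connections": 100, "timeout_seconds": 30}

def is_valid(config: dict) -> bool:
    """A corrupted configuration has missing or nonsensical values."""
    return (isinstance(config.get("max_connections"), int)
            and config["max_connections"] > 0
            and isinstance(config.get("timeout_seconds"), int)
            and config["timeout_seconds"] > 0)

def heal(config: dict) -> dict:
    """Return the configuration, restoring the stored model if it is corrupt."""
    if is_valid(config):
        return config
    print("configuration corrupted; restoring known-good model")
    return copy.deepcopy(KNOWN_GOOD_CONFIG)

if __name__ == "__main__":
    live_config = {"max_connections": -5, "timeout_seconds": None}  # corrupted
    live_config = heal(live_config)
    print(live_config)  # back to the stored original state
```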