Evolutionary computing lets anyone discover new laws of nature – but does this mean the end of science as we know it?
IT WAS long after midnight when Michael Schmidt noticed a strangely familiar equation pop up on his monitor. His computer was crunching the data from an experiment to measure the chaotic motion of a double pendulum, a type that has one swinging arm hanging from another.
Schmidt recorded this movement using a motion-tracking camera, which fed the position data into his computer. What he was looking for was an equation describing the pendulums' motion.
Initially, the task looked hopeless. When a double pendulum moves chaotically, its arms swing in a way that is almost impossible to predict, with seemingly no pattern whatsoever. A human would struggle to find an equation for this. And yet the computer found something. To Schmidt, a PhD student studying computer science at Cornell University in Ithaca, New York, it was a hugely significant moment. “It’s probably the most exciting thing that has happened to me in science,” he says.
That’s because Schmidt’s computer had found one of the immutable laws of nature: the law of conservation of energy, which says that energy can neither be created nor destroyed in an isolated system. What had taken scientists hundreds of years to discover took his computer just one day (see diagram).
Schmidt and his supervisor Hod Lipson had hit upon a new way of doing science, no less. Their method bodes well for areas of research thought to be too complicated to follow set rules. It also promises a future in which computers can find the laws of nature faster than we can, leaving humans forever playing catch-up.
The approach is a radical departure from the usual scientific method. Normally, scientists propose a hypothesis to explain an observation. They then devise an experiment to test their hypothesis, throwing it out if the experiment shows it to be wrong. The hypothesis is then revised and tested over and over again in experiments until it is generally accepted to be true.
Schmidt and Lipson’s approach is the very opposite: rather than coming up with a hypothesis to test, they carry out experiments first, feeding the data into their computer to discover the laws of nature (see diagram).
Their success is all down to something called evolutionary computing. This is where robots or computers are given a goal – learning to fly, say – and produce lots of programs that could potentially achieve it. These programs are tested against the goal, and the most promising ones are selected and merged. This process repeats until, after many generations of testing, a program is produced that can complete the set task perfectly. In fact, Lipson is best known for his work as a robotics engineer, and in particular for creating software that can evolve to control weird and wonderful machines, like robotic aircraft, walking robots and the parts for a device that prints food.
Evolutionary computing allows computers to do things that they haven’t been programmed to do, and is already being used to solve problems as diverse as creating train timetables and designing aircraft.
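The evolutionary loop described above can be sketched as a toy genetic algorithm. Here the "goal" is matching a target bitstring; the population size, the selection rule, and the small mutation step (which most implementations include, though the description above mentions only merging) are all illustrative assumptions, not details of Lipson's software:

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # the set task: match this pattern

def fitness(candidate):
    # Test a candidate against the goal: count positions that agree.
    return sum(c == t for c, t in zip(candidate, TARGET))

def crossover(a, b):
    # "Merge" two promising candidates at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(candidate, rate=0.01):
    # Occasionally flip a bit, so no building block is lost forever.
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=100, generations=200):
    # Start with lots of random candidate "programs".
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # a candidate completes the task perfectly
        survivors = population[: pop_size // 5]  # keep the most promising fifth
        population = [mutate(crossover(random.choice(survivors),
                                       random.choice(survivors)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
```

After many generations of testing, selecting and merging, `best` usually matches the target exactly; the same select-and-merge skeleton underlies far more ambitious goals like evolving flight controllers.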
The same process is behind Lipson and Schmidt’s law-finding computer. It begins by randomly stringing together simple mathematical expressions to create equations: 10,000 of them to be exact.
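That first step can be sketched as randomly assembled expression trees. The choice of operators, the variable names `x` and `v` (say, a pendulum angle and its velocity), and the tree representation are assumptions for illustration, not details of Schmidt and Lipson's actual software:

```python
import random
import operator

# Simple mathematical building blocks to string together (a hypothetical
# choice of primitives; the article does not list the ones actually used).
BINARY = [("+", operator.add), ("-", operator.sub), ("*", operator.mul)]
TERMINALS = ["x", "v"]  # e.g. pendulum angle and angular velocity

def random_expression(depth=3):
    # Recursively string operators and variables into a random equation tree.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS + [round(random.uniform(-2, 2), 2)])
    name, _ = random.choice(BINARY)
    return (name, random_expression(depth - 1), random_expression(depth - 1))

def evaluate(expr, env):
    # Work out an expression tree's value for given variable readings,
    # so each candidate equation can later be scored against the data.
    if isinstance(expr, tuple):
        name, left, right = expr
        return dict(BINARY)[name](evaluate(left, env), evaluate(right, env))
    if isinstance(expr, str):
        return env[expr]
    return expr  # a numeric constant

# 10,000 random candidate equations, as in the article.
population = [random_expression() for _ in range(10000)]
```

Each tree, such as `("*", ("-", "v", "x"), 2.0)` for 2(v − x), can then be evaluated against the recorded pendulum data and fed into the same select-and-merge loop as any other evolved program.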