Collaborating with colleagues in robotics and computer science, my team and I have been working on just such a set of robots. Produced with a 3D printer – and assembled autonomously – the robots we create evolve continuously, rapidly optimizing for the conditions in which they find themselves.
Our work represents the latest progress toward the kind of autonomous robotic ecosystems that could help build humanity’s future homes, far from Earth and far from human supervision.
Robots have made a lot of progress since our first awkward forays into artificial motion many decades ago. Today, companies like Boston Dynamics produce ultra-efficient robots that load trucks, build pallets, and move crates around factories, doing tasks you might think only humans could accomplish.
Despite these advances, designing robots to work in unfamiliar or unfavorable environments – such as exoplanets or deep ocean trenches – still presents a considerable challenge for scientists and engineers. In the cosmos, what shape and size should the ideal robot be? Should it crawl or walk? What tools will it need to manipulate its environment – and how will it survive extremes of pressure, temperature and chemical corrosion?
While such a brainstorm might stump human designers, nature has already solved this problem. Darwinian evolution has produced millions of species exquisitely adapted to their environments. Although biological evolution takes millions of years, artificial evolution – modeling evolutionary processes inside a computer – can take place in hours, or even minutes. Computer scientists have been harnessing its power for decades, producing designs – from gas nozzles to satellite antennas – that are superbly suited to their function.
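The evolutionary process mentioned above can be sketched in a few lines of Python. This is an illustrative toy, not the project's actual code: the fitness function, population size and mutation scale are all hypothetical stand-ins, but the loop – mutate, evaluate, select – is the essence of artificial evolution.

```python
import random

def evolve(fitness, genome_len=16, pop_size=20, generations=100):
    """Minimal evolutionary loop: mutate, evaluate, select the fittest."""
    # Start from a random population of real-valued genomes.
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Mutation: each parent produces one slightly perturbed child.
        children = [[g + random.gauss(0, 0.1) for g in parent] for parent in pop]
        # Selection: keep the fittest half of parents and children combined.
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

# Toy fitness: maximize the sum of the genes (a stand-in for, say, antenna gain).
best = evolve(fitness=sum)
```

A real design problem would replace `sum` with a simulation that scores each candidate – for example, how well an antenna shape transmits a signal.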
But the artificial evolution of moving, physical objects still demands a great deal of human supervision, with a tight feedback loop between robot and human. If artificial evolution is to design useful robots for exoplanetary exploration, we will need to remove the human from the loop entirely: evolved robot designs must be manufactured, assembled and tested autonomously, free of human supervision.
The evolved robots will need to be able to sense their surroundings and have a variety of ways of moving around – for example using wheels, jointed legs, or even hybrids of the two. And to address the inevitable "reality gap" that opens when a design is transferred from software to hardware, it is desirable that at least some of the evolution takes place in hardware – within an ecosystem of robots evolving in real time and real space.
The Autonomous Robot Evolution (ARE) project addresses exactly this challenge, bringing together scientists and engineers from four universities in an ambitious four-year effort to develop this radical new technology.
As described above, robots will be "born" through the use of 3D printing. We use a new kind of hybrid hardware-software evolution architecture for design, which means every physical robot has a digital clone. Physical robots are tested in real-world environments, while their digital clones undergo rapid simulated evolution. This hybrid system introduces a new kind of breeding: new generations can be produced by combining the most successful traits of a virtual "mother" and a physical "father".
In addition to featuring in our simulator, the "child" robots produced by this hybrid evolution are also 3D-printed and introduced into a real-world, crèche-like environment. The fittest individuals in this nursery contribute their "genetic code" to the breeding of future generations, while less "fit" robots can simply be hoisted away and recycled into new ones, as part of an ongoing evolutionary cycle.
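One way to picture combining the traits of a virtual "mother" and a physical "father" is uniform crossover over a genome of traits. The sketch below is illustrative only – the trait names (`limbs`, `wheel_radius`, `sensor_count`) are invented for the example and are not the project's real genome encoding.

```python
import random

def crossover(mother, father):
    """Uniform crossover: each trait of the child is inherited
    from one of the two parents, chosen at random."""
    return {trait: random.choice([mother[trait], father[trait]])
            for trait in mother}

# Hypothetical genomes for a virtual "mother" and a physical "father".
virtual_mother  = {"limbs": 4, "wheel_radius": 0.05, "sensor_count": 2}
physical_father = {"limbs": 6, "wheel_radius": 0.03, "sensor_count": 4}

child = crossover(virtual_mother, physical_father)
# Every trait of the child comes from one of its two parents.
assert all(child[t] in (virtual_mother[t], physical_father[t]) for t in child)
```

In the real system the "genome" that survives into the next generation belongs to whichever individuals – physical or simulated – perform best.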
Two years into the project, significant progress has been made. From a scientific perspective, we have designed new artificial-evolution algorithms that have produced a diverse set of robots that walk or crawl, and that can learn to navigate complex mazes. These algorithms evolve both a robot's body plan and its brain.
The brain contains a controller that determines how the robot moves, interpreting sensory information from the environment and translating it into motor commands. Once a robot is built, a learning algorithm rapidly refines the child's brain to compensate for any mismatch between its new body and its inherited brain.
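A hedged sketch of how such post-birth refinement might work: a simple (1+1) hill climber that perturbs the inherited controller's parameters and keeps any change that improves a measured reward. The reward function and weights below are toy stand-ins, not the ARE project's actual learning algorithm.

```python
import random

def refine(controller, reward, steps=200, sigma=0.05):
    """Simple (1+1) hill climber: perturb the inherited controller's
    parameters and keep changes only when the measured reward improves."""
    best, best_r = list(controller), reward(controller)
    for _ in range(steps):
        candidate = [w + random.gauss(0, sigma) for w in best]
        r = reward(candidate)
        if r > best_r:  # keep the change only if the robot does better
            best, best_r = candidate, r
    return best

# Toy reward standing in for "distance walked on the new body":
# here it simply measures closeness to some ideal parameter values.
target = [0.3, -0.2, 0.8]
inherited = [0.0, 0.0, 0.0]  # a brain evolved for a different body
reward = lambda w: -sum((a - b) ** 2 for a, b in zip(w, target))
tuned = refine(inherited, reward)
```

In practice the reward would come from watching the physical robot move, so each evaluation is expensive – which is why the refinement step needs to be sample-efficient.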
From an engineering perspective, we designed "RoboFab" to fully automate manufacturing. This robotic arm attaches the wires, sensors and other "organs" selected by evolution to a robot's 3D-printed chassis. We designed these components for quick assembly, giving RoboFab access to a large library of robotic limbs and organs.
The first major use case we plan to address is deploying this technology to design robots for the clean-up of legacy waste in nuclear reactors – like the one seen in the TV miniseries Chernobyl. Using humans for this task is both dangerous and expensive, and suitable robotic solutions have yet to be developed.
Looking ahead, the long-term vision is to develop the technology sufficiently to enable the evolution of entire autonomous robotic ecosystems that live and work for long periods in challenging and dynamic environments without the need for direct human supervision.
In this radical new paradigm, robots are conceived and born, rather than designed and manufactured. Such robots will fundamentally change our concept of machines, ushering in a new breed that can change its shape and behavior over time – just like us.
Emma Hart is Professor of Natural Computing at Edinburgh Napier University. She has been editor-in-chief of Evolutionary Computation (MIT Press) since January 2016 and is an elected member of the ACM SIGEVO Executive Board. She is also a member of the UK Operational Research Society. Find Emma on Twitter @HartEmma
A version of this article was originally published at The Conversation and has been republished here with permission. The Conversation can be found on Twitter @ConversationUS