Researchers in Maynooth are developing algorithms that could allow robots to navigate disaster zones, mapping damage and searching for survivors

Self-driving cars, flying drones and mobile robots all need to be able to perceive and understand the world around them. Dr John McDonald’s Computer Vision Group, part of Maynooth University’s Department of Computer Science, has created software that will help them do just that. This could allow future robots to navigate through buildings in disaster zones, mapping the interiors and searching for survivors. As robots move from university research labs into the complex and unstructured real world, the group’s algorithms allow them not just to see their surroundings, but to map them in real time.

SLAM

To make this happen, the Maynooth scientists have been grappling with a technical problem known as SLAM - Simultaneous Localisation and Mapping. In a nutshell, SLAM refers to the problem of building a map of an environment from a moving sensor whilst simultaneously estimating the motion of the sensor relative to the map. “It turns out this is a difficult chicken-and-egg problem: to build the map you need to know where you are, but to know where you are you need a map,” says Dr McDonald, who will be lecturing on Maynooth’s new BSc in Robotics and Intelligent Devices, commencing in September 2016.
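
The coupling is easiest to see in a toy example. The sketch below is a hypothetical illustration, not the group’s algorithm: a robot moves down a one-dimensional corridor, new landmarks can only be placed using the current (imperfect) pose estimate, and re-observing a known landmark corrects both the landmark and the pose.

```python
import random

random.seed(1)

true_landmarks = {"door": 4.0, "pillar": 9.0}   # hypothetical corridor features
true_pose = 0.0
pose_est, map_est = 0.0, {}                     # the joint state: pose + map

for step in range(8):
    # Move forward 1 m; odometry is noisy, so the pose estimate drifts.
    true_pose += 1.0
    pose_est += 1.0 + random.gauss(0, 0.05)

    for name, lm in true_landmarks.items():
        if 0.0 < lm - true_pose < 5.0:                  # landmark in sensor range
            z = lm - true_pose + random.gauss(0, 0.02)  # noisy range reading
            if name not in map_est:
                # Mapping needs localisation: the landmark is placed
                # relative to the current pose estimate.
                map_est[name] = pose_est + z
            else:
                # Localisation needs the map: the innovation is split
                # between correcting the landmark and correcting the pose.
                r = z - (map_est[name] - pose_est)
                map_est[name] += 0.5 * r
                pose_est -= 0.5 * r

print(f"pose: true {true_pose:.2f}, estimated {pose_est:.2f}")
print("map:", {k: round(v, 2) for k, v in map_est.items()})
```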

Dr McDonald first started work on the problem in 2010, when he spent a year as a visiting scientist at MIT with Prof John Leonard’s group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “During this time we were focused on the problem of long-term autonomy, where a robot would be required to operate autonomously for months or even years.” Their contribution, referred to as multi-session SLAM, allowed multiple independent SLAM sessions to be combined into a single global map. As McDonald explains, “How we understand the layout of a city’s streets is the accumulation of knowledge from lots of visits to the city. Multi-session SLAM provides robots with similar capabilities of combining knowledge from multiple journeys into a single map.”
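
In spirit, and much simplified, merging sessions looks like the sketch below: each session maps landmarks in its own coordinate frame, and recognising a place shared between two sessions anchors one frame in the other. The landmark names and the translation-only alignment are illustrative assumptions; the real system aligns full pose graphs.

```python
import numpy as np

# Two sessions, each mapped in its own local coordinate frame.
session_a = {"door": np.array([2.0, 1.0]), "desk": np.array([5.0, 3.0])}
session_b = {"desk": np.array([1.0, 0.0]), "window": np.array([4.0, 2.0])}

# Place recognition matches "desk" across sessions, anchoring B's frame in A's.
offset = session_a["desk"] - session_b["desk"]   # translation-only alignment

# Express every landmark in the common (session A) frame.
global_map = dict(session_a)
for name, p in session_b.items():
    global_map.setdefault(name, p + offset)

print({k: v.tolist() for k, v in global_map.items()})
```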

Kintinuous

More recently, Dr McDonald and his former student Tom Whelan developed an algorithm that creates 3-D maps at new levels of detail, speed and accuracy. Known as Kintinuous, due to its ability to continuously map a space using a Microsoft Kinect-style depth camera, the algorithm was developed in collaboration with Prof Leonard’s team at MIT. The technique built on a 3-D mapping system called KinectFusion, devised by researchers at Imperial College London and Microsoft Research Cambridge.

KinectFusion brought visual mapping systems to a new level. Previous SLAM systems created 3-D reconstructions of environments that provided the location of individual points in the scene, but didn’t capture how those points connected into surfaces. KinectFusion overcame this issue by creating dense maps at an incredible level of detail in real time. However, the technique only worked over limited scales, such as a desk or perhaps a room. Kintinuous solved this limitation, allowing mapping of spaces over hundreds of metres.
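
A rough schematic of how that scale limit is lifted, hedged and heavily simplified to one dimension: KinectFusion fuses depth measurements into a fixed voxel volume, and Kintinuous slides that volume along with the camera, streaming voxels that fall off the trailing edge into a growing global map.

```python
VOLUME = 4            # width of the active volume in voxels (tiny for clarity)
origin = 0            # world index of the volume's trailing edge
active = {}           # voxel index -> fused value inside the active volume
global_map = {}       # voxels already streamed out of the active volume

def integrate(world_ix, value):
    """Fuse a measurement into the active volume (stand-in for TSDF fusion)."""
    if origin <= world_ix < origin + VOLUME:
        active[world_ix] = value

def shift_volume(new_origin):
    """Slide the volume forward; voxels leaving it join the global map."""
    global origin
    for ix in [i for i in active if i < new_origin]:
        global_map[ix] = active.pop(ix)
    origin = new_origin

for cam in range(8):                  # the camera walks forward, one voxel per step
    integrate(cam, f"surface@{cam}")
    if cam - origin >= VOLUME // 2:   # camera nearing the leading edge?
        shift_volume(origin + 1)

print("global map voxels:", sorted(global_map))
print("active volume voxels:", sorted(active))
```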

The technique was initially presented at the RGB-D: Advanced Reasoning with Depth Cameras Workshop at the Robotics: Science and Systems conference in Sydney, Australia, in 2012.

Since then, Kintinuous has resulted in a number of follow-up publications in leading robotics conferences and journals, fostering further collaboration with the MIT group and with other leading universities such as the Eindhoven University of Technology (TU/e) and Imperial College London.

Closing the Loop

The Maynooth technique gets over a big hurdle in robot mapping known as “closing the loop.” As the camera moves through the world, small errors are introduced which, although typically imperceptible at any single point in the map, accumulate over longer trajectories into “drift.” Dr McDonald describes drift through the analogy of “walking around a city block. At each point in the map the geometry is quite precise, but when we get around the block and back to where we started, the two points in the map don’t line up. This is drift, and solving it is called the loop closure problem.”
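
The city-block analogy can be made concrete with a toy simulation (simulated numbers, not real sensor data): integrating slightly noisy odometry around a square route leaves a visible gap between where the trajectory ends and where it started.

```python
import math, random

random.seed(0)
x = y = heading = 0.0

for side in range(4):                      # four sides of the block
    for _ in range(100):                   # 100 steps of 1 m per side
        heading += random.gauss(0, 0.002)  # tiny per-step heading error
        x += math.cos(heading)
        y += math.sin(heading)
    heading += math.pi / 2                 # turn the corner

# With perfect odometry this distance would be exactly zero.
print(f"gap between end and start: {math.hypot(x, y):.2f} m")
```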

To solve this problem the Maynooth team drew on the experience of the computer graphics community in generating natural-looking movement in mesh-based models. The innovation in the team’s approach was to combine an existing SLAM algorithm developed by Michael Kaess, a research scientist in Leonard’s lab (now a professor at Carnegie Mellon University), with computer-graphics-style deformation techniques. The resulting system automatically figures out when it revisits a previously mapped area and then applies those same principles, in a SLAM setting, to smoothly deform the map and close the loop.
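
The essence of the correction, stripped of the mesh machinery, is sketched below. This is an illustrative stand-in, not the published optimisation-plus-deformation pipeline: the end-to-start error is spread smoothly along the trajectory, so the map bends rather than snaps, and any dense map points attached to each pose would move with it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
angles = np.linspace(0.0, 2.0 * np.pi, n)
steps = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # an ideal loop
steps += rng.normal(0.0, 0.02, steps.shape)                 # odometry noise
poses = np.cumsum(steps, axis=0)       # drifted trajectory: end != start

# Place recognition detects the revisit; distribute the end-to-start error
# along the trajectory instead of snapping only the final pose into place.
error = poses[-1] - poses[0]
weights = np.linspace(0.0, 1.0, n)[:, None]   # early poses move little...
poses_closed = poses - weights * error        # ...later poses absorb the drift

print("gap before:", round(float(np.linalg.norm(error)), 3))
print("gap after: ",
      round(float(np.linalg.norm(poses_closed[-1] - poses_closed[0])), 3))
```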

The solution was a significant step forward in dense mapping research and a major element of Whelan’s PhD work, published in the International Journal of Robotics Research in 2014. The work attracted considerable media interest, appearing on BBC’s Click, Discovery Channel Canada, Bloomberg and wired.co.uk, and it was the subject of an MIT News spotlight in August 2013.

The researchers have also received commercial interest in the technology, which is currently working its way through the patenting process. “Although the technology has its roots in robotics, it has the potential to impact on a much wider set of areas, such as augmented reality, computer gaming, architecture, archaeology and building information systems,” says McDonald.

Rise of the Robots

In 2014, the team returned to the RGB-D workshop at the Robotics: Science and Systems conference, where they presented their latest work, which showed Kintinuous enabling a low-cost mobile robot to explore and map a space, automatically recognising, locating and retrieving objects using the map. The work combined Kintinuous with that of other researchers in Leonard’s group at MIT, including Ross Finman, Dr Maurice Fallon and Dr Hordur Johannson.

More recently, the Maynooth team have worked with Dr Fallon on a more ambitious project that integrated Kintinuous into the Boston Dynamics ‘Atlas’ robot, one of the world’s most advanced humanoid robots. Fallon, who recently took up a post at Edinburgh University, did the work as part of the MIT DARPA Robotics Challenge team, whose work used Kintinuous to allow ‘Atlas’ to safely walk over complex terrain.

The aim of the DARPA Robotics Challenge (DRC) is to push the state of the art in robotics for disaster response scenarios. “For example, situations such as the Fukushima nuclear disaster in 2011 are a key use case for future robots, where you need some way of assessing the situation without sending in humans,” says Dr McDonald. “The DRC is a worldwide competition that is demonstrating technology at the pinnacle of robotics, so to have worked with MIT’s team on this project was very exciting for us.”

Of course, Dr McDonald’s group is also interested in robots with a more down-to-earth price tag. In his office, a two-foot-high Nao robot named Marvin stands on a desk behind him. The robot is a new signing for RoboEireann, Ireland’s robotic soccer team, which Dr McDonald co-leads with Dr Rudi Villing in Maynooth’s Department of Electronic Engineering. Marvin has his work cut out for him over the next few weeks, with training sessions intensifying for the Robot World Cup this summer in China.

“We have been working with the Nao robots for a number of years now, in particular as part of RoboCup, but also in connecting research with the undergraduate students through project-based work. In 2016 Maynooth will welcome the first intake of students on the new BSc in Robotics and Intelligent Devices, a collaborative programme between the two departments, which will integrate our work in robotics into the undergraduate curriculum even further.”

Looking to the future, McDonald highlights the significant convergence of robotics, computer vision, mobile devices and cloud computing, and their potential to impact on areas such as wearables and the Internet of Things (IoT). The expectation is that these areas will create completely new types of computer systems and applications and, more importantly, raise compelling research challenges for the Computer Vision Group for the next decade.