Navigate

Robust robot navigation is a cornerstone of application domains such as Social Care, Logistics, and Manufacturing, where productivity often requires mobile manipulation. In Service & Inspection robotics in particular, understanding the surroundings – in terms of both location and circumstance – is a prerequisite for autonomy. This theme examines how unusual sensors, novel processing, and new perspectives on cross-modal mapping and localisation can improve the performance and endurance of machines with embodied intelligence. We focus on developing the algorithmic techniques required for minimally supervised, long-term autonomy, providing robust and efficient navigation across the whole gamut of deployment scenarios – from factory floors and care homes to vast, unstructured, challenging environments.
 

Depth-SIMS: Semi-Parametric Image and Depth Synthesis

Musat V, De Martini D, Gadd M, Newman P

2022 International Conference on Robotics and Automation (ICRA)

In this paper we present a compositing image-synthesis method that generates RGB canvases with well-aligned segmentation maps and sparse depth maps, coupled with an in-painting network that transforms the RGB canvases into high-quality RGB images and the sparse depth maps into pixel-wise dense depth maps.

We benchmark our method in terms of structural alignment and image quality, showing an increase in mIoU over the state of the art by 3.7 percentage points and a highly competitive FID. Furthermore, we analyse the quality of the generated data as training data for semantic segmentation and depth completion, and show that our approach is better suited to this purpose than other methods.
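The two stages described above can be illustrated with a toy sketch (not the authors' code): pasting an object crop onto an RGB canvas while stamping the aligned segmentation map, then densifying a sparse depth map. The paper uses a learned in-painting network for the second stage; nearest-neighbour interpolation here is only a placeholder showing the input/output contract, and all shapes, labels, and values are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def composite_canvas(canvas, seg, crop, crop_mask, crop_label, top_left):
    """Paste an object crop onto the RGB canvas and stamp its class label
    into the aligned segmentation map (toy compositing stage)."""
    y, x = top_left
    h, w = crop_mask.shape
    canvas[y:y + h, x:x + w][crop_mask] = crop[crop_mask]
    seg[y:y + h, x:x + w][crop_mask] = crop_label
    return canvas, seg

def densify_depth(sparse_depth):
    """Fill a sparse depth map densely. Nearest-neighbour interpolation
    stands in for the paper's learned in-painting network."""
    ys, xs = np.nonzero(sparse_depth)
    grid_y, grid_x = np.mgrid[0:sparse_depth.shape[0], 0:sparse_depth.shape[1]]
    return griddata((ys, xs), sparse_depth[ys, xs], (grid_y, grid_x),
                    method="nearest")

# Toy canvas: paste one 16x16 object crop of class 5 at (10, 20).
canvas = np.zeros((64, 64, 3), dtype=np.uint8)
seg = np.zeros((64, 64), dtype=np.int32)
crop = np.full((16, 16, 3), 200, dtype=np.uint8)
mask = np.ones((16, 16), dtype=bool)
canvas, seg = composite_canvas(canvas, seg, crop, mask, crop_label=5,
                               top_left=(10, 20))

# Toy sparse depth: valid measurements on an 8-pixel grid, then densified.
sparse = np.zeros((64, 64))
sparse[::8, ::8] = np.random.default_rng(0).uniform(1.0, 50.0, size=(8, 8))
dense = densify_depth(sparse)
```

The key property the sketch preserves is that the segmentation map stays pixel-aligned with the composited RGB canvas by construction, which is what makes the generated pairs usable as training data.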

 

 

 

 

BoxGraph: Semantic Place Recognition and Pose Estimation from 3D LiDAR

Pramatarov G, De Martini D, Gadd M, Newman P

2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

This paper is about extremely robust and lightweight localisation from LiDAR point clouds, based on instance segmentation and graph matching. We model 3D point clouds as fully-connected graphs of semantically identified components, where each vertex corresponds to an object instance and encodes its shape.

Optimal vertex association across graphs allows for full 6-Degree-of-Freedom (DoF) pose estimation and place recognition by measuring similarity. This representation is very concise, condensing the size of maps by a factor of 25 against the state of the art and requiring only 3 kB to represent a 1.4 MB laser scan. We verify the efficacy of our system on the SemanticKITTI dataset, where we achieve a new state of the art in place recognition, with an average recall of 88.4% at 100% precision; the next closest competitor follows with 64.9%. We also show accurate metric pose estimation, recovering 6-DoF pose with median errors of 10 cm and 0.33 deg.
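The association-then-pose idea above can be sketched in a few lines (a minimal stand-in, not the BoxGraph implementation): optimal vertex association via the Hungarian algorithm on per-instance shape descriptors, followed by the Kabsch algorithm on matched instance centroids for rigid 6-DoF pose. The box-extent descriptors, scene sizes, and transform are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_vertices(desc_a, desc_b):
    """Optimal vertex association: minimise total descriptor distance
    between two instance graphs (Hungarian algorithm)."""
    cost = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    return linear_sum_assignment(cost)

def kabsch_pose(pts_a, pts_b):
    """Rigid 6-DoF pose (R, t) with pts_b ~ R @ pts_a + t, via Kabsch."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Toy scene: 6 instances with centroids and box-extent shape descriptors.
rng = np.random.default_rng(0)
centroids_a = rng.uniform(-10.0, 10.0, size=(6, 3))
extents_a = rng.uniform(0.5, 3.0, size=(6, 3))

# Second scan: same scene under a known rigid transform, vertices shuffled.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
perm = rng.permutation(6)
centroids_b = (centroids_a @ R_true.T + t_true)[perm]
extents_b = extents_a[perm]

rows, cols = associate_vertices(extents_a, extents_b)
R_est, t_est = kabsch_pose(centroids_a[rows], centroids_b[cols])
```

The summed assignment cost also serves as a natural graph-similarity score for place recognition: two scans of the same place yield many near-zero descriptor distances, whereas unrelated scans do not.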