Integrating Multi-Source Visual Synthetic Data for Multi Road Defect Detection
Accepted version
Peer-reviewed
Abstract
While modern survey technology captures raw visual data of on- and off-road assets, prepared training data remains insufficient for effective defect detection across multiple road assets. Despite extensive research on detecting pavement defects and on applying synthetic data in other industries, the creation of synthetic defects for multiple road assets remains an open question. This research aims to create synthetic defects for pavement and road signs from multiple visual data sources, including local road capture and generative art solutions. Our solution first transforms source and destination images into orthonormal or elevation views to eliminate perspective differences. The transformed source images are annotated with the assistance of auto-annotation tools adapted from Segment Anything and pixel-intensity thresholds. The annotated source instances are cropped and aligned with the style of the destination images using FastPhotoStyle, before being inserted onto the destination image with an object mask. This introduces defects unseen in the pavement and panoramic images of the CAM Dataset and lays the groundwork for a comprehensive road defect detector spanning major road assets. The work on synthetic data can be extended to capture textural changes on road assets and environmental changes in the road scene.
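The sketch below illustrates, in outline only, how the compositing steps described in the abstract could be wired together: a perspective warp to an approximate top-down view, an intensity-threshold object mask standing in for the SAM-assisted annotation, and a masked paste of the cropped defect instance onto the destination image. It assumes Python with OpenCV and NumPy, which are not named in the abstract; the file names, corner coordinates, threshold value, and crop/paste positions are hypothetical, and the Segment Anything and FastPhotoStyle stages are indicated only as comments rather than implemented.

```python
# Minimal sketch of the described compositing pipeline (assumed OpenCV/NumPy stack).
import cv2
import numpy as np

def to_top_down(image, road_corners, size=(1024, 1024)):
    """Warp a perspective road image to an approximate top-down (orthonormal) view."""
    dst_corners = np.float32([[0, 0], [size[0], 0], [size[0], size[1]], [0, size[1]]])
    H = cv2.getPerspectiveTransform(np.float32(road_corners), dst_corners)
    return cv2.warpPerspective(image, H, size)

def defect_mask_by_intensity(gray_patch, thresh=60):
    """Approximate an object mask by thresholding dark pixels (e.g. cracks, potholes)."""
    _, mask = cv2.threshold(gray_patch, thresh, 255, cv2.THRESH_BINARY_INV)
    return mask

def insert_defect(destination, defect_patch, mask, top_left):
    """Paste the masked defect instance onto the destination image."""
    out = destination.copy()
    y, x = top_left
    h, w = mask.shape
    roi = out[y:y + h, x:x + w]
    m = mask.astype(bool)
    roi[m] = defect_patch[m]        # copy only the masked defect pixels
    return out

# Example usage with hypothetical inputs and coordinates:
src = cv2.imread("source_pavement.jpg")        # image containing a real defect
dst = cv2.imread("destination_pavement.jpg")   # defect-free destination image
corners = [(200, 600), (1100, 600), (1280, 960), (0, 960)]  # assumed road-plane corners
src_td = to_top_down(src, corners)
dst_td = to_top_down(dst, corners)
patch = src_td[300:420, 400:560]               # crop around an annotated defect instance
mask = defect_mask_by_intensity(cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY))
# (SAM-assisted annotation would refine the mask; FastPhotoStyle would align the
#  style of `patch` with the destination before insertion.)
synthetic = insert_defect(dst_td, patch, mask, top_left=(500, 450))
cv2.imwrite("synthetic_defect.jpg", synthetic)
```

This is a sketch under the stated assumptions, not the authors' implementation; the paper's own pipeline relies on Segment Anything for annotation and FastPhotoStyle for style alignment, neither of which is reproduced here.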
Sponsorship
EPSRC (EP/V056441/1)
