Open ML Training Data For Visual Tagging Of Construction-specific Objects (ConTag)
Authors
Boehm, Jan
Publication Date
2019-07
Publisher
CDBB
Number
CDBB_REP_35
Type
Report
Metadata
Citation
Boehm, J. (2019). Open ML Training Data For Visual Tagging Of Construction-specific Objects (ConTag). (CDBB_REP_35) https://doi.org/10.17863/CAM.43316
Abstract
ConTag has generated open datasets for visual machine learning (ML) specific to the construction industry. ML technology has enabled a revolutionary leap in many digital economies, generating growth in activity and business mainly for the ICT sector. Part of this growth comes from the sharing of IP, knowledge, tools and datasets. We want to adopt this approach for the digital construction sector. ConTag provides visual and 3D training datasets for training deep neural networks (DNNs), together with weights for pre-trained networks. The research output supports visual tagging of assets from reality capture data. Such automatically generated semantic information can be used to generate or populate digital twins in the example scenarios. The first dataset is a collection of fire safety equipment typically found in indoor environments; it contains the classified images, per-pixel label images and bounding box data for object detection. The second dataset is a synthetic 3D point cloud of an outdoor urban street scenario; it contains the point cloud data and per-point label data. We expect these shared and open datasets to kick-start further ML development in both academia and industry, and intend them as a seed point for collaborative research.
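To illustrate how the per-pixel label images and bounding box annotations in such a dataset relate to each other, the sketch below derives an axis-aligned bounding box from a label mask. The class id and the synthetic mask are assumptions for illustration only and are not taken from the actual ConTag label map or files.

```python
import numpy as np

# Hypothetical class id for one fire-safety object class (an assumption,
# not the ConTag label map).
EXTINGUISHER = 1

def bbox_from_mask(mask: np.ndarray, class_id: int):
    """Derive an axis-aligned bounding box (x_min, y_min, x_max, y_max)
    from a per-pixel label image, as used for object detection."""
    ys, xs = np.nonzero(mask == class_id)
    if ys.size == 0:
        return None  # class not present in this image
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Tiny synthetic label image standing in for a real per-pixel label file.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:6] = EXTINGUISHER  # a 3x3 blob of class 1

print(bbox_from_mask(mask, EXTINGUISHER))  # (3, 2, 5, 4)
```

The same pattern extends to the point cloud dataset: per-point labels play the role of the per-pixel labels, and an axis-aligned 3D box can be derived per class by taking min/max over the labelled points' coordinates.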
Identifiers
This record's DOI: https://doi.org/10.17863/CAM.43316
This record's URL: https://www.repository.cam.ac.uk/handle/1810/296271
Rights
Licence:
http://www.rioxx.net/licenses/all-rights-reserved