Journal article

Self-adaptive 2D-3D image fusion for automated pixel-level pavement crack detection

Jiayv Jing, Xu Yang, Ling Ding, Hainian Wang, Jinchao Guan, Yue Hou, Sherif M. El-Badawy

Automation in Construction, Volume: 168, Start page: 105756

Swansea University Author: Yue Hou (ORCID: 0000-0002-4334-2620)

Full text not available from this repository: check for access using links below.

Abstract

Current 2D and 3D image-based crack detection methods in transportation infrastructure often struggle with noise robustness and feature diversity. To overcome these challenges, this paper presents CSF-CrackNet, a self-adaptive 2D-3D image fusion model that uses channel and spatial modules for automated pavement crack segmentation. CSF-CrackNet consists of four parts: a feature enhancement and field sensing (FEFS) module, a channel module, a spatial module, and a semantic segmentation module. A multi-feature image dataset was established using a vehicle-mounted 3D imaging system, including color images, depth images, and color-depth overlapped images. Results show that the mean intersection over union (mIOU) of most models under the CSF-CrackNet framework can be increased to above 80%. Compared with the original RGB and depth images, image fusion increases the average mIOU by 10% and 5%, respectively. The ablation experiment and weight significance analysis further demonstrate that CSF-CrackNet can significantly improve semantic segmentation performance by balancing information between 2D and 3D images.
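
The full text is not available from this record, but the abstract's idea of balancing RGB (2D) and depth (3D) information through channel and spatial modules before segmentation can be illustrated with a minimal sketch. The module names, layer sizes, and weighting scheme below are assumptions for illustration only, not the authors' CSF-CrackNet implementation.

```python
# Illustrative sketch of 2D-3D (RGB + depth) fusion with channel and spatial
# attention, loosely following the abstract's description. Not the paper's code.
import torch
import torch.nn as nn


class ChannelSpatialFusion(nn.Module):
    """Fuse an RGB image and a depth image into one tensor for a segmentation backbone."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Project both modalities into a common feature space.
        self.rgb_proj = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.depth_proj = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Channel module: squeeze-and-excitation style per-channel weights.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 2, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 2 * channels, 1),
            nn.Sigmoid(),
        )
        # Spatial module: a per-pixel weight map balancing the two modalities.
        self.spatial_gate = nn.Sequential(nn.Conv2d(2 * channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_proj(rgb)        # (B, C, H, W)
        f_depth = self.depth_proj(depth)  # (B, C, H, W)
        stacked = torch.cat([f_rgb, f_depth], dim=1)
        stacked = stacked * self.channel_gate(stacked)  # re-weight channels
        alpha = self.spatial_gate(stacked)              # (B, 1, H, W), values in [0, 1]
        # Per-pixel balance between 2D (RGB) and 3D (depth) information.
        c = f_rgb.size(1)
        return alpha * stacked[:, :c] + (1 - alpha) * stacked[:, c:]


if __name__ == "__main__":
    fusion = ChannelSpatialFusion()
    rgb = torch.randn(2, 3, 128, 128)    # color images
    depth = torch.randn(2, 1, 128, 128)  # depth images from a 3D imaging system
    fused = fusion(rgb, depth)
    print(fused.shape)  # torch.Size([2, 32, 128, 128]) -> feed to any segmentation head
```

The output tensor can be passed to an ordinary semantic segmentation head, which matches the abstract's claim that the fusion framework can wrap around existing segmentation models.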


Published in: Automation in Construction
ISSN: 0926-5805
Published: Elsevier BV, 1 December 2024
DOI: 10.1016/j.autcon.2024.105756
Online Access: http://dx.doi.org/10.1016/j.autcon.2024.105756
Funding: National Natural Science Foundation of China (Grants 52078049, 52378431, and 52408454); Fundamental Research Funds for the Central Universities, CHD (Grants 300102210302 and 300102210118); 111 Project of Low Carbon Smart Road Infrastructure Construction and Maintenance Discipline Innovation and Talent Introduction Base of Shaanxi Province.

URI: https://cronfa.swan.ac.uk/Record/cronfa67682