Journal article
MLMT-CNN for object detection and segmentation in multi-layer and multi-spectral images
Machine Vision and Applications, Volume: 33, Issue: 1
Swansea University Authors: Majedaldein Almahasneh, Xianghua Xie
PDF | Version of Record
© The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License
Download (1.86MB)
DOI (Published version): 10.1007/s00138-021-01261-y
Abstract
Precisely localising solar Active Regions (AR) from multi-spectral images is a challenging but important task in understanding solar activity and its influence on space weather. A main challenge comes from each modality capturing a different location of the 3D objects, as opposed to typical multi-spectral imaging scenarios where all image bands observe the same scene. Thus, we refer to this special multi-spectral scenario as multi-layer. We present a multi-task deep learning framework that exploits the dependencies between image bands to produce 3D AR localisation (segmentation and detection) where different image bands (and physical locations) have their own set of results. Furthermore, to address the difficulty of producing dense AR annotations for training supervised machine learning (ML) algorithms, we adapt a training strategy based on weak labels (i.e. bounding boxes) in a recursive manner. We compare our detection and segmentation stages against baseline approaches for solar image analysis (multi-channel coronal hole detection, SPOCA for ARs) and state-of-the-art deep learning methods (Faster RCNN, U-Net). Additionally, both detection and segmentation stages are quantitatively validated on artificially created data of similar spatial configurations made from annotated multi-modal magnetic resonance images. Our framework achieves an average of 0.72 IoU (segmentation) and 0.90 F1 score (detection) across all modalities, compared to the best-performing baseline methods with scores of 0.53 and 0.58, respectively, on the artificial dataset, and 0.84 F1 score in the AR detection task compared to a baseline of 0.82 F1 score. Our segmentation results are qualitatively validated by an expert on real ARs.
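For reference, the two figures quoted above (0.72 IoU for segmentation, 0.90 F1 for detection) are standard localisation metrics. The sketch below is illustrative only and not the authors' evaluation code; it shows how segmentation IoU and detection F1 are conventionally computed from binary masks and detection counts.

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Detection F1 from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative numbers only: 9 correct detections, 1 false alarm, 1 miss
# give precision = recall = 0.9, hence F1 = 0.90, the level reported in the abstract.
print(f1_score(tp=9, fp=1, fn=1))  # 0.9
```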
| Published in: | Machine Vision and Applications |
|---|---|
| ISSN: | 0932-8092 (print); 1432-1769 (electronic) |
| Published: | Springer Science and Business Media LLC, 29 November 2021 |
| Authors: | Majedaldein Almahasneh, Adeline Paiement, Xianghua Xie, Jean Aboudarham |
| Keywords: | Image segmentation; object detection; deep learning; weakly supervised learning; multi-spectral images; solar image analysis; solar active regions |
| Online Access: | Check full text |
| URI: | https://cronfa.swan.ac.uk/Record/cronfa58518 |