Journal article
Reinforcement Learning-Based Intelligent Path Planning for Optimal Navigation in Dynamic Environments
Neural Processing Letters, Volume: 58, Issue: 1, Start page: 10
Swansea University Author:
Cheng Cheng
PDF | Version of Record (2.59 MB)
© The Author(s) 2026. This article is licensed under a Creative Commons Attribution 4.0 International License.
DOI (Published version): 10.1007/s11063-025-11821-2
Abstract
Path selection and planning are crucial for autonomous mobile robots (AMRs) to navigate efficiently and avoid obstacles. Traditional methods rely on analytical search to identify the shortest distance. Reinforcement learning (RL), by contrast, enhances performance by optimizing a sequence of actions efficiently. It is an iterative approach used for computational sequence modeling and dynamic programming. RL receives sensory input from the environment in the form of an observation or state, and the agent interprets every reward or penalty through trial-and-error interaction. The policy maximizes the rewards and selects the optimal action among all possible actions. A challenging problem in traditional reinforcement learning is environment generalization for dynamic systems. Q-learning faces challenges in dynamic environments because it relies on rewards or penalties based on the entire sequence of actions from the start to the end state. This approach often fails to produce optimal results when the environment changes unexpectedly due to state transitions, iterations, or blocked routes; such limitations make Q-learning less effective for dynamic path planning. To overcome these challenges, this study focuses on optimizing reward functions for RL-based path planning, aiming to enhance navigation efficiency and obstacle avoidance. The proposed method evaluates the shortest decision path by considering total steps, counted steps, and discount rates in dynamic environments. By implementing RL with an optimized reward mechanism, the study analyzes state reward values across different environments and evaluates the effect on state-action pair-based Q-learning and on neural networks using Deep Q-learning algorithms. Results demonstrate that the optimized reward function effectively decreases the number of iterations and episodes while achieving a 30% to 70% reduction in overall trajectory distance. These results highlight the effectiveness of reward-based reinforcement learning, demonstrating its potential to improve path optimization, learning rate, episode completion, and decision accuracy in intelligent navigation systems. Q-learning-based reinforcement learning becomes more effective by combining multiple agents and utilizing decision-making techniques such as federated and transfer learning on larger maps to ensure convergence.
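The record does not include the authors' code, but the reward idea described in the abstract (accounting for step counts and discount rates so that shorter trajectories score higher) can be illustrated with a minimal tabular Q-learning sketch. Everything below is an assumption for illustration only: the environment interface (`reset()`, `step()`, `n_states`, `n_actions`), the `step_penalty` term, and all hyperparameter values are hypothetical and are not taken from the paper's actual reward design.

```python
import numpy as np

# Minimal, hypothetical sketch of tabular Q-learning with a step-penalised
# reward, in the spirit of the abstract's "optimized reward function".
# The environment interface, step_penalty, and hyperparameters are
# illustrative assumptions, not the paper's method.

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9,
               epsilon=0.1, step_penalty=0.01):
    Q = np.zeros((env.n_states, env.n_actions))
    for _ in range(episodes):
        state = env.reset()
        done = False
        steps = 0
        while not done:
            # epsilon-greedy exploration over the current Q estimates
            if np.random.rand() < epsilon:
                action = np.random.randint(env.n_actions)
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = env.step(action)
            steps += 1
            # shaped reward: a small penalty that grows with the step count,
            # so shorter trajectories accumulate more total reward
            shaped = reward - step_penalty * steps
            # standard Q-learning temporal-difference update
            td_target = shaped + gamma * np.max(Q[next_state]) * (not done)
            Q[state, action] += alpha * (td_target - Q[state, action])
            state = next_state
    return Q
```

With a positive goal reward and this per-step penalty, greedy rollouts over the learned Q-table tend toward shorter paths, which is the qualitative effect the abstract attributes to its optimised reward function.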
| Published in: | Neural Processing Letters |
|---|---|
| ISSN: | 1370-4621, 1573-773X |
| Published: | Springer Nature, 2026 |
| Online Access: | Check full text |
| URI: | https://cronfa.swan.ac.uk/Record/cronfa71192 |
| Authors: | Anil Kumar Yadav, Purushottam Sharma, Cheng Cheng (ORCID 0000-0003-0371-9646), Shiv Shankar Prasad Shukla |
|---|---|
| Keywords: | Q-learning (QL); Reinforcement learning (RL); Reward function; Policy iteration; Path optimization; Trajectory planning; Navigation |
| Published date: | 2026-02-01 |
| Institution: | Swansea University |
| Faculty / Department: | Faculty of Science and Engineering, School of Mathematics and Computer Science - Computer Science |
| Funders: | Authors have been supported by the UKRI EPSRC Grant funded Doctoral Training Centre at Swansea University, through project RS718. Authors also have been supported by UKRI EPSRC Grant EP/W020408/1. |
| Open access: | SU Library paid the OA fee (TA Institutional Deal) |
| Document: | 71192.VOR.pdf, Version of Record, licence: http://creativecommons.org/licenses/by/4.0/ |

