Journal article
Reinforcement Learning-Based Intelligent Path Planning for Optimal Navigation in Dynamic Environments
Neural Processing Letters, Volume: 58, Issue: 1, Start page: 10
Swansea University Author: Cheng Cheng
© The Author(s) 2026. This article is licensed under a Creative Commons Attribution 4.0 International License.
DOI (Published version): 10.1007/s11063-025-11821-2
| Published in: | Neural Processing Letters |
|---|---|
| ISSN: | 1370-4621 (print); 1573-773X (electronic) |
| Published: | Springer Nature, 2026 |
| URI: | https://cronfa.swan.ac.uk/Record/cronfa71192 |
| Abstract: | Path selection and planning are crucial for autonomous mobile robots (AMRs) to navigate efficiently and avoid obstacles. Traditional methods rely on analytical search to identify the shortest distance, whereas reinforcement learning (RL) enhances performance by efficiently optimizing a sequence of actions. RL is an iterative approach used for computational sequence modeling and dynamic programming: the agent receives sensory input from the environment in the form of an observation or state and interprets each reward or penalty through trial-and-error interaction, while the policy maximizes cumulative reward by selecting the optimal action among all possible actions. A challenging problem in traditional reinforcement learning is environment generalization for dynamic systems. Q-learning struggles in dynamic environments because it relies on rewards or penalties accumulated over the entire sequence of actions from the start to the end state; this often fails to produce optimal results when the environment changes unexpectedly due to state transitions, iterations, or blocked routes, making Q-learning less effective for dynamic path planning. To overcome these challenges, this study optimizes the reward function in RL-based path planning, aiming to enhance navigation efficiency and obstacle avoidance. The proposed method evaluates the shortest decision path by considering total steps, counted steps, and discount rates in dynamic environments. Using this optimized reward mechanism, the study analyzes state reward values across different environments and evaluates the effect on state-action-pair-based Q-learning and on neural networks using Deep Q-Learning algorithms. Results demonstrate that the optimized reward function effectively decreases the number of iterations and episodes while achieving a 30% to 70% reduction in overall trajectory distance. These results highlight the effectiveness of reward-based reinforcement learning and its potential to improve path optimization, learning rate, episode completion, and decision accuracy in intelligent navigation systems. Q-learning-based reinforcement learning can be made more effective on larger maps by combining multiple agents and employing decision-making techniques such as federated and transfer learning to ensure convergence. |
| Keywords: | Q-learning (QL); Reinforcement learning (RL); Reward function; Policy iteration; Path optimization; Trajectory planning; Navigation |
| College: | Faculty of Science and Engineering |
| Funders: | Authors have been supported by the UKRI EPSRC Grant-funded Doctoral Training Centre at Swansea University, through project RS718. Authors have also been supported by UKRI EPSRC Grant EP/W020408/1. |
| Issue: | 1 |
| Start Page: | 10 |
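As a rough illustration of the reward-shaping idea the abstract describes, the sketch below trains tabular Q-learning on a toy grid world whose reward function penalizes each step (discouraging long trajectories) and penalizes blocked cells. The grid layout, reward values, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal tabular Q-learning sketch with a step-penalized reward function.
# All rewards, the grid, and hyperparameters are illustrative assumptions.
import random

random.seed(0)

ROWS, COLS = 4, 4
GOAL = (3, 3)
OBSTACLES = {(1, 1), (2, 2)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
STEP_PENALTY = -1.0      # per-move cost: shapes the policy toward short paths
OBSTACLE_PENALTY = -10.0
GOAL_REWARD = 10.0

# Q-table over state-action pairs, initialized to zero.
Q = {((r, c), a): 0.0
     for r in range(ROWS) for c in range(COLS)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    r, c = state
    dr, dc = ACTIONS[action]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < ROWS and 0 <= nc < COLS):
        return state, STEP_PENALTY, False       # bumped into a wall
    if (nr, nc) in OBSTACLES:
        return state, OBSTACLE_PENALTY, False   # blocked route
    if (nr, nc) == GOAL:
        return (nr, nc), GOAL_REWARD, True
    return (nr, nc), STEP_PENALTY, False

def greedy(state):
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy exploration over the action set.
        a = random.randrange(len(ACTIONS)) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, a)
        best_next = max(Q[(nxt, b)] for b in range(len(ACTIONS)))
        Q[(state, a)] += ALPHA * (reward + GAMMA * best_next - Q[(state, a)])
        state = nxt

# Roll out the learned greedy policy from the start cell.
state, path = (0, 0), [(0, 0)]
for _ in range(20):
    state, _, done = step(state, greedy(state))
    path.append(state)
    if done:
        break
print(path)
```

Because every move costs `STEP_PENALTY`, the learned greedy rollout converges toward the shortest obstacle-free trajectory, which is the effect the optimized reward function in the abstract aims for on a much larger scale.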

