Multi-UAV Energy Consumption Minimization using Deep Reinforcement Learning: An Age of Information Approach
Jeena Kim (Myongji University, Natural Science Campus)
Seunghyun Park (Hansung University)
Hyunhee Park (Myongji University, Natural Science Campus)

Corresponding Author: [email protected]

Abstract

This letter introduces an approach for minimizing energy consumption in multi-UAV (Unmanned Aerial Vehicle) networks using Deep Reinforcement Learning (DRL), with a focus on optimizing the Age of Information (AoI) in disaster environments. We propose a hierarchical UAV deployment strategy that facilitates cooperative trajectory planning, ensuring timely data collection and transmission while minimizing energy consumption. By formulating the inter-UAV network path-planning problem as a Markov Decision Process (MDP), we apply a Deep Q-Network (DQN) strategy to enable real-time decision-making that accounts for dynamic environmental changes, obstacles, and UAV battery constraints. Extensive simulations in both rural and urban scenarios demonstrate the effectiveness of a memory-access approach within the DQN framework, reducing energy consumption by up to 33.25% in rural settings and 74.20% in urban environments compared to non-memory approaches. By integrating AoI considerations with energy-efficient UAV control, this work offers a robust solution for maintaining fresh data in critical applications such as disaster response, where ground-based communication infrastructure is compromised. The replay-memory approach, particularly the online-history variant, proves crucial in adapting to changing conditions and optimizing UAV operations for both data freshness and energy consumption.
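To illustrate the general technique the abstract names (an MDP path-planning problem solved with Q-learning and a replay memory), the following is a minimal, self-contained sketch. It is not the authors' implementation: the grid size, energy model (a cost of 1 per move), target cell, and the use of a tabular Q-function in place of the deep network are all simplifying assumptions made here for illustration only.

```python
import random
from collections import deque

# Toy grid-world MDP standing in for UAV path planning:
# states are (x, y) cells, actions are 4 moves, and each move
# costs one unit of energy (reward -1) until the UAV reaches
# the data-collection target. (All parameters are illustrative.)
GRID = 5
TARGET = (4, 4)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def step(state, a):
    """Apply action a, clipping the UAV to the grid boundary."""
    dx, dy = ACTIONS[a]
    ns = (min(max(state[0] + dx, 0), GRID - 1),
          min(max(state[1] + dy, 0), GRID - 1))
    done = ns == TARGET
    return ns, (0.0 if done else -1.0), done

def train(episodes=300, batch=32, gamma=0.95, alpha=0.2, eps=0.1, seed=0):
    """Q-learning with an experience-replay memory (tabular stand-in for a DQN)."""
    rng = random.Random(seed)
    Q = {}                       # Q[(state, action)] -> value estimate
    memory = deque(maxlen=5000)  # replay memory of (s, a, r, s', done)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda x: Q.get((s, x), 0.0))
            ns, r, done = step(s, a)
            memory.append((s, a, r, ns, done))
            # sample a minibatch from the replay memory and update Q
            for ms, ma, mr, mns, mdone in rng.sample(memory, min(batch, len(memory))):
                target = mr if mdone else mr + gamma * max(
                    Q.get((mns, x), 0.0) for x in range(len(ACTIONS)))
                q = Q.get((ms, ma), 0.0)
                Q[(ms, ma)] = q + alpha * (target - q)
            s = ns
            if done:
                break
    return Q

def greedy_path_cost(Q, start=(0, 0), limit=50):
    """Follow the greedy policy; return the energy (steps) spent to reach TARGET."""
    s, cost = start, 0
    for _ in range(limit):
        a = max(range(len(ACTIONS)), key=lambda x: Q.get((s, x), 0.0))
        s, _, done = step(s, a)
        cost += 1
        if done:
            return cost
    return limit
```

Replaying stored transitions in minibatches, rather than learning only from the most recent step, is what lets the agent keep adapting as conditions change, which is the role the letter attributes to the replay-memory (online history) mechanism.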
07 Mar 2024  Submitted to Electronics Letters
11 Mar 2024  Assigned to Editor
11 Mar 2024  Submission Checks Completed
19 Mar 2024  Reviewer(s) Assigned