Trajectory Optimization for Autonomous Wireless Communication via Machine Learning


Xinyi Sun

CoPIs:
Yuyang Zhang

College:
The Dorothy and George Hennings College of Science, Mathematics, and Technology

Major:
Mathematical Sciences

Faculty Research Advisor(s):
Maryam Cheraghy

Abstract:
With the increasing utilization of drone technology for monitoring and data acquisition, optimizing flight trajectories to enhance efficiency has emerged as a pressing concern. Drones often operate under stringent time constraints, particularly in tasks such as geographic surveying, agricultural monitoring, and emergency response. Thus, developing trajectory optimization strategies capable of completing tasks within limited timeframes while maximizing data collection has become a pivotal focus of contemporary research.

Prior literature has proposed various trajectory optimization techniques, including model-based predictive control and heuristic search strategies. However, these methods often rely on predefined environmental models and struggle to adapt to dynamically changing real-world conditions, particularly in unknown or partially known environments. Moreover, existing studies have inadequately addressed the impact of UAV power consumption and time constraints on trajectory planning during mission execution. These limitations underscore the necessity for more flexible and adaptive trajectory optimization methodologies.

In response to these challenges, this research devises a method for efficient trajectory planning in unknown environments. Deep learning and reinforcement learning serve as the core technologies because they enable drones to learn and adapt through real-time interaction with the environment. Specifically, this study employs the Deep Q-Network (DQN) algorithm, which allows a drone to make decisions autonomously when confronted with unfamiliar obstacles, without relying on detailed prior knowledge of the environment. First, the drone's environment and behavior are modeled as a Markov decision process (MDP) so that appropriate decisions can be made in unknown environments. Second, we simulate drone flights under varied environmental conditions to collect a large volume of training samples. These data are then used to train the DQN, which enables the drone to adjust its flight trajectory autonomously based on the state of the environment and maximize data-collection efficiency. Finally, the effectiveness and feasibility of the proposed method in practical scenarios are verified through UAV data-acquisition simulations. With further experiments and data analysis, we will provide a more comprehensive evaluation and performance analysis of the method.
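The MDP-and-Q-learning pipeline outlined above can be illustrated with a minimal sketch. Every specific here (a 5x5 grid, a single data-collection target, the rewards, and the hyperparameters) is an illustrative assumption rather than the study's actual setup, and tabular Q-learning stands in for the deep Q-network so the example stays self-contained.

```python
import random

# Toy MDP: a drone starts at START and must reach a data-collection
# target at TARGET within a step budget. These values are assumptions
# for illustration only.
SIZE, START, TARGET, MAX_STEPS = 5, (0, 0), (4, 4), 50
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # E, W, S, N grid moves

def step(state, move):
    """MDP transition: clamp motion to the grid; -1 per step, +10 at target."""
    x = min(max(state[0] + move[0], 0), SIZE - 1)
    y = min(max(state[1] + move[1], 0), SIZE - 1)
    nxt = (x, y)
    done = nxt == TARGET
    return nxt, (10.0 if done else -1.0), done

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning; a DQN would replace the Q table
    with a neural network trained on the same (s, a, r, s') samples."""
    rng = random.Random(seed)
    Q = {}  # (state, action_index) -> estimated return
    for _ in range(episodes):
        s, done, steps = START, False, 0
        while not done and steps < MAX_STEPS:
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))  # explore
            else:  # exploit current estimates
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
            nxt, r, done = step(s, ACTIONS[a])
            best_next = 0.0 if done else max(
                Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s, steps = nxt, steps + 1
    return Q

def greedy_rollout(Q):
    """Follow the learned greedy policy from START; report (reached, steps)."""
    s, steps = START, 0
    while s != TARGET and steps < MAX_STEPS:
        a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
        s, _, _ = step(s, ACTIONS[a])
        steps += 1
    return s == TARGET, steps
```

After training, the greedy policy recovers a short path to the target, which is the tabular analogue of the DQN selecting trajectory segments that maximize expected data collection.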

