Online velocity control and data capture of drones for the internet of things: an onboard deep reinforcement learning approach

Kai Li*, Wei Ni, Eduardo Tovar, Abbas Jamalipour

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

Unmanned aerial vehicles (UAVs) used for data collection are a promising means of extending Internet of Things (IoT) networks to remote and hostile areas and to locations without access to power supplies. Careful design of UAV velocity control and communication decision making is critical to minimizing data packet loss at ground IoT nodes, which results from buffer overflows and transmission failures. However, online velocity control and communication decision making are challenging in UAV-enabled IoT networks because the UAV lacks up-to-date knowledge of the nodes' states, e.g., their battery energy, buffer length, and channel conditions.
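The paper's approach is onboard deep reinforcement learning; as a rough illustration only, the decision problem described in the abstract (state: per-node battery energy, buffer length, and channel condition; action: UAV velocity level plus which node is scheduled to transmit) could be prototyped with a toy tabular Q-learning loop like the sketch below. This is not the authors' algorithm, and all level counts, velocity values, node counts, and reward terms are hypothetical placeholders.

```python
import random
from collections import defaultdict

# Illustrative discretization; the levels, node count, and velocity values
# below are hypothetical and not taken from the paper.
BATTERY_LEVELS = 3    # low / medium / high residual energy
BUFFER_LEVELS = 3     # near-empty / half-full / near-overflow
CHANNEL_LEVELS = 2    # bad / good channel
VELOCITY_LEVELS = [5.0, 10.0, 15.0]   # candidate UAV speeds (m/s), illustrative
NUM_NODES = 4

# A joint action: pick a velocity level and which ground node transmits next.
ACTIONS = [(v, n) for v in range(len(VELOCITY_LEVELS)) for n in range(NUM_NODES)]

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = defaultdict(float)                 # Q[(state, action)] -> estimated return

def random_state():
    """Placeholder for the (possibly outdated) per-node state the UAV observes."""
    return tuple((random.randrange(BATTERY_LEVELS),
                  random.randrange(BUFFER_LEVELS),
                  random.randrange(CHANNEL_LEVELS)) for _ in range(NUM_NODES))

def choose_action(state):
    """Epsilon-greedy selection over the joint velocity/scheduling action."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

state = random_state()
for _ in range(1000):
    action = choose_action(state)
    _, node = action
    _, buffer_lvl, channel = state[node]
    # Dummy reward shaped around the paper's stated goal: avoid packet loss
    # from buffer overflows and transmission failures. Values are placeholders.
    if buffer_lvl == BUFFER_LEVELS - 1 and channel == CHANNEL_LEVELS - 1:
        reward = 1.0       # drained a nearly full buffer over a good channel
    elif buffer_lvl == BUFFER_LEVELS - 1:
        reward = -1.0      # likely transmission failure while the buffer overflows
    else:
        reward = 0.0
    next_state = random_state()          # placeholder transition dynamics
    update(state, action, reward, next_state)
    state = next_state
```

The tabular agent is only viable for such coarse, discretized toy states; the paper's onboard deep reinforcement learning replaces the table with a learned function approximator to cope with the much larger, partially observed state space of real IoT deployments.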

Original language: English
Pages (from-to): 49-56
Number of pages: 8
Journal: IEEE Vehicular Technology Magazine
Volume: 16
Issue number: 1
Early online date: 18 Dec 2020
Publication status: Published - Mar 2021

Keywords

  • Internet of Things
  • Velocity control
  • Batteries
  • Reinforcement learning
  • Schedules
  • Wireless communication
  • Trajectory
