Learning to Fly with a Video Generator

Chia Chun Chung, Wen Hsiao Peng, Teng Hu Cheng, Chia Hau Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper demonstrates a model-based reinforcement learning framework for training a self-flying drone. We implement Dreamer, proposed in prior work, as an environment model that responds to the drone's actions by predicting the next video frame as a new state signal; Dreamer here acts as a conditional video-sequence generator. This model-based environment avoids time-consuming interactions between the agent and the real environment, greatly speeding up training. This demonstration showcases, for the first time, the application of Dreamer to train an agent that can complete a racing task in the AirSim simulator.
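The abstract's core idea is rolling out trajectories inside a learned model instead of the real simulator. The sketch below illustrates that loop in miniature; the class and function names, dynamics, and reward are all illustrative placeholders, not the authors' actual Dreamer implementation or the AirSim API.

```python
# Hypothetical sketch of the model-based loop described in the abstract:
# the agent steps a learned world model that predicts the next observation
# (standing in for the next video frame) from the current state and the
# chosen action, so no real-simulator interaction is needed.
import random


class ToyWorldModel:
    """Stand-in for the Dreamer-style generator: maps
    (state, action) -> (predicted next state, reward)."""

    def step(self, state, action):
        next_state = tuple(s + action for s in state)  # placeholder dynamics
        reward = -abs(sum(next_state))                 # placeholder reward
        return next_state, reward


def imagine_rollout(model, policy, init_state, horizon):
    """Roll out a trajectory entirely inside the learned model,
    avoiding costly interaction with the real environment."""
    state, total_reward = init_state, 0.0
    for _ in range(horizon):
        action = policy(state)
        state, reward = model.step(state, action)
        total_reward += reward
    return total_reward


random.seed(0)
policy = lambda s: random.choice([-1, 0, 1])  # toy policy for illustration
ret = imagine_rollout(ToyWorldModel(), policy, (0, 0), horizon=10)
```

In the actual framework, the world model would be trained on logged simulator frames, and the policy would be optimized against these imagined rollouts rather than against AirSim directly.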

Original language: English
Title of host publication: 2021 International Conference on Visual Communications and Image Processing, VCIP 2021 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728185514
DOIs
State: Published - 2021
Event: 2021 International Conference on Visual Communications and Image Processing, VCIP 2021 - Munich, Germany
Duration: 5 Dec 2021 - 8 Dec 2021

Publication series

Name: 2021 International Conference on Visual Communications and Image Processing, VCIP 2021 - Proceedings

Conference

Conference: 2021 International Conference on Visual Communications and Image Processing, VCIP 2021
Country/Territory: Germany
City: Munich
Period: 5/12/21 - 8/12/21
