Reinforcement Learning-based Robotic Source Seeking in Turbulent Environments Inspired by Fruit Flies
Publication Date
6-1-2025
Document Type
Conference Proceeding
Publication Title
IFAC-PapersOnLine
Volume
59
Issue
4
DOI
10.1016/j.ifacol.2025.07.061
First Page
157
Last Page
162
Abstract
Navigating mobile robots in a turbulent flow field is challenging due to unpredictable odorant plume dispersion and intermittent environmental cues. This paper presents a reinforcement learning (RL)-based approach for robotic source seeking in such environments, inspired by the navigation behaviors of fruit flies. A Deep Q-Network (DQN) model is trained on experimentally recorded fruit-fly trajectories to develop an adaptive search strategy. The robot learns to make navigation decisions from limited sensory feedback, leveraging stochastic environmental cues to improve its movement toward the source. The RL-based approach generalizes across different trajectories, achieving higher accumulated rewards than the biological trajectories themselves. Simulation results demonstrate the model's robustness and adaptability, highlighting the potential of RL for bio-inspired navigation in mobile robotics and environmental monitoring.
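The cue-driven search strategy described in the abstract can be sketched, in a much-simplified form, with a tabular Q-learning agent standing in for the paper's DQN. Everything here is an illustrative assumption rather than the paper's actual setup: a 1-D grid with the source at cell 0, a binary intermittent odor cue whose detection probability rises near the source, and a shaped reward for progress toward the source.

```python
import random

random.seed(0)

N = 10               # 1-D grid; odor source at cell 0, robot starts at cell N-1
ACTIONS = (-1, +1)   # step toward / away from the source
DETECT_P = 0.3       # base probability of sensing an odor packet (intermittent plume)

def step(pos, action):
    """Apply a move, clip to the grid, and return (new_pos, reward)."""
    new = min(max(pos + action, 0), N - 1)
    # shaped reward: +1 per cell of progress toward the source, +10 on arrival
    return new, (pos - new) + (10.0 if new == 0 else 0.0)

def observe(pos):
    """Binary odor cue, detected more often closer to the source."""
    return 1 if random.random() < DETECT_P * (N - pos) / N else 0

# State = (position, last cue); a Q-table stands in for the DQN's value network
Q = {(p, c): [0.0, 0.0] for p in range(N) for c in (0, 1)}
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

for episode in range(500):
    pos, cue = N - 1, 0
    for _ in range(50):
        s = (pos, cue)
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        pos, reward = step(pos, ACTIONS[a])
        cue = observe(pos)
        # one-step Q-learning update toward the bootstrapped target
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[(pos, cue)]) - Q[s][a])
        if pos == 0:
            break

# Greedy rollout: the learned policy should head toward the source
pos, cue, steps = N - 1, 0, 0
while pos != 0 and steps < 2 * N:
    a = max((0, 1), key=lambda i: Q[(pos, cue)][i])
    pos, _ = step(pos, ACTIONS[a])
    cue = observe(pos)
    steps += 1
```

The paper's DQN replaces the Q-table with a neural network and is trained from recorded fly trajectories; this toy keeps only the RL skeleton (intermittent observations, epsilon-greedy exploration, bootstrapped value updates).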
Keywords
mobile robots, odor plume tracking, reinforcement learning, source seeking
Department
Computer Engineering
Recommended Citation
Gauravkumar Koradiya, Vikas Bhandawat, and Wencen Wu. "Reinforcement Learning-based Robotic Source Seeking in Turbulent Environments Inspired by Fruit Flies" IFAC-PapersOnLine 59.4 (2025): 157-162. https://doi.org/10.1016/j.ifacol.2025.07.061