Call for Papers: Robotics Perception in Adversarial Environments

Thursday, August 29, 2019




  • January 31st, 2020: Manuscript Submission
  • Review and publication (if accepted) of manuscripts will proceed on a rolling basis.


  • A full peer-review process will be conducted to select high-quality articles of scientific significance for the open-access special issue.
  • CiteScore 3.36; SJR 0.527 (Q2 quartile).
  • A special 30% discount on open-access fees applies, since the special issue (Research Topic) is based on the ICRA 2019 Workshop on “Underwater Robotics Perception”, see link:
  • We strongly recommend checking whether your institution already has an Institutional Membership with Frontiers, or the procedure for applying for a fee waiver, see link:
  • Authors are invited to submit a manuscript of at most 12,000 words presenting their original research, using the provided LaTeX or Word templates. For all details about submission and author guidelines, see link: 
  • If you have any questions about the above, do not hesitate to contact one of the guest editors.


Robotics perception research has advanced tremendously in recent years thanks to the development of affordable, cutting-edge sensor technologies (e.g., LiDAR, sonars) and data-driven techniques. While progress is still being made, most of these methods are trained, applied, and evaluated on abundant, high-quality data. However, many field or in-the-wild robotics applications face substantial performance drops relative to constrained/structured environments due to the low-quality visual data common in such scenarios, which suffers from various types of degradation and environmental disturbance (fog, ash, or inclement weather). Although some of these artifacts can be overcome by sophisticated algorithms and models, their impact becomes increasingly noticeable once the level of degradation or change passes some empirical threshold.

Based on this, and as an extension of the ICRA 2019 workshop on “Underwater Robotics Perception”, the goal of this Research Topic is to review the recent progress of robust visual perception technologies and methods in challenging adversarial environments.

We welcome computer vision and robotics experts from various fields to share their experience and perspective from working on applications for dynamic environments with unreliable data, e.g., autonomous driving, agricultural robotics, underwater exploration, mining, search and rescue robotics, highly agile UAVs, environmental conservation, and many others.

We welcome articles of theoretical or practical significance. Authors may report theoretical innovations or robust perception frameworks that cope with data volatility and degradation, as well as systems papers that describe applications under challenging conditions and offer insight into why a particular approach performs well and which challenges were surmounted.


  • Robust recognition from low-quality and/or scarce data in different sensor domains (optical cameras, LiDARs, sonars, multibeam, event cameras, multi- and hyper-spectral sensing, etc.).
  • Robust recognition in highly dynamic environments or long-term robotic system deployments.
  • Image/video restoration and enhancement from degradations due to low illumination, color distortion, inclement weather, poor visibility, etc.
  • Novel sensor developments or sensor fusion and calibration techniques for robust visual perception.
  • Simulated environments and continuous system integration, i.e., synthetic data generation, simulation-to-real-world transition, hardware-in-the-loop.
  • Low-quality and scarce data mining, augmentation, and processing methods for visual systems.
  • Deep learning practices and machine learning pipelines in any of the mentioned topics.
  • Heavily tested systems in field trials and best practices for deployment and data management.
  • Surveys of computer vision algorithms and applications under adversarial and challenging environments.
  • Applications of any of the previous to vision-based localization, registration, mapping, modeling, pose estimation and other areas.


  • Research Associate Arturo Gomez Chavez - Jacobs University Bremen gGmbH
  • Dr. Christian A. Mueller - Jacobs University Bremen gGmbH
  • Dr. Amy Tabb - U.S. Department of Agriculture, Agricultural Research Service
  • Dr. Max Pfingsthorn - OFFIS Institute for Information Technology
  • Prof. Sören Schwertfeger - ShanghaiTech University
  • Dr. Enrica Zereik - Italian National Research Council (CNR)
  • Prof. Francesco Maurelli - Jacobs University Bremen gGmbH