[Video Recording]


Computer vision systems today achieve strong performance, but research in adversarial machine learning shows that they are not as robust as the human visual system. Recent work has shown that real-world adversarial examples exist when objects are partially occluded or viewed in previously unseen poses and environments (such as different weather conditions). Discovering and harnessing these adversarial examples provides opportunities for understanding and improving computer vision systems in real-world environments. In particular, deep models with structured internal representations are a promising approach to enhancing robustness in the real world while also being able to explain their predictions.

In this workshop, we aim to bring together researchers from the fields of adversarial machine learning, robust vision, and explainable AI to discuss recent research and future directions for adversarial robustness and explainability, with a particular focus on real-world scenarios.


  • NVIDIA Best Paper Award
  • Towards Analyzing Semantic Robustness of Deep Neural Networks
    Abdullah J Hamdi (KAUST)*; Bernard Ghanem (KAUST)

  • NVIDIA Best Paper Runner-Up
  • Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses
    Fu Lin (Georgia Institute of Technology)*; Rohit Mittapalli (Georgia Institute of Technology); Prithvijit Chattopadhyay (Georgia Institute of Technology); Daniel Bolya (University of California, Davis); Judy Hoffman (Georgia Tech)

  • NVIDIA Female Leader in Computer Vision Award
  • Prof. Judy Hoffman (Georgia Tech)


    08:45 - 09:00         Opening Remarks

    09:00 - 09:30         Invited Talk 1: Andreas Geiger - Attacking Optical Flow

    09:30 - 10:00         Invited Talk 2: Wieland Brendel - To Defend Against Adversarial Examples We Need to Understand Human Vision

    10:00 - 12:00         Poster Session 1

    12:00 - 12:30         Invited Talk 3: Alan Yuille - Adversarial Robustness

    12:30 - 13:00         Invited Talk 4: Raquel Urtasun - Adversarial Attacks and Robustness for Self-Driving

    13:00 - 14:30         Lunch Break

    14:30 - 15:00         Invited Talk 5: Alex Robey - Model-based Robust Deep Learning

    15:00 - 15:30         Invited Talk 6: Judy Hoffman - Achieving and Understanding Adversarial Robustness

    15:30 - 16:00         Invited Talk 7: Honglak Lee - Generative Modeling Perspective for Synthesizing and Interpreting Adversarial Attacks

    16:00 - 16:30         Invited Talk 8: Bo Li - Secure Learning in Adversarial Autonomous Driving Environments

    16:30 - 17:00         Invited Talk 9: Daniel Fremont - Semantic Adversarial Analysis with Formal Scenarios

    17:00 - 17:45         Panel Discussion

    17:45 - 19:00         Poster Session 2

    Accepted Papers

    Call For Papers

    Submission deadline: July 20, 2020 Anywhere on Earth (AoE) (extended from July 10)

    Reviews due: July 27, 2020 Anywhere on Earth (AoE) (extended from July 26)

    Notification sent to authors: July 29, 2020 Anywhere on Earth (AoE)

    Presentation materials deadline: August 16, 2020 Anywhere on Earth (AoE)

    Camera ready deadline: September 10, 2020 Anywhere on Earth (AoE)

    Submission server: https://cmt3.research.microsoft.com/AROW2020/

    Submission format: Submissions must be anonymized and follow the ECCV 2020 Author Instructions. The workshop considers two types of submissions: (1) Long Paper: limited to 14 pages excluding references; will be included in the official ECCV proceedings. (2) Extended Abstract: limited to 4 pages including references; will NOT be included in the official ECCV proceedings. Please use the CVPR template for extended abstracts.
    Based on the program committee's recommendations, accepted long papers and extended abstracts will be allocated either a contributed talk or a poster presentation.

    We invite submissions on any aspect of adversarial robustness in real-world computer vision. This includes, but is not limited to:


    Organizing Committee

    Program Committee

    • Akshayvarun Subramanya (UMBC)
    • Alexander Robey (University of Pennsylvania)
    • Ali Shahin Shamsabadi (Queen Mary University of London)
    • Aniruddha Saha (University of Maryland Baltimore County)
    • Anshuman Suri (University of Virginia)
    • Bernhard Egger (MIT)
    • Chen Zhu (University of Maryland)
    • Chenglin Yang (Johns Hopkins University)
    • Chirag Agarwal (UIC)
    • Gaurang Sriramanan (Indian Institute of Science)
    • Jamie Hayes (University College London)
    • Jiachen Sun (University of Michigan)
    • Jieru Mei (Johns Hopkins University)
    • Kibok Lee (University of Michigan)
    • Lifeng Huang (Sun Yat-sen University)
    • Mario Wieser (University of Basel)
    • Maura Pintor (University of Cagliari)
    • Muhammad Awais (Kyung-Hee University)
    • Muzammal Naseer (ANU)
    • Nataniel Ruiz (Boston University)
    • Peng Tang (Salesforce Research)
    • Qihang Yu (Johns Hopkins University)
    • Rajkumar Theagarajan (University of California, Riverside)
    • Sravanti Addepalli (Indian Institute of Science)
    • Tianyu Pang (Tsinghua University)
    • Won Park (University of Michigan)
    • Xiangning Chen (University of California, Los Angeles)
    • Xingjun Ma (Deakin University)
    • Xinwei Zhao (Drexel University)
    • Yash Sharma (University of Tübingen)
    • Yulong Cao (University of Michigan, Ann Arbor)
    • Yuzhe Yang (MIT)
    • Ziqi Zhang (Peking University)


    Please contact Adam Kortylewski or Cihang Xie if you have questions. The webpage template is courtesy of the CVPR 2020 Workshop on Adversarial Machine Learning in Computer Vision.