Real-world Challenges for Adversarial Patches in Object Detection

Adversarial attacks make small, malicious changes to inputs that deceive ML models into making incorrect predictions. Such changes may look normal or harmless to human observers, yet still fool an ML system; in the computer vision domain, for example, a sticker placed in the observed scene. These attacks are usually designed with digital transformation techniques that account for environmental variables. However, real-world factors, such as how light interacts with different shapes and materials, add extra complexity. Our project will study how these real-world conditions affect the performance of adversarial patches using a physically based rendering engine.
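As a rough illustration of the kind of pipeline involved, the sketch below pastes an adversarial patch into detector inputs under random digital transformations, in the spirit of Expectation over Transformation. All names, parameter ranges, and transformation choices here are illustrative assumptions, not part of the project description.

```python
# Minimal sketch: apply an adversarial patch under random digital
# transformations (rotation, brightness, placement). These stand in for
# environmental variables; a physically based renderer would model light
# and material interactions far more faithfully.
import torch
import torchvision.transforms.functional as TF

def apply_patch(images: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    """Paste `patch` into each image under a random transformation.

    images: (B, 3, H, W) scene images in [0, 1]
    patch:  (3, h, w) adversarial patch in [0, 1], smaller than the images
    """
    B, _, H, W = images.shape
    out = images.clone()
    for i in range(B):
        # Random rotation and brightness approximate viewing angle
        # and illumination changes (illustrative ranges).
        p = TF.rotate(patch, angle=float(torch.empty(1).uniform_(-20, 20)))
        p = torch.clamp(p * float(torch.empty(1).uniform_(0.7, 1.3)), 0, 1)
        _, h, w = p.shape
        # Random placement inside the image.
        y = int(torch.randint(0, H - h + 1, (1,)))
        x = int(torch.randint(0, W - w + 1, (1,)))
        out[i, :, y:y + h, x:x + w] = p
    return out
```

In a typical attack loop, the patched batch is fed to the detector and the detection loss is maximized with respect to the patch pixels, so that the patch remains effective across the sampled transformations.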

Student Target Groups:

  • Students in ICE and Computer Science.

Thesis Type:

  • Master Thesis / Bachelor Thesis

Goal and Tasks:

  • Literature review on adversarial learning and defense mechanisms;
  • Model digital adversarial patches under real-world conditions to examine how physical factors influence their effectiveness; compare your method to existing approaches;
  • Summarize the results in a written report and prepare an oral presentation.

Recommended Prior Knowledge:

  • Programming skills in Python and C++; interest in writing efficient code;
  • Prior experience with deep learning frameworks (preferably PyTorch).

Start:

  • a.s.a.p.

Contact: