BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Fuzz Testing and Evaluation
DTSTART:20200124T150000
DTEND:20200124T170000
DTSTAMP:20260406T171719Z
UID:45cd345e94255f2914ef55a16886f1b37f6bb303e188e84e07a1c497
CATEGORIES:Conferences - Seminars
DESCRIPTION:Ahmad Hazimeh\nEDIC candidacy exam\nExam president: Prof 
 Jean-Pierre Hubaux\nThesis Advisor: Prof Mathias Payer\nCo-examiner: 
 Prof Jim Larus\n\nAbstract\nFuzzing is a prominent dynamic testing 
 technique that aims to discover software bugs through brute force. 
 Fuzzers have evolved in different directions\, with the common goal 
 of maximizing the efficiency of the process. However\, the lack of 
 proper benchmarks and performance metrics results in ad-hoc evaluations 
 that prohibit fair comparisons between fuzzers. In this study\, we 
 examine an existing ground-truth benchmark\, LAVA-M\, which has become 
 the de facto standard for fuzzer evaluation\, and we shed light on 
 its shortcomings. Among the fuzzers evaluated against LAVA-M\, Angora 
 displayed the highest detection rate and was thus the subject of 
 scrutiny. We explore Angora's strengths and weaknesses and discuss 
 the pitfalls in its evaluation. Lastly\, the third background paper 
 surveys previous evaluations\, highlights the drawbacks of common 
 practices\, and suggests guidelines for more consistent evaluations. 
 Based on these works\, we propose Magma\, a ground-truth fuzzing 
 benchmark with real programs and bugs\, designed to closely mimic 
 in-the-wild scenarios for evaluating software testing tools.\n\n
 Background papers\nAngora: Efficient Fuzzing by Principled Search\, 
 by Peng Chen\, Hao Chen\, IEEE S&P 2018.\nLAVA: Large-Scale Automated 
 Vulnerability Addition\, by Brendan Dolan-Gavitt et al.\, IEEE S&P 
 2016.\nEvaluating Fuzz Testing\, by George Klees et al.\, ACM CCS 
 2018.
LOCATION:BC 129 https://plan.epfl.ch/?room=BC%20129
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
