A fundamental challenge of software testing is the statistically well-grounded extrapolation from program behaviors observed during testing. For instance, a security researcher who has run a fuzzer for a week currently has no means (1) to estimate the total number of feasible program branches, given that only a fraction has been covered so far; (2) to estimate the additional time required to cover 10% more branches (or, conversely, the coverage achieved in one more day); or (3) to assess the residual risk that a vulnerability exists when none has been discovered. Failing to discover a vulnerability does not mean that none exists—even if the fuzzer was run for a week (or a year). Hence, testing provides no formal correctness guarantees.
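The STADS analogy treats covered branches as discovered "species" and test inputs as sampled "individuals", which makes classical biostatistics estimators applicable. As a minimal sketch (not the paper's full framework), the Chao1 estimator addresses task (1), estimating total feasible branches from the observed hit counts, and the Good-Turing estimate addresses task (3), the probability that the next input discovers something new; the data layout (a flat list of branch ids, one per hit) is an assumption for illustration:

```python
from collections import Counter

def chao1(branch_hits):
    """Estimate the total number of feasible branches from per-branch hit counts.

    branch_hits: list of branch ids, one entry per (input, branch) hit observed.
    """
    counts = Counter(branch_hits)                    # branch id -> hit count
    s_obs = len(counts)                              # branches observed so far
    f1 = sum(1 for c in counts.values() if c == 1)   # singletons (seen once)
    f2 = sum(1 for c in counts.values() if c == 2)   # doubletons (seen twice)
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)            # classic Chao1
    return s_obs + f1 * (f1 - 1) / 2                 # bias-corrected form when f2 == 0

def discovery_probability(branch_hits):
    """Good-Turing estimate of the chance the next input covers a new branch."""
    n = len(branch_hits)
    f1 = sum(1 for c in Counter(branch_hits).values() if c == 1)
    return f1 / n if n else 1.0
```

For example, with hits `["a","a","b","c","c","d"]` there are 4 observed branches, 2 singletons, and 2 doubletons, so Chao1 estimates 5 feasible branches and Good-Turing gives a 1/3 chance that the next input is novel.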
Tue 6 Nov (time zone: Guadalajara, Mexico City, Monterrey)

15:30 - 17:00
15:30 (22m) Talk: Text Filtering and Ranking for Security Bug Report Prediction (Journal-First). Fayola Peters (Lero - The Irish Software Research Centre and University of Limerick), Thein Than Tun, Yijun Yu (The Open University, UK), Bashar Nuseibeh (The Open University, UK & Lero, Ireland)

15:52 (22m) Talk: STADS: Software Testing as Species Discovery (Journal-First). Marcel Böhme (Monash University)

16:15 (22m) Talk: The Impact of Regular Expression Denial of Service (ReDoS) in Practice: An Empirical Study at the Ecosystem Scale (Research Papers). James C. Davis (Virginia Tech, USA), Christy A. Coghlan (Virginia Tech, USA), Francisco Servant (Virginia Tech), Dongyoon Lee (Virginia Tech, USA)

16:37 (22m) Talk: FraudDroid: Automated Ad Fraud Detection for Android Apps (Research Papers). Feng Dong (Beijing University of Posts and Telecommunications, China), Haoyu Wang, Li Li (Monash University, Australia), Yao Guo (Peking University), Tegawendé F. Bissyandé (University of Luxembourg, Luxembourg), Tianming Liu (Beijing University of Posts and Telecommunications, China), Guoai Xu, Jacques Klein (University of Luxembourg, SnT)