Random Testing of Compilers’ Performance Based on Mixed Static and Dynamic Code Comparison
This paper proposes an automated test method for detecting performance
bugs in compilers. It is based on differential random testing, in
which randomly generated programs are compiled by two different
compilers and resulting pairs of assembly codes are compared. Our
method attempts to achieve efficient and accurate detection of
performance differences by combining dynamic measurement of execution
time with static assembly-level comparison and test program
minimization. In the first step, discrepant pairs of code sections in
the assembly codes are extracted, and then the sums of the weights of
discrepant instructions in the sections are computed. If significant
differences are detected, the test program is reduced to a small
program that still exhibits the static difference, and then the actual
execution times of the two codes are compared. A test system has been
implemented on top of the random test system Orange4, which has
successfully detected a regression in the optimizer of a development
version of GCC-8.0.0 (the latest as of May 2017).
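To illustrate the static comparison step described above, here is a minimal sketch of scoring the discrepant sections of two assembly listings by summed instruction weights. The weight table and the scoring function are hypothetical illustrations, not the weights or implementation used by the Orange4-based system:

```python
import difflib

# Hypothetical per-mnemonic weights (assumption: calls and multiplies
# cost more than simple ALU ops; unknown mnemonics default to 1).
WEIGHTS = {"call": 5, "imul": 3, "mov": 2, "add": 1, "jmp": 1}

def discrepancy_score(asm_a, asm_b):
    """Sum the weights of instructions that appear in discrepant
    (non-matching) sections of the two assembly listings."""
    matcher = difflib.SequenceMatcher(None, asm_a, asm_b)
    score = 0
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            continue  # identical sections contribute nothing
        for line in asm_a[i1:i2] + asm_b[j1:j2]:
            parts = line.split()
            mnemonic = parts[0] if parts else ""
            score += WEIGHTS.get(mnemonic, 1)
    return score

# Example: two compilers emit different code for "return 0;"
listing_a = ["mov eax, 0", "ret"]
listing_b = ["xor eax, eax", "ret"]
print(discrepancy_score(listing_a, listing_b))  # 0 only if identical
```

In the method described in the abstract, a nonzero score above some threshold would trigger test-program minimization and a dynamic timing comparison of the two compiled binaries.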
Mon 5 Nov (displayed time zone: Guadalajara, Mexico City, Monterrey)

13:30 - 15:00 session:
- 13:30 (45m, talk): Random Testing of Compilers' Performance Based on Mixed Static and Dynamic Code Comparison (A-TEST)
- 14:15 (45m, talk): Test Patterns for IoT (A-TEST). Pedro Martins Pontes, Bruno Lima, and João Pascoal Faria, Faculty of Engineering, University of Porto and INESC TEC