Sun 4 Nov 2018 11:15 - 11:30 at Rock Lake - NL4SE Workshop II

Test generation can have a large impact on the software engineering process by reducing the time and effort required to maintain a high level of test coverage, which in turn increases the quality of the resulting software. In this paper, we present TestNMT, an experimental approach to test generation based on neural machine translation. TestNMT learns to translate from functions to tests, allowing a developer to generate an approximate test for a given function, which can then be adapted into the final desired test.

We also present a preliminary quantitative and qualitative evaluation of TestNMT in both cross-project and within-project scenarios. The evaluation shows that TestNMT is potentially useful in the within-project scenario, where it achieves a maximum BLEU score of 21.2 and a maximum ROUGE-L score of 38.67, and where the generated approximate tests are shown to be easy to adapt into working tests.
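The ROUGE-L metric mentioned above scores a generated test against a reference test by the longest common subsequence (LCS) of their tokens. The following is a minimal, self-contained sketch of the standard ROUGE-L F-measure (with beta = 1) over token sequences; it illustrates the metric itself, not TestNMT's actual evaluation pipeline, and the example token sequences are invented for illustration.

```python
def lcs_len(a, b):
    # Classic dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if x == y:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l(reference, hypothesis):
    # ROUGE-L F1: harmonic mean of LCS-based recall and precision.
    lcs = lcs_len(reference, hypothesis)
    if lcs == 0:
        return 0.0
    recall = lcs / len(reference)
    precision = lcs / len(hypothesis)
    return 2 * precision * recall / (precision + recall)

# Hypothetical tokenized reference test vs. generated test.
ref = "assertEquals ( expected , actual )".split()
hyp = "assertEquals ( result , actual )".split()
print(round(rouge_l(ref, hyp), 2))  # → 0.83
```

An identical pair of token sequences scores 1.0, and sequences with no tokens in common score 0.0; reported scores such as 38.67 correspond to this value scaled by 100.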

Sun 4 Nov

10:30 - 12:00: NL4SE Workshop II at Rock Lake

10:30 - 10:45
10:45 - 11:00: Kevin Lee (University of California at Davis, USA), Casey Casalnuovo (University of California at Davis, USA)
11:00 - 11:15: Danielle Gonzalez (Rochester Institute of Technology, USA), Suzanne Prentice (University of South Carolina, USA), Mehdi Mirakhorli (Rochester Institute of Technology)
11:15 - 11:30: Robert White (University College London, UK), Jens Krinke (University College London)
11:30 - 11:45: Kate M. Bowers, Reihaneh H. Hariri (Oakland University, USA), Katey A. Price (Albion College, USA)
11:45 - 12:00: Sergey Matskevich (Drexel University, USA), Colin Gordon (Drexel University)