Tue 6 Nov 2018, 14:15 - 14:37, at Horizons 6-9F - Deep Learning session. Chair(s): David Rosenblum

With the rise of machine learning, there is a great deal of interest in treating programs as data to be fed to learning algorithms. However, programs do not start off in a form that is immediately amenable to most off-the-shelf learning techniques. Instead, it is necessary to transform the program to a suitable representation before a learning technique can be applied.

In this paper, we use abstractions of traces obtained from symbolic execution of a program as a representation for learning word embeddings. We train a variety of word embeddings under hundreds of parameterizations, and evaluate each learned embedding on a suite of different tasks. In our evaluation, we obtain 93% top-1 accuracy on a benchmark consisting of over 19,000 API-usage analogies extracted from the Linux kernel. In addition, we show that embeddings learned from (mainly) semantic abstractions provide nearly triple the accuracy of those learned from (mainly) syntactic abstractions.
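To make the idea concrete, the following is a minimal sketch (not the authors' actual pipeline) of training word embeddings over abstracted traces and posing an API-usage analogy query. It assumes gensim >= 4.0; the trace tokens, token naming scheme, and kernel function names below are illustrative placeholders rather than data from the paper.

```python
# Minimal sketch: skip-gram word2vec over abstracted symbolic-execution traces,
# then an analogy query in the style of "mutex_lock : mutex_unlock :: spin_lock : ?".
# Assumes gensim >= 4.0; tokens below are hypothetical, illustrative examples.
from gensim.models import Word2Vec

# Each "sentence" is one abstracted trace: a sequence of tokens such as
# called functions, return-value constraints, and error codes.
abstracted_traces = [
    ["$START", "called(mutex_lock)", "called(kmalloc)",
     "ret_eq(NULL)", "error(ENOMEM)", "called(mutex_unlock)", "$END"],
    ["$START", "called(mutex_lock)", "called(kmalloc)",
     "ret_neq(NULL)", "called(memcpy)", "called(mutex_unlock)", "$END"],
    ["$START", "called(spin_lock)", "called(ioremap)",
     "ret_eq(NULL)", "error(ENOMEM)", "called(spin_unlock)", "$END"],
    ["$START", "called(spin_lock)", "called(ioremap)",
     "ret_neq(NULL)", "called(iowrite32)", "called(spin_unlock)", "$END"],
]

# Train skip-gram embeddings over the toy trace corpus.
model = Word2Vec(
    sentences=abstracted_traces,
    vector_size=64,   # embedding dimension
    window=5,         # context window over each trace
    min_count=1,      # keep every token in this tiny corpus
    sg=1,             # skip-gram rather than CBOW
    epochs=200,
)

# Analogy query: which token relates to spin_lock as mutex_unlock relates to mutex_lock?
answer = model.wv.most_similar(
    positive=["called(mutex_unlock)", "called(spin_lock)"],
    negative=["called(mutex_lock)"],
    topn=1,
)
print(answer)
```

A real evaluation in this style would sweep the parameterization (dimension, window, training algorithm) and score top-1 accuracy over a large set of such analogy queries rather than a single hand-written one.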

Session: Research Papers - Deep Learning (fse-2018-research-papers)
Tue 6 Nov, 13:30 - 15:00 at Horizons 6-9F
Chair(s): David Rosenblum (National University of Singapore)

13:30 - 13:52  Talk: Vincent Hellendoorn (University of California at Davis, USA), Christian Bird (Microsoft Research), Earl T. Barr, Miltiadis Allamanis (Microsoft Research, Cambridge)
13:52 - 14:15  Talk: Gang Zhao, Jeff Huang (Texas A&M University)
14:15 - 14:37  Talk: Jordan Henkel (University of Wisconsin–Madison), Shuvendu K. Lahiri (Microsoft Research), Ben Liblit (University of Wisconsin–Madison), Thomas Reps (University of Wisconsin–Madison and GrammaTech, Inc.)
14:37 - 15:00  Talk: Shiqing Ma (Purdue University, USA), Yingqi Liu (Purdue University, USA), Wen-Chuan Lee (Purdue University), Xiangyu Zhang (Purdue University), Ananth Grama (Purdue University, USA)