Testing and Explaining the Behavior of Code Intelligence Models
When: February 16, 2022, 16:00 - 17:00
Where: Online
Rafiqul Islam Rabin from the SERG Group at the University of Houston will present his work on testing and explaining machine learning models in the field of software engineering. In particular, Rabin investigates semantics-preserving metamorphic transformations of program code to reveal changes in model predictions.
Abstract:
Deep neural networks (DNNs) are increasingly being used in various code intelligence tasks such as code summarization, vulnerability detection, type annotation, and many more. While the performance of neural models for intelligent code analysis continues to improve, our understanding of how reliable these models are remains largely unknown. To reliably use such models, researchers often need to reason about the behavior of the underlying models and the factors that affect them. However, this is very challenging, as these models are opaque black boxes and usually rely on noise-prone data sources for learning. To this end, I will present our recent approaches for testing and explaining the behavior of code intelligence models, such as evaluating the generalizability of models on unseen data, identifying relevant features that models learn from input programs for making predictions, and quantifying the impact of noise in training these models. Our work raises awareness and provides new insights into important issues in training neural models for code intelligence systems that are usually overlooked by software engineering researchers. We further plan to build on these insights to improve the performance of code intelligence models.
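The metamorphic-testing idea mentioned above can be sketched briefly: apply a semantics-preserving transformation (here, variable renaming) to a program and check whether a model's prediction changes. The sketch below is illustrative only; `toy_model` is a hypothetical stand-in for a real code intelligence model, not part of the presented work.

```python
import ast

class RenameVariables(ast.NodeTransformer):
    """Semantics-preserving metamorphic transformation: rename every
    variable in a Python snippet to a fresh name (v0, v1, ...)."""
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        # Assign each distinct identifier a fresh, consistent name.
        if node.id not in self.mapping:
            self.mapping[node.id] = f"v{len(self.mapping)}"
        node.id = self.mapping[node.id]
        return node

def transform(source: str) -> str:
    """Return the source with all variables consistently renamed."""
    tree = ast.parse(source)
    return ast.unparse(RenameVariables().visit(tree))

# Hypothetical stand-in for a code intelligence model: "predicts" a
# label from crude surface features of the program text.
def toy_model(source: str) -> str:
    return "loop" if ("for" in source or "while" in source) else "straight-line"

original = "total = 0\nfor x in items:\n    total = total + x"
renamed = transform(original)

# Metamorphic test: renaming preserves semantics, so the
# prediction should be invariant; a change flags unreliability.
assert toy_model(original) == toy_model(renamed)
```

A real study would replace `toy_model` with a trained neural model and a richer set of transformations (e.g., statement reordering or dead-code insertion) to probe which input features the model actually relies on.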
Related publications: