Modern software systems never stop growing; and they grow fast. To keep up with the highly dynamic and complex society we now live in, companies deliver new features to their software systems virtually every second of the day. After all, the faster a new feature gets to the market, the better.

Technically speaking, this means that software developers add (and modify) a large number of lines of source code every day. In a perfect world, these modifications are made effectively, as the internal structure of the software system is designed with evolution in mind. However, it is easy to imagine that developers who work under time pressure or with unclear requirements may not make optimal decisions.

A series of "bad decisions" can make software simply harder to maintain and evolve. As a consequence, our society has to spend more money to keep evolving the software systems we rely on so much. The Consortium for Information and Software Quality estimates that, in 2018, the US spent around US$2.84 trillion due to poor software quality, with around 18% (or US$510 billion) attributable to technical debt and 21% to challenges in legacy systems.

Software tends to rot over time. Lehman indeed predicted that, without corrective action, software systems tend to increase in complexity and suffer a decline in quality. Software refactoring techniques, i.e., the act of improving the internal structure of a system so that future evolutions are easier and therefore less costly, are thus fundamental. Software refactoring does pay off; a study at Microsoft, for example, showed that developers perceived better maintainability, better readability, and even reduced time to market in codebases that had gone through systematic refactoring. The same developers, however, also observed that refactoring can involve substantial costs and risks.

The goals of this line of research are: