Does parameter tuning improve search-based test data generation?

Categories: post, research paper, software testing
Parameter tuning is expensive … and maybe not worth it?
Author: Gregory M. Kapfhammer

Published: 2013

Introduction

Ever wondered about the intricacies of parameter tuning in search-based test data generation? In a recent research paper (Kotelyanskii and Kapfhammer 2014), I delve into the challenges and outcomes of parameter tuning for a tool called EvoSuite. This tool uses a genetic algorithm to generate a JUnit test suite for a Java class. The paper presents an empirical study that further supports previous research findings: tuning EvoSuite’s parameters with a well-known optimizer called SPOT does not yield configurations significantly better than the defaults. Keep reading to discover the key findings of this intriguing research!
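
To make the setting concrete, below is a small, hypothetical illustration of the kind of JUnit test that a search-based tool like EvoSuite evolves. The BoundedCounter class and the test names are invented for this post, and real EvoSuite output additionally includes scaffolding code for determinism and sandboxing, so treat this as a sketch rather than actual generated output.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A tiny, hypothetical class under test (invented for illustration).
class BoundedCounter {
    private int value;

    void increment() {
        if (value < 10) {   // branch taken while below the bound
            value++;
        }                   // otherwise the counter saturates at 10
    }

    int value() {
        return value;
    }
}

// The style of JUnit test that EvoSuite's genetic algorithm might evolve;
// real generated tests also include scaffolding for determinism and sandboxing.
public class BoundedCounterTest {

    @Test
    public void incrementBelowBoundIncreasesValue() {
        BoundedCounter counter = new BoundedCounter();
        counter.increment();
        assertEquals(1, counter.value());   // exercises the "below bound" branch
    }

    @Test
    public void incrementAtBoundLeavesValueSaturated() {
        BoundedCounter counter = new BoundedCounter();
        for (int i = 0; i < 11; i++) {
            counter.increment();
        }
        assertEquals(10, counter.value());  // exercises the "at bound" branch
    }
}

Because EvoSuite’s search is guided by coverage-based fitness functions, a suite like this one, which exercises both branches of increment, would score well during the genetic algorithm’s evolution.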

Setup

This paper’s experiment involved a random selection of 10 Java projects from the SF100 repository, comprising 475 classes in total. The evaluation metric for these experiments was inverse branch coverage, a lower-is-better measure. To collect enough data points to support a rigorous statistical analysis, we ran EvoSuite for 100 trials with the default configuration and 100 trials with the configuration returned after parameter tuning with SPOT.
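
As a sketch of how such trial data might be analyzed, the following program computes the inverse branch coverage score for a class and compares the two samples of 100 trials with a Mann–Whitney U test from Apache Commons Math (commons-math3). The trial numbers are placeholders and the specific statistical test is an assumption made for illustration; the paper itself documents the exact statistical procedure that we followed.

import java.util.Random;

import org.apache.commons.math3.stat.inference.MannWhitneyUTest;

public class CompareConfigurations {

    // Inverse branch coverage: 0.0 means every branch was covered and
    // 1.0 means no branches were covered, so lower scores are better.
    static double inverseBranchCoverage(int coveredBranches, int totalBranches) {
        if (totalBranches == 0) {
            return 0.0;  // a class without branches is trivially covered
        }
        return 1.0 - ((double) coveredBranches / totalBranches);
    }

    public static void main(String[] args) {
        // Placeholder scores for one class: 100 trials per configuration.
        Random random = new Random(42);
        double[] defaultScores = new double[100];
        double[] tunedScores = new double[100];
        for (int trial = 0; trial < 100; trial++) {
            defaultScores[trial] = inverseBranchCoverage(16 + random.nextInt(5), 24);
            tunedScores[trial] = inverseBranchCoverage(17 + random.nextInt(5), 24);
        }

        // Non-parametric comparison of the two samples; a small p-value would
        // indicate that the tuned configuration behaves differently from the defaults.
        double pValue = new MannWhitneyUTest().mannWhitneyUTest(defaultScores, tunedScores);
        System.out.printf("p-value for default vs. tuned configuration: %.4f%n", pValue);
    }
}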

Findings

The paper presents several key findings:

  • Improvements: The configurations returned by the parameter tuning algorithm performed better on only eleven of the 475 classes.

  • Disparities: Many Java classes in the randomly chosen subset were either “easy” (i.e., all configurations always achieved perfect coverage) or “hard” (i.e., all configurations always achieved no coverage because, in some cases, EvoSuite could not generate any test data); a short sketch of this classification appears after this list.

  • Limitations: The SPOT-derived configuration either performed worse than the defaults or had no statistically significant impact, suggesting the limits of parameter tuning.
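
To make the “easy” versus “hard” distinction concrete, here is a minimal, hypothetical sketch, not code from the paper, that buckets a class by its inverse branch coverage scores across all trials; only classes in the remaining bucket give parameter tuning any room to help.

import java.util.List;

public class ClassDifficulty {

    enum Difficulty { EASY, HARD, AMENABLE }

    // Classify one class by its inverse branch coverage scores (0.0 = perfect
    // coverage, 1.0 = nothing covered) gathered across every trial and configuration.
    static Difficulty classify(List<Double> inverseCoverageScores) {
        boolean alwaysPerfect = inverseCoverageScores.stream().allMatch(score -> score == 0.0);
        boolean neverCovered = inverseCoverageScores.stream().allMatch(score -> score == 1.0);
        if (alwaysPerfect) {
            return Difficulty.EASY;      // every configuration always reached full coverage
        }
        if (neverCovered) {
            return Difficulty.HARD;      // no configuration ever covered a single branch
        }
        return Difficulty.AMENABLE;      // coverage varies, so tuning could matter here
    }

    public static void main(String[] args) {
        System.out.println(classify(List.of(0.0, 0.0, 0.0)));   // EASY
        System.out.println(classify(List.of(1.0, 1.0, 1.0)));   // HARD
        System.out.println(classify(List.of(0.2, 0.4, 0.1)));   // AMENABLE
    }
}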

Conclusion

The research suggests that EvoSuite’s default parameters, which were chosen by experts, are suitable for use in future experimental studies and industrial testing efforts. This negative result highlights the challenges of parameter tuning in search-based test data generation.

Further Details

If you’re interested in diving deeper into this research, I encourage you to read the full paper (Kotelyanskii and Kapfhammer 2014). After you have read the paper, your insights about parameter tuning or search-based test data generation are appreciated! If you have ideas or experiences related to this topic, please contact me. If you want to stay informed about new developments and blog posts related to this research paper, consider subscribing to my mailing list.


References

Kotelyanskii, Anton, and Gregory M. Kapfhammer. 2014. “Parameter Tuning for Search-Based Test-Data Generation Revisited: Support for Previous Results.” In Proceedings of the 14th International Conference on Quality Software.
