Large Language Models in Software Engineering: A Focus on Issue Report Classification and User Acceptance Test Generation
Authors: Gabriele De Vito, Sergio Di Martino, Filomena Ferrucci, and Fabio Palomba
Conference: Ital-IA 2024 - 4th National Conference on Artificial Intelligence
Abstract
In recent years, Large Language Models (LLMs) have emerged as powerful tools capable of understanding and generating natural language text and source code with remarkable proficiency. Leveraging this capability, we are currently investigating the potential of LLMs to streamline software development processes by automating two key tasks: issue report classification and test scenario generation.
For issue report classification, the challenge lies in accurately categorizing and prioritizing incoming bug reports and feature requests. By employing LLMs, we aim to develop models that can efficiently classify issue reports, facilitating prompt response and resolution by software development teams.
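As a purely illustrative sketch (not the implementation used in this work), an LLM can be prompted to map an issue report onto a fixed label set. The example below assumes the OpenAI Python SDK; the model name, labels, and prompt wording are our assumptions.

```python
# Minimal sketch of LLM-based issue report classification.
# Assumes the OpenAI Python SDK (>= 1.0) and an OPENAI_API_KEY in the
# environment; model, label set, and prompt are illustrative choices.
from openai import OpenAI

LABELS = ["bug", "enhancement", "question"]  # hypothetical label set

client = OpenAI()

def classify_issue(title: str, body: str) -> str:
    """Ask the model to assign exactly one label to an issue report."""
    prompt = (
        "Classify the following issue report as exactly one of "
        f"{', '.join(LABELS)}. Reply with the label only.\n\n"
        f"Title: {title}\n\nBody: {body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                # deterministic output for classification
    )
    answer = response.choices[0].message.content.strip().lower()
    # Fall back to a default label if the model strays from the format.
    return answer if answer in LABELS else "question"

# Example: classify_issue("App crashes on startup", "Stack trace attached ...")
```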
Test scenario generation concerns automatically deriving test cases that validate software functionality. In this context, LLMs offer the potential to analyze requirements documents, user stories, or other forms of textual input and automatically generate comprehensive test scenarios, reducing the manual effort required in test case creation.
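Again as an illustrative sketch under the same assumptions (OpenAI Python SDK, illustrative model and prompt), a user story can be turned into acceptance test scenarios; the Gherkin Given/When/Then format is our choice for the example, not necessarily the format adopted in this work.

```python
# Minimal sketch of test scenario generation from a user story.
# Model name, prompt, and Gherkin output format are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def generate_scenarios(user_story: str, n: int = 3) -> str:
    """Ask the model for acceptance test scenarios in Gherkin syntax."""
    prompt = (
        f"Given this user story, write {n} user acceptance test scenarios "
        "in Gherkin (Given/When/Then) syntax, covering the main success "
        "path and at least one edge case.\n\n"
        f"User story: {user_story}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

story = ("As a registered user, I want to reset my password "
         "so that I can regain access to my account.")
print(generate_scenarios(story))
```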
In this paper, we outline our research objectives, methodologies, and anticipated contributions to these topics in the field of software engineering. Through empirical studies and experimentation, we seek to assess the effectiveness and feasibility of integrating LLMs into existing software development workflows. By shedding light on the opportunities and challenges associated with LLMs in software engineering, this paper aims to pave the way for future advancements in this rapidly evolving domain.