Introduction

Welcome to my brand new blog series all about how AI is helping make cars smarter, faster, and safer. We will dive into how AI solves real problems in the automotive world by speeding up how we create and improve software for vehicles. Get ready to explore how this cool tech is making cars better, safer, and even more advanced! (Funnily enough, this introduction was written by AI itself.)

On a more serious note, acknowledging the huge impact AI will have in shaping the future of software development, I developed a small tool intended to close the gap between Software Architecture (ASPICE SWE.2) and Software Integration Testing (ASPICE SWE.5). Specifically, the tool presented in the following article aims to streamline the software development process by helping the Software Test Engineer generate Test Specifications from a given Architectural Description.

Ideally, this tool would be the first in a series aimed at tackling common problems in the embedded software development area.

Product Architecture

Test-Assistant is a tool developed as a VS Code extension, written in TypeScript. Rather than building a standalone tool, I chose a VS Code extension for the convenience of having the SW Architectural Description, the implementation, and the test cases in the same place.

The diagram below shows a high-level overview of Test-Assistant. The workflow starts with defining the SW Architecture, written in the PlantUML language (other modelling languages will, ideally, be supported in future versions). As per the ASPICE standard, one of the consumers of the SW Architectural Description is the SW Test Engineer, who shall use it to create the test specification.

The LLM used for this tool was mainly gpt-4-1106-preview, because of the reproducibility features (most notably the seed parameter) introduced with this model version. Because the tool is designed to be used by several engineers across a project, producing consistent, deterministic outputs is one of the key design challenges. Of course, it is possible to change the LLM, as well as a number of other parameters, from the tool's configuration.

Several prompting techniques were explored, ranging from chain-of-thought to semi-automatic prompt generation, all part of the "Prompt Recipes" and "Prompt Libraries" developed for this tool. The output of the extension is the set of test cases for the analyzed classes, which shall in the end be reviewed and refined by the test engineer.

Proof-of-concept

We start from a well-known design pattern: the Command pattern. The next step is to create a UML class diagram for this architecture, written in PlantUML. Both the visual representation of the pattern and its PlantUML source can be seen below.
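For readers who prefer code to diagrams, here is a minimal C++ sketch of the participants the diagram captures. Receiver::do_something carries the signature from the diagram; the remaining class and method names (Command, ConcreteCommand, Invoker, execute) follow the standard GoF vocabulary and are my shorthand rather than a verbatim copy of the demo architecture.

```cpp
#include <memory>
#include <string>

// Receiver: knows how to perform the actual work.
// Signature taken from the class diagram.
class Receiver {
public:
    virtual ~Receiver() = default;
    virtual void do_something(std::string a) { (void)a; /* real work elided */ }
};

// Command: the common interface every concrete command implements.
class Command {
public:
    virtual ~Command() = default;
    virtual void execute() = 0;
};

// ConcreteCommand: binds a Receiver to a specific action.
class ConcreteCommand : public Command {
public:
    explicit ConcreteCommand(std::shared_ptr<Receiver> receiver)
        : receiver_(std::move(receiver)) {}
    void execute() override { receiver_->do_something("payload"); }

private:
    std::shared_ptr<Receiver> receiver_;
};

// Invoker: triggers commands without knowing anything about their receivers.
class Invoker {
public:
    void set_command(std::shared_ptr<Command> command) { command_ = std::move(command); }
    void invoke() { if (command_) command_->execute(); }

private:
    std::shared_ptr<Command> command_;
};
```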

The first phase of the process is called in-context learning: we give the LLM a few demonstrations of how the selected architecture is intended to be implemented and used. This phase allows the model to comprehend and execute novel tasks specific to our context, without any retraining. You can see the whole process in the GIF below (the in-context learning phase is implemented inside the command palette; the whole process takes about two minutes, so the GIF was trimmed to show only the input-output part).
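To make the idea concrete, a demonstration of the "usage" kind could look like the sketch below, building on the declarations above. This is an illustrative assumption; the actual prompt recipes shipped with the tool are not reproduced here.

```cpp
// One "usage" demonstration handed to the LLM during in-context learning:
// it shows how the classes above are wired together at runtime.
int main() {
    auto receiver = std::make_shared<Receiver>();
    auto command  = std::make_shared<ConcreteCommand>(receiver);

    Invoker invoker;
    invoker.set_command(command);
    invoker.invoke();  // Invoker -> ConcreteCommand::execute() -> Receiver::do_something()
    return 0;
}
```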

Once the first phase is completed, we can proceed to test case generation. The test cases were created using Google Test. For each of the above classes, the assistant created a test fixture and a test case. We want to know whether the dependencies between objects are met, hence the intense usage of EXPECT_CALL and ON_CALL. This helps to validate the static design, for now. Below you can see how this is done with Test-Assistant (again, the GIF was trimmed down for timing reasons).
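The generated tests take roughly the shape below. This is a hand-written approximation assuming the class declarations above, not the verbatim tool output:

```cpp
#include <gmock/gmock.h>
#include <gtest/gtest.h>

// Mock of the Receiver, derived from the signature in the diagram.
class MockReceiver : public Receiver {
public:
    MOCK_METHOD(void, do_something, (std::string a), (override));
};

// Test fixture wiring a ConcreteCommand to the mocked Receiver.
class ConcreteCommandTest : public ::testing::Test {
protected:
    std::shared_ptr<MockReceiver> receiver_ = std::make_shared<MockReceiver>();
    ConcreteCommand command_{receiver_};
};

// Verifies the dependency from the diagram: executing the command
// must delegate exactly once to the Receiver.
TEST_F(ConcreteCommandTest, ExecuteDelegatesToReceiver) {
    EXPECT_CALL(*receiver_, do_something(::testing::_)).Times(1);
    command_.execute();
}
```

As a side note, EXPECT_CALL sets an expectation that is verified when the mock is destroyed, while ON_CALL only defines default behaviour without any verification; for checking that the dependencies of the static design are actually exercised, the former does the heavy lifting.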

Finally, let's see whether the output we got actually compiles. Of course not. Since the tool's only input was the SW Architecture given by the UML class diagram, the test cases were created solely from the function prototypes in that diagram. Hence, for the Receiver class, the diagram defines one of the methods as void do_something(std::string a), while the implementation defines it as void do_something(const std::string& a). Rigorously speaking, this would mean the test case fails, but after a few manual updates to the test cases we got everything compiled, with the output below.
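In this case the manual update boils down to aligning the mocked signature with the one from the implementation, roughly:

```cpp
// Generated from the diagram - does not match (and thus cannot override)
// the method as actually implemented:
// MOCK_METHOD(void, do_something, (std::string a), (override));

// Manually aligned with the implementation:
MOCK_METHOD(void, do_something, (const std::string& a), (override));
```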

It is important to note that Test-Assistant is completely implementation agnostic. We never referred to the actual implementation of those classes, but solely to their SW Architectural Specifications. In this way we achieve the decoupling between SWE.5 and SWE.4 that is so desirable in the real world. Of course, for this to happen in practice the architecture needs to be fully consistent with the implementation, which is not always the case.

Product roadmap

There are a few features I have in mind for this tool, ranging from support for C and AUTOSAR to improved user experience and ASPICE compliance. Whether any of these features will become reality, we'll see; in any case, I want to thank you for reading this far 😀.