Artificial intelligence (AI) is increasingly being employed to support processes across the software development life cycle, aiding engineers in creating systems that are efficient and reliable. While AI is not a silver bullet for engineering trustworthy systems, it offers valuable tools and techniques that can enhance several aspects of software development. One example is providing explainability during the development process using large language models (LLMs) and other AI techniques. These tools help developers understand complex system behaviours, clarify the implications of changes, and offer insights into code functionality, enabling better decision-making. By automating complex or repetitive tasks, AI helps improve efficiency and reduce human error, supporting engineers in their goal of building high-quality software.
Generative AI has achieved significant advances; however, concerns about the accuracy and reliability of its outputs persist. Inaccuracies can lead to serious consequences, including erroneous decision-making, the dissemination of false information, privacy breaches, legal liabilities, and other adverse effects. Despite ongoing efforts to mitigate these risks through explainable AI and regulatory practices, such as transparency, privacy protection, bias mitigation, and social and environmental responsibility, misinformation generated by AI remains a challenge.
Key research areas focusing on advanced AI toward reliability and explainability in engineering trustworthy systems include:
- Automating Software Model Analysis: AI techniques are used to identify potential inconsistencies or suggest optimizations, which improves the accuracy of designs and enhances system reliability.
- Improving Model Accuracy with Reliable and Explainable AI: Developing more sophisticated algorithms and training techniques to enhance the accuracy and reliability of generative AI output, and advancing existing methods to make AI systems more interpretable and transparent, enabling users to understand and trust AI-generated content.
- AI-powered Verification and Verified AI: AI and ML can enhance existing verification processes by automating tasks such as reading user specifications and converting them into verification code, reducing manual effort and increasing efficiency. Conversely, formal verification techniques are applied to (1) ensure the correctness and dependability of AI systems, and (2) establish and evaluate the accuracy and reliability of generative AI output.
- Classifying and Organizing Software Artifacts: AI helps in systematically categorizing and managing software artifacts, ensuring more efficient retrieval and better traceability throughout the development process.
- Applying AI to Code Analysis: AI-driven tools assist in detecting bugs, identifying security vulnerabilities, and highlighting code smells. This leads to the development of more robust and secure systems.
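To make the code-analysis bullet above concrete, the sketch below shows a minimal rule-based check of the kind that AI-driven analysis tools automate and extend. It is an illustration only, not a tool from this group: the function names and the parameter-count threshold are assumptions, and real AI-assisted analyzers learn far richer patterns than these two hand-written rules.

```python
import ast

# Illustrative sketch: flag two common code smells with Python's ast module.
# The threshold below is an assumed value chosen for the example.
SMELL_MAX_PARAMS = 4

def find_smells(source: str) -> list[str]:
    """Return human-readable descriptions of simple code smells."""
    smells = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Smell 1: overly long parameter list.
            if len(node.args.args) > SMELL_MAX_PARAMS:
                smells.append(f"{node.name}: too many parameters "
                              f"({len(node.args.args)})")
        elif isinstance(node, ast.ExceptHandler) and node.type is None:
            # Smell 2: bare `except` silently swallows all errors.
            smells.append(f"line {node.lineno}: bare 'except' hides errors")
    return smells

sample = """
def configure(a, b, c, d, e, f):
    try:
        return a + b
    except:
        pass
"""
print(find_smells(sample))
```

Checks like these give deterministic, explainable findings; learning-based tools complement them by catching bugs and vulnerabilities that no fixed rule anticipates.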
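The verification bullet above can likewise be illustrated in miniature: an implementation is checked against a machine-readable specification. The sketch below uses bounded exhaustive checking as a lightweight stand-in for full formal verification (which, e.g. with an SMT solver, would cover unbounded domains); the `clamp` example, its spec, and the domain bounds are all assumptions made for illustration.

```python
from itertools import product

def clamp(x: int, lo: int, hi: int) -> int:
    """Implementation under verification (illustrative example)."""
    return max(lo, min(hi, x))

def spec(x: int, lo: int, hi: int, result: int) -> bool:
    """Machine-readable specification: the result lies in [lo, hi],
    and equals x whenever x is already in range."""
    in_range = lo <= result <= hi
    faithful = (result == x) if lo <= x <= hi else True
    return in_range and faithful

# Bounded exhaustive check: every input combination with lo <= hi.
domain = range(-5, 6)
violations = [(x, lo, hi)
              for x, lo, hi in product(domain, domain, domain)
              if lo <= hi and not spec(x, lo, hi, clamp(x, lo, hi))]
print("violations:", violations)  # an empty list means the spec holds
```

Writing the specification as an executable predicate is the key step: the same predicate can later be handed to a formal tool, while AI assistance can help translate informal user requirements into such predicates in the first place.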
These advanced AI-supported processes do not replace human expertise but serve as valuable tools that augment the capabilities of software engineers. By leveraging AI thoughtfully, it is possible to address challenges in the development of trustworthy systems while maintaining realistic expectations about its limitations and areas of applicability.
We address real-world challenges by engaging in both academic and industrial collaborations at the national and international levels. Please contact us if you are interested in building a partnership or collaborating on research.
Contact
- Eun-Young Kang, Software Engineering, eyk@mmmi.sdu.dk, +45 65507967
- Qusai Ramadan, Center for Industrial Software (CIS), qura@mmmi.sdu.dk, +45 65503719