
Responsible AI

Illustration of responsible AI directions

Privacy

Federated Machine Learning

Privacy is a primary aspect of Responsible AI, especially when handling sensitive data. Our work in Federated Learning (FL) focuses on enabling collaborative training of AI systems across decentralized datasets without compromising user privacy. For example, hospitals can collaboratively train early disease detection systems on medical data without moving patient records out of their institutions. FL ensures that sensitive patient information remains private while still enabling AI systems to learn from diverse data sources, in compliance with data regulations such as the GDPR and the EU’s AI Act, which imposes strict rules for data processing and privacy protection in high-risk AI applications.
Further, to address potential privacy leakage from attacks on FL systems, we integrate privacy-preserving mechanisms such as Differential Privacy (DP), Homomorphic Encryption (HE), and Secure Multi-Party Computation (SMPC) into the FL framework. These methods offer robust safeguards, ensuring that sensitive information remains protected throughout the training process, even in adversarial scenarios.
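As a toy illustration (not our production framework), the core FL idea can be sketched in a few lines of Python: each client updates a copy of the model on its own private data, and only the resulting model parameters, never the raw records, are sent back and averaged. The optional Gaussian noise on each update hints at how DP-style protection can be layered on top; the one-parameter "model" and the client data below are purely hypothetical.

```python
import random

def local_update(weights, data, lr=0.1):
    """One round of local gradient steps on a client's private data.
    Toy model: a single weight w minimising 0.5 * (w - x)^2 per sample."""
    w = weights
    for x in data:
        w -= lr * (w - x)  # gradient step toward the local data
    return w

def federated_round(global_w, client_datasets, noise_std=0.0):
    """FedAvg-style round: clients train locally; only model updates
    (never raw data) are returned and averaged. Gaussian noise on each
    update gives a rough differential-privacy flavour."""
    updates = []
    for data in client_datasets:
        w = local_update(global_w, data)
        updates.append(w + random.gauss(0, noise_std))
    return sum(updates) / len(updates)

# Three hypothetical "hospitals", each with readings that never leave the site.
clients = [[1.0, 1.2], [0.8, 1.1], [1.3, 0.9]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # converges near the pooled mean without pooling the data
```

The point of the sketch is the communication pattern: the server only ever sees averaged parameter updates, which is what makes the hospital scenario above workable.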

Machine Unlearning

Machine Unlearning empowers users to retain control over their data by enabling AI systems to forget specific information upon request. This is crucial for complying with privacy regulations such as the GDPR's ‘right to be forgotten’, which gives individuals, for example patients, the right to have their personal data erased. Our work in Machine Unlearning focuses on developing efficient algorithms for selective data removal from trained models, ensuring privacy and regulatory compliance while preserving system performance. This aligns with the EU’s AI Act, which includes provisions for data removal and for ensuring that AI models respect individuals' rights to privacy.
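A minimal sketch of the idea, under a deliberately simplistic assumption: if a model is a function of sufficient statistics of its training data (here, a hypothetical "model" whose only parameter is the data mean), a record's contribution can be subtracted exactly, yielding the same model as retraining from scratch on the remaining data. Real unlearning research targets models where no such shortcut exists, which is what makes efficient algorithms necessary.

```python
class MeanModel:
    """Toy 'model' whose single parameter is the mean of its training data.
    It admits exact unlearning: removing a record from the sufficient
    statistics equals retraining from scratch on the retained data."""

    def __init__(self):
        self.total, self.n = 0.0, 0

    def fit(self, xs):
        for x in xs:
            self.total += x
            self.n += 1
        return self

    def forget(self, x):
        # Exact unlearning: subtract the record's contribution
        # instead of retraining on everything that remains.
        self.total -= x
        self.n -= 1

    def predict(self):
        return self.total / self.n

data = [2.0, 4.0, 6.0]
m = MeanModel().fit(data)
m.forget(4.0)                             # a user requests deletion
retrained = MeanModel().fit([2.0, 6.0])   # gold standard: retrain without it
print(m.predict(), retrained.predict())   # identical: forgetting == retraining
```

Retraining from scratch is the correctness benchmark against which approximate unlearning methods are judged; the research challenge is matching it at a fraction of the cost.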

Fairness-Conscious AI


Our work ensures that AI systems do not discriminate against individuals or groups based on sensitive attributes like race, gender, or socio-economic status. For example, in healthcare, AI systems used for diagnosing diseases or deciding treatment plans must ensure that their decisions are not biased against specific demographic groups. If the AI system is trained primarily on data from one demographic (e.g., predominantly male patients), it may perform poorly for other groups (e.g., female patients or underrepresented groups). By adopting fairness-conscious practices, we aim to eliminate discrimination and build trust in AI systems, ensuring that their benefits are distributed equitably across society. These efforts are in line with the EU’s AI Act, which advocates for fairness in AI systems, especially those used in high-risk sectors such as healthcare.
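One simple, standard way to make such bias measurable is a group-fairness metric. The sketch below (with hypothetical predictions and group labels) computes the demographic parity gap: the difference in positive-prediction rates between groups. A large gap is a warning sign of the kind of demographic skew described above; it is one diagnostic among several, not a complete fairness audit.

```python
def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction
    rate across groups. 0 means all groups are flagged at equal rates."""
    counts = {}
    for pred, group in zip(predictions, groups):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + pred, total + 1)
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical screening model: 1 = flagged for follow-up examination.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(gap)  # group A flagged at 3/4, group B at 1/4 -> gap of 0.5
```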

Transparency and Interpretability

We are committed to ensuring that AI systems are both transparent and interpretable. This means making the inner workings of the AI system clear—how it is built, what data it uses, and how it makes decisions. By providing clear and accessible explanations of AI model behavior, we build trust and empower users to engage confidently with these technologies. For example, in healthcare, AI systems used for diagnosing medical conditions, such as detecting tumors or polyps in medical images, can not only provide visual explanations like heatmaps or saliency maps to show the areas influencing the diagnosis, but also disclose key details such as the datasets used for training, the algorithms applied, and how the model was validated. This combination of transparency and interpretability ensures that both doctors and patients can trust the system, understand the reasoning behind its decisions, and make more informed, collaborative decisions in critical healthcare scenarios. These efforts contribute to the EU’s AI Act's emphasis on transparency and ensuring that high-risk AI applications are understandable and explainable to users.
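The heatmaps mentioned above can be produced in many ways; one of the simplest is occlusion sensitivity, sketched below with a hypothetical toy "detector" in place of a real imaging model. Each pixel is masked in turn, and the drop in the model's score marks how much that region influenced the prediction, which is exactly the kind of visual explanation a clinician can inspect.

```python
def occlusion_saliency(model, image, baseline=0.0):
    """Occlusion sensitivity: mask one pixel at a time and record how much
    the model's score drops. Large drops mark influential image regions."""
    base_score = model(image)
    heatmap = []
    for i, row in enumerate(image):
        heat_row = []
        for j, _ in enumerate(row):
            occluded = [r[:] for r in image]   # copy, then mask one pixel
            occluded[i][j] = baseline
            heat_row.append(base_score - model(occluded))
        heatmap.append(heat_row)
    return heatmap

# Hypothetical detector that only scores the bright centre pixel.
model = lambda img: img[1][1]
image = [[0.1, 0.1, 0.1],
         [0.1, 0.9, 0.1],
         [0.1, 0.1, 0.1]]
heatmap = occlusion_saliency(model, image)
# Only the centre pixel drives the score, so the heatmap lights up there alone.
```

Because occlusion treats the model as a black box, the same procedure applies regardless of architecture, which is useful when the goal is an explanation a non-developer can trust.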

 

Contact

Vinay Chakravarthi Gogineni
SDU Applied AI and Data Science
vigo@mmmi.sdu.dk




Research projects

See our research projects at SDU Applied AI and Data Science


Last Updated 13.03.2025