Designing Trustworthy AI in Higher Education
Abstract:
Applying Artificial Intelligence (AI)-based systems and tools in the context of
higher education poses many challenges with respect to data privacy and ethics.
For example, the EU AI Act, adopted in March 2024, classifies many AI
systems used in education as high-risk AI systems. High-risk AI systems must meet
a strict set of requirements before they can be used in practice. Beyond these legal
obligations, the trustworthy use of AI systems is not yet widespread. Approaches
already exist for assessing the trustworthiness of AI systems, intended to ensure
that such systems comply with existing guidelines for ethical AI. In this chapter, we
review available design approaches for building trustworthy AI systems and
evaluate their applicability in the context of higher education. Using the real-life
use case of developing an AI-based analysis system for e-portfolios of students in
introductory computing courses at a university, we further detail the existing design
approaches and adapt them to the specific context of higher education.
Furthermore, we assess the trustworthiness of the developed AI-based analysis
system using the OECD Framework for the Classification of AI Systems. Based on these
findings, we recommend a scenario-based design process that helps
build trustworthy AI-based systems in higher education.
Published:
AI - Ethical and Legal Challenges, IntechOpen, Online First (to appear in Elmer Dadios (ed.), AI - Ethical and Legal Challenges [Working Title]).