About EduTrust AI

Artificial Intelligence (AI) in Education: Layers of Trust

Trustworthy AI in Education is an under-researched area. In discussions of AI in society, academics focus on responsibility, accountability, and trust (e.g., in AI systems and ethics), while the political focus is on social acceptance; trust itself is a complex social phenomenon. Research on the trustworthy use of AI in the educational sector therefore needs to take both perspectives. It is not enough to have transparent, interpretable, and FAIR AI systems; they also need to be trusted by stakeholders and accepted by society at large.

The trustworthiness of AI in education involves a multifaceted interplay between social, cultural, and technical aspects of AI, such as reliability, transparency, explainability, fairness, and accountability, and the intricate socio-technical dynamics among diverse stakeholder groups. Trust thus lies within the complex web of interactions between human and machine actors, various entities, and the regulatory system that together comprise the ecosystem of the educational sector.

Artificial Intelligence in Education: Layers of Trust (EduTrust AI) aims to deepen our understanding of the use of AI in education by:

1) identifying layers of trust associated with the use of AI in the educational sector, taking into account the complex accountability relationships (conceptual framework),

2) developing guidelines (educational, technological, and regulatory) for making the application of AI in education adequately transparent, interpretable, and accountable for various stakeholders,

3) developing targeted materials and tools for stakeholder groups for building and supporting trust in the use of AI in the educational sector, and

4) translating insights about legal, psychological and socio-cultural determinants of trust into legal requirements for the educational sector.


Project period:  November 2023 – October 2027

Project leader: Professor Barbara Wasson

Funded by: the Trond Mohn Research Foundation (TMS2023TMT03)


EduTrust AI is an interdisciplinary collaboration between the Centre for the Science of Learning & Technology (SLATE) and the Faculty of Law at the University of Bergen. EduTrust AI contributes scientific value by creating new knowledge, methods, guidelines (educational, technological, and regulatory), and tools, and provides input to a practicable framework addressing the challenging questions around the use of student data and AI systems in education. This is relevant for the fields of law, information and computer science, learning sciences, and the social sciences.

Trustworthy AI Synergy

EduTrust AI is one of three projects included in Trustworthy AI Synergy (TAIS), funded by the Trond Mohn Research Foundation (TMS).