
Goals
Raising awareness of the importance of Trustworthy AI
Primary objective
To deepen understanding of rapid technological development and its related challenges (such as competence, reliability, and privacy in the use of AI in education), the primary objective is:
To develop research excellence in the area of Trustworthy AI in Education, and provide a framework, multi-disciplinary insights, materials, and tools for building trust in the use of AI in the educational sector

Secondary objectives
1) Map stakeholder motives and interests in the educational ecosystem, map accountability relationships within that ecosystem, and develop a conceptual framework of the layers of trust for the responsible and trustworthy use of AI in education.
2) Explore and analyse relevant EU/EEA (GDPR, AI Act) and national legal frameworks (e.g., opplæringsloven and universitets- og høyskoleloven) that regulate the processing of personal data and AI, with a view to verifying whether these frameworks safeguard trustworthy AI in education, and, if necessary, propose amendments at both EU/EEA and national level, as called for by Personvernkommisjonen (the Norwegian Privacy Commission).
3) Analyse a variety of AI systems in education against the ethical guidelines for trustworthy AI (lawful, ethical, and robust) to identify the key requirements and competencies that should be addressed in building trust between the stakeholders identified in the conceptual framework.
4) Develop a repository of communication processes, guidelines, and tools (e.g., games) to address trust in AI in education for multiple stakeholders (parents, students, teachers, privacy officers, EdTech companies, etc.), thus increasing their knowledge of responsible use of AI in education.
5) Contribute to national and European work [1] on legal guidelines for AI and education.
6) Identify competence needs and new educational and training offerings for a variety of stakeholders on responsible and trustworthy use of AI in Education.
[1] The Norwegian education and higher education laws, and the Council of Europe's ongoing work on binding legal guidelines for AI and education.
Thus, through an interdisciplinary collaboration between the Centre for the Science of Learning & Technology (SLATE), the Faculty of Psychology, and the Faculty of Law at the University of Bergen, EduTrust AI contributes scientific value. The project does this by creating new knowledge, methods, guidelines (educational, technological, and regulatory), and tools, and by providing input to a practicable framework for the challenging questions around the use of student data and AI systems in education. This is relevant for the fields of law, information and computer science, the learning sciences, and the social sciences.