
Problem Statement

With the rise of GenAI, students can now complete complex academic tasks with substantial technological assistance. In many cases, this assistance can reduce students’ direct engagement with learning materials and limit their understanding of underlying concepts. As a result, it is becoming increasingly difficult to determine whether students have genuinely achieved the intended learning outcomes or have relied heavily on AI tools to produce their work.

Traditionally, assignments were evaluated primarily on the quality of the submitted work. In the current landscape, where the proportion of student versus AI contribution is harder to discern, oral examinations are emerging as one of the most reliable methods for assessing a student’s own understanding of their work, particularly for seminar papers, bachelor’s theses, and master’s theses.

However, oral examinations are among the most resource-intensive assessment methods, requiring significant time and effort from professors, lecturers, and teaching assistants. This creates a need for innovative approaches that make oral examinations scalable and effective while preserving their rigor in verifying student comprehension.


Proposed Solution

AI Examiner is a research and development project focused on building a digital, voice- and chat-based examination system. By combining interactive natural language processing, real-time evaluation, and human-in-the-loop controls, the system aims to provide scalable, auditable, and rigorous assessments for both class examinations and individual student work (e.g., theses).
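As a rough illustration of how the human-in-the-loop and auditability requirements might be expressed in code, the sketch below models an examination session in which every question-and-answer turn receives an automated score, low-confidence turns are escalated to a human examiner, and the full transcript is retained for audit. All names here (ExamSession, review_threshold, etc.) are illustrative assumptions, not part of the project's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class ExamTurn:
    """One question-and-answer exchange in an oral examination session."""
    question: str
    answer: str
    score: float          # automated score in [0, 1] (hypothetical scale)
    needs_review: bool    # True if escalated to a human examiner

@dataclass
class ExamSession:
    """Records all turns of a session so the assessment remains auditable."""
    student_id: str
    review_threshold: float = 0.6  # assumed cutoff for human review
    turns: list = field(default_factory=list)

    def record(self, question: str, answer: str, score: float) -> ExamTurn:
        # Human-in-the-loop control: any turn scored below the
        # threshold is flagged for manual review rather than being
        # accepted automatically.
        turn = ExamTurn(question, answer, score,
                        needs_review=score < self.review_threshold)
        self.turns.append(turn)
        return turn

    def audit_log(self) -> list:
        # Every turn is kept verbatim, so a human can later inspect
        # exactly what was asked, answered, and scored.
        return [(t.question, t.answer, t.score, t.needs_review)
                for t in self.turns]

session = ExamSession(student_id="demo-student")
session.record("Explain your thesis methodology.", "We used ...", 0.85)
session.record("Why did you choose that sample size?", "Because ...", 0.40)
flagged = [t for t in session.turns if t.needs_review]
```

Under these assumptions, the second turn falls below the threshold and would be routed to a human examiner, while the complete log supports after-the-fact auditing of both automated and manual decisions.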

The project will explore multiple versions of the system, progressively testing increased automation levels while maintaining human oversight and interpretability.


System Architecture and Features


Evaluation Opportunities