Multimodal Learning Analytics


Funded by: AT&T Foundation, Lemann Foundation, NSF
Year: 2009-2014
Description: 

Using advanced sensing and artificial intelligence technologies, we are investigating new ways to assess project-based activities, examining students' speech, gestures, sketches, and artifacts in order to better characterize their learning over extended periods of time.

Politicians, educators, business leaders, and researchers broadly agree that we need to redesign schools to teach "21st century skills": creativity, innovation, critical thinking, problem solving, communication, and collaboration. None of these skills is easily measured with current assessment techniques, such as multiple-choice tests or even portfolios. As a result, schools are caught between the push to teach new skills and the lack of reliable ways to assess those skills or provide students with formative feedback. One difficulty is that current assessment instruments are based on products (an exam, a project, a portfolio) rather than processes (the actual cognitive and intellectual development that unfolds while performing a learning activity), because capturing detailed process data for large numbers of students has been intrinsically difficult.

New sensing and data-mining technologies, however, could make it possible to capture and analyze massive amounts of process data from classroom activities. We are conducting research on the use of biosensing, signal and image processing, text mining, and machine learning to explore multimodal, process-based student assessment.
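To make the idea of multimodal process assessment concrete, here is a minimal sketch of early feature fusion over time windows of classroom activity. Everything in it is an illustrative assumption rather than the project's actual pipeline: the per-modality feature sets, the activity labels, and the early-fusion-plus-random-forest design are all placeholders standing in for whatever sensing and modeling choices the research settles on.

```python
# Minimal sketch of multimodal feature fusion for process-based assessment.
# Feature names, dimensions, and labels are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows = 200  # time windows of classroom activity (synthetic)

# Per-modality features extracted over each window (all synthetic):
speech = rng.normal(size=(n_windows, 8))   # e.g., speaking time, turn-taking stats
gesture = rng.normal(size=(n_windows, 6))  # e.g., hand-motion energy, pointing counts
sketch = rng.normal(size=(n_windows, 4))   # e.g., stroke count, drawing pace

# Early fusion: concatenate modality features into one vector per window.
X = np.hstack([speech, gesture, sketch])

# Hypothetical expert labels of the process state in each window
# (0 = off-task, 1 = exploring, 2 = refining).
y = rng.integers(0, 3, size=n_windows)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Early fusion (concatenating features before classification) is only one design point; a real pipeline might instead model each modality separately and combine predictions, especially when modalities arrive at different sampling rates.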