Fusing theory-guided machine learning and bio-sensing: considering time in how children learn science from dynamic multimedia
Jason C. Coronel et al.
Abstract
A new era of message processing research will emerge from the convergence of powerful machine learning algorithms with dynamic data from everyday devices equipped with biological sensors. Our study takes critical steps into this era by integrating theory-guided artificial neural networks with eye movements to understand how people learn science concepts from dynamic multimedia. Essential to our theory-guided machine learning approach is a cognitive conceptualization of time as the dynamic interdependence between past and new information that guides how multimedia is attended to and understood. We tracked the eye movements of 197 children as they watched an educational video. We trained two neural network architectures differing in theory guidance to predict learning outcomes using eye movements. The theory-guided architecture, which considered the temporal interdependence of information, yielded more accurate out-of-sample predictions. Our work advances the use of theory-guided machine learning and the development of systems that monitor real-time learning.
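The abstract's key claim is that an architecture respecting the temporal interdependence of gaze data predicts learning better than one that ignores it. The paper does not specify the architectures in the abstract, so the following is only an illustrative sketch under assumed details: a baseline model that pools gaze features over time (discarding order) versus a simple recurrent net whose hidden state carries past information into the interpretation of each new fixation. All feature names, dimensions, and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gaze-feature sequence: T time steps x D features
# (e.g., fixation duration, saccade amplitude). Values are illustrative.
T, D, H = 50, 4, 8
gaze = rng.standard_normal((T, D))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Theory-agnostic baseline: an MLP over time-averaged features.
# Averaging discards the ordering of past and new information.
W1 = rng.standard_normal((D, H)) * 0.1
W2 = rng.standard_normal(H) * 0.1

def mlp_predict(x_seq):
    pooled = x_seq.mean(axis=0)          # temporal order is lost here
    return float(sigmoid(np.tanh(pooled @ W1) @ W2))

# Theory-guided sketch: a simple recurrent net whose hidden state
# carries past information forward, so each new gaze sample is
# interpreted in the context of what came before.
Wx = rng.standard_normal((D, H)) * 0.1
Wh = rng.standard_normal((H, H)) * 0.1
Wo = rng.standard_normal(H) * 0.1

def rnn_predict(x_seq):
    h = np.zeros(H)
    for x_t in x_seq:                    # past state shapes each new step
        h = np.tanh(x_t @ Wx + h @ Wh)
    return float(sigmoid(h @ Wo))

print(mlp_predict(gaze), rnn_predict(gaze))
```

A quick way to see the difference: reversing the sequence leaves the pooled baseline's prediction unchanged, while the recurrent model's prediction changes, because only the latter is sensitive to the order in which information arrives.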