This post introduces the Multimodal Learning Analytics (MMLA) research field. It is mainly targeted at Master’s or Ph.D. students who are just beginning their research journey in MMLA. The post provides a preliminary understanding of the field to start with. It also offers references to some good-to-read articles and shares information about useful resources and well-known journal/conference venues in the field. In addition, it offers a brief introduction to Learning Analytics (LA), which I think is essential before diving deep into the discussion of MMLA.
The proliferation of Massive Open Online Courses (MOOCs), online learning, and Learning Management Systems (LMS) (e.g. Moodle) has generated an enormous amount of data about students’ learning activities (when a student interacts with technological resources, their interactions are traced in the form of logs). This data presented an opportunity for researchers to analyze it in order to improve learning and teaching experiences. As a result, the Educational Data Mining (EDM) and Learning Analytics (LA) research fields emerged. Both fields focus on analyzing learning data with the “aim to benefit learners as well as informing and enhancing the learning sciences”
(Baker & Siemens, 2014). Though these research areas share commonalities, there are differences between these two research fields
(Baker & Siemens, 2014). Siemens & Baker (2012) presented a comparison of EDM and LA research communities on multiple aspects (e.g. mode of discovery, techniques, and methods). I recommend reading the article Learning Analytics and Educational Data Mining: Towards Communication and Collaboration to have a better understanding of differences between EDM and LA.
The definition of EDM by the International Educational Data Mining Society is as follows:
Educational Data Mining is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from educational settings and using those methods to better understand students, and the settings which they learn in.
The Society for Learning Analytics Research (SoLAR) defines LA as follows (this definition was given by SoLAR in the call for papers of the 1st International Conference on Learning Analytics and Knowledge, LAK’11, link):
“the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs”
From my perspective, the major difference between the two definitions is the purpose of analyzing the data. EDM is primarily interested in understanding the learning data and gaining insights from it, while LA mainly focuses on optimizing the learning experience and the context in which it occurs.
Now that we have a preliminary understanding, let’s go further into LA with the help of an example. Let’s say we have a dataset of students’ activities in an LMS. Here, I am using the Students’ Academic Performance Dataset
(Amrieh et al., 2015, 2016), which contains attributes regarding students’ demographics and classroom activities along with their grades (Low, Medium, High). A box plot relating visitedResource (how many times students visited online resources) to class grades (L, M, H) is shown below (taken from here). As can be seen, students with Low grades visited the resources less than High and Medium grade students did. This is just one of the insights from the data. A prediction model built on this data could be used to predict the performance of next year’s students. A Python notebook available here shows a detailed analysis of the data, e.g. building prediction models.
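To make the idea concrete, here is a minimal Python sketch of the comparison behind that box plot: grouping resource-visit counts by grade class and comparing medians. The numbers are toy values made up for illustration, not the actual dataset.

```python
import statistics

# Toy records of (visited_resources, class_grade) — made-up values,
# not the real Students' Academic Performance Dataset.
records = [
    (8, "L"), (10, "L"), (15, "L"),
    (40, "M"), (48, "M"), (55, "M"),
    (80, "H"), (85, "H"), (90, "H"),
]

# Group visit counts by grade class.
by_class = {}
for visits, grade in records:
    by_class.setdefault(grade, []).append(visits)

# Median visits per class: Low-grade students visit resources
# far less often than Medium- and High-grade students.
medians = {grade: statistics.median(v) for grade, v in by_class.items()}
print(medians)
```

A real analysis would draw the actual box plot and go on to fit a classifier on all attributes, as the linked notebook does.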
Needs attention: If you are from a technical background, you might think LA is just data analytics with learning data (it happened to me). Well, there is a difference, and for this I would like to quote from
(Baker & Siemens, 2014)
The theory-oriented perspective marks a departure of EDM and LA from technical approaches that use data as their sole guiding point.
The point is that data analytics discovers insights from the data, whereas LA, in addition to finding insights, also connects those insights to theory from the learning sciences.
Now, moving a bit towards MMLA, let’s discuss some of the limitations of current research in LA. The majority of research in LA primarily analyzes the digital traces that students leave while interacting with digital resources. Though these interactions offer in-depth insight into a student’s learning trajectory, they only cover a part of learning. A study
(Pardo & Delgado-Kloos, 2011) reported that a significant portion of students’ activities happens outside the LMS. This raises concerns about relying solely on trace-based analysis results. Another important issue is that not all learning activities happen on computers (e.g. traditional classroom settings, group discussions, teacher-student interaction, etc.). Comparatively, there are fewer studies focusing on learning contexts without a computer (e.g. face-to-face settings). This bias is perceived as introducing a streetlight effect
(Ochoa, 2016a). The effect takes its name from a joke, which goes as follows: an old man lost his keys in the garden during the night. He went over to the streetlight and started searching there. A policeman passing by stopped when he saw the old man.
Policeman: What are you searching for? Old man: My keys. Policeman: Where did you lose them? Old man: In the garden. Policeman: Then why are you searching here? Old man: Because the light is better here.
It seems the same for LA studies that are heavily based on digital traces. Are we investigating digital traces because we want to understand learning that occurs through technology, or because it is easy to access the digital traces resulting from technological interactions?
Street Light Effect
Researchers tend to look for answers where the looking is good, rather than where the answers are likely to be hiding (Freedman, 2010).
To conclude this section, I would say LA has great potential to offer actionable insights to numerous stakeholders (e.g. teachers, administrators, policy-makers). It offers a new perspective on learning. Having said that, it should not rely completely on digital-trace-based analysis methods. If we look at the definition, it says “.. collection, analysis, and reporting of data about learners and their contexts ..”. This data could be about learners’ activities in the learning platform, their speech during discussion, their hand-written notes, the movement of objects while learning by doing, and the emotions felt while learning. In contrast, most LA studies utilize only a small subset of this data (e.g. digital traces).
Multimodal Learning Analytics
The limitations of digital-trace-based analysis methods pushed researchers to extend the capabilities of LA by utilizing a variety of learning data collected from learners and the learning context. The availability of affordable sensors and efficient computational technology allowed researchers to take on this challenge. As a result, MMLA emerged as a potential solution that allows researchers to go beyond technology-based/mediated learning and investigate learning in real-world settings.
Before embarking on our adventure of exploring MMLA, let’s get an understanding of a very basic yet tricky question: what is modality? It is a long-standing concept in the field of Human-Computer Interaction (HCI). In simple terms, it refers to the ways we perceive or experience the world. For example, humans have five senses, each representing a modality: touch, vision, hearing, taste, and smell.
Let’s have a look at some definitions of modality.
(Bordegoni et al., 1997) define modality as
Modality refers to a particular way or mechanism of encoding information for presentation to humans or machines in a physically realized form.
(Bernsen, 1997) argued, on the basis of the above definition, that modality here means representational modality, not sensory modality. He based his claim on how the term modality has traditionally been used in philosophy.
(Nigay & Coutaz, 1993) stated that the term modal may cover both modality and mode.
Modality refers to the type of communication channel used to convey or acquire information. It also covers the way an idea is expressed or perceived, or the manner as the action is performed.
Mode refers to a state that determines the way information is interpreted to extract or convey meaning.
I am taking an example from
(Nigay & Coutaz, 1993) to further explain the definition. A user is interacting with a system that presents information using text and video. According to the modality definition, this system is multimodal because it contains different types of data, e.g. text, visuals, and audio. As per the
(Nigay & Coutaz, 1993) definition, modality defines the type of data exchanged. Now, if we look from the user’s perspective, in order to understand the information from the system, the user needs to use multiple senses (e.g. sight, hearing). Here, mode defines the way meaning is interpreted, and therefore the system is multimodal from this perspective too. These two perspectives on modality are also supported by
(Baber, 2001). He offered two definitions of a multimodal system: a human-centered definition, which involves more than one sensory modality (from the five human senses) and response modality (e.g. gesture), and a technology-centered definition, which “is based on the fact that computer systems can present information using different modes, such as visual display, sound, etc., and can receive information from different interaction devices, e.g. speech, keyboard, etc.”
The above-discussed perspectives on modality helped me to understand this concept better. I hope they do the same for you as well.
Now that we have an initial idea about modality, let’s discuss MMLA (Multimodal Learning Analytics) and its advantages.
Multimodal learning analytics (MMLA) sits at the intersection of three ideas: multimodal teaching and learning, multimodal data, and computer-supported analysis. At its essence, MMLA utilizes and triangulates among non-traditional as well as traditional forms of data in order to characterize or model student learning in complex learning environments
The first part is multimodal teaching and learning. This is the way we actually teach and learn. Think about a traditional classroom: the teacher usually uses a blackboard/presentation, voice, and gestures to deliver the lecture to students. This also applies to learning that occurs through technology: students read texts, watch videos, and engage in simulations. So the learning part is multimodal as well. The collection of multimodal data about the aforementioned activities (e.g. gesture) allows us to have a holistic view of learning and teaching. This data is then used to build computational models for prediction or to generate visualizations (for more details on the types of data and analysis used in current MMLA research, read Multimodal Learning Analytics and Education Data Mining: Using Computational Technologies to Measure Complex Learning Tasks by Blikstein & Worsley, 2016).
For the theoretical foundations of multimodal systems, three theories can be referenced: Gestalt theory, working-memory theory, and activity theory
(Oviatt, 2017) (for more details, refer to the first chapter of The Handbook of Multimodal-Multisensor Interfaces).
To illustrate the potential of MMLA, I will discuss the study [Expertise estimation based on simple multimodal features](https://www.researchgate.net/publication/262205312_Expertise_estimation_based_on_simple_multimodal_features)
(Ochoa et al., 2013). In this study, students were given mathematics problems to solve in groups using a calculator. During the activity, audio, video, and pen strokes were recorded for later analysis. Additionally, some human-coded features were recorded (e.g. whether students solved the problem successfully or not). Among the several features extracted from the collected data, one particular feature, PCU (Proportion of Calculator Usage), was able to distinguish experts from non-experts 80% of the time. To compute PCU, the position and pointing direction of the calculator were computed and compared in each frame; on that basis, the student who was using the calculator was estimated, and each student’s proportion of calculator usage was then computed. This study demonstrates the potential MMLA holds for us.
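As a rough illustration of how such a feature might be computed, here is a much-simplified, distance-only sketch in Python: per video frame, the student whose tracked hand is nearest the calculator is assumed to be using it, and each student’s share of frames gives the usage proportion. The function names, toy coordinates, and the nearest-hand heuristic are my own illustration, not the authors’ actual position-and-direction method.

```python
import math

def closest_student(frame, calc_pos):
    """Return the id of the student whose tracked hand is nearest the calculator."""
    def dist(sid):
        x, y = frame[sid]
        return math.hypot(x - calc_pos[0], y - calc_pos[1])
    return min(frame, key=dist)

def pcu(frames, calc_positions):
    """Proportion of frames in which each student is estimated to use the calculator."""
    counts = {}
    for frame, calc_pos in zip(frames, calc_positions):
        user = closest_student(frame, calc_pos)
        counts[user] = counts.get(user, 0) + 1
    return {sid: n / len(frames) for sid, n in counts.items()}

# Toy example: student "A" is near the calculator in 3 of 4 frames.
frames = [
    {"A": (0.0, 0.0), "B": (5.0, 5.0)},
    {"A": (0.5, 0.5), "B": (5.0, 5.0)},
    {"A": (1.0, 0.0), "B": (5.0, 5.0)},
    {"A": (5.0, 5.0), "B": (0.2, 0.1)},
]
calc_positions = [(0.0, 0.0)] * 4
print(pcu(frames, calc_positions))
```

The original study additionally used the direction the calculator was pointing, which this sketch omits for simplicity.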
Some useful Conferences and Journal venues
Here are some conferences and journals you can refer to. This list is based on the papers I have been reading in MMLA research.
- Conference on Learning Analytics and Knowledge (LAK). SoLAR organizes this conference each year.
- European Conference on Technology Enhanced Learning (EC-TEL), organized by EATEL each year.
- International Conference on Advanced Learning Technologies (ICALT), organized by the IEEE Computer Society and the IEEE Technical Committee on Learning Technology.
- International Conference on Artificial Intelligence in Education.
- International Conference on Educational Data Mining, organized by the International Educational Data Mining Society.
- International Conference on Multimodal Interaction.
- International Conference on Human-Computer Interaction.
- International Conference of the Learning Sciences, organized by the International Society of the Learning Sciences link
- Journal of Computer Assisted Learning link
- Computers in Human Behavior link
- Computers & Education link
- IEEE Transactions on Learning Technologies link
- Journal of Learning Analytics link
Timeline of events in MMLA
Here are some key events in MMLA from its beginning.
- The first workshop on MMLA was organized at the International Conference on Multimodal Interaction, ICMI’12 (extended abstract)
(Scherer et al., 2012) (link). An article by Marcelo Worsley, Multimodal Learning Analytics: Enabling the Future of Learning through Multimodal Data Analysis and Interfaces, was also presented at the ICMI’12 workshop.
- The second workshop on MMLA was organized as a grand challenge
(Morency et al., 2013) at ICMI’13 (paper-call). As part of this challenge, a multimodal math corpus was also released.
- In the same year (2013), Paulo Blikstein presented a paper entitled Multimodal Learning Analytics (paper on ResearchGate), and Marcelo Worsley presented Towards the Development of Multimodal Action-Based Assessment at LAK’13 (paper link).
- Following this, the third workshop on MMLA was organized at ICMI’14. It also took the form of a grand challenge, offering two multimodal datasets: the multimodal math data corpus
(Oviatt et al., 2013) and a multimodal dataset for predicting presentation skill (paper-call). I recommend reading Multimodal Learning Analytics as a Tool for Bridging Learning Theory and Complex Learning Behaviors
(Worsley, 2014a) from the MMLA workshop.
- The paper Leveraging Multimodal Learning Analytics to Differentiate Student Learning Strategies by Marcelo Worsley was presented at LAK’15.
- The first Cross-LAK 2016: International Workshop on Learning Analytics Across Physical and Digital Spaces was organized at LAK’16 [paper-call].
- The fifth MMLA workshop, including a data challenge, was organized at LAK’16.
- The sixth MMLA workshop and the second Cross-LAK workshop were both co-located with LAK’17.
- The second Cross-MMLA 2018: Multimodal Learning Analytics Across Physical and Digital Spaces was held at LAK’18.
- The third Cross-MMLA 2019: Multimodal Learning Analytics Across Physical and Digital Spaces [paper-call] was held at LAK’19.
I hope the above list gives you some idea of the initial progress and key events in MMLA. The MMLA literature is growing and this list is getting longer; for example, a recent survey found 82 articles on MMLA
(Worsley, 2018) (their inclusion criteria, however, narrowed this down to 46 papers for further inspection). Apart from the papers mentioned above, I would recommend reading the following articles:
- Lahat, D., Adali, T., & Jutten, C. (2015). Multimodal data fusion: an overview of methods, challenges, and prospects. Proceedings of the IEEE, 103(9), 1449-1477. [paper-link]
- Worsley, M., Abrahamson, D., Blikstein, P., Grover, S., Schneider, B., & Tissenbaum, M. (2016). Situating Multimodal Learning Analytics. International Conference for the Learning Sciences 2016, 1346–1349. [paper-link]
- Ochoa, X., & Worsley, M. (2016). Augmenting Learning Analytics with Multimodal Sensory Data. Journal of Learning Analytics, 3(2), 213–219. https://doi.org/10.18608/jla.2016.32.10 [paper-link]
- Blikstein, P., & Worsley, M. (2016). Multimodal Learning Analytics and Education Data Mining: using computational technologies to measure complex learning tasks. Journal of Learning Analytics, 3(2), 220-238. [paper-link]
- Ochoa, X. (2017). Multimodal Learning Analytics. Handbook of Learning Analytics, 129–141. https://doi.org/10.18608/hla17.011
- Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). From signals to knowledge: A conceptual model for multimodal learning analytics. Journal of Computer Assisted Learning, 34(4), 338-349. https://doi.org/10.1111/jcal.12288
- Chua, Y. H. V., Dauwels, J., & Tan, S. C. (2019). Technologies for automated analysis of co-located, real-life, physical learning spaces. LAK19 Proceedings of the 9th International Conference on Learning Analytics & Knowledge, 11–20. https://doi.org/10.1145/3303772.3303811
Some useful video resources
- Introduction to Learning Analytics by George Siemens [video-link]
- A presentation by Paulo Blikstein on Multimodal Learning Analytics from LAK’13 [video-link]
- Revealing the Invisible with Multimodal Learning Analytics by Researchers from EdGE at TERC, Landmark College, and MIT [video-link]
- Marcelo Worsley’s presentation on MMLA for examining coordinates dynamics in Mathematics [video-link]
- Marcelo Worsley’s presentation on Towards the Development of Multimodal Action Based Assessment @ LAK’13 [video-link]
- George Siemens Keynote, LALA 2015 [video-link]
- Multimodal Classroom Analytics by Sidney D’Mello (LAK’17 Keynote) [video-link]
- Big data from a little person: using multimodal data for understanding regulation of learning by Sanna Järvelä (LAK’17 Keynote) [video-link]
- LAK presentations
- LAK’12 [video-link]
- LAK’16 [video-link]
- LAK’17 [video-link]
- LAK’18 [video-link]
- LAK’19 Keynotes
- Introductions & Keynote – Ryan Baker [video-link]
- Multimodal Tutor by Daniele Di Mitri [Demo-link]
- Multimodal Tracker by Luis Pablo Prieto [Demo-link]
References
- Amrieh, E. A., Hamtini, T., & Aljarah, I. (2016). Mining Educational Data to Predict Student’s Academic Performance Using Ensemble Methods. International Journal of Database Theory and Application, 9(8), 119-136.
- Amrieh, E. A., Hamtini, T., & Aljarah, I. (2015, November). Preprocessing and analyzing educational data set using X-API for improving student’s performance. In Applied Electrical Engineering and Computing Technologies (AEECT), 2015 IEEE Jordan Conference on (pp. 1-5). IEEE.
- Baber, C., & Mellor, B. (2001). Using critical path analysis to model multimodal human-computer interaction. International Journal of Human-Computer Studies, 54(4), 613–636. https://doi.org/10.1006/ijhc.2000.0452
- Baker, R., Siemens, G. (2014) Educational data mining and learning analytics. In Sawyer, K. (Ed.) Cambridge Handbook of the Learning Sciences: 2nd Edition, pp. 253-274 pre-print draft
- Bernsen, N. O. (1997). Defining a taxonomy of output modalities from an HCI perspective. Computer Standards and Interfaces, 18(6–7), 537–553. https://doi.org/10.1016/S0920-5489(97)00018-4
- Blikstein, P. (2013, April). Multimodal learning analytics. In Proceedings of the third international conference on learning analytics and knowledge (pp. 102-106). ACM.
- Blikstein, P., & Worsley, M. (2016). Multimodal Learning Analytics and Education Data Mining: using computational technologies to measure complex learning tasks. Journal of Learning Analytics, 3(2), 220-238.
- Bordegoni, M., Faconti, G., Feiner, S., Maybury, M. T., Rist, T., Ruggieri, S., … & Wilson, M. (1997). A standard reference model for intelligent multimedia presentation systems. Computer standards & interfaces, 18(6-7), 477-496.
- Siemens, G. (2011, August 5). Learning and Academic Analytics. http://www.learninganalytics.net/?p=131
- Coutaz, J., & Caelen, J. (1991, November). A taxonomy for multimedia and multimodal user interfaces. In Proceedings of the 1st ERCIM Workshop on Multimodal HCI (pp. 143-148).
- Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). From signals to knowledge: A conceptual model for multimodal learning analytics. Journal of Computer Assisted Learning, 34(4), 338-349.
- Freedman, D. H. (2010). Why scientific studies are so often wrong: The streetlight effect. Discover Magazine, 26. Retrieved from http://discovermagazine.com/2010/jul-aug/29-why-scientific-studies-often-wrong-streetlight-effect
- Morency, L. P., Oviatt, S., Scherer, S., Weibel, N., & Worsley, M. (2013, December). ICMI 2013 grand challenge workshop on multimodal learning analytics. In Proceedings of the 15th ACM on International conference on multimodal interaction (pp. 373-378). ACM.
- Nigay, L., & Coutaz, J. (1993). A Design Space For Multimodal Systems: Concurrent Processing and Data Fusion. Proceedings of the INTERACT Conference on Human Factors in Computing, 172–178. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.18.150&rep=rep1&type=pdf
- Ochoa, X., Chiluiza, K., Méndez, G., Luzardo, G., Guamán, B., & Castells, J. (2013, December). Expertise estimation based on simple multimodal features. In Proceedings of the 15th ACM on International conference on multimodal interaction (pp. 583-590). ACM.
- Ochoa, X., Weibel, N., Worsley, M., & Oviatt, S. (2016). Multimodal learning analytics data challenges. In 6th International Conference on Learning Analytics and Knowledge, LAK 2016 (pp. 498-499). Association for Computing Machinery.
- Ochoa, X., & Worsley, M. (2016a). Augmenting Learning Analytics with Multimodal Sensory Data. Journal of Learning Analytics, 3(2), 213–219. https://doi.org/10.18608/jla.2016.32.10
- Pardo, A., & Delgado-Kloos, C. (2011, February). Stepping out of the box: towards analytics outside the learning management system. In Proceedings of the 1st International Conference on Learning Analytics and Knowledge (pp. 163-167). ACM.
- Oviatt, S., Cohen, A., & Weibel, N. (2013, December). Multimodal learning analytics: description of math data corpus for ICMI grand challenge workshop. In Proceedings of the 15th ACM on International conference on multimodal interaction (pp. 563-568). ACM.
- Oviatt, S. (2017). Theoretical foundations of multimodal interfaces and systems. The Handbook of Multimodal-Multisensor Interfaces, 1.
- Oviatt, S., Schuller, B., Cohen, P., Sonntag, D., & Potamianos, G. (2017a). The Handbook of Multimodal-Multisensor Interfaces, Volume 1: Foundations, User Modeling, and Common Modality Combinations. Morgan & Claypool.
- Scherer, S., Worsley, M., & Morency, L. (2012). 1st International Workshop on Multimodal Learning Analytics: Extended Abstract. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (ICMI ’12). ACM, New York, NY, USA, 609-610.
- Siemens, G., Baker, R.S.J.d. (2012). Learning Analytics and Educational Data Mining: Towards Communication and Collaboration. Proceedings of the 2nd International Conference on Learning Analytics and Knowledge.
- Worsley, M. (2012). Multimodal learning analytics: enabling the future of learning through multimodal data analysis and interfaces. In Proceedings of the 14th ACM international conference on Multimodal Interaction (ICMI ’12). ACM, New York, NY, USA, 353-356. DOI: https://doi.org/10.1145/2388676.2388755
- Worsley, M., & Blikstein, P. (2013, April). Towards the development of multimodal action-based assessment. In Proceedings of the third international conference on learning analytics and knowledge (pp. 94-101). ACM.
- Worsley, M., & Blikstein, P. (2014). Analyzing engineering design through the lens of computation. Journal of Learning Analytics, 1(2), 151-186.
- Worsley, M. (2014a). Multimodal learning analytics as a tool for bridging learning theory and complex learning behaviors. In Proceedings of the 2014 ACM workshop on Multimodal Learning Analytics Workshop and Grand Challenge (pp. 1-4). ACM.
- Worsley, M., & Blikstein, P. (2015). Leveraging multimodal learning analytics to differentiate student learning strategies. Proceedings of the Fifth International Conference on Learning Analytics And Knowledge – LAK ’15, 360–367. https://doi.org/10.1145/2723576.2723624
- Worsley, M., Abrahamson, D., Blikstein, P., Grover, S., Schneider, B., & Tissenbaum, M. (2016). Situating Multimodal Learning Analytics. International Conference for the Learning Sciences 2016, 1346–1349.
- Worsley, M. (2018). Multimodal Learning Analytics’ Past, Present, and Potential Futures. 2nd Multimodal Learning Analytics Across (Physical and Digital) Spaces, CrossMMLA, 1–16.