METS: Multimodal Learning Analytics of Embodied Teamwork Learning
Linxuan Zhao
Monash University
Australia
Zachari Swiecki
Monash University
Australia
Dragan Gašević
Monash University
Australia
Lixiang Yan
Monash University
Australia
Samantha Dix
Monash University
Australia
Hollie Jaggard
Monash University
Australia
Rosie Wotherspoon
Monash University
Australia
Abra Osborne
Monash University
Australia
Xinyu Li
Monash University
Australia
Riordan Alfredo
Monash University
Australia
Roberto Martinez-Maldonado
Monash University
Australia
Figure 1: Embodied teamwork in an immersive healthcare simulation where a team of students constantly reconfigure themselves into sub-groups (e.g., see simultaneous, coded dialogue unfolding at the left and right of the learning space) to complete a joint task.
Abstract
Embodied team learning is a form of group learning that occurs in co-located settings where students need to interact with others while actively using resources in the physical learning space to achieve a common goal. In such situations, communication dynamics can be complex as team discourse segments can happen in parallel at different locations of the physical space with varied team member configurations. This can make it hard for teachers to assess the effectiveness of teamwork and for students to reflect on their own experiences. To address this problem, we propose METS (Multimodal Embodied Teamwork Signature), a method to model team dialogue content in combination with spatial and temporal data to generate a signature of embodied teamwork. We present
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
LAK 2023, March 13–17, 2023, Arlington, TX, USA
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-9865-7/23/03...$15.00
https://doi.org/10.1145/3576050.3576076
a study in the context of a highly dynamic healthcare team simulation space where students can freely move. We illustrate how signatures of embodied teamwork can help to identify key differences between high and low performing teams: i) across the whole learning session; ii) at different phases of learning sessions; and iii) at particular spaces of interest in the learning space.
CCS Concepts
· Applied computing → Collaborative learning; Computer-
assisted instruction.
Keywords
Healthcare simulation, Collaborative learning, Communication,
Teamwork, Multimodality
ACM Reference Format:
Linxuan Zhao, Zachari Swiecki, Dragan Gašević, Lixiang Yan, Samantha
Dix, Hollie Jaggard, Rosie Wotherspoon, Abra Osborne, Xinyu Li, Rior-
dan Alfredo, and Roberto Martinez-Maldonado. 2023. METS: Multimodal
Learning Analytics of Embodied Teamwork Learning. In LAK23: 13th Inter-
national Learning Analytics and Knowledge Conference (LAK 2023), March
13–17, 2023, Arlington, TX, USA. ACM, New York, NY, USA, 11 pages.
https://doi.org/10.1145/3576050.3576076