The GRASP project is investigating how middle school students use body movement to build deeper reasoning about critical science concepts. How can gesture aid students in constructing explanations of scientific phenomena, particularly phenomena with unseen structures and unobservable mechanisms, such as molecular interactions? The project will apply findings from this study to design and research gesture-controlled computer simulations using motion-sensing input technologies.

GRASP (Gesture Augmented Simulations for Supporting Explanations) studies the role that motions of the body play in forming explanations of scientific phenomena. This four-year project is conducting empirical studies with middle school students, gathering data that will inform the design of enhanced educational tools. We're designing and testing the addition of gesture control to computer simulations using motion-sensing input devices to see whether these digital technologies can guide learners to perform physical actions that help them comprehend, recall, and retain scientific ideas.

This NSF-funded project is led by researchers at the University of Illinois at Urbana-Champaign (UIUC), where the research with students is conducted. As a collaborator, the Concord Consortium provides computer simulations and the technical expertise to incorporate gesture-sensing technologies into those simulations.

This work brings together two prominent areas of study with high potential for impact in education. One is the emerging field of embodied cognition that has provided evidence that human reasoning is deeply rooted in the body’s interactions with the physical world. Some researchers believe that it is impossible to separate the nature of thinking from the bodies we inhabit, hence learning and understanding are shaped by the actions taken by our bodies. Research findings and guidelines for connecting body movements with learning outcomes are starting to emerge, but there is still much to be understood about the kinds of movements that support the development of specific ideas, particularly unobservable mechanisms, which is the focus of the GRASP project.

The second area of study is human-computer interaction (HCI). Inexpensive devices now make it possible to control and navigate a computer using body motion. This technology, once expensive and confined to research labs, is now affordable and commercially available; Microsoft's Kinect and the Leap Motion system are two leading examples. Educational researchers have taken note of these developments and the potential for new embodied interaction techniques that facilitate student learning. Studies using specialized and expensive technologies have already demonstrated the strong potential of embodied interactions with technology to generate new learning. Our research aims to identify embodied learning opportunities that can be created with software freely available on the web and with relatively inexpensive interface devices, making embodied learning accessible to a broad audience of learners.
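As a purely hypothetical illustration of the kind of mapping such an interface involves, the sketch below converts a hand-height reading from a motion sensor into a simulation parameter (here, a temperature). The calibration range, the millimeter units, and the function itself are assumptions for illustration only, not the project's actual code or any real device API:

```python
def hand_height_to_temperature(hand_y, y_min=80.0, y_max=400.0,
                               t_min=0.0, t_max=100.0):
    """Map a raw hand-height reading (e.g., millimeters above a desk
    sensor) to a simulated temperature. The y_min/y_max calibration
    values are made-up examples, not a real device specification."""
    # Clamp the reading to the calibrated range, then rescale linearly.
    clamped = max(y_min, min(y_max, hand_y))
    fraction = (clamped - y_min) / (y_max - y_min)
    return t_min + fraction * (t_max - t_min)

# Raising the hand raises the simulated temperature.
print(hand_height_to_temperature(80.0))   # lowest calibrated hand position
print(hand_height_to_temperature(400.0))  # highest calibrated hand position
```

In a real gesture-augmented simulation, a mapping like this would be called on every sensor frame so that the learner's movement continuously drives the on-screen model.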

Principal Investigators

Nathan Kimball
Robb Lindgren
David E. Brown

This material is based upon work supported by the National Science Foundation under Grant No. DUE-1432424. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

The GRASP project seeks to enhance student learning by helping students build explanations of science phenomena. Building explanations is an essential dimension of the Science and Engineering Practices of the Next Generation Science Standards (NGSS) and a skill distinct from providing descriptions of science phenomena. To build explanations students must develop explanatory models, that is, models that can explain hidden structures and unobservable mechanisms behind observable phenomena. Our research seeks to identify types of body motion that support causal explanations of observable phenomena. Since body motions that promote causal explanations are so central to this research and distinct from other motions, they are given a special identifier: embodied explanatory expressions (EEEs). GRASP is exploring differences in the kinds of embodied expressions used to support explanations versus those used to offer descriptions of science phenomena. Understanding this distinction is an important step for characterizing the role of embodiment at different levels of science reasoning and for creating gesture-based simulations that can elevate student thinking.

Three STEM topics form the basis of investigation: molecular interactions, heat transfer, and Earth systems. These topics were chosen because they involve phenomena that can be observed but have unobservable causes or hidden mechanisms. Furthermore, the topics are all prominent in the NGSS and have robust literatures examining student reasoning and conceptual development. Finally, the Concord Consortium has already developed a rich collection of simulations in these topic areas that can serve as the starting point for the investigations with the addition of gesture-based control.

In order to generate findings that inform the broader understanding of embodied reasoning about science concepts and guide the design of new technologies that facilitate student explanations, three research questions will be pursued:

  1. What are the characteristics of embodied expressions that support scientific reasoning at the descriptive level (observations of visible phenomena) and at the explanatory level (causal accounts that include hidden structures and unseen mechanisms)?
  2. What are the embodied explanatory expressions (EEEs) that facilitate reasoning about three critical science topics: molecular interactions, heat transfer, and Earth systems?
  3. How can online simulation environments effectively integrate EEEs into their interface design? What features of interfaces most effectively elicit prescribed movements from students?

In this four-year project, the first two years will focus on research questions 1 and 2, with the outcomes informing the development of gesture-based simulations in the latter two years, when the focus will shift to question 3. Data are gathered from a diverse sample of middle school students in interviews where they are asked to interact with various phenomena (e.g., a syringe with a plunger) and to explain why each behaves as it does. For instance, the interviewer might ask, "If the plunger tip is blocked, why does it become difficult to compress, and why does it spring back when released?" As students develop their explanations, they may build on tools available in the interview or on computer simulations. Finally, they may be asked to illustrate their understandings with words and gestures. These interviews are video recorded and will be analyzed using the methodologies of micro-analysis—looking in detail at the moment-by-moment interactions of students—and of embodied learning research.
