Automated Scoring for Argumentation

Promoting understanding of Earth's complex systems and supporting scientific argumentation practices.

Importance

What if students were able to get immediate feedback on their open-ended responses in science class? Could that dramatically enhance their ability to write scientific arguments? We’re exploring these questions by investigating the effects of technology-enhanced formative assessments on student construction of scientific arguments.

In collaboration with the Educational Testing Service (ETS), we’re using an automated scoring engine to assess students’ written responses in real time and provide immediate feedback.

We’re using natural language processing techniques to score the content of students’ written arguments within two High-Adventure Science curriculum modules, “What is the future of Earth’s climate?” and “Will there be enough fresh water?” In each module, students complete argumentation tasks in which they use evidence from models and data to construct scientific arguments. Previously, students had to wait until their teacher had read their responses to get feedback. Now they get just-in-time feedback that encourages additional experiments with the models, a closer look at the data, and the opportunity to add more evidence and reasoning to their explanations. We hope to help students build stronger scientific arguments.
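
To make the workflow concrete, below is a minimal sketch of such a just-in-time feedback loop. The score_argument heuristic, rubric levels, and feedback messages are hypothetical stand-ins; the actual ETS scoring engine and the project’s rubrics are not shown here.

    from dataclasses import dataclass

    @dataclass
    class ScoredResponse:
        item_id: str
        score: int       # rubric level, 0 (no claim) to 3 (claim, evidence, and reasoning)
        feedback: str    # student-facing message

    # Hypothetical feedback messages keyed by rubric level (not the project's actual rubric).
    FEEDBACK_BY_LEVEL = {
        0: "Try running the model again and describe what you observe.",
        1: "You made a claim. What evidence from the model or data supports it?",
        2: "Good evidence. Can you explain why that evidence supports your claim?",
        3: "Strong argument. Is there any uncertainty in the data worth mentioning?",
    }

    def score_argument(response_text: str) -> int:
        """Placeholder for the automated scoring engine; returns a rubric level."""
        text = response_text.lower()
        has_claim = len(text.split()) > 5
        has_evidence = "data" in text or "model" in text
        has_reasoning = "because" in text
        return sum([has_claim, has_evidence, has_reasoning])

    def give_feedback(item_id: str, response_text: str) -> ScoredResponse:
        """Score a response and attach the matching just-in-time feedback message."""
        level = score_argument(response_text)
        return ScoredResponse(item_id, level, FEEDBACK_BY_LEVEL[level])

    print(give_feedback("climate-q3", "Temperatures will rise because the model data show a warming trend."))

In the real system the heuristic scorer would be replaced by a call to the automated scoring service, while the surrounding loop of score, feedback, and revision stays the same.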

Our enhanced automated scoring tools are built with c-rater software from ETS and incorporate detailed rubrics that distinguish multiple levels of understanding. While other feedback systems provide only individual-level information, we’re integrating a reporting system that gives students customized feedback for monitoring their own progress and gives teachers class-level snapshots to help them revise their instructional approach. We’re helping teachers gain a deeper understanding of automated scoring and modeling the use of auto-generated feedback as a tool to improve both teaching and learning. The goal is to enhance teachers’ use of formative assessment technologies for improved student learning.
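
As one way to picture a class-level snapshot, the sketch below aggregates hypothetical rubric scores per argumentation item; the reporting system’s actual data model, rubric levels, and displays may differ.

    from collections import Counter
    from statistics import mean

    # Hypothetical scored responses: (student_id, item_id, rubric_level).
    scores = [
        ("s01", "water-q2", 1), ("s02", "water-q2", 3), ("s03", "water-q2", 2),
        ("s01", "water-q5", 0), ("s02", "water-q5", 2), ("s03", "water-q5", 1),
    ]

    def class_snapshot(scores, item_id):
        """Summarize how the class scored on one argumentation item."""
        levels = [level for _, item, level in scores if item == item_id]
        return {
            "item": item_id,
            "n_responses": len(levels),
            "mean_level": round(mean(levels), 2),
            "level_counts": dict(sorted(Counter(levels).items())),
        }

    for item in ("water-q2", "water-q5"):
        print(class_snapshot(scores, item))

A teacher-facing view built on summaries like these could flag items where most responses sit at the lower rubric levels, suggesting where to revisit the models or data with the whole class.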


Research

We’re investigating the effect of automated scores and feedback on students’ content learning and argumentation skills. We seek to answer the following questions:

  • To what extent can automated scoring tools diagnose students’ explanations and uncertainty articulations as compared to human diagnosis?
  • How should feedback be designed and delivered to help students improve scientific argumentation?
  • How do teachers use and interact with class-level automated scores and feedback to support students’ scientific argumentation with real-world data and models?
  • How do students perceive their overall experience with automated scores and immediate feedback when learning core ideas about climate change and freshwater availability through model-enhanced scientific argumentation?



Activities

View, launch, and assign activities developed by this project at the STEM Resource Finder.

Project Funder
This material is based upon work supported by the National Science Foundation under Grant No. DRL-1418019. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Years Active
2014-2019