Automated Scoring for Argumentation
Importance
What if students were able to get immediate feedback on their open-ended responses in science class? Could that dramatically enhance their ability to write scientific arguments? We’re exploring these questions by investigating the effects of technology-enhanced formative assessments on student construction of scientific arguments.
In collaboration with the Educational Testing Service (ETS), we’re using an automated scoring engine to assess students’ written responses in real time and provide immediate feedback.
We’re using natural language processing techniques to score the content of students’ written arguments within two High-Adventure Science curriculum modules, “What is the future of Earth’s climate?” and “Will there be enough fresh water?” In each module, students encounter scientific argumentation tasks, in which they use evidence from models and data to construct scientific arguments. Previously, students would have to wait until their teacher had read their responses to get feedback. Students now get just-in-time feedback that encourages additional experiments with the models, a closer look at the data, and the opportunity to add more evidence and reasoning to their explanations. We hope to help students build stronger scientific arguments.
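As a rough illustration of this feedback loop, the sketch below maps an automatically assigned rubric level to a revision prompt. It is not the project's implementation: the rubric levels, the feedback messages, and the keyword-based `score_argument` stand-in are all hypothetical, whereas the actual system scores responses with an NLP engine.

```python
# A minimal, hypothetical sketch of the score-to-feedback loop described above.
# The rubric levels, feedback messages, and keyword heuristic are illustrative
# assumptions; the real project scores responses with an NLP engine.

# Revision prompts keyed by rubric level (1 = claim only ... 4 = claim,
# evidence, and reasoning). The messages here are invented examples.
FEEDBACK_BY_LEVEL = {
    1: "Try running more experiments with the model, then describe what happens.",
    2: "You state a claim. Can you add evidence from the model or the data?",
    3: "Good evidence. Add reasoning that links your evidence to your claim.",
    4: "Strong argument. Consider discussing how certain you are and why.",
}


def score_argument(response_text: str) -> int:
    """Toy stand-in for the automated scoring engine.

    Counts whether the response contains a claim, evidence, and reasoning
    using simple keyword checks. A production system would call an NLP
    scoring service instead of this heuristic.
    """
    text = response_text.lower()
    has_claim = len(text.split()) >= 5
    has_evidence = any(w in text for w in ("data", "model", "graph", "evidence"))
    has_reasoning = any(w in text for w in ("because", "therefore", "since"))
    return 1 + sum([has_claim, has_evidence, has_reasoning])


def feedback_for(response_text: str) -> str:
    """Return just-in-time feedback for a student's written argument."""
    return FEEDBACK_BY_LEVEL[score_argument(response_text)]


if __name__ == "__main__":
    sample = "The model shows temperatures rising because more CO2 traps more heat."
    print(feedback_for(sample))  # prompts the student toward the next rubric level
```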
Our enhanced automated scoring tools are built with c-rater software from ETS and incorporate detailed rubrics that distinguish multiple levels of understanding. While other feedback systems provide only individual-level information, we’re integrating a reporting system that gives students customized feedback for monitoring their progress and gives teachers class-level snapshots for revising their instructional approach. We’re helping teachers gain a deeper understanding of automated scoring and modeling the use of auto-generated feedback as a tool to improve both teaching and learning. The goal is to enhance teachers’ use of formative assessment technologies for improved student learning.
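A class-level snapshot can be thought of as an aggregation of individual automated scores by task, as in the minimal sketch below. The record format and the `class_snapshot` helper are assumptions for illustration, not the project's reporting system.

```python
# A minimal sketch of class-level aggregation, assuming each automated score
# arrives as a hypothetical (student_id, item_id, rubric_level) record.
from collections import Counter, defaultdict


def class_snapshot(records):
    """Count rubric levels per argumentation task so a teacher can see
    at a glance which tasks the class is struggling with."""
    by_item = defaultdict(Counter)
    for _student_id, item_id, level in records:
        by_item[item_id][level] += 1
    return {item: dict(sorted(counts.items())) for item, counts in by_item.items()}


scores = [
    ("s1", "climate_q3", 2), ("s2", "climate_q3", 3), ("s3", "climate_q3", 2),
    ("s1", "water_q1", 4), ("s2", "water_q1", 3),
]
print(class_snapshot(scores))
# {'climate_q3': {2: 2, 3: 1}, 'water_q1': {3: 1, 4: 1}}
```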
Research
We’re investigating the effect of automated scores and feedback on students’ content learning and argumentation skills. We seek to answer the following questions:
- To what extent can automated scoring tools diagnose students’ explanations and uncertainty articulations as compared to human diagnosis?
- How should feedback be designed and delivered to help students improve scientific argumentation?
- How do teachers use and interact with class-level automated scores and feedback to support students’ scientific argumentation with real-world data and models?
- How do students perceive their overall experience with automated scores and immediate feedback when learning core ideas about climate change and freshwater availability through scientific argumentation enhanced with modeling?
Publications
- Pryputniewicz, S., & Pallant, A. (2019). Automated scoring helps student argumentation. @Concord, 23(1), 10–11.
- Lee, H.-S., Pallant, A., Pryputniewicz, S., Lord, T., Mulholland, M., & Liu, O. L. (2019). Automated text scoring and real-time adjustable feedback: Supporting revision of scientific arguments involving uncertainty. Science Education, 103(3), 590–622.
- Mao, L., Liu, O. L., Roohr, K., Belur, V., Mulholland, M., Lee, H.-S., & Pallant, A. (2018). Validation of automated scoring for formative assessment of students’ scientific argumentation in climate change. Educational Assessment, 23(2), 121–138.
- Zhu, M., Lee, H.-S., Wang, T., Liu, O. L., Belur, V., & Pallant, A. (2017). Investigating the impact of automated feedback on students’ scientific argumentation. International Journal of Science Education, 39(12), 1648–1668.
- Lord, T., & Pallant, A. (2016). Can a robot help students write better scientific arguments? @Concord, 20(1), 8–9.
Activities
View, launch, and assign activities developed by this project at the STEM Resource Finder.