Automated Scoring for Argumentation

The High-Adventure Science: Automated Scoring for Argumentation project builds upon the success of a prior NSF-supported project, High-Adventure Science. In addition to promoting students’ understanding of Earth’s complex systems, this project supports students’ scientific argumentation practices through real-time automated scoring, diagnostics, and feedback.

Using curricula, assessments, and technologies developed in prior NSF-funded projects, we will work with partners at Educational Testing Service (ETS) and the University of California, Santa Cruz to develop and test enhanced automated scoring tools. These tools will be built with ETS’ c-rater software and will incorporate detailed rubrics that elicit multiple levels of understanding, a capability considered critical for both science content learning and argumentation practices, yet one that has been inadequately tested in previous automated scoring applications.

Unlike most feedback systems, which provide only individual-level information, our integrated reporting system will combine customized feedback that lets students monitor their individual progress with class-level snapshots that help teachers revise their instructional approach. The project will also provide professional development to help teachers gain a deeper understanding of automated scoring and to model the use of auto-generated feedback as a tool to improve both teaching and learning.
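
The reporting system is still to be designed; purely as a rough sketch of the kind of aggregation it involves, the Python example below rolls hypothetical automated item scores (an assumed 0–4 rubric scale, with made-up item names and feedback messages) into per-student feedback and a class-level snapshot.

```python
"""Hypothetical sketch of score aggregation for an integrated reporting system.

Item names, the 0-4 score scale, and feedback messages are illustrative
assumptions, not the project's actual rubric or reporting design.
"""
from statistics import mean

# Automated scores per student, per embedded argumentation item.
class_scores = {
    "student_01": {"claim": 3, "explanation": 2, "uncertainty": 1},
    "student_02": {"claim": 4, "explanation": 3, "uncertainty": 2},
    "student_03": {"claim": 2, "explanation": 2, "uncertainty": 3},
}

# Illustrative feedback messages keyed by rubric level (0-4 scale assumed).
FEEDBACK = {
    0: "No response detected -- try writing out your argument.",
    1: "Add evidence from the model or data to support your claim.",
    2: "Explain how your evidence supports your claim.",
    3: "Discuss sources of uncertainty in your evidence.",
    4: "Strong argument -- consider alternative explanations.",
}

def student_feedback(item_scores):
    """Return the feedback message for each item a student answered."""
    return {item: FEEDBACK[score] for item, score in item_scores.items()}

def class_snapshot(scores):
    """Average each item's automated score across the class for the teacher view."""
    items = {item for per_student in scores.values() for item in per_student}
    return {item: round(mean(s[item] for s in scores.values() if item in s), 2)
            for item in sorted(items)}

if __name__ == "__main__":
    print(student_feedback(class_scores["student_01"]))
    print(class_snapshot(class_scores))
```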

This project has the potential to enhance teachers’ use of formative assessment technologies for improved student learning at multiple levels. At the curriculum level, the online curricula will be strengthened with automated scoring and immediate, specific feedback. At the classroom level, the automated scoring and feedback approach provides a model for turning assessments into learning opportunities. At the national level, the project’s goals closely align with those of the Race to the Top initiative, which also strives to take advantage of automated scoring for formative assessment.

Principal Investigators

Amy Pallant
Lydia Liu
Hee-Sun Lee

Project Inquiries

apallant@concord.org

This material is based upon work supported by the National Science Foundation under Grant No. DRL-1418019. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

The High-Adventure Science: Automated Scoring for Argumentation project will conduct three phases of research: feasibility studies, design studies, and a pilot study to investigate the effect of automated scores and feedback on students’ content learning and argumentation skills. We will use two online curriculum modules developed and tested in the NSF-funded HAS project as testbeds for integrating automated scoring and feedback. These two modules, one focused on climate change and the other on freshwater availability, were designed based on research on the use of authentic science practices in classrooms and on computational modeling. They specifically address the scientific uncertainty involved in scientists' data collection and model building.

In the climate module, students explore factors that influence the Earth's climate, such as CO2, albedo, and human-produced greenhouse gases. Students use interactive models to explore positive and negative feedback loops in Earth’s climate system. In the freshwater availability module, students use models and real-world data to study the water cycle and evaluate the supply and demand for freshwater in various areas of the world. They use interactive models to explore the relationships between groundwater levels, sediment permeability, rainfall, recharge of aquifers, and human impact on groundwater levels. Each module requires 5-6 class periods and includes a pre-test, embedded assessments, and a post-test.
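
The interactive models themselves come from the existing HAS modules; as a toy illustration of the kind of positive feedback loop students explore in the climate module, the sketch below uses made-up coefficients and arbitrary units to show how warming that lowers albedo can amplify further warming. It is not one of the project’s models.

```python
# Toy illustration of an ice-albedo positive feedback loop.
# Coefficients and units are arbitrary and chosen only to show amplification;
# this is not one of the project's interactive models.

albedo = 0.30            # fraction of incoming sunlight reflected
temperature = 0.0        # temperature anomaly (arbitrary units)
forcing = 1.0            # external push, e.g., added greenhouse gases

for step in range(5):
    # Warming melts reflective ice, so albedo drops slightly.
    albedo = max(0.10, albedo - 0.02 * temperature)
    # Lower albedo means more absorbed energy, which adds to the warming.
    absorbed = (1 - albedo) * forcing
    temperature += absorbed
    print(f"step {step}: albedo={albedo:.3f}, temperature={temperature:.2f}")
```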

The research and development goals are as follows:

  1. Increase high school students’ scientific argumentation practice with evidence from scientists’ real-world data and computational dynamic models related to climate and water systems;
  2. Establish instructional validity of the theoretical framework of uncertainty-infused scientific argumentation in the classroom;
  3. Develop and validate automated scoring to facilitate immediate feedback that aims to promote argumentation practice;
  4. Investigate when, how, for whom, and under what conditions feedback can be effective in promoting argumentation skills along the dimensions of feedback type (e.g., content vs. epistemic; diagnostic only vs. diagnostic plus suggestive);
  5. Apply advanced log analysis techniques and video screencast analysis to study how students use and interact with the automated scores and feedback;
  6. Understand students’ perceptions of the benefits and challenges in using automated scores and feedback;
  7. Develop ongoing professional development resources to enhance teacher use of automated diagnostics in teaching argumentation;
  8. Conduct classroom observations and collect screencast data to help teachers improve instruction and assessment practices; and
  9. Develop an interactive score reporting system that provides both customized individual feedback to students and class-level snapshots to teachers based on automated scoring.

With a focus on argumentation through modeling, as well as an emphasis on understanding how students and teachers respond to and interact with automated scores and feedback, we seek to answer the following questions:

  1. To what extent can automated scoring tools diagnose students’ explanations and uncertainty articulations as compared to human diagnosis? (One common way to quantify such human-machine agreement is sketched after this list.)
  2. How should feedback be designed and delivered to help students improve scientific argumentation? How do students respond to and interact with automated scores and feedback during the modules? How does students’ use of feedback relate to their changes in scientific argumentation during instruction and learning outcome at the end of instruction? Should feedback be offered with or without scores?
  3. How do teachers use and interact with class-level automated scores and feedback to support students' scientific argumentation with real-world data and models? How do teachers’ views and practices about automated scores and assessments change?
  4. How do students perceive their overall experience with the automated scores and immediate feedback when learning core ideas in climate change and freshwater availability topics through scientific argumentation enhanced with modeling?
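
For research question 1, agreement between automated and human scores is commonly quantified with statistics such as exact agreement and quadratic-weighted kappa. The sketch below, using invented scores and scikit-learn, illustrates that kind of check; it is not the project’s actual validation procedure.

```python
# Illustration of quantifying human-machine scoring agreement (research question 1).
# Scores are invented; quadratic-weighted kappa is one common agreement statistic,
# not necessarily the metric this project will use.
from sklearn.metrics import cohen_kappa_score

human_scores   = [3, 2, 4, 1, 2, 3, 0, 4, 2, 3]   # trained rater, 0-4 rubric levels
machine_scores = [3, 2, 3, 1, 2, 4, 0, 4, 2, 3]   # automated (e.g., c-rater) scores

exact_agreement = sum(h == m for h, m in zip(human_scores, machine_scores)) / len(human_scores)
qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")

print(f"exact agreement: {exact_agreement:.2f}")
print(f"quadratic-weighted kappa: {qwk:.2f}")
```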
