Shared representations for accessible and AI-supported inquiry

As we enhance the accessibility of our inquiry-based STEM simulations and integrate generative AI into our learning platforms, we’re constantly evaluating and leveraging emerging technologies to improve teaching and learning. Recent work has revealed an exciting, potentially transformative convergence between these two goals.

The representations needed for a large language model (LLM) to understand students’ use of a simulation—from student-controlled changes to simulation inputs to their outputs—are fundamentally similar to those that can make a simulation more accessible to a screen reader.

Representations under the hood

For example, in our Neural Engineering curriculum developed in partnership with Boston College, we are investigating how AI can support high school students’ computational thinking in a biology curriculum in which students design and program a robotic arm controlled by their own muscle activity. Supported by generative AI tools, students learn to design algorithms and translate them into computer programs that read sensor signals and control the robot.

In one activity, students use visual programming blocks to make a robotic gripper grab a cup—or not—depending on whether a simulated arm is flexed.

Students program using a visual programming language called Dataflow. They control the input and see the output of their code in a dynamic model of an arm flexing to control a mechanical gripper.

Dataflow’s visual interface makes it easy to observe how program components connect and interact. But visual presentation alone is not sufficient for other systems to interpret how a simulation works. Allowing an AI engine to interpret and reason about the system—or enabling a screen reader to describe it—requires structured representations of the simulation’s state, components, and relationships. These representations make it possible to describe how elements are connected, how parameters change, and how those changes affect system behavior.

From a technology development perspective, this is particularly promising: the work required to create the structured representations that modern AI systems need in order to understand a simulation, and potentially support inquiry within it, may also make that simulation interpretable by screen readers.

In both cases, we need the same core representations:

  • Semantic descriptions of state — counts and descriptions of what blocks exist and how they’re connected, the degree to which the virtual arm is flexed, or whether the gripper is crushing the cup
  • Functional descriptions of relationships — input and output values and their connections (e.g., the gripper can close to a percentage between 0% and 100% based on an input signal between 0 and 1)
  • A way to track changes over time — what happens when a student modifies a connection in the code or runs the simulation
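As a concrete sketch, the three kinds of representation might be captured in a structure like the one below. All names here are illustrative assumptions, not the actual Dataflow data model:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """A single program component, e.g., a sensor, threshold, or gripper block."""
    id: str
    kind: str
    params: dict

@dataclass
class SimulationState:
    # 1. Semantic state: what blocks exist and how they are connected.
    blocks: list
    connections: list          # (source_id, target_id) pairs
    arm_flex: float            # 0.0 (relaxed) to 1.0 (fully flexed)
    # 3. Temporal record: what the student changed, in order.
    events: list = field(default_factory=list)

    def record(self, event: str) -> None:
        """Log a change, e.g., when a student modifies a connection."""
        self.events.append(event)

# 2. Functional relationship: an input signal between 0 and 1 maps to a
#    gripper closure between 0% and 100%.
def gripper_closed_pct(signal: float) -> float:
    return max(0.0, min(1.0, signal)) * 100.0
```

The point is not this particular schema but that each of the three descriptions is explicit, machine-readable data rather than something implied by pixels on the screen.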

These representations go beyond simple alt-text captions for a static image, and far beyond making a simulation keyboard accessible, though those affordances are also important. We’re creating a structured, dynamic account of what the simulation is doing and what programming the student has done to change it.

To reason about student understanding, AI-based tutoring analysis must draw on semantic, functional, and temporal relationships. That’s the exciting part: the underlying semantic layer—the machine-readable account of what the simulation means—could potentially be shared with an accessible representation.
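To make the shared-layer idea concrete, here is a minimal sketch of one semantic layer feeding two renderers: one for an LLM prompt and one for a screen reader. The field names and wording are illustrative assumptions, not our production format:

```python
import json

def render_for_llm(state: dict) -> str:
    """Serialize the semantic layer as JSON for inclusion in an LLM prompt."""
    return json.dumps(state, indent=2)

def render_for_screen_reader(state: dict) -> str:
    """Render the same semantic layer as a spoken-style description."""
    return (
        f"The program has {state['block_count']} blocks. "
        f"The arm is {state['arm_flex_pct']}% flexed and the gripper is "
        f"{state['gripper_closed_pct']}% closed."
    )

# One machine-readable account of what the simulation means...
state = {"block_count": 3, "arm_flex_pct": 80, "gripper_closed_pct": 80}
# ...serves both audiences: render_for_llm(state) and
# render_for_screen_reader(state) read from the same source of truth.
```

Because both renderers consume the same structure, improving the semantic layer for one audience tends to improve it for the other.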

If successful, this approach could meaningfully expand access to simulation-based learning.

What this looks like in practice

Through experimentation with LLM-based analysis of the programmable gripper simulation, we have begun to see concrete results. Early testing suggests that the AI can interpret student-constructed programs, identify errors, and provide specific feedback—for example, noting that a sensor block is connected but the activation threshold is set too high for the gripper to close.
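A feedback rule of the kind described, flagging a connected sensor path whose threshold can never be reached, might look like this sketch. The program format and messages are hypothetical stand-ins for the real program graph and analysis:

```python
def check_threshold(program: dict, sensor_max: float = 1.0) -> list:
    """Flag connected threshold blocks set above the sensor's maximum signal.

    `program` is a dict with "blocks" (id -> {"kind", "params"}) and
    "connections" (a list of (source_id, target_id) pairs).
    """
    connected = {block_id for pair in program["connections"] for block_id in pair}
    feedback = []
    for block_id, block in program["blocks"].items():
        if block["kind"] == "threshold" and block_id in connected:
            value = block["params"].get("value", 0.0)
            if value > sensor_max:
                feedback.append(
                    f"'{block_id}' is connected, but its threshold ({value}) "
                    f"exceeds the sensor's maximum signal ({sensor_max}), "
                    f"so the gripper can never close."
                )
    return feedback
```

Note that the check never inspects the visual layout: it operates entirely on the structured representation, which is exactly what makes the same analysis available to an LLM.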

Importantly, each step toward making the simulation legible to the AI has simultaneously produced structured descriptions and annotations that we can instrument for screen readers and audio description tools.

This convergence has promising practical implications.

For teachers: as we build AI tutoring capabilities into our simulations, we are also uncovering new ways to strengthen accessibility for sophisticated learning interactions.

For publishers and curriculum developers considering accessibility compliance: the representational engineering required for AI-readiness and for accessibility significantly overlaps, creating opportunities to leverage shared semantic infrastructure for both LLM interpretation and screen reader support.

Of course, structured semantic representation is only one component of meaningful accessibility, which also requires a host of interaction affordances and iterative testing with learners using assistive technologies.

Looking ahead

We are encouraged by the promise new AI approaches may hold for broadening access to powerful computational learning tools.

As we look ahead, we are guided by these fundamental questions: How can we best describe, in structured and meaningful language, what a simulation is showing and what the student has done? Can we identify patterns of intervention that scaffold student learning and feed back information that supports without giving away the answer? What do teachers need to know about student interactions that will help them assess student learning?

Ongoing research supported by the National Science Foundation is helping us explore these questions.

If you’re working on simulation accessibility, AI-powered feedback, or both, let’s talk. Please contact us at hello@concord.org.
