

The Case for AI-Assisted Written Response Feedback

A framework for LEAP assessment preparation

Written Responses Measure Higher-Order Thinking

Multiple-choice items have a ceiling. Even well-designed MC questions max out at Understand.

The cognitive work is recognition, not production.

Bloom's Taxonomy & Assessment Types

Bloom's Level   Cognitive Operation            MC?       Written?
Remember        Retrieve facts                 ✓         ✓
Understand      Restate, summarize             ✓         ✓
Apply           Use in new context             Limited   ✓
Analyze         Explain how/why, connect       ✗         ✓
Evaluate        Argue, justify with evidence   ✗         ✓
Create          Construct original synthesis   ✗         ✓

Where the Points Are

Written Response Weight by Subject

English I/II       ~40%   (38 of 94 pts)
Algebra/Geometry   ~35%   (19 of 55 pts)
U.S. History       ~20%   (12-14 pts)
Biology            ~16%   (8-10 pts)
Civics             ~13%   (8 pts)

Rubrics Map to Bloom's

Rubric Language            Bloom's Level         Student Demonstrates
"Identifies"               Remember/Understand   Names without reasoning
"Describes"                Understand            Restates in own words
"Explains"                 Analyze               Shows how/why with reasoning
"Supports with evidence"   Analyze               Connects information to claim

A student who identifies but doesn't explain scores a 1, not a 2.
That's not a writing problem—it's a thinking problem.
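Because the rubric verbs line up with Bloom's levels, the mapping can be expressed as plain data. A minimal sketch in Python, with the levels and verbs taken from the tables above (the `gap` helper and its structure are illustrative assumptions, not a real scoring engine):

```python
# The rubric-verb -> Bloom's-level mapping as data. A higher index means
# a higher cognitive demand; the ordering is what lets feedback name the
# missing move ("you identified, but the rubric asks you to explain").

BLOOMS_ORDER = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

RUBRIC_VERBS = {
    "identifies": "Remember",
    "describes": "Understand",
    "explains": "Analyze",
    "supports with evidence": "Analyze",
}

def gap(demonstrated_verb: str, required_verb: str) -> int:
    """How many Bloom's levels short of the rubric a response falls."""
    have = BLOOMS_ORDER.index(RUBRIC_VERBS[demonstrated_verb])
    need = BLOOMS_ORDER.index(RUBRIC_VERBS[required_verb])
    return max(need - have, 0)

# A student who identifies when the rubric says "explains":
print(gap("identifies", "explains"))  # Remember -> Analyze: 3 levels short
```

The point of the data structure is the ordering: a 1-instead-of-2 score is a measurable distance between cognitive levels, not a vague quality judgment.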

Why AI?

The feedback bottleneck is human.

Rubrics Are Machine-Readable

LDOE scoring annotations reveal exactly what separates score levels:

"The response earns a score of 1. It offers a full and accurate explanation of one way... While the response identifies a second contribution—'by joining the war'—it does not explain that contribution."

Identify vs. Explain. One example vs. two. Source-dependent vs. outside knowledge.
AI can detect these distinctions because they're structural, not stylistic.
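The identify/explain distinction can be sketched mechanically. A toy illustration in Python, assuming a crude heuristic (the presence of causal connectives) stands in for the model's judgment; a real system would use an LLM with the rubric and scoring annotations:

```python
# Toy illustration that "explains" differs from "identifies" structurally,
# not stylistically. The marker list and scoring rule are assumptions for
# demonstration only.

CAUSAL_MARKERS = ("because", "which led to", "as a result", "therefore", "so that")

def cognitive_move(sentence: str) -> str:
    """Classify a response sentence as 'explain' or 'identify'."""
    lowered = sentence.lower()
    if any(marker in lowered for marker in CAUSAL_MARKERS):
        return "explain"   # names a thing AND gives a how/why
    return "identify"      # names a thing without reasoning

def score(response_sentences: list[str], required_explanations: int = 2) -> int:
    """Score 0-2 against a rubric asking for two explained contributions."""
    explained = sum(1 for s in response_sentences if cognitive_move(s) == "explain")
    return min(explained, required_explanations)

# Mirrors the LDOE annotation: one contribution explained, one only identified.
response = [
    "Women contributed by working in factories, because men were away fighting.",
    "Women also contributed by joining the war.",
]
print(score(response))  # one explanation detected -> score of 1
```

The heuristic is deliberately naive; what matters is that the score-1 vs. score-2 boundary is computable from structure.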

Bloom's-Calibrated Feedback

Generic Feedback

"You need more detail."

Bloom's-Calibrated

"You've identified the compromise (Remember), but haven't yet explained how it addressed the underlying conflict (Analyze). What did each side gain?"

The second tells the student exactly what cognitive move they're missing.

Formative, Not Summative

AI doesn't replace the state test scorer. It helps students practice before the test.

Student writes
AI scores against rubric
Student revises

10 minutes, not 10 days. Low-stakes. Immediate. Repeatable.
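The write-score-revise cycle can be sketched as a simple loop. A minimal sketch, where the hypothetical `score_against_rubric` stub stands in for the AI scorer (in a real tool it would call an LLM with the rubric and scoring annotations):

```python
# Sketch of the formative loop: write -> score -> revise, repeated in
# minutes rather than days. Both functions below are illustrative stubs.

def score_against_rubric(response: str) -> tuple[int, str]:
    """Stub scorer: returns (score, feedback). Stands in for an AI call."""
    if "because" in response.lower():
        return 2, "Full explanation: claim connected to reasoning."
    return 1, ("You've identified a cause (Remember), but haven't explained "
               "how it worked (Analyze). What did each side gain?")

def practice_loop(draft: str, revise, target: int = 2, max_rounds: int = 3) -> int:
    """Run the low-stakes cycle until the target score or the round limit."""
    response = draft
    for _ in range(max_rounds):
        earned, feedback = score_against_rubric(response)
        if earned >= target:
            return earned
        response = revise(response, feedback)  # student revises using feedback
    return earned

# A revision that adds the missing cognitive move:
final = practice_loop(
    "The Missouri Compromise admitted Missouri as a slave state.",
    revise=lambda r, fb: r + " This balanced power because Maine entered free.",
)
print(final)  # -> 2
```

The design point is that the scorer is formative: its output is feedback that drives the next revision, not a grade of record.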

The Opportunity

The question isn't whether this is possible.
It's whether students get access to practice before the test—
or only find out what they didn't understand after the score comes back.