Catherine Ritz
What’s the Story Behind Your Program Data?
At the MAFLA Fall Conference this past weekend, I had the chance to work with a room full of world language educators and program leaders to dig into a question that too often gets overlooked: What’s the story behind your program data?
We often collect reams of numbers—assessment results, enrollment trends, survey data—but rarely pause to ask what they mean or what story they tell about our students, our teachers, and our programs. Yet, when we take the time to look beyond the numbers, data becomes so much more than charts and spreadsheets—it becomes a mirror, helping us see where we’re thriving, where we’re struggling, and where inequities may be hiding in plain sight.
Data helps us make more informed decisions, yes—but more importantly, it helps us see reality more clearly. It allows us to move from assumptions and anecdotes to evidence, to notice patterns we might otherwise miss, and to start meaningful conversations about teaching, learning, and growth. When we engage with data collaboratively, it stops feeling like a “gotcha” tool and starts serving what it should: reflection, improvement, and equity.
For those who couldn’t join me at MAFLA, I wanted to share some key ideas and resources from the session. My hope is that this post offers you a way to start—or continue—important conversations about data in your own context, whether you lead a department, a district program, or your own classroom.
Why We Need Data (and Why It’s Complicated)
As Knight and Faggella-Luby remind us in Data Rules (ASCD, 2024), our views of reality are often clouded by perceptual errors—confirmation bias, stereotyping, attribution errors, and more. It’s human nature to see what we expect to see, to interpret patterns through our own experiences, or to make assumptions based on limited evidence. In schools, that might look like assuming a dip in student proficiency scores means students aren’t trying hard enough, or believing that one strong class reflects an entire program’s success.
That’s where data comes in. When we engage with it thoughtfully, data helps us counter our blind spots and test our assumptions. It gives us a shared foundation for conversation—something objective we can all look at together. Instead of relying on opinions or anecdotes (“I think students are doing better this year”), we can ask deeper, evidence-based questions: What does the data actually show? Who is represented—and who might be missing from the picture?
In this way, data grounds our conversations in evidence rather than assumption, helping us move from reactive decision-making to intentional, informed action that truly serves our learners.
But data alone isn’t enough. We need to use it equitably and thoughtfully. Drawing on Equity in Data: A Framework for What Counts in Schools (Knips, Lopez, Savoy, & LaParo, 2023), it’s worth highlighting a few key principles that can help us approach data with both rigor and humanity:
➡️ Data is fluid and everywhere.
When we think of “data,” we often picture spreadsheets and test scores—but that’s only one piece of the puzzle. In reality, data lives in student work samples, performance assessments, teacher reflections, classroom observations, and even hallway conversations. These qualitative forms of evidence can tell us as much—if not more—about what’s happening in our programs than a set of numbers ever could. Mixed methods and storytelling are not “extras”; they’re essential for capturing the full picture.
➡️ Disaggregation reveals inequities.
Averages can make programs look successful, but they can also hide who is being left behind. When we break down our data by student groups—by race, gender, socioeconomic status, or special education needs—we start to see new stories emerge. Are certain groups progressing more slowly? Are all students getting equitable access to language courses or heritage language support? These questions aren’t easy, but they’re necessary if we’re serious about equitable outcomes.
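To make the point concrete, here is a toy sketch (all numbers are hypothetical, purely for illustration) of how a single program-wide average can look healthy while disaggregation reveals a gap between student groups:

```python
# Hypothetical proficiency scores for two student groups in one program.
scores = {
    "Group A": [82, 85, 88, 90],
    "Group B": [60, 62, 65, 68],
}

# The overall average blends both groups into one number.
all_scores = [s for group in scores.values() for s in group]
overall = sum(all_scores) / len(all_scores)
print(f"Overall average: {overall:.1f}")

# Disaggregating by group tells a different story.
for group, vals in scores.items():
    print(f"{group} average: {sum(vals) / len(vals):.1f}")
```

In this invented example, the overall average of 75 looks fine on its own; only the group-level breakdown shows that one group is scoring more than twenty points below the other.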
➡️ Triangulation adds credibility.
No single data point tells the whole truth. When we look at multiple forms of data—say, STAMP results alongside classroom observations and student surveys—we begin to see patterns that help validate our conclusions. Triangulating data makes our interpretations stronger, and it builds confidence among teachers and stakeholders that decisions are grounded in a balanced view of evidence.
➡️ Data gives clues, not answers.
One of my favorite reminders from Equity in Data is that “data is the temperature, not the diagnosis” (p. 25). It can signal when something isn’t quite right, but it can’t tell us exactly why or how to fix it. That’s where professional judgment, collaboration, and teacher expertise come in. Data can guide us toward the right questions—but people, not spreadsheets, uncover the answers.
Ultimately, equitable data use is about more than accuracy—it’s about curiosity, humility, and collective learning. When we approach data with these principles in mind, we turn it from a compliance task into a genuine opportunity for growth.
Turning Data into Dialogue
During the workshop, we practiced using the Data-Driven Dialogue Protocol, a structured process that helps educators engage with data collaboratively and objectively. Too often, our discussions about data can drift into personal interpretation—someone offers a theory, another person defends a practice, and before we know it, we’re debating opinions rather than looking at evidence. A clear protocol keeps us grounded in the data itself.
The Data-Driven Dialogue Protocol offers four deliberate stages that guide participants from curiosity to action:
1. Predictions: What do we expect to see in the data?
This step surfaces our assumptions, hopes, and biases before we even look at the numbers. Naming these expectations openly helps us recognize how they might shape what we notice later.

2. Go Visual: How can we represent the data so it’s easier to interpret?
Creating a visual—charts, graphs, or even color-coded sticky notes—slows us down just enough to process what’s actually there. It also makes the information more accessible to everyone, not just the “data people.”

3. Observations: What do we actually see—without jumping to conclusions?
This is the hardest and most transformative part. Participants are asked to describe only the facts, avoiding words like because or therefore. It can feel awkward at first—almost too mechanical—but that’s exactly the point. The structure forces us to separate evidence from interpretation, clearing space for more balanced, evidence-informed discussions.

4. Inferences: What might explain the trends, and what actions should we take next?
Only after grounding ourselves in the evidence do we move to interpretation. Here we explore causes, connections, and next steps—always keeping our earlier observations in view to ensure our reasoning remains anchored in what the data actually shows.
While the protocol can feel a bit forced or overly structured at first, participants often find that it becomes liberating over time. By removing personal bias and speculation early on, the process builds a shared sense of trust and clarity. Instead of jumping to conclusions or defending positions, we begin to wonder together: What’s really going on here, and what might we do about it?
In short, the protocol turns data conversations into true inquiry—focused, equitable, and productive.
Beyond the Numbers: Using Performance Data
Quantitative data tells one side of the story—but it’s only half the picture. The other half comes from qualitative data: the observations, reflections, conversations, and performance assessments that reveal the lived experiences behind the numbers. In world language programs, this kind of data often shows up in student work samples, classroom interactions, or reflections that capture how learners are using the language in authentic contexts. Among these sources, performance assessments play a particularly important role because they let students demonstrate what they can do with the language—engaging in meaningful, real-world communication rather than simply recalling vocabulary or grammar rules.
During the workshop, we explored how to make sure these assessments truly capture what they are meant to measure. That starts with two essential concepts: validity and reliability. A valid assessment measures what it claims to measure—for example, an interpersonal speaking task should assess students’ ability to engage in spontaneous conversation, not their memorization of scripted lines. A reliable assessment yields consistent results—students should receive similar scores no matter who evaluates them or when the task is scored.
We practiced using a sample Interpersonal Speaking task, built around a real-world scenario, and worked together to norm our scoring using an ACTFL-developed rubric. The norming process—reading, listening, or viewing the same student samples and discussing how to apply the rubric—helps everyone calibrate their expectations. It’s an eye-opening exercise: what one teacher considers “Intermediate Mid,” another might label “Intermediate High,” until they sit side by side and unpack the evidence.
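One simple way to see whether a team is calibrating is to compute the percent of samples where two raters landed on the exact same sublevel. Here is a small sketch with hypothetical ratings (the sample size, labels, and scores are invented for illustration):

```python
# Hypothetical sublevel ratings two teachers gave the same eight
# speaking samples after a norming session (IL/IM/IH = Intermediate
# Low/Mid/High).
rater_1 = ["IM", "IM", "IH", "IL", "IM", "IH", "IM", "IL"]
rater_2 = ["IM", "IH", "IH", "IL", "IM", "IM", "IM", "IL"]

# Percent exact agreement: how often both raters chose the same sublevel.
matches = sum(a == b for a, b in zip(rater_1, rater_2))
agreement = matches / len(rater_1)
print(f"Exact agreement: {agreement:.0%}")
```

Percent agreement is only a rough check—it doesn’t correct for chance agreement the way statistics like Cohen’s kappa do—but it gives a norming team a quick, shared number to track as their scoring converges.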
When teachers use performance data this way, they move beyond test scores to a richer, more human understanding of growth. The numbers tell us how much progress students are making—but performance data shows us what that progress looks and sounds like in real communication.
From Data to Action
Looking at data is only the beginning—the real work happens when we decide what to do with it. During the final part of the workshop, we shifted from analysis to action planning, exploring how to use what we learned to make thoughtful, strategic decisions that strengthen our programs.
We connected our findings to two practical leadership tools:
- Strategic Action Plans, which translate data into clear goals, priorities, and measurable next steps. These plans help ensure that insights don’t end up forgotten in a spreadsheet—they become part of a living roadmap for improvement.
- Stakeholder Engagement Plans, which identify who needs to be at the table, what data matters most to them, and how to invite authentic collaboration. When teachers, administrators, and even students have a voice in interpreting the data, they’re far more likely to take ownership of the solutions that follow.
Taking action with data isn’t about reacting quickly—it’s about responding thoughtfully. The best decisions come from curiosity, reflection, and dialogue. Sometimes the next step is professional learning around an area of need; other times it’s adjusting a curriculum sequence, revisiting assessment practices, or allocating resources differently. What matters most is that each decision is rooted in evidence and shared understanding.
Because at the end of the day, data doesn’t change programs—people do. Our role as leaders is to help our teachers, administrators, and communities see the story the data is telling, and then co-author the next chapter together.
Your Turn
What’s one story your data might be trying to tell you?
What questions are waiting to be asked?
I’d love to hear from you—share your thoughts in the comments or reach out at catherine@teachlearnlead.org.
Let’s keep digging into our data stories—together.
Knight, J., & Faggella-Luby, M. (2024). Data rules: Elevating teaching with objective reflection. ASCD.
Knips, A., Lopez, S., Savoy, M., & LaParo, K. (2023). Equity in data: A framework for what counts in schools. ASCD.