Red Flags in Your Data: What They’re Trying to Tell You

Wait, what was the original question?

Pen and highlighter in hand, I looked again at the responses on the spreadsheet. Half of them had nothing to do with the question we had asked on the survey.

I opened the survey to verify the question text. The question was exactly what I thought it was. So why did respondents’ answers make it seem like I was analyzing data from three different questions?

A Closer Look at the Survey Question and Responses

One of my clients runs a summer literacy program staffed by young adults. The young adults are simultaneously part of a program designed to help them think about pursuing careers in service and ministry. As part of my client’s program evaluation efforts, they survey the young adults at the beginning and end of the summer. One of the end-of-the-summer questions was this:

Please share 1-3 ways you grew in your knowledge of your local chapter’s host church community’s unique challenges and assets this summer. (Shared with permission of the client.)

Yes, I know it’s not technically a question; it’s a request. Strictly speaking, it’s a survey item. For the sake of simplicity and general application, I’m calling it a question.

The types of responses fell into three categories:

  1. Lists of facts they learned about their local chapter’s host church community’s unique challenges and assets.
    • The church runs a foodbank.
    • The church uses its facility for community events.
    • There’s a high level of poverty in the community.
  2. Lists of ways they gained knowledge about their local chapter’s host church community’s unique challenges and assets.
    • I spoke to the church’s pastor on several occasions.
    • I observed what the church did during Sunday worship and throughout the week.
    • I built relationships with community members.
  3. Lists of ways they grew over the summer.
    • I grew in my communication skills.
    • I developed new relationships.
    • I discovered that I have a gift for working with children.

Red Flags in the Data

The response types raised not one but two red flags for me. First, several of the people taking the survey did not understand what we were asking in the question. Some thought we were interested in what they had learned. Others thought we were interested in how they learned.

Their misunderstanding was our fault. Our survey question was not clear, and we had not anticipated that it might be interpreted in two ways.

Second, some of the people taking the survey did not read the full question. They responded only to “Please share 1-3 ways you grew,” but that wasn’t the full ask. (We did have a question about personal growth, but it came near the end of the survey.) True, the young adults could have rushed through the survey, not fully reading the question before answering. But I think we had an additional problem.

These responses signaled to me that question placement was a problem. The misread question was the second one we asked about their summer experience. Several of the young adults were eager to rattle off how they grew over the summer, but we didn’t ask them about that until much later in the survey.

Revising the Survey in Light of the Data

Based on the data, I recommended to my client that we revise their survey by changing and re-ordering some of the questions.

We will reword the original question and split it into two items:

  1. Tell us 1-3 things you learned about your host church community’s unique assets this summer.
  2. Tell us 1-3 things you learned about your host church community’s unique challenges this summer.

We’re also moving the question, “What were 2-3 ways you grew this summer?” to be the first question about the summer experience.

Two Takeaways for Survey Developers

This experience reinforced two valuable lessons for me:

First, pilot test your survey. Identify a few people in your target audience who could take your survey and give you feedback on the questions. In this case, we could have asked a few program alumni or some chapter staff to take our survey before we went live.

Second, pay careful attention to the responses. Don’t be in such a hurry to compute statistics or identify themes that you overlook what else the data might be signaling.

In research and evaluation, we want to get the best possible data. That means we need to create high-quality data collection tools from the start and be willing to refine them over time. Equipped with good data, you can make wise decisions that will lead to offering exceptional educational programs.