Why screeners often fail

03.12.2025 | Articles

Where does the right selection of participants really begin? Why screeners often fail — and how a personal approach significantly improves market research results.

Recruiting study participants has always been a critical issue in market and social research. In professional circles, one often reads about the “quality” of samples, the clear delineation of target groups, and the “right” segmentation. But one aspect is surprisingly often overlooked: the actual selection of the people who are surveyed does not begin when a screener is sent out – it begins earlier and often ends later than one might think.

Screeners are still considered a “guarantee” that participants correspond exactly to the desired target group. The prevailing belief here is often: Anyone who passes the screener is suitable. Period. But anyone who recruits in practice knows that it’s not that simple.

Ms. Berghoff, many clients see the screener as a crucial tool. In your opinion, where does the actual participant selection begin?

Helena Berghoff: The selection process begins long before the screener—namely, with understanding the objective. I always ask clients first: What do you really want to find out? Only then can we determine which people we need. A screener is merely a translation of this objective, not its origin. Before writing a single screener question, you should ask yourself the following questions:

  1. What question should the research answer?
  2. Who is best placed to answer this question?
  3. Which behavior is truly relevant — and which is not?
  4. What level of diversity within the target group is desirable?
  5. Which individuals might be atypical but provide particularly valuable input?

Only once these questions have been clarified can recruitment begin in a meaningful way. A good screener is not a strict filter, but rather a guide that leaves room for interpretation. And a good recruitment process ensures that people — not data — make the final decision.

What are some typical problems you see in clients’ screeners?

Helena Berghoff: Incorrectly worded questions, unclear exclusion criteria, or overly rigid specifications mean that formally suitable participants can be found, yet the recruitment misses the actual substance of the study. At the same time, relevant individuals are excluded: people who, according to the data, do not fit the profile but would contribute valuable insights.

In your opinion, what are the most common reasons why screeners fail?

Helena Berghoff: Screeners are often developed by clients who believe they know their market or customers well, but the resulting screeners do not always reflect how diverse actual behavior is.

Many screeners are based on internal segment definitions derived from marketing or CRM data. However, only the experts understand these terms, not the participants, which makes participants’ answers unreliable.

The more complex the screener, the higher the likelihood that participants will try to answer “correctly” – especially if the study offers incentives.

A single checkmark in the wrong place can push highly relevant individuals out of the running. Those who filter exclusively based on formal criteria often lose precisely those people who could plausibly explain the “why” behind their behavior.

The result is often that it is not the “best” participants who are found, but those who are “best pre-selected.”

Many companies today are strongly data-oriented. How do you assess this development?

Helena Berghoff: Data from CRM, customer segmentation, transaction behavior, or tracking tools serve as the basis for recruitment profiles. This sounds efficient—and it can be—but data can distort the picture because, although it is objective, it is never complete.

  • Data records reflect past behavior, not future needs.
  • They contain no nuances: no hesitation, no frustration, no motivation.
  • People are reduced to a few characteristics — often ones that are only of limited relevance to qualitative research.

A practical example:
A client wants to survey “heavy users” of a product and bases the selection on sales or usage data. Although this data clearly shows who has purchased or used the product particularly frequently, it does not reveal who has since turned away, who is dissatisfied with the product, or who is about to switch providers.

However, it is precisely these individuals who are often particularly valuable in qualitative studies because they reveal barriers, fault lines, and undiscovered weaknesses — aspects that pure usage data cannot capture. If you rely too heavily on data, you lose these voices — and with them, valuable insights.
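To make this concrete, here is a minimal sketch in Python, with invented field names such as purchases_12m and last_purchase: a filter based purely on purchase frequency happily includes a “heavy user” who has long since turned away, and only an additional recency check flags that person for a closer look instead of treating the data as the final word.

```python
from datetime import date

# Invented example data: purchase counts and the date of the last purchase.
candidates = [
    {"id": "A", "purchases_12m": 14, "last_purchase": date(2025, 11, 20)},
    {"id": "B", "purchases_12m": 11, "last_purchase": date(2025, 3, 2)},  # has since turned away
    {"id": "C", "purchases_12m": 2,  "last_purchase": date(2025, 11, 28)},
]

# Frequency-only filter: B looks like a perfect "heavy user".
heavy_users = [c for c in candidates if c["purchases_12m"] >= 10]

# An additional recency check shows that B has gone quiet for months --
# exactly the kind of person worth a personal follow-up rather than an
# automatic decision based on the usage data alone.
today = date(2025, 12, 3)
lapsed = [c for c in heavy_users if (today - c["last_purchase"]).days > 90]

print([c["id"] for c in heavy_users])  # ['A', 'B']
print([c["id"] for c in lapsed])       # ['B']
```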

You advocate for face-to-face interviews during the recruitment phase. Why?

Helena Berghoff: Data shows behavior, but not attitude. While a screener often only allows yes/no answers, a phone call, video call, or preliminary interview allows for a genuine assessment. In conversation, you can sense things that would never be apparent in a questionnaire. Whether someone responds thoughtfully. Whether someone is honest. Whether someone can describe their own experiences in a way that is valuable to the study.

An experienced recruiter recognizes nuances that no screener in the world can detect: uncertainties, ambivalence, special usage scenarios, or even misunderstandings about the category — all of which can indicate that a person will provide particularly valuable insights. Sometimes someone is “perfect” according to the screener – and in conversation you realize that they have hardly anything to say. Conversely, the most exciting interview partners are often precisely those who would have “failed” according to the screener.

What role does the use of AI play in this?

Helena Berghoff: Artificial intelligence helps us to process large amounts of data more quickly. We use it for pattern recognition and initial orientation. For us, best practice is not to leave the entire recruitment process to AI, but to use it as a supplement:

  • The AI performs rough filtering based on segments, behavior, or data patterns.
  • Humans do the fine-tuning — in personal conversations, with experience and contextual knowledge.
  • Follow-up phone calls are extremely important — the more digital the processes become, the more valuable direct human validation is.

This combination creates a recruitment logic that is both efficient and high-quality.
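A minimal sketch of what such a division of labor might look like, with simple rule-based scoring standing in for the automated stage (all field names, weights, and thresholds are invented): the machine only produces a ranked shortlist, and every shortlisted person is routed to a human screening call instead of being admitted automatically.

```python
def rough_filter(candidates, segment, min_score=0.6):
    """Stage 1 (automated): rough pre-selection based on data patterns."""
    shortlist = []
    for c in candidates:
        score = 0.0
        if c.get("segment") == segment:
            score += 0.5
        if c.get("uses_category_weekly"):
            score += 0.3
        if c.get("answered_open_question"):  # rough proxy for willingness to elaborate
            score += 0.2
        if score >= min_score:
            shortlist.append({**c, "score": score})
    return sorted(shortlist, key=lambda c: c["score"], reverse=True)


def to_call_list(shortlist):
    """Stage 2 (human): nobody is admitted by the machine; a recruiter
    makes the final decision in a phone or video conversation."""
    return [{"id": c["id"], "next_step": "schedule_screening_call"} for c in shortlist]


candidates = [
    {"id": 1, "segment": "smart_home", "uses_category_weekly": True,  "answered_open_question": True},
    {"id": 2, "segment": "smart_home", "uses_category_weekly": False, "answered_open_question": False},
    {"id": 3, "segment": "other",      "uses_category_weekly": True,  "answered_open_question": True},
]

print(to_call_list(rough_filter(candidates, "smart_home")))
# [{'id': 1, 'next_step': 'schedule_screening_call'}]
```

The point of the sketch is the hand-off: the automated score only decides who gets a call, never who gets into the study.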

What advice would you give to clients who want to improve their recruitment?

Helena Berghoff: The industry is currently at a turning point. Data and AI simplify processes, but at the same time, there is a growing risk of viewing people merely as data sets. However, good qualitative research thrives on real stories, experiences, and perspectives. Rigid screener thinking often prevents precisely this diversity.

Better participant selection begins when we view screeners as tools rather than filtering machines. Use data for guidance, but not for final decisions. Use AI, but always with human validation. And place greater emphasis on personal conversations again.

Our experience shows that when we take this holistic, personal approach, we get the participants who really help us move forward — not just those who look good on paper.
