
How many residents do you need to hear from?

The short answer is about 200 to 600 – learn more below.

[Interactive calculator: enter a number of responses, or move the slider to a margin of error. An Advanced Calculator is also available – you don’t need to use it; see why below.]

What percentage of your population do you need to hear from?

The main question we get about survey input is: “What percentage of our population do we need to hear from?”

The answer is always a surprise… it isn’t a percentage. The percentage of your population doesn’t matter to statistics. Only the total number of responses matters.

A good target range is about 200 to 600 people – no matter the size of your population – if you have a valid scientific sample.

Hard to believe, but it’s true whether you have 10,000 people or 10 million in your community.

But this only works if you have a scientific “unbiased” sample of responses from people who are not “self-selected” or correlated with the survey topic. Here’s why.

Valid Data In a Nutshell

You can get a good representation of what your community thinks with a surprisingly small number of responses – if that small number of responses is statistically representative of the whole population.

This means that the responses in your sample have to look like the responses you would get from the people not in your sample. Your set of respondents needs to be a “valid” representation of the whole community.

In other words, the respondents in your sample and their interests can’t be different in meaningful ways from all the people (and their interests) not in your sample.

You’ve probably heard of “margin of error”. This tells you how close your sample should be to the true result you would get if you could get responses from the whole population. Our calculator above shows that it takes 196 people to get a 7% margin of error, 267 people for a 6% error, 384 people for a 5% error or 600 people for a 4% error.
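If you want to check those numbers yourself, here is a minimal Python sketch of the standard sample-size formula, n = z^2 * p(1-p) / e^2, assuming the conservative p = 0.5 and z = 1.96 for 95% confidence (the same assumptions behind a typical margin of error calculator):

    # Standard sample-size formula: n = z^2 * p(1-p) / e^2,
    # with p = 0.5 (the worst case) and z = 1.96 for 95% confidence.
    def sample_size(margin_of_error, z=1.96, p=0.5):
        return round(z**2 * p * (1 - p) / margin_of_error**2)

    for e in (0.07, 0.06, 0.05, 0.04):
        print(f"{e:.0%} margin of error -> {sample_size(e)} responses")
    # 7% -> 196, 6% -> 267, 5% -> 384, 4% -> 600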

The larger the number of responses, the lower the margin of error. But the number of responses isn’t the whole story – the margin of error is really just the minimum possible error, before you get to potential self-selection or question quality problems.

If you ask people to take a survey and enough of them decide to participate (or not) because of their interest in the topic, then you have a self-selection problem. This makes your sample “biased” compared to the broader population – even if you randomly selected the people you asked for input and even if your sample has the same demographics as your population.

It doesn’t take much self-selection to make your sample of responses much different from what the rest of your population would say.

Just think of who comes to meetings or posts things online, and you’ll recognize the self-selection problem in all your public input and engagement. You can’t cure it by getting more self-selected responses, either. Self-selection bias turns survey data into junk, even with hundreds or thousands or millions of responses.

In fact, one of the dead giveaways that you have self-selection is when you see a bump or spike above your normal engagement level. That’s when you can be sure that people are actively recruiting like-minded friends to be heard.

One simple way to remember this is that “engagement” is about hearing from people especially interested in a topic. This can be useful for issues where that’s who you want to reach, but it’s never something you can generalize to a whole community.

Meanwhile “scientific surveys” are about hearing from people not especially interested in a topic. The topic and questions go to them (and they participate at high rates), not the other way around. This is what allows you to generalize from what a (scientific) sample thinks to what the community as a whole thinks.

Finally, besides the number and representativeness of the responses, you need to worry about how the input is structured. You need good questions and answer choices that tap into what residents actually know and that cover the right set of choices. Even if you have a large number of perfectly representative responses, having biased questions or unbalanced answer sets or missing choices (or 20 other mistakes) can still make your data useless.

So, to summarize, you need all three of these things for statistically valid community input:

1) A large number of responses

2) Not self-selected to the topic

3) Answering well-structured questions

That’s how you get highly representative input from a few hundred responses.

Why These Three Things? – Explained with M&Ms

1) You need a large number of responses (200 to 600)

Whether you have 10,000 people or 10,000,000 people in your community, a statistically valid sample of 600 people gives the same 4% margin of error. Our calculator above gives you the standard margin of error with a 95% confidence level. For 600 responses, this means that 19 out of 20 times (95% of the time) when you take a sample, the data from your sample will be within +4% or -4% of the true answer.
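Here is a quick simulation sketch of that “19 out of 20” idea, assuming a large population where the true share is 50% (the hardest case to measure):

    import random

    # Draw 10,000 independent samples of 600 people and count how often
    # the sample estimate lands within 4 points of the true 50% share.
    TRUE_SHARE, N, TRIALS = 0.50, 600, 10_000

    within_4pts = 0
    for _ in range(TRIALS):
        yes = sum(random.random() < TRUE_SHARE for _ in range(N))
        if abs(yes / N - TRUE_SHARE) <= 0.04:
            within_4pts += 1

    print(f"{within_4pts / TRIALS:.1%} of samples landed within +/-4 points")
    # Typically prints roughly 95% -- about 19 samples out of 20.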

The population size only starts to matter when the total population drops below a few thousand people, and small populations just make the margin of error slightly smaller. Try our advanced calculator above if you don’t believe this or want to play around with confidence intervals.
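For the curious, this is what the finite population correction looks like in a short Python sketch (again assuming p = 0.5 and 95% confidence); note how little the population size changes the result for a 600-person sample:

    import math

    # Margin of error at 95% confidence, with the finite population
    # correction factor sqrt((N - n) / (N - 1)) applied.
    def margin_of_error(n, population, z=1.96, p=0.5):
        moe = z * math.sqrt(p * (1 - p) / n)
        return moe * math.sqrt((population - n) / (population - 1))

    for pop in (2_000, 10_000, 10_000_000):
        print(f"N = {pop:>10,}: {margin_of_error(600, pop):.2%}")
    # N =      2,000: 3.35%  <- small populations shrink the error a bit
    # N =     10,000: 3.88%
    # N = 10,000,000: 4.00%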

To better understand why total population doesn’t matter, think of yourself randomly picking M&Ms out of a big 5-pound bag and keeping track of what percent you have of each color as you group them on a table. As you go from a few M&Ms on the table, to dozens, to hundreds, you start to notice that the percentages for each color aren’t changing much with each additional M&M.

If the bag that you picked from happened to be ten times bigger, or as big as the room you are in, nothing would change. All that matters is that you have picked out a certain number of M&Ms from whatever size bag. You are trying to measure the percentages in the bag, which can be the same no matter how big or small the bag is.
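You can simulate the same thing without buying any candy. This Python sketch builds bags of very different sizes with an assumed color mix (exactly 24% blue) and picks 600 M&Ms at random from each:

    import random

    # Bags of wildly different sizes, each exactly 24% blue (an assumed
    # mix), sampled 600 at a time.
    BLUE_SHARE = 0.24

    for bag_size in (2_000, 200_000, 2_000_000):
        blues = int(bag_size * BLUE_SHARE)
        bag = [True] * blues + [False] * (bag_size - blues)
        picked = random.sample(bag, 600)
        print(f"bag of {bag_size:>9,}: {sum(picked) / 600:.1%} blue in the 600 picked")
    # Every bag size lands within a couple of points of 24% --
    # only the 600 picks matter, not the size of the bag.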

Still not convinced? Here is a video where we did this with 600 M&Ms.

2) You need responses that are not self-selected to the topic

How you get your sample of responses matters for statistical validity. If people self-select themselves or self-organize themselves to give input based on the topic, the bias from self-selection will overwhelm the margin of error. You end up with a good representation of an unrepresentative group.

We had one customer with online survey data (489 responses) that told them 85% would pay for a community center. Putting 489 responses in the calculator gives a 4.4% margin of error, so you might think the true result is about 81% to 89%. Not so fast. They did a FlashVote scientific survey on the same topic and found that only 33% would pay – completely different, and on the opposite side of a majority from 85%.
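The arithmetic makes the problem vivid. If you treat the scientific survey’s 33% as the true value (an assumption for illustration), the self-selection bias is more than ten times the margin of error:

    import math

    # Margin of error only bounds random sampling noise -- it says
    # nothing about self-selection bias.
    n = 489
    moe = 1.96 * math.sqrt(0.25 / n)   # ~4.4% at 95% confidence
    online_share = 0.85                # self-selected online survey
    scientific_share = 0.33            # scientific survey, same topic

    print(f"margin of error for {n} responses: {moe:.1%}")
    print(f"self-selection bias: {online_share - scientific_share:.0%}")
    # A 52-point bias dwarfs a 4.4% margin of error.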

So why does this happen? Self-selection afflicts all traditional public input like meetings, phone calls, emails, social media, online surveys or other online engagement. People follow an issue, find out about an opportunity to give input and decide to participate at higher rates based on their interest in the specific topic. Then they share it with their like-minded friends. Then, if they are motivated, they can participate multiple times and in multiple ways.

The worst part is that you can’t know how bad your data is unless you have good data too. So most people don’t even realize that the input they are getting is unrepresentative.

Suppose you are picking out M&Ms from a bag that someone previously emptied and filled with all green ones. It doesn’t matter how many you pick out; you’ll see that all M&Ms are green. Without a regular bag for comparison, of course you end up thinking that all M&Ms are green. And why wouldn’t you?

But that’s why, if you have 100 people come to a meeting “against” something, all you know is that 100 total people don’t like something (and they probably organized themselves to show up as a group). You don’t know anything about what the rest of the community thinks based on that.

In fact, the varying or spiking attendance for meetings or online polls is proof you are seeing self-selection by topic. The hotter the topic, the worse the data. But if you had a mere 100 regular people who were not self-selected tell you what they thought, you’d actually have a 9.8% margin of error – not bad at all – when generalizing to the whole community.

3) You need unbiased questions and answer choices

Suppose you have a huge number of randomly selected people giving input at a 100% response rate – an ideal sample. You can still get junk data by using bad questions. The most common problems are leading questions (“How great are we?”) and unbalanced answer sets (“Great OR Really great”). Those are just 2 of the 23 quality control checks we use to cover everything from readability to requiring trade-offs for context.

There is a lot more to say about good questions, but let’s just say that if you randomly picked out M&Ms in a totally dark room, you would have real trouble seeing and recording the colors correctly.

Summary

You can think of the “margin of error” as the best case, lowest possible error you can have. It is the “minimum error” you get with a perfect sample (of whatever size). Other sources of sample bias and error can easily overwhelm that.

If your respondents are self-selected to a topic, the margin of error is irrelevant because you are not sampling the population – you are sampling people with the most interest in the topic. That’s more like a petition than a scientific survey. And that’s the difference between engagement and scientific survey data.

Similarly, if you have bad or biased questions your data will be unreliable, even if your questions are answered by a perfectly random and large sample with a tiny margin of error.

In the case of traditional offline and online public input, our research shows that the noisy, self-selected few are not just unrepresentative – they are usually the opposite of what the broader community wants. This is why every local government needs representative community input. And why traditional public input can make decisions worse.

This is also why we created FlashVote to give you statistically valid community input in 48 hours – so you can finally hear from the many and serve the many, not just the noisy.
