Here's an interesting trend that keeps showing up: a growing number of surveys are based on informal groups of respondents rather than statistically representative population samples. This means the results of these surveys may or may not reflect the broader population being studied. There appear to be three drivers of this trend:
1. Cost and speed - It is much cheaper, easier, and quicker to do surveys informally than to invest in the rigor of a statistically sound research methodology. The appeal of doing 100 quick-and-dirty surveys instead of one statistically rigorous survey is obvious.
2. The availability of easy-to-use Internet survey tools - Tools like Survey Monkey make it easy to run quick, simple online surveys. This has led to a substantial increase in the number of surveys conducted (and results published) that may or may not measure the population the survey is trying to understand. Think of it as a litmus test without really knowing whose chemistry you're testing.
3. There is value in the information produced by non-statistical surveys - While informal survey results are not projectable to a broader population, they can still be useful. They provide something to react to quickly and cheaply, which can help decision-makers and researchers think outside the box. And when time or resource constraints rule out formal surveys, some information is usually better than none.
We see the value of non-statistical surveys and occasionally use them in our own work. We find them useful for surfacing issues and research topics, and for scoping projects in the early stages of our research. We also like that they are quick, easy, and cheap to do.
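To make "statistically representative" concrete: with a textbook simple random sample, the margin of error is a known quantity. Here's a minimal sketch of that standard calculation (the function name and the sample sizes are just illustrative):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample.

    n -- sample size
    p -- assumed proportion (0.5 is the worst case)
    z -- z-score for the confidence level (1.96 for ~95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-respondent random sample is good to about +/-3 points
# at 95% confidence; a 100-respondent sample, about +/-10 points.
print(f"n=1000: +/-{margin_of_error(1000):.1%}")  # ~3.1%
print(f"n=100:  +/-{margin_of_error(100):.1%}")   # ~9.8%
```

With an informal convenience sample, no such number exists: the formula's core assumption, that every member of the population had an equal chance of being selected, doesn't hold, which is exactly why the results aren't projectable.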
Franz and Dawn: Agreed, caution is required. We see a lot of studies where the survey work simply doesn't come close to supporting the study conclusions.
And Dawn, you are right about the media. They often don't differentiate between formal and informal surveys, so informal survey results get treated the same way in the press.
This is a topic we talk about a lot at Emergent Research. Our background and training are in formal research methods, and at first we were skeptical of informal ones. We now see their value and use them, but we still rely on formal methods for most of our work.
Steve
Posted by: Steve | August 07, 2008 at 08:49 AM
Scott, I can certainly see the value of these less expensive, potentially less accurate survey methods for something like helping you design or evolve hypotheses for further research.
The problem comes in when these kinds of results are made public. From what I have seen, folks in the media simply assume that any survey results they are handed are accurate and can be extrapolated to the population at large. Even when results are published responsibly (with appropriate disclaimers), journalists often skip the caveats and indulge in the kind of verbal shorthand that gives people an inaccurate idea of what the research means.
There is also the critical issue of the way research is often used as the intellectual justification for all sorts of public policy initiatives. If the goal is fact-based public policy (which is, I think, a laudable goal), then it is all the more critical that the facts involved are generated from rigorous, accurate research. 'Net-based research can be fun and can make for great sound bites but people should not be drafting legislation based on such sloppy stuff.
Posted by: Dawn Rivers Baker | August 06, 2008 at 07:17 AM
This is not unlike the 90s trend of doing your own analytical work: more data, cheaper and easier methods, and ease of publishing. The trend is real and unstoppable; there is no doubt about it.
During that time I also saw some very poor attempts at getting deep results from scant evidence.
So there is room for caution here. I support more people using and understanding these kinds of studies, but you need a clear view of the object of your study. Is it an attempt to understand the territory, or is it meant to be predictive? There is a big difference. What is the cost of making an error? I would at least talk to someone with experience in the analysis domain before charging forward.
Posted by: Franz | August 05, 2008 at 10:29 AM