> David, while you seem to have (often well-informed) opinions
> about most every market on this board, you apparently
> don't even understand the basics of research.
> Focus groups are a qualitative method. The release even
> used the word "qualitative." If this "consultant" is asking
> questions in an "identical fashion" like in a questionnaire
> and tabulating responses, he is not doing qualitative
If you do enough interviews in a focus group or personal interview setting to reach the point of replicability, you can tabulate the responses. If you do 6 to 8 focus group projects and, within them, get specific responses from every participant on certain questions, that data can be used for a quantitative tabulation.
Companies that use electronic gathering methods such as dials or touchpads often start a project by gathering responses to a question set that allows the qualitative data to be better understood. At the same time, if enough interviews are done, you have attributable answers from each respondent.
Further, if audio (it is radio, after all) is used to test talent, music blends, commercials, etc., then one can get EKG-type readings at the respondent level but combine them for an overall view of common and dissimilar reactions, based on any subset that can be created using other questions as a base, such as age, sex, radio usage, ethnicity, etc.
In other words, the only reason a focus group could not be used to gather quantitative data is if there is not a large enough participant pool to create a replicable, useful sample.
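The tabulation point above can be sketched in a few lines. This is a hypothetical illustration, not any company's actual method: once every participant across the pooled groups has a recorded answer to the same fixed-wording question, the pool tabulates like any survey question.

```python
from collections import Counter

# Hypothetical per-respondent records pooled from several focus groups.
# Each tuple: (respondent id, answer to one fixed-wording question).
responses = [
    ("g1-01", "yes"), ("g1-02", "no"), ("g1-03", "yes"),
    ("g2-01", "yes"), ("g2-02", "yes"), ("g2-03", "no"),
]

# Because every participant answered the identical question, the pooled
# answers can be tabulated quantitatively.
tally = Counter(answer for _, answer in responses)
print(tally)  # Counter({'yes': 4, 'no': 2})
```

With dials or touchpads the recording step is automatic, which is what makes the quantitative component of a qualitative study cheap to capture.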
> The broadcasting industry is known for its tendency to hire
> research hacks, unqualified individuals who are able to
> sell themselves as "consultants" or "gurus." (Same can be
> said of politicians.) There is a built-in conflict of
> interest when a consultant attempts to do research.
Funny, but the research companies most used in the industry are not programming companies... Critical Mass, Coleman, Paragon, Harker, Edison, Mark Ramsey, Pinnacle, etc. All offer research guidance, such as insight into interpretation.
In fact, in radio situations the biggest issue is interpreting what the listener has said. And most of the prior work has to do with the supervision of recruiters, preparation of test material, selection of test venues so that they are located appropriately for the desired sample, etc. Very little work involves project design, as radio research is mostly about music testing, testing of formats, and testing of talent and overall image. Once you have designed the basic products, it does not matter what association you are a member of.
> Consultants have an axe to grind. What they sell is their
> expertise in the industry. A good qualitative researcher
> knows research but otherwise comes in to a study with a
> blank canvas - knowing nothing.
Folks that have tried that generally fail. The reason is that we have a rather bizarre model. We do NOT program for the audience, but for the rating service that measures the audience. We do not want research virgins in the sample; we want research-friendly individuals. And we do not select our samples with total randomness, because we are attempting to match Arbitron in certain target demos, length of radio usage, age, sex and ethnic balance. Getting that across to a company that has no knowledge of radio would take a great amount of time.
Radio research companies are set up to efficiently and quickly measure the key items on a station's research checklist. An AMT (auditorium music test) is more about the recruit than anything else. An outsider does not have the intuitive, experience-based understanding of the dynamics between P2 and P1 listeners, for example. A radio-specific company does.
> In his website, consultant Harber states as his
> qualifications that he has worked in the radio industry and
> sat on the other side of the mirror. Translation: He
> watched some focus groups and figured he could do that.
I know Harker. He took time off and learned, because he realized there were stations doing research that did not use the data well because there was a missing step... the bridge between data and programming.
Most of the really good researchers in radio came from programming, in fact. The techniques of sample building are relatively simple. Any reasonably intelligent person can teach themselves how to use SPSS and run cluster and factor analyses. The sample frame was already designed by Arbitron, so we mimic it.
> Professional associations in research are not for soliciting
> business, they are for professional development. In fact
> many, like the QRCA (Qualitative Research Consultants
> Association), specifically bar those on the client side from
> events and membership.
Yep. They restrict entrance so companies will not form in-house research divisions. The catch is that if you do it in-house and have competent people, you do not need to belong to those associations.
> The size of the sample matters far less than how the sample
> is drawn.
I already said that. However, if you are going to use a primarily perceptual method, such as focus groups (a party without the booze) or one-on-ones (far more reliable), to get quantitative data, then you need a sample size that is replicable. So, in addition to getting the right people, you need enough people.
A radio research company can do studies where they look at parts of the sample... such as "every third person" or "the first 50," etc., and can compare across multiple projects to see where the sample is "big enough" for the desired purpose. They can even do testing to determine the optimal hook length, do reverse-order hook testing to determine where fatigue sets in, and whether it is a function of format lifestyle, age, sex, P1 or P2 level, etc. No non-radio research company is going to be able to do this economically.
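The "every third person" check described above can be sketched as follows, with made-up scores for illustration: compare the mean score of each cut against the full sample's mean, and when the cuts track the full sample closely, the sample is arguably big enough for that purpose.

```python
# Hypothetical 1-5 test scores for one song hook, one row per respondent.
scores = [4, 3, 5, 4, 2, 4, 3, 5, 4, 4, 3, 5, 2, 4, 4, 3, 5, 4]

def mean(xs):
    return sum(xs) / len(xs)

full = mean(scores)
every_third = mean(scores[::3])               # "every third person" cut
first_half = mean(scores[:len(scores) // 2])  # "the first 50" idea, scaled down

# Small gaps between a cut and the full sample suggest the sample is
# stable for this purpose; large gaps suggest recruiting more people.
for label, cut in [("every third", every_third), ("first half", first_half)]:
    print(label, round(abs(cut - full), 2))
```

Repeating this across multiple projects, as the post describes, is what turns a rule of thumb into an empirically grounded cutoff.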
> This is statistics 101. You should know better.
> A collection of "group interviews" (I won't dignify what
> Harper did by calling them "focus groups") can not be
> considered a random sample and therefore can not be
> considered statistically valid.
If the total participant pool in the group of groups is reflective of the universe under study, you certainly can perform quantitative analysis on the results, as long as a response is recorded for each individual. This is, again, why I like using the dial or touchpad to get responses on non-interpretative questions.
> An additional element of
> bias, unless "respondents" wrote down their answers to
> Harper's "questionnaire," respondents were likely influenced
> by each other. That's useful in a true exploratory focus
> group but not in a survey you would "tabulate the same way
> you would tabulate a phone, in home or intercept perceptual
But if the data is recorded for each respondent (paper, to me, is tedious, and often carries a literacy/"likes to write" bias), then you have a quantitative component to a qualitative study. At that point the issue is, assuming proper recruiting, whether you have "enough" respondents to make any cell you look at reasonably stable and reliable.
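The "enough respondents per cell" concern can be made concrete with a small sketch (hypothetical data and an illustrative cutoff, not a real study's rule): group per-respondent scores by demographic cell and flag any cell too thin to trust.

```python
# Hypothetical per-respondent records: (age cell, sex, 1-5 score) as
# captured on dials/touchpads, one row per respondent.
rows = [
    ("25-34", "F", 4), ("25-34", "F", 5), ("25-34", "M", 3),
    ("35-44", "F", 4), ("35-44", "M", 2), ("35-44", "M", 4),
    ("25-34", "F", 4),
]

MIN_N = 3  # illustrative cutoff; real cutoffs depend on the study design

# Group scores by (age, sex) cell.
cells = {}
for age, sex, score in rows:
    cells.setdefault((age, sex), []).append(score)

# Report each cell's size and mean, flagging cells below the cutoff.
for cell, cell_scores in sorted(cells.items()):
    n = len(cell_scores)
    avg = sum(cell_scores) / n
    flag = "" if n >= MIN_N else " (too small to trust)"
    print(cell, n, round(avg, 2), flag)
```

This is the sense in which recruiting and sample size interact: the total pool can be adequate while an individual cell (say, 35-44 males) is still too small for reliable conclusions.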
> Apparently the radio industry seeking to find researchers
> who understand radio has decided to hire consultants who
> don't understand research. For many radio clients, their
> idea of "understanding radio" is someone who agrees with
> their biases and pre-conceptions, and tells the client what
> he wants to hear. Maybe the poor quality of audience
> research - especially qualitative (exploratory, diagnostic
> and motivational) studies - is a big part of the reason why
> terrestrial radio is losing its audience.
In many ways, you are attributing a non-existent problem to unnecessary research. Radio stations do not research the radio industry. They research the specific issues of each station. No local station, needing to make ratings and revenue goals, can possibly worry about what will happen to radio 10 years from now.
When we talk about "radio" as opposed to "a radio station," the issues change. But the fact is that nearly the same percentage of persons listen to radio today as when Arbitron started, and TSL (time spent listening) per person today is within about an hour of what it was in 1950...
The researchers in radio are pretty good at what they do. I have seen a good programming team in a previously unresearched large market (over 40 stations) take research from one of the companies I named and hit #1 in less than 30 days. If the research were bad, the station would have had no impact or limited impact.
I have also seen a format developed entirely by research (one of those "that will never work" ones...) become the fastest-growing and most successful format of the last year in a bunch of top 10 and top 20 US markets. Only because the researcher knew research and radio could this one have been done, because a call was made to repeat the study, zeroing in on findings in the first one that were less than obvious, creating, in the aftermath, a new format. An outsider would not have seen it, ever.