> David and KM: I did not intend a personal attack and I
> And I do understand how it's done in radio. I just don't
> agree it is how research should be done. It's a flawed
> method. Bad research and over-reliance on consultants may
> be a big part of the reason terrestrial radio is in such bad
What method is flawed? Take an AMT, with the usual procedure of recruit specs, recruit, test and analysis. What statistically or procedurally is wrong with the system?
One of the flaws of your argument is that you believe researchers in radio are also program consultants. They are not. Researchers provide research, and also provide orientation in using it. That would be pretty much what any custom project in any field would entail. The only place where you are not going to get interpretive assistance or orientation would be products like syndicated research, pantry checks, omnibus surveys, etc., which are like rental cars: you go along for the ride, but you have to know how to drive to use them!
> The laws of statistics and probability are the same whether
> one is dealing with radio or toothpaste.
But the realities of recruit specs and recruiting methods are unique. Add in the fact that one is generally trying to duplicate the type of panel or group that Arbitron would or could recruit, and you need dedicated radio specialists.
Also take into account that the costs are very low, and most non-radio companies would have no interest in putting in the hours needed to learn the business for a couple of $30K AMTs a year. Radio researchers have all the structure down, know how to find recruiters, and have all the forms, equipment and experience.
> Sample size in the example given is not adequate for a
> reliable sample. Minimum sample size in a random sample at
> the 95% confidence level (plus or minus 5%) would be 403.
> At this sample size you are talking about something like 80%
> (plus or minus 20%).
The fact is that we are dealing, in an AMT, with 80 to 100 persons. This is likely more bodies than a station will have diary mentions in a month in a medium market. Remember, you are not testing the market; you are pre-screening for users of the product or the kind of music your product represents. And you are probably specifying an age span that represents 75% or less of the format's full age span, focused on the median. In other words, in the group you sample, you are getting a better sample than Arbitron will have for your station.
No station could afford a 400-person AMT. It would cost about $100,000, and not be $75,000 better than the 100-person test. For broader-playlist stations, the cost could be as high as $140,000. And that is one test... with major markets testing 3 to 4 times a year, we are talking about over a quarter million dollars in extra costs... in the top 10 markets, that is as much as 5% of the billing of an average station.
The fact is, I can run one 80-person test after another against the same specs and get song scores so close as to not matter, no matter how many times I test. In fact, I can do two 50-person sessions and find that the scores vary by less than 5% from session to session, mostly attributable to fatigue (which is why reverse-order pods and split sessions are the rule I apply).
Considering that a market between #10 and #15 has about 2,500 diaries over 12 weeks spread across an average of 40 rated stations, the AMT with 100 screened listeners is actually of higher confidence than the ratings survey.
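As a rough sanity check on the numbers being argued here, a quick sketch, assuming simple random sampling and the worst-case proportion p = 0.5 (which neither a screened AMT panel nor a diary sample strictly satisfies, so treat the outputs as ballpark only). Note that 2,500 diaries over 40 stations works out to roughly 62 diaries per station, fewer than the 80 to 100 AMT respondents:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# ~62 diaries/station, typical AMT sizes, and the 403 figure quoted above
for n in (62, 80, 100, 403):
    print(f"n = {n:4d}: +/- {margin_of_error(n):.1%}")
```

This reproduces the roughly +/- 5% margin at n = 403 that the quoted post cites, versus roughly +/- 10-12% at AMT panel sizes, so the disagreement is really about whether a tightly screened 80-100 person panel needs survey-grade margins at all.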
> But what we are talking about here is not even a
> statistically valid and projectable sample. How the sample
> is drawn is crucial. Focus group participants are largely
> self-selected: They agreed to come. They were recruited by
> local facilities from their internal data bases of past or
> likely participants (not recruited at random from the
> population at large).
At some recruiters this is true. There is an issue that previous respondents to other projects are research-friendly, so they would also be the same kind of respondent Arbitron gets. I have done two blind tests: one with a panel recruited from lists, using a screener given to the recruiter and reconfirmed directly and by "trick questions" in the test, which I compared against a random recruit using RDD or SSI lists of the market, which are not stratified. The results are absolutely, to the song, the same, as long as the people you invite meet the recruit specs. However, the costs of a random recruit are much higher. An alternative is street or mall intercepts, but then there is a larger geographic bias than the built-in one based on how far one has to drive to a phone-recruited test location.
> Administering a questionnaire (and tabulating the results)
> may be common practice (in broadcast audience research and
> in market research - the two are not that different). It
> may even be expedient. But the result is bastardized
> research. Not only are the numbers not reliable or valid
> (due to sampling issues), but a questionnaire before the
> group alters the results of the group discussion; results of
> a questionnaire after the group discussion are altered by
> the discussion.
This is true only if you key in awareness of an issue you want blindness on. However, if you ask, beyond standard demographics, about favorite station, hours listened, morning show usage, etc., you may actually focus the respondent on their behaviour in a positive way. After all, you do not generally recruit non-listeners unless you are launching a new station. And for a new station, you would likely test the listeners of the closest competitor(s) against their listeners and then follow with an AMT.
> Group dynamics are the unique and essential feature of focus
> groups. I agree with your overseas friend. Most projects
> done using focus groups should be done using individual
> depth interviews. If a client is worried about peer
> influence in a group, they should not be doing groups.
Generally, radio is listened to on an individual basis, unlike much TV viewing, which is often done in group settings. This makes it really appropriate to do one-on-one interviews, as a group dynamic is not desirable.
> Ernest Dichter, the father of motivational research, did not
> believe in doing groups at all.
Neither do I, and I have not done one for 20 years. I discourage those who want to do them, too. I do 20 to 25 one-on-one projects a year, on the other hand.
I have watched enough focus groups to know that even the best moderators have trouble with alpha males, chatty females and all the other types. This makes me very skeptical of the results, and of the participation of the more timid, in such settings.
That said, I have used ATU settings to draw a group after each session for a moderated chat, pulling a group with very specific characteristics that will add depth to the findings of the questions in the ATU. Depending on the moderator, these can help in understanding the findings of the quantitative questions. Obviously, the moderator must have an understanding of radio, right down to the artist level, to be able to pick up on verbal cues.
> I wouldn't go that far.
> However, the word of mouth occurring in groups can be
> invaluable to uncover and explore hidden or latent issues -
> to get answers to the questions the client did not know to
> ask or know how to ask effectively.
In radio, you have segmented usage. That is, people will listen in different dayparts, for different reasons, and in different locations. In a focus group, 2 out of every 10 will not be morning radio users, yet you would need morning users in general. But those 2 cannot contribute to the discussion and may even bias it. One-on-ones are far better.
> The term focus group (which is applied to so many different
> activities and approaches as to become meaningless)
> originated in the 20's with John Watson, the father of
> behaviorism, who was doing propaganda studies. He did group
> interviews in which he showed groups a film (he had the
> group "focus" on a communication) - with before and after
> questionnaires to see how their attitudes on the topic had
> changed. In the 50's, shrinks got into the field and
> started doing focus groups for ad agencies who wanted to
> figure out how to use unconscious urges to sell something.
> But the name "focus group" was retained. In the 70's, the
> MBA's took over and focus groups became a way to do quick
> and dirty CYA research, as well as an opportunity for
> marketing people to take a junket. Research done for
> broadcast client is pretty much the same; the only
> difference is jargon consultants use to dazzle broadcast
Focus groups are not too widely used in my experience. The bulk of radio research is on music, consisting of AMTs and call-out on a regular and ongoing basis, respectively. Focus groups, perceptuals and derivatives are used when there are issues such as format searches, competitive battles, or morning show / talk issues.
The key issue is not just the recruit but the moderator. I have seen fine recruits spoiled by unintuitive moderators, or by moderators who did not understand radio. One radio research company I know of has their own moderator on staff because they will not accept the typical facility-provided moderator or a station staffer who will "lead the witnesses."
> Like broadcasters, most marketing people think their product
> category is different and requires some kind of special
> expertise. Bull! All research deals with people and the
> same people who listen to radio are the ones who brush their
> teeth - and they make decisions in the same ways whether
> tuning a radio or grabbing a toothpaste package off a store
A person who has reviewed diaries, talked to many listeners, and knows the music and programming is infinitely better at questionnaire design, recruit specs, moderation, or anything else in a project than an outsider. Outside research companies will not take the small projects of a radio station, since they offer little profit and the learning curve is so steep.
Start by explaining what a P1 is and how one would go about recruiting them...
> David Ogilvy said, "Most people use research the way a drunk
> uses a lamp post. More for support than illumination" Most
> broadcast and marketing clients want researchers who share
> their frame of reference, who will approach the study based
> on the client's assumptions, biases and preconceptions, and
> who will tell the clients what they want to hear - and tell
> them that the clients are right to do what they had already
> decided to do (but now it's OK because the consultant did
> some research and told us it was the right thing).
The research company is not the consultant. It provides data, and advice and caveats on its use and interpretation. Considering that most radio research is about what songs to play and how often, or what topics to talk about and until when, there is no inherent bias brought to the table by a researcher... you test the songs, rank and filter them by different characteristics, and put them on the air.
> Harker may be a good consultant but there is an inherent
> conflict of interest between that role and the role of
He is not a consultant. Program consultants are folks like McVay and Vallie and Pollack and so on. Researchers are often hired on a consultant's recommendation, but they are different creatures.
Why do you think consultants actually do the research? They do not.
> It's like the difference between a reporter and
> a talk show host, pitchman or commentator. Even more, it's
> much like insurance salesmen who try to present
> themselves as financial planners (and seem to keep finding
> that people need life insurance and annuities).
You have a very, very false premise at the base of this discussion, which is that radio researchers are also program consultants. They are not.