What ails media measurement in India?

Prasad Sangameshwaran Updated - February 28, 2014 at 01:06 PM.

Disagreements are the name of the game. Here are a few research challenges unique to the media industry

Clicks and brickbats: Will the numbers add up? (Image: Shutterstock/Cienpies Design)

Last week a significant development in the Indian print media made industry watchers sit up and take note. The Readership Studies Council of India (RSCI) and the Media Research Users Council (MRUC), which conduct the India Readership Survey (IRS), met in Mumbai to discuss the thorny issue of the revised IRS findings released just a few weeks earlier. The findings had met with stiff resistance from 18 leading publishers, and the Indian Newspaper Society wanted the report withdrawn. The MRUC-RSCI meeting decided to keep the findings in abeyance and promised to ask members and subscribers to “hold off usage of the study” until re-validation is completed by March 31. While that put the issue to rest, at least for the time being, the spotlight is once again on the growing friction between media owners and measurement studies.

Television measurement too has seen more than its fair share of controversy. Two years ago, news broadcaster NDTV dragged TAM Media to court along with its parent companies, WPP-owned Kantar Media and Nielsen, alleging that viewership ratings were manipulated and that the audience measurement system was tampered with to favour certain players. Several other players voiced their dissatisfaction with the existing system, pointing out that TAM had peoplemeters (equipment used to measure TV viewership across channels) in just over 8,000 homes, which meant that manipulating even a few of these could cause a significant change in the data. Some years ago, broadcaster Zee had locked horns with TAM after channel executives allegedly gained access to the database of homes with peoplemeters. Zee’s premise was that if a broadcaster could gain access to the households, it could also easily bribe them to manipulate data.

A sore point

To be sure, media measurement is a contentious issue the world over. In the US, when Nielsen was shifting from tracking viewership through the diary method to technologies like the peoplemeter, it found itself at loggerheads with media mogul Rupert Murdoch, whose Fox network claimed the shift would adversely affect its ratings, especially in households not familiar with the new technology. In the Philippines, collusion between a major broadcaster and the measurement agency brought down the entire measurement infrastructure. Last year, in one fell swoop, Australian media owners did away with Roy Morgan, an agency that had tracked media measurement in that country for three decades. “Contentious issues and media measurement are like Siamese twins. They are joined at the hip,” says Paritosh Joshi of Provocateur Advisory, who chairs the technical committee of MRUC and is also a member of the Broadcasters Audience Research Council (BARC), which plans to launch its own TV viewership ratings later this year.

What makes media measurement such a thorny issue in the first place? Madhukar Sabnavis, vice-chairman and country head, discovery and planning, Ogilvy & Mather India, who’s also part of the MRUC, says, “Most other researches are not of this scale and are not comparative involving multiple companies (except maybe the retail audits). Hence, I guess the companies doing the study in other cases are subconsciously willing to live with the inadequacies or limitations. But in media measurement, it’s comparative and has become the benchmark for pricing, selling and selection of the media as an advertising vehicle. So the stakes are higher.”

The country’s diversity poses a significant challenge to measurement. “The diversity of the audience is the biggest challenge for any quantitative research in a country like India. That makes recruitment of representative samples a big challenge,” agrees Sabnavis. Nikhil Rangnekar, CEO (SA1) of Spatial Access, a media audit specialist, provides a few examples. Take Mumbai, where roughly 20 per cent of the population is Gujarati. If the Mumbai sample does not represent this large linguistic group, the readership of Gujarati publications is bound to be deflated in any readership survey. The same holds for the Tamil community in Bangalore. Such groups exist across the country along lines of language, ethnicity, religion and so on. It is therefore important, he points out, for any research to identify such groups, estimate their size and, if they are significant enough, ensure the right representation in the sample. “This is critical in media research as compared to other researches for the simple reason that the above factors play a significant role in media consumption while they might not in a brand track study,” says Rangnekar.

Sample, not simple

A sample survey, by its very definition, picks out a representative sample of a population. But where does one draw the sample from? For example, if a study wants to report data for towns in Maharashtra with populations exceeding one million, should it survey respondents in Pune or Nagpur? Nagpur, being close to Madhya Pradesh, has a different demographic make-up from Pune, so selecting the right city or village becomes a critical decision, points out Rangnekar.

Sabnavis adds, “The actual implementation and monitoring are also big challenges. IRS recently instituted technology and back-checking to address this. But in any quantitative study with many variables impacting the final results, a small deviation in each variable could mean a big deviation in the end result. It’s important that stakeholders understand that.”

Another issue pertains to getting higher income groups to participate in the research process. “It is not easy to gain access to the top-end households, especially in gated communities. However, research agencies have managed to work out a process to gain access to these households. The questionnaire’s length and the time involved to administer it have to be minimal. New techniques such as CAPI (computer-assisted personal interviewing) can shrink interview time and capture information accurately,” says Sathyamurthy Namakkal, President and Head, DDB MudraMax Media. While that might be the appropriate solution, media measurement in the country currently uses one common approach to track segments across the spectrum, whether the respondent is at the top of the market in South Mumbai or at the bottom in rural Uttar Pradesh. “The current methodology involves a one-size-fits-all, knock-on-the-door approach for research inputs,” says a senior industry executive closely involved with MRUC.

Then there is the issue of funding for research. Though media owners rubbish the suggestion that research funding is minimal, an MRUC source points out that in the latest edition of the IRS, the sample size had to be cut from 3 lakh to 2.4 lakh respondents to save ₹2.5 crore. However, there is one thing everyone cat.a.lyst spoke to agreed upon: that research is a journey of continuous improvement. Keep the faith.

(With inputs from Meenakshi Verma Ambwani)

Published on February 27, 2014 13:41