An email exchange on AAPORnet about InterSurvey's new methodology
(Also, see here for an excerpt from InterSurvey's own description of their methods.)
Douglas Rivers, CEO of InterSurvey, wrote (1/28/00): [...]
Tom Duffy, Macro International Inc., NYC, wrote (1/28/00): I found InterSurvey's idea intriguing, but then I looked at the [...] According to the page given below, 721 adults responded to the [...] Also, though a lot of work evidently went into recruiting a panel with [...] It would help to have this info in the methodological sections of the [...]
Kathy Frankovic, director of polling for CBS News, replied (1/29/00): This survey was conducted in essentially the same way that CBS News [...]
Douglas Rivers, CEO of InterSurvey, wrote further (1/30/00): [...]
Karen Donelan, of the Harvard School of Public Health, followed up (1/30/00): A question for anyone interested, not just for Doug Rivers: While I understand the advantages of a randomly selected sample, a 56% CASRO rate (AAPOR #4, roughly) isn't that grand. I did a survey with NORC that achieved much higher cooperation last year. So to start with, can we quantify the non-response? Might those who are unwilling to participate be the same people who are generally unwilling to have computers/Internet in their homes? I would be especially interested in the UNWEIGHTED cooperation among persons 65+, low income, racial/ethnic minorities, and others traditionally underrepresented on-line.

Second, I can't get past the idea that these respondents are, by definition, now "Internet users"--self-selected by virtue of their agreement to cooperate and introduce this technology into their homes, and now capable of experiencing all of those wonderful things that make new Internet users different from other people. Does having the Internet in your home change your view of the world? In what ways? Are you not now somehow "different" than you were before? How is this panel, now "exposed" to this technology, still representative of a national population of US adults? We may see that the selection is better than a volunteer sample--but can we really say, after the first survey, that this will yield better data?

I applaud the innovation and the attempt to do better. I remain to be convinced that this will work longer term. I am still unclear, following the exchanges about making pledges and taking vows of purity, whether CBS News is calling this the CBS News Poll or not, and whether, to the general public, that distinction would matter anyway. Karen Donelan
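For readers unfamiliar with the figure Donelan cites, the CASRO-style rate she mentions corresponds roughly to AAPOR Response Rate 4: completes plus partials, divided by all known-eligible cases plus an estimated-eligible share of the unknown-eligibility cases. Below is a minimal sketch of that calculation; the case counts are entirely hypothetical and are not InterSurvey's data.

    def aapor_rr4(completes, partials, refusals, non_contacts, other,
                  unknown_eligibility, e):
        """AAPOR Response Rate 4 (roughly the CASRO rate): interviews plus
        partials over estimated eligible cases. `e` is the estimated share
        of unknown-eligibility cases that are actually eligible."""
        numerator = completes + partials
        denominator = (completes + partials + refusals + non_contacts + other
                       + e * unknown_eligibility)
        return numerator / denominator

    # Hypothetical counts, for illustration only -- not InterSurvey's numbers.
    rate = aapor_rr4(completes=560, partials=0, refusals=250, non_contacts=150,
                     other=40, unknown_eligibility=100, e=0.5)
    print(f"{rate:.1%}")  # 53.3%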
Douglas Rivers, CEO of InterSurvey, replied (1/31/00):

1) RESPONSE RATES. I, too, would like to achieve a higher response rate than our current 56%, and we are experimenting with some different procedures with the objective of raising the response rate to about 60%. You don't state the nature of your study (Was it an RDD general population study? Who was the sponsor? Were respondents told that the study was being conducted for a government agency? etc.) The response rate we are achieving is typical of what high-quality academic telephone surveys of similar populations are getting today. (For example, the 1998 NES Pilot Study reported a 41.5% response rate.)

2) COOPERATION RATES. It's difficult to calculate cooperation rates for specific demographic groups, since we do not have demographic information on respondents who do not agree to cooperate. (I don't know what you mean by an "UNWEIGHTED cooperation rate," but the sample selection probabilities in our panel do not vary much across strata and, among cooperating respondents, are almost uncorrelated with any demographic characteristic that we have checked.) However, I can provide you with some panel demographics (which reflect the combination of contact and cooperation rates). Our panel is composed of about 50% computer-owning households (matching the CPS data). African-Americans compose about 10% of our panel (compared to 12% in the adult population), while Asian Americans are slightly overrepresented. The age distribution of the panel matches the population closely, except among persons over 65 (8% of the panel vs. 16% of the population). In terms of education, 51% of the panel has a HS education or less (vs. 50% of the population), and 11% report having a graduate degree (vs. 8% of the population). I'd be interested in similar data from phone surveys.

3) INTERNET USERS. Yes, it's true that we have created Internet users, and this could have some impact on behavior, which we are monitoring closely. (Every sample has a combination of new and older panel members, so the issue of panel effects is an empirical one.) However, WebTV is primarily an interactive TV experience, not an Internet experience. Furthermore, we have data on prior computer and Internet usage, so we can select subsamples of Internet users whom we did not artificially create.
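Rivers' panel-versus-population comparisons (for example, persons over 65 at 8% of the panel but 16% of the population) are the kind of gap that post-stratification weighting is meant to correct. The sketch below shows the standard weight = population share / panel share arithmetic using those two figures; the two-cell age breakdown is a simplification for illustration, and nothing here is claimed to be InterSurvey's actual weighting scheme.

    # Post-stratification sketch: weight = population share / panel share.
    # The 65+ shares (8% of panel, 16% of population) come from Rivers' message;
    # collapsing everyone else into a single "under 65" cell is a simplification.
    panel_share = {"under 65": 0.92, "65+": 0.08}
    population_share = {"under 65": 0.84, "65+": 0.16}

    weights = {cell: population_share[cell] / panel_share[cell]
               for cell in panel_share}
    print(weights)  # {'under 65': 0.913..., '65+': 2.0}
    # Each 65+ panelist counts twice as much as the average panelist, pulling
    # the weighted age distribution back toward the population benchmark.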