An email exchange on AAPORnet about InterSurvey's new methodology

(Also, see here for an excerpt from InterSurvey's own description
of their methods)

Douglas Rivers, CEO of InterSurvey, wrote (1/28/00):

For those unfamiliar with InterSurvey, we have recruited
a national panel of over 30,000 persons using RDD.  All
selected households are provided with free hardware
(WebTV) and Internet access.  Thus, we use probability
sampling with a frame that includes households without
computers or prior Internet access.  For an example of one
of our surveys, see
http://cbsnews.cbs.com/now/story/0,1597,154215-412,00.shtml


Tom Duffy, Macro International Inc., NYC, wrote (1/28/00):

I found InterSurvey's idea intriguing, but then I looked at the
example survey and their home page.

According to the page linked above, 721 adults responded to the
CBS/InterSurvey poll. However, I didn't see an explanation of how
these 721 responses were obtained: was this a randomly selected sample
of the panel, with a decent non-response conversion protocol? What was
the interviewing "window"? What was the response rate? Or was this a
self-selected sample from a frame of 30,000 people? One or two
additional lines of info at the bottom of the page would help some of
us understand what these polls really mean.

Also, though a lot of work evidently went into recruiting a panel with
the objective of having it be a "random" sample of Americans who are
willing to trade poll participation for free access and hardware, are
the probabilities of selection to this panel known? And are they used
when weighting the data? Was any analysis conducted on the potential
bias resulting from the above "trade" (simultaneous RDD "control"
samples, cognitive testing)? And why is this panel methodologically
superior to other panels that start with random recruitment? A panel
is a panel, even if it is as large as 30,000 or more.

It would help to have this info in the methodological sections of the
InterSurvey page. Otherwise, it is difficult to believe InterSurvey's
claim that this methodology "makes existing research methodologies
obsolete" (http://www.intersurvey.com).


Kathy Frankovic, director of polling for CBS News, replied (1/29/00):

This survey was conducted in essentially the same way that CBS News
has done telephone reaction panels in the past.  Just as we would start
with a randomly selected telephone sample of adults interviewed before
a major event, in this case we began with a randomly selected subset of
the InterSurvey panel. 

This group was asked a set of politically-oriented questions in the week
before the event, without being told that these questions were being
asked for CBS News, and without being told that this was part of a
special panel for the State of the Union address.   In addition, they were
sent a letter asking them to log in to their WebTV at 10:15 p.m. ET on Jan.
27 (the night of the State of the Union address).  No mention was made
in that request of the speech itself.  If selected respondents would not
be able to log in from their WebTV at that time, they were given an 800
number to dial. 

Respondents on Thursday night were subject to our usual weighting
process to account for respondent differences in the probabilities of
selection as well as the normal demographic weighting done on
telephone samples.  In addition, a non-response adjustment was made
based on responses to the political questions asked before the speech in
order to control for any political bias in the post-speech sample.  We
have followed similar procedures in the telephone reaction polls we've
done for many years.    We and InterSurvey will be reviewing the data in
the next few weeks, and we'll have a presentation at AAPOR about the
poll. 

The policy of CBS News is NEVER to call a non-probability sample a CBS
News Poll.
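
To make the weighting steps Frankovic describes more concrete, here is a minimal sketch in Python of one way such a reweighting could be carried out. The selection probabilities, demographic targets, and pre-speech adjustment cells below are invented for illustration; this is not CBS News's or InterSurvey's actual procedure or actual figures:

    # A minimal, hypothetical sketch of the three weighting steps described above.
    respondents = [
        # (probability of selection, sex, pre-speech party leaning) -- all invented
        (0.010, "F", "Dem"),
        (0.012, "M", "Rep"),
        (0.010, "F", "Ind"),
        (0.015, "M", "Dem"),
        (0.012, "F", "Rep"),
        (0.010, "M", "Ind"),
    ]

    # 1) Base weight: inverse probability of selection into the subsample.
    weights = [1.0 / p for p, _, _ in respondents]

    def adjust(weights, keys, targets):
        """Scale weights so the weighted distribution of `keys` matches `targets`."""
        total = sum(weights)
        current = {k: 0.0 for k in targets}
        for w, k in zip(weights, keys):
            current[k] += w / total
        return [w * targets[k] / current[k] for w, k in zip(weights, keys)]

    # 2) Demographic weighting to (assumed) population targets.
    weights = adjust(weights, [r[1] for r in respondents], {"F": 0.52, "M": 0.48})

    # 3) Non-response adjustment: match the post-speech sample's pre-speech
    #    political profile to that of the full pre-speech sample (assumed shares).
    weights = adjust(weights, [r[2] for r in respondents],
                     {"Dem": 0.35, "Rep": 0.33, "Ind": 0.32})

    print([round(w, 1) for w in weights])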


Douglas Rivers, CEO of InterSurvey, wrote further (1/30/00):

Kathy Frankovic responded with some specific
details about the CBS study, but here are a few quick answers to your
questions about the InterSurvey panel:

1) To date, InterSurvey panel recruitment has been handled by NORC using a
complex design. We normally use the probabilities of selection to weight
subsamples from the panel. The initial response rate, using the CASRO
definition (roughly, contact rate x cooperation rate), is about 56%.

2) All studies, including the CBS one that you ask about, use randomly
selected subsamples from the panel, not self-selection. In rereading our
marketing materials, I realize that this isn't explicitly stated. (The
thought of using self-selection at the final stage never occurred to us!)

3) Your questions about panels are good ones. In terms of sampling, there is
no fundamental methodological difference between InterSurvey and other high
quality, randomly recruited panels. The difference is that interviewing is
initiated by sending an e-mail message to the selected panel member and that
the interview is conducted using a Web browser. The panel member's device
automatically downloads the e-mail and turns on a red light on the WebTV box,
signaling that a message has arrived. This means that we don't have to call or mail
panel members--much faster than mail and much less intrusive than calling.
It also means that we can interview outside of normal interviewing hours
(e.g., after 10 pm, as was required for the CBS survey). Furthermore, we can
use visual content, including TV-quality video, as part of our surveys. We
are trying to combine the Web with general population probability sampling.

I hope this is responsive to your questions.

Doug
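
As a quick illustration of the CASRO arithmetic Rivers cites in point 1, the overall rate is roughly the product of the contact and cooperation rates. The 70%/80% split in this sketch is a hypothetical decomposition, chosen only because it multiplies out to about the 56% he reports:

    # Hypothetical decomposition of the ~56% CASRO-style response rate cited above.
    contact_rate = 0.70       # share of sampled households actually reached (assumed)
    cooperation_rate = 0.80   # share of contacted households agreeing to join (assumed)

    print(f"response rate ~ {contact_rate * cooperation_rate:.0%}")   # -> 56%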


Karen Donelan, of the Harvard School of Public Health, followed up (1/30/00):

A question for anyone interested, not just for Doug Rivers:

While I understand the advantages of a randomly selected sample, a 56% CASRO rate (AAPOR #4, roughly) isn't that grand. I did a survey with NORC that achieved much higher cooperation last year. So to start with, can we quantify the non-response? Might those who are unwilling to participate be the same people who are generally unwilling to have computers/Internet in their homes? I would be especially interested in the UNWEIGHTED cooperation among persons 65+, low-income people, racial/ethnic minorities, and others traditionally underrepresented on-line.

Second, I can't get past the idea that these respondents are, by definition, now "internet users"--self-selected by virtue of their agreement to cooperate and introduce this technology into their homes, and now capable of experiencing all of those wonderful things that make new Internet users different from other people. Does having the Internet in your home change your view of the world? In what ways? Are you not now somehow "different" than you were before?

How is this panel, now "exposed" to this technology, still representative of a national population of US adults? We may see that the selection is better than a volunteer sample--but can we really say, after the first survey, that this will yield better data?

I applaud the innovation and the attempt to do better. I have yet to be convinced that this will work longer term. I am still unclear, following the exchanges about making pledges and taking vows of purity, whether CBS News is calling this the CBS News Poll or not, and whether, to the general public, that distinction would matter anyway.

Karen Donelan
Harvard School of Public Health


Douglas Rivers, CEO of InterSurvey, replied (1/31/00):

More questions, which I'll do my best to answer.

1) RESPONSE RATES. I, too, would like to achieve a higher response rate than our current 56%, and we are experimenting with some different procedures with the objective of raising the response rate to about 60%. You don't state the nature of your study (Was it an RDD general population study? Who was the sponsor? Were respondents told that the study was being conducted for a government agency? etc.). The response rate we are achieving is typical of what high-quality academic telephone surveys of similar populations are getting today. (For example, the 1998 NES Pilot Study reported a 41.5% response rate.)

2) COOPERATION RATES. It's difficult to calculate cooperation rates for specific demographic groups, since we do not have demographic information on respondents who do not agree to cooperate. (I don't know what you mean by an "UNWEIGHTED cooperation rate," but the sample selection probabilities in our panel do not vary much across strata and, among cooperating respondents, are almost uncorrelated with any demographic characteristic that we have checked.) However, I can provide you with some panel demographics (which reflect the combination of contact and cooperation rates). Our panel is composed of about 50% computer-owning households (matching the CPS data). African-Americans make up about 10% of our panel (compared to 12% in the adult population), while Asian Americans are slightly overrepresented. The age distribution of the panel matches the population closely, except among persons over 65 (8% of the panel vs. 16% of the population). In terms of education, 51% of the panel has an HS education or less (vs. 50% of the population), and 11% report having a graduate degree (vs. 8% of the population). I'd be interested in similar data from phone surveys.

3) INTERNET USERS. Yes, it's true that we have created Internet users, and this could have some impact on behavior, which we are monitoring closely. (Every sample has a combination of new and older panel members, so the issue of panel effects is an empirical one.) However, WebTV is primarily an interactive TV experience, not an Internet experience. Furthermore, we have data on prior computer and Internet usage, so we can select subsamples of Internet users whom we did not artificially create.
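
To put rough numbers on the panel-versus-population comparison in point 2, the shares Rivers quotes imply post-stratification factors along the following lines if each group were simply weighted to its population share. This is a back-of-the-envelope sketch, not a description of InterSurvey's actual weighting:

    # Panel vs. population shares quoted above; the implied factor is
    # population share / panel share (illustrative only).
    groups = {
        "persons 65+":       (0.08, 0.16),
        "African-Americans": (0.10, 0.12),
        "HS or less":        (0.51, 0.50),
        "graduate degree":   (0.11, 0.08),
    }

    for name, (panel_share, pop_share) in groups.items():
        print(f"{name:18s} weight factor ~ {pop_share / panel_share:.2f}")
    # persons 65+ would need to be weighted up by roughly 2x.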