Democracy
Polls, Policy and the People

by Randy Lloyd

In recent rhetoric surrounding the impeachment and trial of Bill Clinton, much was made of public opinion polls. Clinton supporters took every opportunity to announce that The American People were opposed to his removal from office. Not only had he been reelected in a democratic process, they said, so that removal would overturn the will of the people, but polls showed that a vast majority of Americans were happy with the job Bill Clinton was doing and that equal numbers opposed his removal. The underlying belief in the rule of the American people through a representative process was a strong argument for acquittal. The argument based on this belief has two major flaws, however.

In a democratic society structured around representative processes, the people rule through the influence they exert on their representatives. To some degree, the people should get what they want. Representatives need to be mindful of the policy desires of their constituents. Representatives must at times be delegates, transmitting the will of the people into action. It would be a mistake, however, to assume that representatives are always mere delegates. After all, the Founding Fathers were not optimistic about direct democratic processes or the rationality of the people. Susceptible to whims and emotions, the people can be dangerous both to minorities and to the viability and maintenance of the republic. Moreover, a delegate is instructed to vote as directed, substituting another’s will for his own in the policy process. But just which constituent does the delegate represent? On many issues there will not be a unanimous opinion among constituents. Policy is frequently opposed by large segments of the population, favored by equally large numbers, and ignored by even larger portions of the public.

The lack of clear direction from constituents is the reason that representatives tend to be, and perhaps should be, trustees. A trustee is one who uses his own best judgment in deciding policy in the absence of clear direction from constituents. Rather than directly charging representatives with policy positions, the representative process leaves democratic influence to majority rule by elections. If policy is left to the good judgment of the representative, then the voters should make sure they have chosen someone with good judgment. The ability and integrity of candidates, along with their issue positions, are what the campaign battle is supposed to illuminate. Further, the ideological and issue orientation of the representatives chosen by election should in some general sense reflect the majority of the voters in a district. This is why liberals tend to be elected in liberal districts, and conservatives likewise in conservative districts.

In a process such as impeachment, which is a political and not a legal undertaking, the people do and should have some influence in the decision to remove the president from office. They should not, however, have a direct say, turning representatives into delegates. The Founders created a system of independent branches, each of which was to jealously guard its own power and prerogatives. This is why Republicans abandoned Nixon over Watergate in the 1970s. They were doing their constitutional duty to ensure the independence of the legislative branch of government, to oppose the concentration of power in one person or one branch and to check the path to tyranny. With Bill Clinton, by contrast, many congressmen abandoned the Constitution to defend narrow, partisan aims instead.

Since the purpose of the impeachment process is to safeguard liberty through defense of the Constitution, the people’s desires, frequently driven by emotions favoring unconstitutional actions, should play little part in the deliberations. But suppose representatives were to abdicate their responsibility to the Constitution in the name of representation, of following the will of the people. A major problem in determining the will of the people would still exist. Up to a point, polls are informative and useful, but without careful consideration of the process behind them, polls can be less than useful, even harmful.

Determining what the public wants on any issue is not easy. As noted, many people don’t care about issues, and large numbers are uninformed but willing to give a "knowledgeable" response to pollsters so as not to look foolish. A forced choice may give misleading results. Polls often ask "yes" or "no" questions when the answer may be conditional. And frequently people have not thought about the issue and so allow their response to be colored by the emotional direction of the question itself. Recent polls surrounding the rape allegations brought by Juanita Broaddrick reported that a vast majority believed Bill Clinton over the alleged victim. However, a huge percentage of those polled had never heard the allegations, and among informed respondents the results overwhelmingly favored Juanita Broaddrick. So, who should have their opinions counted, and how should they be counted?

The first priority must still be the representative process. Polls are usually national, taking the pulse of the entire country. Congressmen, however, represent constituents in a district that may not be in step with the nation as a whole. Should a congressman represent the views of the nation or his constituents? Obviously he should respond to those whom he represents, with constituent contact being the avenue of influence. And polls themselves have a number of limits that might mislead representatives as to the true opinion of the people.

Since polls ask questions of a relatively small number of people (generally about 1,000), care must be taken to ensure that the group of people questioned is representative of the population. Asking questions of so few people is not itself a problem if statistical principles are maintained. But if a non-representative sample is used, problems can arise. The most likely cause is that pollsters fail to account for the types of people they reach. For instance, if pollsters make phone calls only between 9 a.m. and 5 p.m., they will reach stay-at-home women, the unemployed, students and the few men who happen to be home during the day. Asking questions of any single group of people will distort the response, and calling the result representative of the whole is a mistake. An example of this is a problem frequently found during the 1996 presidential election: unregistered people were asked for whom they would vote, but unregistered people are generally poorer and more likely to be Democratic supporters, and the results of those polls showed evidence of major sampling errors for this reason. Another famous case of surveys gone awry took place in the early days of polling, during the Depression, when more than two million readers of Literary Digest magazine responded to a straw poll about the election. The tally predicted a landslide victory for the Republican candidate. This turned out to be far from the election day result. The problem was that during the Depression, the people likely to subscribe to Literary Digest, and to respond, were unrepresentatively inclined to support Republicans.
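
The statistical claim behind that "about 1,000" figure can be made concrete. The sketch below is standard sampling arithmetic, not a calculation from the article: the worst-case margin of error of a properly drawn random sample of 1,000 is roughly plus or minus 3 percentage points, and the guarantee disappears entirely if the sample is not random.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Worst-case sampling error at 95 percent confidence for a random sample."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# About 1,000 respondents already gets a national poll to roughly +/- 3 points...
print(round(margin_of_error(1_000) * 100, 1))    # 3.1
# ...and a hundred times more interviews buys surprisingly little extra precision.
print(round(margin_of_error(100_000) * 100, 1))  # 0.3
# None of this holds for a non-random sample (daytime-only phone calls,
# magazine subscribers, and so on); bias does not shrink as the sample grows.
```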

In the recent polling regarding the impeachment of Bill Clinton, many pollsters may have been less than perfect in their selection of sample respondents. There are some anecdotal reports that poll takers did not complete surveys with Republican identifiers, and other reports of an unusually high number of people who would not answer questions at all. There could be some distortion if there is any systematic pattern among those who refused to answer. The variety of polling organizations and the consistency of their results during the impeachment period lessen concerns about sampling error, however. And although variations of 10 percent or more between pollsters were common, they are not indicative of deliberate attempts to distort the outcome. The CBS News poll of February 1999 reported 73 percent of respondents satisfied with Bill Clinton, while the U.S. News & World Report poll reported 57 percent support for the president in the same time period, which in polling is still in the same neighborhood. Even the highly regarded Zogby poll reported large support numbers. The differences can be attributed to different sampling frames and differences in the wording of questions.

Other problems with polls surface in the construction of the survey itself. Results will differ depending on question wording, the order in which questions are asked, and the choices respondents are allowed to offer. For instance, how many people, even now, know that impeachment does not mean removal from office? Impeachment is akin to an indictment. Many people who were asked whether Bill Clinton should be impeached likely did not know that the term does not mean removal from office; only a trial in the Senate can do that.

Another example is a survey done here in Nevada some years ago that asked whether people wanted to change the frequency of the state’s legislative sessions. Then, as now, the Legislature met once every two years for a few months, and some consideration was being given to making the sessions annual. The question asked respondents if they thought the legislature should meet "biennially." But does biennially mean once every two years or twice a year? Most people aren’t certain. The response was very one-sided, but meaningless as a statement of the public’s opinion.

Similar concerns arose in the polls related to Juanita Broaddrick’s allegation against Bill Clinton. CNN’s poll asked people about allegations of "rape," an emotionally charged term that people are reluctant to endorse, particularly when used to describe the president’s behavior. The Fox News poll used the term "sexual assault" and got different results.

Then there is the problem of question order. If a person is asked a series of questions designed to elicit a "yes" answer, followed by a question that might bring a "no" response, the conditioning of all the prior answers may lead to a "yes" response for the latter. For instance: are you in favor of motherhood? (Yes.) Are you in favor of apple pie? (Yes.) Do you think that Bill Clinton is doing a great job as president? The cadence can have a strong effect on the response.

Limited-response options also can mislead. As an example, the February 12 ABC News poll asked whether the outcome of the Senate vote in the impeachment trial was based on party politics or the facts of the case. A large majority of respondents opted for the "party politics" choice, with far fewer choosing "the facts of the case." The problem here is that another choice more in keeping with sentiment, but not offered, would be that some senators voted for partisan reasons, and others voted based on the facts of the case.

In any event, the results were surely reported in order to suggest a highly partisan vote by Republicans, even though the question did not specify which party’s members voted for partisan reasons.

Another question, typical of many polls, appeared in the CNN-Time poll of January 7, before the Senate vote. It asked people whether they approved or disapproved of the Senate’s handling of the trial. Implied in the question, and certain to be interpreted this way, was the suggestion that the Republicans were doing a bad job. But the question did not ask respondents to identify the culprits, nor did it ask whether the reason they disapproved was that the Senate was not working hard enough to bring the president to justice. The implication, unsupported by the question, was that the American people disapproved of the Republican attempts to remove the president.

Even if a sample is accurately drawn and the survey is properly crafted, there are still problems. People frequently have little or no knowledge of, understanding of, or interest in many of the political questions of the day, but pollsters treat their opinions equally. Should a person who is lukewarm about Bill Clinton’s continued presence be countered by another who is adamantly opposed to his remaining in office? For instance, let us suppose that the one-third of respondents who reported they wanted him removed were adamant, while the two-thirds who wanted him to stay held "it would be nice" attitudes, but really didn’t care. Polls rarely control for or report the strength of respondent sentiment.
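
To make that hypothetical concrete, here is a minimal sketch; the 0.2 and 0.9 intensity weights are illustrative assumptions, not poll data. It shows how weighting answers by strength of sentiment can reverse a two-to-one raw majority.

```python
def weighted_support(groups):
    """Sum each group's population share times how strongly it holds its view."""
    totals = {"keep": 0.0, "remove": 0.0}
    for share, intensity, position in groups:
        totals[position] += share * intensity
    return totals

# Two-thirds mildly prefer keeping the president; one-third adamantly want removal.
# (Intensity figures are illustrative assumptions, not survey results.)
groups = [
    (2 / 3, 0.2, "keep"),    # "it would be nice" attitudes
    (1 / 3, 0.9, "remove"),  # adamant opposition
]

print(weighted_support(groups))
# roughly {'keep': 0.13, 'remove': 0.30} -- the raw two-to-one majority flips
# once strength of sentiment is taken into account.
```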

With all of these possible problems, poll mistakes are still unlikely to account for more than a small portion of the high ratings for Bill Clinton during the impeachment period. Better-organized and better-conducted polls were still reporting high support. So we can comfortably rely on the general finding that Americans supported Bill Clinton. But what does that support really mean?

Even if accurate, the sentiment these polls purport to express about American attitudes toward Bill Clinton may have little enduring meaning. One of the historical patterns of presidential popularity is that in times of crisis—ironically, even when a president is under fire for his conduct—the American people rally behind him. In the Gulf War, George Bush had job approval ratings in the 90 percent range. Even though the current crisis is not a war, the natural inclination of Americans is to support the president. This crisis appears to be no different, with one exception. Whereas George Bush had 90 percent approval ratings, Bill Clinton’s ratings have hovered in the 60 percent range. In essence, Bill Clinton’s extraordinarily high approval ratings are really extraordinarily low. After the crisis abates, expect his ratings to tumble.

In general, polls can be easily manipulated to find any answer desired, or offer meaningless results if poorly administered or interpreted. It is only integrity and competence that make polls useful tools for understanding the policy desires of the people. Even with the utmost care in polling, however, government by polls is wrong. The representative process is necessary to guard against the excesses of democracy. After all, if the polls are correct, the American people overwhelmingly favor a president who has been characterized—charitably, it turns out—as a draft-dodging, womanizing, pot smoker. NJ

Randall D. Lloyd is a senior research fellow with NPRI and a lecturing professor in political science at the University of Nevada, Reno.

