Evaluation of Interviewers (MR)

Interviews are checked for quality as the completed questionnaires are turned in. Costs are monitored while the survey is in progress. These activities are necessary to ensure successful completion of the project. Over the long term, it is important to evaluate interviewers in a way that will enable the organization to identify the better ones and thus gradually build a better field force.

The first step in the evaluation process is the check for cheating. In the past, any interviewer found to be falsifying questionnaires was immediately dropped. This may still be desirable in some cases. Some fieldwork managers, however, now examine these cases to determine whether the character of the job was a significant factor in the cheating. Questions that are embarrassing or awkward for respondents lead some interviewers to skip them and to enter fictitious answers. If instructions are unclear, some interviewers will proceed as they understand them and, as a result, obtain incorrect and seemingly falsified responses. A significant portion of what has been considered cheating may have been the result of job situations of these types, which destroyed interviewer morale. In any case, a record of problems in this area should be made on the interviewer's card.

Completed questionnaires are the only practical basis on which to evaluate the quality of an interviewer's work. Theoretically, expert interviewers could re-interview a sample of each interviewer's respondents to see whether they obtained the same results; but this is too expensive and, because of the respondents' prior exposure, does not provide a completely reliable basis for comparison.

An interviewer can be rated on several factors; the more important ones are cost, refusals, and following instructions.

Cost: Total cost (expenses and salary) per completed interview is the basis for comparing interviewers. In personal interviewing, costs differ by city size, so comparisons should be made only among interviewers working in similar locations. Since some interviewers cover city, suburban, and rural areas in the same assignment, it may be necessary to tabulate cost by interviewing site within each interviewer's assignment in order to make a realistic analysis. The detailed cost data necessary for such an analysis, however, are seldom obtained.
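
As a minimal sketch of the kind of tabulation described above, the following Python fragment computes total cost (expenses plus salary) per completed interview for each interviewer, broken down by interviewing site so that comparisons are made only within similar locations. The field names and figures are hypothetical.

    from collections import defaultdict

    # Hypothetical cost records: (interviewer, site, expenses, salary, completed interviews)
    records = [
        ("Adams", "city",     42.00, 60.00, 12),
        ("Adams", "suburban", 18.50, 30.00,  5),
        ("Baker", "city",     55.00, 75.00, 14),
        ("Baker", "rural",    40.00, 45.00,  6),
    ]

    totals = defaultdict(lambda: [0.0, 0])   # (interviewer, site) -> [total cost, completions]
    for interviewer, site, expenses, salary, completed in records:
        totals[(interviewer, site)][0] += expenses + salary
        totals[(interviewer, site)][1] += completed

    # Cost per completed interview, comparable only among interviewers in the same kind of site
    for (interviewer, site), (cost, completed) in sorted(totals.items()):
        print(f"{interviewer:<8} {site:<9} {cost / completed:6.2f} per completed interview")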

Refusals: The percentage of refusals can be compared among interviewers. This comparison should also be limited to interviewers working in similar areas.
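
A comparable tabulation can be made for refusals. The short sketch below, again with hypothetical figures, computes each interviewer's refusal percentage; the resulting rates should be compared only among interviewers working the same kind of area.

    # Hypothetical contact records: (interviewer, area, contacts attempted, refusals)
    contacts = [
        ("Adams", "city",  60, 9),
        ("Baker", "city",  70, 7),
        ("Clark", "rural", 40, 4),
    ]

    for interviewer, area, attempted, refused in contacts:
        rate = 100.0 * refused / attempted
        print(f"{interviewer:<8} {area:<6} refusal rate {rate:4.1f}%")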

Following Instructions: Each interviewer can be graded on the basis of the number of mistakes made. It is usually desirable to weight the various kinds of mistakes, since some are more serious than others. Thus, the interviewer's performance on open-ended questions, as evinced by the relevance of the responses recorded, should receive a heavier weight than many other aspects of the work. The acceptance of an unsatisfactory answer, such as an ambiguous or partial answer, is more serious than failure to date or sign the questionnaire. After totaling the number of points "off" for each interviewer, a rating can be assigned; for example, the number 10 can be assigned to those interviewers who, on the basis of their scores, fall into the upper 10 percent of all interviewers. This would mean that such individuals had fewer mistakes than 90 percent of all interviewers. A brief sketch of such a scoring scheme follows the error-weight table below.

The National Opinion Research Center supplies its coders with error sheets used to note the following types of interviewer error:

Type of error                        Error weight

Answer missing                       3
Irrelevant or circular answer        3
Lack of sufficient detail            2
"Don't know" with no probe           2
Dangling probe                       1
Multiple codes in error              1
Superfluous questions asked          1
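
To illustrate how such weights might be applied, the sketch below totals weighted error points per interviewer, using the weights from the table above, and then converts each total into the decile-style rating described earlier. The error tallies themselves are hypothetical.

    # Error weights taken from the table above
    WEIGHTS = {
        "answer missing": 3,
        "irrelevant or circular answer": 3,
        "lack of sufficient detail": 2,
        "don't know with no probe": 2,
        "dangling probe": 1,
        "multiple codes in error": 1,
        "superfluous questions asked": 1,
    }

    def weighted_points(error_counts):
        # Total points "off" for one interviewer's questionnaires
        return sum(WEIGHTS[error] * n for error, n in error_counts.items())

    # Hypothetical error tallies for a small field force
    tallies = {
        "Adams": {"answer missing": 1, "dangling probe": 4},
        "Baker": {"irrelevant or circular answer": 2, "lack of sufficient detail": 3},
        "Clark": {"don't know with no probe": 1},
    }

    scores = {name: weighted_points(errors) for name, errors in tallies.items()}

    # Rank from fewest to most points; a rating of 10 goes to interviewers whose rank
    # falls in the best tenth of the list, meaning fewer mistakes than 90 percent.
    ranked = sorted(scores, key=scores.get)
    for position, name in enumerate(ranked):
        rating = 10 - int(10 * position / len(ranked))
        print(f"{name:<8} points {scores[name]:2d}  rating {rating}")

With a large field force the decile ratings become meaningful; with only a handful of interviewers, as in this toy example, the rating is necessarily coarse.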

The rating on each project should be made part of the records maintained for each interviewer. Results on a particular study can then be fitted into a trend that indicates what should be done about rehiring or retaining a particular interviewer. Upon completion of a survey, interviewers should receive a report that indicates their completion and refusal rates, their grade, and how well they did relative to interviewers working in comparable areas.
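
A per-study report of the kind just described might be assembled along the following lines; the record layout and figures are hypothetical, and the relative standing is simply each interviewer's grade compared with those of others working comparable areas.

    # Hypothetical per-study results for interviewers working comparable areas
    results = {
        "Adams": {"assigned": 30, "completed": 26, "refused": 3, "grade": 9},
        "Baker": {"assigned": 30, "completed": 22, "refused": 6, "grade": 6},
        "Clark": {"assigned": 30, "completed": 24, "refused": 4, "grade": 7},
    }

    grades = [r["grade"] for r in results.values()]

    for name, r in results.items():
        completion = 100.0 * r["completed"] / r["assigned"]
        refusal = 100.0 * r["refused"] / r["assigned"]
        # Share of the other interviewers whose grade was lower than this one's
        outranked = 100.0 * sum(g < r["grade"] for g in grades) / (len(grades) - 1)
        print(f"{name}: completion {completion:.0f}%, refusals {refusal:.0f}%, "
              f"grade {r['grade']}, graded above {outranked:.0f}% of comparable interviewers")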

The entire procedure outlined above suffers in that only a relative rating is obtained; if the results are uniformly bad, it will reveal only that some interviewers were better than others. It is not a completely satisfactory method, since it lacks objective standards; however, since absolute standards are usually not available and each study is likely to differ from the next, a relative comparison is of considerable value.