Conducting Surveys

Section 8: Developing and Verifying Data Collection Instruments

Note: This guide is intended to ensure that surveys conducted in the OAG meet reasonable requirements and expectations of survey professionals as well as the VFM audit standards of the Office of the Auditor General. The terms "must" and "should" as used in this guidance document do not necessarily have the status of OAG standards and policies. However, they reflect methodological requirements and expectations in the conduct of surveys.

The term "data collection instruments" describes the tools used to collect information as part of a survey. Proper design of data collection instruments is essential for reaching reliable and valid conclusions. Information must be obtained on a comparable basis across individuals if the intention is to make aggregate or general statements on the basis of survey information. This is especially true when the intention is to make quantified generalised statements about a larger population (e.g. x percent, most, more than, etc.). If the questions or instructions differ among individuals, or are interpreted differently by different members of the audit team, the data will not be reliable, and general statements will be unwarranted.

The adequate and appropriate design of data collection instruments is also very important for validity. In the case of questionnaires, the questions posed, their wording, their structure, and the order in which they are presented can have a significant impact on the relevance and accuracy of the responses and on the likelihood that questions will be answered.

For these reasons, data collection instruments used in a survey should be carefully planned and should ask a standard set of questions that can be administered in a standard fashion to all respondents.

Developing questionnaires and structured interviews is both a technical skill and an art. For example, there is a sizeable literature demonstrating that the ordering of questions and their placement towards the beginning or end of a questionnaire can have profound effects on the answers received. For some specific topics, such as personal background, there is extensive research on the wording and structure of effective questions. Anyone developing a questionnaire or structured interview needs to be aware of the basic principles of questionnaire design, the relevant technical issues, and any technical literature on the topic being surveyed (see VFM Manual, section on Competence of the Audit Team). Of course, substantive knowledge of the topic is also essential.

Establishing reliability and validity

There are a number of approaches to assessing the reliability and validity of survey data. When a questionnaire is used, establishing reliability commonly involves administering the questionnaire, or portions of it, to the same respondents at different times or under different circumstances to assess how stable the answers are.
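As an illustration, a stability check of this kind is often summarized as a test-retest correlation. The sketch below is illustrative only and is not part of this guidance; the data, variable names, and the rule of thumb in the comments are assumptions made for the example (Python 3.10 or later):

    # Hypothetical test-retest reliability check: the same ten respondents
    # answer the same rating item on two occasions some weeks apart.
    from statistics import correlation  # requires Python 3.10+

    first_pass  = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]  # first administration
    second_pass = [4, 4, 3, 5, 2, 5, 4, 3, 4, 4]  # second administration

    # Pearson correlation between administrations: values near 1.0 indicate
    # stable answers; survey texts often treat roughly 0.7 as a minimum,
    # though the appropriate threshold depends on the use of the data.
    r = correlation(first_pass, second_pass)
    print(f"Test-retest correlation: {r:.2f}")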

The basic principle for establishing validity is the same as for corroborating audit observations and conclusions generally, i.e., comparison with evidence from different sources and of a different nature (see VFM Manual, section on Sufficient Evidence). Approaches to establishing validity include comparing survey results with behavioral observations, comparing the sample surveyed with groups that are expected to be similar or dissimilar in critical ways, comparing results with those of other data collection instruments expected to measure the same thing, obtaining expert opinion, and conducting internal analyses of the instrument.
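One of these approaches, comparing groups expected to differ in a critical way (a "known-groups" comparison), can be sketched briefly. The data, the group labels, and the rough pooling of standard deviations below are hypothetical assumptions for illustration, not prescriptions from this guide:

    # Hypothetical known-groups validity check: scores on a knowledge scale
    # for a group expected to score high (subject-matter specialists) and a
    # group expected to score low (new hires).
    from statistics import mean, stdev

    specialists = [42, 45, 39, 44, 41, 43]  # total scale scores
    new_hires   = [28, 31, 25, 30, 27, 29]

    gap = mean(specialists) - mean(new_hires)
    rough_sd = (stdev(specialists) + stdev(new_hires)) / 2  # rough pooling

    # If the instrument is valid, the group expected to know more should
    # score clearly higher; a large standardized gap supports validity.
    print(f"Mean gap: {gap:.1f} points ({gap / rough_sd:.1f} SDs)")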

The extent to which reliability and validity must be established, and the approach(es) to use, depend upon the nature of the information collected and the uses to which it will be put. In particular, it is important to establish reliability and validity when there is an attempt to measure individual characteristics, such as knowledge, morale, or attitudes. Multiple approaches are often required when the subject matter is complex.

Rating Scales. The use of rating scales ("on a scale from one to nine", "place a check mark on the line", etc.) to compare the responses of individuals or groups of individuals particularly requires examination of reliability and validity. Specific additional corroboration is required when an auditor wants to use a rating scale to compare different respondents with each other, with other groups, or with a criterion on a certain topic. For example, meeting audit objectives may require an assessment of staff satisfaction with employment or with certain management functions. Alternatively, it may require an assessment of knowledge, such as knowledge of environmental issues.

In these and similar cases, it is often incorrect to interpret literally the level of response on a scaled item or group of items. For example, it is incorrect to assume that a morale rating above the neutral point indicates a satisfactory level of morale. There is a known tendency of survey respondents to give positive answers to some types of questions and negative answers to others. Determining the adequacy of satisfaction ratings and of answers to knowledge questions requires that the answers be compared or anchored to some reference point. For example, employee morale ratings should be compared to those obtained in other similar organizations.
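The point about anchoring can be made concrete with a short sketch. All figures below are hypothetical; the benchmark values and the 1-9 scale are assumptions made for the example:

    # Hypothetical anchoring of morale ratings (1-9 scale, 5 = neutral
    # midpoint) to an external reference point rather than the midpoint.
    from statistics import mean

    audited_org = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]  # audited entity
    benchmark   = [7, 8, 7, 8, 7, 8, 8, 7, 8, 7]  # similar organizations

    print(f"Audited mean:   {mean(audited_org):.1f}")  # 6.1: above neutral
    print(f"Benchmark mean: {mean(benchmark):.1f}")    # 7.5
    # Read against the midpoint alone, the audited rating looks positive;
    # anchored to comparable organizations, it is comparatively low.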

There are a variety of scale types, each with its own methodological assumptions, advantages, and disadvantages under specific circumstances. There are also a variety of technical choices, such as the number of points on a scale, whether or not to use a mid-point, and how to label the scale, all of which have been shown to have a profound influence on the answers received. The FRL surveys should be consulted on these choices.

Relying on established instruments

Establishing validity and reliability can be time-consuming and expensive, especially in knowledge testing, the measurement of attitudes, or the assessment of employee morale. Confidence is greatest when more than one approach has been used. Considerable effort can be saved by using established instruments of known reliability and validity. The developers of established instruments may require that they be administered by specially trained personnel (perhaps their own staff), that the instruments be purchased, or that royalties be paid.

In determining the extent to which one can rely on established data collection instruments, it is important to consider the following: