This week we focus on using quantitative research methods in your quality management system.
Quantitative research methods
Qualitative research can often lead to quantitative research, which puts figures on issues identified. Sometimes the boundary is blurred. When the Labour Party, for instance, commissioned more than 30 focus groups before the UK's 1997 general election, this was a form of quasi-quantification of a qualitative method - the sheer weight of numbers of focus groups was felt to lend some statistical significance to the results.
Many researchers would dispute that this was possible. In general, though, it is common to quantify the trends shown by qualitative research by more conventional quantitative methods.
In a survey, a large, and thus statistically significant, sample of the population of interest is asked questions which relate to the issues at hand. Four important means of conducting surveys are by post, by telephone, in person and online.
By post. Postal surveys are the most frequently used form of survey. They are easy to use and cheap compared with personal and telephone interviewing, so can be used in situations where other methods are not practicable.
Large overall samples can be used, allowing the investigation of small market segments within acceptable statistical levels. Genuinely random samples may be identified, for example from electoral lists.
However, the questions which can be asked are necessarily simpler and the questionnaire shorter than in personal interviewing. Postal surveys are generally considered less reliable, particularly because the non-response rate (those not returning the questionnaire) is often so high that their statistical validity may be questioned; the majority of the sample, the non-respondents, might behave differently from those who have responded.
Good research design and explanations of why a survey is being conducted can encourage recipients to reply. Some postal surveys promise to make a charitable donation for every questionnaire returned, and this can motivate people to respond. Alternatively, respondents can be offered some form of reward in return for their completed questionnaires.
Some postal surveys are followed up by telephone calls to a sample of those not completing the questionnaire, to see if their responses are different from those of the people who have replied.
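The follow-up check described above can be made concrete with a minimal sketch. All figures here are invented for illustration: hypothetical satisfaction scores from postal respondents are compared with scores gathered by telephoning a sub-sample of non-respondents, and a large gap between the group means would signal possible non-response bias.

```python
# Illustrative sketch of a non-response check: compare postal
# respondents with a telephoned follow-up sub-sample of
# non-respondents. All figures are invented.

def mean(values):
    """Arithmetic mean of a non-empty sequence of numbers."""
    return sum(values) / len(values)

# Hypothetical satisfaction scores (1-5 scale) from each group.
postal_respondents = [4, 5, 4, 3, 5, 4, 4]
followed_up_nonrespondents = [2, 3, 2, 3, 2]

gap = mean(postal_respondents) - mean(followed_up_nonrespondents)

# A large gap suggests non-response bias: those who did not reply
# may hold systematically different views from those who did.
print(f"respondent mean: {mean(postal_respondents):.2f}")
print(f"non-respondent mean: {mean(followed_up_nonrespondents):.2f}")
print(f"gap: {gap:.2f}")
```

If the gap is small, the researcher can be more confident that the responses received are representative of the whole sample; if it is large, the published results should be qualified accordingly.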
By telephone. Telephone surveys can give very quick results and are often used for opinion polls if time is critical. They are also relatively cheap. In countries where telephone ownership is limited, it may be difficult to establish representative samples. Sampling can be complicated by the growth of mobile networks, the fragmentation of directory listings and the use of messaging and answering systems.
Interviews can last only a short time and the types of question are limited, partly because an interviewer cannot check visually that a question has been understood.

Personal interviewing. This is the traditional face-to-face approach to consumer market research, and it is still the most versatile.
The interviewer is in control of the interview, and can take account of interviewees' body language as well as their words. It is expensive, however, and depends on the reliability and skills of the interviewer.
Reputable agencies try to exert the necessary control over their personnel, usually by having a field manager conduct follow-ups of a sub-sample.
Online. People who use particular services on the Internet or who visit certain sites can be asked questions which reveal their preferences, experiences and behaviour. Response and analysis times can be fast. The method can be effective for targeting specific groups of Internet-using consumers, but these may not be genuinely representative consumer populations.
Surveys need to be highly user-friendly, with few questions and perhaps incentives for completion.

Good questionnaire design is crucial to successful survey research. Questionnaires must be carefully and skilfully developed.
In the first instance they must be comprehensive. If a question is omitted it will obviously not be answered, and it may be impossible to ask the respondents later. Second, the questions need to be in a language that the respondents understand, so that the answers will be clear and unambiguous.
Many words used by researchers and their clients, even those used in their everyday language, may be strange to the respondents they are testing, particularly if these include people less educated than the questionnaire designers.
For instance, words such as 'incentive', 'quota' and 'marginal' may not be properly understood by everybody. In addition, if the form of questioning is too complex or too vague it may elicit confused answers. For example, a question that asks respondents how often they take flights in a year and gives possible responses as 'very frequently', 'frequently', 'quite often', 'not very often' and 'never' may seem easy to answer, but respondents' interpretations of the terms may differ.
One respondent might consider five times a year to be frequently, while another thinks 25 times a year is frequently. The resulting analysis would not be helpful.
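One way round this ambiguity is to ask respondents for a number and let the analysis assign the bands. The sketch below is hypothetical (the band boundaries are invented, not taken from any standard): every reported figure is mapped to a range with explicit limits, so two respondents can never interpret the same label differently.

```python
# Hypothetical sketch: rather than offering vague labels such as
# 'frequently', ask for a number of flights per year and assign
# bands with explicit boundaries during analysis.

def flight_band(flights_per_year):
    """Map a reported number of flights to an unambiguous band."""
    if flights_per_year == 0:
        return "never"
    if flights_per_year <= 2:
        return "1-2 per year"
    if flights_per_year <= 6:
        return "3-6 per year"
    if flights_per_year <= 12:
        return "7-12 per year"
    return "more than 12 per year"

# The two respondents from the example above now fall into clearly
# separated bands, instead of both ticking 'frequently'.
print(flight_band(5))
print(flight_band(25))
```

The resulting categories can be compared across respondents and across surveys, which the self-interpreted labels could not.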
Finally, questionnaires should not include leading questions, that is, questions leading to answers preconceived by the researcher or the client. An example of this type of question might be 'Do you think it is right to impose a 5 per cent tax on fuel bills?' The choice of words in this question — the use of 'right' and 'impose' — is clearly prompting the answer 'No' from the respondent.
This is an obvious example; some questions are subtler, and often the person setting them will not recognise that they have asked a leading question. The most basic fault of much research is that, as a result of bad design, it produces the answers that the researcher expects or wants to hear. The questions must be neutral, to encourage the respondents to reply truthfully.