Quantitative marketing research

Quantitative marketing research is the application of quantitative research techniques to the field of marketing research. It has roots in both the positivist view of the world and the modern marketing viewpoint that marketing is an interactive process in which buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and Promotion. As a social research method, it typically involves the construction of questionnaires and scales. People who respond (respondents) are asked to complete the survey. Marketers use the information to identify and understand the needs of individuals in the marketplace, and to create strategies and marketing plans.


Data collection

The most popular quantitative marketing research method is the survey. Surveys typically contain a combination of structured (closed) questions and open-ended questions. Survey participants respond to the same set of questions, which allows the researcher to compare responses across different types of respondents. Surveys can be distributed in one of four ways: by telephone, by mail, in person and online (whether on mobile or desktop). Another quantitative research method is to conduct experiments on how individuals respond to different situations or scenarios. One example is A/B testing of a piece of marketing communications, such as a website landing page: visitors are shown different versions of the page, and marketers track which version is more effective.
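As a rough illustration of how such an A/B comparison might be analysed, the sketch below applies a two-proportion z-test to invented conversion counts for two landing-page variants. The visitor counts and conversion numbers are assumptions made purely for illustration; real tests are usually run through a dedicated experimentation or analytics tool.

# Hypothetical A/B test: compare conversion rates of two landing-page variants
# with a two-proportion z-test. All counts below are made up for illustration.
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                   # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # 2 * P(Z > |z|)
    return z, p_value

# Variant A: 5,000 visitors, 400 conversions; Variant B: 5,000 visitors, 460 conversions
z, p = two_proportion_z_test(400, 5000, 460, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")

A p-value below a chosen significance level (commonly 0.05) would ordinarily be taken as evidence that the two versions really do convert at different rates rather than differing by chance.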


Differences between consumer and B2B quantitative research

Quantitative research is used in both consumer research and business-to-business (B2B) research. However, consumer researchers and B2B researchers differ in how they distribute their surveys. In general, surveys are distributed online more often than in person, by telephone or by mail. In B2B research, however, online surveys are not always feasible, often because it is difficult to reach certain business decision-makers by email. As a result, B2B researchers still frequently conduct surveys by telephone.


Typical general procedure

Simply put, there are five major steps in the research process:
# Defining the problem
# Research design
# Data collection
# Data analysis
# Report writing and presentation

In more detail, the steps are:
# Problem audit and problem definition - What is the problem? What are the various aspects of the problem? What information is needed?
# Conceptualization and operationalization - How exactly do we define the concepts involved? How do we translate these concepts into observable and measurable behaviours?
# Hypothesis specification - What claim(s) do we want to test?
# Research design specification - What type of methodology to use? Examples: questionnaire, survey.
# Question specification - What questions to ask? In what order?
# Scale specification - How will preferences be rated?
# Sampling design specification - What is the total population? What sample size is necessary for this population? What sampling method to use? Probability sampling methods include cluster sampling, stratified sampling, simple random sampling, multistage sampling and systematic sampling. Nonprobability sampling methods include convenience sampling, judgement sampling, purposive sampling, quota sampling and snowball sampling. (A sketch contrasting simple random and stratified sampling follows at the end of this section.)
# Data collection - Use mail, telephone, internet or mall intercepts.
# Codification and re-specification - Make adjustments to the raw data so that they are compatible with the statistical techniques and with the objectives of the research. Examples: assigning numbers, consistency checks, substitutions, deletions, weighting, dummy variables, scale transformations, scale standardization.
# Statistical analysis - Apply various descriptive and inferential techniques (see below) to the raw data, make inferences from the sample to the whole population, and test the results for statistical significance.
# Interpret and integrate findings - What do the results mean? What conclusions can be drawn? How do these findings relate to similar research?
# Write the research report - The report usually has headings such as: 1) executive summary; 2) objectives; 3) methodology; 4) main findings; 5) detailed charts and diagrams. Present the report to the client in a 10-minute presentation and be prepared for questions.

The design step may involve a pilot study to uncover any hidden issues. The codification and analysis steps are typically performed by computer, using statistical software. The data collection steps can in some instances be automated, but often require significant manpower. Interpretation is a skill mastered only by experience.
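As a non-authoritative sketch of the sampling design step, the code below contrasts simple random sampling with proportionally allocated stratified sampling on an invented customer frame. The segment names, frame size and sample size are all assumptions made for illustration only.

# Contrast simple random sampling with stratified sampling on a hypothetical frame.
import random

random.seed(42)

# Hypothetical sampling frame: 1,000 customers tagged with a market segment
frame = [{"id": i, "segment": random.choice(["consumer", "small_biz", "enterprise"])}
         for i in range(1000)]

# Simple random sampling: every customer has the same chance of selection
srs = random.sample(frame, 100)

# Stratified sampling: sample each segment separately, in proportion to its size,
# so that smaller segments are not under-represented by chance
def stratified_sample(frame, key, n_total):
    strata = {}
    for unit in frame:
        strata.setdefault(unit[key], []).append(unit)
    sample = []
    for units in strata.values():
        n_stratum = round(n_total * len(units) / len(frame))  # proportional allocation
        sample.extend(random.sample(units, n_stratum))        # total may differ slightly from n_total due to rounding
    return sample

stratified = stratified_sample(frame, "segment", 100)

for name, sample in [("simple random", srs), ("stratified", stratified)]:
    counts = {seg: sum(u["segment"] == seg for u in sample)
              for seg in ("consumer", "small_biz", "enterprise")}
    print(name, counts)

The printed segment counts show the design choice involved: the stratified plan fixes each segment's share of the sample in advance, while the simple random plan lets those shares vary from draw to draw.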


Statistical analysis

The data acquired for quantitative marketing research can be analysed with almost any of the range of techniques of statistical analysis, which can be broadly divided into descriptive statistics and statistical inference. An important set of techniques is that related to statistical surveys. In any instance, an appropriate type of statistical analysis should take account of the various types of error that may arise, as outlined below.
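The article does not tie the analysis to any particular tool. As a minimal sketch of the distinction between descriptive statistics and statistical inference, the example below summarises invented 1-7 satisfaction ratings and then forms an approximate 95% confidence interval for the population mean; the ratings and sample size are hypothetical.

# Descriptive statistics summarise the sample; inference generalises to the population.
import statistics

ratings = [5, 6, 4, 7, 5, 6, 3, 5, 6, 7, 4, 5, 6, 5, 7, 6, 4, 5, 6, 5]  # made-up 1-7 ratings

# Descriptive statistics: describe this particular sample
mean = statistics.mean(ratings)
sd = statistics.stdev(ratings)

# Statistical inference: approximate 95% confidence interval for the population mean
# (normal approximation; with a sample this small a t critical value would be more exact)
n = len(ratings)
margin = 1.96 * sd / n ** 0.5
print(f"mean = {mean:.2f}, sd = {sd:.2f}")
print(f"95% CI for population mean: ({mean - margin:.2f}, {mean + margin:.2f})")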


Reliability and validity

Research should be tested for reliability, generalizability, and validity. Generalizability is the ability to make inferences from a sample to the population. Reliability is the extent to which a measure will produce consistent results.
* ''Test-retest reliability'' checks how similar the results are if the research is repeated under similar circumstances. Stability over repeated measures is assessed with the Pearson coefficient.
* ''Alternative forms reliability'' checks how similar the results are if the research is repeated using different forms.
* ''Internal consistency reliability'' checks how well the individual measures included in the research combine into a composite measure. Internal consistency may be assessed by correlating performance on two halves of a test (split-half reliability). The value of the Pearson product-moment correlation coefficient is adjusted with the Spearman–Brown prediction formula to correspond to the correlation between two full-length tests. A commonly used measure is Cronbach's α, which is equivalent to the mean of all possible split-half coefficients. (A sketch of both calculations follows at the end of this section.)
Reliability may be improved by lengthening the measure, that is, by adding items.
Validity asks whether the research measured what it intended to.
* ''Content validation'' (also called face validity) checks how well the content of the research relates to the variables to be studied; it seeks to answer whether the research questions are representative of the variables being researched. It is a demonstration that the items of a test are drawn from the domain being measured.
* ''Criterion validation'' checks how meaningful the research criteria are relative to other possible criteria. When the criterion is collected later, the goal is to establish predictive validity.
* ''Construct validation'' checks what underlying construct is being measured. There are three variants of construct validity: ''convergent validity'' (how well the research relates to other measures of the same construct), ''discriminant validity'' (how poorly the research relates to measures of opposing constructs), and ''nomological validity'' (how well the research relates to other variables as required by theory).
* ''Internal validation'', used primarily in experimental research designs, checks the relation between the dependent and independent variables (i.e. did the experimental manipulation of the independent variable actually cause the observed results?).
* ''External validation'' checks whether the experimental results can be generalized.
Validity implies reliability: a valid measure must be reliable. Reliability does not imply validity, however: a reliable measure is not necessarily valid.
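The following is a minimal sketch, on invented item scores, of how split-half reliability (with the Spearman–Brown correction) and Cronbach's α might be computed. The four-item scale and the respondents' answers are hypothetical.

# Internal-consistency reliability on made-up data:
# five respondents answering a four-item 1-5 agreement scale.
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, all covering the same respondents."""
    k = len(items)
    item_vars = [statistics.variance(item) for item in items]
    totals = [sum(scores) for scores in zip(*items)]          # each respondent's total score
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))

def spearman_brown(r_half):
    """Step the split-half correlation up to the reliability of the full-length test."""
    return 2 * r_half / (1 + r_half)

# Hypothetical item-by-respondent scores (4 items x 5 respondents)
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [3, 5, 4, 4, 1],
    [4, 4, 3, 5, 2],
]

# Split-half reliability: correlate totals of the two halves, then apply Spearman-Brown
half1 = [a + b for a, b in zip(items[0], items[1])]
half2 = [c + d for c, d in zip(items[2], items[3])]
r_half = statistics.correlation(half1, half2)                 # Pearson correlation (Python 3.10+)
print(f"split-half r = {r_half:.2f}, Spearman-Brown corrected = {spearman_brown(r_half):.2f}")
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")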


Types of errors

Random sampling errors:
*sample too small
*sample not representative
*inappropriate sampling method used
*random errors
Research design errors:
*bias introduced
*measurement error
*data analysis error
*sampling frame error
*population definition error
*scaling error
*question construction error
Interviewer errors:
*recording errors
*cheating errors
*questioning errors
*respondent selection error
Respondent errors:
*non-response error
*inability error
*falsification error
Hypothesis errors:
*type I error (also called alpha error)
**the study results lead to the rejection of the null hypothesis even though it is actually true
*type II error (also called beta error)
**the study results lead to the acceptance (non-rejection) of the null hypothesis even though it is actually false
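As a hedged illustration of what a type I error rate means in practice, the simulation below repeatedly draws two samples from the same invented population, so the null hypothesis is true by construction, and counts how often a test at the 5% level rejects it anyway. The population parameters, sample sizes and number of trials are arbitrary.

# Simulate the type I error rate of a two-sample test when H0 is true.
import math
import random
import statistics

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def z_test_p_value(sample_a, sample_b):
    """Two-sided p-value for a difference in means (large-sample z approximation)."""
    se = math.sqrt(statistics.variance(sample_a) / len(sample_a) +
                   statistics.variance(sample_b) / len(sample_b))
    z = (statistics.mean(sample_a) - statistics.mean(sample_b)) / se
    return 2 * (1 - normal_cdf(abs(z)))

random.seed(1)
false_positives = 0
trials = 2000
for _ in range(trials):
    a = [random.gauss(5.0, 1.0) for _ in range(100)]   # identical populations: H0 is true
    b = [random.gauss(5.0, 1.0) for _ in range(100)]
    if z_test_p_value(a, b) < 0.05:
        false_positives += 1                            # a type I error

print(f"observed type I error rate: {false_positives / trials:.3f}")  # should be close to 0.05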


See also

* Choice Modelling
* Brand strength analysis
* Data mining
* DIY research
* Enterprise Feedback Management
* Online panel
* Qualitative marketing research

