This short article gives a brief guide to the different study types and a comparison of their advantages and disadvantages. See also Levels of Evidence.
These study designs all have similar components (as we’d expect from the PICO):
- A defined population (P) from which groups of subjects are studied
- Outcomes (O) that are measured
And for experimental and analytic observational studies:
- Interventions (I) or exposures (E) that are applied to different groups of subjects
Overview of the design tree
Figure 1 shows the tree of possible designs, branching into subgroups of study designs by whether the studies are descriptive or analytic, and by whether the analytic studies are experimental or observational. The list is not completely exhaustive but covers most basic designs.
Figure: Tree of different types of studies (Q1, 2, and 3 refer to the three questions below)
Our first distinction is whether the study is analytic or non-analytic. A non-analytic or descriptive study does not try to quantify the relationship but tries to give us a picture of what is happening in a population, e.g., the prevalence, incidence, or experience of a group. Descriptive studies include case reports, case series, qualitative studies and surveys (cross-sectional studies), which measure the frequency of several factors, and hence the size of the problem. They may sometimes also include analytic work (comparing factors; see below).
An analytic study attempts to quantify the relationship between two factors, that is, the effect of an intervention (I) or exposure (E) on an outcome (O). To quantify the effect we need to know the rate of outcomes in a comparison (C) group as well as in the intervention or exposed group. Whether the researcher actively changes a factor or imposes an intervention determines whether the study is considered observational (passive involvement of the researcher) or experimental (active involvement of the researcher).
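To make "quantifying the relationship" concrete, here is a minimal sketch, with entirely hypothetical counts, of comparing the outcome rate in an intervention group against a comparison group:

```python
# Hypothetical counts: outcome events and group sizes.
events_intervention, n_intervention = 12, 200   # 6% outcome rate
events_comparison, n_comparison = 30, 200       # 15% outcome rate

rate_intervention = events_intervention / n_intervention
rate_comparison = events_comparison / n_comparison

# Risk ratio: outcome rate with the intervention relative to the comparison
# group; values below 1 mean the intervention reduced the outcome.
risk_ratio = rate_intervention / rate_comparison
```

With these illustrative numbers the risk ratio is 0.4, i.e. the outcome occurred at 40% of the comparison group's rate.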
In experimental studies, the researcher manipulates the exposure, that is, he or she allocates subjects to the intervention or exposure group. Experimental studies, or randomised controlled trials (RCTs), are similar to experiments in other areas of science. That is, subjects are allocated to two or more groups to receive an intervention or exposure and then followed up under carefully controlled conditions. Such controlled trials, particularly if randomised and blinded, have the potential to control for most of the biases that can occur in scientific studies, but whether this actually occurs depends on the quality of the study design and implementation.
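The random allocation described here can be sketched as follows; the subject IDs, group sizes, and seed are all illustrative:

```python
import random

# Illustrative subject IDs; a real trial would enrol actual participants.
subjects = [f"subject-{i}" for i in range(1, 9)]

random.seed(42)            # fixed seed only so the example is reproducible
random.shuffle(subjects)   # random allocation: shuffle, then split into arms

half = len(subjects) // 2
intervention_arm = subjects[:half]
control_arm = subjects[half:]
```

In practice allocation is concealed and often stratified, but the core idea is the same: the researcher, not the subjects' circumstances, assigns group membership at random.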
In analytic observational studies, the researcher simply measures the exposure or treatments of the groups. Analytic observational studies include case-control studies, cohort studies and some population (cross-sectional) studies. These studies all include matched groups of subjects and assess associations between exposures and outcomes.
Observational studies investigate and record exposures (such as interventions or risk factors) and observe outcomes (such as disease) as they occur. Such studies may be purely descriptive or more analytical.
We should finally note that studies can incorporate several design elements. For example, the control arm of a randomised trial may also be used as a cohort study, and the baseline measures of a cohort study may be used as a cross-sectional study.
Spotting the Study Design
The type of study can generally be worked out by looking at three issues (as per the tree of designs in Figure 1):
Q1. What was the aim of the study?
- To simply describe a population (PO questions): descriptive
- To quantify the relationship between factors (PICO questions): analytic
Q2. If analytic, was the intervention randomly allocated?
- Yes? RCT
- No? Observational study
For observational study the main types will then depend on the timing of the measurement of outcome, so our third question is:
Q3. When were the outcomes determined?
- Some time after the exposure or intervention? cohort study (‘prospective study’)
- At the same time as the exposure or intervention? cross-sectional study or survey
- Before the exposure was determined? case-control study (‘retrospective study’ based on recall of the exposure)
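The three questions above form a small decision procedure, which can be sketched as follows (the argument names and label strings are illustrative, not standard terminology):

```python
def classify_study(aim, randomised=None, outcome_timing=None):
    """Classify a study design from the three questions in the text.

    aim: "describe" (PO question) or "quantify" (PICO question)
    randomised: for analytic studies, was the intervention randomly allocated?
    outcome_timing: outcomes determined "after", "same_time", or "before"
                    relative to the exposure or intervention
    """
    if aim == "describe":
        return "descriptive study"           # Q1: no relationship quantified
    if randomised:
        return "randomised controlled trial"  # Q2: allocation by the researcher
    # Q3: observational studies split by when outcomes were determined.
    return {
        "after": "cohort study",
        "same_time": "cross-sectional study",
        "before": "case-control study",
    }[outcome_timing]
```

For example, an analytic, non-randomised study whose outcomes were determined before the exposure was ascertained classifies as a case-control study.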
Advantages and Disadvantages of the Designs
Randomised Controlled Trial
An experimental comparison study in which participants are allocated to treatment/intervention or control/placebo groups using a random mechanism (see randomisation). Best for studying the effect of an intervention.
Advantages:
- unbiased distribution of confounders;
- blinding more likely;
- randomisation facilitates statistical analysis.
Disadvantages:
- expensive: time and money;
- volunteer bias;
- ethically problematic at times.
Crossover Design
A controlled trial where each study participant has both therapies, e.g., is randomised to treatment A first, then at the crossover point starts treatment B. Only relevant if the outcome is reversible with time, e.g., symptoms.
Advantages:
- all subjects serve as own controls and error variance is reduced, thus reducing the sample size needed;
- all subjects receive treatment (at least some of the time);
- statistical tests assuming randomisation can be used;
- blinding can be maintained.
Disadvantages:
- all subjects receive placebo or alternative treatment at some point;
- washout period lengthy or unknown;
- cannot be used for treatments with permanent effects.
Cohort Study
Data are obtained from groups who have been exposed, or not exposed, to the new technology or factor of interest (e.g. from databases). No allocation of exposure is made by the researcher. Best for studying the effect of predictive risk factors on an outcome.
Advantages:
- ethically safe;
- subjects can be matched;
- can establish timing and directionality of events;
- eligibility criteria and outcome assessments can be standardised;
- administratively easier and cheaper than RCT.
Disadvantages:
- controls may be difficult to identify;
- exposure may be linked to a hidden confounder;
- blinding is difficult;
- randomisation not present;
- for rare diseases, large sample sizes or long follow-up are necessary.
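As an illustration of how a cohort study quantifies the effect of an exposure over follow-up, here is a sketch of an incidence rate ratio; the counts and person-years are hypothetical:

```python
# Hypothetical cohort follow-up: outcome events and accumulated person-years
# in the exposed and unexposed groups.
events_exposed, person_years_exposed = 18, 1200.0
events_unexposed, person_years_unexposed = 10, 2000.0

rate_exposed = events_exposed / person_years_exposed      # events per person-year
rate_unexposed = events_unexposed / person_years_unexposed

# Incidence rate ratio: outcome rate in the exposed relative to the
# unexposed; a value above 1 suggests the exposure increases the outcome.
rate_ratio = rate_exposed / rate_unexposed
```

With these illustrative numbers the exposed group experiences the outcome at three times the rate of the unexposed group.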
Case-Control Study
Patients with a certain outcome or disease and an appropriate group of controls without the outcome or disease are selected (usually with careful consideration of appropriate choice of controls, matching, etc.) and then information is obtained on whether the subjects have been exposed to the factor under investigation.
Advantages:
- quick and cheap;
- only feasible method for very rare disorders or those with long lag between exposure and outcome;
- fewer subjects needed than cross-sectional studies.
Disadvantages:
- reliance on recall or records to determine exposure status;
- selection of control groups is difficult;
- potential bias: recall, selection.
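Because case-control studies start from the outcome rather than the exposure, the effect of exposure is usually summarised as an odds ratio. A minimal sketch with a hypothetical 2x2 table:

```python
# Hypothetical 2x2 table from a case-control study:
#                exposed  unexposed
#   cases:          40        60
#   controls:       20        80
a, b = 40, 60   # cases: exposed, unexposed
c, d = 20, 80   # controls: exposed, unexposed

odds_cases = a / b        # odds of exposure among cases
odds_controls = c / d     # odds of exposure among controls

# Odds ratio: how much more likely the cases were to have been exposed.
odds_ratio = odds_cases / odds_controls
```

Here the odds ratio is about 2.7, i.e. cases had roughly 2.7 times the odds of exposure compared with controls.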
Cross-Sectional Study (Survey)
A study that examines the relationship between diseases (or other health-related characteristics) and other variables of interest as they exist in a defined population at one particular time (i.e. exposure and outcomes are both measured at the same time). Best for quantifying the prevalence of a disease or risk factor, and for quantifying the accuracy of a diagnostic test.
Advantages:
- cheap and simple;
- ethically safe.
Disadvantages:
- establishes association at most, not causality;
- recall bias susceptibility;
- confounders may be unequally distributed;
- Neyman bias;
- group sizes may be unequal.
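Since cross-sectional studies are best for quantifying prevalence, a minimal sketch with hypothetical survey numbers:

```python
# Hypothetical cross-sectional survey: exposure and outcome measured at
# a single point in time in a defined population.
surveyed = 500          # people sampled from the population
with_condition = 45     # respondents who currently have the condition

prevalence = with_condition / surveyed   # proportion with the condition now
```

With these illustrative numbers the prevalence is 9%. Note this is a snapshot: it counts existing cases at one time, unlike the incidence measured over follow-up in a cohort study.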