
Health Literacy: Evaluating a Research Study

This health literacy guide is designed to help students find and assess sources of quality health information.

Help with Epidemiologic Terms

This glossary provides brief definitions of terms often found in research articles.

Evidence Pyramid

The evidence pyramid visually illustrates the relative strength of various types of research study methodologies.

Brief definitions and examples of the various research methodology types are also included in the evidence pyramid section of this guide.

Understanding Error and Causation

a. Role of chance, bias and confounding in epidemiological studies 

b. Causation in epidemiology: association and causation

The above pages are from the module Epidemiology for Practitioners (by P Sai Kumar, Imperial College London), which is one of the online learning resources available on HealthKnowledge.

c. Sources of Error - A chapter from Understanding the fundamentals of epidemiology: an evolving text (by Victor Schoenbach, Dept. of Epidemiology, Univ. of North Carolina at Chapel Hill) covering the different types of error that can affect research studies. It is somewhat technical at times but provides clear definitions and examples of types of study errors.

So How Do You Evaluate a Research Study? Step 1

It helps to first summarize key aspects about the study such as:

Study Aspect and Explanation/Guidance

1. What question is the study seeking to answer?
   The study's proposed answer to this question is called the "hypothesis."
2. What is the population being studied?
   • Human or animal?
   • Both genders or just one?
   • A specific race/ethnicity/age group?
   • How many people are in the study?
3. What is the factor or action being studied, and how was it measured?
   In epidemiologic terms, this factor or action is called the "exposure."
   • Factor examples: tobacco use, living in a rural area
   • "Action taken" examples: a new therapy or program being implemented
4. What is the outcome of interest, and how was it measured?
   Examples of outcomes: smoking cessation, longer survival time, better quality of life.
5. What type of study design was used to investigate the relationship between the exposure and the outcome?
   See the evidence pyramid section for descriptions of the different study design methodologies.
6. What was the effect of the exposure on the outcome(s)?
   Was there any relationship between the exposure and the outcome(s)? How big or strong was the effect? Was it positive or negative?
7. What conclusions did the researchers draw based on the outcomes observed?
8. Who funded the study?
   Where did the money to conduct the study come from? Did the funders have a say in the study design?

So How Do You Evaluate a Research Study? Step 2

Once you have described the study, below are some general attributes to consider in assessing its quality and strength. Of note, assessing the quality of a research study is complex and detail-intensive. As such, the table below is not an exhaustive list of factors to consider; rather, it introduces some of the "big picture" issues to evaluate.

Study Attributes to Consider and Guidance
1. Does the study design allow for making associations or cause/effect conclusions?

See the evidence pyramid section for a description of different study types and their strengths/weaknesses.

See also this list of criteria for evaluating whether an association is likely to be causal.

2. Could the study results have occurred by chance? Was the study sample size large enough to determine whether the results could have happened just by chance? The smaller the study sample, the higher the likelihood the result could be due to chance. This type of study problem is called random error. Learn more about random error.
3. Could the way the study was implemented have affected the observed results? If so, how? A researcher may have selected the appropriate study design, but if its implementation is problematic or flawed, the study results will be flawed as well. Study implementation problems are forms of study bias. Learn more about different types of bias that can affect the accuracy of a study.
4. Could the differences between the two (or more) groups being compared be due to some other factor (other than the exposure being explored) that they share? This type of problem is called confounding. See this explanation of what confounding is.
5. Do the conclusions made by the authors match the outcomes/results? If not, this is a serious red flag.
6. Are the results of the study consistent with others exploring the same question? If not, how do the researchers explain the differing result? Does their rationale make sense?
7. Are the study results generalizable to other settings or populations? Would the findings from this study apply to other populations? For example, if a study includes only children, would it also apply to adults? If a study was conducted in a predominantly white community, would the results hold true in a predominantly minority community? 
8. Is there any potential conflict of interest between the funding sources and/or the researchers? Are any potential conflicts of interest noted by the study authors? Does the funding source or the researchers have a vested interest in the study outcomes (for example, a study of a new drug conducted by the company that created the drug)? Are the researchers employed by, or stockholders in, the organizations supplying the research funding? Do the funders have a say in the study design?
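The role of chance (item 2 above) can be made concrete with a small simulation. The sketch below is illustrative only, with made-up numbers: it "repeats" a hypothetical study many times at two sample sizes and shows that the observed rate bounces around the true rate much more in small samples, which is what random error means.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_prevalence(sample_size: int, true_rate: float = 0.5,
                        trials: int = 1000) -> list[float]:
    """Repeat a hypothetical study `trials` times: draw `sample_size` subjects,
    each with probability `true_rate` of the outcome, and record the
    observed rate in each repetition."""
    results = []
    for _ in range(trials):
        cases = sum(1 for _ in range(sample_size) if random.random() < true_rate)
        results.append(cases / sample_size)
    return results

small = simulate_prevalence(sample_size=20)    # a small study
large = simulate_prevalence(sample_size=2000)  # a large study

# The spread (standard deviation) of observed rates shrinks as the sample
# grows, so a large study is less likely to give a misleading result by
# chance alone.
print(f"n=20:   spread of observed rates (SD) = {statistics.stdev(small):.3f}")
print(f"n=2000: spread of observed rates (SD) = {statistics.stdev(large):.3f}")
```

In a real article you would look for this idea expressed as confidence intervals or p-values, but the underlying logic is the same: more subjects, less room for chance.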
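Confounding (item 4 above) is easiest to see with a worked example. All counts below are entirely hypothetical: a crude comparison makes coffee drinking look harmful, but stratifying by smoking status shows that the association comes from the confounder (smoking), not the exposure.

```python
# Hypothetical counts, invented for illustration. Smoking is the confounder:
# it is associated with both the exposure (coffee) and the outcome (cases).
# Each entry is (cases, total people in that group).
strata = {
    "smokers":     {"coffee": (40, 80), "no_coffee": (10, 20)},
    "non_smokers": {"coffee": (2, 20),  "no_coffee": (8, 80)},
}

def risk(cases: int, total: int) -> float:
    return cases / total

# Crude (unstratified) comparison: pool everyone together.
crude = {}
for group in ("coffee", "no_coffee"):
    cases = sum(strata[s][group][0] for s in strata)
    total = sum(strata[s][group][1] for s in strata)
    crude[group] = risk(cases, total)

print(f"Crude risk ratio: {crude['coffee'] / crude['no_coffee']:.2f}")
# -> Crude risk ratio: 2.33  (coffee appears to more than double the risk)

# Stratified comparison: within each smoking stratum the risks are identical,
# so the crude association was produced entirely by the confounder.
for name, s in strata.items():
    rr = risk(*s["coffee"]) / risk(*s["no_coffee"])
    print(f"{name}: risk ratio = {rr:.2f}")
# -> smokers: risk ratio = 1.00
# -> non_smokers: risk ratio = 1.00
```

Well-designed studies anticipate this by randomizing, matching, or statistically adjusting for known confounders; when reading a study, check whether the authors did any of these.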