Online Course

NRSG 780 - Health Promotion and Population Health

Module 3: Epidemiology

Causal Relationships

One of the leading standards for establishing causality is the Bradford Hill criteria. Through review of the literature across different types of studies, assessments are made regarding:

  1. Strength of the association
  2. Dose-response relationship – the higher the dose, the more likely the problem
  3. Consistency of the association – the relationship holds up regardless of the type of study
  4. Specificity of the association
  5. Temporal relationship – the factor is present before the onset of the problem
  6. Biological plausibility
  7. Coherence of the evidence with other studies
  8. Experimental evidence – reducing exposure lowers risk*
    * Not part of original Bradford-Hill criteria

Two key measures determine the importance of causal associations:

  1. Relative risk compares the magnitude of risk in exposed versus unexposed groups.

    For example: What is the risk of lung cancer in individuals who smoke as compared to those who do not?

  2. Population attributable risk assesses the percentage of disease in a population that is due to exposure to a risk factor.

    For example: Approximately 80% of lung cancer is attributable to cigarette smoking.
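The two measures above can be sketched as a short calculation. This is a minimal illustration with hypothetical counts (90 cases among 1,000 exposed, 10 among 1,000 unexposed, and an assumed 50% exposure prevalence); the function names are for illustration only. Population attributable risk percent is computed with Levin's formula.

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk in the exposed group divided by risk in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

def population_attributable_risk_pct(prevalence_exposure, rr):
    """Levin's formula: PAR% = Pe(RR - 1) / (1 + Pe(RR - 1)) * 100."""
    excess = prevalence_exposure * (rr - 1)
    return excess / (1 + excess) * 100

# Hypothetical cohort: 90/1000 exposed develop disease vs. 10/1000 unexposed.
rr = relative_risk(90, 1000, 10, 1000)          # relative risk of 9
par = population_attributable_risk_pct(0.5, rr) # about 80% attributable
print(rr, par)
```

With a relative risk of 9 and exposure prevalence of 50%, the formula yields roughly the 80% figure cited for smoking and lung cancer above.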

Quality of Evidence

As we know, much of clinical practice is based on tradition rather than evidence. Not every aspect of practice has been studied, and scientific knowledge is doubling at least every five years. Evidence-based practice requires that clinicians and other health care providers know the scientific literature and the quality of its evidence.

When assessing for quality of evidence, ask:

  • What types of studies have been published?
  • What are their strengths and weaknesses?
  • Is there strong evidence for causality?
  • Is there good evidence of effective interventions?

In order to assess the quality of evidence we look at the types of studies that have been done:

  • Case series and case reports, which may or may not represent the disease pattern in the population
  • Case-control studies
  • Cohort studies
  • Clinical trials – randomized controlled trials (RCTs) provide the highest-quality evidence for demonstrating causality
  • Community trials – best evidence that RCT results can benefit the general community

Quality of evidence is ranked by the U.S. Preventive Services Task Force (USPSTF) according to the types of studies that have been conducted:

  • Level I: Evidence from at least one properly randomized controlled trial.
  • Level II-1: Evidence from well-designed controlled trials without randomization.
  • Level II-2: Evidence from well-designed cohort or case-control analytic studies, preferably from more than one center or research group.
  • Level II-3: Evidence from multiple time series with or without the intervention. Dramatic results in uncontrolled experiments (such as the results of the introduction of penicillin treatment in the 1940s) could also be regarded as this type of evidence.
  • Level III: Opinions of respected authorities, based on clinical experience; descriptive studies and case reports; or reports of expert committees.

The strength of recommendations is classified by the USPSTF on an A–D and I scale based on the extent of the scientific evidence:

  • Grade A: The USPSTF recommends the service. There is high certainty that the net benefit is substantial.
  • Grade B: The USPSTF recommends the service. There is high certainty that the net benefit is moderate or there is moderate certainty that the net benefit is moderate to substantial.
  • Grade C: The USPSTF recommends selectively offering or providing this service to individual patients based on professional judgment and patient preferences. There is at least moderate certainty that the net benefit is small.
  • Grade D: The USPSTF recommends against the service. There is moderate or high certainty that the service has no net benefit or that the harms outweigh the benefits.
  • Grade I: The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of the service. Evidence is lacking, of poor quality, or conflicting, and the balance of benefits and harms cannot be determined.

Note: The lack of evidence of effectiveness, or the “I” recommendation, does not mean an intervention is ineffective. It may mean that:

  • current studies are inadequate to determine effectiveness,
  • high-quality studies have produced conflicting results,
  • evidence of significant benefits is offset by evidence of important harms from the intervention, or
  • studies of effectiveness have not been conducted.

Exercise:

Read the article "State Infant Mortality Rate Reaches Record Low" (Baltimore Sun), which uses epidemiological evidence. Look at the data and the study design carefully. Do the data support the reporter’s conclusions?

Click here for an answer to the question.

This website is maintained by the University of Maryland School of Nursing (UMSON) Office of Learning Technologies. The UMSON logo and all other contents of this website are the sole property of UMSON and may not be used for any purpose without prior written consent. Links to other websites do not constitute or imply an endorsement of those sites, their content, or their products and services. Please send comments, corrections, and link improvements to nrsonline@umaryland.edu.