Multifactorial epidemiology

THE FALLIBILITY OF MULTIFACTORIAL EPIDEMIOLOGY

Epidemiology is the study of the occurrence, distribution, and temporal trends of disease and health in human populations. Its primary ambition is to identify the causes of diseases, which could then justify various measures of prevention and cure.

Working as it does with humans, epidemiology is constrained by ethical considerations that forbid dangerous experiments on human subjects. With the exception of trials testing the effectiveness of vaccines and medicines that offer hope of improvement, epidemiology is unsupported by basic scientific experiments and is forced merely to observe, superficially, what goes on in the world of health and disease. Epidemiological observations are uniquely affected by many observable and unobservable influences, and thus resist clear and valid causal interpretation.

Observational studies are plagued by the usual errors, biases, and other confounding disturbances, which are only partially controllable, if at all. With rare exceptions, this leads to interpretations of causality that are inevitably based on variable judgments that cannot be objectively validated.

Tension therefore arises between judgmental epidemiology and the sensible and essential requirement for independently testable, objective evidence to justify public health policies. Unfortunately, flawed observational epidemiological studies have become a principal tool of advocacy and of public health claims, on the lame ground that they portray human experience.

Epidemiologists have long tried to characterize their discipline as a science, hoping to be endorsed by the popular perception that science deals with proven facts. This may have been true until some 50 years ago, when epidemiology helped to achieve spectacular advances in finding the necessary causes of infectious diseases – single infectious agents such as bacteria and viruses – advances that would have been impossible without the crucial contributions of other disciplines, notably bacteriology, vaccine research, and clinical studies. Removal of such individual causes by sanitation, vaccination, or medicine permitted vast natural experiments that resulted in the control or disappearance of the diseases in question and confirmed their causative roles unambiguously – confirmation that epidemiology alone could not have provided. Similar successes have been obtained for some non-infectious chronic diseases in occupational settings, where causative factors could be specifically identified and where their removal led to the control or disappearance of the related diseases.

As infectious diseases have waned, there has been a surge of diseases that are not caused by any single, specific agent but depend on a constellation of factors, and which are therefore called multifactorial diseases. Determinations of causality have remained elusive for most such conditions, and laboratory and clinical studies have proven unable to identify specific mechanisms for diseases such as cancer, cardiovascular disorders, and many other conditions.

The fundamental evidentiary problem of multifactorial epidemiology is at least threefold: a) the pervasive impossibility of cleanly measuring authentic primary data with testable and narrow margins of error; b) the impossibility of accounting for the meaning and impact of the many factors that could play a causal role in the conditions being studied; and c) the extreme difficulty of obtaining consistently replicable results, given the instability of primary data and the shifting composition and influence of potential causal factors and biases from study to study.
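As an illustration of points (b) and (c), the following minimal Python sketch (purely hypothetical, and not drawn from any study or source cited here) simulates a population in which an exposure has no causal effect on a disease, yet an unrecorded factor that raises both the odds of exposure and the odds of disease, and whose strength shifts from one "study" to the next, produces spurious and unstable risk ratios. All names and parameter values are assumptions made for the illustration.

import numpy as np

rng = np.random.default_rng(0)

def simulated_study(n, confounder_strength):
    """One observational 'study': the exposure has no causal effect on the disease,
    but an unrecorded factor u raises both the odds of exposure and of disease."""
    u = rng.normal(size=n)                                            # unrecorded factor
    exposed = rng.random(n) < 1 / (1 + np.exp(-confounder_strength * u))
    p_disease = 1 / (1 + np.exp(-(-2.0 + confounder_strength * u)))   # no exposure term
    diseased = rng.random(n) < p_disease
    return diseased[exposed].mean() / diseased[~exposed].mean()       # crude risk ratio

# The same null situation, "studied" five times as the unrecorded factor's
# influence shifts; the observed risk ratios drift although nothing is causal.
for strength in (0.5, 1.0, 1.5, 2.0, 2.5):
    print(f"confounder strength {strength:.1f}: "
          f"observed risk ratio {simulated_study(20_000, strength):.2f}")

In this toy setup every run reports an exposure-disease risk ratio above 1.0 that grows as the unrecorded factor's influence grows, although the exposure was constructed to have no effect at all; the drift across the simulated "studies" is a miniature version of the replicability problem described in point (c).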

In an attempt to overcome the impasse, a set of judgmental criteria was adopted to infer causality from observational statistics of multifactorial conditions. These include the familiar criteria of consistency, strength, specificity, temporal relationship, and coherence – of which more later – as catalogued in 1965 by A. B. Hill and named after him.[1] Bereft of quantitative and qualitative benchmarks, these criteria have remained judgmental and are not linked to independent experimental verification. In the words of the Surgeon General’s report on cigarette smoking: “The causal significance of an association is a matter of judgment.” – justifiably a prudent judgment in the case of cigarettes, owing to the exceedingly robust association of smoking with lung cancer risk.[2]

Following the Surgeon General, a succession of professional authorities have agreed that most causality determinations in multifactorial epidemiology have been, and continue to be, matters of sensible judgment. To mention just a few of these authorities: in a 1970 textbook, MacMahon and Pugh noted that “a causal association may usefully be defined as an association between categories of events or characteristics in which an alteration in the frequency or quality of one category is followed by a change in the other.”[3] In a later textbook, Kleinbaum and associates wrote: “In epidemiology we use a probabilistic framework to assess evidence regarding causality – or more properly to make causal inferences…[but] we need not regard the occurrence of the disease as a random process; we employ probabilistic considerations to express our ignorance of the causal process and how to observe it.”[4]

Doll and Peto framed the issue of multifactorial causality even more explicitly when they wrote:

"[E]pidemiological observations...have serious disadvantages... [T]hey can seldom be made according to the strict requirements of experimental science and therefore may be open to a variety of interpretations. A particular factor may be associated with some disease merely because of its association with some other factor that causes the disease, or the association may be an artifact due to some systematic bias in the information collection...

"It is commonly, but mistakenly, supposed that multiple regression, logistic regression, or various forms of standardization can routinely be used to answer the question: 'Is the correlation of exposure (E) with disease (D) due merely to a common correlation of both with some confounding factor (or factors)?'

"...Moreover, it is obvious that multiple regression cannot correct for important variables that have not been recorded at all... [T]hese disadvantages limit the value of observations in humans, but...until we know exactly how cancer is caused and how some factors are able to modify the effects of others, the need to observe imaginatively what actually happens to various different categories of people will remain." (emphasis added)[5]
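Doll and Peto's caveat about unrecorded variables can be made concrete with another small, purely illustrative Python sketch (an assumption of this text, not a reconstruction of their analysis). The true effect of the exposure on the outcome is set to zero; a recorded covariate and an unrecorded confounder both drive exposure and outcome; and an ordinary least-squares regression "adjusted" only for the recorded covariate still attributes part of the unrecorded confounder's influence to the exposure. All variable names and coefficients are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
n = 50_000

c = rng.normal(size=n)                         # recorded covariate (measured)
u = rng.normal(size=n)                         # unrecorded confounder (never measured)
e = 0.8 * c + 0.8 * u + rng.normal(size=n)     # exposure, driven by both
y = 1.0 * c + 1.0 * u + rng.normal(size=n)     # outcome: note that e has NO effect

def estimated_effect_of_e(adjustment_columns):
    """Least-squares coefficient on e, adjusting for the given columns."""
    X = np.column_stack([np.ones(n), e] + adjustment_columns)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print(f"crude estimate of e's effect:           {estimated_effect_of_e([]):+.2f}")
print(f"adjusted for the recorded covariate c:  {estimated_effect_of_e([c]):+.2f}")
print(f"adjusted for c and the unrecorded u:    {estimated_effect_of_e([c, u]):+.2f}  (true value: 0)")

Only the last adjustment, which includes the confounder that is by definition unavailable in the observational data, recovers the true value of zero; adjusting for the recorded covariate alone merely reduces the bias.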

Parallel remarks are found in the Reference Guide on Epidemiology of the Federal Judicial Center’s Reference Manual on Scientific Evidence, the principal reference for instructing US courts in regard to epidemiology. The Manual states: “…epidemiology cannot objectively prove causation; rather, causation is a judgment for epidemiologists and others interpreting the epidemiological data”;[6] “…the existence of some [associated] factors does not ensure that a causal relationship exists. Drawing causal inferences after finding an association and considering these factors requires judgment and searching analysis”;[7] and “[w]hile the drawing of causal inferences is informed by scientific expertise, it is not a determination that is made by using scientific methodology.”

Thus, while epidemiologists insist that their discipline is a science, it is clearly not the solid experimental science that produces reliable causal connections to fuel new scientific discoveries, successful technological advances, and defensible public health policies. More to the point, if multifactorial epidemiology does not operate within the framework of science, what warrants of reliability can it offer? A brief inquiry into how observational studies of multifactorial epidemiology are conducted will clarify this point.



[1] Hill AB. The environment and disease: Association or causation? Proc R Soc Med 1965;58:295-300.

[2] U.S. Surgeon General. Smoking and health. Report of the Advisory Committee to the Surgeon General of the Public Health Service. U.S. Department of Health, Education, and Welfare, Public Health Service Publication No. 1103. Washington, DC. 1964. p. 19.

[3] MacMahon B, Pugh TF. Epidemiology: principles and methods. Little, Brown, Boston. 1970.

[4] Kleinbaum DG, Kupper LL, Morgenstern H. Epidemiological research. Wadsworth, London. 1982.

[5] Doll R, Peto R. The causes of cancer: quantitative estimates of avoidable risks of cancer in the United States today. J Natl Cancer Inst 1981;66:1191-1308.

[6] Green MD, Freedman DM, Gordis L. Reference Guide on Epidemiology. In: Reference Manual on Scientific Evidence. Second edition. Federal Judicial Center, Washington, DC. 2000. p. 374.

[7] Green MD, Freedman DM, Gordis L. Reference Guide on Epidemiology. In: Reference Manual on Scientific Evidence. Second edition. Federal Judicial Center, Washington, DC. 2000. p. 374.