In recent years, our knowledge of risk factors relating to open-angle glaucoma (OAG) has improved substantially. A number of studies have evaluated the cross-sectional association between risk factors and OAG, whereas only a few have investigated the risk factors for glaucoma development.1 Prospective data provide better evidence on which to base inferences on causation because they establish temporality, one of the major causal criteria and an inherent difficulty in studying a disease with low incidence. The first requirement in assessing glaucoma onset is to study a cohort of individuals or patients over a period of time long enough to allow the development of the disease. The second requirement is an adequate definition of progressive change for the identification of new cases of OAG. The third requirement is the collection of all possible clinical and non-clinical information from each study participant at baseline and, whenever possible, at different times during the follow-up, until the end of the investigation. These three requirements allow all the factors noted before the occurrence of the end-point to be weighed, thus elucidating and emphasising the potential relationship between each single risk or protective factor and the studied outcome.
In OAG, the following study designs, which meet all three requirements, are most often used: longitudinal population-based studies (PBS), randomised controlled clinical trials (RCTs) and cohort studies. A few major differences exist between these three study designs. PBS are usually designed to assess the incidence of OAG in a sample of the population of a well-defined geographical area, and are generally based on two examinations taken at least four years apart. This design usually takes into account the factors collected at the first examination and those that can be retrieved during the follow-up time before the second examination. Precise information can therefore be collected only at baseline, as the indirectly acquired follow-up data cannot be matched exactly with the time at which progression occurred. RCTs are designed to evaluate the efficacy of treatments, using an untreated group or a standard-treated group as control. Information is collected at baseline and at all observation time-points until the end of the study. This design allows a very precise temporal relationship to be assessed (as the information is collected before the occurrence of the outcome), but it is always restricted to a clinically very well-defined population – those with ocular hypertension (OHT), pseudoexfoliation (PEX), etc. This limits the generalisability of the results, in contrast to PBS. Moreover, the intervention itself may influence the results. Cohort studies tend to have the same pros and cons as RCTs, differing only in that single hypothetical factors are the targets of the investigation, and the results may provide information concerning only those individuals affected by the studied condition.
An important issue to outline is the differing relative importance, in terms of ‘causality’, that should be attributed to the various factors. Indeed, some factors that precede progression – such as disc haemorrhage or a high cup-to-disc (C/D) ratio – are often strongly associated with the outcome. These cannot be interpreted as ‘causal factors’ of progression, but simply as clinically observable predictive factors. By contrast, for other factors, such as high intraocular pressure (IOP), a more relevant causal effect has been established. This review will therefore focus only on those longitudinal studies in which OAG onset was documented by the clinical detection of visual field (VF) and/or optic disc progressive changes. The data will be summarised and discussed in the context of each study design.