BACKGROUND: The mortality risk among cancer patients, measured from the time of diagnosis, is often elevated compared to that of the general population. However, for some cancer types, the patient mortality risk will over time reach the same level as the general population mortality risk. The time point at which the mortality risk reaches the level of the general population is called the cure point and is of great interest to patients, clinicians, and health care planners. In previous studies, estimation of the cure point has been handled in an ad hoc fashion, often without consideration of margins of clinical relevance.
METHODS: We review existing methods for estimating the cure point and discuss new clinically relevant measures for quantifying the mortality difference between cancer patients and the general population, which can be used for cure point estimation. The performance of the methods is assessed in a simulation study and the methods are illustrated on survival data from Danish colon cancer patients.
RESULTS: The simulations revealed that the bias of the estimated cure point depends on the measure chosen for quantifying the excess mortality, the chosen margin of clinical relevance, and the applied estimation procedure. These choices are interdependent, as the choice of mortality measure depends both on the ability to define a margin of clinical relevance and on the ability to accurately compute the mortality measure. The analysis of the cancer survival data demonstrates the importance of considering the confidence interval of the estimated cure point, as it may be wide in some scenarios, limiting the applicability of the estimated cure point.
CONCLUSIONS: Although cure points are appealing in a clinical context and have widespread applicability, estimation remains a difficult task. The estimation relies on a number of choices, each associated with pitfalls of which the practitioner should be aware.
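To make the cure-point concept concrete, the following is a minimal sketch of the definition used above: the cure point is the first time at which the excess mortality hazard of patients over the general population falls below a margin of clinical relevance. The hazard functions, their parameter values, and the margin epsilon below are illustrative assumptions, not quantities taken from the study.

```python
import numpy as np

# Assumed (hypothetical) hazards: the general-population mortality
# hazard is constant, while patients carry an additional excess
# hazard at diagnosis that decays exponentially over follow-up time.
pop_hazard = 0.02   # deaths per person-year in the general population
excess0 = 0.10      # excess hazard at diagnosis (assumed value)
decay = 0.5         # per-year decay rate of the excess hazard (assumed)

def excess_hazard(t):
    """Excess mortality hazard of patients over the population at time t."""
    return excess0 * np.exp(-decay * t)

# Cure point under a margin of clinical relevance: the first time the
# excess hazard drops to or below epsilon (an assumed margin here).
epsilon = 0.005
t_grid = np.linspace(0.0, 20.0, 2001)       # years since diagnosis
below = excess_hazard(t_grid) <= epsilon
cure_point = t_grid[np.argmax(below)]       # first grid time below the margin
```

In practice the excess hazard is not known but must be estimated from relative-survival data, which is where the choice of mortality measure, margin, and estimation procedure discussed above comes into play.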