Considering lead-time bias in evaluating the effectiveness of lung cancer screening with real-world data
Main Authors: | |
---|---|
Format: | article |
Language: | EN |
Published: | Nature Portfolio, 2021 |
Subjects: | |
Online Access: | https://doaj.org/article/ea58d8495bae47c98e5d171cf56cb00c |
Summary: Low-dose computed tomography screening can diagnose lung cancer at a younger age than no screening, so real-world studies that observe mortality after lung cancer diagnosis are subject to lead-time bias. This study developed a method that combines a nationwide cancer registry with the stage shift observed in a trial to adjust for lead-time bias. A total of 78,897 lung cancer patients aged 55–82 from the Taiwanese nationwide registry were matched 1:10 with 788,820 referents randomly selected from the general population by age, sex, calendar year, and comorbidities, to estimate pathology- and stage-specific life expectancy (LE). Loss-of-LE is the difference between the LE of cancer patients and that of their referents. By multiplying LE and loss-of-LE by the pathology and stage shift observed in the National Lung Screening Trial (NLST), we compared the effectiveness of screening measured as LE gained versus loss-of-LE saved. The mean LEs of stage IA and stage IV adenocarcinoma were 14.5 and 1.9 years, respectively, indicating an LE gain of 12.6 years. However, the mean loss-of-LEs of stage IA and stage IV adenocarcinoma were 3.7 and 15.1 years, respectively, a saving of only 11.4 years, implying an adjustment for the different distributions of age, sex, and calendar year of diagnosis underlying the stage shift and a reduction in lead-time bias. Applying these estimates to 10,000 participants with the same pathology and stage shift as in the NLST, the benefit of screening measured as LE gained would be 410.3 years (95% prediction interval: 328.4 to 503.3), compared with 297.1 years (95% prediction interval: 187.8 to 396.4) when measured as loss-of-LE saved, indicating that the former approach would overestimate effectiveness by 38%. Our approach of multiplying loss-of-LE by the pathology and stage shift to estimate loss-of-LE saved adjusts for the different distributions of age, sex, and calendar year at early diagnosis and reduces lead-time bias.
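The arithmetic behind the comparison can be illustrated with a minimal sketch. The stage IA and stage IV adenocarcinoma values are the ones quoted in the summary; the `benefit` function, the dictionary layout, and the one-patient stage shift are hypothetical illustrations, not the study's code or the full NLST pathology-and-stage distribution.

```python
# Sketch of the benefit comparison described above: given per-stage mean LE
# (or loss-of-LE) and a stage shift (screened minus unscreened diagnoses per
# stage), the benefit is the shift-weighted sum of the per-stage metric.
def benefit(per_stage_years, stage_shift):
    """Sum of metric (years) times change in diagnoses for each stage."""
    return sum(per_stage_years[s] * stage_shift[s] for s in stage_shift)

# Stage IA / IV adenocarcinoma values quoted in the summary (years).
life_expectancy = {"IA": 14.5, "IV": 1.9}
loss_of_le = {"IA": 3.7, "IV": 15.1}

# Hypothetical illustration: screening shifts one diagnosis from stage IV to IA.
shift = {"IA": +1, "IV": -1}

le_gained = benefit(life_expectancy, shift)       # 14.5 - 1.9 = 12.6 years
loss_of_le_saved = -benefit(loss_of_le, shift)    # 15.1 - 3.7 = 11.4 years

print(f"LE gained:        {le_gained:.1f} years")
print(f"Loss-of-LE saved: {loss_of_le_saved:.1f} years")
```

Because loss-of-LE is measured against referents matched on age, sex, calendar year, and comorbidities, the 11.4-year saving is smaller than the 12.6-year LE gain; at the cohort level the same gap appears as the 38% overestimation reported in the summary.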