The amount of radiation that patients are exposed to from computed tomography (CT) scans varies widely between institutions and countries, a variation driven largely by differences in how each institution configures the technical settings of its scanners, according to an international study led by UC San Francisco.
The authors recommend that consistent international standards be established for optimizing doses without sacrificing accuracy.
The study appears in the British medical journal The BMJ.
“Radiation, including radiation from CT, has been shown to be associated with an increased risk of cancer. Therefore, it is important to minimize exposures whenever possible,” said lead author Rebecca Smith-Bindman, MD, a UCSF professor of radiology, epidemiology and biostatistics, and of obstetrics, gynecology and reproductive medicine. “Our study indicates that this can be accomplished through the creation and implementation of consistent international technical standards for CT scanners.”
For such standards to be established, she said, “we need to learn how institutions set up their scanning protocols in the first place, and how to develop consensus about balancing image quality with diagnostic accuracy.”
The study used data collected from the UCSF CT International Dose Registry between November 2015 and August 2017. The researchers analyzed records of more than 2 million diagnostic CT scans conducted on 1.7 million adults in 151 institutions across seven countries (Switzerland, Netherlands, Germany, United Kingdom, United States, Israel and Japan). The records included scans of the head, abdomen, chest, and combined abdomen and chest.
The researchers controlled for variables including anatomical area scanned, patient characteristics, make and model of CT scanner, type of institution, and the scanners’ technical settings. They found that dose variations persisted even when patients were scanned on the same make and model of CT scanner or were scanned for the same clinical reason, and even after adjusting for factors such as patient size.
For example, among patients scanned for a suspected blood clot in the lungs, the average effective dose varied more than 15-fold, from 2 millisieverts (mSv) at institutions in the lowest tenth for dose to 31 mSv at those in the highest tenth. This was true even when comparing scans done on a single type of scanner. A millisievert is a unit of radiation dose; for comparison, the average background radiation that people in the U.S. receive from non-medical sources is 3 mSv per year.
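The figures above can be sanity-checked with simple arithmetic. The following sketch uses only the numbers reported in the article (31 mSv, 2 mSv and 3 mSv per year) to reproduce the fold difference and express the highest average dose in years of U.S. background exposure:

```python
# Illustrative arithmetic for the dose comparison; all figures come from the article.
high_dose_msv = 31            # average effective dose, highest-dose tenth of institutions
low_dose_msv = 2              # average effective dose, lowest-dose tenth of institutions
background_msv_per_year = 3   # typical annual U.S. non-medical background dose

fold_difference = high_dose_msv / low_dose_msv            # 15.5, i.e. "more than 15-fold"
years_of_background = high_dose_msv / background_msv_per_year

print(f"Fold difference: {fold_difference:.1f}")
print(f"Highest dose equals about {years_of_background:.1f} years of background radiation")
```

The 15.5 ratio is why the article describes the gap as "more than a 15-fold difference."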
The researchers found that the largest driver of dose variation was how providers or clinical staff at each institution set the machine’s technical parameters, including X-ray tube settings, number of scans per scanning session, and the total area of the body being scanned.
Smith-Bindman said that the current common practice, and expectation, is that CT dose standards and benchmarks are created individually by each hospital, region or country.
“This practice is driven by the belief that differences in the types and models of CT machines, and characteristics of local patient populations, require this local creation of standards,” she said. “But given our results, this does not make sense. A single set of achievable quality standards for radiation dose should be set and applied to all hospitals and imaging facilities.”
With education and collaboration among institutions, setting such standards might be easier than is widely believed, said Smith-Bindman, a member of the UCSF Philip R. Lee Institute for Health Policy Studies and the UCSF Helen Diller Family Comprehensive Cancer Center.
“The keys to optimizing CT scanning protocols are updating physician awareness, sharing best practices for optimizing doses and recalibrating expectations about image quality for a diagnostic CT scan,” she said. “Institutions that had lower doses used multiple-phase scanning infrequently and used lower X-ray tube settings across all clinical indications.”