AI at RSNA 2024: Game-Changer or Overhyped?
Artificial intelligence (AI) dominated the conversation at RSNA 2024, reflecting its growing presence in radiology. Yet, while AI is often hailed as revolutionary, its current contributions remain modest—offering incremental improvements rather than transformative change. My goal at RSNA 2024 was to challenge this perspective and assess whether today’s AI tools are truly ready to revolutionize radiology.
Current AI Landscape: Incremental, Not Transformative
Current AI tools frequently focus on functionalities such as sorting worklists, offering non-definitive diagnostic suggestions, and identifying limited findings per modality. While useful, these tools fall short of addressing the core challenges radiologists face, such as overwhelming workloads and the need for near-perfect diagnostic precision. For AI to truly transform radiology, it must go beyond incremental improvements and deliver comprehensive, practical solutions.
A Radiologist’s Wishlist
If AI is to fulfill its promise, it must address our most pressing needs. Here is my personal wish list:
Enhanced Reporting Tools: AI should improve radiology reports with large language models (LLMs) capable of:
Detecting inconsistencies between findings and impressions.
Improving clarity by reducing verbosity.
Generating standardized reports while preserving individual dictation style.
Providing management recommendations and differential diagnoses derived from the imaging report text.
Translating technical jargon into patient-friendly summaries.
Comprehensive Imaging Analysis: Current AI tools are often limited to detecting one or two findings per modality, a constraint driven by U.S. FDA regulations that classify each AI-identified finding as a separate medical device requiring independent validation. This regulatory approach hampers the development of versatile, multi-functional AI systems. To advance the field, we need AI capable of analyzing all clinically relevant features in a study, supported by robust scientific validation and streamlined regulatory processes that minimize unnecessary bureaucratic hurdles.
Actionable Recommendations: AI should provide clear, evidence-based recommendations that radiologists can confidently integrate into patient care, such as "recommend follow-up imaging" for a screening mammogram. In contrast, abstract metrics like "this mammogram has a risk score of 77" lack transparency, invite inconsistent interpretation among radiologists, and risk introducing confusion and complexity. By delivering actionable insights instead of ambiguous scores, AI can enhance clinical decision-making and improve patient outcomes.
Enhanced Electronic Medical Record Integration: AI can enhance radiology by delivering concise, relevant patient information to improve clinical context during imaging interpretation. For example, in cancer follow-ups, it can highlight therapy changes, such as the start of immunotherapy, while summarizing key medical history, recent trauma, or vaccinations. By streamlining access to critical data, AI improves efficiency and diagnostic accuracy. It can also reduce vague imaging indications like “pain” or “rule out pathology,” replacing them with meaningful clinical insights for more informed interpretations.
Ethical and Practical Concerns
Ethical considerations loom large in AI's integration into radiology. Key issues include:
Bias and Fairness: AI often underperforms across diverse patient populations. Addressing these gaps is essential, but it may require trading some accuracy for fairness.
Transparency: AI vendors must always disclose false-negative rates, performance degradation over time, and limitations in generalizability.
Data Ownership: Who owns the data used to train and improve AI models? De-identified data may not remain anonymous indefinitely, raising privacy concerns.
Regulatory Challenges: Current FDA policies mandate static models that cannot continuously learn, hindering innovation. A shift toward longitudinal oversight is necessary.
Payment Concerns: Who should bear the cost of utilizing AI in radiology? What potential challenges could arise from charging additional patient fees for AI-enhanced radiology services?
Hype: The ethical implications of over-relying on AI that isn’t clinically mature are profound.
Based on my observations at RSNA, these concerns are far from hypothetical. In discussions with AI vendors, several suggested that AI could allow radiologists to reduce vigilance on cases deemed likely normal. This idea is not only premature but also deeply troubling.
Consider the example of mammography. In Europe, where AI is being tested as a tool to reduce double-reading, every mammogram is still fully reviewed by a human radiologist. Yet on the RSNA exhibition floor, some vendors implied that U.S. radiologists might rely on AI and merely skim mammograms flagged as normal. This would effectively transform our current single-reader model into a dangerous "half-reader" approach for cases AI deems unremarkable. Such a shift is alarming, especially since these algorithms have not yet undergone sufficient validation nor demonstrated the reliability required to justify such a fundamental change in practice. These proposals are not just premature—they are potentially hazardous.
Imagine telling a patient, “The AI suggested your mammogram looks normal, so your radiologist did not conduct a thorough review.” Trust in medical care is built on thoroughness, not shortcuts marketed as efficiency. Surveys consistently show that patients are skeptical of AI for standalone diagnosis and expect human radiologists to evaluate their images thoroughly. Vendors who imply that radiologists can reduce vigilance because of AI risk undermining thorough patient care—a fundamental principle of radiology.
The Path Forward
Despite my critiques, I saw genuine signs of progress. A handful of AI tools offered promising features, such as:
Guiding radiologists' attention to specific areas of interest, with gains in sensitivity and specificity demonstrated in objective scientific studies.
Seamlessly integrating imaging findings and measurements into reports.
Reconstruction algorithms that reduce radiation or contrast dose and improve image quality.
A Call to Objectivity
Radiologists must remain vigilant, questioning AI's role at every turn. Together with AI developers and the media, we must adopt an objective, evidence-based approach to AI. Hype can undermine patient care by fostering unrealistic expectations and encouraging the premature or risky use of AI technologies. While RSNA 2024 showcased exciting advances, it also underscored how far we are from AI truly revolutionizing radiology.
As one lecture concluded: “AI is here—embrace it, validate it, advance it.” I would add: “and approach it with realism and a healthy dose of skepticism.”
The Revolution is Coming
There’s no doubt in my mind that AI will transform radiology in ways we can both anticipate and scarcely imagine. However, after attending RSNA 2024, I am convinced that this revolution has not yet arrived. Whether truly groundbreaking AI for radiology will emerge next year or decades from now remains uncertain.
What gives me pause are the questions I often hear from family, friends, and even healthcare administrators—questions like, “Can’t an AI program just replace what you do?” My answer is simple: no, not yet.
While AI's transformative potential is undeniable, its current state demands rigorous validation and realistic expectations. The winds of revolution are stirring—but radiologists' human expertise continues to anchor the field.