Aquifer Blog

Clinical Reasoning in the Age of AI: Confronting Cognitive Deskilling

Written by Emily Stewart, MD | May 04, 2026

At Aquifer, we have been grappling with the implications of artificial intelligence (AI) in health professions education for several years, even prior to its recent acceleration into everyday learner workflows.

At AAIM’s Academic Internal Medicine Week (AIMW) this year, that long-standing concern came into sharp focus. Across sessions and conversations, a consistent theme emerged: as AI becomes embedded in how learners document, synthesize, and study, the risk of cognitive deskilling, or of learners never developing those cognitive skills in the first place, is becoming harder to ignore.

This shift is already shaping how educators approach the teaching and assessment of clinical reasoning.


Cognitive Work Is Becoming Less Visible

Generative AI has rapidly changed how learners approach core clinical tasks. Documentation can be drafted in seconds. Differential diagnoses can be expanded with minimal effort. Clinical summaries can be polished beyond a learner’s current level of training.

These capabilities introduce efficiency, but they also make it more difficult to see the thinking behind the work.

Several educators at AIMW raised a similar concern: as AI tools become embedded in routine workflows, traditional artifacts such as clinical notes and case summaries are becoming less dependable as evidence of a learner’s independent clinical reasoning.

This aligns with broader concerns in the field and literature. The AAMC has highlighted the growing impact of AI on medical education, including questions about how learners engage with clinical information and how competence should be assessed in technology-rich environments. Similarly, recent perspectives in Academic Medicine highlight the potential for overreliance on AI tools to reduce opportunities for deliberate practice in diagnostic reasoning. As Wartman and Combs argue, the integration of AI into medicine is reshaping how learners engage with knowledge, requiring a greater emphasis on higher-order clinical reasoning rather than information retrieval.1

The implication is clear: as AI improves output, educators need new ways to ensure that cognitive effort remains part of the learning process.

This shift has immediate implications for how we assess learners. When traditional artifacts of skill development can no longer be assumed to represent independent reasoning, long-standing approaches to evaluation begin to lose their footing. Educators described a growing disconnect between what is submitted and what is truly understood.

Current competency frameworks were not designed for this environment. In one session, discussion centered on whether ACGME Milestones and student performance indicators should evolve to account for AI-mediated work. The lack of clear guidance reflects how quickly the landscape is changing.

This shift is also reflected in ongoing discussions across health professions education, with recent scholarship emphasizing the need to focus more directly on reasoning processes rather than outcomes as AI becomes more integrated into training environments. Approaches that require learners to actively engage in decision-making, make their thinking explicit, and demonstrate progression over time are gaining traction as more dependable ways to assess clinical competence.

Learning Design Must Reinforce, Not Replace, Thinking

The structure of the learning environment plays a critical role in whether cognitive skills are strengthened or eroded.

When learning experiences prioritize speed or completion, they can unintentionally encourage reliance on external tools. In contrast, environments that require active participation in clinical problem-solving help preserve the mental work required for expertise.

At AIMW, there was strong interest in approaches that keep learners engaged in the full reasoning process, working through ambiguity, making decisions, and refining their thinking over time.

Case-based learning remains particularly well-suited to this need. When designed effectively, it requires learners to:

  • Interpret clinical data in context

  • Commit to diagnostic and management decisions

  • Receive feedback that targets their reasoning

This type of structured engagement helps ensure that clinical thinking is practiced, not bypassed.

Resources that integrate these elements, combining case-based learning with formative feedback and longitudinal insight into performance, can help educators maintain rigor while working within real-world time constraints.

Re-centering Clinical Reasoning in an AI-Enabled Environment

AI is not going away, and its role in clinical education will continue to expand.

The responsibility now is to ensure that its integration does not come at the expense of developing independent clinical thinkers. This requires a more deliberate approach to how we design learning and assessment. Educators will need to create environments where reasoning cannot be outsourced.

Cognitive deskilling is not inevitable—but it is a real risk if left unaddressed.

In many cases, this will mean re-evaluating long-standing assumptions about what counts as evidence of competence. Clinical notes, summaries, and other written work remain important, but they are no longer sufficient on their own. Greater emphasis must be placed on the process behind those outputs.

This shift does not require abandoning efficiency. It does require being clear about what must be preserved. Clinical reasoning develops through practice, and that practice must remain visible, intentional, and protected within the learning experience.

For organizations and educators alike, this is a moment to act with purpose. The integration of AI into health professions education is moving quickly. Ensuring that clinical reasoning remains central will depend on the choices we make now about curriculum design, assessment strategy, and the tools we adopt to support both.

The path forward requires clarity about what we value and discipline in ensuring clinical reasoning remains central, even as the tools around it evolve.

Interested in how educators are addressing this challenge?

Explore how Aquifer supports the development and assessment of clinical reasoning through case-based learning, formative feedback, and longitudinal insights.


1 Wartman, S. A., & Combs, C. D. (2018). Medical education must move from the information age to the age of artificial intelligence. Academic Medicine, 93(8), 1107–1109. https://journals.lww.com/academicmedicine/fulltext/2018/08000/medical_education_must_move_from_the_information.7.aspx