THE NEW POSITION PAPER ESTABLISHES SIX CORE PRINCIPLES FOR THE RESPONSIBLE INTEGRATION OF AI, EMPHASIZING THE ORTHODONTIST’S ROLE AS ‘HUMAN-IN-COMMAND’.

BY ORTHODONTIC PRODUCTS STAFF

As artificial intelligence becomes more integrated into orthodontic diagnostics, treatment planning, and practice management, the American Association of Orthodontists (AAO) has released a comprehensive position paper to guide its responsible implementation. Titled “Responsible Integration of Artificial Intelligence in Orthodontic Clinical Practice,” the paper establishes a principles-based framework to ensure patient safety, professional accountability, and ethical adoption of AI systems.

The guidance comes at a critical time. While rule-based AI has been part of orthodontics for years, the recent emergence of more sophisticated, adaptive systems has introduced new variables and potential risks.

“AI has been part of orthodontics for decades, primarily through rule-based systems that produce the same reliable output for a given input,” said Heather Stone Hopkins, DMD, MS, who led the AAO’s Task Force on Artificial Intelligence. “What has changed is the rise of adaptive AI systems that can learn and evolve over time. These tools may not produce the same output for the same input, which introduces new clinical risk.”

To address this, the AAO’s paper outlines six core principles: AI Governance and the Human-in-Command (HIC); Regulatory Alignment and Risk-Based Oversight; Trustworthiness and Transparency; Patient Autonomy; Education and Clinical AI Competency; and Operational Integration and Data Privacy. Together, these principles provide a roadmap for clinicians, developers, and regulators to navigate the evolving landscape of AI in orthodontics.

ESTABLISHING THE HUMAN-IN-COMMAND

Central to the AAO’s framework is the concept of the “Human-in-Command” (HIC), defined as the licensed orthodontist of record who retains ultimate authority and accountability for patient care. The paper stresses that in clinical practice, “AI systems must function as supportive tools that enhance, but never replace, the expertise of a licensed professional.”

According to Hopkins, the HIC role extends far beyond simply approving or overriding an AI-generated suggestion. “Chairside, this does not look like clicking ‘approve’ or ‘override’ on an AI output,” she explained. “It looks like the orthodontist determining whether an AI system should be relied on at all in that clinical moment and within that specific context of use.”

This responsibility includes performing due diligence before adopting any AI tool. The HIC must ensure that systems have been trained on data that is diverse and representative of their own patient population. As the paper states, without clear oversight, practices face risks including the “use beyond validated context of use,” “inappropriate or unaccountable delegation,” and an “erosion of public trust.”

Hopkins offered virtual monitoring as a practical example of how HIC oversight differs based on an AI tool’s classification. An FDA-cleared AI tool, functioning as Software as a Medical Device (SaMD), might analyze 1,000 patient images and flag only 10 for orthodontist review, having been validated to safely screen the other 990. In contrast, a supportive tool classified as Clinical Decision Support Software (CDSS) might flag images needing intervention, but the orthodontist as HIC remains responsible for reviewing all 1,000 images before any clinical decisions are made.
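To make this distinction concrete, the sketch below contrasts the two review workflows in Python. It is a minimal illustration only; the class and function names are hypothetical and are not drawn from the AAO paper or any vendor's software.

```python
from dataclasses import dataclass

@dataclass
class PatientImage:
    patient_id: str
    flagged_by_ai: bool  # set upstream by the AI analyzer

def samd_review_queue(images: list[PatientImage]) -> list[PatientImage]:
    """FDA-cleared SaMD workflow: the system is validated to screen
    autonomously, so only the flagged images (e.g., 10 of 1,000)
    reach the orthodontist for review."""
    return [img for img in images if img.flagged_by_ai]

def cdss_review_queue(images: list[PatientImage]) -> list[PatientImage]:
    """CDSS workflow: AI flags are advisory only. The orthodontist,
    as Human-in-Command, must review every image before any clinical
    decision, so the full set is returned."""
    return images

# Example: 1,000 monitoring images, 10 of which the AI has flagged
images = [PatientImage(f"pt-{i}", flagged_by_ai=(i < 10)) for i in range(1000)]
print(len(samd_review_queue(images)))  # 10   -> clinician reviews only flags
print(len(cdss_review_queue(images)))  # 1000 -> clinician reviews everything
```

The point of the contrast is not the code itself but where clinical responsibility sits: in the first workflow, the validated system absorbs part of the screening burden; in the second, the clinician's review obligation is unchanged by the AI.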

SUPPORTIVE VS. DIRECTIVE AI: UNDERSTANDING THE KEY DISTINCTIONS

The AAO’s guidance emphasizes that AI oversight is determined by function and risk, not technology type. This leads to three key classifications for clinical software:

Clinical Decision Support Software (CDSS)
CDSS is designed to assist healthcare professionals, not replace their judgment. These tools analyze or display information to inform a decision, but the orthodontist must be able to independently assess the basis for the recommendations. Because the clinician retains full oversight, CDSS tools are generally considered lower risk and are often exempt from FDA regulation as medical devices.
Orthodontic Example: Digital cephalometric analysis software that calculates standard measurements based on landmarks identified by the user. The software performs a calculation but does not interpret the findings or recommend treatment.

Software as a Medical Device (SaMD)
SaMD is defined by the International Medical Device Regulators Forum (IMDRF) as “software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.” These tools are classified as medical devices and are subject to FDA regulation because their outputs directly inform or drive treatment, and the clinician often cannot independently review the underlying logic.
Orthodontic Example: Digital bonding software that generates a custom transfer tray based on simulated bracket placement. Once the orthodontist finalizes the plan, the software autonomously designs an appliance used for clinical delivery, performing a medical function.

AI/ML SaMD
This is a subcategory of SaMD that incorporates adaptive artificial intelligence or machine learning algorithms. Unlike traditional SaMD, which uses fixed, rule-based logic, AI/ML SaMD learns from data to generate predictions or other data-driven outputs. Because these systems can evolve over time, they are subject to the FDA’s Total Product Lifecycle (TPLC) approach. This regulatory framework provides oversight across the entire life of the software, from initial design and development to post-market deployment, using pre-market validation and continuous performance monitoring to ensure the tool remains safe and effective as it learns.
Orthodontic Example: An FDA-authorized remote monitoring analyzer that uses ML algorithms to autonomously detect and quantify clinical features like aligner fit, attachment loss, or tooth movement from patient-submitted photos.

A RISK-BASED FRAMEWORK: FUNCTION OVER FORM

The AAO paper makes a critical distinction that will shape the future of AI regulation and use in the specialty. The central issue, it argues, is not the presence of AI in orthodontics, but the manner in which its use reshapes the orthodontist’s role in clinical decision-making. Oversight is determined not by the technical architecture of the AI system, but by the function it performs and the degree of influence it exerts in patient care.

This risk-based approach aligns with guidance from global regulatory bodies such as the FDA and the IMDRF. It prioritizes how a tool is used (its context of use, or COU) and the potential risk it poses to patients. As Hopkins noted, supportive AI, where a clinician can independently review and modify an output, carries lower risk. Directive AI, where outputs cannot be independently reviewed and are used to guide treatment, presents a significantly higher risk and may require FDA oversight.

The paper calls on developers to clearly define and validate the COU for all AI tools and warns against the clinical harm that can result from “off-label use and misclassification.”

BUILDING TRUST THROUGH TRANSPARENCY AND ACCOUNTABILITY

A recurring theme in the position paper is that clinical trust in AI must be earned through evidence and transparency. “Trust is not earned by performance claims alone,” the paper asserts. “It requires transparency across the full product lifecycle, clear documentation of training data sources, validation procedures, update history, and mechanisms for ongoing risk monitoring.”

The AAO outlines five pillars of trustworthiness: dataset quality and bias mitigation; rigorous clinical validation; explainability and auditability; continuous monitoring; and vendor accountability. This framework places shared responsibility on developers to build ethical tools and on clinicians to verify their integrity.

“Performance numbers alone are not enough to validate a medical AI tool,” Hopkins stated. “A claim like ‘95% accuracy’ is meaningless if the system was trained on biased, narrow, or outdated data. What you get out of an AI system is only as good as what was put into it.”

The paper calls on orthodontists to request documentation on dataset diversity and validation before adopting a tool and urges them to “decline or discontinue use of tools that lack adequate transparency.” In turn, it calls on vendors to present this information clearly so clinicians can make informed decisions.

CLOSING THE KNOWLEDGE GAP: EDUCATION AND CLINICAL COMPETENCY

To ensure orthodontists can fulfill their role as HIC, the AAO calls for a profession-wide commitment to education. The paper argues that as licensed professionals, orthodontists have an ethical and legal obligation to possess the “knowledge and judgment to evaluate, implement, and provide Human-in-Command (HIC) oversight.”

The guidance proposes a three-tier competency model:

  1. Foundational AI Literacy: A baseline understanding of AI principles, machine learning fundamentals, and data bias.
  2. AI Training for Clinical Readiness: Hands-on training with clinical AI tools, including how to interpret outputs, identify uncertainty, and assess for bias.
  3. Continuous Learning: Ongoing professional development to keep pace with AI advancements and changing regulations.

The paper warns that, without adequate training, clinicians risk “misinterpretation of AI outputs leading to treatment errors,” “over-reliance on AI without critical evaluation,” and an “erosion of clinical authority.” To close this gap, the AAO calls for integrating AI coursework into residency programs, establishing AAO-led continuing education, and encouraging vendor-sponsored training.

The ultimate goal is to create a future where AI enhances, rather than dictates, clinical care. As the paper concludes, “By prioritizing patient safety, ethical governance, and clinician oversight, orthodontists can ensure that AI becomes not a replacement for expertise but a reflection of it, a tool that amplifies the profession’s enduring commitment to excellence and the well-being of every patient.” OP

GETTING AI-READY: THREE CORE COMPETENCIES FOR YOUR PRACTICE

According to Heather Stone Hopkins, DMD, MS, practices looking to implement AI tools should first master three core competencies:

1. AI Literacy: Supportive vs. Directive Use
Understand the fundamental difference between the two main types of AI. Supportive AI, like automated ceph tracing, proposes outputs that the clinician can fully review and modify. Directive AI, like some remote monitoring platforms, screens large volumes of data without direct human review and directs the clinician’s attention to flagged issues. Recognizing this distinction is critical, as the level of reliance changes the clinical risk and the required oversight.

2. Human-in-Command Workflows and Delegation Clarity
Before implementing AI, a practice must define its internal protocols. This includes understanding state-level allowable duties for staff and establishing a clear chain of command for reviewing, escalating, and acting on AI-generated outputs. Just as a practice has risk-based infection control protocols, AI review workflows should also be predefined and scaled according to the risk level of the task.

3. Vendor Evaluation and Clinical Appropriateness
Orthodontists must develop a baseline competency in vetting AI vendors. This means knowing what documentation to request (such as information on training data, validation studies, and known limitations) and how to evaluate whether a tool is valid, appropriate, and truly necessary for the practice’s specific patient population and clinical needs. OP

Photo: ID 420281318 © Seventyfourimages | Dreamstime.com