Bias In Orthodontic AI: Ensuring Fairness In Treatment Planning

Fairness is among the core principles of healthcare. Healthcare providers are expected to act in the best interest of every patient and to provide equal treatment to all, irrespective of personal attributes. According to the World Medical Association’s Declaration of Geneva, factors like “age, disease/disability, ethnic origin, creed, nationality, gender, political affiliation, race, sexual orientation, social standing or any other factor” should not impact a physician’s duties and obligations to their patients.

Unfortunately, bias is part of human cognition and cannot be eliminated completely. Healthcare experts strive to minimize bias, but no matter how hard they try, they can never fully rid themselves of implicit biases. A modern solution is to use artificial intelligence for decision-making. AI does a great job of providing clinically acceptable and accurate diagnoses and treatment plans. However, automating these processes does not eliminate bias: AI systems can still deliver biased and unequal treatment. Developers are therefore working to increase the fairness of AI’s recommendations and make them more widely acceptable.

Why Is It Important To Ensure Fairness In Orthodontic AI?

AI and machine learning algorithms can inadvertently inherit the biases present in their training data. Submitting vast quantities of data to computerized systems makes fairness essential: if the applied system is inequitable, it can produce discriminatory healthcare outcomes. Such disparities raise ethical concerns and may lead to the neglect of certain communities. Ensuring fairness in orthodontic AI applications, and in general dentistry, is therefore both an ethical and a legal imperative. Orthodontists must provide equitable healthcare services to all patients.

Why Is Ensuring Fairness Difficult?

A major difficulty in ensuring the complete fairness of AI systems in orthodontics is identifying bias in the first place. Fairness is inherently subjective, and most AI developers operationalize it by designing algorithms that, at a minimum, do not perpetuate existing biases. However, there is no perfect scale for quantifying the level of acceptable bias in an AI-driven orthodontic system.
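
Part of the problem is that fairness admits several competing formal definitions that generally cannot all be satisfied at once. As a loose illustration (our own sketch, not part of any cited framework), the Python snippet below computes two common group-fairness metrics on hypothetical predictions; the patients, predictions, and binary “A”/“B” grouping are all invented.

```python
# A minimal sketch of two common, often-conflicting group-fairness metrics.
# All data and group labels below are hypothetical.

def positive_rate(y_pred, group, g):
    """Share of positive predictions (e.g., 'recommend treatment') in group g."""
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(preds) / len(preds)

def true_positive_rate(y_true, y_pred, group, g):
    """Share of truly positive cases in group g that the model catches."""
    pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
    return sum(p for _, p in pairs) / len(pairs)

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
group  = ["A", "A", "A", "B", "B", "B"]

# Demographic parity: equal positive-prediction rates across groups.
dp_gap = abs(positive_rate(y_pred, group, "A") - positive_rate(y_pred, group, "B"))
# Equal opportunity: equal true-positive rates across groups.
eo_gap = abs(true_positive_rate(y_true, y_pred, group, "A")
             - true_positive_rate(y_true, y_pred, group, "B"))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.67
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.50
```

A system can close the demographic-parity gap while widening the equal-opportunity gap, which is exactly why no single number captures “acceptable” bias.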

Strategies To Mitigate Bias And Ensure Fairness: Fairness Of Artificial Intelligence Recommendations In Healthcare (FAIR) Statement

As discussed above, mitigating bias and delivering fairness are prerequisites for the smooth, wide-scale acceptance of AI algorithms. To attenuate the effects of bias and bring fairness to orthodontic AI, experts have proposed the FAIR statement (Ueda et al., 2024), whose recommendations are outlined below.

Preprocess Training Data

A major contributor to bias in AI systems is poor data, and several types of data bias can affect machine learning. For example, minority bias arises in healthcare because clinical research concentrates on majority populations while studies of minorities remain scarce. If this skewed information is used as training data for an orthodontic AI system, the resulting algorithm will reproduce the minority bias.

Thus, the first responsibility of bias minimization lies with researchers. Clinical researchers should focus more on minorities to compensate for the unequal distribution of clinical data.
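
On the development side, a standard preprocessing step is to reweight training samples so that underrepresented groups are not drowned out. The sketch below uses inverse-frequency weights on an invented, deliberately skewed group distribution; this is one common technique, not a specific FAIR-statement prescription.

```python
from collections import Counter

groups = ["majority"] * 90 + ["minority"] * 10   # deliberately skewed training set
counts = Counter(groups)
n, k = len(groups), len(counts)

# Weight each sample inversely to its group's frequency so that each group
# contributes equally, on average, to the training loss.
weights = [n / (k * counts[g]) for g in groups]

print(counts)                    # Counter({'majority': 90, 'minority': 10})
print(weights[0], weights[-1])   # ~0.56 for majority samples, 5.0 for minority
```

Libraries offer the same idea out of the box (e.g., scikit-learn’s compute_sample_weight("balanced", ...)); the point is simply that skew should be corrected before training, not discovered afterwards.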

Add Diversity

An effective method of mitigating bias is adding diversity to the representative datasets used for algorithm development. Developers must ensure the data come from people of different socioeconomic backgrounds, races, ethnic origins, and sexual orientations. This enables the development of algorithms that serve a broad spectrum of patients and thus prevents the incorporation of potential biases.
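
In practice, diversity can be checked before training begins. The sketch below (with invented group names, proportions, and tolerance) compares a dataset’s demographic make-up against a reference population and flags underrepresented groups.

```python
# Reference shares for the target patient population vs. the collected dataset.
# All names and proportions below are invented for illustration.
reference = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
dataset   = {"group_a": 0.82, "group_b": 0.14, "group_c": 0.04}

TOLERANCE = 0.5  # flag groups represented at under half their reference share

for group, ref_share in reference.items():
    ratio = dataset.get(group, 0.0) / ref_share
    status = "OK" if ratio >= TOLERANCE else "UNDERREPRESENTED"
    print(f"{group}: dataset/reference = {ratio:.2f} -> {status}")
```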

Strengthen AI Algorithms

The two major types of algorithmic bias in AI are label bias and cohort bias. Label bias arises when data are interpreted incorrectly: using the wrong labels during algorithm development distorts the treatment options the system offers. For example, racial bias exists in commercially available healthcare algorithms because developers have used cost as a proxy for healthcare needs. This incorrect label leads to an underestimation of Black patients’ needs and reinforces economic and racial bias.

Similarly, cohort bias results from designing algorithms around whatever data are most easily available, which leads to the neglect of underrepresented communities such as LGBTQ+ patients. To overcome these issues, AI developers need to ensure that the algorithm design is robust and that appropriate data and labels are used for development.
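
To make the cost-as-proxy problem concrete, the following sketch uses invented numbers to show how a model trained on observed cost would rank two groups with identical clinical need differently.

```python
# (group, true_need, observed_cost) -- all values invented for illustration.
patients = [
    ("A", 0.8, 9000),
    ("A", 0.8, 8800),
    ("B", 0.8, 5200),  # same underlying need, lower historical spending
    ("B", 0.8, 5400),
]

def mean_cost(group):
    costs = [cost for grp, _, cost in patients if grp == group]
    return sum(costs) / len(costs)

# A model trained to predict cost would score group B as "less needy",
# even though the true need (0.8) is identical in both groups.
print(mean_cost("A"), mean_cost("B"))  # 8900.0 5300.0
```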

Pay Special Attention To Complex Cases

Informativeness bias creeps in when the data used for AI system development are missing or less informative for certain groups. This is especially true of complex cases, where developers need to train the system more deliberately. For example, identifying melanoma in dark-skinned patients is more challenging than in light-skinned individuals. Robust training of the algorithm, with special attention to such features, is therefore essential before it is made available for use.
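
One practical safeguard here is stratified evaluation: reporting performance per subgroup so that a strong aggregate score cannot hide poor performance on harder or underrepresented cases. The sketch below uses invented labels and predictions.

```python
from collections import defaultdict

# (subgroup, y_true, y_pred) -- invented records for illustration.
records = [
    ("light_skin", 1, 1), ("light_skin", 0, 0), ("light_skin", 1, 1),
    ("dark_skin",  1, 0), ("dark_skin",  0, 0), ("dark_skin",  1, 0),
]

hits, totals = defaultdict(int), defaultdict(int)
for subgroup, y_true, y_pred in records:
    totals[subgroup] += 1
    hits[subgroup] += int(y_true == y_pred)

overall = sum(hits.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.2f}")       # 0.67 -- looks tolerable...
for g in totals:
    print(f"{g}: {hits[g] / totals[g]:.2f}")    # ...but one group sits at 0.33
```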

Use AI Teachers

In clinical research, a surgical AI system (SAIS) used to assess surgeon skill in robotic surgeries was found to exhibit bias (under-skilling and over-skilling biases in different scenarios). To overcome this, scientists deployed TWIX, a system that taught the AI to accompany each assessment with the visual explanation that a human assessor would otherwise have provided. This added explainability and transparency significantly reduced the biases and improved performance.
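
TWIX itself is described in Kiyasseh et al. (2023). As a loose, generic illustration of pairing a model’s assessment with a visual explanation, the sketch below computes a simple gradient-saliency map with PyTorch; the toy model and random input are placeholders, not the actual SAIS or TWIX method.

```python
import torch
import torch.nn as nn

# Toy stand-in for an assessment model; NOT the actual SAIS architecture.
model = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 2))
x = torch.rand(1, 1, 16, 16, requires_grad=True)  # placeholder "video frame"

score = model(x)[0].max()          # the assessment the model is most confident in
score.backward()                   # gradients of that score w.r.t. input pixels
saliency = x.grad.abs().squeeze()  # |d(score)/d(pixel)|: regions driving the call

print(saliency.shape)  # torch.Size([16, 16]) -- a heat map over the input
```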

Train Doctors

Mindful of the benefits of AI in healthcare, many orthodontists tend to over-rely on automated systems. This invites automation bias, which can negatively impact decisions. Overconfidence in automated systems may also create a feedback loop in which the clinician accepts whatever the algorithm proposes, even when it is wrong. To overcome these issues, doctors should be trained to stay alert to AI’s potential pitfalls and to evaluate its recommendations critically rather than accepting them by default.

Audit And Validate Algorithms

The healthcare landscape is continually changing. For example, a growing number of patients now openly identify as LGBTQ+, a community that can require specific medical attention. To keep AI systems fair, regular algorithm audits should be conducted to verify that a system remains valid for its current patient population. Orthodontic clinics should set up dedicated teams that monitor AI performance and identify potential biases.
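
A recurring audit can be as simple as recomputing a chosen per-group metric on recent cases and raising an alert when the gap crosses a threshold. The sketch below illustrates the pattern; the metric, groups, and threshold are placeholders a clinic would choose for itself.

```python
MAX_GAP = 0.10  # maximum tolerated gap in positive-prediction rates (assumed)

def audit(cases):
    """cases: (group, prediction) pairs from the latest review period."""
    rates = {}
    for group in {g for g, _ in cases}:
        preds = [p for g, p in cases if g == group]
        rates[group] = sum(preds) / len(preds)
    gap = max(rates.values()) - min(rates.values())
    if gap > MAX_GAP:
        raise RuntimeError(f"fairness audit failed: rate gap {gap:.2f}, rates {rates}")
    return rates

# Hypothetical monthly batch of treatment-plan approvals.
try:
    audit([("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)])
except RuntimeError as err:
    print(err)  # the 0.33 gap trips the alert
```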

Final Word

Mitigating bias and ensuring fairness is a crucial requirement for orthodontic AI, and different strategies target different types of bias. The first step in bias minimization is preprocessing the data before it is used for training. Using diverse data and focusing on minorities broadens the applicability of an algorithm. Developers should strengthen their algorithms by using correct labels and validating them across all communities.

Implementing AI teachers like TWIX reduces bias in AI-based surgical skill assessment. Orthodontists must be properly trained to avoid feedback loops and automation bias. Regular auditing of algorithms ensures continued validity in a rapidly evolving healthcare landscape.


References

  1. World Medical Association. WMA Declaration of Geneva. https://www.wma.net/policies-post/wma-declaration-of-geneva/
  2. Batra, A. M., & Reche, A. (2023). A new era of dental care: harnessing artificial intelligence for better diagnosis and treatment. Cureus, 15(11).
  3. Ueda, D., Kakinuma, T., Fujita, S., Kamagata, K., Fushimi, Y., Ito, R., … & Naganawa, S. (2024). Fairness of artificial intelligence in healthcare: review and recommendations. Japanese Journal of Radiology, 42(1), 3-15.
  4. Kiyasseh, D., Laca, J., Haque, T. F., Otiato, M., Miles, B. J., Wagner, C., … & Hung, A. J. (2023). Human visual explanations mitigate bias in AI-based assessment of surgeon skills. NPJ Digital Medicine, 6(1), 54.
