Orthodontic AI: Addressing And Overcoming Bias

Bias has long been part of the healthcare industry. Advances in medicine and surgery have given us numerous life-saving procedures, but they often cost a fortune. Socioeconomic inequalities are deeply rooted in society, and injustice and discrimination have therefore plagued healthcare from the very beginning.

AI can reduce diagnostic times and improve treatment plans for orthodontists. However, the same systems can also amplify the inequities and discrimination already present in the health system. Bias can be introduced into medical AI either intentionally or inadvertently.

Causes Of AI Bias

Bias in orthodontic AI can be attributed to numerous causes. To start with, vast amounts of data are required to train an AI program. These data are extracted from electronic health records, clinical studies, scientific research, and similar sources.

In the case of orthodontic AI, these data include landmarks on cephalograms, diagnostic criteria adopted by expert orthodontists, patient photographs, growth markers, and more. The data are then used to construct the algorithms that serve as the machine's frame of reference.

If the provided data are laden with bias, the AI cannot point it out. Because deep learning (a subclass of machine learning) requires no explicit human programming and learns whatever patterns the provided data contain, detecting bias from within the data is almost impossible. Cardiovascular research is an example: gender bias exists in the majority of studies on heart disease, with women under-represented. If such skewed data are fed into AI algorithms, the result is inaccurate clinical performance.
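
As a minimal illustration, the following Python sketch (synthetic data and scikit-learn, both assumptions for demonstration only, not from the cited studies) trains a classifier on records that under-represent one group and then scores each group separately; the under-represented group typically fares worse.

    # A minimal sketch: train on a skewed dataset (950 majority-group records,
    # only 50 minority-group records) and evaluate accuracy per group.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Hypothetical two-feature "patients"; the outcome boundary differs by group.
        X = rng.normal(shift, 1.0, size=(n, 2))
        y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
        return X, y

    X_maj, y_maj = make_group(950, shift=0.0)
    X_min, y_min = make_group(50, shift=1.5)
    model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                     np.hstack([y_maj, y_min]))

    # Balanced held-out sets reveal the gap: the under-represented group scores lower.
    for name, shift in [("majority", 0.0), ("minority", 1.5)]:
        X_test, y_test = make_group(500, shift)
        print(name, round(accuracy_score(y_test, model.predict(X_test)), 3))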

In addition to bias in data collection, the experts handling the data can introduce their own prejudices and preferences, which can compound the problem. The ultimate result is medical technology suffering from various types of bias. The most common types of bias in medicine and dentistry that AI development needs to watch for are discussed below.

Types Of Bias In Healthcare And AI

Random And Systematic Bias

Random bias exists in research because of sampling variability and is common in quantitative studies. Systematic bias arises from improper case selection for the different study groups: when not all individuals or groups have an equal chance of selection, bias is built into the results. According to a study, both random and systematic biases are present in dental research and can be compounded in artificial intelligence algorithms. The sketch below contrasts the two.
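
A small simulation (purely synthetic numbers) makes the distinction concrete: random sampling error shrinks as the sample grows, while the systematic error from skewed selection stays put no matter how much data is collected.

    # Purely synthetic demonstration: random error averages out with sample size,
    # systematic error (here, always sampling from the low end) does not.
    import numpy as np

    rng = np.random.default_rng(1)
    true_mean = 10.0  # a hypothetical measurement, e.g. in millimetres

    for n in (20, 200, 2000):
        fair = rng.normal(true_mean, 2.0, n)       # random sampling error only
        pool = rng.normal(true_mean, 2.0, n * 10)
        skewed = np.sort(pool)[:n]                 # systematic under-selection
        print(n, round(fair.mean() - true_mean, 2),
              round(skewed.mean() - true_mean, 2))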

Selection Bias

As already mentioned, selection bias arises from the personal favoring and preferences of the investigator. For example, in a study of extraction versus non-extraction treatment for a malocclusion, if an investigator consciously or unconsciously recruits more participants whose cases favor extraction, the results will tilt toward an extraction plan. If such data are then used to train a deep learning system, the resulting algorithms will favor extraction more often than is warranted (see the sketch after this list). According to a review, bias can gravely impact orthodontic outcomes. The most common types of bias seen in orthodontics include:

  • Selection bias
  • Performance bias
  • Attrition bias
  • Detection bias
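
A minimal sketch (synthetic data and scikit-learn; the feature and the extraction rates are assumptions) of how a selection-biased sample propagates into the model: training on a set where extraction-favorable cases are over-sampled inflates the extraction rate the model predicts for the true population.

    # Synthetic sketch: a sample with ~60% extraction cases is used for training,
    # while the true population rate is assumed to be ~30%.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    def sample_cases(n, p_extraction):
        # One hypothetical feature (e.g. crowding in mm); higher values favor extraction.
        y = rng.random(n) < p_extraction
        X = rng.normal(np.where(y, 6.0, 2.0), 2.0).reshape(-1, 1)
        return X, y.astype(int)

    X_biased, y_biased = sample_cases(1000, p_extraction=0.60)
    model = LogisticRegression().fit(X_biased, y_biased)

    X_pop, _ = sample_cases(5000, p_extraction=0.30)
    print("predicted extraction rate in the true population:",
          round(model.predict(X_pop).mean(), 2))  # noticeably above 0.30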

Cognitive Bias

Studies show that humans use two modes of reasoning to reach a decision. Heuristic reasoning is intuitive and automatic, while analytic reasoning is rule-based and the result of explicit processing. In healthcare, heuristic reasoning dominates, which can be problematic because it is often biased. The advent of AI may reduce cognitive bias: although artificial neural networks mimic the human brain, AI processing is analytic and rule-based.

Racial Bias

Racial bias is a serious issue in healthcare and research. It is present in conventional healthcare practice and can carry forward into AI implementations. According to a 2023 study, machine learning and artificial intelligence are now used to create craniofacial models for orthodontic treatment, yet current AI models tend to accentuate racial disparities and perform poorly when deployed on ethno-racial minorities. Efforts should therefore be made to establish algorithmic fairness and address inequities in orthodontics.

Socioeconomic Bias

Racial bias is oftentimes linked to socioeconomic bias; research shows that socioeconomic bias frequently stems from racial disparities, with an evident underestimation of sickness in Black patients. AI algorithms generally overlook such social discrepancies. The effect can be attributed to the poorer healthcare access of people with lower socioeconomic status (SES): because the healthcare data for these patients are sparser and of lower quality, AI tends to miss them, which results in lower performance.

Another study concluded that AI algorithms yield their worst predictive performance for lower-SES patients. Special indices, such as the HOUSES index, allow AI researchers to assess this bias in predictive models, as sketched below.
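
One practical way to run such an assessment is stratified evaluation: score the model separately within each SES stratum and compare. The sketch below assumes a scored patient table with hypothetical column names; the HOUSES index (or any SES measure) would supply the strata.

    # Stratified evaluation: one AUC per SES stratum; large gaps suggest bias.
    # (Each stratum must contain both outcome classes for AUC to be defined.)
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    def auc_by_stratum(df: pd.DataFrame, score_col: str,
                       label_col: str, ses_col: str) -> dict:
        return {
            stratum: roc_auc_score(grp[label_col], grp[score_col])
            for stratum, grp in df.groupby(ses_col)
        }

    # Hypothetical usage, assuming a table with columns "risk_score" (model
    # output), "outcome" (0/1), and "houses_quartile" (SES stratum):
    # print(auc_by_stratum(df, "risk_score", "outcome", "houses_quartile"))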

How To Overcome Bias In Orthodontic AI?

Experts now suggest a 4D solution to counter bias in orthodontic AI. The four D's of the fairness plan are discussed below:

Data

The primary step in eliminating bias is to regulate the data used for training AI algorithms. AI researchers and experts can create "equity checkpoints" during data collection to ensure optimal training and monitoring (a simple checkpoint is sketched after the list below). Special attention should be paid to ensuring ethnic diversity (with emphasis on neglected communities), gender balance, and socioeconomic equity in the data. The provided data should be:

  • Diverse
  • Representative of all communities
  • Unbiased
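
A minimal sketch of what an "equity checkpoint" could look like in code: before training, the demographic mix of the dataset is compared against reference population shares and under-represented groups are flagged. The group names, shares, column name, and tolerance are all assumptions for illustration.

    # Before training, compare the dataset's demographic mix against reference
    # population shares; flag groups falling below a tolerance of their share.
    import pandas as pd

    REFERENCE_SHARES = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}  # hypothetical

    def equity_checkpoint(df: pd.DataFrame, group_col: str,
                          tolerance: float = 0.5) -> list:
        observed = df[group_col].value_counts(normalize=True)
        flags = []
        for group, expected in REFERENCE_SHARES.items():
            share = observed.get(group, 0.0)
            if share < tolerance * expected:
                flags.append(f"{group}: {share:.1%} of records vs {expected:.1%} expected")
        return flags

    # Usage: pause, re-sample, or re-collect before training if flags are raised.
    # problems = equity_checkpoint(training_df, group_col="ethnicity")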

Development

Next comes the development of the algorithms. Here the programmers need to carefully select unbiased data variables, and models must be tested on different populations during development. Support vector machines (SVMs), which separate associated factors from unassociated ones in the data, are used in predictive diagnosis; the latest research shows that the greatest minimization of bias in orthodontic AI was achieved with the SVM method. A minimal SVM sketch follows.
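
For orientation, here is a minimal SVM example on synthetic data (scikit-learn; the orthodontic features and labels are invented for illustration, not taken from the cited study):

    # Synthetic sketch of an SVM for a diagnostic decision (e.g. extraction vs
    # non-extraction); features and labels are assumptions for illustration.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    # Hypothetical features: crowding (mm), overjet (mm), cephalometric angle (deg).
    X = rng.normal([4.0, 3.0, 82.0], [2.0, 1.5, 4.0], size=(600, 3))
    y = (0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.05 * X[:, 2]
         + rng.normal(0, 0.5, 600) > -0.8).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    svm.fit(X_train, y_train)
    print("held-out accuracy:", round(svm.score(X_test, y_test), 3))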

Delivery

Orthodontists and dentists must be aware of the biases in the machinery and AI interfaces they use. Doctors can consciously ensure equitable access to services for all patients while paying diligent attention to under-represented communities.

Dashboard

AI program developers need to provide a "dashboard" through which healthcare professionals can give feedback on problems in AI technologies. Clear feedback from clinicians can help developers alter algorithms so that bias is eliminated or at least minimized. A sketch of such a feedback loop follows.
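
A minimal sketch of the loop such a dashboard implies, with all field names assumed for illustration: clinicians submit a structured record when a recommendation looks wrong or biased, and developers aggregate the records when revising algorithms.

    # A structured feedback record and a stub for submitting it; in a real
    # dashboard, submit_feedback would persist to a database and notify developers.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ClinicianFeedback:
        case_id: str
        model_version: str
        recommendation: str          # what the AI suggested
        clinician_assessment: str    # what the orthodontist judged correct
        suspected_bias: str          # e.g. "racial", "socioeconomic", "none"
        submitted_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    feedback_log: list = []

    def submit_feedback(entry: ClinicianFeedback) -> None:
        feedback_log.append(entry)

    submit_feedback(ClinicianFeedback(
        case_id="demo-001", model_version="0.1",
        recommendation="extraction", clinician_assessment="non-extraction",
        suspected_bias="selection",
    ))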

Final Word

Bias in decision-making and treatment is an unavoidable part of healthcare, and it can negatively impact treatment plans. Orthodontic AI is also subject to bias. Systematic, gender, racial, and socioeconomic biases exist in orthodontic studies and can be compounded when biased data are used to design AI algorithms. Selection bias can also affect AI through the personal preferences and prejudices of study investigators. AI may, however, be spared cognitive bias, because neural networks do not rely on heuristic reasoning in treatment planning.

The best way to overcome bias in orthodontic artificial intelligence is the 4D solution: scrutinizing the data before feeding it to a computer, carefully selecting data variables during algorithm development, delivering AI-based care conscientiously (by the orthodontist), and maintaining a feedback dashboard (by AI service providers) can together help eliminate bias in orthodontic AI.


References

  1. Al Hamid, A., Beckett, R., Wilson, M., Jalal, Z., Cheema, E., Obe, D. A. J., … & Assi, S. (2024). Gender bias in diagnosis, prevention, and treatment of cardiovascular diseases: A systematic review. Cureus, 16(2).
  2. Jain, S., Debbarma, S., & Jain, D. (2016). Bias in dental research/dentistry. Annals of International Medical and Dental Research, 2(5), 5-9.
  3. Koletsi, D., Spineli, L. M., Lempesi, E., & Pandis, N. (2016). Risk of bias and magnitude of effect in orthodontic randomized controlled trials: A meta-epidemiological review. European Journal of Orthodontics, 38(3), 308-312.
  4. Hicks, E. P., & Kluemper, G. T. (2011). Heuristic reasoning and cognitive biases: Are they hindrances to judgments and decision making in orthodontics? American Journal of Orthodontics and Dentofacial Orthopedics, 139(3), 297-304.
  5. Allareddy, V., Oubaidin, M., Rampa, S., Venugopalan, S. R., Elnagar, M. H., Yadav, S., & Lee, M. K. (2023). Call for algorithmic fairness to mitigate amplification of racial biases in artificial intelligence models used in orthodontics and craniofacial health. Orthodontics & Craniofacial Research, 26, 124-130.
  6. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
  7. Juhn, Y. J., Malik, M. M., Ryu, E., Wi, C. I., & Halamka, J. D. (2024). Socioeconomic bias in applying artificial intelligence models to health care. In Artificial Intelligence in Clinical Practice (pp. 413-435). Academic Press.
  8. Juhn, Y. J., Ryu, E., Wi, C. I., King, K. S., Malik, M., Romero-Brufau, S., … & Halamka, J. D. (2022). Assessing socioeconomic bias in machine learning algorithms in health care: A case study of the HOUSES index. Journal of the American Medical Informatics Association, 29(7), 1142-1151.
  9. Mason, T., Kelly, K. M., Eckert, G., Dean, J. A., Dundar, M. M., & Turkkahraman, H. (2023). A machine learning model for orthodontic extraction/non-extraction decision in a racially and ethnically diverse patient population. International Orthodontics, 21(3), 100759.