Bias In Orthodontic AI: Case Studies And Solutions

In general terms, bias refers to disparities in the performance of a predictive task across different groups and subgroups, disparities that compound existing inequities. In healthcare, bias is an inaccurate evaluation of a condition or person that can have a positive or negative impact. Throughout this article, “bias” means negative implicit bias.

Bias is a common problem in healthcare. It exists because of differences between groups within societies, and those differences produce inequities in:

  • Socioeconomic status
  • Race
  • Ethnicity
  • Religion
  • Gender
  • Disability
  • Sexual orientation

These inequities can translate into lower-quality healthcare for certain groups. A major cause of bias is the underrepresentation of such groups in study datasets.

Bias In AI-Based Orthodontics And Healthcare

Artificial intelligence has the potential to revolutionize healthcare. By removing the racial and socioeconomic inclinations present in human decisions, AI can minimize bias; however, it is also prone to reinforcing it. In orthodontics, bias can lead to medically neglected groups and undertreatment.

For a better understanding, experts divide bias in the medical field into three dimensions:

Data-Driven

AI in healthcare relies heavily on deep learning, which learns from examples rather than from human-written rules. If the provided data is biased, AI may reinforce the bias already present in studies and clinical trials. Reports show that bias in training data can lead to discriminatory predictions: because a model is a servant to the data it is given, it cannot generalize accurately to groups that are underrepresented in that data. The sketch below illustrates the mechanism.
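As a minimal sketch of this failure mode, the following Python example (synthetic data, scikit-learn assumed; the feature-outcome relations are invented for illustration) trains one classifier on a dataset dominated by a majority group and then evaluates it separately per group:

```python
# Minimal sketch: underrepresentation in training data degrades performance
# for the minority group. All data and relations here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's features relate to the outcome slightly differently.
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# The majority group dominates the training set.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group separately: the minority
# group's different feature-outcome relation was barely learned.
for name, shift in [("majority group", 0.0), ("minority group", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name}: accuracy = {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The single aggregate accuracy number would look acceptable here; only the per-group breakdown reveals the disparity, which is why stratified evaluation is a recurring theme in the mitigation sections below.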

Algorithmic

The algorithms built from machine learning and neural networks can themselves be subject to bias, and biased predictions can lead to further inequities in treatment. AI algorithms work on learned patterns and are devoid of emotion and empathic considerations, so they may unintentionally discriminate against people of a certain race, gender, or status through proxy indicators such as income or skin color. The sketch below shows how a proxy can leak a protected attribute even when that attribute is excluded from the model.
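A minimal sketch, assuming synthetic data and hypothetical variable names, in which historical treatment decisions were partly driven by income; dropping the protected attribute does not remove the disparity, because income stands in for it:

```python
# Minimal sketch: a proxy feature (income) leaks a protected attribute
# even after that attribute is removed from the model's inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, size=n)                           # protected attribute
income = rng.normal(loc=30 + 20 * group, scale=5, size=n)    # correlated proxy
severity = rng.normal(size=n)                                # legitimate clinical signal

# Historical decisions were partly driven by income, encoding past bias.
treated = (0.05 * income + severity + rng.normal(scale=0.5, size=n) > 2.5).astype(int)

# The protected attribute is excluded, but income stays in the feature set.
X = np.column_stack([income, severity])
pred = LogisticRegression().fit(X, treated).predict(X)

# Predicted treatment rates still differ by group via the proxy.
for g in (0, 1):
    print(f"group {g}: predicted treatment rate = {pred[group == g].mean():.2f}")
```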

The “black box” effect of AI means that the system does not reveal the internal workings by which it reached a particular diagnosis or decision. This opacity makes bias hard to detect, so the risk of it going unnoticed grows. Even so, a black box can still be audited from the outside, as sketched below.
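A minimal sketch of such an external audit: without any access to the model's internals, comparing false-negative (missed-diagnosis) rates per group on a labelled audit set can expose bias. The opaque model here is a hypothetical stand-in whose inputs are simply noisier for one group:

```python
# Minimal sketch: auditing an opaque model purely from its outputs.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
groups = rng.integers(0, 2, size=n)     # 0 = majority, 1 = minority
signal = rng.normal(size=n)             # true disease severity
y_true = (signal > 0).astype(int)       # ground-truth labels

# Hypothetical black box: its inputs are noisier for group 1, a common
# consequence of training mostly on majority-group data.
noise = rng.normal(scale=np.where(groups == 1, 2.0, 0.5))
y_pred = (signal + noise > 0).astype(int)

# Audit: compare false-negative (missed-diagnosis) rates per group.
for g in (0, 1):
    mask = (groups == g) & (y_true == 1)
    fnr = np.mean(y_pred[mask] == 0)
    print(f"group {g}: false-negative rate = {fnr:.2f}")
```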

Human (Cognitive)

Cognitive bias accounts for the lion’s share of healthcare bias. The involvement of human decisions makes any treatment prone to different types of cognitive bias. For example, researchers have reported that framing bias guided political decisions regarding “lifesaving ventilator production” during the coronavirus pandemic. During the same period, doctors were tempted to prescribe medications despite a lack of clear evidence, out of fear of inaction; this is known as action bias.

Cases Of AI Bias In Healthcare

According to a study published in The AI Ethics Journal, bias in healthcare AI arises, first, from data collection, where the characteristics of the studied population can skew the data; second, from the prejudice of the annotator (the expert who labels the data); and third, during the AI learning process itself. The study suggests increasing responsible data sharing and developing novel algorithms to minimize bias. A simple screen for the second source, annotator disagreement, is sketched below.
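A minimal sketch (the labels are hypothetical) using Cohen’s kappa from scikit-learn to screen for annotator bias: low agreement between experts labelling the same cases is a warning that individual prejudice may be leaking into the training labels:

```python
# Minimal sketch: inter-annotator agreement as a screen for label bias.
from sklearn.metrics import cohen_kappa_score

# Hypothetical diagnoses assigned to the same 12 radiographs by two experts.
annotator_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
annotator_b = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values well below 1 warrant review

# Stratifying agreement by patient group (e.g., sex) can further reveal
# whether annotators disagree more on cases from a particular group.
```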

Another study revealed that AI in healthcare can produce diagnostic bias. Gender bias was observed in the diagnosis of thoracic diseases such as pneumothorax: the AI diagnosed the disease less accurately in female patients. A specialized algorithm was therefore designed that did a great job of diagnosing the disease in women; however, it became the worst-performing algorithm for men.

Gender disparities were also noted in the diagnosis of depression. Questionnaire data showed that male depression patients were less likely to seek treatment, while no such trend was present in women. Diagnostic AI trained on such data could therefore underdiagnose depression in men.

Bias In Orthodontic AI

Computer-aided diagnosis (CAD) is an integral component of modern orthodontic treatments. Studies show that some component of bias is always involved when AI assesses radiographic data. Medical imaging datasets used for computer-aided diagnosis show a gender imbalance, and diagnosis and planning are consequently less accurate for female patients. Auditing a dataset’s composition before training, as sketched below, is a first line of defense.
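A minimal sketch, assuming hypothetical record fields and an illustrative 40% threshold, that reports how each patient group is represented in an imaging dataset and flags underrepresented ones:

```python
# Minimal sketch: audit an imaging dataset's composition before training.
from collections import Counter

# Each record describes one radiograph; the fields are illustrative only.
records = [
    {"patient_id": 1, "sex": "F", "finding": "pneumothorax"},
    {"patient_id": 2, "sex": "M", "finding": "normal"},
    {"patient_id": 3, "sex": "M", "finding": "pneumothorax"},
    # ... thousands more in a real dataset
]

def composition_report(records, attribute="sex", min_share=0.40):
    # Count each group's share and flag any group below the threshold.
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    for value, count in counts.items():
        share = count / total
        flag = "  <-- underrepresented" if share < min_share else ""
        print(f"{attribute}={value}: {count} ({share:.0%}){flag}")

composition_report(records)
```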

How Can The Issue Of Bias In AI Be Solved?

Research And Development

Researchers have a significant role to play in mitigating bias. By choosing an inclusive development process built on a multidisciplinary approach, researchers can improve studies. Healthcare AI teams can enlist expert methodologists and clinicians to identify and minimize bias in a program. Moreover, AI companies can bring in representatives of underrepresented populations to point out sources of bias. Many also suggest a “human-in-the-loop” design to control bias, in which algorithmic outputs are passed to a human for the final decision; a sketch follows.
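A minimal sketch of a human-in-the-loop gate, assuming a hypothetical model probability and illustrative thresholds that would need clinical calibration: confident predictions pass through automatically, while uncertain cases are routed to a clinician:

```python
# Minimal sketch: route uncertain model outputs to a human decision-maker.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str   # "treat", "no-treat", or "refer"
    source: str  # "model" or "clinician"

def human_in_the_loop(probability, low=0.2, high=0.8):
    # `probability` is the model's estimated probability that treatment is
    # needed; the thresholds here are illustrative, not clinically validated.
    if probability >= high:
        return Decision("treat", "model")
    if probability <= low:
        return Decision("no-treat", "model")
    return Decision("refer", "clinician")  # uncertain: the clinician decides

for p in (0.95, 0.55, 0.05):
    print(p, human_in_the_loop(p))
```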

Data Collection

Data collectors can shift to recording more data on historically underserved and minority groups to bridge the gap. With comparable research and clinical data on minority groups, AI algorithms can generalize better and therefore exhibit less bias. Training algorithm developers and data scientists to recognize bias reported in published academic journals can also be helpful. Until datasets are balanced, reweighting underrepresented groups during training, as sketched below, is a common stopgap.
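A minimal sketch (synthetic data; scikit-learn assumed) that upweights samples from an underrepresented group in inverse proportion to the group’s frequency, so the minority group’s pattern is not drowned out by the majority’s:

```python
# Minimal sketch: inverse-frequency reweighting as a stopgap for imbalance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(1100, 4))
groups = np.array([0] * 1000 + [1] * 100)   # group 1 is underrepresented
y = (X[:, 0] + (groups == 1) * X[:, 1]
     + rng.normal(scale=0.3, size=1100) > 0).astype(int)

# Weight each sample inversely to its group's frequency.
freq = np.bincount(groups) / len(groups)
weights = 1.0 / freq[groups]

unweighted = LogisticRegression().fit(X, y)
weighted = LogisticRegression().fit(X, y, sample_weight=weights)

# The reweighted model fits the minority group's relation more closely.
for name, m in [("unweighted", unweighted), ("reweighted", weighted)]:
    acc = (m.predict(X[groups == 1]) == y[groups == 1]).mean()
    print(f"{name}: accuracy on minority group = {acc:.2f}")
```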

Algorithm Development

An open science approach enables interdisciplinary collaboration, which can improve the transparency of AI algorithms. It also allows professionals to adapt algorithms to smaller, local datasets: the AI model is first trained on a broad dataset and then given to a specific user, who refines it on local data so that it better represents the patient group under consideration. A sketch of this train-broadly-then-adapt-locally pattern follows.
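A minimal sketch (synthetic data; scikit-learn assumed) in which a classifier pre-trained on a broad multi-site dataset is incrementally updated on a clinic’s smaller local dataset via partial_fit:

```python
# Minimal sketch: broad pre-training followed by local adaptation.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(4)

# Broad multi-site dataset (synthetic stand-in).
X_broad = rng.normal(size=(5000, 4))
y_broad = (X_broad[:, 0] > 0).astype(int)

# Smaller local dataset with a slightly different feature-label relation.
X_local = rng.normal(size=(200, 4))
y_local = (X_local[:, 0] + X_local[:, 2] > 0).astype(int)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_broad, y_broad, classes=np.array([0, 1]))  # broad pre-training

# Local adaptation: several passes over the clinic's own data.
for _ in range(20):
    model.partial_fit(X_local, y_local)

print("local accuracy:", (model.predict(X_local) == y_local).mean())
```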

Ethical Frameworks For AI

Experts suggest that AI companies should start developing metrics for ethics and equity in their programs, especially in healthcare AI. The FDA advises using the Total Product Lifecycle (TPLC) framework for healthcare machine and deep learning. TPLC supports an ethical analysis of AI-based medical devices across their entire lifespan, which can mitigate AI bias.

Final Word

Bias is an unavoidable feature of healthcare, and its presence compounds inequities based on race, gender, socioeconomic status, and more. Bias in healthcare can be data-driven, algorithmic, or cognitive. The underrepresentation of a group in the majority of studies makes the research biased, and when this biased data is fed into machine learning, it produces biased AI algorithms. Cognitive biases such as framing bias and action bias also play a role. To mitigate bias, AI companies can focus more on minority groups when conducting research and clinical trials, and healthcare AI firms can hire methodologists and clinicians to minimize bias. Ethical frameworks can subject AI algorithms to ethical analysis, and an open science approach can increase the transparency of AI actions.


References

  1. Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M. E., … & Staab, S. (2020). Bias in data‐driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), e1356.
  2. Velichkovska, B., Denkovski, D., Gjoreski, H., Kalendar, M., & Osmani, V. (2023, April). A Survey of Bias in Healthcare: Pitfalls of Using Biased Datasets and Applications. In Computer Science On-line Conference (pp. 570-584). Cham: Springer International Publishing.
  3. Halpern, S. D., Truog, R. D., & Miller, F. G. (2020). Cognitive bias and public health policy during the COVID-19 pandemic. JAMA, 324(4), 337-338.
  4. Ramnath, V. R., McSharry, D. G., & Malhotra, A. (2020). Do no harm: Reaffirming the value of evidence and equipoise while minimizing cognitive bias in the coronavirus disease 2019 era. Chest, 158(3), 873.
  5. Gaonkar, B., Cook, K., & Macyszyn, L. (2020). Ethical issues arising due to bias in training AI algorithms in healthcare and data sharing as a potential solution. The AI Ethics Journal, 1(1).
  6. Ganz, M., Holm, S. H., & Feragen, A. (2021). Assessing bias in medical AI. In Workshop on Interpretable ML in Healthcare at International Conference on Machine Learning (ICML).
  7. Larrazabal, A. J., Nieto, N., Peterson, V., Milone, D. H., & Ferrante, E. (2020). Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proceedings of the National Academy of Sciences, 117(23), 12592-12594.
  8. Rajkomar, A., Hardt, M., Howell, M. D., Corrado, G., & Chin, M. H. (2018). Ensuring fairness in machine learning to advance health equity. Annals of Internal Medicine, 169(12), 866-872.
  9. https://blog.mdpi.com/2024/07/02/open-science-principles-ai/
  10. https://www.fda.gov/media/122535/download
