In healthcare, a doctor's decisions carry enormous weight. Countless times, we have seen a physician's skill save a patient's life. Doctors are trained to think sharply and face challenges with confidence. Yet every physician and surgeon contends with an inevitable foe: decision noise. Noise is defined as unwanted variability in judgments that should be identical. Noise and bias go hand in hand; in many cases, a biased decision is the product of a noisy judgment. Decision noise is attributed to the “human factors” of the doctor.
Past Experience And Personal Choices: Causes Of Noise And Bias
Like other human beings, orthodontists have feelings, preferences, past experiences, and prejudices. Despite their best efforts, doctors may be swayed by emotion in ways that change the course of treatment. Experience itself is a major contributor to bias and noise. Doctors treat patients of different races, communities, and socioeconomic backgrounds, and many professionals unknowingly favor one patient type over another. According to a systematic review, most healthcare providers in the US appear to hold implicit bias: positive attitudes toward White patients and more negative attitudes toward patients of color.
Unfortunately, implicit biases cannot be fully eliminated from human decisions. Advances in the behavioral sciences have reduced decision noise and bias in doctors' judgments considerably, but the most direct way to remove them is to take the human out of the loop. Artificial intelligence programs have evolved dramatically in recent years and can be used to make consistent, clinically acceptable, and less biased health decisions.
How To Address Noise And Bias With Orthodontic AI
AI’s arrival in healthcare and dentistry has been welcomed as a positive step. The algorithms are accurate, and orthodontists can obtain reliable treatment plans within minutes. With AI’s implementation in diagnosis and treatment planning, behavioral scientists expected the problem of noise and bias to end. While AI effectively threw human heuristics (intuitive, emotion-driven shortcuts) out the window, biases still crept through. Noise and bias therefore need to be addressed in orthodontic AI itself. The following steps can help minimize these unwanted effects:
Good Machine Learning Practices
Numerous studies have found that the core causes of biased AI algorithms are improper model training and skewed data, and many deployed algorithms have been shown to propagate bias. The first step toward eliminating bias is therefore to make AI developers aware of where bias and noise enter their systems. Experts suggest that algorithm developers and manufacturers adopt good machine learning practices, and there is a dire need for collaboration between developers and teams with diverse expertise.
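One practical first check against skewed training data is to measure how each demographic group is represented before training begins. The sketch below assumes a simple record layout (the field names and example data are illustrative, not from the source):

```python
from collections import Counter

def representation_report(records, group_key):
    """Report each group's share of a training set.

    Groups far below their real-world prevalence are a common source
    of data skew, and therefore of biased model predictions.
    """
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical training records for an orthodontic model.
records = [
    {"patient_id": 1, "ethnicity": "A"},
    {"patient_id": 2, "ethnicity": "A"},
    {"patient_id": 3, "ethnicity": "A"},
    {"patient_id": 4, "ethnicity": "B"},
]

shares = representation_report(records, "ethnicity")
# Group "B" supplies only a quarter of the data: a flag for the developers.
```

A report like this does not fix skew by itself, but it gives the diverse development team the original paragraph calls for a concrete artifact to review.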
In 2021, the UK’s Medicines and Healthcare products Regulatory Agency, Health Canada, and the US FDA jointly published guidance stating that medical devices should follow good machine learning practice (GMLP). The 10 guiding principles listed in the publication aim to ensure safe, effective, high-quality products. Since then, there has been growing emphasis on performance metrics for automated healthcare devices and systems. These principles are not yet enforceable policy but continue to serve as a guideline.
Test Tools
The next duty lies with the users and purchasers of AI tools. Since AI became widespread in orthodontics, clinicians have grown over-reliant on automated solutions, which gives rise to another problem: the feedback loop. Research shows that feedback loops in machine learning systems can themselves generate decision noise. To counter this, the doctors and staff of an orthodontic clinic should test AI tools before putting them to use.
As a purchaser, you should test a tool within the relevant subpopulations during implementation, and again after some time in service, to catch any drift toward bias. AI saves time and effort for everyone in the practice (orthodontists, staff members, and so on), and some of that saved time is well spent testing the AI tools themselves.
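Such a subpopulation test can be sketched in a few lines: score the tool's predictions per group and flag any group that trails the best one by more than a chosen tolerance. The data, group labels, and 10% tolerance below are illustrative assumptions, not values from the source:

```python
def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy of a tool's predictions, broken down by subpopulation."""
    buckets = {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        correct, total = buckets.get(group, (0, 0))
        buckets[group] = (correct + (truth == pred), total + 1)
    return {g: correct / total for g, (correct, total) in buckets.items()}

def flag_disparity(per_group, tolerance=0.10):
    """Flag any group whose accuracy trails the best group by > tolerance."""
    best = max(per_group.values())
    return {g: (best - acc) > tolerance for g, acc in per_group.items()}

# Hypothetical spot check: ground-truth diagnoses vs. the AI tool's output.
y_true = [1, 0, 1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 1, 0, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

per_group = subgroup_accuracy(y_true, y_pred, groups)  # A: 1.0, B: 0.5
flags = flag_disparity(per_group)                      # group B flagged
```

Rerunning the same check periodically on fresh cases is one simple way to monitor the drift toward bias mentioned above.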
Standardize Data Testing
Another way to keep bias out of orthodontic AI tools is to standardize the healthcare database. Data originators and collectors must follow a standard protocol and admit only valid, unbiased data into the health database. This includes collecting the right data from connected devices such as smartphones, smartwatches, fitness bands, and medical sensors. When unbiased data is available for research and appliance development, the end product is far less likely to carry bias and noise.
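A standard intake protocol usually starts with schema validation: reject records with missing fields or implausible values before they ever enter the database. The field names and ranges below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical minimal schema for incoming cephalometric records.
REQUIRED_FIELDS = ("patient_id", "age", "sna_angle")
VALID_RANGES = {"age": (0, 120), "sna_angle": (60.0, 100.0)}

def validate_record(record):
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in record]
    for field, (low, high) in VALID_RANGES.items():
        if field in record and not low <= record[field] <= high:
            problems.append(f"{field} out of range: {record[field]}")
    return problems

good = {"patient_id": 17, "age": 14, "sna_angle": 82.0}
bad = {"patient_id": 18, "age": 14, "sna_angle": 150.0}

ok_problems = validate_record(good)   # empty list: record accepted
bad_problems = validate_record(bad)   # the out-of-range angle is reported
```

Range checks catch entry errors, not bias itself, but a database built on a shared protocol like this gives every downstream team the same clean starting point.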
The Role Of Health Organizations
Health organizations across the globe need to take these issues seriously and adopt steps to minimize errors. The joint GMLP principles described earlier are a start, but they remain guidelines rather than enforceable law. Organizations need to verify that AI programs are valid across all subpopulations, and federal agencies should require clear, accessible labeling of medical products stating the populations they are intended for. There is also a need to develop bias-detecting systems and algorithms so that problems are caught before they reach patients.
Blind Taste Test
A simple yet effective way to probe for bias in AI predictions borrows from Pepsi’s blind taste challenge. Conducted in the 1970s, the test asked people to judge a drink purely on taste, with the labels hidden to remove conscious brand preference. The same can be done with AI algorithms: train the orthodontic AI model on all the data (for example, cephalometric landmarks) and measure its behavior, then retrain it on the same data minus the one variable you suspect is causing the bias. Comparing the two models shows how much that variable drives the predictions and can guide the development of future algorithms.
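The train-then-retrain comparison can be sketched as a feature ablation. The tiny nearest-centroid classifier below is a stand-in for a real orthodontic model, and the data and column choice are invented for illustration:

```python
def train_centroid_model(X, y):
    """Fit a tiny nearest-centroid classifier (stand-in for a real model)."""
    rows_by_class = {}
    for features, label in zip(X, y):
        rows_by_class.setdefault(label, []).append(features)
    centroids = {
        label: [sum(col) / len(col) for col in zip(*rows)]
        for label, rows in rows_by_class.items()
    }
    def predict(features):
        def sq_dist(centroid):
            return sum((a - b) ** 2 for a, b in zip(features, centroid))
        return min(centroids, key=lambda label: sq_dist(centroids[label]))
    return predict

def drop_column(X, index):
    """Hide one variable, like hiding the label in the blind taste test."""
    return [row[:index] + row[index + 1:] for row in X]

def accuracy(predict, X, y):
    return sum(predict(x) == label for x, label in zip(X, y)) / len(y)

# Hypothetical landmark data; column 1 is the variable suspected of bias.
X = [[1.0, 0.0], [1.2, 0.0], [3.0, 1.0], [3.2, 1.0]]
y = [0, 0, 1, 1]

full_model = train_centroid_model(X, y)
ablated_model = train_centroid_model(drop_column(X, 1), y)

gap = accuracy(full_model, X, y) - accuracy(ablated_model, drop_column(X, 1), y)
# A near-zero gap suggests the suspect variable adds little beyond bias risk.
```

If removing the variable barely changes performance, it can be dropped from future training runs, just as Pepsi’s tasters did fine without the labels.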
Final Word
Noise and bias are unavoidable in manual orthodontics, and automation with AI can reduce implicit bias. However, other kinds of bias can still degrade the quality of AI algorithms. The best way to ensure unbiased, noise-free decisions in orthodontic AI is to adopt good machine learning practices during algorithm development. Users (orthodontists) must thoroughly test AI tools across subpopulations, health databases should be built to a high standard, and health organizations and federal agencies should design frameworks to identify and mitigate bias and noise. Finally, in the spirit of the blind taste challenge, a model can be trained on all the data and then retrained without the specific variable suspected of causing bias.
References
- Mullins, C. F., & Coughlan, J. J. (2023). Noise in medical decision making: A silent epidemic? Postgraduate Medical Journal, 99(1169), 96-100.
- Hall, W. J., Chapman, M. V., Lee, K. M., Merino, Y. M., Thomas, T. W., Payne, B. K., … & Coyne-Beasley, T. (2015). Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: A systematic review. American Journal of Public Health, 105(12), e60-e76.
- Marin, M. J., Van Wijk, X. M., & Durant, T. J. (2022). Machine learning in healthcare: mapping a Path to Title 21.
- Biswas, S., She, Y., & Kang, E. (2023, December). Towards safe ML-based systems in presence of feedback loops. In Proceedings of the 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components (pp. 18-21).
- Chakraborty, J., Majumder, S., & Menzies, T. (2021, August). Bias in machine learning software: Why? How? What to do? In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (pp. 429-440).
- Harvard Business Review (2020). A simple tactic that could help reduce bias in AI. https://hbr.org/2020/11/a-simple-tactic-that-could-help-reduce-bias-in-ai