The development of automated clinical applications aims to reduce clinicians' workload and improve the quality of care. Artificial intelligence and machine learning are relatively new to medicine, and despite their immense potential in healthcare, certain challenges stand in the way of widespread acceptance.
The greatest obstacle for organizations embracing artificial intelligence is data security. Preserving the privacy of patient data is a core responsibility for healthcare providers, and patient data leaks have already caused considerable financial and reputational losses. Thus, many service providers (including orthodontists) are reluctant to blindly hand data over to a computer program.
Another important issue closely related to the ethical implications of AI in orthodontics is bias. Human decisions in healthcare are often subject to implicit bias: doctors unconsciously carry prejudices and stereotypes formed by prior life experiences. This can lead to unjustified preferential treatment of certain patients while other groups and communities are neglected.
One proposed remedy was to limit human interference in the decision-making process. The computing revolution led to machine learning algorithms that learn patterns from data rather than following rules explicitly coded by a human. Soon came architectures loosely inspired by the human brain, such as artificial neural networks and convolutional neural networks. These systems process information in ways analogous to the brain but were expected to be free of prejudice and bias.
However, the implementation of AI has not been as satisfying as expected, for several reasons. Though artificial intelligence can eliminate some types of bias in healthcare, it has inadvertently amplified others. Let's first look at how AI has gotten rid of some major biases in healthcare.
Eliminating Bias In Orthodontics: How Far Has AI Come?
Cognitive biases lay the foundation for poor medical outcomes. Anchoring and availability heuristics are known to influence doctors' diagnoses, and artificial intelligence has been at the forefront of breaking these biases.
The main cognitive biases an orthodontist encounters include:
Confirmation Bias
This is a common bias in healthcare, especially orthodontics. In confirmation bias, the orthodontist picks up the clinical evidence they believe is important and neglects any data that contradicts it. Some orthodontists refuse to consider an alternative diagnosis once an initial diagnosis has been established, even when radiographic data contradicts it.
AI throws confirmation bias out of the window: instead of selectively taking up evidence, it bases decisions on extensive training across multiple datasets. A trained machine learning model has learned from numerous situations and returns the same unprejudiced answer for the same input every time.
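To make that determinism concrete, here is a minimal sketch of a one-feature classifier; the "crowding score" feature, the training pairs, and the threshold rule are all hypothetical, for illustration only:

```python
def train_threshold_classifier(examples):
    """Fit a one-feature rule: predict 1 when the feature value exceeds
    the midpoint between the two class means. A toy stand-in for a
    real machine learning model."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x > threshold else 0

# Hypothetical (crowding score, needs-treatment) training pairs.
predict = train_threshold_classifier([(2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1)])

# Once trained, the rule is deterministic: the same input always
# yields the same answer, regardless of who asks or when.
print(predict(6.5), predict(6.5))  # 1 1
```

Unlike a clinician, the fitted rule cannot be swayed by first impressions or feelings about the patient; whether that rule is *fair*, however, depends entirely on the data it was trained on.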
Anchoring Bias
In anchoring bias, the doctor persists in prioritizing information that supports their initial impression of the evidence, even when that first impression is incorrect. Anchoring bias can delay workup and diagnosis.
AI is free of anchoring bias because it thoroughly analyzes the complete patient dataset before arriving at a diagnosis.
Affect Heuristics
An orthodontist's actions are subject to affect heuristics when their decisions are influenced by emotional reactions instead of deliberate, rational thinking. Patients may receive altered treatment if the doctor has negative, or overwhelmingly positive, feelings towards them.
AI is devoid of any such emotions: diagnoses and clinical decisions are based entirely on the severity of the condition. Because machine learning models have learned which methods respond best in a given condition, they can usually offer a reliable, rational decision for most orthodontic problems.
Outcome Bias
Many doctors assume that the relation between decisions and outcomes is intuitive, meaning treatment results (good or bad) are always attributable to prior decisions. This prevents them from getting valid feedback and improving their clinical performance.
The most commendable aspect of AI is that it is continually evolving. Modern AI-based programs continually adjust treatment plans according to the patient's needs, minimizing reliance on initial decisions.
Bias In AI Orthodontics: Ethical Challenges Exist
Artificial intelligence is an effective way of minimizing cognitive bias in healthcare. However, the technology is not yet perfect: certain biases are jeopardizing the reliability of automated healthcare.
Over-reliance on AI for orthodontic solutions can perpetuate bias; bias and discrimination in AI are well-documented issues. The primary cause of this ethical dilemma is a lack of balanced data, and inaccuracies become more pronounced with self-reported data. For example, during the recent COVID-19 pandemic, imbalanced self-reported data was fed into machine learning systems, which absorbed the bias. In other cases, it is the interpretation of the data that is suboptimal.
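A simple way to catch such imbalance before training is to count examples per subgroup. The sketch below assumes records stored as dicts with a `group` field; the records and field name are hypothetical:

```python
from collections import Counter

def subgroup_shares(records, group_key):
    """Return each subgroup's share of the training data. A heavily
    skewed distribution warns that a model trained on these records
    may absorb the imbalance as bias."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical self-reported records, skewed toward group "A".
records = [{"group": "A"}, {"group": "A"}, {"group": "A"}, {"group": "B"}]
print(subgroup_shares(records, "group"))  # {'A': 0.75, 'B': 0.25}
```

A check like this is cheap to run on every training set and makes the skew visible before, rather than after, the model has learned it.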
Recently, an AI algorithm used by US health insurers was trained to allocate medical resources based on disease severity and medical billing records. Evident discrepancies (biases) were found in the model: because it used past received treatment as a proxy for need, the algorithm inferred that Black patients had a lesser need for treatment. Compounding the socioeconomic bias, the model allocated more treatment resources to a white patient than to a Black patient with the same conditions, and, contrary to societal values, it continued to offer fewer treatment resources to Black patients with more health complications (who required better care).
Getting to the core of the problem, we find that data aggregation is a major culprit. Data should therefore be disaggregated when training and evaluating an AI model, and the training data should be carefully scrutinized and tested for bias. To improve the fairness of AI, we can also adopt "selective forgetfulness" and withhold information (such as the economic status of a patient from a neglected community) that may promote discrepancies.
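The two remedies above can be sketched in a few lines: a disaggregated evaluation that reports accuracy per subgroup, and a "forgetfulness" step that drops sensitive features before a record reaches the model. Everything here (field names, records, the toy predictor) is hypothetical, for illustration only:

```python
def disaggregated_accuracy(examples, predict):
    """Score the model separately on each subgroup, so a performance
    gap between groups cannot hide inside one aggregate number."""
    by_group = {}
    for ex in examples:
        by_group.setdefault(ex["group"], []).append(ex)
    return {
        group: sum(predict(ex) == ex["label"] for ex in exs) / len(exs)
        for group, exs in by_group.items()
    }

def forget_sensitive(example, sensitive=("income", "zip_code")):
    """'Selective forgetfulness': drop features that proxy for
    socioeconomic status before the record reaches the model."""
    return {k: v for k, v in example.items() if k not in sensitive}

# Hypothetical records; 'severity' is an invented clinical feature.
examples = [
    {"group": "A", "severity": 3, "income": 90, "label": 1},
    {"group": "A", "severity": 1, "income": 80, "label": 0},
    {"group": "B", "severity": 3, "income": 20, "label": 1},
    {"group": "B", "severity": 2, "income": 10, "label": 1},
]
predict = lambda ex: 1 if ex["severity"] >= 3 else 0
print(disaggregated_accuracy(examples, predict))  # {'A': 1.0, 'B': 0.5}
print(forget_sensitive(examples[0]))              # no 'income' key
```

The per-group report immediately exposes that the toy model serves group "B" worse than group "A", a gap the overall accuracy (0.75) would have hidden.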
Final Word
For decades, clinicians have been trying to minimize the problem of bias in medical decisions. Implicit bias is part and parcel of conventional medicine. In their daily routine, orthodontists are exposed to various types of biases, including confirmation bias, anchoring bias, affect heuristics, and outcome bias, and these biases impact treatment results. Artificial intelligence helps doctors get rid of these biases but may still promote other biases and inequities. For example, several AI algorithms (developed on socioeconomic data) have been found to amplify racial and economic bias. Therefore, steps like data disaggregation and selective forgetfulness should be adopted during AI system development to minimize discrepancies.