Bias And Ethics In AI-Based Orthodontic Decision-Making

Bias in healthcare can directly impact the quality of decision-making. Various types of bias can affect a doctor’s diagnostic and treatment decisions. Poorly implemented artificial intelligence algorithms can amplify these inherent biases and further weaken decision-making.

In this article, we look at how different types of bias affect decision-making in orthodontics and healthcare. Experts believe that bias can be reduced by subjecting AI and deep-learning models to ethical principles and legitimate regulatory oversight.

Impact Of Bias On Orthodontic Decision-Making

In a conventional orthodontic setup, the diagnosis and treatment plan depend solely on the skill and expertise of the orthodontist. However, human decision-making has multiple flaws. Intrinsic cognitive factors and external environmental factors influence treatment plans. Cognitive reasoning underpins rational thinking, but it can produce systematic deviations known as decision-making biases.

Cognitive bias is attributed to mental shortcuts known as heuristics, which allow clinicians to solve problems and make quick, efficient judgments. Availability heuristics minimize the time a decision takes, but they also leave it vulnerable to cognitive biases and systematic reasoning errors: because availability heuristics rely on whatever data comes to mind, the most readily retrievable information may not be the most accurate. Studies show that heuristic (cognitive) bias reduces the accuracy of judgments and orthodontic decision-making.

Affective bias occurs when the doctor’s emotional state influences treatment decisions. Anxiety and depression can alter a doctor’s choices, and orthodontists suffering from burnout syndrome (work-related stress) are more likely to make negative judgments.

Stereotyping bias arises from over-generalization about a specific social group or community. Related biases include age bias, sex/gender bias, and racial bias. An extensive study highlighted the role of these biases in medical decision-making and concluded that targeted mitigation steps are needed. Minimizing such bias was therefore a major aim of introducing AI into orthodontic decision-making.

Biases In Orthodontic AI

The advent of AI in orthodontics effectively reduced human biases such as cognitive and affective bias. However, the automation of orthodontic decision-making introduced new types of bias:

Data Bias

Data bias stems from skewed data used for algorithm training, which can harm the accuracy of treatment plans. When unrepresentative real-world data is fed to an AI algorithm, the bias transfers to the AI system. For example, a specific ethnic group may be underrepresented in the training data, and a doctor’s decisions can be skewed when such an algorithm is applied to members of that group.
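As a rough illustration, underrepresentation can be detected before training by checking each group’s share of the records. The function name, the 10% cutoff, and the data below are all hypothetical, not part of any real orthodontic system:

```python
from collections import Counter

def representation_report(records, group_key, min_share=0.10):
    """Report each group's share of the training data and flag any group
    that falls below a minimum share (a hypothetical 10% cutoff)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "underrepresented": n / total < min_share}
        for group, n in counts.items()
    }

# Hypothetical training records for an orthodontic model
records = ([{"ethnicity": "A"}] * 80 +
           [{"ethnicity": "B"}] * 15 +
           [{"ethnicity": "C"}] * 5)
report = representation_report(records, "ethnicity")
# Group C holds only 5% of the records and is flagged as underrepresented
```

A report like this only reveals the imbalance; correcting it still requires collecting more representative data or reweighting the training set.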

Algorithmic Bias

Even if the training data is free of bias, AI-based decisions can promote inequity through improper design and learning mechanisms. Significant racial bias has been observed in healthcare AI algorithms. When algorithms are developed on easily measurable data, without considering protected groups, there is a potential for cohort bias.
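The racial bias documented in the population-health study cited below arose from using healthcare cost as a proxy label for health need. A minimal sketch of that failure mode, with entirely hypothetical patients and numbers, might look like:

```python
def rank_by_cost_proxy(patients):
    """Rank patients for extra care using prior healthcare cost as a proxy
    for need. Groups with less access to care spend less, so equal-need
    patients from those groups are ranked as lower risk (label bias)."""
    return sorted(patients, key=lambda p: p["cost"], reverse=True)

# Hypothetical patients: identical clinical need, unequal prior spending
patients = [
    {"id": 1, "group": "well-served",  "true_need": 0.8, "cost": 9000},
    {"id": 2, "group": "under-served", "true_need": 0.8, "cost": 4000},
]
ranking = rank_by_cost_proxy(patients)
# Despite equal need, the under-served patient ends up last in the queue
```

The fix proposed in the literature is to train on labels that measure need directly rather than on the easily measurable proxy.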

Automation Bias

Automation bias arises from interactions between AI systems and healthcare professionals, when orthodontists rely too heavily on automation for clinical tasks. Clinicians can also be caught in a feedback loop, accepting every AI recommendation even when it is wrong.

Privilege Bias

Another type of bias observed by healthcare experts is privilege bias, which arises when a certain population group cannot access AI in healthcare settings, often because that group or community lacks sensors and medical devices. The result is an unequal distribution of AI’s health benefits, which can ultimately lead to mistakes in healthcare decisions.

How To Reduce AI Bias And Improve Ethics In Orthodontic Decision-Making

A major step in mitigating bias in AI orthodontics is to enhance the ethics of the automated systems. The introduction of ethical checkpoints and controls can help minimize bias in AI-driven healthcare. The most important steps include:

Controls

Healthcare organizations dealing with AI can set up entity-level controls to create an effective control environment. This helps identify bias at the top, so inequities do not percolate downward. Organizations must also design internal controls that filter the data and ensure accurate algorithm development.

AI Governance

Policies and frameworks should be designed to analyze the extent of bias in a provided AI model. Governments should design policies and control frameworks that hold AI systems accountable.

Make Routine Assessments

Orthodontists using AI systems for decisions should routinely assess the system’s performance. This calls for training doctors to identify bias incorporated in the provided algorithms. If a system does not perform well for a specific group, the clinician should look for blind spots in bias identification. An acceptable threshold of bias should be set, and any system exceeding it rejected or modified.
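A routine assessment of this kind can be sketched as a per-group error-rate audit against a preset threshold. The function, the 10-percentage-point gap, and the case data below are hypothetical illustrations, not a clinical standard:

```python
def audit_by_group(cases, max_gap=0.10):
    """Compute per-group error rates for an AI system's recommendations
    and flag the system when the gap between the best- and worst-served
    groups exceeds a preset threshold (a hypothetical 10 points)."""
    totals, errors = {}, {}
    for group, ai_was_correct in cases:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + (not ai_was_correct)
    rates = {g: errors[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > max_gap

# Hypothetical audit log: (patient group, whether the AI plan was correct)
cases = ([("adult", True)] * 45 + [("adult", False)] * 5 +
         [("adolescent", True)] * 35 + [("adolescent", False)] * 15)
rates, gap, flagged = audit_by_group(cases)
# Adolescents see a 30% error rate vs. 10% for adults, so the audit flags it
```

Running such an audit periodically, and whenever the patient population shifts, is what turns the threshold into an actionable control.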

Make AI Explainable

An important principle of ethics in healthcare is explainability. Evolving AI algorithms should be explainable rather than subject to the traditional “black box” effect. Auditable algorithms minimize bias and increase the transparency of the decisions they provide. Under the EU General Data Protection Regulation (GDPR), every individual has the right to understand how an automated system reached a particular decision.

Moreover, the decisions and plans offered by AI can create a trust deficit between doctor and patient, because the computer is not held accountable for an incorrect treatment option. It is therefore essential that AI technology, or those who build it, be held responsible for its suggestions and decisions.

Final Word

Bias and inequities play a significant role in the healthcare industry. Physicians are subject to cognitive, affective, and gender/racial biases, which negatively impact decisions and raise ethical concerns in practice.

The introduction of AI in orthodontics has mitigated cognitive biases (and heuristic issues) by taking the human out of the loop. However, bias still arises from skewed data (data bias) and poor algorithm design (algorithmic bias). Doctors using AI for treatment decisions may also face automation bias (from heavy reliance on AI for plans and diagnoses) and privilege bias (from unequal distribution of smart devices).

Ethical checkpoints and frameworks are needed to identify bias in data and algorithms. AI governance and periodic assessment of the systems can help minimize bias but require optimal training of doctors. Orthodontic AI systems need to be transparent and accountable to reduce the ethical and equity issues of healthcare.


References

  1. Hicks, E. P., & Kluemper, G. T. (2011). Heuristic reasoning and cognitive biases: Are they hindrances to judgments and decision making in orthodontics?. American journal of orthodontics and dentofacial orthopedics, 139(3), 297-304.
  2. Pirillo, F., Caracciolo, S., & Siciliani, G. (2011). The orthodontist burnout. Progress in Orthodontics, 12(1), 17-30.
  3. Featherston, R., Downie, L. E., Vogel, A. P., & Galvin, K. L. (2020). Decision making biases in the allied health professions: a systematic scoping review. PLoS One, 15(10), e0240716.
  4. Appelman, Y., van Rijn, B. B., Ten Haaf, M. E., Boersma, E., & Peters, S. A. (2015). Sex differences in cardiovascular risk factors and disease prevention. Atherosclerosis, 241(1), 211-218.
  5. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
  6. General Data Protection Regulation, Article 22: Automated individual decision-making, including profiling. https://gdpr-info.eu/art-22-gdpr/
