Responsible AI in Medical Imaging


-Ilina Navani

10 June, 2024

In recent years, AI has shown great potential to increase the efficiency and accuracy of technologies across many fields. The widespread use of AI-based systems has heightened the need for frameworks on responsible and ethical practice. AI systems are often complex and can increase the risk of errors, as well as of unintended bias and discrimination in their results. This has serious consequences for the transparency and fairness of AI models, especially in healthcare, where people’s lives are at stake. Governments and organizations have therefore deemed it important to develop a universal set of principles to address the ethical and societal issues that may arise from emerging AI technologies. Extensive research is being pursued, and guidelines are being developed, to understand how to deploy AI responsibly in clinical practice.

Key Features of Responsible AI

When looking at the responsible use of AI, one of the main concerns is the transparency and explainability of the adopted technologies. AI is often perceived as a ‘black box’ in which the workings and decision-making processes of a model remain hidden and, therefore, untrustworthy. However, this does not need to be the case. Transparency in AI allows us to describe, analyze, and communicate the ins and outs of a model in a way that is comprehensible to the public. AI systems should strive to be meaningfully transparent to bridge the conceptual gap that may exist between AI developers and users. Transparency is thus tightly linked to explainability, which ensures that the functions and predictions of AI models can be understood and trusted. In medical imaging, explainability often stems from being transparent about the architecture of a particular AI model and its underlying technology. Explainability requires an adequate comprehension both of the technical details of an algorithm and of how its outputs are presented to the user. Patients must understand how a model interacts with their data to achieve the desired outcome, such as predicting the presence of a disease. Hence, documentation of model training and selection processes, the criteria used to make decisions, and the measures taken to identify potential risks is necessary. Finally, repeatability is important for judging whether a model can reliably make the same decision and perform consistent actions given the same scenario.
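The repeatability check described above can be sketched in a few lines of Python. The `toy_classifier` below is a hypothetical stand-in for a real imaging model, used only to show the shape of such a test:

```python
def check_repeatability(predict, scan, n_runs=5):
    """Run the same input through the model several times and confirm
    the prediction never changes between runs (a basic consistency check)."""
    outputs = [predict(scan) for _ in range(n_runs)]
    return all(out == outputs[0] for out in outputs[1:])

# Hypothetical deterministic classifier: flags a scan when its mean
# pixel intensity crosses a fixed threshold.
def toy_classifier(scan):
    pixels = [px for row in scan for px in row]
    return int(sum(pixels) / len(pixels) > 0.5)

scan = [[0.7, 0.8], [0.6, 0.9]]
print(check_repeatability(toy_classifier, scan))  # prints True
```

A production version of this check would also cover hardware and software variation (different GPUs, library versions), since a model that is only repeatable on one machine is not reliably repeatable in the clinic.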

In the healthcare sector, the implications of not having responsible AI frameworks are heightened, particularly in terms of user safety and security. AI systems should undergo proper risk assessments at each stage of development to minimize potential negative impacts along the way and ultimately provide reliable results. Unforeseen conditions and problems should be considered such that a model is able to safely deal with the consequences and adapt to new settings without negatively impacting its users. In radiology specifically, AI systems must be thoroughly inspected by regulatory boards, radiologists, and AI experts before they are put into use. This helps ensure quality standards and procedures while warranting robust performance. Additionally, data and information security is another major facet of implementing responsible AI practices. Since training data in healthcare can be very sensitive, AI systems must comply with privacy and data protection laws. The access and use of patient data necessitate pre-processing steps such as de-identification and anonymization, carried out so that participants cannot be re-identified even in light of new algorithms. Ultimately, AI technologies should be safe, secure, and protected from any privacy breaches or violations of the data they are trained on.
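As a rough illustration of the de-identification step, the sketch below strips direct identifiers from an imaging metadata record and replaces the patient ID with a salted, one-way pseudonym. The field names and the `DIRECT_IDENTIFIERS` set are hypothetical; real pipelines follow established lists such as the HIPAA Safe Harbor elements or the DICOM de-identification profiles:

```python
import hashlib

# Hypothetical set of direct identifiers to remove before training.
DIRECT_IDENTIFIERS = {"patient_name", "patient_id", "date_of_birth", "address"}

def deidentify(record, salt="study-secret"):
    """Drop direct identifiers from a metadata record and replace the
    patient ID with a salted hash, so records can still be linked within
    the study without exposing the original identity."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:8]
    cleaned["subject_code"] = token
    return cleaned

record = {"patient_id": "12345", "patient_name": "Jane Doe",
          "modality": "CT", "body_part": "chest"}
print(deidentify(record))
```

Note that hashing alone is not anonymization: the salt must be kept secret, and quasi-identifiers (such as rare diagnoses combined with dates) can still allow re-identification, which is why the text above calls for ongoing vigilance against new re-identification algorithms.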

Lastly, some of the leading features of responsible AI rest in the hands of us humans who design, operate, and utilize the technology. AI systems are often criticized for harboring biases that may arise during the training process. These biases may be intentional or unintentional, and both can have serious consequences on the outcome. Companies need to ensure that models base their decisions on honesty, fairness, and integrity, and do not favor a particular group of users over others. Thus, fairness testing should be carried out, in which a model’s decisions are recorded and compared between the real world and a counterfactual world where sensitive attributes (such as gender, age, or race) are adjusted. This is supported by data governance, wherein legitimately sourced, good-quality images are used by AI systems in radiology. Determining thresholds for image features, such as exposure or lighting, ensures that images are clear enough to be reliably recognized by a model. Such data governance practices must be emphasized in the training phase so that all images meet the same required standards, thus reducing the probability of unintentional discrimination in results.
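The counterfactual fairness test described above can be sketched as follows. The `biased_model` and patient records are entirely hypothetical, constructed so that the check has something to detect:

```python
def counterfactual_fairness_check(predict, patients,
                                  sensitive_key="sex",
                                  flip={"F": "M", "M": "F"}):
    """Compare predictions on real records against counterfactual copies
    where only the sensitive attribute is flipped. Returns the fraction
    of patients whose prediction changed (0.0 for a fair model)."""
    changed = 0
    for p in patients:
        counterfactual = dict(p)
        counterfactual[sensitive_key] = flip[p[sensitive_key]]
        if predict(p) != predict(counterfactual):
            changed += 1
    return changed / len(patients)

# Hypothetical rule-based model that (problematically) uses sex directly.
def biased_model(p):
    return int(p["age"] > 60 or p["sex"] == "M")

patients = [{"age": 70, "sex": "F"},
            {"age": 40, "sex": "M"},
            {"age": 40, "sex": "F"}]
print(counterfactual_fairness_check(biased_model, patients))  # 2 of 3 flip
```

In practice the flipped attribute can also influence the image data itself (for example, anatomical differences correlated with sex), so counterfactual tests on tabular attributes are a first screen rather than a complete fairness audit.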

The need to regulate biases and any unfairness in AI results ultimately rests with the stakeholders involved in developing these emerging technologies. This makes the principles of accountability and responsibility highly important to the companies that develop AI and the healthcare professionals who deploy it. Determining who is responsible for the actions taken by AI systems is tricky, but it is something that needs to be properly discerned within society. Responsible AI in healthcare is not just about ticking ethical ‘boxes’; it is about making honest and meaningful change in the radiology community, as well as the larger world. To accomplish this, we need to hold ourselves accountable by establishing ethical frameworks and guidelines that allow us to make responsible decisions and take responsible actions regarding AI in healthcare.


