Are We Missing Women in (AI) Healthcare Again? The EU's Thoughts on AI-Empowered Medical Diagnosis

With the recent trilogue on the Artificial Intelligence Act (AIA) concluding in the European Parliament, and with Big Tech increasingly involved in the healthcare sector, we must not lose sight of the harms AI systems can bring in contexts of discrimination. Gaps in the medical diagnosis of women are not new, but they can be exacerbated by the broad use of health AI.

In 2020, the European Commission (EC) released a White Paper on regulating AI. However, the initial discussions focused on the challenges the European Union (EU) faced in light of the innovation-forward approaches taken by China and the United States. The mention of ethical guidelines was meagre, limited to a call to ensure that the outcomes of AI systems would not lead to prohibited discrimination. This reiterates how anti-discrimination measures have often remained theoretical, defeating their purpose in critical sectors such as healthcare.

Soon after, however, the EC's Advisory Committee published an Opinion on AI that explicitly mentioned the existence of a gender gap in the diagnosis of healthcare symptoms. It stated that AI systems acting as symptom checkers often rely on existing data, leading to misdiagnosis. Additionally, the EC's Guidance on AI ethics notes that non-discrimination policies should go beyond the ordinary by valuing gender differences alongside other variables, such as diverse personalities and cultural backgrounds. Throughout EU discussions, there has been an emphasis on the relevance of studying gender bias in sensitive applications such as healthcare, where human practitioners carry out procedures bound by medical ethics and fairness.

The Draft AIA further underscores how the use of biased AI can adversely affect the rights enshrined in the EU Charter of Fundamental Rights. The proposal seeks to align with existing EU legislation that applies to sectors which employ, or are likely to employ, high-risk AI systems. Medical devices are presumed to be high-risk and are already subject to third-party assessment under the relevant sectoral legislation. High-risk systems trigger specific obligations, such as establishing a risk management system, using appropriate data, and ensuring the transparency needed for users to interpret a system's outcomes. Through the listed data-quality criteria, there has been an attempt to prevent the phenomenon of 'garbage in, garbage out', or here, 'bias in, bias out'. However, we have yet to see whether the AIA successfully implements regulation through the life-cycle approach, i.e., covering a system's life from the design stage through to the assessment stage. Discrimination, or the amplification of inequalities, can creep in at any stage of an AI system's life cycle: a bias may already exist in the training data, be introduced in the algorithm's code, or develop as new data is fed in, as the sketch below illustrates.
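To make the 'bias in, bias out' mechanism concrete, here is a minimal sketch in Python using scikit-learn and entirely synthetic data. Everything in it is invented for illustration: a single 'symptom' feature whose link to the disease is weaker for women (a stand-in for atypical presentation), and a training cohort skewed towards men. It is not a real diagnostic model, and the numbers carry no clinical meaning; the point is only that a model trained on a skewed cohort detects the disease far less reliably in the underrepresented group.

```python
# Minimal "bias in, bias out" sketch: synthetic, hypothetical data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, female_ratio):
    """Synthetic cohort with one symptom feature whose link to the
    disease is weaker in women (a stand-in for atypical presentation)."""
    female = rng.random(n) < female_ratio
    disease = rng.random(n) < 0.5
    # Diseased men show the 'classic' symptom strongly; diseased women
    # show it only weakly; healthy patients of either sex show noise.
    symptom = np.where(
        disease,
        np.where(female, rng.normal(0.5, 1.0, n), rng.normal(2.0, 1.0, n)),
        rng.normal(0.0, 1.0, n),
    )
    return symptom.reshape(-1, 1), disease.astype(int), female

# 'Bias in': the training cohort is 80% male.
X_train, y_train, _ = make_patients(10_000, female_ratio=0.2)
model = LogisticRegression().fit(X_train, y_train)

# 'Bias out': evaluate on a balanced cohort and compare recall by sex.
X_test, y_test, female = make_patients(10_000, female_ratio=0.5)
predicted = model.predict(X_test)
for label, group in [("men", ~female), ("women", female)]:
    sick = group & (y_test == 1)
    print(f"recall for {label}: {predicted[sick].mean():.2f}")
```

In this toy setup, recall for men comes out markedly higher than for women even though the test cohort is balanced: the bias entered at the data-collection stage and survived every later stage untouched, which is exactly why a life-cycle approach has to look beyond the code itself.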

While the AIA may, to some extent, tackle issues of gender discrimination through mechanisms involving data governance, accuracy, and explanation methods, it cannot resolve all real-life issues. Institutional problems, such as the underrepresentation of women in AI specifically and in Science, Technology, Engineering and Mathematics more broadly, also need to be tackled thoroughly in order to build systems that are gender-sensitive from the outset. One hope is that the AIA's proposed mitigation measures, grounded in an understanding of the life cycle of algorithms, will call for wider stakeholder involvement, increased transparency, and careful curation of training datasets.

Overall, legal measures globally are not yet advanced enough to combat algorithmic bias on a large scale. Legislation worldwide attempts to protect certain attributes that could lead to unfair judgement, such as race and gender; however, these factors are often not captured in healthcare algorithms, and so they do not receive proper care and attention. In addition to legislation, there is an acknowledgement that systemic changes need to be brought about in policies concerning women in technology. Digital spaces, including AI, are closely intertwined with the real world, as both a continuum and a mirror of it. The aforementioned Opinion on AI noted that it is crucial to deal with gender inequalities in the real world efficiently in order to prevent perpetuating them in the digital realm. It therefore reiterates that if the data collected and used for developing AI is biased, the outcome will be biased too.
