The use of artificial intelligence also poses risks. Researchers in the US are now calling for warning labels for AI systems, similar to those for prescription drugs.

AI systems are becoming increasingly sophisticated and are therefore being used more and more in safety-critical settings, including healthcare. Researchers in the US are now calling for measures to ensure the “responsible use” of these systems in the healthcare system.

In a commentary in the journal Nature Computational Science, MIT professor Marzyeh Ghassemi and Boston University professor Elaine Nsoesie call for warning labels similar to those on prescription medications.

Do AI systems in healthcare need warning labels?

Devices and medications used in the US healthcare system must first go through an approval process, for example with the federal Food and Drug Administration (FDA). Once approved, they continue to be monitored.

However, models and algorithms – with and without AI – largely escape this approval and long-term monitoring, Ghassemi criticizes. “Many previous studies have shown that predictive models need to be evaluated and monitored more carefully,” she explains in an interview.

This applies especially to newer generative AI systems. Existing research has shown that these systems are “not guaranteed to work appropriately, robustly or unbiased”. This can lead to biased results that remain undetected due to a lack of monitoring.

What the labeling of AI could look like

Ghassemi and Nsoesie are therefore calling for responsible usage instructions for artificial intelligence. These could follow the FDA’s approach to creating prescription labels.

As a society, we have come to understand that no pill is perfect – there is always some risk, the researchers argue. We should have the same understanding of AI models: every model – with or without AI – is limited.

These labels could make clear when, where, and how an AI model is intended to be used. They could also contain information about the period during which a model was trained and on which data.

According to Ghassemi, this is important because AI models trained on data from only one location tend to perform worse when used in another location. If users know which data a model was trained on, for example, this could sensitize them to “potential side effects” or “undesirable reactions”.
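To make the idea more concrete, the following is a minimal sketch of how such a usage label could be represented as structured metadata. The schema, field names, and example values are illustrative assumptions for this article, not part of the researchers’ proposal or any official FDA format.

```python
from dataclasses import dataclass, field

# Hypothetical "usage label" for a clinical AI model.
# All field names and example values are illustrative assumptions;
# they do not reflect the researchers' proposal or any FDA format.
@dataclass
class ModelUsageLabel:
    model_name: str
    intended_use: str             # what the model is meant to do
    intended_setting: str         # where and when it is meant to be used
    training_data_source: str     # which data the model was trained on
    training_period: str          # when that training data was collected
    known_limitations: list[str] = field(default_factory=list)

# Example: a model trained on data from a single site, with that
# limitation stated explicitly so users can anticipate "side effects".
label = ModelUsageLabel(
    model_name="sepsis-risk-predictor",
    intended_use="Early warning of sepsis risk in adult inpatients",
    intended_setting="Inpatient wards of the training hospital only",
    training_data_source="Electronic health records of a single US hospital",
    training_period="2015-2020",
    known_limitations=[
        "Performance not validated outside the training site",
        "Not evaluated for pediatric patients",
    ],
)

print(label)
```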


Source: https://www.basicthinking.de/blog/2024/10/01/ki-warnhinweise/
