Amsterdam UMC-led project drives innovation in healthcare with AI and natural language processing

80% of all patient data is unstructured: notes from a conversation with a GP, the evaluation of a specialist in a university medical centre, or even a recommendation from a pharmacist. While this 'unstructured' data is no problem for the human eye, it presents an insurmountable challenge to an AI algorithm. It is a challenge that is "preventing AI from reaching its full potential," in the view of Amsterdam UMC Assistant Professor Iacer Calixto. To give AI the helping hand it needs, Calixto is set to lead a project that will "tackle the important challenges that hinder its use in clinical practice," thanks to funding from the NWO.

"We need to devise methods that are human-centered and responsible by design if we want these methods to be implemented in practice," says Calixto. The project will build on the Natural Language Processing (NLP) techniques that already underpin the increasingly popular ChatGPT. Currently, the unstructured nature of patient data means that software such as ChatGPT cannot easily be used in the healthcare sector. The technology itself, however, offers plenty of opportunities: it promises to improve data entry and decision-making, and to free up crucial time that doctors and nurses can instead spend on patient care.

Ensuring privacy is maintained

"Protecting the privacy of our patients is a top priority at Amsterdam UMC, and that is no different when we are developing, testing or using AI algorithms."

Mat Daemen, Vice-Dean of Research at Amsterdam UMC

To ensure that AI can also be used safely, the project will address issues relating to privacy by developing new 'synthetic' patient records built from simulated information. These records mimic real patient records in order to facilitate healthcare and research, while protecting the information of the 'real' patients.

"One of the main bottlenecks of doing research in healthcare is access to high-quality data to train and validate machine learning models. Part of our project will generate synthetic patient records that include not only structured but also unstructured data such as free-text highlights from a consultation with a GP. These synthetic records, though not from real patients, can still be very useful to enable easier access to high-quality healthcare data for researchers and clinicians," says Calixto.
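As a loose illustration of the idea (not the project's actual method), a synthetic record can pair structured, coded fields with a generated free-text note. The sketch below uses a hypothetical vocabulary and template; a real system would instead use generative models trained on de-identified data:

```python
import random

# Hypothetical vocabulary, for illustration only.
CONDITIONS = ["hypertension", "type 2 diabetes", "asthma"]
MEDICATIONS = ["lisinopril", "metformin", "salbutamol"]

def make_synthetic_record(rng: random.Random) -> dict:
    """Build one synthetic patient record with structured and free-text parts."""
    condition = rng.choice(CONDITIONS)
    medication = rng.choice(MEDICATIONS)
    age = rng.randint(18, 90)
    return {
        # Structured data: coded fields, as found in an electronic health record
        "age": age,
        "condition": condition,
        "medication": medication,
        # Unstructured data: a free-text, GP-style consultation note
        "note": (
            f"{age}-year-old patient seen for follow-up of {condition}. "
            f"Currently on {medication}; symptoms stable, review in 3 months."
        ),
    }

rng = random.Random(42)
for record in [make_synthetic_record(rng) for _ in range(3)]:
    print(record["note"])
```

No real patient contributes to such a record, yet it has the same shape as genuine data, which is what makes it usable for training and validating models.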

Responsibly Dutch

Another sticking point for the use of AI in the Dutch health sector is a rather more self-evident one: language. Software such as ChatGPT is built on language databases, and these are predominantly in English. By building new models trained on Dutch medical records, the project will increase the reliability of existing tools as well as make them easier to use for professionals on the wards or in the treatment room.

"This is a bold project that will ensure that Amsterdam UMC is one of the forces driving innovation in healthcare with artificial intelligence and natural language processing. Results obtained in this project, for instance synthetic patient records, will benefit the entire Dutch healthcare ecosystem, including other hospitals and university medical centers," says Calixto.

The project's commitment to responsibility is not limited to the important goal of maintaining patient privacy. It will also seek to remove the discrimination and unfairness that may exist in current AI models. For Daemen, this is an essential condition for the use of AI in Amsterdam UMC, and something that this project has at its core. "This project is an important addition to the efforts of many experts in Amsterdam UMC and in the Amsterdam region to introduce and use AI tools in a human-centred and responsible way," he concludes.
